For a variety of reasons I feel like taking a look at the future. It is a fascinating place and completely unpredictable, which is part of what makes it so much fun to think about. The point of this exercise is really to highlight just how strange it could be. I'm not declaring prophetic truths (unless I am, in which case I will totally take credit for it later); I'm just exploring interesting possibilities. Or I was, until The Engineer hijacked my train of thought.
It must be futurist week here in the offices of Blogger & Blogger. At the same time I was drafting this blog entry, The Engineer posted one of his own discussing the Singularity and recent steps toward its realization. For those of you who have not heard of the Singularity (or who are planning to follow the above link "later, probably when I'm done reading this"), I'm just going to go ahead and steal The Engineer's definition (don't worry, I'm not spoiling anything from his post, but you should probably read it first anyway):
"Mankind's progress and rate of learning so far has been limited by the ability of our brains to process, assemble, and assimilate information. There may come a time in the future when we build a robot or a software computer program that is, effectively, smarter than we are. At that point, the pace and progress of learning is no longer bound by our brains."

The part where I disagree with The Engineer and other Singularity fans is when they define it as the point where the future becomes impossible to predict. The reason I disagree (and those of you paying attention should have already realized what this might be since I mention it in the second sentence) is that I think we've already reached that point. I consider the future unpredictable now (which, on review, is a strange sentence in and of itself). I suspect, however, that this is a semantic distinction, that those talking about the Singularity are using technical and highly constrained definitions of the words "predict" and "future."
So we won't go there. Instead, there's a larger issue I have with the basic definition selected by The Engineer, namely the idea that the processing power of our brains is currently the primary limiting factor in our technological progress. The point being made in the original definition is that someday we will make machines that are smarter than we are, and that these machines will in turn be able to design even smarter machines (and, presumably, will choose to act on that ability) in an accelerating chain reaction that dramatically transforms the world into something we are completely incapable of comprehending with our (relatively) limited intellect. I don't disagree with that assessment; I disagree that it will take machines that are recognizably more intelligent than we are. Further, I disagree with the distinction I keep seeing between "human brains" and "technology." What I really mean is that I think it has already happened, and that it occurred, depending on how extreme your definition is, sometime between the invention of language and 1891, 1829, or 1680 (pick your year according to your willingness to accept the declarations of certain bold book titles).
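(For the numerically inclined: that chain reaction has a tidy toy model. What follows is my own back-of-the-envelope sketch with made-up numbers, not anything from The Engineer's post or the Singularity literature; the only real assumption is that design speed scales with the intelligence doing the designing.)

```python
# Toy model of the "smarter machines design smarter machines" loop.
# All numbers here are invented for illustration.

intelligence = 1.0  # hypothetical units: 1.0 = "one unaided human"
year = 0

while intelligence < 1_000_000 and year < 100:
    # Growth proportional to intelligence squared: each improvement also
    # speeds up the improver, so this outruns any fixed-rate curve.
    intelligence += 0.05 * intelligence ** 2
    year += 1

print(f"runaway reached by year {year}: intelligence ~{intelligence:,.0f}")
```

The contrast with ordinary exponential growth is the whole point: dI/dt = k·I gives you Moore's Law forever, while dI/dt = k·I² runs off to infinity in finite time, which is presumably why "singularity" is the word people reached for.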
Now, before I go further, I must admit that, like much of the Internet, I'm speaking out of my rear end here. I have read some of what has been written on the Singularity, but not all of it. I got through about half of Kurzweil's "The Singularity Is Near" before it put me to sleep and, given that it was a mids shift at work where we were allowed to read but not encouraged to fall asleep, I had to put it aside and return it to the fellow who lent it to me. That was about five years ago and I never went back. So you're dealing with a certain amount of intellectual laziness here. I've cobbled together the bits and pieces I know and am presenting an armchair-general (or armchair-quarterback, if that's your preferred metaphor) assessment of the situation without the benefit of actual rigor in my investigations. This entry is a thought experiment of sorts, and if you want to know more I, like LeVar Burton, insist that you not take my word for it but instead seek out the numerous and varied (and more legitimate) resources that DID apply some rigor. Certainly don't quote me in an argument with someone who knows what they're talking about; it may not go well. And if YOU happen to know what I'm talking about, and which parts are complete bunk, please feel free to correct me in the comments section.
That being said, now I'm going to talk about something I DO know a little about: a concept known as distributed cognition. This is the idea, discussed by Edwin Hutchins in his book Cognition in the Wild, that our thought processes are not confined solely to the inside of our brains. We put our thoughts out in the environment, distributing our cognition amongst ourselves, amongst our tools, and across time. Even something as simple as pen and paper enables us to process thoughts that would take far more effort unaided, if they could be managed at all. Our brains are NOT distinct from our technology any more than our brains are distinct from our bodies. Sure, they can be separated conceptually, but separate them in practice and you're not going to get much done.
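To make the pen-and-paper point concrete, here's a toy sketch in Python (the function and its names are mine, not Hutchins's): long multiplication the way we all learned it, where a list stands in for the sheet of paper and holds every partial product so that no single step demands more than one small multiplication from working memory.

```python
# Long multiplication as distributed cognition: the `paper` list plays the
# role of the page, holding partial products so working memory never has to.
# (A toy: positive integers only.)

def multiply_on_paper(a: int, b: int) -> int:
    paper = []  # the external artifact
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10 ** place
        paper.append(partial)  # "write it down" instead of holding it in your head
    return sum(paper)  # the final column-addition pass over the page

print(multiply_on_paper(347, 86))  # 29842, same answer as 347 * 86
```

Take the list away and you're back to juggling every intermediate value in your head, which is exactly where most of us drop the ball. The artifact isn't decorating the thinking; it's doing part of it.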
My earlier pronouncement was, perhaps, a bit ridiculous, but the point remains that even with the limited technology available, by the late nineteenth century we were already designing artifacts and systems that could not be comprehended in their entirety by a single person thinking alone. Throw a computer into the tool pile and now the "unpredictable future" horizon is much, much closer. Link those computers and watch everything accelerate again. The Internet appears to be doing for human thought something much like what economics did for our production: enabling the efficient distribution of cognitive resources. Now we're talking about crowdsourcing, emergent behavior (OK, that one has existed as long as insects, but the effects on our own development become much more dramatic), the long tail (Chris Anderson), blobjects and spimes (Bruce Sterling), and "cognitive surplus" (Clay Shirky). Okay, so what I'm really doing in that last sentence is listing trendwords and popsci-style bestsellers, but I think you can see why I believe artificial intelligence as we commonly think of it is not strictly necessary to make the future unpredictable. We are already much more intelligent than our brains can handle.
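Since I just name-dropped emergent behavior, here's the standard minimal demonstration (a textbook example, not something from any of the books above): elementary cellular automaton Rule 110 in Python. Each cell follows one trivial, purely local rule, and intricate global structure shows up anyway.

```python
# Emergence in miniature: elementary cellular automaton Rule 110. Each cell
# looks only at itself and its two neighbors, yet rich global structure
# appears from that purely local rule.

RULE = 110  # the rule number encodes the lookup table in its binary digits

def step(cells):
    n = len(cells)
    # Encode each 3-cell neighborhood as a number 0-7, then read off the
    # matching bit of RULE to get the cell's next state.
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 60 + [1]  # one live cell at the right edge
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Rule 110 has even been proven Turing-complete, which is a pretty good punchline for a rule that fits on an index card. If one dumb local rule can compute anything computable, a few billion networked brains-plus-tools hardly need a superintelligent overseer to produce behavior nobody can predict.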
And now we're at the point where I wanted to START this entry. Maybe in a future entry I'll talk about the computer program that can extrapolate the shape of Notre Dame from a pile of photographs (start at minute 4:00 if you don't want to watch the whole thing), the phone app that knows where all your friends are right now, the system capable of determining who will leave the bar with whose phone number before the participants know, and what I think these all mean for how hard it is going to be to explain to our kids what the world was like when we grew up. Forget walking to school in the snow (although that may be hard to explain, too); try explaining collect calls, or even a busy signal, to a child who grows up carrying the Internet in their back pocket. Then again, maybe I won't write that entry. After all, the future is unpredictable.
3 comments:
Nice writeup!
Another perfect example that suggests "maybe we've already hit the singularity" is computer chip design itself.
Modern processors consist of about 100 million transistors, all lined up in certain ways to perform certain tasks. No way does any human, or any group of humans, have any idea what goes on at the lowest level of a chip. We rely on supercomputers to figure that out for us. This article describes how most processors are designed by supercomputers, chewing through all possible solutions, looking for something statistically better than its last iteration. We don't design the things that will design tomorrow; today's computers do that for us. Although, in the article, it's some consolation that human intuition can still win the day. :)
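(Stripped to its skeleton, the search Alex describes is just iterative improvement: mutate a candidate, keep the mutation if it scores better. A toy sketch of that loop, with an invented objective that has nothing to do with real chip-design tooling:)

```python
# Iterative improvement search in miniature: mutate, keep if better.
# The "design" and the scoring function are stand-ins invented for this
# sketch; real EDA tools are vastly fancier.

import random

def score(design):
    # Hypothetical objective: pretend fewer 1-bits means less wiring.
    return -sum(design)

def hill_climb(bits=64, iterations=5000):
    design = [random.randint(0, 1) for _ in range(bits)]
    for _ in range(iterations):
        candidate = design[:]
        candidate[random.randrange(bits)] ^= 1  # flip one "component"
        if score(candidate) > score(design):    # statistically better? keep it
            design = candidate
    return design

print(score(hill_climb()))  # climbs toward 0, the optimum for this toy objective
```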
Wouldn't the 'smarter than us' computers of the future have enough intellect to predict the future, thus solving the problem they create?
Alex, I don't think the singularity has occurred just yet; I was just arguing that "ability to predict the future" is not a useful litmus test. I do think we're close, though (and the chip thing reinforces that). I'm looking to the next few decades (but then, I'm a technological optimist).
Bruce, good point, but I'd counter that even if machines can predict the future they'd be creating, the concern is that they wouldn't bother to tell the humans :)