I confess I haven’t read everything that Singularists have written about technoutopia, but I’m always suspicious when people start talking about the rate of change not only increasing but increasing at some staggeringly amazing rate. As James points out, we don’t really know all that much about intelligence at this point. If you don’t believe me, here is an easy way to prove it to yourself: google “data”, “information” and “knowledge”. You’re going to see wildly conflicting definitions of all three concepts.
If the human race can’t even decide at what point “I have a lot of information” becomes “I possess knowledge”, we’re still a long way off from understanding wisdom.
Fundamentally, the idea that computing power’s rate of growth has anything to do with achieving a truly artificial intelligence rests on two pretty major assumptions: that the rate will continue to grow unbounded, and that the ability to create artificial intelligence depends on computing power. The first assumption is just totally crazy; “computing power” has *existed* in an electronic sense for less than a century. We don’t *know* anything about what the curve looks like when examined from a 1,000-year perspective. Our ability to build stronger computers is going to hit a wall (barring quantum computing) much sooner than even the theorists realize. It has little to do with our ability to make faster chips, and everything to do with our economic need to make them and our ability to make use of them. The computer I’m writing this blog post on is actually *slower* than the one I used this time last year. More power efficient, to be sure, but not capable of the same number of computations in any practical sense (for several reasons, not least that it’s dual-core and software just isn’t yet written to take advantage of that properly). Graphics capability is probably the biggest current driver of processing demand, but the cards out there are already more than capable of pushing any monitor’s capabilities to the limit.
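To make the extrapolation worry concrete, here’s a minimal numeric sketch. Everything in it is hypothetical (a doubling-every-two-years trend and a ceiling I picked arbitrarily for illustration): an exponential and a logistic S-curve that grow at the same initial rate are nearly indistinguishable early on, and wildly different once the S-curve hits its wall.

```python
import math

# Hypothetical illustration: a pure exponential (doubling every 2 years)
# versus a logistic curve that matches its early growth but saturates at
# an arbitrary ceiling. Neither is a real forecast.
def exponential(t, doubling=2.0):
    return 2 ** (t / doubling)

def logistic(t, ceiling=1e6, doubling=2.0):
    # Same initial doubling rate, but growth flattens near the ceiling.
    k = math.log(2) / doubling
    return ceiling / (1 + (ceiling - 1) * math.exp(-k * t))

for t in (5, 20, 60):
    print(f"year {t:3d}: exponential={exponential(t):14.0f}  "
          f"logistic={logistic(t):10.0f}")
```

From inside the early window (year 5 or 20), the two curves look the same; by year 60 they disagree by three orders of magnitude. A few decades of electronic computing simply can’t tell us which curve we’re on.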
HDTVs haven’t really even made full penetration into the market yet, and already they’re passé in terms of resolution. What makes you think people are going to continue to demand better and faster graphics cards when they already can’t buy a display that can take advantage of them? Until someone builds a compelling holographic display, the need for 3D graphics is going to be constrained heavily by this two-dimensional panel I’m staring at. Technically, it isn’t all that much better than the top-of-the-line gigantic CRT I used to use.
The second assumption (that intelligence has anything to do with computing power) is likewise of dubious foundation. If raw computing power were going to yield intelligence, our intelligence-modeling methods are actually worse than horrible; they’re negatively designed. My computer can perform more computations than my head can. Hell, my *calculator* (a rather nice one, actually; I’ll post about it someday) has more compute power than my brain does. Given the raw power of the world’s supercomputer clusters, you’d think someone would have written something marginally intelligent already, just by exhaustive methods. About the best that can be done is to beat humans at chess regularly, which, although impressive, definitely qualifies as a restricted-boundary problem.
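For a sense of what a restricted-boundary problem looks like, here’s a toy minimax search. The game is one I made up for illustration (players alternately take 1 or 2 sticks; whoever takes the last stick wins); it’s not anyone’s actual chess engine, but it shows the principle: exhaustive search only works because the rules fence in the state space.

```python
# Toy minimax: exhaustively solve a take-1-or-2-sticks game where the
# player who takes the last stick wins. Feasible only because the rules
# tightly bound the search space -- the hallmark of a restricted-boundary
# problem, and nothing like open-ended intelligence.
def best_move(sticks, maximizing=True):
    """Return (score, move): score is +1 if the maximizing player can
    force a win from this position, -1 if not; move is the stick count
    the player to move should take (None at the terminal position)."""
    if sticks == 0:
        # The previous player took the last stick, so the player to
        # move has already lost.
        return (-1 if maximizing else 1), None
    scores = {}
    for take in (1, 2):
        if take <= sticks:
            score, _ = best_move(sticks - take, not maximizing)
            scores[take] = score
    move = max(scores, key=scores.get) if maximizing else min(scores, key=scores.get)
    return scores[move], move
```

With 4 sticks on the table, the search finds the forced win (take 1, leaving the opponent a losing position of 3); with 3 sticks, every move loses. Brute force settles the whole game, yet tells us nothing about thinking.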
There have been some interesting AI projects. I don’t follow the field actively myself, I’ll admit. When I do, I find new and interesting and cool things, but nothing that makes me sit back and say, “That’s it. Those guys have figured it out, and the Robot Revolution is coming.” I think we’ve got more than a long way to go to achieve technirvana.
[edited to add] – a quantum theorist who started in AI offers up his (much more technically profound) thoughts on the subject.