We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?
Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.
He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.
Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?
AI as a low achiever
Optimists sometimes take comfort from the fact that the field of AI has a very chequered past. Periods of exuberance and hype have alternated with so-called “AI winters” – times of reduced funding and interest, after promised capabilities failed to materialise.
Some people point to this as evidence that machines are never likely to reach human levels of intelligence, let alone exceed them. Others point out that the same could have been said about heavier-than-air flight.
The history of that technology, too, is littered with naysayers (some of whom refused to believe reports of the Wright brothers’ success, apparently). For human-level intelligence, as for heavier-than-air flight, naysayers need to confront the fact that nature has already managed the trick: think brains and birds, respectively.
A good naysaying argument needs a reason for thinking that, in the case of intelligence, human technology can never clear the bar that nature has already set.