The Second Wave
One of the crucial dynamics driving the computerization of human jobs is the evolution of machines intelligent enough to do work formerly reserved for humans. Beyond the rote, mechanical, and dangerous work that robots are already beginning to do today (on factory floors, in war zones, etc.) lies the next wave of more sophisticated human skills, a wave that requires artificial intelligence to master.
Yet the development of ever-more intelligent machines carries with it immense danger beyond simply the loss of human jobs. These dangers have been spelled out by documentarian James Barrat in his book Our Final Invention (which we reviewed here) and more recently by MIT physicist Max Tegmark. Put simply, the rise of machines as intelligent as (and eventually more intelligent than) humans could potentially put human life and civilization at risk of extinction.
The creation of super-intelligent machines has been dubbed the Singularity by computer scientist Vernor Vinge, so named because, like the event horizon of a black hole, it is the point beyond which it is impossible to predict the course of human events: humans will no longer be masters of their destiny. Machines will.
Exactly when (or even if) the Singularity will occur is a matter of debate. Our efforts to build intelligent machines may hit a brick wall. But if progress toward intelligent machines continues unabated, you can be sure that the debate over a "post-human" future, currently a peripheral concern at best, will take on increasing urgency. In such an environment, how will society react? Will people accept that their destiny -- the destiny of their species -- may cease to be in their hands? Will they wish to pursue research that could destroy human life as we know it?
Some futurists think we'll happily welcome the prospect. Ray Kurzweil -- a popular proponent of the Singularity's benefits who is currently directing Google's artificial intelligence efforts -- has argued that since progress toward the Singularity will be incremental, people will be gradually socialized to the idea that the human race as we have known it for centuries will eventually be replaced by human-machine hybrids or simply conscious machines. The Singularity will come bearing gifts. First Google Glass, then Google Eyeballs, then Google Brain, then the uploading of "you" into the cloud for a life immortal, with nary a complaint along the way.
The less optimistic take sees at least some segment of humanity reacting negatively, and violently, against the coming Singularity. This scenario has been anticipated by computer scientists working in artificial intelligence. AI researcher Hugo de Garis, for instance, wrote a book of speculative fiction positing a war between "Cosmists" and "Terrans" -- the former devoted to advancing the Singularity, the latter dedicated to stopping it at all costs.
Could such a war happen in the future? It's impossible to predict. But violent acts of sabotage and assassination directed at those working in companies and institutions deemed instrumental to creating a "post-human" future seem highly plausible -- even inevitable -- if progress continues. If Kurzweil is wrong and the rise of super-intelligent machines comes to be viewed as more alien than benign, it won't be hard to convince people that their lives and the future of human civilization depend on stopping this work. People have killed over much less.
States, too, would take a keen interest in the progress of AI (indeed, they already have). In the scramble for geopolitical power and position, powerful states may view the race for AI as a new arms race, and take measures, such as preemptive war, to disadvantage their rivals.
So keep an eye on the Google bus fracas. It may symbolize a tragic irony: the "disruption" that so many tech firms pride themselves on may be coming for them. And it might not be pretty.