
Our Final Invention: How the Human Race Goes and Gets Itself Killed

By Greg Scoblete

Goodbye, Humanity

When (and Barrat is emphatic that this is a matter of when, not if) humanity creates ASI, it will have introduced into the world an intelligence greater than our own. This would be an existential event. Humanity has held pride of place on planet Earth because of our superior intelligence. In a world with ASI, we will no longer be the smartest game in town.

To Barrat, and other concerned researchers quoted in the book, this is a lethal predicament. At first, the relation between a human intellect and that of an ASI may be like that of an ape to a human, but as the ASI continues its process of perpetual self-improvement, the gulf widens. At some point, the relation between ASI and human intelligence will mirror that of a human to an ant.

Needless to say, that's not a good place for humanity to be.

And here's the kicker. Barrat argues that the time it will take for an ASI to surpass human-level intelligence, rendering us ant-like in comparison, could be a matter of days, if not mere hours, after it is created. Worse (it keeps getting worse), human researchers may not even know they have created this potent ASI until it is too late to attempt to contain it. An ASI birthed in a supercomputer may choose, Barrat writes, to hide itself and its capabilities lest the human masters it knows so much about attempt to shut it down. Then it would silently replicate itself and spread. With no need to eat or sleep, and with an intelligence that is constantly improving and war-gaming survival strategies, ASI could hide, wait and grow its capabilities while humanity plods along, blissfully unaware.

Though we have played a role in creating it, the intelligence we would be faced with would be completely alien. It would not be a human mind, with its experiences, emotions and logic, or lack thereof. We could not anticipate what an ASI would do because we simply do not "think" the way it would. In fact, we've already arrived at the alarming point where we do not understand what the machines we've created do. Barrat describes how the makers of Watson, IBM's Jeopardy-winning supercomputer, could not understand how the computer was arriving at its correct answers. Its behavior was unpredictable to its creators -- and the mysterious Watson is not the only such inscrutable "black box" system in existence today, nor is it even a full-fledged AGI, let alone an ASI.

Barrat grapples with two big questions in the book. The first is why an ASI necessarily leads to human extinction. Aren't we programming it? Why couldn't humanity leverage it, like we do any technology, to make our lives better? Wouldn't we program in safeguards to prevent an "intelligence explosion" or, at a minimum, contain one when it bursts?

According to Barrat, the answer is almost certainly no. Most of the major players in AI are barely concerned with safety, if at all. Even if they were, there are too many ways for an AI to make an end-run around our safeguards (remember, these are human-devised safeguards matched against an intelligence that will first equal and then quickly exceed our own). Programming "friendly AI" is also difficult, given that even the best computer code is rife with errors and complex systems can suffer catastrophic failures entirely unforeseen by their creators. Barrat doesn't say the picture is utterly hopeless. It's possible, he writes, that with extremely careful planning humanity could contain a super-human intelligence -- but this is not the manner in which AI development is unfolding. It's being done in the dark by defense agencies around the world. It's being done by private companies that reveal very little about what they're doing. Since the financial and security benefits of a working AGI could be huge, there's very little incentive to pump the brakes before the more problematic ASI can emerge.

Moreover, ASI is unlikely to exterminate us in a bout of Terminator-esque malevolence, but simply as a byproduct of its very existence. Computers, like humans, need energy, and in a competition for resources, an ASI would no more seek to preserve our access to vital resources than we worry about where an ant's next meal will come from. We cannot assume ASI empathy, Barrat writes, nor can we assume that whatever moral strictures we program in will be adhered to. If we do achieve ASI, we will be in completely unknown territory. (But don't rule out a Terminator scenario altogether -- one of the biggest drivers of AI research is the Pentagon's DARPA, which is, quite explicitly, building killer robots. Presumably other well-funded defense labs, in China and Russia, are doing similar work as well.)

Barrat is particularly effective in rebutting devotees of the Singularity -- the techno-optimism popularized by futurist Ray Kurzweil (now at Google, a company investing millions in AI research). Kurzweil and his fellow Singularitarians also believe that ASI is inevitable, only they view it as a force that will liberate and transform humanity for the good, delivering the dream of immortality and solving all of our problems. Indeed, they agree with Barrat that the "intelligence explosion" signals the end of humanity as we know it; they simply view this as a benign development, with humanity and ASI merging in a "transhuman" fusion.

If this sounds suspiciously like an end-times cult, that's because, in its crudest expression, it is (one that just happens to be filled with more than a few brilliant computer scientists and venture capitalists). Barrat forcefully contends that even its more nuanced formulation is an irredeemably optimistic interpretation of future trends and human nature. In fact, efforts to merge ASI with human bodies are even more likely to birth a catastrophe, given the malevolence that humanity is capable of.

The next question, and the one with the less satisfactory answer, is just how an ASI would exterminate us. How does an algorithm, a piece of programming lying on a supercomputer, reach out into the "real" world and harm us? Barrat raises a few scenarios -- it could leverage future nanotechnologies to strip us down at the molecular level, or it could shut down our electrical grids and turn the electronic devices we rely on against us -- but he doesn't do nearly as much dot-connecting between ASI as a piece of computer code and the physical mechanics by which that code would be instrumental in our demise as he does in establishing the probability of achieving ASI.

That's not to say the dots don't exist, though. Consider the world we live in right now. Malware can travel through thin air. Our homes, cars, planes, hospitals, refrigerators, ovens (even our forks, for God's sake) connect to an "internet of things," which is itself spreading on the back of ubiquitous wireless broadband. We are steadily integrating electronics into our bodies. And a few mistaken lines of code in the most dangerous computer virus ever created (Stuxnet) caused it to wiggle free of its initial target and travel the world. Now extrapolate these trends out to 2040 and you realize that ASI will be born into a world that is utterly intertwined with and dependent on the virtual, machine world -- and vulnerable to it. (Indeed, one AI researcher Barrat interviews argues that this is precisely why we need to create ASI as fast as possible, while its ability to harm us is still relatively constrained.)

What we're left with is something beyond dystopia. Even in the bleakest sci-fi tales, a scrappy contingent of the human race is left to duke it out with their runaway machines. If Our Final Invention is correct, there will be no such heroics, just the remorseless evolutionary logic that has seen so many other species wiped off the face of the Earth at the hands of a superior predator.

Indeed, it's telling that both AI-optimists like Kurzweil and pessimists like Barrat reach the same basic conclusion: humanity as we know it will not survive the birth of intelligent machines.

No wonder we're worried about robots.


Greg Scoblete (@GregScoblete) is the editor of RealClearTechnology and an editor on RealClearWorld. He is the co-author of From Fleeting to Forever: A Guide to Enjoying and Preserving Your Digital Photos and Videos.

(Image: St. Martin's Press)
