
What's Google Worried About?

By Greg Scoblete

"Google, which is acquiring DeepMind Technologies, has agreed to establish an ethics board to ensure the artificial intelligence technology isn't abused," Amir Efrati, The Information.

No doubt, the news was meant to be reassuring. Google may be buying up military robots and powerful artificial intelligence (AI) software, but rest assured: the company will stay true to its corporate motto, "Don't be evil." Why, it even has an ethics board to prove it.

Yet reports of Google's AI ethics board raise more questions than they answer. Namely: what, exactly, is Google worried about? And if Google is worried (or at least feigning concern), should we be as well?

While neither company responded to our requests for information, the question of AI safety is being raised with increasing urgency as advancements in the field continue to push AI into the world around us.

"AI flies our planes, drives our trains, plays the stock market, and is being developed for autonomous battlefield robots," said Luke Muehlhauser, Executive Director of the Machine Intelligence Research Institute (MIRI), an organization dedicated to researching artificial intelligence. "As we give AI algorithms increasing responsibility in society, we must also insist on ethical and safe use of those algorithms."

"AI has the potential to be a technology of transformative power, and any such technology has significant risks associated with its development and application," seconded Dr. Sean O'Heigeartaigh, Academic Project Manager at the Cambridge Centre for the Study of Existential Risk.

In fact, the world has already had a small taste of algorithms run amok. Take what O'Heigeartaigh deemed a "trivial" example: the 2010 "flash crash" of the Dow Jones Industrial Average. In the span of minutes, the Dow dropped some 600 points, only to recoup them just minutes later -- a dizzying spin provoked in part by high-frequency trading algorithms.

"These are very simple, rudimentary algorithms, but they illustrate a few key points," O'Heigeartaigh said. "These algorithms can operate much more quickly and efficiently than humans, which in ‘negative' scenarios can lead to situations escalating very quickly. [Secondly] designing the goals and rules of such algorithms, such that unforeseen catastrophic consequences cannot occur, turns out to be extremely difficult."

Events like the "flash crash" involve virtual assets, O'Heigeartaigh said, but as AI becomes more powerful, it will increasingly interact with real-world resources with more significant consequences.

"You could hypothetically imagine an algorithm controlling a factory, and through an unexpected glitch turning a huge amount of raw materials into vehicle parts; it would be hard to roll this back!"

Or back to Google. The company is moving aggressively from the world of desktop search ads into the physical world, where Android phones "sense" what you want and serve it up, and Nest thermostats study your behavior and adjust your household heating and cooling accordingly. It's this Google -- the one embedding itself into the real world's "internet of things" -- that finds AI so attractive, the better to make sense of, and act on, the reams of data it's collecting. And it is this Google that is trying to reassure us that we're in good (artificial) hands.

Taking the Long View

Google aside, what worries some researchers is not the current capabilities of AI but its trajectory.


Greg Scoblete (@GregScoblete) is the editor of RealClearTechnology and an editor on RealClearWorld. He is the co-author of From Fleeting to Forever: A Guide to Enjoying and Preserving Your Digital Photos and Videos.

