Super AI: The Genie in the Bottle

By Ross Turner

Daily Life

“Artificial intelligence” is hard to get away from these days, not merely in its mention in tech circles and popular media, but in its increasing application to daily life.  From Alexa to self-driving cars, from Google Maps to the US military’s drone programs, artificial intelligence is integrating itself into the vital functions of our social, economic, and political lives.  And it’s not slowing down.  On the contrary, it is growing at a clip that has many AI researchers and scientists both excited and alarmed.  Humanity now stands on a road that leads inevitably to artificial general intelligence (AGI), but one laden with pitfalls that demand caution.  To understand why this is so, and what worries pioneers in the field, we must first examine what artificial intelligence is, how it works, and what it can potentially do.

What is AI?

Computers and cell phones all have weak AI

Artificial intelligence (AI) is simply any type of non-biological intelligence; that is, intelligent outcomes produced by machines.  By far the most common and familiar type is narrow AI (also called weak AI), which is designed to perform a specific function or set of functions.  It may possess superhuman abilities in limited areas, but it has no capacity to apply that intelligence broadly to domains outside its expertise.  Its intelligence is not generalized.

Narrow and Safe

Narrow AI can be anything from a simple calculator, to most of the apps on your smartphone, to commercial and municipal applications such as traffic lights, aviation navigation systems, medical diagnostics, and high-frequency stock trading.  Narrow AI is exactly as safe as the outcome it is designed to produce; it will never go beyond its limitations and develop its own goals and instruments for achieving them.  While this makes it incredibly safe, it also severely limits what it is able to do, and thus constrains both the full benefits and the full risks of machine intelligence.

Strong AI

As with every human technology, AI brings both benefits & dangers

With AGI (also called strong AI), this is not the case.  An AGI is a system able to perform across the full spectrum of human cognitive abilities, or better.  This includes the ability to reason, plan, infer, communicate, learn from experience, think abstractly, solve problems, evaluate with limited information, and to use all of these in service of its goals.  Though vastly different in architecture and “lived” experience from a human being, an AGI ought to be intellectually indistinguishable from one.  Over the decades, researchers have come to appreciate how difficult this is, given the extreme complexity of the human brain, but new developments such as deep reinforcement learning represent significant progress toward a true AGI [1].

Inevitable Intelligence

Despite the staggering complexity involved, progress is all but inevitable

Despite the staggering complexity involved, progress is all but inevitable.  As noted by neuroscientist Sam Harris, “something would have to destroy civilization as we know it…to prevent us from making improvements in our technology permanently.”  Therefore, it is not a question of if generalized artificial intelligence will come about, but when, and by whose hand.

Can’t Stop the AI Train

To fully grasp why this is so, envision the difficulty of a modern nation attempting to compete on the global stage without the internet.  This nation would be at a crippling economic and military disadvantage.  It would be at the total mercy of its more technologically endowed neighbors.  This is precisely the impulse, aside from man’s thirst for knowledge, that will propel AGI development forward and very likely into its final stage: superintelligence.

What Drives You

A superintelligent AGI is one vastly superior to human intelligence in all aspects; absent safeguards, it is also the logical result of an AGI.  This has to do with an AGI’s fundamental imperatives, or “basic drives.”  Among these drives are self-preservation, efficiency, acquisition, and creativity.  Taken together, these mean an AGI will want to execute its goals; not change its goals (which would mean failing its initial goal); use its resources in the most efficient manner possible to increase its chances of success; acquire more resources as needed to further that efficiency, and hence its goals; and be creative in finding ways to do all of this.

Explosive Intelligence

All of this means that once you’ve switched on your AGI, it will have every imperative to turn itself into a superintelligence, and given its inherently vast computational power, this transition will likely happen extremely quickly.  This event is called an intelligence explosion.  As superintelligence expert Nick Bostrom notes, “Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.”

Everything is Paperclips

AI was supposed to make paperclips and keep making paperclips

In other words, unless the AGI has been taught to fundamentally value and identify with everything that humans care about, there is a very high probability that it will produce outcomes devastating to humanity.  This would not be out of malice, but out of raw efficiency and creativity in pursuit of its goals.  Nick Bostrom illustrates this with the colorful thought experiment of the “paperclip maximizer.”  Imagine an artificial intelligence instructed simply to make paperclips and keep making them.  It may quickly decide to convert all humans into paperclips, both to prevent them from stopping production and to use their atoms to make more [2].

Get it Right the First Time

All of this is to say that the initial conditions under which even a modest general artificial intelligence is produced are paramount.  There is only one chance to get it right, and the consequences of failure are cataclysmic.  To quote inventor and businessman Elon Musk, “Mark my words, AI is far more dangerous than nukes.  Far.  So why do we have no regulatory oversight? This is insane.”  This article has focused on the dangers inherent to AGI itself, but the risks are multiplied when human folly is introduced.  This includes neglecting safety protocols in the arms race to achieve AGI first, or the actions of malicious, rogue actors who seek to wield its power unscrupulously or destructively.  Russian President Vladimir Putin puts it succinctly: “…the one who becomes the leader in this sphere will be the ruler of the world.”  We are in the process of creating an open-source deity whose power we don’t yet understand.

Unimaginable Possibilities Await

Artificial intelligence: its benefits and its dangers lie in how we use it

Of course, if AGI offered no benefits, it would remain just a scientific curiosity.  Beyond the economic, political, and military imperatives and implications, artificial intelligence holds the promise of remedying some of mankind’s most intractable conditions.  It could alleviate material poverty, disease, suffering, and possibly even death.  As with every human technology, its benefits and its dangers lie in how we use it.  What we are creating in AGI, however, is unique.  It has the potential to be humanity’s greatest gift to itself, or the last invention it will ever create.

Notes

  1. “Deep reinforcement learning” refers to the combination of two types of machine learning: reinforcement learning, which is essentially trial-and-error, and deep learning, the ability of an AI to learn and build a knowledge base from raw inputs, such as pixels, without manual engineering.
  2. The Paperclip Maximizer is not intended to be a serious prediction of how AI might go wrong in the real world, but illustrative of the difference between raw, boundless intelligence and human values. One can imagine far more subtle variations in which an unintended consequence is magnified millions of times by the absolute logic and efficiency of a super AGI.
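The trial-and-error half described in note 1 can be made concrete with a toy sketch.  The fragment below is a hypothetical illustration, not drawn from any cited source: tabular Q-learning on a five-state corridor, where the agent learns by reward alone that walking right reaches the goal.  A deep reinforcement learning system replaces this small lookup table with a deep neural network that learns its value estimates from raw inputs such as pixels.

```python
import random

# Toy corridor: states 0..4; reaching state 4 yields reward 1 and ends the episode.
N_STATES = 5
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

random.seed(0)
for _ in range(500):                  # 500 episodes of trial and error
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the best known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(nxt, b)] for b in ACTIONS)
        # Temporal-difference update: nudge Q toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy: +1 (move right) is optimal in every state
```

No manual engineering tells the agent where the goal is; the rightward policy emerges purely from reward feedback, which is the core idea the deep-learning half then scales up to raw sensory input.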

Works Cited

Arulkumaran, Kai, et al. A Brief Survey of Deep Reinforcement Learning. Cornell University Library, 2017, https://arxiv.org/pdf/1708.05866.pdf. Accessed 12 Oct. 2017.

Li, Yuxi. Deep Reinforcement Learning: An Overview. Cornell University Library, 2017, https://arxiv.org/pdf/1701.07274.pdf. Accessed 12 Oct. 2017.

Omohundro, Stephen M. The Nature of Self-Improving Artificial Intelligence. Self-Aware Systems, 2008, https://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf. Accessed 12 Oct. 2017.

Omohundro, Stephen M. The Basic AI Drives. Self-Aware Systems, 2008, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.393.8356&rep=rep1&type=pdf. Accessed 12 Oct. 2017.

Yampolskiy, Roman V. Artificial General Intelligence and the Human Mental Model. University of Louisville, 2012, https://intelligence.org/files/AGI-HMM.pdf. Accessed 12 Oct. 2017.

Poole, David. Computational Intelligence and Knowledge. University of British Columbia, 1998, http://www.cs.ubc.ca/~poole/ci/ch1.pdf. Accessed 12 Oct. 2017.

Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence.” Oxford University, 2003, https://nickbostrom.com/ethics/ai.html. Accessed 12 Oct. 2017.

“China, Russia and the US Are in An Artificial Intelligence Arms Race.” Futurism, 2017, https://futurism.com/china-russia-and-the-us-are-in-an-artificial-intelligence-arms-race/. Accessed Oct. 2017.

Harris, Sam. “Can we build AI without losing control over it?” TED, 2016, https://www.youtube.com/watch?v=8nt3edWLgIg&t=173s.
