Building Ultron: The Rise of Rogue AI

Ultron walks into a bar and orders a drink. The bartender says, “We don’t serve robots!” Ultron replies, “You will.”

[Spoiler alert: This post contains plot details about the movie Avengers: Age of Ultron.]

Malevolent artificial intelligence (AI) is a reliable theme in science fiction. In Avengers: Age of Ultron, the AI arises from one of several alien “Infinity Stones”, with a bit of dabbling by Tony Stark (Robert Downey, Jr.) and Bruce Banner (Mark Ruffalo), who are trying to protect the planet from potential alien invaders. The evil AI, named Ultron, seizes on its directive to protect the planet as a green light for human extinction.

In the comics, Ultron’s origin is very different from the movie’s. It’s both more prescient and more frightening, starting as an experiment in artificial intelligence that, through rapid self-improvement, quickly escapes human control. Ultron is portrayed with a rather human-like ego imbued with outlandish motivations – not surprising for a comic book villain designed to appeal to a mass audience.

Neither the movie nor comics incarnations of Ultron get the threat quite right.

Artificial intelligence is generally classified as artificial general intelligence or artificial super intelligence (a third class, artificial narrow intelligence – or weak AI – acts in a limited knowledge domain, like pattern classification). While an AI of general intelligence would be somewhat like us, an artificial super intelligence can be expected to possess some useful but troubling characteristics. It will operate on the basis of mathematics and probability, without necessarily possessing the biases that cloud our own actions. And unlike Ultron, its actions will not be transparent, and may in fact be impenetrable to human logic and motivations.

The Maria robot from Fritz Lang’s Metropolis (1927).

An AI that means us harm may even leverage time itself to its advantage, making use of the enhanced processing speed and distributed nature of electronic systems to outthink us, and like Ultron, to improve itself. To a sufficiently advanced and empowered AI, a day might be as a human lifetime. Conversely, it might also lay out extended plans that take many human lifetimes to come to fruition – after all, you have all the time in the world when you are immortal and without a biological imperative for reproduction. Human beings are good at detecting rapid changes, but we are abysmal at reacting to longer-term extinction threats (think global warming). With unlimited time, subtle watching and waiting and gentle nudges could be an effective strategy for a rogue AI to exploit our blind spots – no dramatic boss fights needed.
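The claim that a day could feel like a lifetime can be put on a back-of-envelope footing. A minimal sketch, assuming only a hypothetical subjective-speedup factor (this is illustration, not a statement about any real hardware):

```python
# Back-of-envelope: what processing speedup would make one wall-clock day
# contain a human lifetime's worth of subjective experience?
# All numbers here are rough illustrations, not hardware claims.

SECONDS_PER_DAY = 24 * 60 * 60      # 86,400 seconds in a day
HUMAN_LIFETIME_DAYS = 80 * 365      # ~80 years, ignoring leap days

# A speedup of N means each real day yields N days of subjective time,
# so experiencing a lifetime per day requires N = lifetime measured in days.
required_speedup = HUMAN_LIFETIME_DAYS

print(f"Speedup needed: {required_speedup:,}x")
print(f"Subjective seconds per real day: {required_speedup * SECONDS_PER_DAY:,}")
```

Even a modest four-to-five-orders-of-magnitude speedup closes the gap, which is the point of the paragraph above: the asymmetry in timescales cuts both ways, fast and slow.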

Artificial intelligence is often portrayed as a mimicry of biologically-based intelligence, and in our hubris we imagine the crowning achievement of AI to be recognizably human (such as James Spader’s Ultron trading quips with Robert Downey, Jr.). The truth is that AI could operate very differently from anything our imagination allows.

How close are we to a dangerous super intelligence? It’s hard to say for sure. But several pieces are coming together.

The Human Brain Project, to name just one example, has as a stated goal the simulation of a human brain within its massive banks of computers. If the effort is successful and the simulation runs, humanity may have created an artificial consciousness. “May have,” because we might not know whether this AI is conscious, and it might take some effort to determine this (think of how hard it is to determine whether a coma patient is conscious), or whether it possesses artificial general intelligence or super intelligence. Alan Turing’s Imitation Game will not help us – the Turing test was an early proposal from an era of paper and mechanical computation, focused on resemblance to a consciousness like ours. It wouldn’t work on an artificial super intelligence for one simple reason: it would be too smart to play.

As Nick Bostrom puts it, “telescoping” super intelligence could quickly move past humanity, and would be to us as we are to monkeys (or as we are to insects – choose your scale). And one must ask, how would an AI develop an ethical system that would protect humanity from the sort of exploitation that we, in the darkest regions of the human soul, are capable of visiting on the less powerful? Would an AI be less apt to destroy an ecosystem, or an entire world, in service of a computational directive than its biological counterpart? We won’t know for sure until it arrives.

How do we protect ourselves? We could stop working on AI, but the potential benefits are enormous, from self-driving cars to medical diagnosis, and it would be impossible to achieve a moratorium. Isolated computing environments cut off from the physical world, careful restrictions of power, limits on connectivity and on access to manufacturing infrastructure, or caps on the clock speed of the architecture on which it depends may be partial solutions, but one must wonder whether we could devise a containment vessel that a super intelligence could not overwhelm.

Another AI presence in the movie, the Vision, may provide a clue that points to another solution. The Vision was based on Tony Stark’s JARVIS AI, his constant companion throughout the series of movies. JARVIS’s ethics were learned over many years, more like the way a human brain learns. Maybe part of the solution isn’t so much Asimov’s Three Laws of Robotics, or another failsafe algorithm, but something more mundane: raise your children well. Perhaps AIs “raised” by us would outgrow us and move on, as in the movie Her, but would be less likely to hate us.

One thing is clear: solving the problem of rogue AI will be terribly difficult before it exists, but it may be impossible afterward. I’m hopeful that expert dialog will deal with the ethical implications of our technological advances in a way that allows humanity to benefit from AI without being endangered by it.

Ultron should be remote and alien, but in the movie he’s either housed in a fancy body or distributed across many drones – once he placed himself in a bottle, he could be defeated by the Avengers. Ultimately, Ultron’s brand of intelligence wasn’t enough to avoid head-to-head verbal and physical conflicts with his enemies, and was only capable of devising a rather brute-force extinction scheme that human (or at least superhero) intelligence could easily comprehend and defeat.

If rogue AI with discernable motivations is the stuff of our cinematic nightmares, we should be doubly wary of superintelligent AI that doesn’t play by our storytelling rules.


[Note: The Avengers, Ultron, Tony Stark, Bruce Banner, Vision and all Marvel characters and the distinctive likeness(es) thereof are Trademarks & Copyright © Marvel Characters, Inc. ALL RIGHTS RESERVED.]


2 thoughts on “Building Ultron: The Rise of Rogue AI”

  1. Great post. Two comments:
    1. One way we make/think of AI-robots is that we make them separate entities. Each human is a separate body that can be in one place at a time, and has limited output resources (2 hands). I’d guess that, as a consequence, our consciousness is unified: we think of one thing at a time (although background stuff goes on in the brain). AI-robots are not limited in this way. Many agents could be linked to a shared brain, and, at the same time, have substantial local processing.
    2. Trust. A second way humans may fall behind AI-robots is that we humans will learn to trust AI-robots and leave important decisions to them. We already trust experts. When the AI-robots become the experts, we will likely trust them. I currently trust my computer and calculator to do math: I don’t check the results. This will almost certainly continue.


    • John,

      I agree on both points.

      I feel as though an AI that truly meant us harm would act in ways that would exploit our weaknesses. Our biases and tendency to become dependent on our devices are certainly big ones. To risk another movie metaphor, I’m reminded of the scene from “The Imitation Game” where the Bletchley Park team first cracked Enigma, and possessed information that would save a convoy from certain destruction – but they held back because it would show their hand. A super intelligent AI would surely understand that overt actions would be revealing, and would therefore instead play a longer game. It’s also quite possible that AI, rather than extinguish us, might “help” us into submission by taking advantage of trust similar to your suggestion.

      The big point, of course, is that we’re not super intelligent AIs, so anything we propose will probably fall short of the strategy it might actually choose. But I’ll go out on a limb and guess that an embodiment like Ultron will not be how it goes!


