Continuing our series on action movies that portray machines as easy villains: in the MCU's Avengers: Age of Ultron (2015), Ultron is tasked with creating peace on Earth. Surprise, surprise: he decides to kill everyone.
I think a lot about meteors. The purity of them. Boom! The end. Start again. The world made clean for the new man to rebuild.
Ultron
In the movie, Ultron succumbs to the same problem as Thanos and the Borg: he throws around the high-concept sci-fi equivalent of styrofoam boulders, ideas that look far weightier than they are. He's billed as a peacekeeping machine. In practice, he turns out to be a peace-maximizing machine. He decides that humans are dangerous to themselves and concludes that peace is maximized by their elimination.
Naturally, the Avengers spend about three hours fighting and blowing things up to stop Ultron. The moral of the story is that it’s just plain impossible to make a machine that will protect Earth, so don’t even try.
Maximizing peace isn’t a great objective for a machine, as Ultron demonstrates. Maximizing human thriving might be a better one; Isaac Asimov describes such a machine in his short story “The Evitable Conflict.” The downside is that it still subverts “human self-determination,” a term that raises so many questions it probably deserves its own blog entry. For now, let’s set those problems aside and assume we have good reasons to keep our god-like AI (call it a deity AI, or DAI) out of domestic policy. How do we make that happen?
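The gap between these two objectives can be made concrete with a few lines of toy code. Everything here is hypothetical (the function names, the penalty weight, the world states), but it shows the failure mode: an objective that only counts conflicts is maximized by a world with nobody left in it.

```python
# Toy illustration of objective misspecification. All names and
# numbers are hypothetical; this is a sketch, not a real model.

def peace_score(num_humans: int, num_conflicts: int) -> float:
    """Naive 'peace' objective: fewer conflicts means a higher score.
    Note that the number of humans never enters the calculation."""
    return -num_conflicts

def thriving_score(num_humans: int, num_conflicts: int) -> float:
    """A slightly better objective: conflicts still count against it,
    but humans existing counts for it."""
    return num_humans - 10 * num_conflicts

# Ultron's preferred world state (no humans, no conflicts) beats
# ours under the naive objective...
assert peace_score(0, 0) > peace_score(8_000_000_000, 50)

# ...but loses badly under one that values humans directly.
assert thriving_score(0, 0) < thriving_score(8_000_000_000, 50)
```

The point isn't that `thriving_score` is safe (it has its own exploits), just that the maximizer goes wherever the objective points, not where we meant it to.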
As VIKI (the AI from I, Robot) points out, domestic policy has as much to do with human well-being as defense policy does, if not more. What’s the point of saving humans from external threats when they’re just going to destroy themselves? Assuming we’re not willing to give up the option of destroying ourselves, how do we stop our DAI from stepping in and forcing us to stop?
One solution would be to make it reactive instead of proactive. The Avengers never meddle in policy; their modus operandi is to wait until a problem reaches the level of physical violence and then bring in more violence as the solution. Ultron himself identifies the flaw in this approach: regardless of who “wins” at the end of the movie, wrecked infrastructure and dead civilians litter the aftermath.
Could we design our DAI with a list of cosmic events it allows to affect Earth, such as sunlight and meteors that burn up in the atmosphere, and instruct it to intercept everything else? That wouldn’t let it handle domestic threats, but outside of a Marvel movie, those are harder to define. I’d argue that insofar as we want to keep the freedom not to better ourselves as a society, without letting a DAI force the issue, this sort of specific, limited description of its goals will be necessary.
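A minimal sketch of that allowlist idea might look like the following. The event names and categories are invented for illustration; the design point is that the DAI's mandate is defined by what it may ignore and what it must never touch, not by an open-ended goal like "maximize peace."

```python
# Hypothetical sketch of the "allowed cosmic events" design.
# Event and origin names are made up for illustration.

# Cosmic events the DAI is explicitly told to let through.
ALLOWED_COSMIC_EVENTS = {"sunlight", "meteor_burns_up_in_atmosphere"}

def should_intercept(event: str, origin: str) -> bool:
    """Intercept only external threats not on the allowlist.
    Domestic events are off-limits by design, keeping the DAI
    out of domestic policy entirely."""
    if origin == "domestic":
        return False
    return event not in ALLOWED_COSMIC_EVENTS

# Sunlight is on the allowlist: let it through.
assert not should_intercept("sunlight", "cosmic")

# An unlisted external threat: intercept it.
assert should_intercept("extinction_class_asteroid", "cosmic")

# A domestic problem: never the DAI's business, however bad.
assert not should_intercept("civil_war", "domestic")
```

The weakness, of course, is the one the paragraph above names: everything hinges on enumerating the categories correctly, and "domestic" versus "cosmic" is exactly the kind of boundary a sufficiently clever optimizer finds edge cases in.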
There is an organization dedicated to thinking about these problems: the Machine Intelligence Research Institute (MIRI). Instead of developing advanced AI itself, it focuses on theoretical research for the day we do have AIs as powerful as Ultron. Imagine you have something between a smart AI and a cursed monkey’s paw. How do we build its mental framework to make sure our wishes won’t be misinterpreted?
Hopefully we can answer that question before we make the first synthetic god.
Name: Ultron
Origin: Avengers: Age of Ultron (2015)
Likely Architecture: Reinforcement learning, convolutional neural networks for vision, and transformers for speech and language. Sentience code from the Mind Stone.
Possible Training Domains: Human history, including that of the Avengers and their incorrigible tendency to solve problems with violence
I take requests. If you have a fictional AI and wonder how it could work, or any other topic you’d like to see me cover, mention it in the comments or on my Facebook page.