Anyone who has worked with computers has seen them fail suddenly and for no reason. They kick us out of programs, inform us that we need to restart, or stop responding altogether. Why shouldn’t artificial intelligences have dramatic failures on a regular basis as well? The concept of machines as logical processors that can’t understand contradictions inherent in human nature is a popular one and an easy excuse to show a humanoid robot freaking out or exploding on screen for laughs.

Advanced computing machinery and artificial intelligence do fail, and occasionally fire is involved. Needless to say, this is something manufacturers work strenuously to prevent. People don’t want their TV shooting sparks at them or setting the drapes on fire, even if they demand to watch Season 4 of a show that only has three seasons.
But that’s not to say that this silly trope has nothing to teach us about keeping our own heads securely attached to our bodies.
Classical artificial intelligence systems earned a reputation for rigid logic because they’re based on rules that don’t function well outside a limited set of expected inputs. Similarly, if we construct our own lives around rigid rules, we may find ourselves outside our comfort zone and responding poorly more often than we’d like.
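To make that concrete, here’s a minimal sketch of the problem. The “mood detector” and its rules are entirely made up for illustration, but the behavior is the classic one: the program handles exactly what its author anticipated and nothing else.

```python
# A made-up rule-based "mood detector": it only knows the inputs its
# author thought to write rules for.
RULES = {
    "hello": "friendly",
    "goodbye": "polite",
    "help!": "urgent",
}

def rule_based_mood(message: str) -> str:
    if message not in RULES:
        # Outside the expected inputs, the rules have nothing to say.
        raise ValueError(f"Does not compute: {message!r}")  # cue the sparks
    return RULES[message]

print(rule_based_mood("hello"))  # -> friendly
print(rule_based_mood("sup?"))   # -> ValueError: the rules never covered this
```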
What do AI experts do about this? They replace or supplement the rules with machine learning. These systems learn directly from data. That is, they learn the world as it is rather than the simplified version a programmer encoded in rules. They also learn how to respond in ambiguous situations. Even if they don’t give the right answer, they can guess. No need for cranial detonation.
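Here’s the same toy task reworked as a hypothetical machine-learning model. The tiny training set is invented and the code uses scikit-learn purely as an example, but it shows the key difference: an unfamiliar input gets a best guess and a probability, not a meltdown.

```python
# A hypothetical machine-learning version of the same toy task, using
# scikit-learn. The training examples are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["hello there", "hi friend", "goodbye now",
            "see you later", "help me please", "I need help"]
moods    = ["friendly", "friendly", "polite",
            "polite", "urgent", "urgent"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, moods)

# An input the author never anticipated: the model still produces a best
# guess and reports how unsure it is, rather than erroring out.
print(model.predict(["sup, got a minute?"]))
print(model.predict_proba(["sup, got a minute?"]).round(2))
```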
That’s just the start, though. Human beings are learning machines, after all, and when we’re overwhelmed, we can still feel like our heads are about to explode.
Once their machines can learn, ML developers go a step further. They feed their models as much diverse data as they can. That way, a model not only learns how to deal with the specific diversity it has seen, it also learns how to handle diversity in general. In human terms, the more different kinds of people you meet, the more you learn about their individual personalities and cultures, but also the more you learn about what people have in common and how to work with someone you have never met. This is known as generalizability. To improve your ability to handle unexpected scenarios, surprise yourself. Do things you normally wouldn’t do. Get out of your comfort zone.
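In ML practice, generalizability is usually checked by holding back some data the model never trains on and testing on it afterward. Here’s a rough sketch of that idea, again with an invented dataset, continuing the toy example above.

```python
# A rough sketch of how generalizability is measured: hold out data the
# model never trains on, then see how it copes. Dataset is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["hello there", "hi friend", "hey, how are you",
            "goodbye now", "see you later", "farewell, friend",
            "help me please", "I need help", "please assist me"]
moods = ["friendly"] * 3 + ["polite"] * 3 + ["urgent"] * 3

# Keep a third of the messages as "surprises" the model never sees in training.
train_msgs, test_msgs, train_moods, test_moods = train_test_split(
    messages, moods, test_size=0.34, random_state=0, stratify=moods)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_msgs, train_moods)

# Accuracy on the held-out messages is a rough proxy for how well the model
# handles the unfamiliar.
print(model.score(test_msgs, test_moods))
```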
The more time you spend experiencing the unfamiliar, the more you’ll realize the rules you live by aren’t hard rules. Just like a machine gaining knowledge outside its human-provided rules, you can gain comfort with interruptions, contingencies, and contradictions. When you get caught flat-footed, you’ll be able to slow down and choose how to respond instead of…
I take requests. If you have a fictional AI and wonder how it could work, or any other topic you’d like to see me cover, mention it in the comments or on my Facebook page.