The Brave Little Toaster is not a robot. Wow, big insight, right? He’s a sentient metal device, but no one mistakes him for a sentient program in a metal body. Instead, he’s a fantasy applied to modern technology. There’s something comforting about the notion that the things in our lives are conscious beings with lives of their own. We watch these stories and feel joy when the toaster reunites with his family, or fear when he nearly becomes a tiny cube in a garbage dump.
But is the Brave Little Toaster sentient? Well, he appears to move around, make decisions and have feelings and goals, but there’s one problem: it’s all an illusion. In reality, The Brave Little Toaster is just roughly 162,000 animation frames. His decisions aren’t real. His motivations aren’t real. He appears to have human-like qualities, but moviegoers above a certain impressionable age will all agree that The Brave Little Toaster is not real and no legislation need be passed to respect his rights.
Let’s take your actual toaster, if you have one. Does it have feelings? Does it fondly recall each time you stick your tongue out at its shiny chassis? It doesn’t look like it does, so probably not.
In our first example, the brave toaster is entertaining because his human-like qualities inspire empathy, but not stressful, because we know intellectually that we don’t actually need to care about him. The real toaster is something we do need to care about: if we treat it poorly enough, it may stop making us toast. But since it doesn’t look or act like us, it receives no empathy, and keeping it making toast is where our interest ends.
Fantastical talking appliances have been around since well before The Brave Little Toaster was released in 1987, and real, silent ones much longer. Neither has ever given us cause to wonder whether we need to steward its needs and rights.
Not that we haven’t seen exceptions. Sometimes the examples are extreme. Eija-Riitta Berliner-Mauer married the Berlin Wall and took it as her surname in 1979 (Berliner-Mauer means Berlin Wall in German). She says his fall was devastating, and she will always remember him as he was in his prime. Lauren Adkins fell in love with fictional vampire Edward Cullen and married his cardboard cutout. Akihiko Kondo married a hologram pop star.
In the 1960s, Joseph Weizenbaum created a simple chatbot named ELIZA that simulated a therapy session. It used a simple strategy – find a keyword in the user’s text to reflect back as a question. If it couldn’t do that, it would offer a simple prompt such as “tell me more.”
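The strategy above can be sketched in a few lines. This is a minimal illustration of the keyword-reflection idea, not Weizenbaum’s original program; the keywords, pronoun swaps, and fallback prompts here are invented for the example.

```python
import re

# Hypothetical keyword table: each keyword maps to a canned question.
KEYWORDS = {
    "mother": "Tell me more about your mother.",
    "father": "Tell me more about your father.",
    "always": "Can you think of a specific example?",
}

# Pronoun swaps so "I am sad" reflects back as "you are sad".
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

FALLBACKS = ["Tell me more.", "Please go on."]

def respond(text, turn=0):
    words = re.findall(r"[a-z']+", text.lower())
    # 1. Look for a keyword with a canned question.
    for word in words:
        if word in KEYWORDS:
            return KEYWORDS[word]
    # 2. Reflect a first-person statement back as a question.
    if "i" in words:
        reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
        return f"Why do you say {reflected}?"
    # 3. Otherwise, fall back to a generic prompt.
    return FALLBACKS[turn % len(FALLBACKS)]
```

Given “I am sad,” this sketch answers “Why do you say you are sad?” No understanding, no memory, no feelings, yet that simple mirror was enough to draw people in.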
This turned out to be a seductive strategy. People would open their hearts to ELIZA. Not rare individuals like Mrs. Berlin Wall, but ordinary people recruited to test the software. Even when the mechanism was explained, they still responded the same way. Is ELIZA sentient?
A controversy has recently brewed around this point, with one AI ethicist insisting that Google’s dialogue system LaMDA is sentient, and a barrage of experts insisting that he’s wrong. Let’s start by defining “sentient” as “experiencing subjective feelings such as stress, pain, or joy.” Next, let’s all admit that that’s impossible to measure.
You experience your own sentience. That’s it. After that, you can guess that other people, who look and act like you, are also sentient. Maybe you can be generous and say that people who look and act different from you are sentient, too. These days we agree on all this and even go so far as to include some animals and not others. But is there any science to prove sentience? No.
“Is AI sentient?” is the secular equivalent of “does AI have a soul?” That’s not a scientific question. The Judeo-Christian tradition that forms the basis for determining what has a soul is famously stingy. It centers on humanity, and even that has been debated. In 1537, invading European colonists treated Native Americans as beasts until Pope Paul III released a statement asserting that they had souls. Not that the colonists treated them much differently after that. Plenty of religions, such as Shinto and some indigenous practices, ascribe souls to trees, places, even abstract ideas. Christians, for their part, concern themselves with the feelings and preferences of God, a being they can’t measure at all.
When someone sees sentience in a being, or even in an inanimate object, what they’ve really found is empathy in themselves. If you’ve been using something as a tool and suddenly have to treat it with empathy, that can be inconvenient and a drag on profits, so European colonists and multinational corporations have every reason to deny it. On the other hand, if you have empathy for something that you see someone else treating like a tool, you might just feel compelled to write to the Washington Post.
So we have a phenomenon that can be neither proven nor disproven, yet draws intense emotional responses from people. After all, what could be a nobler goal than protecting sentient life? We can make empirical arguments about what has feelings and what does not, but these all rest on assumptions that are not universal.
Some people will assign feelings to anything, others will never accept sentient life outside of themselves. As synthetic representations of humanity grow more sophisticated, we may see movement in the undecided middle ground. Machines with feelings as a concept may leave the margins and join mainstream thought.
If you’re open-minded, next time you use your toaster you should make some faces at its shiny chassis. It can’t hurt, right? And maybe, just maybe, you’ll be making someone’s day. Maybe you’ll feel better, too.
Name: The Brave Little Toaster
Origin: The Brave Little Toaster (1987)
Likely Architecture: A plastic stand and a stainless steel chassis. Heating coils to toast the bread and an internal timer to pop it out when ready.
Possible Training Domains: Magic
I take requests. If you have a fictional AI and wonder how it could work, or any other topic you’d like to see me cover, mention it in the comments or on my Facebook page.