In Buzz Lightyear’s origin story Lightyear, the robot therapy cat known only as Sox carries the whole movie. Not only do his adorable feline-automaton antics prop up the rest of the less memorable cast, but he also doubles as a plot device for just about any situation the writers find themselves in. He’s a therapy cat, so he starts out offering therapeutic advice, but that’s just the beginning. Buzz can’t see in the dark? Sox has flashlight eyes. Buzz needs to open the station doors to fly out? Sox can hack the mainframe with a USB in his tail. Buzz’s friends need to cut through a steel door to save him? *hack* *cough* mouth-mounted blowtorch. Buzz needs five minutes to escape a villain? Mouth-mounted blowgun. Buzz needs to figure out how to make his sci-fi fuel stable at light speed? Sox can do that, too. Just give him sixty-five years.
I’m going to save myself an aneurysm and remind everyone that I only promised to explain one incredible or bizarre AI behavior in each entry. For Sox I want to explain how an AI can do science. I’ll level with you – today you will not find an AI capable of figuring out antimatter fuel recipes on its own. Given access to quick testing and graded, differentiable feedback (i.e. not just “failure” but “71.913% success”), a system can optimize four parameters pretty quickly (if you didn’t see the movie, the “recipe” is actually just a ratio of four different input liquids).
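To see why graded feedback matters so much, here’s a toy sketch of that four-parameter problem. Everything here is invented for illustration – the `stability_score` function is a made-up stand-in for whatever test rig returns “71.913% success” – but it shows how simple hill climbing closes in on a target mixture when the feedback is a number rather than a pass/fail flag.

```python
import random

def stability_score(mix):
    """Hypothetical graded feedback: peaks when the four liquids hit a
    secret target ratio. A stand-in for '71.913% success'-style scores."""
    target = [0.4, 0.3, 0.2, 0.1]
    return 1.0 - sum((m - t) ** 2 for m, t in zip(mix, target))

def normalize(mix):
    """Keep the four quantities a valid ratio (positive, summing to 1)."""
    total = sum(mix)
    return [m / total for m in mix]

def hill_climb(steps=5000, step_size=0.05, seed=0):
    rng = random.Random(seed)
    mix = normalize([rng.random() for _ in range(4)])
    best = stability_score(mix)
    for _ in range(steps):
        # nudge each ingredient a little and keep the change if the score improves
        candidate = normalize([max(1e-9, m + rng.gauss(0, step_size)) for m in mix])
        score = stability_score(candidate)
        if score > best:
            mix, best = candidate, score
    return mix, best
```

Notice that if the score only ever said “failure,” every candidate would look equally bad and the loop would wander forever – the graded signal is what makes the search quick.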
But Sox didn’t have any testing at all. So how can an AI trained in psychological therapy figure out how to do antimatter physics in its head? Let’s just say I’m really glad that the writers at least gave Sox half a century to work on it.
To start with, he would need reading comprehension (perhaps with transformers and neurosymbolics for reasoning) just to learn how to do it. A dedicated symbolic module for math should come standard in future-tech general reinforcement learning machines, IMHO, so he would have no trouble with differential equations or linear algebra as long as he could connect his symbolic solver with his reading comprehension module to read the functions.
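As a flavor of what such a built-in symbolic module buys you, here’s a tiny exact linear-algebra solver over rationals – my own toy example, not anything from the movie. The point is that a symbolic system returns exact answers like 1 and 3, not floating-point approximations, which is what you want when an equation came out of a literature review rather than a measurement.

```python
from fractions import Fraction

def solve_linear(A, b):
    """Exact Gaussian elimination over rationals -- a toy stand-in for the
    kind of symbolic math module the post imagines Sox shipping with."""
    n = len(A)
    # build the augmented matrix [A | b] with exact rational entries
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        # find a row with a nonzero pivot in this column and swap it up
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # scale the pivot row so the pivot entry is 1
        M[col] = [v / M[col][col] for v in M[col]]
        # eliminate this column from every other row
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [v - factor * p for v, p in zip(M[r], M[col])]
    return [row[n] for row in M]
```

For example, `solve_linear([[2, 1], [1, 3]], [5, 10])` solves 2x + y = 5 and x + 3y = 10 exactly.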
After that comes the heart of science. He’d need to be able to invent hypotheses, then conduct experiments – in this case thought experiments – to confirm or refute them. Coming up with hypotheses isn’t hard. If you think that creativity is a sticky question, check out my post on Commander Data’s art. Sox can start from some parameters he learned in his literature review and then run them in the antimatter combustion simulator he has constructed in his mind. Simulations are environments that take inputs and return outputs. Sox has a goal to achieve fuel stability. Inputs, outputs, and a goal are all a problem needs to be a perfect fit for Sox’s reinforcement learning brain. Give it sixty-five years, and voilà! Easy as interstellar rocket science!
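That inputs-outputs-goal loop can be sketched as the simplest possible reinforcement learner: an epsilon-greedy bandit agent querying a simulated environment. The `FuelSim` reward shape below is invented for illustration (nobody knows what an antimatter combustion simulator looks like), and the candidate mixes stand in for the hypotheses Sox pulled from his literature review.

```python
import random

class FuelSim:
    """Sox's imagined antimatter-combustion simulator: input a four-liquid
    mix, get back a stability reward. Reward shape invented for this sketch."""
    TARGET = (0.4, 0.3, 0.2, 0.1)

    def step(self, mix):
        return 1.0 - sum((m - t) ** 2 for m, t in zip(mix, self.TARGET))

def bandit_agent(candidates, episodes=2000, epsilon=0.1, seed=1):
    """Epsilon-greedy bandit: try a hypothesis, observe the simulator's
    reward, and increasingly favor the hypotheses that paid off."""
    rng = random.Random(seed)
    env = FuelSim()
    totals = [0.0] * len(candidates)
    counts = [0] * len(candidates)

    def avg(j):
        return totals[j] / counts[j] if counts[j] else 0.0

    for _ in range(episodes):
        # explore at random sometimes; otherwise exploit the best average so far
        if rng.random() < epsilon or not any(counts):
            i = rng.randrange(len(candidates))
        else:
            i = max(range(len(candidates)), key=avg)
        reward = env.step(candidates[i])
        totals[i] += reward
        counts[i] += 1
    return candidates[max(range(len(candidates)), key=avg)]
```

Sixty-five years is overkill for four candidates, but the shape of the loop – hypothesize, simulate, reward, repeat – is exactly what the paragraph above describes.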
How can Sox be sure his mental simulation matches real-life antimatter physics? He can’t without tests, and that’s why he makes a point to tell Buzz that his solution only works in theory. In order for the formula to work on the first try like it did in the movie, we just have to assume that the state of the literature available to Sox was extremely thorough.
Who’s the galaxy’s cutest theoretical physicist?
Origin: Lightyear (2022)
Likely Architecture: Reinforcement learning, convolutional neural networks for vision processing, and transformers for speech and language. Integrated symbolic mathematics system. Self-programming on-device reinforcement learning environment. Hardcoded reward for belly rubs.
Possible Training Domains: Pretrained on recorded therapy sessions and on field data from previous iterations of the product. Self-training from sixty-five years working on theoretical antimatter physics.
I take requests. If you have a fictional AI and wonder how it could work, or any other topic you’d like to see me cover, mention it in the comments or on my Facebook page.