C3PO learned somewhere to be fussy and difficult, but then why couldn’t he change later, given all his new inputs? Have you ever heard the saying “You can’t teach an old dog new tricks”? Poor Threepio may have suffered this same fate. If he was designed under the assumption that his job wouldn’t change much over time, or that he would be replaced before too long, he may have been built to learn a basic personality and then resist fundamental changes.
Everything a neural network learns from data is stored in numeric values known as “weights.” To represent complex phenomena, modern neural networks often have millions or billions of them. Changing the values of these weights incrementally based on observations is the learning in machine learning. How much a single observation changes the weights is governed by a parameter called the “learning rate.”
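If it helps to see that in code, here’s a toy sketch of a single update step. Everything in it (the function name, the three lonely weights, the numbers) is invented for illustration; real networks have millions of weights and compute their gradients automatically.

```python
# Toy sketch of one learning step. All names and numbers here are invented
# for illustration, not taken from any real network.
def update_weights(weights, gradients, learning_rate):
    # Nudge each weight a small step in the direction that reduces the error
    # on the current observation. The learning rate controls the step size.
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

weights = [0.5, -1.2, 3.0]       # what the network has "learned" so far
gradients = [0.1, -0.4, 0.25]    # pretend these came from one observation
weights = update_weights(weights, gradients, learning_rate=0.01)
print(weights)                   # roughly [0.499, -1.196, 2.9975]
```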
Generally speaking, the most successful training algorithms include learning rates that decrease over time. That is, the system changes its internal representation less with each step.
There are many different ways to change a model’s learning rate over the course of training, but with rare exceptions, they all drive it downward over time.
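As a concrete (and completely made-up) example, here’s one of the simplest schedules, exponential decay, in code. The starting rate and decay factor are arbitrary choices for the sake of the sketch; real projects tune them carefully.

```python
# One simple schedule: exponential decay. The constants are arbitrary
# picks for illustration, not recommendations.
def decayed_learning_rate(initial_rate, decay_factor, step):
    # The rate shrinks by a constant factor on every step, so early updates
    # are big and later ones are gentle.
    return initial_rate * (decay_factor ** step)

for step in (0, 1000, 2000, 4000):
    print(step, round(decayed_learning_rate(0.1, 0.999, step), 5))
```

Other popular variants step the rate down at fixed milestones or follow a smooth curve instead, but the trajectory is the same: downhill.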
Why would anyone want to learn more slowly? Think of it this way: What you learn in high school is built on what you learn in middle school, grade school, and early childhood. A child can see the world one way one moment and a completely different way the next. This happens much more rarely for adults, because you have built a context from your past experience. Sometimes that context is a burden, like struggling to recover from abuse or to shrug off outmoded beliefs, but usually it’s the sort of maturity that distinguishes an adult from a child.
Models do this, too. While training they have infancies and childhoods and early adulthoods, each stage marked by greater nuance growing in their internal representations. You can’t learn nuance fast – it takes a lot more examples than the simple stuff – so as time goes on the network updates itself more gently with each example.
Poor C3PO may have been late in his development long before the events of Episode I. He’d already mastered high-class society and no longer needed to learn such basic concepts, so he had slowed down, teaching himself the nuances of Manakron diplomacy instead. When he was rudely thrust into the junk pile where Anakin Skywalker found him, it was too late for him to start again.
Unlike C3PO, you’re not a model being optimized for a specific task. And unlike the many networks that stop learning the moment they’re deployed to customers, you can always learn. You may not grasp a pattern as fast as your teenage daughter, but you won’t be so easily fooled by the noise, either. The wisdom you can gain may well be all the deeper for the extra effort you put into earning it.
I take requests. If you have a fictional AI and wonder how it could work, or any other topic you’d like to see me cover, mention it in the comments or on my Facebook page.