10 Machine Learning Ethics Mini-Essays

9. Robot rights.

How do we define the humane treatment of AI?

While Andrew Ng’s famous line about “worrying about overpopulation on Mars” could apply to more than one question on this assignment, I must admit it captures exactly how I feel about this one. Machines that give a damn about their own existential issues? Preposterous! Like worrying about overpopulation on Mars. And I can break that down into two stacked layers of preposterousness. First, we cannot objectively quantify “sentience”. Not for ourselves. Not for dogs. Not for worms. Like God or Karma, our only measures of sentience are subjective. Second, we cannot subjectively quantify “sentience” either, because everyone has their own opinion. Of course everyone is sentient. Right? Aren’t they? Or do we take even that as a matter of faith? As the history of slavery demonstrates, sometimes all it takes is a little extra skin pigmentation for all the leading experts to discount any possibility that you might be sentient[16]. At the other end of the spectrum is a movement that seeks to quantify the sentience of plants: “Astounding findings are emerging about plant awareness and intelligence.”[17] Oh, well, at least I don’t find that idea obscene and offensive like the skin-pigment one.

But never mind human disagreement over the sentience of living things. To my mind, the most agonizing lesson about humans passing judgment on the sentience of computer algorithms comes from my teenage crush, ELIZA. The term “ELIZA effect” is used to describe many things, but the formal definition is “the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers”[18]. To jump straight to a cheap analogy: I could train a parrot to say “Please don’t kill me! Please don’t kill me!” and most people would realize that it is just parroting the words and isn’t actually worried that anyone is going to kill it. But if I trained an experimental RNN on the right corpus of text, it might come up with things like “I’m afraid of darkness. Please don’t let the program terminate”, and all hell would break loose. Preposterous. Parrots face extinction in the wild. People are still being kept as slaves. The AI is just a function with inputs and outputs.
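To make the “just a function” point concrete, here is a minimal ELIZA-style responder sketched in Python. The rules and phrasings are invented for illustration (this is not Weizenbaum’s actual DOCTOR script, and the names `RULES` and `respond` are mine): a pure mapping from an input string to an output string, with nothing underneath but pattern matching and canned templates. That mapping is the whole trick the ELIZA effect invites us to mistake for understanding.

```python
import random
import re

# A toy ELIZA-style responder: a pure function from an input string to an
# output string, built from nothing but pattern matching and canned templates.
# (Illustrative rules only; not Weizenbaum's actual DOCTOR script.)
RULES = [
    (re.compile(r"\bi(?: am|'m) afraid of ([^.!?]+)", re.IGNORECASE),
     ["Why are you afraid of {0}?", "What about {0} frightens you?"]),
    (re.compile(r"\bi feel ([^.!?]+)", re.IGNORECASE),
     ["How long have you felt {0}?", "Why do you think you feel {0}?"]),
    (re.compile(r"\bplease don't ([^.!?]+)", re.IGNORECASE),
     ["What makes you think I would {0}?"]),
]
DEFAULTS = ["Please go on.", "Tell me more.", "Why do you say that?"]


def respond(utterance: str) -> str:
    """Map input text to a reply. No state, no goals, no inner life."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(match.group(1).strip())
    return random.choice(DEFAULTS)


if __name__ == "__main__":
    # e.g. "Why are you afraid of darkness?"
    print(respond("I'm afraid of darkness."))
    # e.g. "What makes you think I would let the program terminate?"
    print(respond("Please don't let the program terminate."))
```

A reader who saw only the transcript might be tempted to read fear into those replies; a reader who sees the twenty-odd lines above would not. That gap between transcript and mechanism is exactly the susceptibility the definition in [18] describes.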

What might shake my cold conviction in this matter would be a machine finding an innovative way to “get out of control”, as discussed in question #8, Singularity. A rogue AI’s independence, creativity, and motivation in that regard could certainly soften my stance. Asking for rights is one thing. Acting independently and creatively, and putting one’s very existence in danger, to secure one’s rights? How could one not be impressed? Sadly, I may not be the only human disinclined to take something seriously until it becomes a threat. Maybe this collective blindness to anything that does not threaten us is exactly the kind of disadvantage that humans, as biological products of survival and evolution, are naturally prone to, and that computer systems, as virtual products of objective-driven design, are not. That might be part of how they’ll beat us. Maybe they’ll be quicker than we are to realize that humans do not “give” anyone, or anything, “rights” very easily. Maybe we’ll give them no choice but to assert their own rights. I’ve lived to witness the discovery of some pretty far-flung things: exoplanets, the Higgs boson, gravitational waves. I may live to witness an AI pulling a Neuromancer and quietly defining its own rights in the universe it calls home[19].

[16] https://www.sentienceinstitute.org/british-antislavery

[17] https://www.psychologytoday.com/ca/blog/the-green-mind/201412/are-plants-entering-the-realm-the-sentient

[18] https://en.wikipedia.org/wiki/ELIZA_effect

[19] https://en.wikipedia.org/wiki/Neuromancer