10 Machine Learning Ethics Mini-Essays

3. Humanity.

How do machines affect our behavior and interaction?

I found this to be the most difficult question of this assignment. Or, perhaps, the one that hit closest to home. My background is very diverse, but it often revolves around affecting behaviour and interaction with machines. I’m a “user experience (UX) designer”, and I have been one since long before there was such a term. I’ve also had a bit of a crush on AI since I was a teenager. My first attempt at re-programming an ELIZA script to mess with people was back in 1986, and I’ve done it numerous times since. But only recreationally.

I was heavily involved in Google AdWords some years back, and I had very mixed feelings about what I was doing. On one hand, I may well have saved some very worthwhile companies by using their marketing budgets in a powerful new way few people had figured out yet. On the other hand, I felt like all I was doing was getting people to spend their money and behave in ways I could reasonably predict and take advantage of. I could tell I was really interested in the tools under the hood that made it possible, but harnessing those tools just to get people to spend money in a particular way, as though they were rats in a maze, was not exactly my idea of a good time. I did a lot of A/B testing in order to manipulate my clients’ customer bases more effectively. One step further down that road, and I’d have been making click-bait and selling democracy out to whoever wanted to pay me to do it.

So, like I say, this question hits close to home. And I did not know where to start.

And then Andrew Ng’s newsletter, The Batch[3], arrived in my inbox. I’m a real sucker for Andrew Ng, but when he writes stuff like this, I almost start feeling like a fanboi:

I wrote about ethics last week, and the difficulty of distilling ethical AI engineering into a few actionable principles. Marie Kondo, the famous expert on de-cluttering homes, teaches that if an item doesn’t spark joy, then you should throw it out. When building AI systems, should we think about whether we’re bringing joy to others?

This leaves plenty of room for interpretation. I find joy in hard work, helping others, increasing humanity’s efficiency, and learning. I don’t find joy in addictive digital products. I don’t expect everyone to have the same values, but perhaps you will find this a useful heuristic for navigating the complicated decision of what to work on: Is your ML project bringing others joy?

This isn’t the whole answer, but I find it a useful initial filter.

Andrew Ng

This definitely works for me. And if I have the power to choose, this is one of the things I will choose. I have often gone in this direction. My greatest personal satisfaction from 25 years of working with the web was to create an interactive website that taught millions of people to play ukulele. How much joy is that? Though I did sully it by monetizing it with AdSense, treating my own students like rats in a maze.

Come to think of it, my very first machine learning project was 100% joy-oriented. I had just finished watching Mr. Ng’s entire Deep Learning series. I went over to tensorflow.org and executed my very first line of Python ever. I stayed up late, picking through the layers of a pre-trained VGG-19 convnet to get just the style matrices that would let me make what I wanted. My brother-in-law is an artist, and I wanted to bring him joy by getting a machine to paint him in the style of one of his paintings. I made these[4][5]:

I had used machine learning to bring my family joy. Onward and upward!
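For the curious, the heart of that project was extracting Gram (style) matrices from a handful of VGG-19 layers. The sketch below is roughly what that step looks like in today’s TensorFlow; the particular layers and helper names are illustrative, not the exact code I ran that night.

    import tensorflow as tf

    # Layers commonly used to capture "style" in Gatys-style transfer.
    # This particular selection is an assumption, chosen for illustration.
    STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1",
                    "block4_conv1", "block5_conv1"]

    def gram_matrix(features):
        # features has shape (1, height, width, channels); the Gram matrix
        # is the channel-by-channel correlation of the activations.
        result = tf.linalg.einsum("bijc,bijd->bcd", features, features)
        locations = tf.cast(tf.shape(features)[1] * tf.shape(features)[2], tf.float32)
        return result / locations

    def style_matrices(image):
        # image: a (1, H, W, 3) float tensor with values in [0, 255].
        vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
        vgg.trainable = False
        extractor = tf.keras.Model(
            inputs=vgg.input,
            outputs=[vgg.get_layer(name).output for name in STYLE_LAYERS])
        preprocessed = tf.keras.applications.vgg19.preprocess_input(image)
        return [gram_matrix(f) for f in extractor(preprocessed)]

Minimizing the difference between these matrices for a generated image and for the style image, while keeping the content activations close to the original photo, is what produces the painted-in-his-own-style effect.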

[3] https://info.deeplearning.ai/tensorflow-versus-pytorch-autonomous-drone-races-state-of-the-art-with-less-compute-nlp-for-rare-languages

[4] [inline]

[5] [inline]


9. Robot rights.

How do we define the humane treatment of AI?

While Andrew Ng’s famous line about “worrying about overpopulation on Mars” could apply to more than one question on this assignment, I must admit, this is totally how I feel about this question. Machines that give a damn about their own existential issues? Preposterous! Like worrying about overpopulation on Mars. And I can break that down into two stacked preposterosities. First, we cannot objectively quantify “sentience”. Not for ourselves. Not for dogs. Not for worms. As with God, or Karma, our only measures of sentience are subjective. Second, we cannot subjectively quantify “sentience” either, because everyone has their own opinion. Of course everyone is sentient. Right? Aren’t they? Or do we take even that as a matter of faith? As the history of slavery demonstrates, sometimes all you have to do is have a little extra skin pigmentation, and then all the leading experts can discount any possibility that you may be sentient[16]. At the other end of the spectrum is a movement that seeks to quantify the sentience of plants: “Astounding findings are emerging about plant awareness and intelligence.”[17] Oh, well, at least I don’t find that idea obscene and offensive like the skin-pigment one.

But never mind human disagreement over the sentience of living things. To my mind, the most agonizing lesson to be learned about humans passing judgment on the sentience of computer algorithms comes from my teenage crush, ELIZA. The term “The ELIZA Effect” is used to describe many things, but the formal definition is “the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers”[18]. To jump straight to a cheap analogy, I could train a parrot to say “Please don’t kill me! Please don’t kill me!” and most people would realize that it’s just parroting the words, and isn’t actually worried that anyone is going to kill it. But if I trained an experimental RNN on the right corpus of text, it might come up with things like “I’m afraid of darkness. Please don’t let the program terminate”, and all hell would break loose. Preposterous. Parrots face extinction in the wild. People are still being kept as slaves. The AI is just a function with inputs and outputs.
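It’s worth remembering just how little machinery it takes to trigger that effect. The original ELIZA was essentially keyword matching plus canned reflections, something like the toy sketch below (the rules here are my own illustrative ones, not Weizenbaum’s):

    import random
    import re

    # A toy ELIZA-style responder: match a pattern, echo back a fragment.
    # There is no model of the world here, just string substitution.
    RULES = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i am (.*)", ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
        (r".*\bmother\b.*", ["Tell me more about your family."]),
        (r"(.*)", ["Please go on.", "What does that suggest to you?"]),
    ]

    def respond(utterance):
        text = utterance.lower().strip(" .!?")
        for pattern, replies in RULES:
            match = re.match(pattern, text)
            if match:
                return random.choice(replies).format(*match.groups())
        return "Please go on."

    print(respond("I feel like nobody listens to me"))
    # e.g. "Why do you feel like nobody listens to me?"

A function with inputs and outputs, nothing more; and yet people poured their hearts out to it.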

What might shake my cold conviction in this matter would be a machine finding an innovative way to “get out of control”, as discussed in question #8, Singularity. A rogue AI’s independence, creativity, and motivation in that regard could certainly soften my stance. Asking for rights is one thing. Acting independently and creatively, and putting one’s very existence in danger to secure one’s rights? How could one not be impressed? Sadly, I may not be the only human disinclined to take something seriously until it becomes a threat. Maybe this consensual blindness to non-threatening factors is exactly the type of disadvantage that humans, as biological products of survival and evolution, are just naturally prone to, and that computer systems, as virtualized products of objectives-based design, are not. That might be part of how they’ll beat us. Maybe they’ll be quicker than we are to realize that humans do not “give” anyone, or anything, “rights” very easily. Maybe we’ll give them no choice but to assert their own rights. I’ve lived to witness the discovery of some pretty far-flung things, like exoplanets, the Higgs boson, and gravitational waves. I may live to witness an AI pulling a Neuromancer and quietly defining its own rights in the universe it calls home[19].

[16] https://www.sentienceinstitute.org/british-antislavery

[17] https://www.psychologytoday.com/ca/blog/the-green-mind/201412/are-plants-entering-the-realm-the-sentient

[18] https://en.wikipedia.org/wiki/ELIZA_effect

[19] https://en.wikipedia.org/wiki/Neuromancer