10 Machine Learning Ethics Mini-Essays

3. Humanity.

Machine Learning Ethics assignment, Question #3, “Humanity”, for York University CSML1000 “Machine Learning in the Business Context”, Fall 2019.

How do machines affect our behavior and interaction?

I found this to be the most difficult question of this assignment. Or, perhaps, the one that hit closest to home. My background is diverse, but it often revolves around affecting behaviour and interaction with machines. I'm a "user experience (UX) designer", and I was one long before there was such a term. I've also had a bit of a crush on AI since I was a teenager. My first attempt at re-programming an ELIZA script to mess with people was back in 1986, and I've done it numerous times since. But only recreationally.

I was heavily involved in Google AdWords some years back, and I had very mixed feelings about what I was doing. On one hand, I may well have saved some very worthwhile companies by using their marketing budgets in a powerful new way few people had figured out yet. On the other hand, I felt like all I was doing was getting people to spend their money and behave in ways I could reasonably predict and take advantage of. I could tell I was really interested in the tools under the hood that made it possible, but harnessing those tools just to get people to spend money in a particular way, as though they were rats in a maze, was not exactly my idea of a good time. I did a lot of A/B testing in order to manipulate my clients' customer bases more effectively. One step further down that road, and I'd have been making click-bait and selling democracy out to whoever wanted to pay me to do it.

So, like I say, this question hits close to home. And I did not know where to start.

And then Andrew Ng’s newsletter, The Batch[3], arrived in my inbox. I’m a real sucker for Andrew Ng, but when he writes stuff like this, I almost start feeling like a fanboi:

I wrote about ethics last week, and the difficulty of distilling ethical AI engineering into a few actionable principles. Marie Kondo, the famous expert on de-cluttering homes, teaches that if an item doesn’t spark joy, then you should throw it out. When building AI systems, should we think about whether we’re bringing joy to others?

This leaves plenty of room for interpretation. I find joy in hard work, helping others, increasing humanity’s efficiency, and learning. I don’t find joy in addictive digital products. I don’t expect everyone to have the same values, but perhaps you will find this a useful heuristic for navigating the complicated decision of what to work on: Is your ML project bringing others joy?

This isn’t the whole answer, but I find it a useful initial filter.

Andrew Ng

This definitely works for me. And if I have the power to choose, this is one of the things I will choose. I have often gone in this direction. My greatest personal satisfaction from 25 years of working with the web was creating an interactive website that taught millions of people to play ukulele. How much joy is that? Though I did sully it by monetizing it with AdSense, treating my own students like rats in a maze.

Come to think of it, my very first machine learning project was 100% joy-oriented. I had just finished watching Mr. Ng’s entire Deep Learning series. I went over to tensorflow.org and executed my very first line of Python ever. I stayed up late, picking through the layers of a pre-trained VGG-19 convnet to get just the style matrices that would let me make what I wanted. My brother-in-law is an artist, and I wanted to bring him joy by getting a machine to paint him in the style of one of his paintings. I made these[4][5]:

I had used machine learning to bring my family joy. Onward and upward!
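For anyone curious what that late night actually involved: the heart of neural style transfer is pulling activations out of a handful of VGG-19 layers and turning each into a Gram ("style") matrix. Here is a minimal sketch in TensorFlow of that extraction. The particular layer names, image size, and helper names are my illustration of the general technique, not the exact code from that project.

# Minimal sketch of style-matrix extraction from a pre-trained VGG-19.
# Layer choices follow common neural style transfer practice (Gatys et al.);
# they are assumptions, not the original project's exact settings.
import tensorflow as tf

STYLE_LAYERS = [
    "block1_conv1", "block2_conv1", "block3_conv1",
    "block4_conv1", "block5_conv1",
]

def build_style_extractor():
    """Return a model mapping an image to the activations of the style layers."""
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
    vgg.trainable = False
    outputs = [vgg.get_layer(name).output for name in STYLE_LAYERS]
    return tf.keras.Model(inputs=vgg.input, outputs=outputs)

def gram_matrix(activations):
    """Compute the Gram (style) matrix of a batch of feature maps."""
    # Correlate feature channels with each other across all spatial positions.
    result = tf.linalg.einsum("bijc,bijd->bcd", activations, activations)
    shape = tf.shape(activations)
    num_positions = tf.cast(shape[1] * shape[2], tf.float32)
    return result / num_positions

# Usage: preprocess an image for VGG-19, then pull out its style matrices.
extractor = build_style_extractor()
image = tf.random.uniform((1, 224, 224, 3), maxval=255.0)  # stand-in for a painting
preprocessed = tf.keras.applications.vgg19.preprocess_input(image)
style_matrices = [gram_matrix(act) for act in extractor(preprocessed)]

Matching those Gram matrices while also matching the content activations of a deeper layer is what lets a machine repaint one image in the style of another.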

[3] Andrew Ng, The Batch (deeplearning.ai newsletter): https://info.deeplearning.ai/tensorflow-versus-pytorch-autonomous-drone-races-state-of-the-art-with-less-compute-nlp-for-rare-languages

[4] [inline image]

[5] [inline image]