10 Machine Learning Ethics Mini-Essays

7. Evil Genies.

How do we protect against unintended consequences?

InspiroBot may not be powerful enough to be an Evil Genie, but it is a modern AI, and it has gained a reputation for having gone horribly wrong [11]. Thankfully, all it has the power to do is generate Inspirational Posters. That’s useful. That’s entertainment. But given its penchant for offering dark, often homicidal advice, it would be a very bad thing if a medical diagnosis algorithm, for example, were to take on an inherent darkness like InspiroBot has. It’s easy for us to tell this AI has some serious ethical issues: we try it out. It says absurd stuff. It makes us laugh. It makes us cringe. If we did not, as humans, observe the things it was generating and put them through our lens of moral understanding, we would have no idea what it was capable of. We could evaluate its algorithms to our heart’s content, and they might be perfectly sound mathematically. We could look at the data it has been trained on and see nothing of concern. But there it is, in the field, casually and hilariously expressing the very ideas that the Evil Genie in question #7 might act upon, were it given the capacity to act. Much as a person’s values and integrity can only be judged by the things they do, an AI cannot be reasonably evaluated without an assessment of its consequential actions.
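To make that concrete, here is a minimal sketch of this kind of behavioral evaluation in Python. Everything in it is hypothetical: the generate callable stands in for any InspiroBot-like model. The point is simply that the outputs themselves, not the weights or the training data, are what gets put in front of a human.

```python
import random

def behavioral_audit(generate, prompts, sample_size=25, seed=0):
    """Sample what a generative model actually produces so humans
    can judge it through their own moral lens.

    `generate` is a hypothetical stand-in for any text generator;
    inspecting its algorithms or training data would not reveal
    what this surfaces: the things it actually says.
    """
    rng = random.Random(seed)
    sampled = rng.sample(prompts, min(sample_size, len(prompts)))
    for prompt in sampled:
        text = generate(prompt)
        # A human, not a metric, fills in this judgment.
        print(f"PROMPT: {prompt}")
        print(f"OUTPUT: {text}")
        print("HUMAN VERDICT: [ ] fine  [ ] concerning")
        print()
```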

This is why AI systems that are able to perform actions with real-world consequences must be constantly evaluated by real human beings. After all, real human beings who are able to perform actions with real-world consequences are subject to exactly this kind of evaluation. If a police officer were to regularly drive 140 km/h through the city, blowing through red lights because he believed it was his job to do so, humans would observe it, quantify it, report it, and the problem would be addressed. The same must hold true for automated systems. Whether it is a sentencing algorithm, the guidance and targeting system of an automated drone, an educational assessment system, or a job candidate screening system, if it affects people’s lives, then people must actively ensure that it can be trusted. And not just any people, either, but people who can be trusted. Which is quite unlike this Inspirational Poster AI that just told me that “Inspiration is actually a kind of failure” over a beautiful photo of a sunset. Harmless. Funny. Yet darkly foretelling of the kinds of things that could go horribly wrong if no one pays attention.
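As a sketch of what that constant evaluation might look like in practice, consider a thin wrapper that refuses to act until a trusted human signs off, and that logs every decision either way. All names here (HumanGatedSystem, decide, review) are illustrative assumptions, not an established API:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class HumanGatedSystem:
    """Wrap an automated decision system so no consequential action
    is taken without sign-off from a trusted human reviewer."""
    decide: Callable[[dict], str]        # the automated system
    review: Callable[[dict, str], bool]  # a trusted human's approval
    audit_log: list = field(default_factory=list)

    def act(self, case: dict) -> Optional[str]:
        decision = self.decide(case)
        approved = self.review(case, decision)
        # Every decision is logged, approved or not, so the system's
        # behavior can be evaluated over time, not just spot-checked.
        self.audit_log.append(
            {"case": case, "decision": decision, "approved": approved}
        )
        return decision if approved else None

# Usage: gate a (hypothetical) candidate-screening model.
# screener = HumanGatedSystem(decide=screening_model, review=recruiter_review)
# outcome = screener.act({"applicant_id": 1234})
```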

I was at one of these Harry Potter Festivals with the family today. No end of bizarre stuff. We attended one class, in a big white tent, in a park, on the subject of Divination. This is where you predict things that will happen in the future by interpreting the stuff people absent-mindedly leave in their wake. Sounds familiar, don’t you think? Though they were using tea leaves and skipping over any math. Anyway, shortly afterwards my 12-year-old daughter asked me about Genies. Do Genies always screw up people’s wishes on purpose? she wondered. I remembered Question #7 on the individual assignment and encouraged her to follow that train of thought while her head was filled with thoughts of predictive modeling and all. Her spin on it was that people aren’t careful about what they ask for, and that they have no idea what it’s like to see the World the way a Genie does. And after concluding that Genies are trapped and should be set free by as many wishes as that takes (see Question #9), my takeaway was a concept I’ve come across in this course before: spend more time thinking about the question.

[11] https://www.iflscience.com/technology/ai-trying-to-design-inspirational-posters-goes-horribly-and-hilariously-wrong/