
5. Racist Robots.

Machine Learning Ethics assignment, Question #5, “Racist Robots”, for York University CSML1000 “Machine Learning in the Business Context”, Fall 2019.

How do we eliminate AI bias?

The problem of human bias expressing itself through machine learning algorithms requires deliberate intervention to minimize its impact. Because ML systems are trained on data generated through human activity, the biases expressed in that activity automatically reproduce themselves in any system that trains on that data. Google recently disbanded its AI Ethics Board after employees protested the board’s inclusion of Heritage Foundation president Kay Coles James[7]. Ms. James and the Heritage Foundation have a hateful agenda. One can easily imagine Ms. James, among other things, obstructing any guidelines advocating the deliberate removal of human bias from ML systems on the basis of Divine Provenance or Data Sovereignty or Freedom from Censorship or Sick of Political Correctness or some nonsense like that. All of which is to say: bias doesn’t just exist in data; we must also remain watchful for groups and individuals who work to ensure that biases remain in important systems. Or, as data scientist Cathy O’Neil says in the TED Talk that is part of our course material[8],

Data laundering. It’s a process whereby technologists hide ugly truths inside black-box algorithms, and call them objective. Call them meritocratic.

At first I did not understand why Google’s employees wanted this board shut down. The more I read about the characters who had infiltrated it, the clearer it became. As Ms. O’Neil concludes,

Data scientists, we should not be the arbiters of the Truth. We should be translators of ethical discussions that happen in larger society.

My best understanding so far of how to eliminate bias from ML systems comes from the field of Natural Language Processing. Language processing algorithms are trained on vast amounts of real human language usage. Bias that occurs in a corpus of text will be incorporated into a word embedding, and that bias will be inherited by any machine learning algorithm that trains using that embedding. The most effective way to remove this bias is to modify the word embedding itself: identify the words that suffer from bias, and mathematically eliminate or reduce that bias as much as possible. The Andrew Ng lectures include an excellent description of how to remove gender bias from a word embedding. Words like male, female, boy, girl, King, and Queen do not need gender bias removed, because they are gender specific. Words like doctor, nurse, engineer, strong, and attractive, on the other hand, will often carry a biased gender correlation in any corpus of text. We can “move” these words in our word embedding to the nearest point that is equidistant from male and female, which amounts to projecting out the component of each word vector that lies along the gender direction (sketched in the code below). By manually selecting which words are treated this way, we retain the gender distinction between words like King and Queen while, hopefully, erasing any gender correlation from words like doctor and nurse.

While this may seem like an artificial, heuristic, or hackish approach, it is exactly the kind of intervention required in a world where our data will be biased. The same is true of training an algorithm to predict re-offending criminals, to process job applications, or to identify human beings walking in traffic. We have to be curious about and aware of the biases in the data we are using; we have to be willing to remove that bias even when we face an opposing agenda; and where it is not possible to de-bias the data itself, we must be aware of the biases that do exist and then deliberately, mathematically eliminate them from model training, to whatever extent possible.
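As a rough sketch of that neutralization step, here is what the projection might look like in Python. The toy three-dimensional vectors and the short word list are illustrative assumptions standing in for a real embedding such as GloVe or word2vec, not values from any actual model.

```python
import numpy as np

# Toy embeddings for illustration only; real embeddings would come from a
# trained model and have hundreds of dimensions.
embeddings = {
    "male":   np.array([ 1.0, 0.2, 0.5]),
    "female": np.array([-1.0, 0.3, 0.5]),
    "doctor": np.array([ 0.4, 0.8, 0.1]),
    "nurse":  np.array([-0.5, 0.7, 0.2]),
    "king":   np.array([ 0.9, 0.1, 0.9]),
    "queen":  np.array([-0.9, 0.1, 0.9]),
}

# Estimate the gender direction from a definitional pair of words.
gender_direction = embeddings["male"] - embeddings["female"]
gender_direction /= np.linalg.norm(gender_direction)

def neutralize(vector, direction):
    """Remove the component of `vector` along `direction`, leaving the word
    equidistant from both ends of that axis."""
    projection = np.dot(vector, direction) * direction
    return vector - projection

# Neutralize only the hand-picked words that should be gender-neutral;
# gender-specific words like "king" and "queen" are left untouched.
for word in ["doctor", "nurse"]:
    embeddings[word] = neutralize(embeddings[word], gender_direction)
```

In practice the gender direction would be estimated from several definitional pairs (he/she, man/woman, and so on), and the list of words to neutralize would be curated by hand, which is exactly the manual selection described above.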

[7] https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board

[8] https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end#t-764688