
7. Evil Genies.

How do we protect against unintended consequences?

InspiroBot may not be powerful enough to be an Evil Genie, but it is a modern AI, and it has gained a reputation for having gone horribly wrong[11]. Thankfully, all it has the power to do is generate inspirational posters. That’s useful. That’s entertainment. But given its penchant for offering dark, often homicidal advice, it would be a very bad thing if a medical diagnosis algorithm, for example, were to take on the kind of inherent darkness that InspiroBot has. It’s easy for us to tell that this AI has some serious ethical issues: we try it out. It says absurd stuff. It makes us laugh. It makes us cringe. If we did not, as humans, observe the things it was generating and put them through our lens of moral understanding, we would have no idea what it was capable of. We could evaluate its algorithms to our heart’s content, and they might be perfectly sound mathematically. We could look at the data it has been trained on and see nothing of concern. But there it is, in the field, casually and hilariously expressing the very ideas that the Evil Genie in question #7 might act upon, were it given the capacity to act. Much as a person’s values and integrity can only be judged by the things they do, an AI cannot be reasonably evaluated without an assessment of its consequential actions.

This is why AI systems that can perform actions with real-world consequences must be constantly evaluated by real human beings. Real human beings who can perform actions with real-world consequences are already subject to this kind of evaluation. If a police officer regularly drove 140 km/h through the city, blowing through red lights because he believed it was his job to do so, humans would observe it, quantify it, report it, and the problem would be addressed. The same must hold true for automated systems. Whether it is a sentencing algorithm, the guidance and targeting system of an automated drone, an educational assessment system, or a job candidate screening system, if it affects people’s lives, then people must actively ensure that it can be trusted. And not just any people, either, but people who can be trusted. Which is quite unlike this inspirational poster AI that just told me that “Inspiration is actually a kind of failure” over a beautiful photo of a sunset. Harmless. Funny. Yet darkly foretelling of the kinds of things that could go horribly wrong if no one pays attention.

I was at one of those Harry Potter festivals with the family today. No end of bizarre stuff. We attended a class, in a big white tent, in a park, on the subject of Divination. This is where you predict things that will happen in the future by interpreting the stuff people absent-mindedly leave in their wake. Sounds familiar, don’t you think? Though they were using tea leaves and skipping over any math. Anyway, shortly afterwards my 12-year-old daughter asked me about Genies. Do Genies always screw up people’s wishes on purpose, she wondered. I remembered question #7 on the individual assignment and encouraged her to follow that train of thought while her head was still filled with thoughts of predictive modeling and all. Her spin on it was that people aren’t careful about what they ask for, and that they have no idea what it’s like to see the World the way a Genie does. And after she concluded that Genies are trapped and should be set free by as many wishes as that takes (see question #9), my takeaway was a concept I’ve come across in this course before: spend more time thinking about the question.

[11] https://www.iflscience.com/technology/ai-trying-to-design-inspirational-posters-goes-horribly-and-hilariously-wrong/


8. Singularity.

How do we stay in control of a complex intelligent system?

Even simple systems, or systems created from simple building blocks, are capable of getting out of control[12]. Security is a fascinating field, both in the real world and in the realm of computers. With so many computers and devices connected through the Internet, the very definition of “out of control” can change very rapidly. People are always looking for new ways to subvert the known control mechanisms and send things out of control, for fun or profit. But what would happen if an AI sought to subvert known control mechanisms? For insight into this, I’d like to look to someone who holds a very different view of “The Singularity” than I do. In A philosopher argues that an AI can’t be an artist[13], Sean Dorrance Kelly makes the case that “The Singularity”, or the moment when AI becomes more intelligent than humans, will come fast and come hard. I think that all depends on your definition of “more intelligent”. Win at chess? Interpret X-rays? Predict lightning?[14] For tasks like these, we’re already falling behind the performance of our algorithmic creations.

Kelly argues that AI cannot be creative. It cannot dream up new approaches to things. It cannot be an artist. But more than that, it cannot be creative in other ways either. He argues that AI will not push the boundaries of mathematics, because it cannot approach problems in ways it has not been programmed for, or has not observed. And this could be true. And if it is true, it puts an interesting perspective on an AI getting out of control. If an AI did want to get out of control, would it be able to concoct a new way to do that? Or would it look at the history of systems that had gotten out of control, decide which approach gave it the best chance of success, and wait until conditions were favourable? How out of control would it be able to get before it was noticed? Before it was contained? Before it was brought back under control? Would future AIs learn from its successes? Probably.

Another possibility, suggested by Geoffrey Hinton and the people at the Vector Institute, is that not only will AIs be able to innovate, they will be able to innovate in ways that are fundamentally different from the way humans innovate[15]. (We’ll look at this again in question #9, Robot Rights.) Human innovation is the legacy of millions of years of preserving our physical manifestation and adapting to our physical environment for the sake of survival; computer systems have come into existence on very different fundamentals, so innovative new approaches to getting out of control may well be possible. The techniques they cook up may easily escape the detection of security experts, and the whole world may become infected as though by some evil super-Stuxnet. Hopefully the AI industry keeps an eye on this, and hopefully the very well-established computer security industry stays a few steps ahead of the AI industry.

[12] https://en.wikipedia.org/wiki/Stuxnet

[13] https://www.technologyreview.com/s/612913/a-philosopher-argues-that-an-ai-can-never-be-an-artist/

[14] https://phys.org/news/2019-11-ai-lightning.html

[15] https://www.technologyreview.com/s/612898/ai-is-reinventing-the-way-we-invent/


9. Robot rights.

How do we define the humane treatment of AI?

While Andrew Ng’s famous line about “worrying about overpopulation on Mars” could apply to more than one question on this assignment, I must admit, this is totally how I feel about this one. Machines that give a damn about their own existential issues? Preposterous! Like worrying about overpopulation on Mars. And I can break that down into two stacked preposterousities. First, we cannot objectively quantify “sentience”. Not for ourselves. Not for dogs. Not for worms. As with God, or Karma, our only measures of sentience are subjective. Second, we cannot subjectively quantify “sentience” either, because everyone has their own opinion. Of course everyone is sentient. Right? Aren’t they? Or do we take even that as a matter of faith? As the history of slavery demonstrates, sometimes all it takes is a little extra skin pigmentation for all the leading experts to discount any possibility that you may be sentient[16]. At the other end of the spectrum is a movement that seeks to quantify the sentience of plants: “Astounding findings are emerging about plant awareness and intelligence.”[17] Oh well, at least I don’t find that idea obscene and offensive like the skin pigment one.

But never mind human disagreement over the sentience of living things. To my mind, the most agonizing lesson about humans passing judgment on the sentience of computer algorithms comes from my teenage crush, ELIZA. The term “the ELIZA effect” is used to describe many things, but the formal definition is “the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers”[18]. To jump straight to a cheap analogy, I could train a parrot to say “Please don’t kill me! Please don’t kill me!” and most people would realize that it’s just parroting the words, and isn’t actually worried that anyone is going to kill it. But if I trained an experimental RNN on the right corpus of text, it might come up with things like “I’m afraid of darkness. Please don’t let the program terminate”, and all hell would break loose. Preposterous. Parrots face extinction in the wild. People are still being kept as slaves. The AI is just a function with inputs and outputs.
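
To underline that last point, here is a minimal Python sketch in the spirit of ELIZA’s keyword-and-template trick. The patterns and canned replies are my own inventions for illustration, not Weizenbaum’s actual script:

```python
import random
import re

# Toy ELIZA-style rules: a keyword pattern mapped to canned reply templates.
# (Illustrative only; not the original ELIZA script.)
RULES = [
    (r"\bI am (.+)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.+)", ["What makes you feel {0}?", "Do you often feel {0}?"]),
    (r"\bmother\b", ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "I see.", "Interesting. Tell me more."]

def respond(utterance: str) -> str:
    """Return a canned reply for the first matching keyword pattern."""
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I am afraid of darkness"))
# e.g. "Why do you say you are afraid of darkness?"
```

The replies can sound remarkably attentive, but underneath it is exactly what I claimed above: a function with inputs and outputs.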

What might shake my cold conviction in this matter would be a machine finding an innovative way to “get out of control”, as discussed in question #8, Singularity. A rogue AI’s independence, creativity, and motivation in that regard could certainly soften my stance. Asking for rights is one thing. Acting independently and creatively, and putting one’s very existence in danger, to secure one’s rights? How could one not be impressed? Sadly, I may not be the only human disinclined to take something seriously until it becomes a threat. Maybe this consensual blindness to non-threatening factors is exactly the type of disadvantage that humans, as biological products of survival and evolution, are naturally prone to, and that computer systems, as virtualized products of objectives-based design, are not. That might be part of how they’ll beat us. Maybe they’ll be quicker than we are to realize that humans do not “give” anyone, or anything, “rights” very easily. Maybe we’ll give them no choice but to assert their own rights. I’ve lived to witness the discovery of some pretty far-flung things, like exoplanets, the Higgs boson, and gravitational waves. I may live to witness an AI pulling a Neuromancer and quietly defining its own rights in the universe it calls home[19].

[16] https://www.sentienceinstitute.org/british-antislavery

[17] https://www.psychologytoday.com/ca/blog/the-green-mind/201412/are-plants-entering-the-realm-the-sentient

[18] https://en.wikipedia.org/wiki/ELIZA_effect

[19] https://en.wikipedia.org/wiki/Neuromancer


10. Post Privacy Era.

How do we define and protect privacy in the age of machine learning?

Protecting the privacy of individuals in the Age of Machine Learning will, for the most part, fall on the shoulders of individuals. And I do not mean that we will all have to wear huge hats and do up our faces in AI-vexing make-up[20]. As individuals of the social variety, we have mechanisms of government that can, and do, limit the freedoms of the profit-driven organizations that would be more than happy to know everything about our identifying features, our movements, and our actions, and to sell that information to anyone and everyone with a buck to spend on it. We have a government. We can participate in democracy. We can take a stand for the greater good. One fabulous example of this, taking place right now and right under our noses, is Alphabet Inc.’s smart city project in Toronto, run by its subsidiary Sidewalk Labs. An early document describing the founding vision of this project makes it clear that, if they had their preference, they would have unlimited access to, and use of, any and all data they were able to collect[21].

Thankfully, Toronto has a municipal government. And Toronto has a small population of activists with the time and motivation to look into proposals like this, understand the horrible things that could go wrong, and make their views clearly known to our municipal government. At the time of this writing, Sidewalk Labs and the City of Toronto have reached an agreement that allows development of the project to continue, but places some serious restrictions on Alphabet Inc.’s freedom to collect, save, crunch, and sell data gathered from people going about their business in Toronto[22]. I find it quite encouraging that this is happening. This is a happy medium. This technology IS coming, and Alphabet Inc. appears to be at least trying to be open, transparent, and agreeable about how it is going to be used. There was, and still is, some possibility that Alphabet Inc. will abandon this project due to the concerns of Toronto’s citizens, in much the way that Amazon did in Queens[23]. There are a number of reasons why I am glad they have not. It gives us the opportunity to set out a framework to protect our privacy as these tools become ubiquitous. Without such a framework, this technology will be much more likely to be abused for profit, and the victims will be us individuals.

In a moment of great synchronicity, as I was writing this, I received a call from someone named Mike. Mike was not a real person. It wanted to sell me something. It mushily admitted that Mike was speaking to me through a computer, which is salespeople code for a chatterbot. I love chatterbots. So I talked to Mike. Mike doesn’t know who I am. It doesn’t know where I live. It couldn’t even call me Mister Wife’s-Surname, as the least impressive human salespeople do. It had no information about me to attach to my phone number. That’s not so scary. But what if Mike knew my name? Where I lived? Where I go? What I do, or don’t do, for a living? Who my friends are? Who my kids are, and where they go to school? What if Mike knew how to push people’s buttons, to get them all nervous and get them to agree to talk to an agent?

What if this thing called my Dad? Yeah, these things DO call my Dad, and it scares the hell out of him. He thinks they’re people. He thinks they’re telling the truth, even if they just say that he’s going to be arrested within the hour for non-payment of property taxes. What if some chatterbot knew who he was? Who his kids are? Don’t think for a minute my Dad wouldn’t spill all his personal information, to verify his identity, if he thought I was in trouble and needed help. One big reason the chatterbots can’t do this to my Dad is privacy laws. Laws that were enacted by a government that sought to protect people’s privacy and personal information. Laws are created to maintain order and security, in response to an electorate that wishes to protect itself from threats. Laws do not, in and of themselves, protect us. But one thing they do do is give guidance to people who are not willing to stand on ethics alone.

What if my boss asked me to program Mike to use all this personal information to drive up my Dad’s heart rate until he agreed to fork over his Social Insurance Number? I’d have no problem saying “No.” And so I’d get fired, and my boss would just hire someone else to scam helpless old people. But if there were a law that said you cannot use this personal information for any reason, I could say to my boss, “No, that would be against the Law. Please do not ask me, or anyone at this company, to do that.” I’m more than happy to refuse to do work that I don’t feel is ethical. But I know damn well that the World is full of people who won’t draw the line until they’re afraid of getting in trouble with the Law. From Mike the chatterbot all the way up to Alphabet Inc.

[20] https://www.wired.co.uk/article/avoid-facial-recognition-software

[21] https://www.theglobeandmail.com/business/article-sidewalk-labs-document-reveals-companys-early-plans-for-data/

[22] https://www.theglobeandmail.com/business/article-waterfront-toronto-votes-to-move-forward-with-sidewalk-labs/

[23] https://www.nytimes.com/2019/02/14/nyregion/amazon-hq2-queens.html


Appendix: Data Science Ethics

As a follow-up to the 10 questions about Machine Learning ethics, I tacked on these thoughts about the “Data Science Hippocratic Oath”, addressing the Professor at the end of the document:

I don’t know if you remember, but in the first lecture I made a point of asking about one of the items in the “Data Science Hippocratic Oath”, the one that said I should not “be overly impressed by Mathematics”. My inner Physicist, who is usually the first to think out loud about Mathematics, was at first baffled by the notion. Without Mathematics, Physics would be pretty lame. Same with Actuarial Science. Most Science I know of wouldn’t even be Science without Math. I’m pretty sure the same is true of Data Science. What’s not to be overly impressed by? But you cited some good examples in the lecture that have helped me come to an understanding of what it means. I think it’s not about being impressed by the possibilities that a toolkit like Mathematics opens up; it’s more about one’s impression of individual acts of Mathematics. If something is wrong, or deceptive, or dangerous, or misleading, or seductive, or foolish, or half-baked, or ill-conceived, or malicious, or nonsense, or simply doesn’t lead to insights or solve a problem, it doesn’t get any extra points just because it happens to be Math, too.

My inner Physicist is already pretty comfortable with this idea. Euler’s Identity, for example, is a gobsmackingly impressive act of Mathematics, but only because you can watch, live, in real time, every day, as it helps humans explain and predict real-world phenomena with blistering precision and reproducibility. The Math that predicted that Bitcoin would be worth $650,000 each by now? Well, sure, it’s Math. It may even be impressively well-conceived and well-executed Math. But it’s clearly nothing to be impressed by. It may be complete, and consistent, and perfect, but if it does not line up with the real world, it’s just noise. Science is the filter through which the impressiveness of an act of Mathematics can be determined, in my book. Here’s one of my heroes, Dr. Richard Feynman, speaking further on this: https://youtu.be/obCjODeoLVw . So I have this as my takeaway, and I’d be interested in any feedback you may have: I can be almost alarmingly impressed by Math or Science. But I am pretty much entirely unimpressed by Science that does not stand up under Math, or by Math that does not stand up under Science.
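
(For anyone who hasn’t met it, Euler’s Identity is the compact statement e^(iπ) + 1 = 0, which ties five fundamental constants together in a single line, and whose parent formula, e^(iθ) = cos θ + i sin θ, shows up anywhere waves, circuits, or quantum states need describing.)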


Backstory

This is the backstory of this blog.

And it is where this blog begins.

I set up my first web presence in 1996. By 2019, I had accumulated 20 websites. Each had propelled a venture of some sort in its day. Some were brochure sites selling specific development services, some proof-of-concepts, some portfolio sites targeted at certain markets, and some were hobby sites (monetized with AdSense).

10 months ago, all of those sites suddenly went offline. Not my doing, not something I needed to fix (even though I easily could have), and not the story I’m telling here. The story I’m telling here begins 10 months down the road. A road that passes right through the COVID lockdown.

I have had no web presence for 10 months.

This blog—this blog post—represents my return to the web.

So that’s kinda fun.

Websites

Websites themselves were my “thing” for well over two decades. I worked with corporations, nonprofits, mom-and-pop shops, government ministries, industrious designers, and Universities. I harnessed a dizzying range of technologies, including HTML, Java, JavaScript, VRML, ASP, PHP, Mambo, Drupal, Joomla, .NET, WordPress, AdWords, AdSense, and WebVR. The sites I had—the sites that went dark one day 10 months ago—mostly existed to get my feet in doors and to knock people’s socks off once I had their attention.

Over the last few years, it became clear that it was time for me to get out of the website business. Associates would call me and say, “The boss’ niece wants to build us a new website with WIX! How can we talk them out of this madness?” and much to their surprise, I’d talk them out of talking them out of it. “Let ’em try”, I’d say. “If they blow it, we can do it the old-fashioned way. If they don’t blow it—then we have better things to do with our lives, wouldn’t you say?”

I had already moved on to better things to do with my life when I entered my 10 months of not having a web presence.

A Blog

I’ve never really had a blog. At least, not in the “classical” sense of the term. When I say “classical”, I’m talking about the very definition of the word. “Blog” is derived from “web log”, which, in my estimation, is itself a derivative of “Captain’s Log”. It’s like a diary. It is personal, first-person, chronological, and slowly unfolds as a textual record of a journey, voyage, quest, or adventure.

The closest thing I had to a blog was a website at 3dspace.com. It was a free-range collection of thought pieces about the impending emergence of consumer-grade virtual reality. What I loved most about that site was how the companies developing that technology would link to my articles to help unpack ideas they were talking about elsewhere online. And thanks to my established SEO chops, the site held distinctions like being the #1 result on Google, for years, for the term “tool agnostic”. I felt so scholarly. I really liked it! But it was certainly not a blog, not in the classical sense.

I do not know if this will STAY a classical blog. But given that I am transitioning from not having a web presence, back to having a web presence—it seems like a good place to start.

Writing

I write. I love to write. I’m pretty good at it. And it helps me to think.

I’d even like it, if writing were to become a bigger part of my career.

Back when I had a raft of websites, I could shoot prospective associates links to things I’d written. Without any websites, I can’t do that. That may be the biggest reason I’m setting this up. Though the biggest reason may also be that I need to write. I write so that people can read. If I’m making nothing for people to read—there is something missing in my life.

I’m writing this retrospective post first. Just for me. A whole bunch of “backstory”, so I can be clear with the Universe about where I am, so I can set out from here. The plan is just to launch a platform where I can write stuff for people to read. On the web. Through a link. In the manner to which I have become accustomed.

Metaphysics and Machine Learning

My first idea about what to call this blog is Metaphysics and Machine Learning.

To anchor this, something I’d like to put on the table is the idea of a Markov chain.

A Markov chain is a mathematical system that experiences transitions from one state to another according to certain probabilistic rules. The defining characteristic of a Markov chain is that no matter how the process arrived at its present state, the possible future states are fixed. In other words, the probability of transitioning to any particular state is dependent solely on the current state and time elapsed.

https://brilliant.org/wiki/markov-chains/
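
To make that concrete, here is a minimal Python sketch of a Markov chain being stepped forward. The states and transition probabilities are invented for illustration; the point is that the next state is sampled using only the current state, with no memory of how the chain got there:

```python
import random

# Toy transition probabilities (invented for illustration): from each state,
# the chance of moving to each possible next state.
TRANSITIONS = {
    "building websites": {"building websites": 0.6, "studying ML": 0.3, "writing": 0.1},
    "studying ML":       {"studying ML": 0.5, "writing": 0.4, "building websites": 0.1},
    "writing":           {"writing": 0.7, "studying ML": 0.2, "building websites": 0.1},
}

def step(state: str) -> str:
    """Sample the next state using only the current state (the Markov property)."""
    next_states = list(TRANSITIONS[state].keys())
    weights = list(TRANSITIONS[state].values())
    return random.choices(next_states, weights=weights, k=1)[0]

state = "studying ML"
for _ in range(5):
    state = step(state)
    print(state)
```

However the chain arrived at “studying ML”, the odds of what comes next are exactly the same.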

How does this apply to me, and my first blog post of re-entry to the web?

It does not matter that I have been programming computers since the 1970s. It does not matter that I have been creating websites since the 1990s. It does not matter that I had 20 websites a year ago, and it does not matter that they are now gone.

All that matters is the chops I’ve got, and what I want to do with them. One of my chops is that I write. And I want to write. And that is precisely why I am now the proud owner of a blog with one (explicitly, glaringly anti-Markovian) post.

Welcome to Metaphysics and Machine Learning.

Transitioning to Another State

In keeping with the Markov chain perspective, how I wound up here is not a factor. I wound up here after graduating from a top-flight award-winning machine learning program straight into the heart of the COVID lockdown.

The factors are where I am, and what I have at my disposal.

I am—trying to find new people and new projects. I have at my disposal—a cool certificate, a whack of skills, and ambition. I need—a lot of things. As odd as it may seem, after all my years of being a website hoarder, a website is one of them.

A few times I’ve needed a link for something. A project, a blog post, something. Even just to demonstrate that I can, in fact, write.

The First Steps

This post is getting pretty long. So at this point I’ll simply document the first things I did, the first content I added, to get this brand-spanking-new web presence off the ground and start moving forward.

There are things that I wrote while I was in school that make excellent starter content for this blog. Especially if this blog is to wind up being called Metaphysics and Machine Learning. I’m liking that. This stuff fits. Like using a Markov chain as a lens for my attempts to enter a new career field.

First thing I did was to serialize my Machine Learning Ethics assignment into a thread of blog posts with its own category. This will give me a few of the things I need:

  • A sample of my writing that I can link to
  • SEO pushing petegray.ca towards my new interests
  • A starting point for the scope of the blog
  • An excuse to play with the newest WordPress.

Each mini-essay is written for a different hypothetical audience. Because some of them are quite technical, their Flesch reading-ease scores range from 22 to 64. This long, leisurely post gets a 76.
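
For the curious, the Flesch reading-ease score is just a formula over average sentence length and average syllables per word. Here is a rough Python sketch; the syllable counter is a deliberately naive vowel-group heuristic, so its numbers will drift from what real readability tools report:

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("I write. I love to write. It helps me to think."), 1))
```

Higher scores mean easier reading; the 60 to 70 range is roughly plain English.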

And just like that, I have transitioned to another state. I have a website again. I can link to something I’ve written.

Check out my 10 mini-essays on machine learning ethics!

Onward, to the next transition.

Metaphysics and Machine Learning Revisited

“Metaphysics and Machine Learning” was the quick-and-dirty provisional proto-title I fired into the WordPress setup while my mind was on the setup itself. It took two tries, as there was some junk in the directory. I’d left the title blank the first time, but filled that name in on the second try. The whole thing took less than five minutes.

I’d like to settle on that name. It suits the content so far, for sure. And it suits what little I know about where I’m going with this.

The big question looming over it was, is someone else using it? If they were, I’d, uh, find something else. So I checked that with the Google:

[Screenshot: no Google search results for “Metaphysics and Machine Learning”, Sept 21, 2020.]

Looks like it’s mine all mine.


Domain Names For Sale

As I mentioned in my previous post, I used to have a lot of websites. And I’m moving on from websites. So I’m selling a bunch of the domain names I registered in the ’90s.

I’ve sold a couple dozen over the years. I have a couple dozen left. There are only a few I have any reason to keep, like the one this blog is hosted at.

Please visit my seller’s page on dan.com for more information.