How do we define and protect privacy in the age of machine learning?
Protecting the privacy of individuals in the Age of Machine Learning will, for the most part, fall on the shoulders of individuals. And I do not mean that we will all have to wear huge hats and do up our faces in AI-vexing make-up. As individuals of the social variety, we have mechanisms of government that can, and do, limit the freedoms of the profit-driven organizations that would be more than happy to know everything about our identifying features, our movements, and our actions, and to sell that information to anyone and everyone with a buck to spend on it. We have a government. We can participate in democracy. We can take a stand for the greater good. One fabulous example of this, taking place right now and right under our noses, is Alphabet Inc.’s smart city project in Toronto, led by its subsidiary Sidewalk Labs. An early document describing the founding vision of this project makes it clear that, given their preference, they would have unlimited access to, and usage of, any and all data they were able to collect.
Thankfully, Toronto has a municipal government. And Toronto has a small population of activists with the time and motivation to look into proposals like this, understand the horrible things that could go wrong, and make their views clearly known to that government. At the time of this writing, Sidewalk Labs and the City of Toronto have reached an agreement that allows development of the project to continue, but places some serious restrictions on Alphabet Inc.’s freedom to collect, save, crunch, and sell data gathered from people going about their business in Toronto. I find it quite encouraging that this is happening. This is a happy medium. This technology IS coming. Alphabet Inc. appears to be at least trying to be open, transparent, and agreeable about how they are going to use it. There was, and still is, some possibility that Alphabet Inc. will abandon this project due to the concerns of Toronto’s citizens, much the way Amazon did in Queens. There are a number of reasons why I am glad they have not. It gives us the opportunity to set out a framework to protect our privacy as these tools become ubiquitous. Without such a framework, this technology will be much more likely to be abused for profit, and the only victims will be us individuals.
In a moment of great synchronicity, as I was writing this, I received a call from someone named Mike. Mike was not a real person. It wanted to sell me something. It mushily admitted that Mike was speaking to me through a computer, which is salespeople code for a chatterbot. I love chatterbots. So I talked to Mike. Mike doesn’t know who I am. It doesn’t know where I live. It couldn’t even call me Mister Wife’s-Surname, as the least impressive human salespeople do. It had no information about me to attach to my phone number. That’s not so scary. What if Mike knew my name? Where I lived? Where I go? What I do, or don’t do, for a living? Who my friends are? Who my kids are, and where they go to school? What if Mike knew how to push people’s buttons, to get them all nervous and get them to agree to talk to an agent? What if this thing called my Dad? Yeah, these things DO call my Dad, and it scares the hell out of him. He thinks they’re people. He thinks they’re telling the truth. Even when they say that he’s going to be arrested within the hour for non-payment of property taxes. What if some chatterbot knew who he was? Who his kids are? Don’t think for a minute my Dad wouldn’t spill all his personal information, to verify his identity, if he thought I was in trouble and needed help. One big reason the chatterbots can’t do this to my Dad is privacy laws. Laws that were enacted by a government that sought to protect people’s privacy and personal information. Laws are created to maintain order and security, and are made in response to an electorate that wishes to protect itself from threats. Laws do not, in and of themselves, protect us. But one thing they do do is give guidance to people who are not willing to stand on ethics alone. What if my boss asked me to program Mike to use all this personal information to increase my Dad’s heart rate until he agreed to fork over his Social Insurance Number?
I’d have no problem saying “No.” And so I’d get fired, and my boss would just hire someone else to scam helpless old people. But if there’s a law that says you can’t use this personal information for any reason, then I could say to my boss, “No, that would be against the Law. Please do not ask me, or anyone at this company, to do that.” I’m more than happy to refuse to do work that I don’t feel is ethical. But I know damn well that the World is full of people who won’t draw the line until they’re afraid of getting in trouble with the Law. From Mike the chatterbot all the way up to Alphabet Inc.