10 Machine Learning Ethics Mini-Essays

4. Artificial Stupidity.

How can we guard against mistakes?

To guard against mistakes, any system must have robust human oversight. That idea will be explored in Question #7. While guarding against mistakes is always critical, it is essential that we understand what Artificial Stupidity is, what conditions or processes lead to its creation, and how we can avoid wasting untold human potential upstream and downstream of its creation. Squandering resources on an Artificial Stupidity would be a mistake. Setting one loose and letting people live with the consequences would be another. In an interview about the book Rebooting AI[6], Gary Marcus says:

But right now AI is dangerous, and not in the way that Elon Musk is worried about. But in the way of job interview systems that discriminate against women no matter what the programmers do because the techniques that they use are too unsophisticated. I want us to have better AI. I don’t want us to have an AI winter where people realize this stuff doesn’t work and is dangerous, and they don’t do anything about it.

Gary Marcus

Marcus believes that Classical AI, which is more of a rules-based framework for building cognitive models, can play a role in transcending Artificial Stupidity. “The machine-learning stuff is pretty good at learning from data, but it’s very poor at representing the kind of abstraction that computer programs represent. Classical AI is pretty good at abstraction, but it all has to be hand-coded, and there is too much knowledge in the world to manually input everything. So it seems evident that what we want is some kind of synthesis that blends these approaches.” An AI system capable of recognizing when its own decisions were going off the rails because of a subtle shift in the data would still require human oversight, but it would require less intervention, and fewer mistakes would be made.
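What “noticing a subtle shift in the data” might look like in practice can be made concrete. Below is a minimal sketch of input-drift monitoring using a two-sample Kolmogorov–Smirnov test. It is an illustration, not anything Marcus proposes; the names (reference, live) and the alpha threshold are assumptions chosen for the example.

```python
# Minimal drift-monitoring sketch: compare the distribution of a feature
# (or of the model's confidence scores) collected at training time against
# the same quantity observed in production.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference, live, alpha=0.01):
    """Return True when a two-sample KS test suggests the distributions differ."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha  # small p-value: the live data has likely shifted

# Illustration: a modest shift in the mean is enough to trip the alarm.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.3, scale=1.0, size=5000)
print(drift_detected(reference, live))  # True -> flag for human review
```

A monitor like this does not make a system any smarter, but it is the kind of cheap, automatic check that tells the humans when it is time to step in.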

[6] https://www.technologyreview.com/s/614443/we-cant-trust-ai-systems-built-on-deep-learning-alone/


8. Singularity.

How do we stay in control of a complex intelligent system?

Even simple systems, or systems created with simple building blocks, are capable of getting out of control[12]. Security is a fascinating field, both in the real world and in the realm of computers. With so many computers and devices connected through the Internet, the very definition of “out of control” can change rapidly. People are always looking for new ways to subvert the known control mechanisms and send things out of control for fun or profit. But what would happen if an AI sought to subvert known control mechanisms? For insight into this, I’d like to look to someone who holds a very different view of “The Singularity” than I do. In A philosopher argues that an AI can’t be an artist[13], Sean Dorrance Kelly makes the case that “The Singularity”, or the moment when AI becomes more intelligent than humans, will come fast and come hard. I think that all depends on your definition of “more intelligent”. Win at chess? Interpret X-rays? Predict lightning?[14] For tasks like these, we’re already falling behind the performance of our algorithmic creations.

Kelly argues that AI cannot be creative. It cannot dream up new approaches to things. It cannot be an artist. More than that, it cannot be creative in other ways either. He argues that AI will not push the boundaries of mathematics, because it cannot approach problems in ways that it has not been programmed to, or has not observed. And this could be true. If it is true, it puts an interesting perspective on an AI getting out of control. If an AI did want to get out of control, would it be able to concoct a new way to do that? Or would it look at the history of systems that had gotten out of control, decide which approach gave it the best chance of success, and wait until conditions were favourable? How out of control would it be able to get before it was noticed? Before it was contained? Before it was brought under control? Would future AIs learn from its successes? Probably.

Another possibility, suggested by Geoffrey Hinton and the people at the Vector Institute, is that not only will AIs be able to innovate, they will be able to innovate in ways that are fundamentally different from the way humans innovate[15]. (We’ll look at this again in Question #9, Robot Rights.) Given that human innovation is the legacy of millions of years of preserving our physical manifestation and adapting to our physical environment for the sake of survival, and that computer systems have come to exist on such different fundamentals, innovative new approaches to getting out of control may be possible. The techniques they cook up may easily escape the detection of security experts, and the whole world may become infected as though by some evil super-Stuxnet. Hopefully the AI industry keeps an eye on this, and hopefully the very well-established computer security industry stays a few steps ahead of the AI industry.

[12] https://en.wikipedia.org/wiki/Stuxnet

[13] https://www.technologyreview.com/s/612913/a-philosopher-argues-that-an-ai-can-never-be-an-artist/

[14] https://phys.org/news/2019-11-ai-lightning.html

[15] https://www.technologyreview.com/s/612898/ai-is-reinventing-the-way-we-invent/