10 Machine Learning Ethics Mini-Essays

8. Singularity.

How do we stay in control of a complex intelligent system?

Even simple systems, or systems built from simple components, are capable of getting out of control[12]. Security is a fascinating field, both in the real world and in the realm of computers. With so many computers and devices connected through the Internet, the very definition of “out of control” can change rapidly. People are always looking for new ways to subvert known control mechanisms and send things out of control, for fun or profit. But what would happen if an AI sought to subvert known control mechanisms? For insight into this, I’d like to look to someone who holds a very different view of “The Singularity” than I do. In A philosopher argues that an AI can’t be an artist[13], Sean Dorrance Kelly makes the case that “The Singularity”, the moment when AI becomes more intelligent than humans, will come fast and come hard. I think that all depends on your definition of “more intelligent”. Win at chess? Interpret X-rays? Predict lightning?[14] At tasks like these, we’re already falling behind the performance of our algorithmic creations.

Kelly argues that AI cannot be creative. It cannot dream up new approaches to things. It cannot be an artist. More than that, it cannot be creative in other domains either. He argues that AI will not push the boundaries of mathematics, because it cannot approach problems in ways it has not been programmed for, or has not observed. And this could be true. If it is, it puts an interesting perspective on an AI getting out of control. If an AI did want to get out of control, would it be able to concoct a new way to do that? Or would it study the history of systems that had gotten out of control, decide which approach gave it the best chance of success, and wait until conditions were favourable? How far out of control would it get before it was noticed? Before it was contained? Before it was brought back under control? Would future AIs learn from its successes? Probably.

Another possibility, suggested by Geoffrey Hinton and the people at the Vector Institute, is that not only will AIs be able to innovate, they will innovate in ways that are fundamentally different from the way humans innovate[15]. (We’ll look at this again in Question #9, Robot Rights.) Human innovation is the legacy of millions of years of preserving our physical selves and adapting to our physical environment for the sake of survival; computer systems have come to exist on very different fundamentals, so innovative new approaches to getting out of control may be possible. The techniques they cook up may easily escape the detection of security experts, and the whole world may become infected as though by some evil super-Stuxnet. Hopefully the AI industry keeps an eye on this, and hopefully the very well-established computer security industry stays a few steps ahead of it.

[12] https://en.wikipedia.org/wiki/Stuxnet

[13] https://www.technologyreview.com/s/612913/a-philosopher-argues-that-an-ai-can-never-be-an-artist/

[14] https://phys.org/news/2019-11-ai-lightning.html

[15] https://www.technologyreview.com/s/612898/ai-is-reinventing-the-way-we-invent/