Like it or not, we're quickly heading toward a future where we live side by side with robots, whether as coworkers, virtual assistants, or the drivers of our cars.

And as the line between human intelligence and artificial intelligence blurs, most recently with Microsoft's accidentally racist Twitterbot Tay and Google's board game champion AlphaGo, questions have been raised about how to develop ethics in robots alongside making the technology smarter.

David Gunkel, a professor of communication at Northern Illinois University, dives headfirst into these ethically murky waters in his work. He’s the author of The Machine Question: Critical Perspectives on AI, Robots and Ethics, and studies the intersection of artificial intelligence and ethics. In this interview with Chicago Inno, he breaks down the recent incidents with Tay and AlphaGo, plus discusses how the biggest variable in the future of robots is humans.

David Gunkel

Okay, let’s get the obvious out of the way: are we headed toward a future of evil robot overlords? 

In a word, no…at least not the way we usually think about it. But we are, I would say, at the beginning of a robot invasion. This invasion might not look like your typical science fiction film, with robots descending on the planet from some alien and distant world. It is more of a slow and steady incursion–like the fall of Rome–where machines increasingly come to occupy influential positions in our everyday lives, like recommending films, caring for grandpa, or driving our automobiles. So we’re not talking “evil robot overlords” but something more mundane and less dramatic, like companions, assistants, and co-workers.

Let’s consider Tay, Microsoft’s bot. Should we have expected that neo-Nazi meltdown to happen? Why or why not?

It’s not at all surprising. Tay is a chatterbot designed to leverage machine learning technology. It therefore does not just follow preprogrammed instructions; its behavior is an emergent phenomenon that is shaped by actual interactions with human users. So it is not surprising that some users recognized and took advantage of this unique opportunity to teach the machine some bad behaviors.
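To make that point concrete, here is a deliberately simple sketch, and only a sketch: it is not Microsoft's actual system, just a toy bot whose entire "vocabulary" is harvested from what users type at it. Even at this scale, a handful of hostile users can dominate what it says back.

```python
import random
from collections import defaultdict

class ToyLearningBot:
    """A toy 'repeat what you hear' bot.

    It has no preprogrammed replies: everything it can say is learned
    from user messages, so its behavior is an emergent product of
    whoever chooses to talk to it.
    """

    def __init__(self):
        # keyword -> list of phrases heard in that context
        self.memory = defaultdict(list)

    def learn(self, user_message: str) -> None:
        for word in user_message.lower().split():
            self.memory[word].append(user_message)

    def reply(self, prompt: str) -> str:
        candidates = []
        for word in prompt.lower().split():
            candidates.extend(self.memory.get(word, []))
        if not candidates:
            return "Tell me more!"
        # The reply is sampled from whatever users have said before.
        return random.choice(candidates)

bot = ToyLearningBot()
bot.learn("robots are fascinating")
bot.learn("robots are terrible")   # a 'bad actor' contribution
print(bot.reply("what do you think about robots?"))
```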

What is surprising to me is how people responded to this event, especially Microsoft Research. Learning algorithms are intentionally designed to do things that the creators of the system cannot anticipate or control. So what is truly surprising is that we were somehow surprised by this. 

We are, I would say, at the beginning of a robot invasion. 

How much can robots be held accountable for their actions, given they’re the product of humans? 

This is, for me at least, the really interesting and important question. Usually when something goes wrong with a computer system we hold the designer responsible. This is what is called the “instrumental theory of technology,” and this understanding has served us very well. But it begins to fall apart with learning algorithms, like Tay. Clearly the engineers at Microsoft Research did not set out to design a racist Twitterbot. So who, we can ask, was responsible for the racist tweets?

Initially, Microsoft tried to blame the victim. The problem, a company spokesperson explained, was that some users decided to “abuse Tay’s commenting skills to have Tay respond in inappropriate ways.” Microsoft therefore initially blamed us—or some of us. Tay’s racism was our fault. A day later, Peter Lee, the VP at Microsoft Research, apologized for the “unintended offensive and hurtful tweets from Tay.” But this apology is also unsatisfying. Microsoft only took responsibility for not anticipating the bad outcome. The hate speech was still identified as Tay’s fault.

And since Tay is a kind of “minor”—a teenage girl AI—who is under the protection of her parent corporation, Microsoft stepped in, apologized for their “daughter’s” bad behavior, and put Tay in a time-out. Consequently, we not only have machines that can exceed the control of their designers and surprise us with unanticipated outcomes, but we have begun to accept the explanation that it was the computer’s fault. This changes everything.

Tay.ai (Credit: Twitter/Microsoft)

Can we teach Tay to understand racism and harassment? To go further, can we teach morals to artificial intelligence? Is it necessary?

Teaching an AI to understand racism and harassment is a tall order, mainly because the term “understanding” is already ambiguous, imprecise, and not entirely settled, even for us. What can be done is to train the algorithm to recognize and exclude certain patterns that would produce less than acceptable outcomes or to select with greater discretion what data sets are employed to do the training.
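The two mitigations Gunkel mentions, filtering unacceptable patterns and curating training data, might look something like the following minimal sketch. The pattern list and function names are hypothetical illustrations; a production system would rely on a trained toxicity classifier rather than a keyword blocklist.

```python
import re

# Hypothetical patterns a deployment team might flag; a real system
# would use a trained classifier, not a hand-written keyword list.
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in [r"\bhate\b", r"\bslur_example\b"]]

def is_acceptable(text: str) -> bool:
    """Crude output filter: reject text matching any blocked pattern."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

def curate_training_data(examples: list[str]) -> list[str]:
    """Data-side mitigation: drop unacceptable examples before training."""
    return [ex for ex in examples if is_acceptable(ex)]

raw = ["robots are great", "I hate everyone"]
print(curate_training_data(raw))   # -> ['robots are great']
```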

But the second part of your question is actually more interesting. We are currently developing machines that are increasingly autonomous in their decision making and actions. For this reason researchers now believe that it may be necessary to include something like a “moral sense” in the design of these systems. The argument is quite simple: As machines come to play an ever increasing and influential role in our everyday existence, we will need to count on them to make appropriate decisions. But formulating an “ethics for machines” is easier said than done, and many questions remain: Which moral theory or tradition do we operationalize? How do we make it computable? And how do you reward or punish a mechanism for getting it right or getting it wrong? This is a brand new area in moral philosophy.
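Gunkel's question "Which moral theory do we operationalize, and how do we make it computable?" can be illustrated with a toy sketch. The action names, harm and benefit scores, and functions below are all hypothetical; the point is only that a rule-based (deontological-style) constraint and an outcome-maximizing (consequentialist-style) choice are two quite different things to compute.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm: float      # estimated harm caused (hypothetical 0-1 scale)
    benefit: float   # estimated benefit produced

# Deontological-style rule: some actions are off-limits regardless of outcome.
def deontic_filter(actions, harm_threshold=0.5):
    return [a for a in actions if a.harm < harm_threshold]

# Consequentialist-style rule: pick whatever maximizes net expected good.
def consequentialist_choice(actions):
    return max(actions, key=lambda a: a.benefit - a.harm)

options = [Action("swerve", harm=0.2, benefit=0.6),
           Action("brake", harm=0.1, benefit=0.4),
           Action("accelerate", harm=0.8, benefit=0.9)]

permitted = deontic_filter(options)
print(consequentialist_choice(permitted).name)   # -> 'swerve'
```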

Let’s shift to the recent Go games between Google’s AlphaGo and Lee Se-dol, where the AI beat the top human player. Is it cause for concern? Or has Google just built an AI that’s really, really good at Go?

AlphaGo, like Tay, was designed with machine learning capabilities. The system, as Google DeepMind explains it, “combines Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.” In other words, AlphaGo does not play the game of Go by following a set of cleverly designed moves fed into it by human programmers. It is designed to formulate its own instructions. Consequently, the engineers who built AlphaGo have no idea what the system will do once it is in operation.
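For readers curious about the combination DeepMind describes, here is a bare-bones sketch of Monte-Carlo tree search guided by a "policy" and a "value" function. The two network stubs are random placeholders, not AlphaGo's trained networks, and sign alternation between players is omitted for brevity; the sketch only shows where learned components plug into the search.

```python
import math, random

# Stand-ins for the trained networks the DeepMind description names;
# here they are stubs just to show where learning plugs in.
def policy_prior(state, moves):
    return {m: 1.0 / len(moves) for m in moves}   # uniform "policy net"

def value_estimate(state):
    return random.uniform(-1, 1)                  # random "value net"

class Node:
    def __init__(self, state, prior=1.0):
        self.state, self.prior = state, prior
        self.children, self.visits, self.value_sum = {}, 0, 0.0

    def puct(self, parent_visits, c=1.5):
        q = self.value_sum / self.visits if self.visits else 0.0
        return q + c * self.prior * math.sqrt(parent_visits) / (1 + self.visits)

def mcts(root_state, legal_moves, apply_move, simulations=100):
    root = Node(root_state)
    for _ in range(simulations):
        node, path = root, [root]
        # Selection: walk down the tree by the PUCT score.
        while node.children:
            _, node = max(node.children.items(),
                          key=lambda kv: kv[1].puct(path[-1].visits))
            path.append(node)
        # Expansion: add children weighted by the policy prior.
        moves = legal_moves(node.state)
        if moves:
            priors = policy_prior(node.state, moves)
            for m in moves:
                node.children[m] = Node(apply_move(node.state, m), priors[m])
        # Evaluation: a value estimate instead of a full random rollout.
        v = value_estimate(node.state)
        # Backup: propagate the value along the visited path.
        for n in path:
            n.visits += 1
            n.value_sum += v
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Toy usage: a pile-of-stones game where a move removes 1 or 2 stones.
best = mcts(root_state=5,
            legal_moves=lambda s: [1, 2] if s > 0 else [],
            apply_move=lambda s, m: s - m)
print("suggested move:", best)
```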

This means that we now have computer systems that in one way or another have “a mind of their own.” Right now this capability is limited to the notoriously difficult game of Go, but soon systems like this will be everywhere and doing just about everything. So the time to start thinking about the potential consequences of machine learning is now…while it is all still just a game.

Is there anything you currently see in the tech world that makes you really nervous for our future with robots?

I am less nervous about the robots and more concerned about us. Let me explain by way of an example. It is now possible to create mechanisms that exhibit the external signs of feeling pain. Obviously one can and should ask “Is this really pain or just the simulation of pain?” The problem is that we cannot be entirely sure. How do I know whether anything other than myself—another human person, a dog, a squid, or a robot—actually experiences pain? And what do we mean by pain, in the first place? Is it just a response to adverse stimulus? Or does it require something more? The best we can do is observe the behavior of others and make a kind of educated guess based on what they do or do not do.  

The time to start thinking about the potential consequences of machine learning is now…while it is all still just a game. 

But this is where things get tricky. So let’s say we have a robot that pleads with you not to shut it off, because the experience is painful. If we try to convince ourselves that this behavior is just a simulated response and not real pain, this will require fighting against our natural inclination for empathy. But if we go the other direction and decide that the robot does in fact feel something, we risk ascribing emotional capability to an artifact that might not actually possess it. So the future will depend not only on engineering better and more capable mechanisms; it will also depend on us and how we respond in the face of these increasingly social devices.

What needs to be considered and built into the innovation process in order to make our relationship with robots symbiotic?

We not only need brilliant engineers trained in STEM (Science, Technology, Engineering and Mathematics); we also need to tap centuries of human understanding about and experience with the “human condition.” This means including the best intelligence from philosophy, literature, sociology, anthropology, music, etc.…all those so-called “soft disciplines” that help us formulate and make sense of what it means to be human. In effect, we need engineers who can appreciate the profound insight of Van Gogh and philosophers who know how to hack code. We are currently in the process of designing the future, and we should apply the best thinking available to this task.

Note: interview edited for length and clarity.

Image credit: Flickr/Stephen Chin CC BY 2.0