AI is no longer the future—it’s now here in our living rooms and cars and, often, our pockets. As the technology has continued to expand its role in our lives, an important question has emerged: What level of trust can—and should—we place in these AI systems?
To explore this question, we spoke to 30 AI scientists and leading thinkers. They told us that building trust in AI will require a significant effort: instilling a sense of morality in these systems, making them operate in full transparency and educating businesses and consumers about the opportunities AI will create. And, they say, this effort must be collaborative across scientific disciplines, industries and government.
Instilling human values in AI
As AI becomes more pervasive, so too does the concern over how we can trust that it reflects human values. A frequently cited example of how difficult this can be is the moral decision an autonomous car might have to make to avoid a collision: a bus is bearing down on the car, and the driver will be seriously injured unless it swerves; but if it swerves left it will hit a baby, and if it swerves right it will hit an elderly person. What should the autonomous car do?
“Without proper care in programming AI systems, you could potentially have the bias of the programmer play a part in determining outcomes. We have to develop frameworks for thinking about these types of issues. It is a very, very complicated topic, one that we’re starting to address in partnership with other technology organizations,” says Arvind Krishna, Senior Vice President of Hybrid Cloud and Director of IBM Research, referring to the Partnership on AI formed by IBM and several other tech giants.
There have already been several high-profile instances of machines demonstrating bias. AI technicians have experienced first-hand how this can erode trust in AI systems, and they’re making some progress toward identifying and mitigating the origins of bias.
“Machines get biased because the training data they’re fed may not be fully representative of what you’re trying to teach them,” says IBM Chief Science Officer for Cognitive Computing Guru Banavar. “And it could be not only unintentional bias due to a lack of care in picking the right training dataset, but also an intentional one caused by a malicious attacker who hacks into the training dataset that somebody’s building just to make it biased.”
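To make Banavar’s point concrete, here is a minimal sketch, not drawn from IBM’s work, of how an unrepresentative training set plays out: a toy classifier is trained on data dominated by one group and then evaluated on each group separately. The groups, features and scikit-learn model are illustrative assumptions, not anything described by the researchers.

```python
# A minimal, hypothetical illustration: train on data dominated by group A,
# then measure accuracy separately for groups A and B. The under-represented
# group ends up with noticeably worse accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature data; 'shift' changes the true decision rule per group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets for each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```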
“AI can be used for social good. But it can also be used for other types of social impact in which one man's good is another man's evil. We must remain aware of that.”
— James Hendler, Director of the Institute for Data Exploration and Applications, Rensselaer Polytechnic Institute
As Gabi Zijderveld, Affectiva’s head of product strategy and marketing, explains, preventing bias in datasets is largely a manual effort. Her organization, which uses facial recognition to measure consumer responses to marketing materials, selects a culturally diverse set of images from more than 75 countries to train its AI system to recognize emotion in faces. While emotional expressions are largely universal, they do sometimes vary across cultures: a smile that appears less pronounced in one culture, for example, might convey the same level of happiness as a broader smile in another. The team also labels each image with its corresponding emotion by hand and tests every algorithm to verify its accuracy.
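In practice, the kind of verification Zijderveld describes often comes down to checking accuracy for each group rather than only in aggregate. The sketch below is a hypothetical illustration, not Affectiva’s pipeline: it assumes a hand-labeled test set tagged by country and simply reports per-country accuracy so that a gap for any culture is visible rather than averaged away.

```python
# Hypothetical per-group accuracy check for an emotion classifier.
from collections import defaultdict

def accuracy_by_country(examples, predict):
    """examples: iterable of (features, true_label, country); predict: assumed model function."""
    correct, total = defaultdict(int), defaultdict(int)
    for features, true_label, country in examples:
        total[country] += 1
        if predict(features) == true_label:
            correct[country] += 1
    return {country: correct[country] / total[country] for country in total}

# Usage with a hypothetical model and hand-labeled test set:
# for country, acc in sorted(accuracy_by_country(test_set, model.predict_one).items(),
#                            key=lambda kv: kv[1]):
#     print(f"{country}: {acc:.1%}")
```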
To further complicate efforts to instill morality in AI systems, there is no universally accepted ethical system for AI. “It begs the question, ‘whose values do we use?’” says IBM Chief Watson Scientist Grady Booch. “I think today, the AI community at large has a self-selecting bias simply because the people who are building such systems are still largely white, young and male. I think there is a recognition that we need to get beyond it, but the reality is that we haven’t necessarily done so yet.”
And perhaps the value system for a computer should be altogether different from that of humans, posits David Konopnicki, an IBM Research manager working in affective computing. “When we interact with people, the ethics of interaction are usually clear. For example, when you go to a store you often have a salesman that is trying to convince you to buy something by playing on your emotions. We often accept that from a social point of view—it’s been happening for thousands of years. The question is, what happens when the salesman is a computer? What people find appropriate or not from a computer might be different than what people are going to accept from a human.”
“What does the notion of ethics mean for a machine that does not care whether it or those around it continue to exist, that cannot feel, that cannot suffer, that does not know what fundamental rights are?”
— Vijay Saraswat, Chief Scientist for IBM Compliance Solutions
Taking it a step further, IBM Chief Scientist for Compliance Solutions Vijay Saraswat points out that even if machines get equipped with a clearly defined, universally accepted value system, their inability to truly feel consequences on an emotional level like a human could make them imperfect ethical actors.
“Artificial intelligence will be a different form of thinking—a different form of being, existing,” says Saraswat. “In the ethical space, what does the notion of ethics mean for a machine that does not care whether it or those around it continue to exist, that cannot feel, that cannot suffer, that does not know what fundamental rights are? There's no inherent, inalienable notion of this kind built into the very fiber of our computational systems, so how would they then have a sense of ethical behaviors when the basics on which human morality is built are missing?”
AI scientists are experimenting with a few techniques to instill ethical principles in AI systems. One that shows some promise is inverse reinforcement learning. As IBM Distinguished Researcher Murray Campbell explains, this method of training lets the system observe how people behave in various situations and work out what they actually value, allowing it to make decisions consistent with our underlying ethical principles.
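A minimal sketch of the idea, not Campbell’s own system, looks like this: in a simplified one-step setting, an observer watches repeated human choices among options described by features, then fits reward weights whose induced (softmax) behavior matches those choices. The options, features and fitting procedure are illustrative assumptions.

```python
# Toy inverse reinforcement learning: infer the reward weights behind observed choices.
import numpy as np

rng = np.random.default_rng(1)

# Three options, each described by two features (e.g. "helps others", "is fast").
features = np.array([[1.0, 0.2],
                     [0.1, 1.0],
                     [0.5, 0.5]])
true_weights = np.array([2.0, 0.5])                   # hidden human values
probs = np.exp(features @ true_weights)
probs /= probs.sum()
demos = rng.choice(len(features), size=500, p=probs)  # observed human choices

# Fit weights whose softmax choice distribution matches the demonstrations.
emp = np.bincount(demos, minlength=len(features)) / len(demos)
w = np.zeros(2)
for _ in range(2000):
    p = np.exp(features @ w)
    p /= p.sum()
    w += 0.1 * (features.T @ (emp - p))               # gradient of the log-likelihood

print("recovered weights:", w)                        # roughly recovers the hidden values
```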
Creating transparency
AI thought leaders also note that transparency is key. To trust computer decisions, ethical or otherwise, people need to know how an AI system arrives at its conclusions and recommendations. Right now, deep learning does poorly in this regard, but some AI systems can serve up the passages from text documents in their knowledge bases from which they drew their conclusions. The AI experts agree, however, that this is simply not enough.
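One concrete form this transparency can take, sketched below with a hypothetical corpus and a crude relevance score standing in for a real retriever, is to return an answer together with the passages it was drawn from so a person can inspect the evidence.

```python
# Hypothetical sketch: return every answer with its supporting passages.
def with_evidence(question, answer, passages, score, k=3):
    """passages: list of (doc_id, text); score(question, text) -> relevance (assumed)."""
    ranked = sorted(passages, key=lambda p: score(question, p[1]), reverse=True)
    return {"question": question,
            "answer": answer,
            "evidence": [{"doc": d, "passage": t} for d, t in ranked[:k]]}

# Toy usage: word overlap stands in for a real relevance model.
def overlap(question, text):
    return len(set(question.lower().split()) & set(text.lower().split()))

docs = [("policy.txt", "Refunds are issued within 30 days of purchase."),
        ("faq.txt", "Shipping takes five business days.")]
print(with_evidence("When are refunds issued?", "Within 30 days.", docs, overlap, k=1))
```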
“We will get to a point, likely within the next five years, when an AI system can better explain why it's telling you to do what it's recommending,” says Rachel Bellamy, IBM Research Manager for human-agent collaboration. “We need this in all areas in which AI will be used, and particularly in business. At that point, we'll gain a more significant level of trust in the technology.”
AI application developers must also be transparent about what the system is doing as it interacts with us. Is it gathering information about us from various places? Is it “looking” at our faces through a web camera to read our expressions? And, the experts say, people should have the ability to turn some of these functions off whenever they like.
“A similar parallel right now is how willing people are to share their location information with an app. In some cases it has a clear benefit, while in others, they may not want to share because the benefit isn’t significant enough for them,” says Jay Turcot, Head Scientist and Director of Applied AI at Affectiva. “The key is transparency on how information is used and providing user control—I think that model will be a good one moving forward. From a privacy point of view, I think it will always be a tradeoff between utility and privacy and that each user should be able to make that choice for themselves.”
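One plausible shape for that kind of control, sketched here with hypothetical setting names rather than any particular product’s API, is a set of per-capability consent toggles that default to off and are checked before any data is gathered.

```python
# Hypothetical consent toggles: every capability defaults to off and is
# checked at the moment of collection, so users can change their minds anytime.
from dataclasses import dataclass

@dataclass
class ConsentSettings:
    share_location: bool = False
    camera_expressions: bool = False
    cross_app_profile: bool = False

def can_collect(settings: ConsentSettings, capability: str) -> bool:
    return getattr(settings, capability, False)

settings = ConsentSettings(camera_expressions=True)
if can_collect(settings, "camera_expressions"):
    pass  # e.g. analyze facial expressions for this session only
```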
Transparency through education
Another effective way to provide transparency is through education. Misconceptions about what AI can and can’t do abound, eroding trust in the abilities these systems do possess. And, perhaps most significantly, a lack of clarity over which jobs AI might affect breeds an additional level of distrust in the technology. AI thought leaders universally agree that it’s mission-critical to educate people about where disruption might occur and to teach the skills that will be needed to perform the new jobs AI will create.
In most job areas, the experts contend that human input will still be required and will often provide the most valuable component. “There are certain types of advising questions we don't need to spend precious faculty time on,” says Satinder Singh, Director of the University of Michigan’s Artificial Intelligence Lab. “We can handle them in an automated system and reserve the faculty time for the really challenging psychological questions, for example, where somebody's having trouble with sleep or health or mental health or whatever is causing them difficulty in a university setting.”
Where jobs do experience disruption, many technology veterans say history shows how technology breeds new, previously unimaginable jobs. “My optimism on jobs rests in history. Things incrementally improve by 1% a year, which is almost invisible except as it accumulates and is looked at retrospectively,” says Kevin Kelly, co-founder of Wired and author of the best-seller The Inevitable. “In the past we have undergone huge disruptions and revolutions in livelihood, the last one being the agricultural age where many farmers lost their jobs. The jobs that replaced them were unimaginable to those farmers 150 years ago—their descendants became web designers, mortgage brokers.”
“There’s no silver bullet,” says Kelly. “But I think it’s also important to remember that this is not a technical issue. We know how to retrain people en masse. The market cannot do it alone. It needs government, too. And we should start teaching it in our schools—the essential techno-literacy skills of learning how to learn, learning how to relearn and becoming a lifelong learner.”
“My optimism on jobs rests in history.”
— Kevin Kelly, co-founder of Wired and author of the best-seller The Inevitable
Collaborating for responsible advancement of AI
The societal benefits and implications of AI are enormous. Developing and deploying the technology responsibly requires an effort of equal proportion, which means one organization, or even a handful, working toward that end is not enough. Significant collaboration within and across academia, industry and government is required to meet the challenge and ultimately convince consumers that their best interests are truly being served.
Because AI both extends human capabilities and often derives its very architecture from the structure of the human brain, simply advancing its usefulness requires a multidisciplinary scientific effort. “We need to approach AI in a multidisciplinary way because the brain itself is a bundle of interdependent elements which support thinking and behavior that’s describable on many different levels. To accelerate our understanding of human cognition, neuroscientists, linguists, psychologists, philosophers, anthropologists, deep learning experts and others need to come together,” says Margaret Boden, founder of the University of Sussex cognitive science department and advisor at the Leverhulme Centre for the Future of Intelligence.
“We need to approach AI in a multidisciplinary way because the brain itself is a bundle of interdependent elements.”
— Margaret Boden, Cognitive Science Research Professor, University of Sussex
Affectiva’s Zijderveld believes bringing AI to its most useful state will require technologists to come together to establish common standards and platforms for AI. “As a consumer I’m not going to want to have to jump through hoops to get my phone to talk to my car, right? These are systems that should all be able to talk to each other, from the enterprise applications that we use in business to our mobile devices. Yet there are no standards for interoperability. I believe a consortium of industry will have to come together to solve this, cross-verticals and cross-use cases.”
Even as we advance the development of transparent, ethical AI, there’s still one necessary ingredient for trust that can’t be manufactured—time.
“How do we get to AI systems that we can trust? Ultimately, it's going to be through getting our AI systems to interact in the world in a shared context with humans, side-by-side as our assistants,” says IBM’s Saraswat. “And over time, we’ll develop a sense for how an AI system will operate in different contexts. If they can operate reliably and predictably, we’ll start to trust AI the way we trust humans.”