How to Build Trust in Artificial Intelligence Solutions
A psychologist's perspective on trust-building in AI, and the mechanisms companies need to understand to meet the needs of their customers and users.
9 min read · Jun 24, 2020
I interviewed Marisa Tschopp, an organizational psychologist conducting research on Artificial Intelligence from a humanities perspective, with a focus on psychological and ethical questions. She is also a corporate researcher at scip AG, a technology and cybersecurity company based in Zurich, and the Women in AI Ambassador for Switzerland.
Please describe who you are in 2–3 sentences.
Currently, I am focusing on trust in AI, Autonomous Weapons Systems, and our AIQ project, which is a psychometric method to measure the skills of digital assistants (conversational AI), like Siri or Alexa.
So, obviously, I’m a researcher, but I’m also a mother of two toddlers, a wife, a daughter, a sister, a volleyball player, hopefully, a fun friend to be with, an activist, an idealist, a collaborator, and a semi-professional Sherpa (I love hiking in the Swiss Alps and therefore have to carry my kids on my back!).
Let us start with understanding trust better. What is trust and why is it important, especially in the context of AI?
In the context of AI, there is a critical underlying assumption: "no trust, no use". Since AI holds great promise (as well as dangers), tech companies and AI enthusiasts are especially concerned with how to build trust in AI in order to foster adoption and usage.
Trust seems like the lasting, kind of mysterious, competitive edge.
Without trust, there would be no family, no houses, no markets, no religion, no politics, no rocket science.
According to trust researcher
Trust is the social glue that enables humankind to progress through interaction with each other and…