Privacy laws are only part of what helps people trust AI

More and more Americans are bringing artificial intelligence into their daily lives.

The author(s)

  • Dan Delaney Senior Research Director, US, User Experience

This morning I had two AI experiences before I ate my cereal. First, I asked my voice assistant device for a weather forecast. Then, I asked my banking bot for the most recent charges to my credit card, because I still hadn’t found my misplaced wallet. It turned up in the car, but the bot kept me from stressing out. I’m not alone. More and more Americans are bringing artificial intelligence into their daily lives.

In the US, 58% of us access voice assistants through our smartphones, according to data released earlier this year by our data partner Statista. One in three US homes has at least one personal assistant device, such as an Amazon Echo or Google Home, according to the Consumer Technology Association. We are using chatbots to keep track of our credit cards and order pizzas. AI’s advance depends on an ever-increasing supply of personal data to fuel its growth and capabilities. That’s how your pizza bot can tempt you with Hawaiian toppings. However, legislation now in development could be on a collision course with AI’s evolution.

The California Consumer Privacy Act is set to take effect in January 2020. In its planned form, it would significantly limit how companies handle, store, and use consumer data, laying out an even tougher set of regulations than the European Union’s recently enacted General Data Protection Regulation (GDPR). Hawaii, Massachusetts, Washington, and Brazil are pursuing similar legislation. This has big consequences for AI.

Privacy + Trust = Relationship with AI

There will be a race to secure the permissions that these laws will require. But in speeding to address the letter of the new privacy law, there is a danger of thinking that protecting privacy is all that is needed to accept AI-informed technologies. Privacy is only one aspect of the larger issue of trust, which in turn is only one aspect of the larger issue of forming a relationship with an AI device.

Our user experience studies of AI-driven conversational interfaces (those you talk to via voice or text) illustrate how central relationship is to these devices. People who have found ways to incorporate these conversational interfaces into their lives begin to form relationships with them. They obviously know these agents are not human, but they begin to attribute human characteristics to them, which in turn strengthens acceptance and use.

For example, a young mother shared this observation: “I don’t know what I would do without her (Alexa). She’s really my assistant. She keeps my shopping list, reminds me about appointments and even keeps the kids entertained while I’m making dinner. She’s my partner. She’s invaluable.” Referring to a voice assistant as “she” and “her” is very common among customers who have embraced these systems. It’s an important step toward a relationship. The next step in strengthening a relationship is “getting to know each other” to personalize interactions in ways that make them both more enjoyable and more useful. This requires sharing a good deal of personal data that must be entrusted to the system.

Trust that technology will deliver

But trusting an AI system depends on more than knowing it safeguards personal data. Customers also need to trust the information, recommendations, and actions of the system. They need to know the system is acting in their best interests rather than catering to the interests of sponsoring companies. When intelligent agents suggest music, products, or news, customers need to be confident that the suggestions reflect what they want, not merely what the company wants to sell or promote. Friends help each other; they don’t exploit. Sharing data with someone you trust is never a problem. About half of adults surveyed (48%) agree that companies’ use of AI should be regulated more strictly than it currently is, according to a recent Ipsos poll published by the World Economic Forum.

Creating clear and informative permissions to address the requirements of the new data privacy laws will be a difficult and time-consuming process. But if we address only the letter of the privacy law, we will miss the opportunity to actually build trust. And building trust will require deep thinking about how to communicate the ways that all the actions of the AI system, including storing personal data, benefit the customer.

Dan Delaney, Ipsos Senior Research Director, User Experience, contributed to this post.

