We are worried about irresponsible uses of AI
In less than five minutes of reading time, we’ll give you all the data and context you need to get up to speed on Ipsos’ latest wave of the Consumer Tracker.
See the full data and methodology.
Here’s what we know today from the Ipsos Consumer Tracker:
- 78% of older Americans say they are likely to get a new vaccine booster recently approved for those aged 65 or older who are already vaccinated.
- People are split on whether the potential benefits of AI, such as increased efficiency and productivity, will outweigh the potential job losses: 43% agree vs. 42% who disagree, which is in line with when we asked earlier this year.
- The number of people worried about the pandemic is lower than ever.
- 60% said they were likely to take advantage of an option for fewer boxes or fewer delivery trips, the highest number among the sustainability questions we asked.
Read on for data about AI regulation, sustainability, our worries, our hopes, and how much we care about the day’s news.
We are worried about irresponsible uses of AI
Why we asked: AI is coming at us fast. According to Bloomberg, “AI was uttered more than 200 times on earnings calls by Meta Platforms Inc., Alphabet Inc. and Microsoft Corp.” Meanwhile, a large group of AI researchers signed an open letter calling for developers to pump the brakes on releasing new AI tools. They think we have reasons to be worried that the development is moving too fast and could have serious consequences that we won’t be able to walk back. So are we freaked out?
What we found: Yup, for the most part we are. A majority of people are worried about every concern we asked about. We are worried that:
- AI tools will discriminate or show bias (57%) or even cause harm (65%)
- Outputs won’t be clear or easy to understand (60%)
- Algorithms will be opaque (59%)
- We’re especially worried about privacy concerns like having our data shared (72%) and about being able to reach a human when we want to (72%).
- Finally, a strong majority of roughly seven in ten are also worried that we won’t be able to tell whether something was made by AI or by a human (71%), and that more misinformation will spread online (70%).
This is not a rosy picture just yet, so perhaps Americans think we need some… oversight.
Nearly everyone thinks government should have some role in oversight of AI
Why we asked: In a previous wave (and again in this wave) we asked people how they felt in general about AI regulation and there seemed to be support. This wave we drilled into that further.
What we found: Only 13% think government should have no role at all, but there is division over how big that role should be: about half (49%) say a minor role, while 38% say a major role. There were party splits in that, of course: Democrats were much more likely (45% vs. 31%) to say the government should play a major role.
We want more gentle sticks than carrots with AI regulation
Why we asked: If there’s high support for regulation and many want government to take a major role, what exactly are people interested in having governments do? Once again, we seeded this question by asking ChatGPT for ideas on how government could regulate it and its brethren, which we then edited and added to.
What we found: A strong majority supported every one of ChatGPT’s ideas except one – tax incentives for companies that use AI responsibly. The most popular ideas tested were relatively light regulations about transparency.
“Guidelines that would require people to be notified when they are interacting with an AI system” was supported by 81% of Americans, followed by “requiring companies to disclose information about their AI systems, such as data sources, training processes, and algorithmic decision-making methods” at 77%. A potentially more cumbersome regulation, requiring developers to be licensed or certified, was still favored by three in four Americans (74%).
People aren't hopeful that polarization in the U.S. will subside
Why we asked: Hope (and its counterpart, fear) is an important driver of our behavior as consumers, citizens and patients. So we’re launching a new Consumer Tracker feature that will measure our hope across a number of different dimensions in coming waves. This wave we begin with some big geopolitical concerns.
What we found: There is strong bipartisan agreement on one thing: that political divisions will continue in the U.S. Only 40% overall are hopeful that we’ll see less polarization, with no differences by political affiliation. That’s some depressing meta-ness right there. There is a partisan split on the economic outlook, as Democrats are far more likely (60% vs. 40%) to think we’ll avoid a recession in the U.S. Only about half of us think the war in Ukraine will end in the next year, or that the rise of AI will happen in a positive way. And as we often see in data like this, we’re most hopeful about ourselves: 73% think their personal finances will improve, despite being less hopeful about many other aspects of the economy and the world around us.
The Care-o-Meter
Why we asked: Every day, we are bombarded with news and information. Some is of the utmost importance in the short or long term. Some is frivolous. Regardless, we make a million decisions in our own personal taxonomy of how much we are going to care about All the Things. So we decided to start measuring this with our new feature: The Care-o-Meter. This will take the pulse of the zeitgeist, and also of how much of a bubble we are in. It’s a simple pair of questions: how much do you know about a series of in-the-news events, and how much do you care about them?
What we found: A slight majority said they were familiar with many of the top stories of the week, including Fox News’ settlement with the election equipment maker Dominion (56%), and the leaking of top-secret information on social media chat platform Discord (50%).
We are far less aware of AI creating a song in the style of real performers Drake and The Weeknd (26%), or of Twitter taking away the blue checkmark icons that denoted verified users on the platform. The latter was alllllll anyone on Twitter was tweeting about, but only 43% of the general public said they knew about it.
We know people tend to overstate familiarity, and they can often overstate how much they care. And of course, you don’t need facts to have an opinion. But the discrepancies are really interesting. While only half knew about the Discord leaks, three in four cared. Conversely, even though 40% said they knew about the Twitter icons, only 22% said they cared.
Over time, tracking this Care/Know ratio should build into a fascinating dataset, and maybe we’ll trend some of the more ongoing topics, too.
A new datapoint in the post-pandemic world
Why we asked: Since Wave 4, we have been asking people to rate on a five-point scale how they feel about the pandemic, with 1 essentially meaning “it’s no big deal.”
What we found:
We’ve been staring at one datapoint for nearly three years now, waiting for it to drop like the final putt in Caddyshack. The mean has stubbornly failed to drop below 2.1 – until Wave 74! We finally hit 2.0. Let’s see if it finally drops into the 1s in future waves.
Signals
Here’s what we’re reading this week that has us thinking about the future.
- The shoplifting crisis is concentrated (via NYT)
- PimEyes sees dead people (via Wired)
- Ozempic and all the industries it could upend (via WSJ)
- Generative AI and health records (via Modern Healthcare)
For complete toplines for all waves, please see the full data and methodology.