Generative AI and Audiences: Revisiting UK public attitudes to AI in media

Ipsos research for the BBC shows GenAI has moved from experimentation to everyday use

Generative AI has entered the mainstream in the UK. Our latest research for the BBC shows awareness is near universal (98%) and a majority (58%) have used GenAI tools, with one in three (35%) now using them weekly. Usage is rising fastest among older groups (a 250% increase among those aged 55+), and families are heavy adopters (74% have used GenAI tools).

Yet deeper use has brought deeper questions: people value AI’s convenience but worry about over-reliance, authenticity, and trust, especially in media.

Media matters because it shapes culture, identity and public understanding. In this context, audiences are cautious: 58% say AI in media makes them nervous, 70% prefer human-driven movies, and 78% prefer human-written online news. The emerging rule is clear: GenAI is welcome when it helps behind the scenes; it is resisted when it replaces human creativity, voice or editorial judgement.

The Impact on Media
The study suggests strong acceptance of AI for supportive audio tasks, such as AI-generated playlists, background music, and audio versions of written articles. However, there is clear resistance to AI-hosted podcasts, cloned presenter voices, and AI voice-overs for live sports commentary, where personality, spontaneity and emotion matter.

For visual and video content, there is growing comfort with AI-enhanced production (e.g., animation, VFX) as a natural evolution of creative tools. On the other hand, audiences pushed back against AI-written scripts and fully AI-generated shows, suggesting they believe storytelling requires lived experience and human perspective.

In news, tolerance remains narrow and functional. AI can assist with accessibility and headlines when a journalist leads the reporting. Importantly, audiences overwhelmingly reject AI writing original news or generating editorial imagery, citing misinformation risks and the erosion of trust.

A new benchmark for responsible use
In 2025, audiences apply three linked expectations:

Value: AI should be genuinely useful and enhance discovery, access or personalisation without diluting accuracy or authenticity.
Humanity: Protect the human elements of creativity, emotional nuance and judgement. Keep human oversight visible through bylines, editorial sign-off and clear responsibility.
Trust: Be transparent about where, why and how AI is used, and who is accountable. With half of UK adults believing AI will worsen online disinformation, verification matters; 43% say they are more likely to trust AI use in news if outputs are fact-checked. While 63% look to government and independent regulators for governance, they still hold media brands directly responsible.

Discover the full report here: https://www.bbc.co.uk/mediacentre/articles/an-update-on-ai-at-the-bbc 



About the research
Ipsos conducted a 10-minute online survey of 2,000 UK adults aged 16–75 (April–May 2025), alongside six in-depth online workshops with a cross-section of participants across the UK. The findings track shifting behaviours and expectations and outline where audiences now draw the line on AI in media.
 
