The future of AI in public services
AI has the potential to automate repetitive tasks and enhance efficiency in our public services. Using insights from Ipsos research and an interview with Dr. Jonathan Bright from The Alan Turing Institute, Daniel Cameron and Reema Patel identify different types of AI technology and their applications in the public sector. However, they also explore the challenges such as bias, lack of transparency, data privacy, and the broader social impacts facing the use of AI in this capacity.

Reema Patel, Research Director, Head of Deliberative Engagement
Daniel Cameron, Research Director
Interview with Dr Jonathan Bright, Head of AI for Public Services at The Alan Turing Institute
Since the launch of ChatGPT in late 2022, the potential for generative AI to fundamentally change society has rapidly risen in the public consciousness and in policy debates. Although the underlying algorithmic technologies have existed for a decade, many people only recently realised the profound impact AI can have. The technology continues to develop at pace, and we can expect further enhanced functionality in the coming years1.
When considering the future for public services, it is crucial to examine the role of AI in shaping policy, service design and delivery. The versatile nature of AI technologies and their rapid development make it challenging to offer firm predictions about their impact on public services. The speed of this revolution is also a subject of debate among experts2, though they generally agree that AI will ultimately transform our lives and work in ways that are currently unimaginable3.
To explore the future of AI and public services, this article brings together insights from Ipsos’ understanding of the public sector, our research with the public and experts exploring perceptions of AI, and our interview with Dr Jonathan Bright, Head of AI for Public Services at The Alan Turing Institute.
At Ipsos, we have another source of insight to draw on: our generative AI tool Ipsos Facto. This gives us walled-off access to large language models (LLMs) like ChatGPT. We have used Ipsos Facto to provide a different perspective on AI in public services, as demonstrated below4. It does a good job of outlining the key points – although human contributions have played a crucial role in shaping this piece as well!
What’s the potential for AI in public services?
Dr Bright described three broad types of AI technologies in his interview with Reema Patel, Head of Deliberative Engagement at Ipsos. These are summarised below, with examples of how they have been or could be used in the public sector:
- Perceptive AI is capable of interpreting sensory data, such as images, sounds, and text. In essence, it replicates human senses in a form that machines can process. Examples include facial recognition technology used by the police to identify suspects, and the analysis of medical images to detect diseases.
- Predictive AI uses historical data to make predictions about the future. It leverages machine learning algorithms to identify patterns and trends that can help anticipate future outcomes. This could be used by regulators to prioritise inspection resources, with the FSA already developing a proof-of-concept tool for this purpose5. Similarly, predictive AI can be used to predict disease outbreaks or patient readmissions.
- Generative AI can create new content or outputs based on learned patterns. It learns the structure and characteristics of its input data and can generate similar output. AI could write summaries of interactions with public services to be checked by humans. But it could also create public information campaigns tailored to specific demographics – generating emails or social media posts to inform the public about a new policy or initiative.
There are already various examples of AI technology being applied or explored in different public sector contexts. In the immediate future, the focus will be on better embedding these technologies.
Ipsos Facto agrees with Dr Bright that the main opportunity for public services is AI automating repetitive tasks. This includes administrative tasks that currently consume a significant amount of time for trained professionals. AI has the potential to greatly enhance efficiency, freeing public sector professionals to dedicate more time to complex or value-added tasks, ultimately making a positive impact on people’s lives.
"One of the most significant opportunities for AI in public services is the ability to automate repetitive tasks. For instance, AI can be used in the healthcare sector to automate administrative tasks such as appointment scheduling, freeing up staff time for more critical tasks.”
- Ipsos Facto
"I think the technology should focus on automating really easy tasks. Removing high workload, high intensity but low intelligence tasks that people have to do on a repetitive basis – and we should leave the humans doing the more complicated tasks where human intelligence is required.”
- Dr Jonathan Bright, Alan Turing Institute
What are the risks for public services – and the public?
There are several challenges that need to be addressed before AI can be effectively and widely used in public services. Ipsos Facto has highlighted some of the main risks that experts agree on, including bias, lack of transparency, data privacy and the broader societal impact.
"The inherent bias in AI algorithms is a significant risk.”
- Ipsos Facto
Bias is a well-known issue for AI6 and can be introduced in various ways. If AI systems are trained on biased data, the system will learn and reproduce that bias in its decisions. The algorithms used to process data and make decisions can also introduce bias when they oversimplify complex issues or undervalue certain features or characteristics. However, addressing bias in public services requires considering not only the bias within AI systems but also the role of human and systemic biases in shaping how this technology is used and its impact7.
"The lack of transparency in AI decision-making processes is a concern.”
- Ipsos Facto
Decisions made by public services can significantly impact people’s lives, often without alternative options. Therefore, it’s crucial to establish methods to explain and communicate these decisions so they are understood and may be challenged. This is especially important before public services can heavily rely on automated decision-making.
"The use of AI in public services raises serious concerns about privacy and data security.”
- Ipsos Facto
Data plays a vital role in training AI systems and generating outputs that are valuable for public services such as summaries, predictions, decisions, or new content. It is essential to ensure appropriate protection for personal data, a current requirement for public services that will only increase in importance as AI becomes more integrated.
"The adoption of AI in public services also poses a risk of job displacement.”
- Ipsos Facto
There are a range of social risks associated with AI, such as the impact on jobs highlighted by Ipsos Facto8. These risks are currently emerging or unknown, and there will undoubtedly be unintended consequences in public services, both positive and negative.
These and other risks are examined in more detail in our recent report, which describes leading experts’ perspectives on the current landscape and future challenges of responsible AI9.
What do the public think?
Public understanding and acceptability are key challenges for embedding AI in public services. A new Ipsos survey finds that only a minority of UK adults feel they know much about AI, but fears about its impact are widespread. A quarter of the public (25%) say they know at least a fair amount about AI – and these respondents are about twice as likely to be men as women – while a much larger proportion know only a little about it10.
Context also matters in shaping people’s comfort with AI. For tasks that humans struggle with, or where there are clear benefits – such as improving traffic flow or early disease detection through wearables – people tend to be more comfortable with AI. Identifying security threats and tailoring learning materials to individual students are also seen as positive use cases. However, opinions are more negative for certain public sector uses of AI, including treatment recommendations, exams and assessments, and decisions related to welfare support.
There is understandable caution about AI and an appetite for greater regulation. Most people (60%) believe that the UK government is not doing enough to regulate the development and use of AI; only one in six believe it is doing enough or too much. Similarly, a majority feel that international governments and technology/media companies are not doing enough to regulate AI (64% and 57% respectively).
What should policymakers do?
So, AI has huge potential to improve public service design and delivery. But given the low public understanding and varying comfort levels with AI technologies, how can the public sector make the most of AI?
Below we set out five key considerations for policy makers and public services:
- Identify areas where AI can add value with minimal risks. There are some areas where – with the right safeguards – AI can add significant value without the potential for significant negative impacts. AI excels at tasks such as note-taking, summarising documents, and generating initial ideas for policy or communications. By delegating these time-consuming tasks to AI, humans can focus on adding creativity and insight, and making higher-level connections that AI is currently unable to make.
- Ensure public acceptability and fairness at the outset by addressing concerns and being transparent about AI use. Policymakers and regulators should engage with those affected by proposed AI use from the outset to understand the potential impact. It is important to clearly explain the benefits and address perceived risks. This should be followed by ongoing monitoring to address any issues that arise.
“You can have your own view on whether these technologies are good or bad… but the key point is that once the public have decided they don’t want this, good luck making that technology happen.”
- Dr Jonathan Bright, Alan Turing Institute
- Be aware of bias in AI and take steps to understand and address it. This includes recognising the potential for different kinds of bias within AI, including training data, algorithms, and humans using AI technologies. While AI has the potential to mitigate bias, it can also amplify it. Therefore, public services should proceed with care and ensure that bias is properly managed.
- Do not rely solely on AI to take decisions, given the challenges related to bias and transparency. While AI can offer ideas and suggestions based on patterns, the involvement of humans is crucial in the decision-making process.
“I think we need more sophisticated ways of thinking about how humans are really going to be in the loop… we need better and more systematic procedures for having oversight of a given algorithm… being able to tell on an ongoing basis is it still working as well as we thought it was, being conscious of the weaknesses.”
- Dr Jonathan Bright, Alan Turing Institute
- Consider the readiness of your organisation or stakeholders. This includes ensuring that the data environment is conducive to the use of AI, that the workforce have the necessary skills, and that relevant institutions and stakeholders are on board to ensure the successful implementation and adoption of AI systems.
The pace of change and potential benefits of AI technology mean the public sector will need to embrace it, with careful consideration of the risks and impacts. Used in a responsible way, AI can bring huge benefits to both the public and those working in public services.
"I think AI has the power to do a lot of good… but it needs to be done really cautiously. Having mechanisms to say exactly how well a given algorithm is working – what are its strengths and weaknesses – is really important.”
- Dr Jonathan Bright, Alan Turing Institute
Reflecting the future that is coming, the final word goes to Ipsos Facto:
"In conclusion, AI can bring considerable value to UK public services, improving efficiency and service user experience. However, it is essential to adopt a cautious approach, ensuring that the benefits do not come at the expense of privacy, fairness, transparency, and job security. The adoption of AI in public services is not merely a technological change but a societal one, requiring careful consideration and management.”
- Ipsos Facto
References
1 World Economic Forum (2023). Here's how experts see AI developing over the coming years
3 OECD. Artificial intelligence
4 The following prompt was used with a walled-off version of OpenAI GPT-4: “A research company wants to write a thought piece on the use of AI in UK public services. Write a 1000-word article that sets out the key opportunities and risks for UK public services when adopting AI. Consider the use of AI in all aspects of developing policy and delivering public services, including specific examples of where AI might bring most efficiency, value and improvements to service user experience.” Short quotes from its response have been used in this article unedited.
6 Ipsos (2023). Global Views on AI 2023
8 Ipsos (2023). Global Views on AI 2023