Why responsible AI will unblock our worries

The future of AI depends on the decisions we make today. Ipsos’ FAST framework can guide brands to make ethical decisions as they develop AI tools, says Lorenzo Larini, CEO of Ipsos North America.
Download the full What the Future: Intelligence issue

For those deploying artificial intelligence tools, it’s important to keep the humans in mind. Ipsos developed the FAST framework (fair, accountable, secure and transparent) to guide ethical development. It’s grounded in Ipsos research showing what people want from AI: tools developed without bias, developers held accountable for their work, data and privacy protected, and clarity about when and how AI is being used.

Product testing will be essential. Failing to guard against the most significant sources of AI risk can pose an existential threat to organizations, says Lorenzo Larini, CEO of Ipsos North America.

“The FAST framework builds in precautions that will help people feel safe in using AI without sacrificing speed.”

Public perception is shifting quickly. Now is the time to act and measure, full stop.

What worries people most about AI


For further reading

Most Americans want tech companies to commit to AI safeguards

Ipsos Top Topics: Artificial Intelligence

AI threatens humanity’s future, 61% of Americans say: Reuters/Ipsos poll
