Trust: a crucial line of code when applying Artificial Intelligence to Healthcare
Trust and reputation are essential considerations for companies seeking to deliver AI technology in healthcare. It is vital to demonstrate transparency and to engage meaningfully with the influencers who help shape public opinion, as well as with the stakeholders who are instrumental in determining the regulatory response to new uses of personal data.
Artificial Intelligence (AI) is one of the most exciting and promising technologies of our time. Although it has been around for decades, recent developments point towards radical breakthroughs across a wide range of applications. From transport and logistics to security and healthcare, these algorithms have the potential to come up with solutions that often lie well beyond the human brain's capacity.
The case of healthcare is a particularly interesting one. Those involved in the industry have to balance moral, societal and economic challenges in a race to treat, and ideally cure, complex diseases. A major obstacle is that often not even world-class scientists with access to large pools of resources are able to find the answers. It is in these situations that Artificial Intelligence, with its capacity to find patterns in vast amounts of patient data, provides unique opportunities.
However, the risks are equally unique: fear of the unknown, breaches of privacy and data security, and the loss of human control to machines are only some of the challenges.
An increasing number of players are trying to crack the code of disease using AI as a tool. Google's London-based AI company, DeepMind, has set up a healthcare branch that works closely with the NHS. In July 2017, GSK unveiled a new $43 million deal to enhance the drug discovery process, and others such as Merck & Co., Johnson & Johnson and Sanofi are also investing large sums in the technology to streamline drug discovery.
As a company wanting to explore this route, you need the support of your stakeholders and public opinion in your favour. Trust becomes an essential line of code in your AI algorithm.
In fact, if you are not trusted to have the patient's interests at heart, to treat highly confidential data securely, to protect privacy, and to demonstrate a commitment to applying the technology ethically, chances are you will find a bumpy road ahead: increased regulation, greater scrutiny, and even challenges to your licence to operate are likely to follow.
Ipsos Global Trends data on trust in pharmaceutical companies and public sector healthcare providers to use personal information suggests there is a lot to be done to achieve this. Looking at current levels of trust in the types of organisation actively involved in applying AI to healthcare (public sector providers, global pharmaceutical companies and telecommunications companies), we find that, globally, two in five people (39%) distrust public sector healthcare providers to use the information held about them. This rises to almost half (46%) for pharmaceutical companies, and to half (50%) for telecommunications companies.
To get ahead with AI and deliver successful applications of this technology in healthcare, getting the public onside is vital.
Equally important are the hearts and minds of the influencers who help shape public opinion and the stakeholder groups that will be instrumental in setting the agenda for the regulatory response to new developments involving personal data. Companies intending to deliver AI-led innovations in healthcare will need to demonstrate transparency and engage meaningfully with the public and stakeholders on the opportunities and risks that come with this technology.
In this context, trust and a solid reputation are essential parts of the equation. Understanding, developing and maintaining their reputations will be key for companies in this space, whether pharma or tech, and will help them open doors with the stakeholders whose support they need to take advantage of AI.