Artificial intelligence: Four points of vigilance

This article on AI by Helen Zeitoun, CEO of Ipsos's Science Organisation and CEO of Ipsos in France, was originally published in French in Stratégies, March 2019.

These days, it would probably be impossible to enact any lasting transformation of a company without having done some serious thinking about artificial intelligence and its applications. Both the "explainability" of AI - how humans can come to understand and trust it - and how it can be deployed are key factors in navigating the changes underway. This is also the case in the market research industry, where I'm fortunate enough to head up Ipsos in France.

Nevertheless, we need to be clear that AI is simply a tool. The question we face is: How do we go about reinventing our profession and improving the accuracy of business decisions in a world undergoing radical change? We believe this involves a new triangular paradigm involving technology, sciences and know-how, all of which are "data agnostic". Viewed through this lens, AI can be seen primarily as a matter of human, organisational and cultural change.

So why are so many professional transformation projects reduced to a simple "investment in AI", with little thought given to what this truly means?

Yes, artificial intelligence is now all around us, and since it does indeed raise questions of a social, ethical and legal nature, plans for company transformation are often challenged by reference to the myth of "technological singularity" (machines replacing people). The opposing thesis, which places its hope in AI as the salvation of humankind at the dawn of the fourth industrial revolution, is meanwhile set aside.

In short, there can be no ignoring the fact that, as AI accelerates change, it both frightens people and fires their imaginations at the same time. Moving beyond these clichés, surely it is now time to reflect upon the true responsibility of companies when it comes to their work with AI, as well as the media's responsibility for how they portray it. The complaints or concerns that we hear in opposition to AI tend to coalesce around four key issues: individual freedoms, employment, social cohesion and human intelligence. Here, we address each by recommending "points of responsible vigilance" for companies and the media:

1- AI and individual freedoms

Eric von Hippel, Professor of Technological Innovation at the Massachusetts Institute of Technology (MIT), demystifies artificial intelligence as follows: “AI in its current state is nothing but a tool. And like any tool, it’s the way it is used that is decisive. Take a knife for example: it’s a tool that can also be used as a weapon.”

China is developing artificial intelligence to monitor and control its citizens, a move which has been denounced by AI researcher Yoshua Bengio. The Chinese system aggregates a variety of data on each individual and attributes to that person a “social score.” Having a bad score means you can’t travel by air, or apply for certain jobs…

In this respect, scientist Joël de Rosnay thinks that “what is being done in China also represents a threat for western democracies that might be tempted by these practices as a means to limit asocial behavior. […] We talked about it in Davos; AI ethics committees are being set up to prevent any such abuses”.

As for the European Union, it has taken action to protect individual freedoms through the General Data Protection Regulation (GDPR), which came into force in May 2018. And, well before the arrival of big data, France was already careful about protecting the private lives of its citizens: the Commission Nationale de l’Informatique et des Libertés (CNIL) was created back in 1978. The market research world, for its part, has always respected the confidentiality of the individuals it questions and the principle of anonymised data, principles which have evolved in line with the legal framework.

Nevertheless, AI is now deeply rooted in our everyday lives, without our always being aware of it: from the pages of Google results to Netflix or Amazon recommendations, the selections Facebook makes for you based on your interests, instant translation and personal voice assistants, chatbots and GPS navigation.

2- AI and employment

Forecasts about the impact of AI on the job market have been a subject of debate. According to a McKinsey* report, the reduction in full-time equivalent employment due to AI will reach 18% internationally by 2030. But this will be compensated for by a 17% job creation rate!

“The novelty of artificial intelligence corresponds to the process of creative destruction, so dear to Schumpeter,” points out Joël de Rosnay. “Yes, jobs will disappear, but AI will also create more jobs and new professions.”

So, wouldn’t it be more apt to talk in terms of a reformation of the job market, rather than a massive destruction of jobs? That was also one of the findings of a BCG Gamma and Ipsos** study conducted in 7 countries and published last year: 53% of employees who already use tools based on AI expect to see their job transformed, while only 33% believe that their job will disappear.

The real risk lies more in a lasting divide between highly qualified professions and lower-skilled or less valued ones. Sociologist Antonio Casilli, in his latest book En attendant les robots (Seuil), criticizes the digital labor behind the AI now operating on the big digital platforms: “Myriads of non-specialized micro-jobbers carry out necessary work in selecting data, improving it, and making it interpretable. […] Digital labor proves essential in producing what is ultimately just ‘hand-made’ artificial intelligence.”

This explains the warning from French mathematician and politician Cédric Villani in his report Giving meaning to artificial intelligence, presented to the French government in March 2018: “The priority should lie in developing the means for a rich complementarity between human work and machine activity.”

In the market research world, for example, AI has created new applications, new professions and new forms of value. Professional bodies such as Syntec are working to promote new opportunities in the data sciences involving text, images, video and voice. The large-scale automation of chatbots, or reinforced quality processes such as panelist anti-fraud measures, all represent new jobs, with a variety of training backgrounds and novel roles. At Ipsos, following the acquisition of Synthesio, the leading social listening platform, we created a Data Management Center in France, now a hub for these new types of knowledge. It is doing in-depth work to support our own cultural change, alongside all our experts and hand-in-hand with our clients.

3- AI and social cohesion

Depending on the choices made in how we process data and define the criteria used to configure algorithms, AI can reproduce, and even amplify, biases and discrimination already present in society. “Artificial intelligence cannot be a new machine to exclude. It’s a democratic requirement within a context in which these technologies […] are opening up remarkable opportunities for value creation and development of our societies and individuals. These opportunities should benefit everyone,” warns Cédric Villani in his report. Vigilance is required of all the players in the AI ecosystem, from the algorithm developer to the end-user, to ensure that AI facilitates and maintains social cohesion.

Another question is whether AI will upset the delicate balance that exists in our current societies. For example, could the increasingly forensic profiling of individuals risk locking people into their own cultural bubbles, rather than letting them feel part of a wider, more diverse and interconnected society, and in this way weaken democratic pluralism?

While this may be a valid concern, certain initiatives are showing that AI can strengthen the social fabric and enable new forms of solidarity to emerge on a large scale. For example, within the framework of the United Nations Development Program (UNDP), the recently launched network of national accelerator laboratories works across 60 countries. It aims to use social networks to spot innovative local solutions to the everyday problems of underprivileged populations, enable their implementation and accelerate their dissemination. This is an ambitious initiative whose algorithmic protocol was presented to the United Nations by MIT and Ipsos, and which testifies to AI’s potential for solidarity in a number of different ways.

4- AI and human intelligence

The revelation of the existence of GPT-2, a program from the research company OpenAI (co-founded by Elon Musk) capable of generating entirely credible fake news articles, is indeed frightening. It presents a challenge to human intelligence in the age of fake news. OpenAI has, for now, chosen not to release the full version of GPT-2.

But AI can also be a remarkable tool for pooling individual intelligence in a way that generates bottom-up momentum and increases collaborative intelligence. Take, for example, the Lead User Innovation Identification method, developed under the leadership of Eric von Hippel, international specialist in “Open Innovation,” at MIT’s Innovation Lab, of which Ipsos is a member. This fast and efficient research method is based on a semantic analysis system using words, photos and videos. In conversations on social networks, it detects novel product creations and inventions developed by end-users themselves, experts in a sporting or other field. This provides valuable material for rethinking marketing models and inspiring companies to come up with truly people-centric strategies in their quest for new products or services. So, through AI, we can encourage the emergence and furtherance of non-artificial, human intelligence.
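To make the idea a little more concrete, here is a deliberately simplified, hypothetical sketch of the kind of screening such a method might perform on social posts. It is not the MIT/Ipsos system itself: the cue phrases, scoring rule and sample data below are invented for illustration, and a real system would rely on far richer semantic analysis of text, images and video.

# Toy illustration only: flag social posts that may describe user-made innovations.
# The cue phrases and threshold are hypothetical, not the actual method.
from dataclasses import dataclass

# Hypothetical phrases suggesting a user has built or modified a product themselves
INNOVATION_CUES = [
    "i built", "i made my own", "i modified", "i hacked",
    "my own design", "3d printed", "diy", "prototype",
]

@dataclass
class Post:
    author: str
    text: str

def innovation_score(post: Post) -> int:
    """Count how many innovation cue phrases appear in a post."""
    text = post.text.lower()
    return sum(cue in text for cue in INNOVATION_CUES)

def flag_lead_user_candidates(posts: list[Post], threshold: int = 2) -> list[Post]:
    """Keep posts whose cue count meets the threshold, as candidates for human review."""
    return [p for p in posts if innovation_score(p) >= threshold]

if __name__ == "__main__":
    sample = [
        Post("runner42", "I modified my trail shoes and 3D printed a new cleat prototype."),
        Post("fan_01", "Just bought the new model, love the colour."),
    ]
    for post in flag_lead_user_candidates(sample):
        print(f"Potential lead user: {post.author}")

In practice, candidates surfaced by any such automated screen would still be reviewed by researchers before being treated as genuine lead-user innovations.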

Seen in this way, AI accelerates smart thinking. “I’d rather talk about auxiliary intelligence than artificial intelligence,” says Joël de Rosnay. “That auxiliary intelligence will allow us to reflect better on things together and will lead to a positive change in human nature.”

The debate on artificial intelligence obliges us to question our models of society, given that companies inevitably influence them through their product and service offerings. At a time when citizens, consumers and associates are constantly in search of more meaning, it is up to the companies that use AI to show themselves to be responsible and ethical in how they communicate, combining respect for individuals with economic goals and corporate integrity.

With this in mind, Ipsos is now launching a multi-disciplinary and international Science Organisation, within the framework of its Total Understanding program, in which ethics will take on their full meaning alongside AI. We will be developing an in-depth, holistic and scientific understanding of what drives consumption and opinion-forming, drawing on the social sciences, language sciences, cognitive sciences, neurosciences and data sciences.

As each company transforms, it should set up an ethics committee adapted to its sector, operating alongside its use of AI. This would provide proof of authentic people-centricity and account for the ethical aspirations of consumers, citizens and associates. Just think: what if all this “auxiliary intelligence” were to awaken the most enlightened side of humankind?

This article was originally published in French in Stratégies, no. 1987, March 2019.