Digital extremism: How algorithms feed political polarisation
In the 2018 presidential election, we saw political polarisation reach unprecedented heights in Brazil. We witnessed clashes in the streets, within families, in WhatsApp groups and, above all, on social media.

Staying "on the fence" was difficult, and choosing one's side in the battle meant, in many cases, severing real or virtual ties with those on the other. It was a dispute that intensified in the 2022 elections and that, with victories and defeats on both sides over the last few years, has no prospect of ending any time soon. But how did Brazil become so divided?
The path that digital transformation has led us down was perhaps inevitable: not only because political parties have made social media their main campaign platform, but also because of how we consume and create content on the internet. In this post-web era, candidates have moved on from websites to social networks as the central element of their campaigns. The change is relatively recent: it was only in 2010 that Brazilian electoral legislation allowed parties to make extensive use of the internet, and it was that same year that the first cases of fake news and of social networks being used to smear opponents were recorded, with then-candidate Dilma Rousseff as the main target. But it was not until 2014 that presidential campaigns were truly built around social media – a dynamic that continues to this day.
The internet is not only a space where candidates disseminate content; it also serves to guide political campaigns and decisions based on the interests voters express there. This timeline helps us to visualise how quickly the digital transformation has accelerated: in just 13 years, social networks have gone from being little known and little exploited by parties and politicians to sitting at the heart of political and social transformation.
Fifth power
When the German philosopher Jürgen Habermas presented his concept of the public sphere as a social dimension that mediates between the state and the population, he probably didn't imagine that this role would one day be taken over by social networks. For him, the public sphere is a force that emanates from civil society, putting pressure on governments, and it is where topics of public interest are discussed. Since social networks have completely changed the way information is disseminated, and have also democratised and widened access to all kinds of knowledge, it was inevitable that every issue of social, cultural and political interest would come to be debated there.
Social networks have also given us greater power to mobilise and assemble, and for this reason they can be seen today as a fifth power, alongside the three traditional powers (executive, legislative and judicial) and the fourth, the press. What's more, much of what we think and believe comes from them. If the original concept of public opinion is that society's opinions are formed through debate, then it is through digital channels that these conversations now take place and through them that individual worldviews change. Public opinion only exists when there is broad access to information and freedom of debate and expression – rights that are, of course, guaranteed in a democracy. However, the way these dynamics play out in virtual environments is complex and involves a powerful information-curation mechanism: algorithms.
Algorithmic bias
The dynamics of social networks are simple: when we produce a piece of content and make it available to our followers, they need to interact with it – through likes, comments, shares and views – for it to be passed on to friends of friends and even suggested to people outside our virtual circle. The breadth of this reach depends on how much engagement the post generates. This is managed by the platforms' algorithms, which use the metrics of each publication and the data from each user's navigation to determine what content is delivered to whom.
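The engagement-driven ranking described above can be sketched as a toy function. The weights and field names below are illustrative assumptions, not any real platform's formula:

```python
# Toy sketch of engagement-based ranking: posts with more interactions
# are pushed to a wider audience. The weights are assumptions for
# illustration, not any platform's actual scoring.

def engagement_score(post):
    """Combine a post's interaction metrics into a single score."""
    return (1.0 * post["likes"]
            + 2.0 * post["comments"]
            + 3.0 * post["shares"]
            + 0.1 * post["views"])

def rank_feed(posts):
    """Order posts so the most-engaged content reaches the most people."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "a", "likes": 10, "comments": 2, "shares": 1, "views": 500},
    {"id": "b", "likes": 200, "comments": 40, "shares": 30, "views": 4000},
    {"id": "c", "likes": 50, "comments": 5, "shares": 2, "views": 900},
]

feed = rank_feed(posts)
print([p["id"] for p in feed])  # → ['b', 'c', 'a']
```

The key point is that reach is a function of engagement alone: nothing in the score measures accuracy or balance, which is why sensationalist content travels so well.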
Every action taken on the internet leaves a trace, or "virtual footprint", which gives the algorithms individual clues about profile, interests, hobbies, consumption habits, beliefs and, of course, political positioning. This data is used not only to help advertisers segment their communications and reach their target audience, but also to make the user experience more enjoyable. For example, by understanding that a particular internet user is a first-time mum and dog owner who enjoys travelling and make-up, the algorithm will start to prioritise content such as articles about motherhood, beauty videos, and product ads for pets. This way, advertisers are guaranteed more interaction from their target audience and a greater chance of conversion.
It is worth remembering that this modus operandi is not exclusive to social networks: it extends to search engines (such as Google) and to various news channels, which also personalise content based on navigation data, clicks and geolocation. The same logic applies to the political-ideological field: if an internet user tends to interact with content that has a conservative agenda, they will be shown ever more publications of this type, i.e., content that reinforces their beliefs and values. Meanwhile, the progressive agenda fades from view – this is the origin of filter bubbles, a term coined by the American writer Eli Pariser to describe the isolation that algorithms create by restricting internet users' access to points of view that differ from their own. This creates an echo chamber, a phenomenon that makes people more intolerant of opposing views, as they tend to isolate themselves in groups that share the same ideas.
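The filter-bubble dynamic is, at its core, a feedback loop: each interaction nudges the user's profile toward a topic, so the next recommendation shows more of the same. A minimal sketch, with entirely hypothetical topic names and weights:

```python
# Minimal sketch of the filter-bubble feedback loop. A slight initial
# lean is amplified because recommendations follow the profile and
# interactions reinforce it. All names and numbers are illustrative.

def recommend(profile, catalogue):
    """Pick the topic the user's profile currently weights most heavily."""
    return max(catalogue, key=lambda topic: profile.get(topic, 0.0))

def interact(profile, topic, boost=0.2):
    """Reinforce the profile after the user interacts with `topic`."""
    profile[topic] = profile.get(topic, 0.0) + boost
    return profile

catalogue = ["conservative", "progressive"]
profile = {"conservative": 0.55, "progressive": 0.45}  # slight initial lean

# A few recommend-then-interact cycles: the lean becomes dominant,
# and the other agenda is never surfaced again.
for _ in range(5):
    topic = recommend(profile, catalogue)
    interact(profile, topic)

print(profile)
```

Note that the losing topic's weight never changes: once behind, it is never recommended, so it can never be interacted with – a simple illustration of how algorithmic curation hardens an initial preference into isolation.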
Freedom of expression is also harmed by this process as, within ideological groupings, people who think differently are discouraged from expressing themselves. In this context, when our opinions are constantly reinforced by similar visions, the perception of reality is distorted and extremism gains strength, resulting in extreme polarisation. This is evidenced by the serious consequences that Brazil has recently experienced, such as the invasion of government buildings in January 2023, and it remains an ongoing issue – especially during election periods, when fake news proliferates.
What is true?
For fake news to be perceived as true, at least two factors must be present: first, it must reinforce something the recipient already believes, or is inclined to believe, given their ideological bias; and second, it must come from a source that looks trustworthy – a person, a content producer they admire, or a group they identify with. This is precisely why bubbles give strength to fake news and allow it to be used as a political manipulation strategy. Our Trust Misplaced report found that almost 58% of Brazilians are confident that they can tell real news from fake news. However, only 26% think that other people in Brazil can do the same, suggesting a confidence that probably overestimates personal ability. We live in the post-truth era, where emotions and pre-existing certainties override facts when each person decides, even subconsciously, whether or not to believe a piece of information. By creating bubbles and echo chambers, algorithms reinforce the confirmation bias that produces this distorted view of reality, and they increase the reach of fake news through the logic of engagement.
The volume of fake news circulating in Brazil is high: the Fundação Getúlio Vargas monitored the circulation of links related to distrust of the Brazilian electoral system on Facebook and YouTube between 2014 and 2020, identifying more than 337,000 publications that sought to discredit the integrity of the process – essentially spreading disinformation. Together they generated 16 million interactions and more than 23 million views. Sensationalist content, which may use clickbait or even hate speech, also generates very high volumes of clicks, comments, likes and shares, and so is strongly amplified by digital platforms. And there are government bodies and political parties that take advantage of this to weaken the image of their opponents, or to strengthen their own positions, through disinformation.
Crisis of confidence
With so much access to information and such sophisticated manipulation strategies, it's hard to know who to trust. The CIGI-Ipsos Global Survey: internet security and trust showed that only half (51%) of Brazilian respondents trust search engines and only 43% trust news feeds on social networks. Meanwhile, an Ipsos study, Fake news, filter bubbles, post-truth and trust, found that 73% believe the news often deliberately reports something that isn't true, and that 71% believe people in Brazil trust what politicians say less than they did 30 years ago.
There's no denying that there's a crisis of trust in government, media communications, and businesses. We are constantly bombarded with information from a wide variety of sources, and in the race to keep up, the time and care needed for in-depth content production and fact-checking – both by those producing news and by those receiving it – are losing ground. Fake news itself has been used to discredit sources previously seen as highly reliable, such as the media and academia, and to increase feelings of insecurity. Fostering distrust in democratic institutions creates a scenario of instability that makes us, as a society, even more vulnerable to manipulation.
Technology: the enemy of democracy?
For democracy and technology to coexist in harmony, there needs to be a real push in commitment to facts. Work is needed to educate the population, and this role can be shared between different groups, including brands, whose reputation can also be damaged by false information. This is the case of Coca-Cola, which launched #ÉBOATO to debunk the lies circulating on the internet about its products. At the height of the COVID-19 pandemic, Fiocruz used the same approach to refute false claims about vaccines, and went a step further by producing informational material to help the public identify biased publications.
It is important that brands proactively avoid any association with the sources of false information and engage in educational activities to raise public awareness. One of the most debated issues is the lack of transparency in algorithms that do not give users control over what is shown or hidden from their timeline. In addition to the unbalanced influence that algorithms can have on electoral results, there is also the question of their potential to radicalise, promoting videos with hateful and conspiratorial content. There's an urgent need for more effective automatic detection of this type of content and for platforms to be genuinely committed to tackling it.
From another perspective, it is possible to be optimistic about algorithms as disseminators of good ideas and awareness of important issues. A good example is #MeToo – the use of this hashtag on social networks created a space that empowered thousands of people around the world to share their own stories of abuse and harassment. This example highlights the role of digital media as a powerful space for sharing, creativity, and debate – a tool for positive social and cultural change.
Table of contents
- An introduction to Flair Brazil 2024: Nostalgia or perspectives
- Inflation vs. portfolio: The brand vacuum
- Brands and social purpose in a politically divided time
- Digital extremism: How algorithms feed political polarisation
- The importance of female representation in Brazil
- The role of companies in taking responsibility and action
- Conclusion