Harriet Kingaby

Defunding the disinformation economy

The consequences of our global misinformation crisis, and our inability to control it, are far-reaching and devastating in scope - and advertising is funding it. Harriet Kingaby from the Conscious Advertising Network on why the disinformation economy must be defunded to tackle the perverse incentives that drive it.

Misinformation in the mainstream

Earlier this year, on a break in France, a racist conspiracy theory was broadcast to me over my hotel breakfast. There, as I tucked into my muesli, a presidential candidate on mainstream TV was talking about the ‘Great Replacement Theory’ to a room of hotel guests.

For the uninitiated, this conspiracy theory suggests that white populations are being ‘replaced’ with non-white peoples, through mass migration, demographic growth and a drop in the birth rate of white Europeans. It is reportedly believed by 67% of French people and half the American public, and is particularly prevalent among those who supported the 2021 US insurrection. It has been linked to multiple massacres, including the Buffalo shooting earlier this year.

A few months later, a friend in the USA sent me a screenshot. Fresh from the revelations that the legal precedent protecting US abortion rights, ‘Roe v. Wade’, was to be overturned, ‘conservative’ sites were publishing demonstrable medical disinformation. My friend had been sent one of these articles on a chat group by someone concerned the lies were true, and who repeated the falsehoods back to the group.


Last year, the National Abortion Federation reported a 128% increase in cases of assault and battery against abortion clinics and clinicians. Although this cannot be directly attributed to misinformation alone, the sheer volume of resources being spent on campaigns to undermine reproductive rights and bodily autonomy is likely to be affecting public opinion and behaviour, and there is plenty of evidence that the dehumanisation of groups (in this case abortion seekers and providers) can have violent and unsavoury consequences.


Just this month, the Conscious Advertising Network were sent evidence that Facebook had allowed its advertising tools to be used to amplify calls for violence and genocide in Ethiopia. This is years after similar findings in Myanmar which led to Facebook being actively linked to genocide.


In 2024, three-quarters of the world’s most populous countries will hold a major election. Billions of people will be making decisions about the defining issues affecting their countries, and who will be best suited to lead them to solutions. Many of those decisions will be made against a backdrop of hate and disinformation, and narratives that deliberately target trust in democratic processes.


The global misinformation crisis has gone mainstream. Its breadth and depth are far-reaching and devastating in scope, and as yet we have not been able to control it.

How advertising funds disinformation

Underpinning the widespread production and amplification of this disinformation is a powerful global economy funded by advertising. Advertising funds our media environments, from TV to online news sources, from podcasts to radio. This has created ‘perverse incentives’ to maximise consumer exposure to advertising and reach via social media. Figures from NewsGuard and Comscore suggest that $2.6bn of advertising money each year is going to creators of extremism and disinformation. These ‘perverse incentives’ include:

Social media algorithms designed to keep us on the sites longer to serve us more ads - outrage and curiosity are key drivers of attention and engagement
News outlets embracing clickbait headlines and hyperbole to encourage us to click and share
Commentators expressing extreme views to drive traffic to their channels
Advertising tools enabling sophisticated identification and targeting of those most likely to be susceptible to hate and misinformation. These are used by everyone from corporations intent on greenwashing, to populist politicians and hate groups

Disinformation and digital networks - a perfect pair?

Disinformation, with its hyperbolic and often outrageous nature, lends itself neatly to our digital media ecosystems. Shared language creates vast ecosystems, meaning that Australian and US commentators can drive conversation in the UK, and Brazilian misinformation is picked up in Portugal.


The Institute for Strategic Dialogue’s analysis of the COP26 conversation illustrates well how these vast networks, driven by small numbers of bad actors, can quickly and effectively skew international conversations, with false narratives whipping through social media faster than the truth. These narrative shifts consistently outpace attempts to fact-check them.


In fact, research from Stop Funding Heat found that Facebook’s Climate Science Centre fundamentally failed to tackle climate misinformation on the platform effectively, and research from the Centre for Countering Digital Hate found that 97.5% of posts on the platform about the Great Replacement Theory were not addressed.

Beyond social media

But let’s not just focus on social media: the disinformation system is happily aided by adtech platforms, which monetise dubious websites and channels so that ad agencies can buy cheap engagement for their clients. This monetises a whole new set of actors – from chancers, such as the ‘fake news factories’ behind the Pizzagate conspiracy theory, to political actors, whose disinformation campaigns make money for the cause, or simply pay for themselves.


For example, pro-Kremlin sites such as Russia Today and Sputnik News were both monetised by advertising until very recently, and this advertising was only removed once the conflict was already underway. Advertisers were effectively paying for the production of information warfare, despite the companies supplying the advertising technology to these sites having sight of the escalating conflict situation for many months.


This is of huge concern to those advocating for peace, science, or progressive causes – making the environment in which they must communicate harder to navigate, while threatening the public and political mandates for action.


Egregious content is all too often not only inadvertently endorsed, but actively monetised, by the world’s biggest brands. Or worse, they choose to actively fund disinformation to benefit commercially from the perceived ‘engagement’ it creates: for example, brands choosing to advertise on TV or radio stations that feature ‘shock jocks’, or buying cheap ‘performance’ or ‘blind’ programmatic advertising, which has a high level of fraud and funds low-quality sites.

A decade to deliver disinformation solutions

This has to stop. We have less than a decade to fix the climate, resolve complex geopolitical situations, and address a cost-of-living crisis. We must clean up our media ecosystems in the next 18 months. This must start with dismantling the disinformation economy and attacking its economic incentives at a global level.


Trying to deal with hate and misinformation without talking about the role of the advertising industry is like trying to deal with climate change without talking about the role of the fossil fuel industry. 


Addressing this issue must be at the heart of any game-changing policy strategy around misinformation: to cut this issue off at the source, and address the incentives for its production. That’s why steps such as the EC Code of Practice on Disinformation are so powerful; now we need to think globally.

Going global to tackle disinformation

Progress is being made: many of the social media platforms now label or demonetise hate and disinformation where they find it. The under-funded and over-stretched fact-checking industry recently received a small cash injection from Meta; Twitter has updated its policies on crisis misinformation; Google, YouTube and Pinterest now have climate misinformation policies; and TikTok has doubled down on enforcement of its ban on paid political influencers.


Recognising that self-regulation has not worked, the EU passed the Digital Services Act this year and the UK continues to work on its Online Safety Bill. Signs are encouraging, but the global nature of the problem means we need co-ordinated action at a global level too. Key to this is recognising misinformation as a form of “pollution” – a negative externality generated as a side-product of the current advertising business model – which must be addressed.


We already have the tools to help us tackle the problem and protect our fundamental rights. The UN has done great work in effectively defining hate speech, and in outlining via the Rabat Plan of Action how it can be tackled while maintaining freedom of expression.


To supplement groundbreaking legislation and policy, we need additional treaties and support:

Global treaties which involve the NGO, government and corporate communities, including plans and targets to tackle misinformation across borders and language communities. The proposal of an Intergovernmental Panel for the Information Environment (IPIE) is both exciting and utterly necessary.
Universal definitions of mis- and disinformation in key topic areas, crafted by subject matter experts. In 2022 this means working to define mis- and disinformation around the Great Replacement Theory, transgender people, and reproductive rights, as well as medical mis- and disinformation on major public health crises. The Climate Action Against Disinformation (CAAD) Coalition’s definition of climate mis- and disinformation is a powerful tool that widens the definition of climate misinformation from simply denial of scientific consensus to include demonstrably false solutions, which feature in much corporate greenwashing, for example.
Adoption of these definitions by policy makers, regulators and tech platforms involved in the disinformation economy. They must be embedded into policy, and the loopholes which allow them to continue to spread must be closed. Google’s climate misinformation policy is a great example of how a platform can show leadership through restricting access to advertising tools.
Enforcement of these policies, which will only be as good as the action taken on them. Repeat offenders in particular must be targeted.
Transparency from platforms on the scale of the problem, and on the effectiveness of the solutions taken to tackle it. Progress should be reported in platform transparency reports, and researchers given access to data, to ensure third-party accountability.

The global misinformation crisis requires global thinking that cuts to the root cause of the problem – its funding and amplification models. Until pressure is applied, we will see more piecemeal action, or action that strives to divert attention, and to shift individual responsibility away from those profiting from the disinformation economy.

Global pollution issues have seen great success from joined-up, target-driven thinking. Only by rethinking and reshaping the advertising ecosystem will we solve this vast and existential threat.

Harriet Kingaby is co-chair of the Conscious Advertising Network, a coalition of 150 members which seeks to break the economic link between advertising and harmful content.