Opinion
The hidden threat of AI advertising in the Voice referendum
October 20, 2023
Imagine you’re scrolling through your phone before bed and stumble upon a shocking video: a massive crowd marching through Sydney, fervently opposing the Voice. Except the video is fake — generated by artificial intelligence and fine-tuned to exploit your political beliefs. As Australia prepares for its first referendum in nearly 24 years, a critical question looms: how will AI-generated content affect how we vote, and what might be the consequences?
Artificial intelligence is rapidly permeating political campaigns, not just in Australia but around the globe. In the US, Republicans recently ran an ad depicting a dystopian America under a second Biden presidency; next door in New Zealand, the National Party ran dramatic ads featuring terrified AI-generated nurses and a ram-raid that never happened. In Australia, a Melbourne crypto trader and No campaigner ran YouTube ads featuring AI-generated Indigenous voices that implied Indigenous Australians were voting No — despite more than 80% of Indigenous people supporting the Voice.
The advertising industry is racing to adopt AI — McKinsey predicts that 90% of advertisers will do so within two years — but the technology’s deployment in political contexts poses unique dangers. OpenAI CEO Sam Altman recently said it was a major concern for him personally, as “personalised 1:1 persuasion, combined with high-quality generated media, is going to be a powerful force [in future elections].”
AI-driven political advertising can be harmful for two reasons. The first is that bad actors can create a distracting smokescreen by generating an overwhelming volume of content and then targeting select pieces at small, hard-to-notice audiences. Some safeguards already exist: the platforms have made digital advertising much more transparent over the last two federal elections, and caps now limit the total number of ads any one person can run.
The second is that AI exponentially accelerates the spread of disinformation. It’s not just about quantity; AI can create false content that looks astonishingly real and is thus incredibly persuasive. Throughout COVID-19, we saw how damaging human-made disinformation can be. AI takes it to a whole new level: an Oxford study found that AI-generated fake news articles were more likely to be shared on social media than real news articles, while a University of Zurich study found that AI-generated content was more likely to be believed than disinformation written by people.
This problem becomes acute in the binary setting of a referendum. Unlike federal elections, where candidates campaign across 151 different electorates, each with its own issues and history, a referendum poses a single, straightforward question. You don’t have the latitude that comes with preferential voting or competing policy platforms. It is a simple question about national identity — do you believe we are a nation that says Yes to constitutional recognition, or a nation that says No?
Australians haven’t had to make a choice like this for nearly 24 years. In such a stark context, disinformation becomes both easier to spread and far more damaging. Bad actors have seized moments like this before: during Brexit, the 2016 US presidential election, the 2020 Taiwanese presidential election and the Colombian peace agreement referendum. Along the way, they irreparably damaged civic life by fracturing traditional political alliances, making it harder to rebuild broad-based coalitions and often radicalising voters, destabilising the democratic process for years to come.
Herein lies the rub: none of those politically seismic and highly polarising events had the added fuel of AI-generated content. As Australia faces its own watershed moment, the potential for similar or even more severe disruption is real. The introduction of AI-generated content and AI-driven disinformation is a wildcard that could tip the scales, possibly leading to unexpected outcomes in the referendum. Because of the sheer volume of such content, we may only recognise its impact after the votes are counted — when it’s too late.
As we navigate this uncharted territory, we need a coordinated response that brings together government, civil society, and tech platforms. We must keep up the push to make advertising systems transparent — letting every Australian see the forces shaping public opinion. And if someone employs AI to make their point, let it be known — Google’s policy change requiring all AI-generated content to be flagged as such is a step in the right direction. Once the votes are counted, a comprehensive audit should follow to glean lessons for the future. Yet immediate action starts with all of us. With you. Over the next few weeks, let’s be extra vigilant and extra sceptical. And when you encounter something questionable, don’t just scroll past; engage with whoever shared it and start a conversation. We’re at a democratic crossroads; let’s ensure the path we take is illuminated by the light of reality, not shadowed by manufactured illusions.