By Sarala Hapugalle
In 2022, a video of Ukrainian President Zelensky was circulated, urging troops to lay down their arms and surrender to Russia. Ahead of the 2024 US election, an AI-generated robocall imitating President Joe Biden urged Democrats not to vote in the New Hampshire presidential primary and to instead ‘save’ their vote for the November general election. In Indonesia, a video of deceased former leader Suharto was posted to X by the Golkar political party, endorsing the party’s candidates as being capable of continuing “my dream of Indonesia’s progress”.
Disinformation and misinformation have long ranked among the greatest threats to democracy, especially in the digital age. The emergence of Artificial Intelligence has greatly compounded this threat: hyper-realistic fake videos of individuals, commonly known as ‘deep fakes’, are being shared left, right and center, while technologies like recommendation algorithms manipulate public opinion without people even realizing it.
The Threat of Deep Fake Videos
Whether it’s fake images of politicians or audio recordings of conversations that never happened, deep fakes have spread rapidly across social media with the recent surge of readily available, user-friendly AI. These fabrications blur the line between fact and fiction, creating confused political landscapes where individuals are bombarded with a flurry of media they have no means of verifying.
This is especially worrying in tense political climates, where individuals who are already emotionally on edge receive loaded media. Such a situation was visible in the recent Indonesian protests, where an already volatile situation was inflamed by a deep fake video of Finance Minister Sri Mulyani Indrawati saying teachers were a “burden” on the country. Similarly, a well-timed video released the night before an election or a referendum could change a nation’s entire future, with individuals voting on the basis of fake information and volatile mindsets.
Such deep fakes thus have the potential to artificially engineer uprisings and manufacture political change – a worrying phenomenon for the future of democracy. Politics then no longer functions according to the genuine will of the people, but rather according to the will of a particular individual or group, pushed onto society by exploiting public unawareness and sensitivities.
AI as an Official Tool of the State
AI is not being abused only by individual citizens or political opponents; governments and politicians themselves have begun to use it in an official capacity. In 2023, residents received public service announcements from New York City Mayor Eric Adams in languages such as Mandarin and Spanish – languages he does not speak. Adams had used voice-cloning AI to send out official messages. While such practices might seem harmless, even advantageous, on the surface, this casual use of AI also implicitly conveys that the technology is reliable and legitimate. A video can then no longer simply be dismissed as ‘AI’ and therefore unofficial – and that is assuming one can still distinguish AI from reality in the first place.
AI-Powered Algorithms and Manufactured Consent
In addition, AI-powered micro-targeting and profiling represent one of the most subtle, yet most powerful, threats to democracy. AI algorithms analyze vast amounts of personal data – social media activity, search history, even consumer behavior – to build psychological profiles of voters. This allows campaigns to deliver highly personalized political ads designed to tap into individual fears, desires, and biases.
The Cambridge Analytica scandal offers a striking preview of what is to come: the company harvested Facebook data from millions of users without consent to build psychographic profiles, and was alleged to have been hired to sway key votes such as the Brexit referendum and the 2010 Trinidad and Tobago general election. In the latter, Cambridge Analytica allegedly ran a campaign called “Do So!”, publicized as an anti-establishment, grassroots movement, which delivered tailored messaging to specific groups such as young Afro-Trinidadian voters, encouraging them to abstain from voting. The movement gained far less traction among Indo-Trinidadian groups, resulting in a resounding victory for the predominantly Indo-Trinidadian party in power.
While that case did not rely on the advanced AI of today, such operations have become far easier and more widely accessible with modern systems, where techniques like predictive modeling allow campaigns to fine-tune their messaging strategies. By selectively targeting specific demographics with messages that either mobilize or demobilize them, AI-aided campaigns can quietly manipulate electoral outcomes without the public ever realizing. Further, AI-driven social media algorithms amplify outrage and polarizing content because it drives engagement, reinforcing filter bubbles and creating echo chambers where users are rarely exposed to opposing views.
This process weakens rational debate, increases political tribalism, and fragments the electorate, reinforcing existing biases and making cross-party dialogue or consensus that much harder – worsening today’s already critical political polarization. Over time, democracy is hollowed out. Citizens no longer participate according to their own free will, on the basis of ideas and beliefs formed from freely accessible information, but are instead nudged by algorithms that privilege persuasion over truth. Moreover, because most of these practices leave no reliable trace, the public cannot even agree on whether such influence has occurred at all, which only compounds the danger.
Pathway to Authoritarianism?
Another emerging dimension of AI’s influence on democracy and freedom is its use in surveillance and authoritarian control. Amidst nations’ longstanding struggle to balance individual autonomy with regulation, AI presents a stark opportunity for power-hungry leaders to consolidate control. Advanced facial recognition systems, predictive policing algorithms, and real-time data analysis allow regimes to monitor dissent, identify protest leaders, and stifle opposition before it even mobilizes. China’s social credit system integrates AI and big data to track citizens’ behavior, rewarding “good” conduct and punishing disobedience, effectively engineering political compliance.
Even in so-called ‘democratic’ contexts, mass surveillance programs risk creeping toward authoritarian practices as governments justify expanded powers in the name of ‘security’ or ‘countering disinformation’. Such a phenomenon was clearly seen in the past activities of institutions like the US National Security Agency, which enabled the mass collection of citizens’ data and internet activity under the justification of preventing terrorism. AI makes this kind of data collection far easier and far more intrusive. Political opposition and citizens’ rights to protest and criticize government action thus gradually fade away, replaced by a climate of fear and self-censorship that erodes the free political participation at the base of most democratic systems.
The Way Forward
Artificial Intelligence is thus transforming politics in extremely dangerous ways. Deep fakes, micro-targeting, algorithms, and even state surveillance have blurred the lines between truth and fiction, freedom and manipulation. Democracy in its truest sense relies not just on elections but on informed consent, public trust, and the freedom to receive legitimate and accurate information – all of which are being destabilized by AI. Left unchecked, these technologies threaten to turn political rhetoric into a competition of manufactured narratives, where the loudest and most emotionally charged messages control public discourse, regardless of whether they are true.
Stronger regulation of data use – by states and institutions as well as by ordinary citizens – alongside transparency in algorithmic decision-making and investment in digital literacy are necessary to protect free and legitimate dialogue and debate. The future of democracy thus depends on our ability to ensure that technology serves the ideas and interests of the people, rather than being used as a means of manufacturing those ideas to control the populace.
Sarala Hapugalle is a second-year undergraduate pursuing a dual academic path: Politics and International Relations through the University of London degree program at Royal Institute Colombo, alongside Law studies at the University of Colombo. She can be reached through saralahapugalle1@gmail.com
Factum is an Asia-Pacific-focused think tank on International Relations, Tech Cooperation, and Strategic Communications accessible via www.factum.lk.
The views expressed here are the author’s own and do not necessarily reflect the organization’s.