
The use of AI in elections

AI - © Pexels Saksham Choudhary

In 2024, elections will be held in more than 60 countries around the world, as well as in the European Union. In other words, about 50% of the world’s population will be asked to cast a vote. These countries include India, Mexico, Russia, South Africa, Ukraine, the United Kingdom and the United States of America. It is therefore a crucial year for democracy around the world, even though the quality of democratic governance differs significantly among these countries. Nevertheless, the outcome of these elections is not only important for the respective countries, but will also shape the global political landscape.

The use of technology in elections is not new, but the rapid development of artificial intelligence (AI) in recent years, in particular the increased use of generative AI such as ChatGPT (OpenAI) and Copilot (Microsoft), could have a significant impact on elections and democracy. These developments create opportunities to enhance electoral efficiency and democracy, but they also raise concerns about the misuse of AI in elections. Elections are about convincing voters to support a political party or a particular candidate, and politicians use different ways of engaging and informing voters. Throughout time, albeit to differing degrees, voters have been exposed to false promises and misinformation from human political representatives. This is part of political reality, and members of opposing parties are usually quick to point out the misinformation or false promises of their opponents. But what happens if, instead of a human being, an AI model aims to influence voters? How could this phenomenon affect the goal of free and fair elections?

During the 2016 presidential election in the USA, the data analytics company Cambridge Analytica, hired by the Trump campaign, used an AI model to influence voters through microtargeted advertisements. The scandal, which only came to light later, involved a massive breach of privacy: Cambridge Analytica illegally used the personal data of millions of Facebook users and manipulated voters through this AI model, raising obvious concerns about undue influence on democratic processes.

The rapid growth of generative AI over the last year has created new opportunities for misuse, especially the production of fake images, videos and text that can be used to spread election disinformation. Such fabricated information could, and did, undermine the ability to hold free and fair elections. The high quality of many fake images, videos and audio clips created by generative AI, which often appear authentic, makes it easier to mislead voters into supporting specific candidates or political parties, or into not voting at all. It is also possible to create AI-generated text messages in multiple languages as part of a propaganda campaign based on false information. Furthermore, the ease with which various generative AI tools can be combined into a disinformation campaign that influences an election, and thereby undermines democracy, is a serious concern.

In view of the possibility of large-scale impacts on various elections around the world, the misuse of generative AI is a global concern. In February 2024, a group of 20 tech companies signed an agreement at the Munich Security Conference to cooperate in preventing deceptive AI content from being used to influence elections. The signatories to this historic agreement included companies that produce generative AI models, such as OpenAI and Microsoft, as well as social media platforms such as Facebook (Meta), WhatsApp and TikTok. While it is important that such technology companies are committed to preventing the misuse of their AI models during election campaigns, it is not clear how effective this agreement will be. The situation will have to be monitored closely, and independent electoral bodies and human rights watchdogs could make a useful contribution in this regard.

In a research report, Countering Disinformation Effectively, published in January 2024, the Carnegie Endowment for International Peace discussed the use of generative AI in the context of disinformation. The report not only acknowledged the potential dangers of misusing generative AI in this context, but also offered another perspective. It indicated that political deepfakes, such as videos created to misinform or mislead people, have so far had a limited impact. The report argued that people’s willingness to believe false information often depends on factors such as a viewer’s state of mind, their group identity or who they perceive as an authority, more than on the quality of the fake image.

Despite these concerns, the use of AI in elections also has various benefits, such as analyzing large amounts of data on voter patterns, political speeches, news articles and reports on government performance. AI could thereby provide valuable insights for political parties to use during their election campaigns. In the administration of elections, the institutions managing them could deploy appropriately designed AI-based programs to indicate how to optimize the allocation of resources to particular voting stations throughout a country and to support voters abroad.
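
To make the resource-allocation idea more concrete, the following is a minimal, purely illustrative sketch, not drawn from any electoral authority’s actual system: it distributes a fixed pool of voting booths across polling stations in proportion to a hypothetical turnout forecast, using a simple largest-remainder rounding step. All station names and figures are invented for the example.

```python
# Illustrative sketch only: proportional allocation of a fixed number of
# voting booths to polling stations based on a (hypothetical) turnout forecast.
from math import floor


def allocate_booths(expected_turnout: dict[str, int], total_booths: int) -> dict[str, int]:
    """Allocate booths in proportion to expected turnout, using largest-remainder rounding."""
    total_voters = sum(expected_turnout.values())
    # Ideal (fractional) share of booths for each station.
    shares = {s: total_booths * v / total_voters for s, v in expected_turnout.items()}
    # Start from the rounded-down shares.
    allocation = {s: floor(x) for s, x in shares.items()}
    # Hand out any remaining booths to the stations with the largest remainders.
    leftover = total_booths - sum(allocation.values())
    for station in sorted(shares, key=lambda s: shares[s] - allocation[s], reverse=True)[:leftover]:
        allocation[station] += 1
    return allocation


if __name__ == "__main__":
    turnout_forecast = {"Station A": 4200, "Station B": 1800, "Station C": 950}  # hypothetical figures
    print(allocate_booths(turnout_forecast, total_booths=25))
    # -> {'Station A': 15, 'Station B': 7, 'Station C': 3}
```

A real system would of course need far more inputs (queue times, staffing, accessibility, voters abroad), but the sketch shows the kind of data-driven allocation the paragraph above refers to.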

In view of the potential negative impact and harm that AI could cause during election campaigns and even on election day itself, it is necessary to consider appropriate countermeasures to strengthen democracy. Such measures vary in nature and scope and include the following:

  • An agreement between technology companies to prevent the misuse of generative AI during elections, such as the one mentioned earlier.
  • Changes in electoral legislation to stipulate how AI may be used legally in an election campaign.
  • An agreement among political parties and the institution overseeing the election about the use of AI.
  • Strengthening independent media, so that rigorous professional journalism can support democracy.
  • Using generative AI to combat disinformation and to create new tools that improve the fact-checking of information published on social media.

In Europe there will be elections for the European Parliament as well as national parliamentary elections in EU member states such as Belgium, Austria and Romania. The UK will also hold parliamentary elections, while Russia and Ukraine will have presidential ones. The war in Ukraine casts a dark shadow over all of these elections, but a variety of other factors also shape the election debates. The impact of these elections is not limited to the countries where they take place; the combined effect of the national elections and the European parliamentary elections will be much larger. AI is expected to play a role in the various election campaigns, but to what extent remains to be seen. The EU AI Act is not yet in operation, but it provides clear guidance on the risks involved in different kinds of AI, some of which will be prohibited, such as AI that deploys subliminal, manipulative or deceptive techniques that could impair a person’s decision-making ability. The misuse of AI in any of these elections could have a negative impact on democracy.

On a lighter note, some people might think it would be better to have an AI representative in parliament rather than a real politician, but this is far from reality, since only natural persons who meet the requirements stipulated in a country’s constitution may be elected to a parliament.

It will be interesting to see how proactive the new EU AI Office and the respective data protection authorities in EU member states, as well as in the UK, will be in issuing directives on the use of generative AI during elections. In view of the importance of all these elections, clear guidelines are needed on what constitutes acceptable use of AI during election processes.

Dirk Brand

Dirk Brand is an independent legal consultant, special counsel at Swart Law and an Extraordinary Senior Lecturer at the School of Public Leadership, Stellenbosch University. He holds the following degrees: BComm, LLB, LLM (European Union Law) and LLD (Constitutional Law). He is part of the LegalAIzers team who won the First St.Gallen Grand Challenge in Switzerland on the application of the EU AI Act (July 2023). He is also a guest lecturer at the Hochschule Kehl, Germany and the Law Faculty, University of Verona, Italy. Dirk loves to travel and keep fit by running in the neighbourhood or in the nearby vineyards.

Citation

Brand, D. The use of AI in elections. https://doi.org/10.57708/BBRMNHEEDR2GQNUJ8OEELSA
