

May 14, 2023

Lydia Baccino, Gabriel Helupka, Arnold R. Koka, Ludovica Leccese, Benedetta Bisaccia, Sonia Savci

Jennifer Loy, Chief Editor


Election Infographic[1]

Event: Artificial intelligence (AI) specialists and political scientists have warned that AI-enabled generative tools could sway the upcoming 2024 elections through the production and dissemination on social media of synthetic images, video, and audio known as “deep fakes.” The low cost and convincing realism of deep fakes, combined with the speed of dissemination that powerful social media algorithms provide, could undermine future elections on a global scale. The most alarming scenarios enabled by AI-generated synthetic media include the impersonation of candidates, automated audio inciting violence or giving fake voting instructions, and fabricated audio and video interviews. The warnings cover the damage AI-generated disinformation could do to public trust in Western democracies and the likelihood that these tools will be used by groups seeking to erode trust in democratic institutions.[2] Evidence of AI meddling recently surfaced on May 14, 2023, during the Turkish presidential election, when several doctored videos of incumbent President Recep Tayyip Erdoğan and challenger Kemal Kilicdaroglu giving speeches appeared on social media. Experts called this the first major test of the election-interference challenges AI technology poses ahead of the major 2024 elections, highlighting the manipulative role deep fakes can play in flooding social media with inauthentic videos regardless of intent.[3]

Significance: Domestic and foreign actors will very likely employ AI tools to mislead voters through misinformation and disinformation during the 2024 US elections, very likely impacting the final results. The polarizing and destabilizing effect will almost certainly be intensified by the likely convergence of information operations with physical and cyber threats and by the dynamic nature of malicious actors. Even deep fakes created without malicious intent will very likely be exploited, with large networks of inauthentic accounts likely flooding social media with such content. AI tools will very likely enable a growth in propaganda accounts that exploit political discourse, undermine political legitimacy, discourage people from voting, and alter public perception. Foreign influence operations will likely exploit commercial generative AI tools to produce deep fakes that overwhelm social media platforms with confusing and contradictory material and erode trust in mainstream information sources. Targeted synthetic material will likely extend to impersonating journalists, community leaders, and policy advocates to undermine citizens’ empowerment and the targets’ professional credibility. Promoting awareness of the issue and providing citizens with tips and tools to recognize synthetic media will almost certainly increase media literacy, likely reducing the threat of AI-generated political disinformation during future elections and other sensitive national events.


  • The Counterterrorism Group (CTG) recommends monitoring the spread of images, videos, and messages targeting highly specific audiences on social media to counter mis- and disinformation.

  • CTG recommends that governments worldwide and social media companies launch informative campaigns to alert citizens to the ongoing threat. This could be implemented in a similar fashion to the protections social media companies used against COVID-19 mis- and disinformation: each post would be flagged to alert viewers to potential misinformation, with a link to a vetted, legitimate website containing accurate and up-to-date information.

  • CTG recommends that private companies adopt algorithms that can delay the virality of malicious and unverified content. This could be further enhanced by introducing algorithms that allow for the attribution of authorship, diminishing the threat posed by anonymity.

  • CTG recommends governments and private companies monitor possible state-sponsored information operations empowered by established international partnerships, socio-cultural institutions, and university collaborations abroad.

  • CTG recommends that private companies implement stronger verification methods, such as the Know Your Customer (KYC) approach, to create the possibility of legal deterrence across platforms.

  • CTG recommends that national governments worldwide increase cooperation to establish a multi-stakeholder approach for common definitions and response frameworks to counter information operations.

  • CTG recommends that politicians running electoral campaigns adopt disclosure policies, labeling their own social media content and political advertisements during the campaign to help users distinguish between authentic and inauthentic content. This concept has already been introduced by New York Congresswoman Yvette Clarke in the REAL Political Ads Act, and CTG recommends considering implementing this bill nationally.

  • If you have any additional or critical information, please contact The Counterterrorism Group (CTG) by telephone at 202-643-2848 or by email


[3] “AI content is meddling in Turkey’s election. Experts warn it’s just the beginning,” Euronews, May 2023,

