
Addressing GenAI Election Concerns: OpenAI’s Plan

OpenAI wants to ease worries about how its technology might affect elections, especially with more than a third of the world’s population preparing to vote this year. Elections are scheduled in countries including the United States, Pakistan, India, and South Africa, as well as for the European Parliament.

In a blog post on Monday, OpenAI said it is committed to the safe development, deployment, and use of its AI systems. Acknowledging both the benefits and the challenges of the technology, the company said it continues to learn and adapt as it watches how its tools are actually used.

Concerns that generative AI (genAI) tools could be used to interfere with democratic processes have been building since Microsoft-backed OpenAI released ChatGPT in late 2022. The chatbot is known for producing human-like text, while the company’s DALL-E tool can generate highly realistic fabricated images, commonly called “deepfakes.”

OpenAI gears up for elections

OpenAI is taking steps to address concerns about the use of its technology in elections. ChatGPT will now direct users with voting-procedure questions to CanIVote.org, and OpenAI is working to make images generated by its DALL-E technology more transparent by marking them with a “cr” (Content Credentials) icon to indicate they are AI-generated.
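OpenAI has not published implementation details for the marking scheme, but the “cr” icon is the badge used by the C2PA Content Credentials standard, which embeds signed provenance data in the image file itself. Purely as a hedged sketch of the idea (not OpenAI’s code; real verification needs a C2PA-aware parser and signature checks), here is how one might scan an image’s embedded metadata for provenance hints using Python and Pillow:

```python
# Illustrative sketch only -- not OpenAI's implementation. Assumes provenance
# hints (e.g., C2PA Content Credentials) travel as embedded image metadata;
# real verification requires a C2PA-aware parser and cryptographic checks.
from PIL import Image  # pip install Pillow

PROVENANCE_MARKERS = ("c2pa", "contentcredentials", "jumbf")

def provenance_hints(path: str) -> dict:
    """Return metadata fields that look like provenance markers."""
    img = Image.open(path)
    hints = {}
    for key, value in img.info.items():  # PNG text chunks and format extras
        text = value.decode("utf-8", "ignore") if isinstance(value, bytes) else str(value)
        if any(m in text.lower() or m in str(key).lower() for m in PROVENANCE_MARKERS):
            hints[key] = text[:120]  # keep the printout short
    software = img.getexif().get(305)  # EXIF tag 305 = Software
    if software:
        hints["exif_software"] = software
    return hints

if __name__ == "__main__":
    print(provenance_hints("generated.png") or "no provenance hints found")
```

The file name `generated.png` is a placeholder; the point is simply that provenance travels with the file, so a copy that has been stripped or tampered with loses its credentials.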

Additionally, OpenAI plans to improve ChatGPT by integrating it with real-time global news reporting, including proper attribution and links. This effort builds upon a previous agreement with the German media conglomerate Axel Springer, allowing ChatGPT users to access summarized versions of select global news content from Axel Springer’s media channels.

OpenAI is working on techniques to identify content created by DALL-E, even after the images undergo modifications.
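The company has not said how that identification works. As a hedged illustration of why detection can survive light edits (a stand-in for the general idea, not OpenAI’s method), perceptual hashing produces fingerprints that change little under resizing or recompression. A minimal difference-hash (“dHash”) sketch in Python:

```python
# A hedged illustration, not OpenAI's technique: perceptual "difference
# hashing" yields fingerprints that survive resizing and mild recompression.
from PIL import Image  # pip install Pillow

def dhash(path: str, size: int = 8) -> int:
    """Hash built from brightness gradients in a tiny grayscale thumbnail."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            # One bit per adjacent-pixel comparison; global tweaks such as
            # re-encoding rarely flip many of these comparisons.
            bits = (bits << 1) | (px[row * (size + 1) + col] > px[row * (size + 1) + col + 1])
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hashes within a few bits of each other usually indicate the same underlying
# image, even after a resize or re-encode; heavy crops typically break the match.
```

Robust detectors must withstand exactly the edits that defeat simple hashes like this one, which is why the problem remains an active area of research.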

Growing concerns about mixing AI and politics

There are no binding, industry-wide rules for how generative AI can be used in politics. Last year, Meta said it would bar political campaigns from using its genAI advertising tools and require politicians to disclose when their ads rely on such technology. YouTube likewise requires creators to disclose when videos contain “realistic” altered media, including media made with AI.

Meanwhile, the US Federal Election Commission (FEC) is considering whether current laws against “fraudulently misrepresenting other candidates or political parties” apply to AI-generated content. (A formal decision on this matter is still pending.)

Fake and misleading information has long been part of elections, according to Lisa Schirch, the Richard G. Starmann Chair in Peace Studies at the University of Notre Dame. What generative AI changes is scale: it enables far more people to produce increasingly realistic false propaganda.

Many countries have established cyberwarfare centers with large teams to create fake accounts, generate deceptive posts, and spread false information on social media, Schirch explained. Just before Slovakia’s election, for instance, a fabricated audio recording circulated in which a politician appeared to discuss manipulating the outcome of the vote.

Like ‘gasoline…on the burning fire of political polarization’

“The issue isn’t just false information; it’s that bad actors can create emotional portrayals of candidates meant to spark anger and outrage,” Schirch explained. “AI bots can sift through vast amounts of online material to predict what kinds of political ads might be convincing. In this way, AI adds fuel to the already intense fire of political polarization. AI makes it simple to produce content designed to maximize persuasion and manipulation of public opinion.”

Most of the concern about genAI, and most of the attention-grabbing headlines, center on deepfakes and images, according to Peter Loge, director of the Project on Ethics in Political Communication at George Washington University. The more significant threat, he argued, comes from large language models (LLMs) capable of instantly generating countless messages with similar content, flooding the world with fakes.

“LLMs and generative AI can flood social media, comments sections, letters to the editor, emails to campaigns, and so on, with nonsense,” he added. “This has at least three effects: the first is an exponential rise in political nonsense, which could lead to even greater cynicism and allow candidates to disavow actual bad behavior by saying the claims were generated by a bot.

“We have entered a new era of, ‘Who are you going to believe, me, your lying eyes, or your computer’s lying LLM?’” Loge said.

Stronger protections are needed ASAP

According to Gal Ringel, the CEO of the cybersecurity firm Mine, current protections are not strong enough to keep genAI from playing a role in this year’s elections. Even if a nation’s election infrastructure can deter or block direct attacks, he said, the sheer volume of genAI-created misinformation online could shape how people perceive a race and possibly affect its final results.

“Trust in society is at such a low point in America right now that the adoption of AI by bad actors could have a disproportionately strong effect, and there is really no quick fix for that beyond building a better and safer internet,” Ringel added.

Social media companies need to develop policies that reduce harm from AI-generated content while taking care to preserve legitimate discourse, said Kathleen M. Carley, a CyLab professor at Carnegie Mellon University. For instance, they could publicly verify election officials’ accounts with unique icons, restrict or prohibit ads that deny the results of upcoming or ongoing elections, and label AI-generated election ads as such to increase transparency.

“AI technologies are constantly evolving, and new safeguards are needed,” Carley added. “Also, AI could be used to help by identification of those spreading hate, identification of hate-speech, and by creating content that aids with voter education and critical thinking.”