The first U.S. election in the era of generative AI: Regulatory gaps may bring huge changes to global politics

Source: The Paper

Reporter: Fang Xiao; Intern: Chen Xiaorui

Some political campaigns have begun using AI-generated fundraising emails and promotional imagery. What was a slow infiltration only a few months ago has now converged into a torrent that is beginning to rewrite the rules of the game in democratic elections around the world.

AI could accelerate the erosion of trust in the media, government, and society, deepening bias, widening partisan divides, and driving voters deeper into polarizing information bubbles. Building new guardrails is a priority.

Conservative candidate Anthony Furey used AI-generated images as campaign material in Toronto's mayoral race, including one showing a city street lined with what appeared to be homeless people camping next to buildings.

Artificial intelligence (AI) technology has gradually worked its way into political campaigns, and its use is growing as the 2024 U.S. presidential election approaches. Some campaigns use AI-generated fake images, videos, and text to mislead voters, deepen prejudice, and undermine fair competition. The United States currently lacks effective laws and regulations to address this challenge, and the gap in election rules could bring sweeping changes to the presidential election, with consequences for U.S. and even global politics.

Experts fear the technology could hasten the erosion of trust in the media, government, and society. An unflattering fake video, an email filled with false stories, or a fake image of urban decay can all deepen prejudice and widen partisan divides by showing voters what they expect to see. People may be driven even deeper into polarizing information bubbles, trusting only the sources they choose to trust. The picture resembles that of the 2016 U.S. election that brought Trump to power.

From slow infiltration to huge torrent

The 2024 U.S. presidential election will be the first U.S. election held since generative AI began to broadly influence society. According to a New York Times report on June 25, some political campaigns have begun to use AI-generated fundraising emails and promotional images, a trend that is starting to rewrite the rules of the game for democratic elections.

After President Biden announced his re-election bid, the U.S. Republican National Committee released a video in which AI-synthesized images depicted an apocalyptic vision of the world following Biden's re-election. (00:46)

For example, the U.S. Republican National Committee released a video after President Biden announced his re-election bid in which AI-synthesized images showed an apocalyptic world following Biden's re-election; Florida Governor Ron DeSantis's campaign posted AI-synthesized fake photos of former President Trump with former health official Dr. Anthony Fauci on social platforms; the Democratic Party experimented with AI-drafted fundraising messages in the spring and found they were often better than human-written copy at encouraging voter participation and donations; and in April, a candidate in the Chicago mayoral race complained that a Twitter account posing as a news outlet had used artificial intelligence to clone his voice in a way that suggested he condoned police brutality.

Some politicians believe AI can help reduce campaign costs, for example by generating instant responses to debate questions or attack ads, or by analyzing data that would otherwise require expensive expert analysis.

Conservative candidate Anthony Furey used AI-generated images as campaign material in Toronto's mayoral election on June 26, including one showing a city street lined with what appeared to be homeless people camping next to buildings, though a closer look at the foreground reveals a figure that looks more like a computer rendering; another image shows two people who appear to be deep in discussion, and the one on the left has three arms. Despite attacks from his rivals, the synthetic images raised Furey's profile in a 101-candidate mayoral race.

The man on the left has three arms in AI-generated campaign material for Toronto mayoral candidate Anthony Furey.

"Healthy skepticism encourages good habits (such as lateral reading and searching for reliable sources), and this technology could prompt a shift from healthy skepticism to unhealthy skepticism that it’s impossible for people to know what’s real.”

How AI could affect the 2024 U.S. election

The American Association of Political Consultants recently condemned the use of deepfakes in political campaigns as a violation of its ethics code. "People are tempted to push the envelope and see what they can do," said Larry Huynh, the group's president. "Like any tool, these can be put to bad uses: to deceive voters, to mislead voters, to make voters believe things that do not exist."

"If someone can create noise, create uncertainty, or create a false narrative, that can be an effective way to influence voters and win elections," said Darrell M. West, a senior fellow at the Brookings Institution. M. West, wrote in a report in May of this year, "Since the 2024 presidential election may depend on tens of thousands of voters in a handful of states, anything that can tip people in one direction or another may In the end it was the deciding factor.”

The report, "How Artificial Intelligence Will Transform the 2024 Election," asks three questions. First, politicians can use generative AI to immediately respond to developments in their campaigns. In the coming year, response times may be reduced to minutes, not hours or days. AI can scan the internet, think up tactics, and make a powerful call, which could be a speech, press release, picture, joke or video touting the benefits of one candidate over another.

Second, AI can target audiences very precisely. Candidates do not want to waste money on voters who are already for or against them; they want to reach the small number of swing voters. Because political polarization in the United States runs so high, only a small percentage of voters are undecided. The Centre for Public Impact has published a report on how, during the 2016 U.S. election, Cambridge Analytica used data to deliver targeted ads based on the "individual psychology" of social media users. "The problem with this approach is not the technology itself, but the covert nature of the campaign and the blatant dishonesty of its political messaging," the report said. "Different voters received different messages based on predictions of their sensitivity to different arguments."

Third, AI may democratize disinformation by putting these tools in the hands of ordinary people interested in promoting their preferred candidates. One no longer needs to be a programmer or a video professional to generate text, images, video, or software; anyone can become a political content creator and seek to sway voters or the media. The new technology also lets people monetize discontent, making money from other people's fear, anxiety, or anger.

Today's artificial intelligence technology is far more powerful than before; while not perfect, it improves quickly and is easy to learn. In May, OpenAI CEO Sam Altman told a Senate subcommittee hearing that he was deeply concerned about the 2024 presidential election, saying the technology's ability "to manipulate, to persuade, to provide a kind of one-on-one interactive disinformation" is "an important area of concern."

Push to build new "guardrails"

However, as increasingly sophisticated AI-generated content appears ever more frequently on social networks, most platforms are unwilling or unable to police it. Ben Colman, chief executive of Reality Defender, which provides AI-generated content detection services, said the regulatory gap allows unflagged AI-generated content to cause "irreversible damage" before anything is done about it.

"For the millions of users who have already seen and shared fake content, explaining it was fake after the fact is too late and has little effect," Coleman added.

Many political consultants, election researchers and lawmakers say creating new guardrails, such as laws to regulate synthetic advertising, is a top priority. Existing defenses, such as social media rules and services that claim to detect AI content, have failed to stem the tide effectively.

Rep. Yvette D. Clarke, a Democrat from New York, said in a statement last month that the 2024 election cycle "will be the first election in which AI-generated content prevails." She and congressional Democrats like Sen. Amy Klobuchar of Minnesota have introduced legislation that would require political ads to disclose their use of AI-generated content. A similar bill was recently signed into law in Washington state.
