Poll: AI worries Americans in 2024 elections

Three out of five adults worry about artificial intelligence driving election misinformation.

The warnings have grown louder and more urgent as 2024 approaches: The rapid advance of artificial intelligence tools threatens to amplify misinformation in next year’s presidential election at a scale never seen before.

Most adults in the U.S. feel the same way, according to a new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.

The poll found that nearly 6 in 10 adults (58%) think AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.

By comparison, 6% think AI will decrease the spread of misinformation, while one-third say it won’t make much of a difference.

Just 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard or read at least something about AI tools. Still, there’s a broad consensus that candidates shouldn’t be using AI.

When asked whether it would be a good or bad thing for 2024 presidential candidates to use AI in certain ways, clear majorities said it would be bad for them to create false or misleading media for political ads (83%), to edit or touch-up photos or videos for political ads (66%), to tailor political ads to individual voters (62%) and to answer voters’ questions via chatbot (56%).

The sentiments are supported by majorities of Republicans and Democrats, who agree it would be a bad thing for the presidential candidates to create false images or videos (85% of Republicans and 90% of Democrats) or to answer voter questions (56% of Republicans and 63% of Democrats).

The bipartisan pessimism toward candidates using AI comes as the technology already has been deployed in the Republican presidential primary.

In April, the Republican National Committee released an entirely AI-generated ad meant to show the future of the country if President Joe Biden is reelected. It used fake but realistic-looking photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic. The ad disclosed in small lettering that it was generated by AI.

Ron DeSantis, the Republican governor of Florida, also used AI in his campaign for the GOP nomination. He promoted an ad that used AI-generated images to make it look as if former President Donald Trump was hugging Dr. Anthony Fauci, an infectious disease specialist who oversaw the nation’s response to the COVID-19 pandemic.

Never Back Down, a super PAC supporting DeSantis, used an AI voice-cloning tool to imitate Trump’s voice, making it seem like he narrated a social media post.

The Federal Election Commission is currently considering a petition urging it to regulate AI-generated deepfakes in political ads ahead of the 2024 election.

Adults associated with both major political parties are generally open to regulations on AI. They responded more positively than negatively toward various ways to ban or label AI-generated content that could be imposed by tech companies, the federal government, social media companies or the news media.

About two-thirds favor the government banning AI-generated content that contains false or misleading images from political ads, while a similar number want technology companies to label all AI-generated content made on their platforms.

Biden set in motion some federal guidelines for AI on Monday when he signed an executive order to guide the development of the rapidly progressing technology. The order requires the industry to develop safety and security standards and directs the Commerce Department to issue guidance to label and watermark AI-generated content.

Americans largely see preventing AI-generated false or misleading information during the 2024 presidential elections as a shared responsibility. About 6 in 10 (63%) say a lot of the responsibility falls on the technology companies that create AI tools, but about half give a lot of that duty to the news media (53%), social media companies (52%), and the federal government (49%).

Democrats are somewhat more likely than Republicans to say social media companies have a lot of responsibility, but generally agree on the level of responsibility for technology companies, the news media and the federal government.

____
The poll of 1,017 adults was conducted Oct. 19-23, 2023, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, designed to represent the U.S. population. The margin of sampling error for all respondents is plus or minus 4.1 percentage points.

Associated Press

