Whether it's clips of Joe Biden talking about his favorite type of pot or Never Back Down's TV spot featuring Trump bashing Trump, artificial intelligence is changing the political advertising landscape.
Though sometimes humorous, these ads have raised concerns among political experts and everyday voters alike — and the closer they get to crossing the uncanny valley, the more likely they are to underhandedly influence elections.
According to a recent poll conducted by The Associated Press-NORC and the University of Chicago, nearly three in five U.S. adults believe AI-generated political advertisements will further the spread of false and misleading information ahead of the 2024 election.
Part of the angst stems from how convincing AI content has become. But with fewer production demands, AI content can also be churned out quickly and cheaply. That makes it primed for microtargeting, and voters are prone to believe what they want to hear, even if it's an algorithm that's doing all the talking.
These ads will undoubtedly be shared on social media, and Meta has taken a proactive approach to ensuring voters know who — or what — created the clips they see during their daily scrolls.
The parent company of Facebook, Instagram and other social media outlets outlined a new policy on Wednesday that is geared toward helping users understand when AI or digital methods are used in political or social issue ads.
The approach targets the PACs, candidates and organizations purchasing ad space on Meta platforms by requiring them to disclose whether an advertisement depicts a person saying or doing something they did not say or do; features a realistic-looking person who does not exist or an event that did not happen; or portrays non-genuine audio, video or images of an event that only allegedly occurred.
If the ad-buyer checks one or more of those boxes, users will see an alert appear on the ad. Meta said the policy will be deployed globally in the new year.
In addition to flagging advertiser disclosures, Meta said it will continue working alongside independent fact-checkers to review and rate viral misinformation. Meta platforms will not run any advertisement that its fact-checking partners have rated as “False,” “Altered,” “Partly False,” or “Missing Context.”
For example, fact-checking partners can rate content as “Altered” if they determine it was created or edited in ways that could mislead people, including through the use of AI or other digital tools.