A bipartisan group of lawmakers introduced legislation Tuesday that would prohibit political campaigns and outside political groups from using artificial intelligence to misrepresent their rivals' views by impersonating them.
The introduction of the bill comes as Congress has failed to regulate the fast-evolving technology and experts warn that it threatens to overwhelm voters with misinformation. Those experts have expressed particular concern over the dangers posed by “deepfakes,” AI-generated videos and memes that can look lifelike and cause voters to question what is real and what is fake.
Lawmakers said the bill would give the Federal Election Commission the power to regulate the use of artificial intelligence in elections in the same way it has regulated other political misrepresentation for decades. The FEC has started to consider such regulations.
“Right now, the FEC does not have the teeth, the regulatory authority, to protect the election,” said Rep. Brian Fitzpatrick, a Pennsylvania Republican who co-sponsored the legislation. Other sponsors include Rep. Adam Schiff, a California Democrat; Rep. Derek Kilmer, a Washington Democrat; and Rep. Lori Chavez-DeRemer, an Oregon Republican.
Congress has been paralyzed on countless issues in recent years, and regulating AI is no exception.
“This is another illustration of congressional dysfunction,” Schiff said.
Schiff and Fitzpatrick are not alone in believing artificial intelligence legislation is needed and can become law. Rep. Madeleine Dean, a Pennsylvania Democrat, and Rep. María Elvira Salazar, a Florida Republican, introduced legislation earlier this month that aims to curb the spread of unauthorized AI-generated deepfakes. A bipartisan group of senators proposed companion legislation in the Senate.
Opposition to such legislation has centered primarily on concerns that it could stifle a burgeoning technology sector or make it easier for another country to become the hub of the AI industry.
Congress doesn’t “want to put a rock on top of innovation either and not allow it to flourish under the right circumstances,” Rep. French Hill, an Arkansas Republican, said in August at a reception hosted by the Center for AI Safety. “It’s a balancing act.”
The Federal Election Commission took its first step toward regulating AI-generated deepfakes in political advertising in August, when it held a procedural vote on a petition asking it to regulate ads that use artificial intelligence to misrepresent political opponents as saying or doing something they didn’t.
The Commission is expected to further discuss the matter on Thursday.
The Commission’s efforts followed a request from Public Citizen, a progressive consumer rights organization, that the agency clarify whether a 1970s-era law that bans “fraudulent misrepresentation” in campaign communications also applies to AI-generated deepfakes. While the Election Commission has been criticized in recent years for being ineffective, it does have the ability to take action against campaigns or groups that violate these laws, often through fines.
Craig Holman, a government affairs lobbyist for Public Citizen who helped the lawmakers write the bill being introduced Tuesday, said he was concerned that the fraudulent misrepresentation law applies only to candidates and not to parties, outside groups and super PACs.
The bill introduced Tuesday would expand the FEC’s jurisdiction to explicitly account for the rapid rise of generative AI in political communications.
Holman noted that some states have passed laws to regulate deepfakes but said federal legislation was necessary to give the Federal Election Commission clear authority.
___
Republished with permission of The Associated Press.