
Deep Fake Sparks Urgent Call To Regulate Use Of Artificial Intelligence In U.S. Elections

U.S. Sen. Richard Blumenthal speaks to reporters at the Legislative Office Building in Hartford on Friday, Jan. 26, 2024, along with state Sen. James Maroney, about a robocall that went out during the New Hampshire primary that was a deep fake of President Joe Biden’s voice. Blumenthal and Maroney said they were hoping to pass legislation to make the use of deep fakes illegal in election campaign communications. Credit: Doug Hardy / CTNewsJunkie

by Izetta Asikainen CTNewsJunkie

A phone call to New Hampshire voters asked them to sit out this week’s primary and save their votes for the November election, and while it sounded like President Joe Biden, it wasn’t. It was created using artificial intelligence.
The call prompted U.S. Sen. Richard Blumenthal to join the call for regulation of so-called deep fake communications.
“One of the major threats to our democracy is that AI will be weaponized to spread disinformation,” Blumenthal said at a state Capitol press conference Friday. “Suppressing the vote in this way is not only false, it could materially impact an election.”

Blumenthal suggested adding the use of AI to distort, deceive, and mislead to the list of prohibitions under federal election laws, with an exception for satire and parody, which are protected under the First Amendment.
Blumenthal said he is pushing for a bipartisan framework similar to the bipartisan Protect Elections from Deceptive AI Act. This measure would require a disclaimer on online ads stating who is paying for them, along with a clear watermark if AI was used.
“If AI is used to produce an ad, it has to be disclosed. Greater transparency and disclosure will at least help people know when a parody or satire or any other use of AI is involved,” Blumenthal said. 

The Protect Elections from Deceptive AI Act, introduced by U.S. Sen. Amy Klobuchar, D-Minnesota, would also allow federal candidates targeted by this materially deceptive content to have the content taken down and enable them to seek damages in federal court.
Blumenthal hopes this measure will be passed before the upcoming November election, stating, “this kind of step is (regrettably) absolutely necessary to protect our democracy.”
But it’s about more than protecting political candidates. 

“Not only is there the possibility for mass disinformation, it can be personalized,” state Sen. James Maroney, D-Milford, said. According to Maroney, over 90% of deep fakes are non-consensual intimate images.
“We are going to look at putting forward legislation requiring disclaimers on any campaign ad that’s generated by AI, but we’re also going to be looking at digital forgeries, preventing these non-consensual intimate, sexualized images from being created and spread around,” Maroney said. 
Blumenthal hopes a bipartisan framework will have a vote soon. “We need this legislation as soon as possible because the fact the president’s voice was deep faked shows that this threat is imminent and real, and may be occurring without people actually knowing it right now,” Blumenthal said. 

According to Maroney, similar legislation has been introduced in over 24 states across the country and has successfully passed in five: Michigan, Minnesota, California, Texas, and Washington. 
