California’s “Deepfake” Bill is a Bad Omen

July 18, 2019 • By Luke Wachob

Sarah Palin could not see Russia from her house. Nevertheless, Tina Fey’s impression of Palin on Saturday Night Live became so iconic that fact-checkers had to debunk the belief that Palin made the remark herself. Now, new technology that can produce more convincing parodies than an SNL sketch is sparking further debate about free speech in the digital age.

Members of Congress and state legislators alike are worried about “deepfakes” – videos altered by artificial intelligence to make it appear as though someone did something they never did, or said something they never said. Politicians fear the use of this technology to misrepresent their statements and actions to the public, sowing confusion in the electorate.

However, deepfakes are one application of a broader technology, synthetic media, that is a tool for expression with as many positive applications as negative ones. Major motion pictures have used similar CGI tools to allow characters to live on after the actors who portrayed them have died. Peter Cushing’s Grand Moff Tarkin appeared in Rogue One: A Star Wars Story over 20 years after the actor’s death, and the Fast & Furious franchise used CGI to complete Paul Walker’s scenes in Furious 7 after the actor died during production.

YouTubers and other video creators have used deepfakes to make all sorts of content. Some are just weird, like a video that puts the face of Steve Buscemi on the body of Jennifer Lawrence. Others are incisive social commentaries that follow in the long tradition of satire and parody of public figures. In one notable example, a deepfake video featured Facebook CEO Mark Zuckerberg gloating about his company’s control over user information, bringing attention to the power of big data – while simultaneously testing the company’s attitude towards deepfakes of its own CEO. “Whoever controls the data, controls the future,” says the deepfake Zuckerberg. Recognizing the social value of such speech, Facebook chose not to remove the video from its platforms.

The Zuckerberg deepfake shows how efforts to ban the technology’s use in politics could imperil free political speech. Broad prohibitions or restrictions on the ability to create or share synthetic media would likely violate First Amendment rights to mock public figures. Such policies could harm everyone from random Facebook accounts and YouTubers all the way up to the largest platforms hosting the content.

Early state efforts to legislate on this issue do not give confidence that government will respect free speech rights when regulating deepfakes.

California Assemblyman Marc Berman’s A.B. 730 would make it illegal to “knowingly or recklessly distribute deceptive audio or visual media” of a political candidate within 60 days of an election “with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.”

The ACLU of California and several press rights organizations quickly opposed the bill, citing free speech concerns. Mark Powers, Vice President of the California Broadcasters Association, said the bill would be impossible for radio and television broadcasters to comply with. “By passing this bill, you put your broadcasters in extreme jeopardy,” he told the Senate Elections and Constitutional Amendments Committee. If verifying each ad’s authenticity proves too costly, broadcasters may decline to accept any ads about candidates rather than risk liability for disseminating deepfakes.

California News Publishers Association Staff Attorney Whitney Prout called the bill “an ineffective and frankly unconstitutional solution that causes more problems than it solves,” noting that A.B. 730 would impose content-based restrictions on speech that must survive strict scrutiny.

A.B. 730 awaits a vote by the full Senate following the Legislature’s summer recess. It remains to be seen whether California lawmakers will heed the free speech concerns articulated by these groups or throw caution (and First Amendment rights) to the wind.

Free speech was a topic of discussion at a recent Congressional hearing on deepfake videos as well. Testifying before the House Permanent Select Committee on Intelligence, Boston University Law Professor Danielle Citron spoke about her efforts to develop a model statute prohibiting “harmful false impersonation” to address deepfakes. She stated her belief that a widely circulated, altered video of House Speaker Nancy Pelosi (D-CA) that made her appear to slur her words should have been taken down from all major social media networks.

When it was time for Congressman Jim Himes (D-CT) to address the panel, he expressed his concerns about First Amendment rights getting caught in the crossfire as governments attempt to ban deepfakes.

“The theme of this hearing is how scary deepfakes are, but I’ve gotta tell you, one of the more scary things I’ve heard this morning is your statement that the Pelosi video should have been taken down,” Rep. Himes said in reference to Professor Citron’s comments. “As awful as I think we all thought that Pelosi video was, there has to be [a] difference if the Russians put that up, which is one thing, versus if MAD Magazine does that as satire… We don’t have a lot of protections as public figures with respect to defamation… [I want to] hear more about where that boundary lies and how we can protect that long tradition of free expression.”

Later in the hearing, Congressman Denny Heck (D-WA) reiterated Rep. Himes’ concerns about protecting free speech. He noted that fact-checking services such as Snopes.com already exist in the private market, and people can choose to use or not use them as they wish.

Indeed, governments and private companies possess the tools to identify deepfake videos in a timely manner. University at Buffalo Professor David Doermann testified that he could determine whether a video was authentic or altered in about 15 minutes. That’s a pretty quick turnaround.

It won’t satisfy everyone, however. Congressman Adam Schiff (D-CA), the Chair of the Intelligence Committee, argued that even when a falsehood is corrected, your brain still can’t unhear it. Some politicians cannot abide a misleading statement about them even if it is corrected quickly, prominently, and repeatedly to anyone who will listen.

Public figures cannot control what people think or say about them, and they should know better than to try. Politicians, in particular, have bigger problems to worry about than being made to look silly on social media. New technology often presents new challenges, but it does not call the right to parody political leaders into question.

Luke Wachob
