Comments on Artificial Intelligence in Campaign Ads

February 5, 2024  •  By Brad Smith

On January 31, 2024, the Institute for Free Speech submitted comments to the Federal Election Commission regarding REG 2023-02: Artificial Intelligence in Campaign Ads.

Read a PDF of the comments here.

 

January 31, 2024

Federal Election Commission
1050 First Street, N.E.
Washington, DC 20463

Re:       REG 2023-02: Artificial Intelligence in Campaign Ads

Dear Commissioners:

The Institute for Free Speech (“IFS”) respectfully asks the Federal Election Commission (“Commission” or “FEC”) to accept these late-submitted comments on the above-referenced rulemaking petition (the “Petition”) filed by Public Citizen. As discussed below, in deciding whether to initiate the requested rulemaking, the Commission should consider recent uses of generative artificial intelligence (“AI”) in the 2024 presidential campaign that occurred after the close of the public comment period.

Amplifying what other commenters have noted, the Commission should deny the Petition because the rule the Petition envisions would exceed the Commission’s statutory authority. Specifically, the Petition asks the Commission to prohibit certain content concerning federal candidates, created using generative AI, that falls within the ambit of defamation law and is adjudicated by courts. The Commission does not have authority under the Federal Election Campaign Act of 1971, as amended (“FECA” or “Act”), to regulate defamation. Moreover, to address the types of speech the Petition urges it to regulate, the Commission would have to adopt an unconstitutionally vague and subjective content regulatory standard.

  1. The Commission has no authority to regulate the use of generative AI in the form of defamation.

Fundamentally, the Petition asks the Commission to regulate certain content that uses generative AI concerning federal candidates that falls within the ambit of defamation law. However, the Commission has no statutory authority to regulate defamation.

The “fraudulent misrepresentation” provision of the Act—52 U.S.C. § 30124—consists of two parts. As other commenters have already explained, Part (a) prohibits fraudulent misrepresentations of campaign authority or impersonation.[1] For example: The campaign of Candidate Smith puts out an ad purporting to be from the campaign of Candidate Jones in order to damage Candidate Jones. Part (b) prohibits fraudulently soliciting campaign contributions by misrepresenting that they will be used for or on behalf of a candidate or political party. For example: a “scam PAC” purports to act on behalf of Candidate Smith and solicits funds for the campaign but spends all of the money on enriching the PAC’s operators rather than on supporting Candidate Smith.

The Petition notably does not identify which part of Section 30124 supports the requested rule. And for good reason. Part (a), the only part that could plausibly support the Petition, does not in fact authorize the requested rule. In support of its requested rule, the Petition asserts that “[t]he Commission has already recognized its statutory authority to regulate under the law against ‘fraudulent misrepresentation.’”[2] But the two Commissioner statements of reasons the Petition attaches to support this proposition specifically address Part (b) of the statute—the fraudulent solicitation ban.[3] The Petition does not ask the Commission to promulgate any rule limited to addressing the use of generative AI for solicitations.

Rather, the Petition asks for a much broader rule to prohibit “[a] deepfake audio clip or video by a candidate or their agent that purports to show an opponent saying or doing something they did not do,” or “falsely putting words into another candidate’s mouth, or showing the candidate taking action they did not . . . in a way deliberately intended to damage him or her.”[4] The type of content the Petition asks the Commission to regulate is not inherently a campaign finance activity within the Commission’s statutory authority or the scope of the FECA. Rather, the Petition is describing the use of generative AI for speech that is subject to regulation under defamation law.[5]

Specifically, speech is defamatory if it is false and “tends so to harm the reputation of [an individual] as to lower him in the estimation of the community or to deter third persons from associating or dealing with him.”[6] This encapsulates what the Petition seeks to capture through an FEC rulemaking. Falsely depicting someone engaged in an act or saying something they did not do or say by digitally manipulating the individual’s image or voice in a public communication can constitute defamation per se if it harms the person’s reputation.[7] And while the Supreme Court has set a high bar for public figures to establish liability for defamation, those who use AI to generate knowingly false depictions of candidates may still satisfy the applicable “actual malice” standard.[8]

Admittedly, there does not yet appear to be an abundant body of caselaw applying defamation law to content created by generative AI, and specifically to content depicting political candidates. But that is only because generative AI is a new technology whose use to depict actual persons and political candidates has yet to be adjudicated. It is not the legitimate role of the Commission to be at the vanguard of shaping and policing this emerging area of tort law.

Notwithstanding their names, the Federal Election Campaign Act of 1971, as amended, and the Federal Election Commission do not regulate all activities related to campaigns and elections for federal office. Rather, they very narrowly regulate only certain campaign finance practices. There are many transgressions that can arise in the course of campaigns and elections, such as vote fraud and voter suppression, that fall entirely outside of the Act’s scope and the Commission’s jurisdiction, even if campaign funds are used to pay for such activities.

The regulated community understands this. For example, campaign operatives routinely file frivolous and politically motivated campaign finance complaints with the Commission. But when allegedly false claims are made against a candidate in a campaign ad, it is well known that the appropriate recourse is to send a cease-and-desist letter to the ad sponsor and to the broadcast stations and cable and satellite operators airing the ad. More determined candidates will litigate the matter in court as a defamation claim.[9] They do not file FEC complaints attempting to shoehorn defamation into the Act’s “fraudulent misrepresentation” statute, as the Petition tries to do with the requested rulemaking.

To be clear: IFS does not contend that defamation is protected speech, regardless of whether generative AI is involved. However, insofar as the Act authorizes the Commission to regulate political speech based on content, that authority is limited to determining what qualifies as “express advocacy,”[10] “electioneering communications,”[11] “federal election activity,”[12] and similar issues. Because the Commission lacks jurisdiction to regulate defamation, it has never developed the expertise to do so. The broader regulatory net that the Petition asks the Commission to cast therefore would risk the agency punishing and chilling protected speech in the course of attempting to regulate so-called “deepfakes” in political speech.

  2. The Petition seeks a vague and unworkable regulatory content standard.

52 U.S.C. § 30124(a) prohibits fraudulent misrepresentation in a manner “which is damaging” to a candidate or political party. The Petition acknowledges that any rule concerning the use of generative AI must follow this statutory standard by prohibiting deepfakes that portray a “candidate in a way deliberately intended to damage him or her.”[13]

The Petition would have the Commission introduce a new, additional “deliberately intended to damage” content standard for regulating political speech, alongside the preexisting express advocacy; promote, support, attack, or oppose (“PASO”); and electioneering communication standards. Moreover, according to the Petition, this new content regulatory standard should carve out “cases of parody, where an opposing candidate is shown doing or saying something they did not, but where the purpose and effect is not to deceive voters, and, therefore, where there is no fraud.”[14]

As the Holtzman Vogel commenters have noted, “the Commission does not consider Section 30124(a) cases with great frequency,” and none of the Commission’s enforcement cases addressing this provision grappled specifically with whether the communications were actually “damaging.” Rather, the cases were resolved based on whether the communications purported to impersonate or misrepresent the speaker as speaking on behalf of a candidate or political party committee.[15]

When Section 30124(a) is properly read, the inherent problems of the “damaging”-to-a-candidate-or-political-party content standard are limited, because the prohibited speech also must impersonate a candidate or political party committee or misrepresent the speaker as speaking on their behalf. The Petition, however, would have the Commission apply the “damaging” content standard in a much broader context whenever generative AI is involved, untethered from impersonation and misrepresentation of the speaker’s identity. The “damaging” content standard would be unworkably subjective and unconstitutionally vague in such broader applications.

On its face, the “damaging” content standard is vaguer and broader than even the PASO standard. Presumably, in order for speech to be regulated as PASO, it must use language that “explicit[ly]” promotes, supports, attacks, or opposes a candidate or political party.[16] That is not so for the “damaging” content standard.

Consider the two examples involving generative AI that the Petition raises and seeks to regulate:

In Chicago, a mayoral candidate in this year’s city elections complained that AI technology was used to clone his voice in a fake news outlet on Twitter in a way that made him appear to be condoning police brutality.

. . . The presidential campaign of Gov. Ron DeSantis . . . posted deepfake images of former President Donald Trump hugging Dr. Anthony Fauci.[17]

Neither of these examples uses language that explicitly promotes, supports, attacks, or opposes the referenced candidate. Rather, the conclusion that these uses of generative AI are “damaging” to the candidate would depend wholly upon context and “the varied understanding of [the] hearers and consequently of whatever inference may be drawn as to [the speaker’s] intent and meaning”—factors the Supreme Court has said are unconstitutionally vague for regulating speech.[18]

Consider two additional uses of generative AI that occurred after the close of the comment period for this Notice of Availability and that have recently garnered national media attention:

  • We Deserve Better, a super PAC supportive of Democratic presidential candidate Dean Phillips, released an online chatbot depicting Phillips that purports to answer voters’ questions about his positions on issues.[19] Because super PACs may not coordinate with candidates, they can sometimes get crosswise with the candidates’ campaigns and engage in activities that the campaigns believe are counterproductive. The open feud between former Republican presidential candidate Ron DeSantis’ campaign and the Never Back Down super PAC was a prime example of this.[20]

If the We Deserve Better chatbot were to misrepresent Phillips’ positions on issues, or if the Phillips campaign were to believe the chatbot is otherwise counterproductive, would this be a “damaging” use of AI that would be prohibited under the type of rule the Petition urges? How could the Commission possibly determine this?

  • Robocalls with a digitally generated voice impersonating President Biden were disseminated to voters ahead of the recent New Hampshire primary urging them not to vote in the primary.[21]

The Democratic National Committee (“DNC”), which is presumably aligned with the incumbent Democratic president, did not sanction the New Hampshire Democratic primary because the national party apparatus preferred to feature South Carolina as the party’s first official primary.[22] In line with the DNC’s decision, President Biden refused to campaign in New Hampshire or to appear on the state’s Democratic primary ballot.[23] Biden won the primary after an independent write-in campaign for him was mounted.[24] While his campaign manager Julie Chavez Rodriguez celebrated the result, she notably sidestepped the fact that the primary was even held.[25]

In light of the DNC’s and Biden’s opposition to New Hampshire Democrats holding their presidential primary on January 23, was the robocall depicting Biden urging voters not to vote in the primary a “damaging” use of generative AI that would be prohibited under the rule that the Petition urges? How could the Commission possibly determine this?

In light of the Commission’s long struggle with the PASO standard,[26] it will have even greater difficulty articulating and enforcing the “damaging” content regulatory standard that the Petition urges the agency to adopt.

The Petition also suggests that the Commission adopt two exemptions: one for parody, and one for communications that include a disclaimer disclosing the use of generative AI.[27] Neither exemption would alleviate the difficulties the Commission would encounter in crafting and administering the type of rule the Petition requests.

First, the Commission has struggled to distinguish parody from serious campaign and fundraising efforts. This is not surprising, since the distinction necessarily is highly dependent on context and “the varied understanding of” listeners.[28] As the Petition suggests, whether the use of generative AI is for parody would depend on the “purpose and effect” of the speech.[29] The Commission’s struggle with such determinations demonstrates why the Supreme Court has held that such a regulatory standard is unconstitutionally vague.[30]

For example, in MUR 7273, three commissioners voted to dismiss charges that the musician known as “Kid Rock” failed to register with and report to the Commission after publishing and distributing materials purporting to campaign for U.S. Senate. The musician claimed the materials were a parody intended to promote concerts. However, the Office of General Counsel (“OGC”) and one commissioner took the materials at face value and treated them as serious campaign materials.[31]

Another example, which the Petition itself offers (but which does not support the Petition), is MUR 7140.[32] In that matter, the Commission was presented with a pair of PACs whose names spelled out the acronyms “ASS PAC” and “TROLL.” Two commissioners credited the respondent’s position that these were “satirical political committee[s]” and parody.[33] However, the Commission initially deadlocked 3-3 on OGC’s recommendation to find “reason to believe” that the respondent had violated the Act, and it ultimately voted 4-2 to dismiss the matter as an exercise of prosecutorial discretion.[34]

At bottom, MUR 7140 hinged on the subtlety of the speaker’s satire and whether that subtlety jibed with individual OGC attorneys’ and commissioners’ own varied understandings of satire. That is hardly a tenable basis for distinguishing between regulated and unregulated speech, as the Petition proposes.

Second, the Petition proposes a safe harbor for content featuring “a sufficiently prominent disclosure” of the fact that generative AI was used to “portray[] fictitious statements and actions.”[35] However, many speakers may believe that such a disclaimer would detract from their message and choose not to rely upon the safe harbor. The Commission would then be left trying to police the unconstitutionally vague and unworkable “damaging” content regulatory standard and to determine whether speech qualifies as parody.

  3. Conclusion

The Petition asks the Commission to issue a rule under a tortured reading of 52 U.S.C. § 30124 to regulate certain content in which generative AI is used to falsely depict a candidate in a damaging manner. The type of rule the Petition urges is not only unauthorized by the statute but also inherently at odds with the Commission’s status as an administrative agency. Fundamentally, the Petition urges the Commission to police instances in which campaign speech using AI-generated content may be defamatory. That is the role of the judiciary. The Commission is not a supervisory body with plenary authority to referee all activities related to campaigns for federal office.

As if that were not reason enough to deny the request, the Petition would have the Commission adopt an unconstitutionally and unworkably vague content regulatory standard. And as for the policy issue of regulating the use of generative AI in campaigns more generally, that is a matter only Congress may address.

Under our system of separation of powers, the Commission simply may not address the issues raised or implicated by the Petition.

 

Respectfully submitted,

Bradley A. Smith, Chairman

Eric Wang, Senior Fellow

 

[1] Comments of Thomas J. Josefiak, et al. (Oct. 16, 2023) at 2; Comments of the Republican National Committee (Oct. 16, 2023) at 2; Comments of the Administrative Law Clinic at the Antonin Scalia Law School (Oct. 16, 2023) at 1.

[2] Petition at 4.

[3] Compare id. with id. Appx. A (“The prohibition against other persons misrepresenting candidates to solicit contributions is at issue in this matter.”) (emphasis added) and Appx. B (“This matter involves allegations that an independent expenditure-only political committee . . . solicited contributions by fraudulently misrepresenting that it was acting as an agent of [a] congressional candidate”) (emphasis added).

[4] Id. at 3.

[5] See, e.g., Jessica Ice, Defamatory Deepfakes and the First Amendment, 70 Case W. Rsrv. L. Rev. 417, 432-35 (2019); Erik Gerstner, Face/Off: ‘DeepFake’ Face Swaps and Privacy Laws, Defense Counsel Journal (Jan. 2020) at 4-5, available at https://www.iadclaw.org/defensecounseljournal/faceoff-deepfake-face-swaps-and-privacy-laws/.

[6] Restatement (Second) of Torts (Am. Law Inst. 1977) §§ 558, 559. In the context of this rulemaking petition, the “community” would be the relevant electorate and the deterrence of association would be the effect the speech has on causing voters not to support a candidate.

[7] See, e.g., Tharpe v. Lawidjaja, 8 F. Supp. 3d 743, 785-86 (W.D. Va. 2014). While that case involved the defendant publishing digital images of the plaintiff that the defendant manually manipulated using Adobe Photoshop software, there is no material difference between using Photoshop or a generative AI tool such as DALL-E to create such content.

[8] See New York Times Co. v. Sullivan, 376 U.S. 254, 280 (1964).

[9] See, e.g., Aaron Sanderford, Nebraska Supreme Court returns defamation case against state GOP to lower court, Nebraska Examiner (Jan. 12, 2024); Jeff Pope, Schneider to pay Tarkanian $150,000 to settle lawsuit, Las Vegas Sun (Aug. 3, 2009).

[10] 52 U.S.C. § 30101(17).

[11] Id. § 30104(f)(3)(A).

[12] Id. § 30101(20).

[13] Petition at 3.

[14] Id. at 4.

[15] Comments of Thomas J. Josefiak, et al. at 2-3 (discussing MURs 5089 (Tuchman), 3960 (NRCC), and 2205 (Foglietta)).

[16] See McConnell v. FEC, 540 U.S. 93, 170 n.64 (2003).

[17] Petition at 2 (internal citations omitted).

[18] Buckley v. Valeo, 424 U.S. 1, 43 (1976); FEC v. Wisconsin Right to Life (hereinafter, “WRTL”), 551 U.S. 449, 473-74 (2007).

[19] Meryl Kornfield and Elizabeth Dwoskin, Silicon Valley insiders are trying to unseat Biden with help from AI, Wash. Post (Jan. 18, 2024).

[20] See, e.g., Alex Isenstadt, DeSantis campaign blames its own super PAC for leaks, bad TV advertising, Politico (Dec. 1, 2023).

[21] Cristiano Lima-Strong, Fake Biden robocall fuels calls for AI regulation, Wash. Post (Jan. 23, 2024).

[22] Matt Loffman, Biden isn’t on the ballot in New Hampshire’s primary. Here’s why, PBS News Hour (Jan. 19, 2024).

[23] Will Weissert, Biden wins New Hampshire primary through a write-in effort after declining to campaign there, Assoc. Press (Jan. 23, 2024).

[24] Id.

[25] Elena Schneider and Holly Otterbein, Biden wins a New Hampshire write-in campaign, Politico (Jan. 23, 2024).

[26] See, e.g., Explanation and Justification for Final Rules on Coordinated Communications, 75 Fed. Reg. 55947, 55955 (Sept. 15, 2010) (explaining that the Commission was not adopting a PASO content standard in its coordinated communications rules and was not adopting a definition of PASO); MUR 7197 (Greitens for Missouri), First General Counsel’s Report at 6 (“the Commission has never formally defined the terms ‘promote’ or ‘support’ in its regulations”).

[27] Petition at 4.

[28] Buckley, 424 U.S. at 43; WRTL, 551 U.S. at 473-74.

[29] Petition at 4.

[30] WRTL, 551 U.S. at 469 (rejecting a speech content regulatory standard based on “amorphous considerations of intent and effect”).

[31] MUR 7273 (Robert J. Ritchie), Statement of Reasons of Chair Caroline C. Hunter and Commissioner Matthew S. Petersen, Vote Certification dated Oct. 23, 2018, and Statement of Reasons of Vice Chair Ellen L. Weintraub.

[32] Petition Appx. B.

[33] Id.

[34] MUR 7140 (Americans for Sensible Solutions PAC), Vote Certification dated Feb. 9, 2021.

[35] As the Comments of Thomas J. Josefiak, et al. (at 5) explain, the Commission has no statutory authority to affirmatively require such a disclaimer.
