Michael Lasky & Samantha Rothaus

Election season is fully underway in the United States and, as in most presidential election years, publications, airwaves, and social media feeds are becoming increasingly crowded with campaign messaging. The emergence of generative artificial intelligence (Gen AI) technology is a curveball with the potential to significantly disrupt elections and erode voter trust in the media and governmental leadership. In the coming months, the adoption of new laws, rules, policies, and norms governing how Gen AI technologies can and cannot be used in political communications, and how their use is disclosed to voters, will be of utmost importance.

Use cases for Gen AI in content creation

Corporations and consumers alike have come to rely on artificial intelligence and machine learning technologies over the last several years to streamline tasks and create efficiencies – from the mundane, such as predictive text when drafting emails or conducting online searches, to the must-have, such as digital assistants like Siri and Alexa, and everything in between. However, it was not until the explosive public launch of ChatGPT in November 2022 that the world experienced the “generative” capabilities of machine learning technologies on a massive scale.

Now, in early 2024, Gen AI platforms have become a popular way for individuals and companies to create music, artwork, text, and videos.

How is Gen AI used in political campaigns?

Political advertisers have quickly caught on. Campaigns are routinely using Gen AI tools to create photos, videos, and audio clips that align with their political message. However, more sinister use cases have emerged as these tools have become more ubiquitous and more powerful.

Here are but three recent examples:

  • Last summer, the Ron DeSantis campaign used Gen AI to create a so-called deepfake of Donald Trump’s voice attacking the Republican governor of Iowa.
  • The DeSantis campaign also ran advertising that used Gen AI to falsely depict Donald Trump embracing Dr. Anthony Fauci during the COVID-19 pandemic.
  • The Republican National Committee used AI-generated images of boarded-up storefronts and military on the streets of U.S. cities to show what they envision happening if President Biden is re-elected.

The rise of deepfakes and other forms of synthetic media is not a new phenomenon.

However, the increasing availability and ease of use of powerful Gen AI technologies make these dangers ever more present. Just last month, during the New Hampshire presidential primary election, thousands of voters reported receiving robocalls that used a convincing clone of President Biden’s voice to encourage voters not to go to the polls. The state’s investigation revealed that the operation was spearheaded by an individual who owns a telemarketing company.

Without regulation, the increasing use of Gen AI to create deepfakes and sow disinformation as part of a political campaign strategy will make it increasingly difficult for voters to tell truth from fiction.

Efforts toward regulation

Private companies and government authorities have launched efforts to require a label or disclosure on content that was created using Gen AI. Some of the specific proposals that have recently been implemented or are under consideration are described below.

Google’s new policy

As of November 2023, Google requires that any political advertisement on its platforms (including Google Search and YouTube) featuring synthetic content that “inauthentically represents real or realistic-looking people or events” include a clear and conspicuous label disclosing to viewers that the content contains AI-generated material. The label must be placed where viewers will easily notice it. This policy applies to still images, video footage, and audio content.

Google’s policy specifies that any ad with synthetic content that “makes it appear as if a person is saying or doing something they didn’t say or do” or that “alters footage of a real event or generates a realistic portrayal of an event to depict scenes that did not actually take place” would need a disclosure. Google also clarified that political ads would not need such a disclosure if Gen AI was used in an immaterial or inconsequential way, such as image re-sizing or color correction.

Google offered examples of what could constitute an acceptable disclosure, such as: “This audio was computer generated”; “This image does not depict real events”; and “This video content was synthetically generated.”

Meta follows suit

Shortly after Google unveiled its policy, Meta announced that it would restrict political advertisers from using its Gen AI advertising products, in an effort to reduce election-related disinformation. TikTok and Snapchat, for their part, announced that they would simply ban political advertising from their platforms altogether.

Meta also began including a built-in “Imagined with AI” label on photorealistic images created using its proprietary Gen AI tool, Meta AI, and announced plans to introduce a labeling standard for all AI-generated content appearing on its platforms, regardless of where the content originated. While that technology is being finalized, Meta has added built-in disclosure and labeling tools that allow users to identify whether realistic-seeming material they post was created using Gen AI. Meta stated that its platforms “may add a more prominent label if appropriate” in situations where “digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance.”

Lawmakers and regulators weigh in

The Federal Election Commission (FEC) has also signaled its openness to regulating AI-generated political advertisements. The FEC opened public comment following a petition for rulemaking filed last summer, which proposes that the Commission amend its current regulations prohibiting political candidates from “fraudulently misrepresenting other candidates or political parties” to make clear that this prohibition applies to deliberately deceptive AI-generated campaign ads. While this move suggests that the FEC is taking the issue seriously, the Commission has not yet indicated whether it will actually amend its regulations, despite the thousands of comments submitted through the fall.

The Federal Communications Commission (FCC) has also weighed in, issuing an order in February announcing that automated robocalls using AI-generated voices will be considered “artificial” calls and are therefore restricted under the Telephone Consumer Protection Act unless the caller has received the prior express consent of the consumer being called. The FCC had been contemplating the impact of Gen AI on telemarketing practices for several months, but in light of the events during the New Hampshire primary, its recent order takes on new urgency in the context of political campaigning.

Rep. Yvette Clarke introduced the REAL Political Ads Act, which would require a disclaimer on any political ad that uses AI-generated images or video, no matter the medium or platform on which the ad appears. Similar legislation was introduced in the Senate by Sens. Amy Klobuchar, Cory Booker, and Michael Bennet. The future of these legislative initiatives is unclear given partisan gridlock in both the Senate and the House of Representatives.

On a state level, several individual states have enacted new legislation to prohibit or regulate the misleading or malicious use of Gen AI to create deepfakes in political advertising. These include California, Michigan, Minnesota, New Jersey, Texas, and Washington. Other states, including Florida, Illinois, Kentucky, New Hampshire, New York, South Carolina, and Wisconsin, are also currently considering pending legislation to address these issues.

Takeaways

Gen AI offers marketers and communicators many new tools to create and distribute content. But the technology has also made it easier and cheaper for political groups and campaigns to create convincing but fictitious attack ads targeting political rivals. As the 2024 election cycle progresses, communicators working in the political space should prioritize transparency when using these new technologies to produce content. In particular, an array of new measures proposed by both private industry and government actors would require political advertisers to clearly and conspicuously disclose when generative AI tools were used to create the content in their advertising.

Those working with political campaigns and operatives to create and distribute political advertising should familiarize themselves with existing rules and requirements and closely watch the rapidly changing developments as the 2024 election approaches.

***

Michael Lasky is founder and chair of Davis+Gilbert's Public Relations Law Practice. Samantha Rothaus is a partner in Davis+Gilbert's Advertising + Marketing Practice.