Written by: Sarah Johnson | March 4, 2024

*** Watch Sarah and Stephen discuss this bill on TikTok or our Substack, or listen to the discussion on our podcast, BillTrack50 Beyond the Bill Number, on Apple Podcasts, Spotify, YouTube, or wherever you get your podcasts ***

2023 was the Year of Artificial Intelligence, so it is no surprise states are already incredibly concerned about how AI can and will be used in the 2024 elections. Just this week, news outlets all over the world reported that viral images of former President Donald Trump with Black people were AI-generated. Radio show host Mark Kaye and his team are responsible for some of these images and told the BBC that it "was the individual's problem if their vote was influenced by AI images". In February, a political consultant representing several New York politicians admitted he was behind the robocall that used a fake, AI-generated imitation of President Joe Biden's voice to tell his supporters not to vote in New Hampshire. This week, we take a look at states that have outlawed, or are looking to outlaw, the use of deepfake artificial intelligence in political campaigns.

What is a 'Deepfake'?

We looked at deepfakes last year with IssueVoter Bill of the Month (Dec 2023): Preventing Deepfake Scams Act, so this is a good place to start!

Deepfakes are fake images, videos, or audio created with a specific type of machine learning called "deep learning". In broad strokes, artificial intelligence works by feeding an algorithm data; using that data, it "learns" and produces outputs based on the patterns it has absorbed. This is called training an AI model. Deep learning is one family of machine learning techniques, and it underlies much of what many of us just call "AI".

Deep learning models are built from neural networks with "hidden layers": layers of computation that sit between the input and the output and learn increasingly abstract features of the data. In a deepfake system, these models convert real images/videos/audio into fake images/videos/audio based on the prompts requested by the creator. The output of a deep learning model is an artifact humans recognize as pretty normal, but the underlying technology is more complicated than most of what we casually label "AI". This is because deepfake AI is built on a particular arrangement of neural networks.

Many complex deepfakes are produced by a setup called a generative adversarial network (GAN), which pits two neural networks against each other. One network, the generator, is trained to produce the best fake replicas possible of real, sourced images, video, and audio. The other, the discriminator, is trained to detect when media is fake and when it is not (which may be the secret to identifying AI-generated content). The media is then created in an almost dance-like fashion between the two models: they iterate back and forth, each responding to and improving upon the output of the other, generating new, different images. The end result, like the tango, is a masterpiece. And these models are getting better all the time; it is no longer common to see people with six fingers!
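To make the back-and-forth concrete, here is a deliberately tiny, hypothetical sketch of the adversarial idea, with a one-parameter-pair "generator" learning to mimic simple numeric data and a "discriminator" learning to tell real from fake. Real deepfake models use large deep neural networks rather than this toy linear setup, and all names and numbers here are illustrative assumptions, but the alternating generator/discriminator updates are the same core idea.

```python
import numpy as np

# Toy 1-D adversarial training sketch (illustrative only).
# "Real" data comes from a normal distribution centered at 4;
# the generator starts out producing data centered at 0 and is
# gradually pushed toward the real distribution by the discriminator.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator params: fake = a*z + b, noise z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator params: D(x) = sigmoid(w*x + c)

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(4.0, 1.25, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1) + np.mean(df))

    # Generator step: adjust a, b so the discriminator is fooled
    df = sigmoid(w * fake + c)
    grad_fake = (df - 1) * w          # gradient of -log(D(fake)) w.r.t. fake
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# After training, generated samples should sit near the real mean of 4.
print(f"generated mean is roughly {np.mean(a * rng.normal(0, 1, 10000) + b):.2f}")
```

Each loop iteration is one round of the "dance": the discriminator sharpens its real-versus-fake judgment, and the generator immediately adapts to beat it.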

What is the Issue?

In today's world, what people see on the internet is largely what they accept as truth. It stands to reason that if you can create a deepfake about anything you want, you can theoretically influence people with your misinformation, encouraging them, to some extent, to go along with a fake agenda. Unchecked, deepfake-based misinformation could wreak havoc on our society.

There are so many different ways deepfakes can be used to influence people. We can think as small as the nuclear family level, creating videos "showing" someone's relative in serious trouble, asking for money to help them. We could think of it on a larger level, showing politicians saying or doing things that do not align with their character or platform, influencing elections. We could even see deepfakes of world leaders, saying or doing things that could provoke violence, fuel unrest, or even start a war.

As I mentioned earlier, this adversarial setup includes a model trained to "spot" fakes. Should we invest more time and resources in these models, training them to recognize traces in AI-generated media not detectable by us mere mortals, we could possibly combat deepfakes. That would, however, require essentially every photo/video/audio file on the internet to go through some type of audit, which creates privacy and free speech concerns.

One of the largest issues today is how deepfakes can be used during election cycles to influence people's opinions of candidates and issues, ultimately resulting in election interference. This interference could come from opponents, PACs, or even outside state actors. But, as we saw this week with the AI-generated Trump images, it could also come from everyday people looking to help get their candidate elected.
So given this concern, what have states and the nation been doing? 

States are busy when it comes to deepfakes and elections. 

Currently, there are over 50 bills (BillTrack50 Mobile Access Code: WVIFLHB) pertaining to “deepfakes” and elections around the country, with many states having already passed legislation this year. Most of the legislation shares an overall objective of curbing the use of deepfakes in elections by requiring the creators/users of this content to disclose that the images, video, or audio are AI-generated.

As of this writing, 15 states have passed legislation aiming to limit or combat the use of deepfakes as tools in the upcoming elections. Before 2024, six states passed legislation aiming to prevent and penalize the spread of misinformation through deepfakes: California (2019), Michigan (2023), Minnesota (2023), Texas (2019), Washington (2023), and Wisconsin (2023). Most states are doing this via disclosure requirements. In many states these disclosures apply to anyone distributing election materials, but Wisconsin’s bill specifically targets candidates, parties, and committees.

The majority of these bills also use the phrase "Synthetic media", generally defined in bills as "an image, an audio recording, or a video recording of an individual's appearance, speech, or conduct that has been created or intentionally manipulated with the use of generative adversarial network techniques or other digital technology in a manner to create a realistic but false image, audio, or video."

National legislation 

On the national level, the Protect Elections from Deceptive AI Act (in committee since September 2023) aims to prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for federal office. The Candidate Voice Fraud Prohibition Act (in committee since July 2023) aims to prohibit the distribution, with "actual malice", of certain political communications containing "materially deceptive" audio generated by artificial intelligence that impersonates a candidate’s voice and is intended to injure the candidate’s reputation or to deceive a voter into voting against the candidate. Finally, the DEEPFAKES (Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to) Accountability Act of 2023 (in committee since September 2023) aims to protect national security against threats posed by deepfake technology and provide legal recourse to victims of harmful deepfakes.

A deeper look at state legislation in 2024

On the state level, this year Arizona (second bill), Florida, Georgia, Indiana, Kentucky (second bill), New Mexico, Utah, West Virginia, and Wyoming all passed deepfake election legislation. 

Some states are taking the approach of requiring anyone distributing election materials to disclose that they are AI-generated within a certain time frame. Legislation in Arizona, Iowa, Mississippi, New Hampshire, New Jersey, Rhode Island, South Carolina, Tennessee, and West Virginia aims to impose a 90-day disclosure requirement on AI-generated materials being used as part of an election.

The Arizona bill (SB 1359) outlines a few different items, stating: 

  • Within ninety days before an election at which a candidate for elected office will appear on the ballot, a person who acts as a creator shall not sponsor or create and distribute a synthetic media message that the person knows is a deceptive and fraudulent deepfake of that candidate or of a political party that is on that ballot unless the synthetic media message includes a clear and conspicuous disclosure that states that the media includes content generated by artificial intelligence. 
  • If the media consists of audio only and no visual disclosure is possible, the disclosure shall be read in a clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio and, if the audio is longer than two minutes in length, interspersed within the audio at intervals of not more than two minutes each. 
  • A candidate whose appearance, action or speech is depicted through the use of a deceptive and fraudulent deepfake in violation of this section may seek injunctive or other equitable relief from the sponsor or the creator of the media prohibiting the publication of the deceptive and fraudulent deepfake.
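As a thought experiment (not legal guidance), the bill's audio-only timing rule can be sketched as a small function that computes where spoken disclosures would need to appear: one at the beginning, one at the end, and, for audio longer than two minutes, additional disclosures interspersed at intervals of no more than two minutes. The function name and the reading of the rule are my own assumptions.

```python
def disclosure_times(duration_s: float) -> list[float]:
    """Hypothetical sketch of the Arizona SB 1359 audio rule: return
    the timestamps (in seconds) at which a spoken AI disclosure would
    need to appear in an audio-only synthetic media message."""
    times = [0.0]                    # disclosure at the beginning
    if duration_s > 120:             # longer than two minutes:
        t = 120.0
        while t < duration_s:        # interspersed at most every 2 minutes
            times.append(t)
            t += 120.0
    times.append(duration_s)         # disclosure at the end
    return times

print(disclosure_times(90))    # 90-second clip: start and end only
print(disclosure_times(300))   # 5-minute clip: start, 2:00, 4:00, and end
```

A 90-second spot would carry only the opening and closing disclosures, while a five-minute one would also need them at the two- and four-minute marks.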

This Arizona bill also puts forth some situations in which the bill would not apply, particularly pertaining to radio or television broadcasting stations, including cable or satellite television operators, programmers, or producers, and some internet sites. It would also not apply to media that constitutes “satire or comedy”.

In New Jersey, the proposed bill allows any registered voter, along with political candidates, to sue for injunctive or other equitable relief. New Hampshire and South Carolina are considering similar bills.

The New Hampshire bill calls out the cost concerns associated with the court cases this legislation could generate. Reportedly, the Judicial Branch anticipates increased litigation from the new cause of action to the tune of $1,321 for complex cases in Superior Court and $494 for routine cases. For an original entry or third-party claim, the court can collect $280 per case, and $160 if a case is reopened.

A Massachusetts bill would create a task force to study the use of deepfakes and digital content forgery. This task force would be tasked with evaluating:

  1. The proliferation of deepfakes impacting state and local government, Massachusetts-based businesses, and residents. 
  2. The risks, including privacy risks, associated with the deployment of digital content forgery technologies and deepfakes on Massachusetts state and local government, Massachusetts businesses, and Massachusetts residents. 
  3. The impact of digital content forgery technologies and deepfakes on civic engagement, including the use of deepfakes to influence or deceive a voter. 
  4. The legal implications associated with the use of digital content forgery technologies and deepfakes.
  5. The best practices for preventing digital content forgery and deepfake technology to benefit the Commonwealth of Massachusetts, local government, Massachusetts-based businesses, and Massachusetts residents.

The bill also defines “election deepfake” as a deepfake that depicts a candidate, ballot question committee, or political party with the intent to injure the reputation of the candidate or party or otherwise deceive a voter.

A bill introduced in California (AB 2355) aims to update existing law, which prohibits distributing, “with actual malice”, materially deceptive media of a candidate with the “intent to injure” the candidate’s reputation or to deceive voters. The new bill defines “qualified political advertisement” to include any paid advertisement relating to a candidate for federal, state, or local office, a ballot measure, or a bond issue that contains any image, audio, or video generated, in whole or in part, using artificial intelligence. AB 2355 requires a person, committee, or other entity that creates, originally publishes, or originally distributes a qualified political advertisement to disclose that fact. The bill would also authorize any registered voter to bring an action in superior court seeking a temporary or permanent restraining order or injunction against the publication, printing, circulation, posting, or distribution of any qualified political advertisement that violates these disclosure requirements.

Conclusion

This is a huge issue, and one we need to act on swiftly to ensure we are appropriately monitoring this technology and how people can use it to influence the public. Disclosure is a first step: without these disclosures, deepfakes could mislead voters into believing falsehoods and ultimately influence the outcome of an election. But I also wonder whether calling out that something a campaign, PAC, or person is using is “synthetic media” will even have an impact. How many people will see the disclosures? Once you see an image, does then learning it is fake completely negate how you felt about it?

Do you think these disclosures will limit the impact this type of media can have on people? What else, if anything, should be done?

 

Cover Photo by Cash Macanaya on Unsplash

About BillTrack50 – BillTrack50 offers free tools for citizens to easily research legislators and bills across all 50 states and Congress. BillTrack50 also offers professional tools to help organizations with ongoing legislative and regulatory tracking, as well as easy ways to share information both internally and with the public.