Written by: Stephen Rogers | December 5, 2023

A golden robot

Though it may be difficult to believe, ChatGPT, the ubiquitous conversational AI, is only one year old. It launched on 30 November 2022 and was instantly a global phenomenon, becoming the fastest-growing consumer application in history. It reached 1 million users in just five days - by comparison, it took Instagram two and a half months to hit the same milestone. By mid-December, the website had more than 58 million visits. That figure grew to about 1.8 billion monthly visits by 2023's halfway point, with OpenAI, the company behind ChatGPT, stating that its website had become one of the most visited domains in the world.

While artificial intelligence is not new, and other generative AI systems like ChatGPT exist, the launch of ChatGPT marked a major inflection point in their use and in public awareness of the potential benefits, risks and dangers. The year has been full of controversies around copyright and ethics created by the use of this new technology - from the New York-based lawyer who was caught using ChatGPT to research cases, only for it to invent fake citations, to legal action by digital artists claiming the work it generates is based on stolen art. Celebrities have sued OpenAI, alarmed over alleged copyright infringement. It was even front and center of the Writers Guild of America (WGA) strike, with writers concerned they would be replaced by AI. OpenAI itself has suffered some ill effects from its stratospheric rise - co-founder Sam Altman was shockingly ousted by the board, only to be rapidly reinstated, amid a row reportedly concerning ethical controls.

Prompted by ChatGPT and similar services, 2023 has been, by some way, the year of artificial intelligence. This has been reflected in legislatures around the country, with states and the federal government scrambling to introduce legislation to address the risks and harness the benefits of a technology with the potential to revolutionize the way we live, work and vote. So let's take a look at some of that legislation, along with a quick look at the international picture and at the history of such legislation in the US.

Firstly, some definitions. In addition to 'generative AI' (where AI systems create content such as text or images), the technology includes such elements as 'deepfake' imagery - where videos of a person are altered digitally so that they appear to be someone else. This is a particular concern in the pornography industry (where fake celebrity videos abound) and when deepfakes are used to misrepresent the words or views of politicians or business leaders. Deepfake content is also referred to as 'synthetic media'. AI also includes facial recognition technology, now ubiquitous as a way of unlocking your mobile phone but also increasingly used by law enforcement and for other purposes. Finally, there is the increasing use of 'automated decision tools' to speed up and reduce bias in hiring or housing decisions, for example - tools which can often introduce their own bias instead.

The Global Picture

Legislating in this space is hard. Governments around the world have struggled to overcome obstacles such as the fact that the technology is evolving so fast it's difficult to keep up with - or even to understand - what it does. Add to that the very real concerns that badly drafted legislation could impact free speech, and that content is created on borderless online platforms and instantly travels around the world, while governments are constrained by borders and slow to respond. Regardless, legislation has been put in place to try to overcome these issues:

In the UK, the Online Safety Act 2023 became law in October this year. The wide-ranging and controversial piece of legislation has had a long and rocky road to becoming law, with critics claiming it was watered down along the way. It puts the onus on tech companies to police the content on their platforms and to remove anything illegal, such as terrorist content, child sexual abuse material and deepfake pornography. But it doesn't address other types of AI-generated content created without the subject's consent. Companies that fail to comply with the new rules face fines of up to £18 million ($21.9 million) or 10 percent of their annual revenue, whichever is larger.

In the EU, the Digital Services Act (DSA) came into force in November 2022, with new rules applicable from February 2024. It regulates digital services, requiring companies to assess risks, outline mitigations and undergo third-party audits. Non-compliance can result in penalties of up to 6% of annual worldwide revenue. The Act has been overtaken by events somewhat, as it doesn't explicitly address the dangers of AI. So the EU has hastily drafted the AI Act, which is expected to pass at the end of 2023 or early 2024. The draft Act includes a ban on using AI for police surveillance purposes, requires generative AI systems to label content as AI-generated, and creates a category of 'high risk' AI which will include anything aimed at influencing elections.

South Korea, which has some of the world's toughest anti-pornography laws, also leads the way in anti-deepfake legislation. In 2020 it amended its sexual crimes legislation to include a provision prohibiting the creation and distribution of 'false video products', with harsh penalties of up to 12 years in prison.

Meanwhile, China this year adopted rules requiring any manipulated content to have the subject's consent and to be clearly labelled. However, even the Chinese state will struggle to police such rules, and it remains to be seen what impact they will have.

Legislation in the United States

The story to date

Back in the USA, there has been a plethora of legislation and attempts to curb the potential harm of these new technologies. This stakeholder page details 17 of the most interesting pieces of legislation between 2019 and 2022 that have actually become law.

The National Defense Authorization Act for 2021, enacted following a congressional override of then-President Trump's veto, requires the Department of Homeland Security to issue an annual report on deepfakes for five years. Here's the first report (from January 2023). While a riveting read, it provides little in the way of solutions or recommendations, serving instead as a baseline assessment of the technologies and risks. It does promise meatier content in the years to come. President Trump did sign into law the Identifying Outputs of Generative Adversarial Networks Act in 2020, which directed the National Science Foundation and the National Institute of Standards and Technology to work together to develop ways of identifying and combating deepfakes.

On a state level, Virginia was the first state to legislate against deepfakes, with SB1736 in 2019. Many states have since passed legislation attempting either to regulate or to exploit AI. 2019 was a big year, with both California and Texas passing laws prohibiting the use of deepfakes to influence elections. California also passed a law prohibiting the use of deepfakes in pornography without the explicit consent of the subject, in an attempt to allow the technology to be used in a way that doesn't harm third parties. Incidentally, it's probably the most sexually explicit bill I've come across - reader discretion is advised!

Illinois created a first-of-its-kind law requiring employers to provide job applicants with full details of how AI analysis of job interview videos will be used, to obtain consent and to guarantee confidentiality. This reflects the increased use of AI at all stages of the hiring process over recent years, and concerns about privacy and about bias being unintentionally fed into the algorithms. For example, in 2018 Amazon was forced to scrap an experimental AI hiring tool after finding it disfavored resumes that included the word "women's" - because it had been trained on the resumes of previous Amazon hires, who were overwhelmingly male.

In 2022, Idaho passed a law prohibiting animals or inanimate objects from being granted personhood, principally to prevent efforts to increase environmental protections by granting them the same legal rights as people. Interestingly, the legislature also included AI in the list, reflecting fears that in the future AI might bid for the same status as human beings. “We don't want our children to be inferior to artificial intelligences,” said Republican Rep. Tammy Nichols, the sponsor of the act.

Also in 2022, Colorado passed SB113, setting up a task force to examine the use of facial recognition technology, restricting its use by state bodies, and prohibiting its use by state schools until 2025. In particular, the act prohibits law enforcement from using facial recognition for surveillance and requires officers to obtain a warrant authorizing any use of the technology. The law was a response to concerns that schools were using AI to screen visitors, and to perceived flaws in the technology itself. This landmark 2020 study from Harvard found that facial recognition algorithms are consistently less accurate at identifying female and non-white faces. Evidence presented during the bill's passage highlighted cases such as that of Nijeer Parks, who in 2019 was wrongly accused of a series of crimes based on a faulty facial recognition scan of a photo left at the scene.

The Year of AI

Which brings us to 2023. There has been a bumper crop of bills introduced this year. I've pulled together a stakeholder page listing almost 300 of the most relevant bills. (Florida's H8039 'Gator Day' act disappointingly isn't about creating artificial alligators or using AI to boost their intelligence - it's about recognizing the accomplishments of the University of Florida. I leave it there for your enjoyment nonetheless.) Below is the distribution across the country (the darker the red, the more bills there are). Congress accounts for a whopping 130 bills, though predictably none have yet become law.

The distribution of AI bills across the US

While most bills attempt to regulate or limit the use of AI, some seek to explore its positive uses - such as New Mexico's SB78, which provides funding for a dryland resilience center, including the development of AI solutions to diagnose and predict vulnerabilities in dryland environments (with which New Mexico is replete). The bill unfortunately died in committee. On the other end of the hydrological spectrum, North Carolina's S684 deals with permits for controlling stormwater runoff, including funding to explore the use of AI systems to generate permits. It also died in committee.

The federal government also introduced some bills aimed at exploiting AI. HR1697 / S734 promote precision agriculture (using technology to optimize farming inputs, protecting the environment and increasing productivity), and HR4373 aims to improve weather forecasting capabilities within the national integrated drought information system using AI and machine learning.

Most bills, however, deal with the dangers of AI. Let's take a quick look at some of the more interesting ones:

New York has introduced a welter of bills in 2023 looking at AI - 38 made it onto my bill sheet. A07906 seeks to regulate the use of 'automated decision tools' by landlords in making housing decisions, in particular to prevent unfair discrimination. A08110 amends criminal procedure law to prevent evidence created or processed by AI being the sole basis of a criminal case. And A08129 would introduce an 'artificial intelligence bill of rights' to ensure there is proper regulation and oversight of any decision-making systems which impact the lives of residents of the state.

North Dakota joins Idaho in enacting a bill, HB1361, which defines a 'person' to exclude AI, animals and inanimate objects. Congress, always a fan of fun bill titles, introduced the No Robot Bosses and Stop Spying Bosses bills, which prohibit the use of certain surveillance and AI decision-making tools in the workplace. These contrast nicely with the Jobs of the Future bill, which promotes a 21st-century artificial intelligence workforce. None of these have made it out of committee yet.

And we must mention Massachusetts S31, which aims to regulate the use of generative AI technologies such as ChatGPT and was in fact drafted with the help of ChatGPT itself. A disclaimer states that any inaccuracies are the result of human authors rather than the language model. I'll leave it to others to decide whether the bill is a better draft than those written purely by flesh-and-blood people, but I'm sure we'll see much more use of such systems to help draft legislation going forward.

Two areas of concern feature prominently in AI legislation, so let's finish by looking more closely at the legislation introduced in 2023 to address them.

Deepfakes

This stakeholder page identifies 34 bills introduced this year dealing to some extent with the production and distribution of deepfake videos, across 11 states and Congress. Here is the distribution. New York leads the way with 12 bills.

Distribution of deepfake legislation around the US

Six states have enacted legislation, but nothing has passed Congress so far. Louisiana, Minnesota, New York and Texas enacted legislation creating offenses for producing and distributing non-consensual deepfake pornography. Many of the unsuccessful bills covered similar ground.

New Jersey introduced S3926 which amends identity theft legislation to include the use of deepfake technology. New York introduced a range of bills, including requiring advertisements to disclose the use of 'synthetic media' and creating a crime of aggravated harassment by means of digital communication.

Meanwhile, Congress has introduced the amusingly titled DEEPFAKES (Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to) Accountability bill and the Preventing Deep Fake Scams bill, both aimed at providing federal protection against the criminal use of deepfake technology. Both remain in committee.

To finish on a lighter note, Washington state introduced a bill to bolster media literacy and digital citizenship in K-12 education, including grant programs for a range of topics, with a particular focus on 'synthetic media'. And Massachusetts introduced a similar bill with H560. So there is at least a little recognition that, to combat the damaging use of such new technology, future generations will need to understand how it works.

Election Related AI Bills

Another key area is the impact of AI on elections and voting. There is a large overlap with deepfakes here, as many bills relate to the use of deepfake technology to affect the outcome of an election by creating fake videos showing candidates making negative or offensive statements. This stakeholder page lists 31 of the key bills introduced this year. Congress comes top with 10 bills introduced.

The distribution of AI election related legislation around the US

Michigan has enacted four pieces of legislation, creating offenses for distributing deceptive media, regulating campaign advertising, defining artificial intelligence for this purpose and providing sentencing guidelines. Washington state also enacted SB5152, which defines synthetic media in campaigning and outlines penalties for improper use (for example, where an appropriately prominent disclaimer was not included).

Congress has also introduced a number of bills which address this issue. None have yet made it out of committee, but they seek to provide federal protection for elections in similar ways, by requiring any election media using deepfake technology to clearly show it as such or to prohibit the use of such media entirely. HR5495 is interesting - it seeks to ban email services from using filtering algorithms to make an email appear to come from a particular political campaign when it does not.

Conclusions?

Given the pace at which this technology is developing, it is difficult to predict the ultimate impact on our lives. It seems likely, however, that the use of deepfake technology and other AI systems will be a feature of the upcoming 2024 election cycle, though it's difficult to know exactly how pervasive it will be. Malign actors tend to be early adopters of new technology, so it's entirely possible there will be influences that we don't even know exist. A strong, coherent and proportionate system of regulation and legislation can help to combat such effects, but that is something the United States currently lacks, given its piecemeal approach to tackling the problem.

As regards the abuse of deepfake technology to create sexually explicit imagery, it can only be expected to become easier and cheaper, and therefore ever more common, as generative AI systems become ubiquitous. The impact of non-consensual deepfake pornography on its victims cannot be overstated. As Kaylee Williams notes in this article, there is a large body of evidence suggesting that victims of any form of image-based sexual abuse are more likely than non-victims to report adverse mental health consequences such as post-traumatic stress disorder, anxiety, depression and suicidal ideation, as well as challenges finding or maintaining meaningful employment after being victimized. It's often difficult to pinpoint the creator or distributor of such content, but it is imperative that governments around the world and tech companies cooperate to swiftly remove such content and bring the perpetrators to justice. Better and more legislation to that effect is needed in the United States.

About BillTrack50 – BillTrack50 offers free tools for citizens to easily research legislators and bills across all 50 states and Congress. BillTrack50 also offers professional tools to help organizations with ongoing legislative and regulatory tracking, as well as easy ways to share information both internally and with the public.

Photo by Lyman Hansel Gerona on Unsplash