
US Federal AI Legislation in 2024: The Current Landscape

It is estimated that about 42% of companies are using AI in some way. While this can bring considerable benefits to both organizations and consumers by automating tasks, streamlining processes, and enabling greater personalization, it is important to balance innovation with compliance.

This is because AI systems fall under the scope of relevant existing laws in various jurisdictions, and several high-profile lawsuits have already been brought against companies using AI in ways that do not prioritize compliance.

There is also an increasing impetus to regulate AI specifically, given the novel risks it can pose, and lawmakers around the world are introducing requirements that codify responsible AI practices.

Much of this activity has occurred in the US, where AI risk management has been codified into proposed laws at the local, state, and federal levels. This blog post explores several key pieces of horizontal and vertical US federal AI legislation making their way through the legislative process in 2024 that organizations need to keep in mind.

What Horizontal AI Legislation Exists at the Federal Level?
Horizontal legislation is legislation that affects multiple applications of AI systems – typically those used to make critical decisions that may affect users’ life chances – rather than focusing on one specific sector or use case.

Algorithmic Accountability Act (AAA) – requires certain businesses that use automated decision systems to make critical decisions to report on the impact of such systems on consumers. Enforcement rules will be defined by the Federal Trade Commission (FTC) along with relevant stakeholders, and specified state officials will have enforcement power. The bill also establishes a Bureau of Technology to advise the FTC on technical matters.
Status: The AAA was introduced in the House of Representatives in 2019 and 2022 but failed to pass both times. It was reintroduced on 21 September 2023 and has been referred to the Committee on Commerce, Science, and Transportation.

Covered applications:

If an organization uses a system that dictates access to, availability of, or the cost of any of the following, it would fall under the act:

Education and vocational training
Employment, self-employment, and worker management
Essential utilities
Family planning (including adoption and reproductive services)
Financial services
Healthcare
Housing or lodging
Legal services
An organization would also fall under the AAA if it is subject to the jurisdiction of the FTC.

Requirements: The potential requirements of the AAA are algorithmic impact assessments and annual summary reports.

Federal Artificial Intelligence Risk Management Act – makes the National Institute of Standards and Technology’s (NIST) voluntary AI Risk Management Framework (AI RMF) legally binding for federal agencies.
Status: Introduced in the Senate on 2 November 2023 and referred to the Committee on Homeland Security and Governmental Affairs.

Requirements:

NIST to review and develop, alongside relevant stakeholders, voluntary consensus standards for testing, evaluation, verification, and validation of AI acquisitions
NIST and the Office of Management and Budget (OMB) to issue guidance on implementing the AI RMF into current AI risk management efforts
OMB to establish an initiative to allow federal agencies access to diverse expertise, as well as periodically report to Congress on agency implementation and conformity to the framework
The Government Accountability Office to study the impact of the framework on agency use of AI
The Administrator of Federal Procurement Policy and the Federal Acquisition Regulatory Council to act to ensure that federal agencies procure AI systems that incorporate the AI RMF

TEST AI Act of 2023 – directs NIST to coordinate with the Department of Energy to create testbeds to advance trustworthy AI tools, and to improve interagency coordination in the development of such tools.
Status: Introduced in the Senate on 30 October 2023 and referred to the Committee on Commerce, Science, and Transportation.

Requirements: The coordination will be implemented via a Memorandum of Understanding (MOU) between the Secretary of Commerce and the Secretary of Energy, which will ensure that the relevant federal agencies will have access to the required Department of Energy resources.

Artificial Intelligence Environmental Impacts Act of 2024 – directs NIST to develop standards to measure and report AI’s environmental impacts, and to create a voluntary framework for AI developers to report those impacts.
Status: Introduced in the Senate on 1 February 2024 and has been referred to the Committee on Commerce, Science, and Transportation.

Requirements:

A study on the environmental impacts of AI by the Environmental Protection Agency (EPA), the Department of Energy, NIST, and the Office of Science and Technology Policy. A report on this study must be submitted to Congress and made publicly available no later than 2 years after the act takes effect.
An AI Environmental Impacts Consortium with relevant stakeholders to identify needs and standards for measuring the environmental impact of AI
A voluntary reporting system for entities developing or operating AI to report environmental impacts
A joint report from NIST, the Department of Energy, and the EPA to Congress within 4 years detailing the consortium’s findings, the voluntary reporting system, and recommendations for further legislative and executive action.

What HR Tech Legislation Exists at the Federal Level?
In addition to the several horizontal AI bills at the federal level, there are also initiatives targeting specific sectors and applications of AI, including HR tech.

Stop Spying Bosses Act – establishes requirements for employers with more than 10 employees (including government employers) that engage in worker surveillance and collect employee or applicant data.
Status: Introduced in the Senate on 2 February 2023 and has been referred to the Committee on Health, Education, Labor, and Pensions.

Requirements:

Requires employers that conduct workplace surveillance to disclose to workers and applicants what data is collected, how it is used, and how such surveillance affects performance assessments
Requires employers to disclose any work-related decision that relies on workplace surveillance data and to allow the worker to review the data
Sets requirements that employers have to meet before transferring data to a third party
Prohibits employers from using workplace surveillance for the following purposes:
Monitoring activities related to a labor organization
Collecting health information unrelated to job duties
Monitoring a worker who is off duty or in a sensitive area
Using an automated decision system to predict worker behavior unrelated to their job
Creates rules on the use of automated decision systems in employment decisions to empower workers
Establishes a Privacy and Technology Division at the Department of Labor to enforce and regulate workplace surveillance, including novel and emerging technologies
Provides for enforcement through a private right of action, by states, or by other specified agencies

No Robot Bosses Act – prohibits certain uses of automated decision systems (ADS) by employers, requires employers to disclose how and when ADS are used, and adds protections for employees and applicants related to ADS. Damages awarded by the court for confirmed violations may be between $5,000 and $100,000.
Status: Introduced in the Senate on 20 July 2023 and has been referred to the Committee on Health, Education, Labor, and Pensions.

Requirements:

Prohibits employers from relying solely on ADS in making employment-related decisions
Requires pre-deployment and periodic testing and validation of ADS for issues like discrimination or bias before they are used in employment-related decisions
Requires employers to train individuals or entities on the operation of ADS
Directs employers to provide independent human oversight of ADS outputs before using those outputs in employment-related decisions (see the sketch after this list)
Requires employers to disclose in a timely manner their use of ADS, the data inputs and outputs of these systems, and employee rights related to decisions aided by these systems
Establishes a Technology and Worker Protection Division at the Department of Labor to regulate the use of ADS in the workplace
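
To make the human-oversight requirement concrete, below is a minimal Python sketch of a human-in-the-loop gate, in which an ADS output remains a recommendation until an independent reviewer signs off. Every name in it (AdsRecommendation, finalize_decision, and so on) is hypothetical; the bill does not prescribe any particular implementation.

from dataclasses import dataclass

# Hypothetical human-in-the-loop gate: an ADS output is treated as a
# recommendation only, and a named human reviewer must approve or
# override it before it becomes an employment decision.

@dataclass
class AdsRecommendation:
    candidate_id: str
    recommendation: str  # e.g. "advance" or "reject"
    model_version: str

@dataclass
class EmploymentDecision:
    candidate_id: str
    outcome: str
    reviewed_by: str  # the human who signed off

def finalize_decision(rec, reviewer, approved, override=None):
    # The ADS recommendation never becomes a decision on its own:
    # the reviewer either approves it or substitutes their own outcome.
    outcome = rec.recommendation if approved else (override or "needs further review")
    return EmploymentDecision(rec.candidate_id, outcome, reviewed_by=reviewer)

rec = AdsRecommendation("c-102", "advance", "screening-model-v3")
print(finalize_decision(rec, reviewer="hr.reviewer@example.com", approved=True))

The point of the pattern is that the decision record always carries a human reviewer, so sole reliance on the ADS is structurally impossible.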

Is There Federal Legislation for Generative AI?
In addition to HR tech, another application of AI that is gaining attention from lawmakers is generative AI, particularly in light of the upcoming elections.

No AI Fraud Act – provides for an individual property right in one’s voice and likeness, and gives individuals legal grounds to sue to protect their identities from AI abuse, including deepfakes
Status: Introduced in the House of Representatives on 10 January 2024 and has been referred to the House Committee on the Judiciary. The Act would take effect 180 days after it is enacted.

Key provisions:

Strengthens protections of an individual’s likeness and gives individuals the right to control the use of their identifying characteristics. These rights are transferable and descendible in whole or in part and do not expire upon death
Provides a mechanism for liability for any person or entity that distributes digital voice replicas, digital depictions, or personalized clones of an individual without their consent
Action can be taken by the replicated individual, by anyone to whom they have licensed their rights, or, if the victim is a musician, by anyone with whom they have an exclusive contract
Seeks to balance speech and innovation with intellectual property interests

Preventing Deep Fake Scams Act – establishes a Task Force on Artificial Intelligence in the Financial Services (FS) Sector to report to Congress on AI issues in FS.
Status: Introduced in the House of Representatives on 28 September 2023 and has been referred to the House Committee on Financial Services.

Requirements:

The report issued to Congress shall be developed alongside industry and expert stakeholders, and should include:
How banks and credit unions use AI to proactively protect themselves and consumers from fraud
Standard definitions for terms like “generative AI”, “machine learning”, “natural language processing”, “algorithmic AI”, and “deep fakes”
Risks arising from bad actors using AI to commit fraud and steal consumer data and identity
Best practices for financial institutions to protect customers from fraud and theft
Legislative and regulatory recommendations to regulate AI to protect customers from fraud and data and identity theft
This report would be due to Congress no later than one year after the enactment of the bill, and the Task Force would be terminated 90 days after the final report is issued.

Protect Elections from Deceptive AI Act – prohibits the distribution of materially deceptive AI-generated media of candidates for Federal office
Status: Introduced in the Senate on 12 September 2023 and has been referred to the Committee on Rules and Administration.

Requirements:

Prohibits any entity from knowingly distributing materially deceptive AI-generated media of candidates for Federal office, including in carrying out any Federal election activity, with the intention of influencing an election or soliciting funds
Exceptions are made for certain news coverage containing disclaimers on AI-generated media, and satire or parody
A candidate may bring an action for general or special damages

AI Disclosure Act of 2023 – requires AI-generated outputs to be accompanied by a specified disclosure stating that such outputs have been generated by AI. Violations of the requirements would be treated as violations of section 18(a)(1)(B) of the FTC Act (15 U.S.C. 57a(a)(1)(B))
Status: Introduced in the House of Representatives on 5 June 2023 and has been referred to the Subcommittee on Innovation, Data, and Commerce.

Requirements:

Any output generated by generative AI should include the following disclaimer: ‘Disclaimer: this output has been generated by artificial intelligence.’
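
As a purely illustrative sketch, a provider could attach the statutory wording to every generated output. The generate_text function below is a hypothetical stand-in for any model call; the bill specifies the disclaimer text, not how it must be attached.

REQUIRED_DISCLAIMER = (
    "Disclaimer: this output has been generated by artificial intelligence."
)

def generate_text(prompt):
    # Hypothetical stand-in for an actual generative model call.
    return f"(model output for: {prompt})"

def generate_with_disclosure(prompt):
    # Prepend the disclaimer wording quoted above to the model output.
    return f"{REQUIRED_DISCLAIMER}\n\n{generate_text(prompt)}"

print(generate_with_disclosure("Summarize the AI Disclosure Act."))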

Candidate Voice Fraud Prohibition Act – amends the Federal Election Campaign Act of 1971 to ban the distribution (within 60 or 90 days before an election, depending on the type of election) of certain political communications containing materially deceptive AI-generated audio that impersonates a candidate’s voice and is intended to harm their reputation or deceive voters.
Status: Introduced in the House of Representatives on 13 July 2023 and has been referred to the Committee on House Administration. The Act would take effect 90 days after enactment.

Requirements:

Title III of the Federal Election Campaign Act of 1971 is amended by adding a section at the end that prohibits the distribution of certain communications that contain AI-generated deceptive audio
Exceptions apply to certain types of broadcasts, news, internet providers, and satire or parody.
Knowing and willful violators can be fined and/or imprisoned
The Federal Election Commission shall submit a report to the Committee on House Administration of the House of Representatives and the Committee on Rules and Administration of the Senate no later than 3 years after the act takes effect, and annually thereafter.
The report must contain:
Information on compliance and enforcement of this act
Recommendations on any modifications to the act

What Federal Legislation Covers Online Platforms and Communications?

Platform Accountability and Transparency Act – increases access to the data of large technology platforms (those with more than 50 million unique monthly users in the US), specifically requiring platforms to disclose data to researchers in a program jointly established by the National Science Foundation (NSF) and the FTC. It also provides secure pathways for independent research on data held by large internet companies, supporting research into the impact of digital communication platforms on society. A platform’s or researcher’s failure to comply shall be treated as a violation of a rule defining an unfair or deceptive act or practice prescribed under section 18(a)(1)(B) of the FTC Act (15 U.S.C. 57a(a)(1)(B)).
Status: Introduced in the Senate on 21 December 2022 and has been referred to the Committee on Health, Education, Labor, and Pensions.

Requirements:

The NSF and the FTC are required to introduce a joint research program to review applications for qualified research projects, and shall set privacy and cybersecurity safeguards for researchers accessing this data
Sets out access limitations on data provided to qualified researchers, with risk of civil and criminal enforcement under any applicable Federal, State, or local laws if they intentionally, recklessly, or negligently violate the established security safeguards
Platforms must inform users that they are required to share data with researchers under this act
Platforms may appeal the approval of a qualified research project if they cannot provide the data requested, if providing the data would lead to security issues, or if the security safeguards set up are deemed inadequate to protect the data
The NSF and FTC must report to Congress in detail about safeguards, researchers, projects, data, and any recommendations regarding the operation of this act. This report must be made within two years of enactment and annually thereafter
The act also amends the Communications Act of 1934 to add sections on data access and transparency compliance

REAL Political Advertisements Act – also known as the Requires the Exposure of AI-Led Political Advertisements Act. Amends the Federal Election Campaign Act of 1971 to bring further transparency and accountability to the use of AI-generated content in political ads by requiring disclaimers, and outlines specifications for clear disclaimers on different media types.
Status: Introduced in the House of Representatives on 2 May 2023 and has been referred to the Committee on House Administration.

Requirements:

The Federal Election Commission (FEC) is directed to issue a regulation on generative AI, including criteria for determining whether an ad contains AI-generated content
The Federal Election Campaign Act of 1971 is amended to add definitions for qualified internet and digital communications, online platforms, qualified political ads, and third-party advertising vendors
The FEC should report to Congress about the compliance and enforcement of this act as well as any recommended modifications and ways to bring further transparency to political ads

Political BIAS Emails Act of 2023 – also cited as the Political Bias In Algorithm Sorting Emails Act of 2023. Makes it unlawful for commercial email service providers to use a filtering algorithm to apply a label to an email from a political campaign unless the user took specific action to apply such a label. The FTC will enforce this act, and violations shall be treated as violations of rules defining unfair or deceptive acts or practices under the FTC Act.
Status: Introduced in the House of Representatives on 14 September 2023 and has been referred to the Subcommittee on Innovation, Data, and Commerce. The prohibition would take effect 3 months after the bill is enacted.

Requirements:

Each operator of an email service is required to publish quarterly public transparency reports that include details about how emails from political campaigns were flagged, with a breakdown covering emails from the Democratic and Republican Parties
This obligation starts with the first year beginning on or after the date that is 120 days after the act’s enactment. Three months after enactment, political campaigns may also submit requests to each operator of an email service.
Exemptions apply to certain email services based on their size.

Compliance is key
Whether or not these acts pass, there is going to be increased regulatory scrutiny of AI and automated systems. The FTC, the Consumer Financial Protection Bureau (CFPB), the Equal Employment Opportunity Commission (EEOC), and the Department of Justice’s Civil Rights Division (DOJ) released a joint statement on 25 April 2023 reiterating their enforcement powers and the actions already taken against AI and automated systems. The statement also highlighted that multiple components of these tools can violate existing laws if they are not considered and accounted for throughout the entire lifecycle of AI systems, and a number of lawsuits have already been brought against AI applications under existing laws. These agencies have resolved to take appropriate action to protect rights and promote responsible innovation.

With this incoming wave of requirements, preparing early is the best way to ensure compliance.
