United States of America

Regulations

Announced January 2023

Guidance

United States

The NIST AI Risk Management Framework (AI RMF) is designed to better manage risks associated with AI in the U.S. The framework is intended for voluntary use and aims to improve organizations' ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

Passed April 2023

Regulation

New York City

The law prohibits employers and employment agencies from using “automated employment decision tools” in New York City to screen candidates for employment or assess employees for promotions unless such tools have been subject to independent bias audits, the results of which must be summarized and posted publicly on the employers’ or employment agencies’ websites. The law also requires employers and employment agencies to provide candidates and employees with disclosures regarding the use of automated employment decision tools.

The New York City Bias Audit Law (Local Law 144) was enacted by the NYC Council in November 2021. Originally due to come into effect on January 1, 2023, the enforcement date for Local Law 144 was pushed back to April 15, 2023, due to the high volume of comments received during the public hearing on the Department of Consumer and Worker Protection's (DCWP) proposed rules clarifying the requirements of the legislation. From April 15, 2023 onward, companies are prohibited from using automated tools to hire candidates or promote employees unless the tools have been independently audited for bias.

Announced March 2022

Regulation

United States

The law requires companies to conduct risk assessments of the AI systems they use and sell, creates new transparency obligations about when and how those systems can be used, and empowers consumers to make informed choices about the automation of critical decisions.

2023

Principles

United States

The Executive Order aims to harness the benefits of AI while addressing its associated risks, and places an emphasis on establishing best practices and standards. The EO sets out a list of eight guiding principles and priorities that executive departments and agencies should adhere to. Although its focus is on U.S. agencies, the EO has implications for AI developers more generally, as the U.S. Government will publish guidance on certain AI practices and will also be able to require developers to provide information on their AI models.

Announced October 2022

Guidance

United States

The Blueprint for an AI Bill of Rights identifies five principles that should guide the design, use, and deployment of AI: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

In October 2022, the U.S. Government released the Blueprint for an AI Bill of Rights. It is non-binding but the White House also announced that federal agencies will be rolling out related actions and guidance regarding their use of AI systems, including new policies regarding procurement. 

Adopted May 2024

Principles

OECD

The OECD drafted the Principles on Artificial Intelligence, which its 36 member countries and a number of partner countries (including Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania) adopted in May 2019. In May 2024, the OECD principles were updated to include references to misinformation and disinformation, the rule of law, and bias.

Published 30 October 2023

Principles

G7

The leaders of the G7 countries issued International Guiding Principles on AI and a voluntary Code of Conduct for AI developers under the Hiroshima AI Process. They outlined 11 guiding principles that provide developers, deployers, and users of AI with a blueprint for promoting safety and trustworthiness in their technology.

Passed May 2024

Regulation

Colorado

On May 17, 2024, Colorado Governor Jared Polis signed the Colorado Artificial Intelligence Act (CAIA), the first broadly scoped U.S. AI law. Similar to the EU AI Act, the CAIA takes a risk-based approach and focuses on high-risk AI systems, requiring developers and deployers of such systems to use reasonable care to avoid algorithmic discrimination. Developers and deployers must disclose specified information to stakeholders. Deployers must also conduct impact assessments, implement risk management plans, and provide consumers with a mechanism to appeal adverse decisions. The Colorado Attorney General has exclusive authority to enforce, and to adopt rules implementing, the CAIA. The CAIA takes effect on February 1, 2026.