Senators’ bipartisan legislation would bolster innovation and increase transparency and accountability for higher-risk AI applications

WASHINGTON — U.S. Senators Amy Klobuchar (D-MN), John Thune (R-SD), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Luján (D-NM), all members of the Senate Committee on Commerce, Science, and Transportation, introduced the Artificial Intelligence (AI) Research, Innovation, and Accountability Act. The bipartisan legislation establishes a framework to bolster innovation while bringing greater transparency, accountability, and security to the development and operation of the highest-impact applications of AI.

“Artificial intelligence comes with the potential for great benefits, but also serious risks, and our laws need to keep up,” said Klobuchar. “This bipartisan legislation is one important step of many necessary towards addressing potential harms. It will put in place common sense safeguards for the highest-risk applications of AI – like in our critical infrastructure – and improve transparency for policy makers and consumers.”

“AI is a revolutionary technology that has the potential to improve health care, agriculture, logistics and supply chains, and countless other industries,” said Thune. “As this technology continues to evolve, we should identify some basic rules of the road that protect Americans and consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention. This legislation would bolster the United States’ leadership and innovation in AI while also establishing common-sense safety and security guardrails for the highest-risk AI applications.” 

“Artificial Intelligence technologies hold incredible potential for the future of our country, and it is important for Congress to provide guidance to industry as new applications emerge,” said Wicker. “This legislation would provide a light-touch framework to help protect Americans and ensure this promising capability is not crushed by an overly burdensome and reactive approach.”

“We’re entering a new era of Artificial Intelligence,” said Hickenlooper. “Development and innovation will depend on the guardrails we put in place. This is a commonsense framework that protects Americans without stifling our competitive edge in AI.”

“I am glad to partner with my colleagues to introduce a bipartisan first step towards addressing the development of AI,” said Capito. “Our bill will allow for transparent and commonsense accountability without stifling the development of machine learning. I look forward to working with my colleagues to advance this important legislation.”

“Artificial Intelligence is advancing faster than ever. AI presents immense opportunities, but these high-tech capabilities also pose significant risks,” said Luján. “That’s why I’m proud to join my colleagues in introducing this bipartisan legislation to support research and innovation in AI, and approaches for accountability and safeguards for its use. Congress must keep pace with the development of AI systems with the American people in mind, and this bill will do just that through the creation of risk-based guardrails, transparency requirements and corporate accountability.”

The AI Research, Innovation, and Accountability Act:

  • Strengthens accountability by creating enforceable testing and evaluation standards for the highest-risk AI systems.
    • Directs the Department of Commerce to issue standards for the testing and evaluation of AI systems that are used to make decisions that impact critical infrastructure, criminal justice, or biometric identification (“critical-impact AI systems”). Companies will be required to submit risk assessments to Commerce prior to deploying critical-impact AI systems, outlining the risks the company has identified in the system and how it is mitigating or controlling those risks.
  • Boosts transparency by requiring transparency reports from companies using high-impact AI systems.
    • Requires transparency reports from companies using high-impact AI systems to make decisions about housing, employment, credit, education, healthcare, or insurance in a way that poses a significant risk to fundamental constitutional rights or safety. Transparency reports submitted to the Department of Commerce must describe, among other things, the intended purpose of the AI system, how it was tested and trained (e.g., what data was used), and the expected benefits.
  • Informs policymakers by directing the National Institute of Standards and Technology (NIST) to make sector-specific recommendations for the regulation of high-impact AI systems.
    • Based on the risk assessments and transparency reports submitted to the Commerce Department, NIST will provide sector-specific recommendations tailored to each federal agency for conducting oversight of AI systems and improving their safe and responsible use. Agencies must respond to the recommendations, and the Office of Management and Budget (OMB) will monitor their progress in implementing them.
    • The bill also directs the Government Accountability Office to produce a report on best practices and barriers to use of AI by the Federal government.
  • Provides consumers with clearer distinctions between human- and AI-generated content.
    • Directs research on the development and standardization of ways to clearly and easily verify and label AI-generated content (e.g., watermarking) and to attribute credit to human creators.
    • Requires large internet platforms to provide clear and easy-to-understand notice to users when the platform is using generative AI to create content the user sees.
    • Directs NIST to support standardization of methods for detecting and understanding how AI systems interact with one another in order to prevent unanticipated behavior.
    • Requires the Commerce Department to develop recommendations for consumer education efforts about AI systems.

Klobuchar has led efforts to address the threat of misleading AI-generated content.

Earlier this month, Klobuchar and Senator Susan Collins (R-ME) called on the Federal Trade Commission and the Federal Communications Commission to continue efforts to raise awareness about and prevent artificial intelligence voice cloning scams in order to protect Americans from this growing fraud. 

In October, Klobuchar and Congresswoman Yvette D. Clarke (D-NY) sent a letter to the CEOs of Meta Platforms, Inc. and X Corp., Mark Zuckerberg and Linda Yaccarino, respectively, seeking information on how their organizations are addressing AI-generated content in political ads hosted on their social media platforms. In November, Meta announced that it will bar the use of its generative AI tools in political ads and will require disclaimers on AI-generated political ads.

In September, Klobuchar and Senators Josh Hawley (R-MO), Ranking Member of the Judiciary Committee’s Subcommittee on Privacy, Technology and the Law; Chris Coons (D-DE), Chair of the Judiciary Committee’s Subcommittee on Intellectual Property; and Susan Collins (R-ME), Vice Chair of the Appropriations Committee and former Chair of the Committee on Homeland Security and Governmental Affairs, introduced the Protect Elections from Deceptive AI Act, bipartisan legislation to ban the use of AI to generate materially deceptive content falsely depicting federal candidates in political ads to influence federal elections. This legislation has also been cosponsored by Senators Michael Bennet (D-CO) and Pete Ricketts (R-NE). 

In May, Klobuchar and U.S. Senators Cory Booker (D-NJ) and Michael Bennet (D-CO) introduced the REAL Political Ads Act. This legislation would require a disclaimer on political ads that use images or video generated by artificial intelligence. Companion legislation is led in the U.S. House of Representatives by Congresswoman Yvette D. Clarke (D-NY).

In July, Klobuchar, U.S. Senator Ben Ray Luján (D-NM), and Congressman Adam Schiff (D-CA) wrote to the Federal Election Commission (FEC) encouraging the Commission to begin a rulemaking to regulate fraudulent AI-generated campaign ads.

In June, Klobuchar and U.S. Senators Peter Welch (D-VT) and Dick Durbin (D-IL), Chair of the Senate Judiciary Committee, pressed Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, and Twitter CEO Linda Yaccarino for answers after reporting highlighted a significant reduction in staff dedicated to countering misinformation. In light of these staffing cuts, the senators expressed concerns about the companies’ ability to effectively respond to election-related disinformation, including deceptive AI-generated content about elections and campaigns.

In February, Klobuchar reintroduced the Honest Ads Act with U.S. Senators Lindsey Graham (R-SC), Ranking Member of the Senate Judiciary Committee, and Mark Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, to improve the transparency and accountability of online political advertising by requiring online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.

###