Six members of the Senate Commerce, Science and Transportation Committee have proposed a bipartisan bill that seeks to create a framework to foster innovation in the field of artificial intelligence and improve security and accountability when it comes to developing and operating AI in high-impact applications.
The proposed AI Research, Innovation and Accountability Act of 2023 would direct the National Institute of Standards and Technology to perform research to drive the development of standards that would provide provenance information for online content and help detect and understand emergent properties in AI tools.
“This legislation would bolster the United States’ leadership and innovation in AI while also establishing common-sense safety and security guardrails for the highest-risk AI applications,” Sen. John Thune, R-S.D., said in a statement published Wednesday.
Thune introduced the measure with Sens. Amy Klobuchar, D-Minn.; Roger Wicker, R-Miss.; John Hickenlooper, D-Colo.; Shelley Moore Capito, R-W.Va.; and Ben Ray Luján, D-N.M.
The bill would require critical-impact AI organizations to self-certify their compliance with standards prescribed by the Department of Commerce, direct companies fielding critical-impact AI to conduct detailed risk assessments, and task NIST with developing recommendations to agencies for technical, risk-based guardrails on “high-impact” AI platforms.
The legislation would also direct the Commerce Department to create a working group to offer recommendations for the development of consumer education initiatives for AI systems.