The National Institute of Standards and Technology (NIST) has unveiled the first version of its guidance document for helping organizations manage risks posed by artificial intelligence systems.
NIST said Thursday the Artificial Intelligence Risk Management Framework outlines four core functions for ensuring trustworthiness in AI platforms: govern, map, measure and manage.
The “govern” function focuses on building a culture of risk management within organizations to identify and manage threats an AI system can pose. This area incorporates processes to assess potential impacts and provides a structure by which AI risk management functions can align with organizations’ principles and policies.
The next step is to “map” the broader factors that contribute to AI risks. In this step, NIST seeks to help organizations contextualize the risks related to an AI system so they can anticipate and address those risks’ potential sources.
According to the framework, outcomes in the map function will serve as the foundation for the remaining two steps.
In the “measure” function, organizations are advised to employ “quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.”
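To make that language concrete, below is a minimal sketch of one quantitative check an organization might run under the “measure” function: benchmarking a model’s error rate across subgroups and flagging a disparity above a tolerance. The data, subgroup names, and threshold here are illustrative assumptions, not anything the framework prescribes.

```python
# A minimal sketch of one quantitative "measure" activity: compare a
# model's error rate per subgroup and flag a gap above a threshold.
# The records, subgroup labels, and 10% threshold are hypothetical.

from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (subgroup, predicted_label, true_label)."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def disparity_flag(rates, threshold=0.10):
    """Flag if the gap between best- and worst-served subgroups exceeds threshold."""
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > threshold

# Hypothetical monitoring data: (subgroup, predicted, actual)
sample = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
]

rates = subgroup_error_rates(sample)
gap, flagged = disparity_flag(rates)
print(f"error rates: {rates}, gap: {gap:.2f}, flagged: {flagged}")
```

A metric like this would typically run on a schedule against production traffic, feeding the monitoring loop the framework’s final function calls for.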
For the last function, “manage,” framework users put in place a plan for prioritizing risks and for regular monitoring and improvement.
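One common way to operationalize that prioritization, sketched below, is a conventional likelihood-by-impact score that ranks risks for mitigation. The risk entries and the 1-to-5 scales are hypothetical illustrations; the framework does not mandate any particular scoring scheme.

```python
# A minimal sketch of risk prioritization for a "manage" plan, using a
# conventional likelihood-by-impact score. Entries and scales are
# hypothetical; the NIST framework does not prescribe this scheme.

from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) to 5 (near-certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    AIRisk("training-data drift degrades accuracy", likelihood=4, impact=3),
    AIRisk("model leaks personally identifiable data", likelihood=2, impact=5),
    AIRisk("adversarial inputs evade content filter", likelihood=3, impact=4),
]

# Highest-scoring risks get mitigation resources and monitoring first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```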