MITRE has released a new paper outlining a set of recommendations for establishing a regulatory framework to address potential security risks posed by artificial intelligence.
The paper, titled “A Sensible Regulatory Framework for AI Security,” lays out regulatory considerations in three categories of application: AI as a component or subsystem; AI as human augmentation; and AI with agency, MITRE said Wednesday.
“Differentiating these categories is important because the threats and risks differ based on how AI manifests in applications, as do the approaches to mitigating threats and risks,” according to the paper.
When implementing AI as a subsystem, MITRE recommends that organizations reduce vulnerabilities by enhancing industry-specific assurance approaches, such as developing a plan for responding to the National Institute of Standards and Technology’s AI Risk Management Framework.
To ensure the security of AI tools that aim to augment human capabilities, MITRE suggests requiring system auditability so that individuals who misuse the technology to cause harm can be held accountable.
Moreover, regulations covering AI implementations that have a level of agency must reduce risks through critical infrastructure hardening.