MITRE Report Offers Recommendations to Advance AI Red Teaming


MITRE’s Center for Data-Driven Policy has released a report outlining a set of recommendations for the incoming administration to support artificial intelligence red teaming.

According to the report, AI red teaming uses “adversarial thinking to both identify exploitable AI systems’ vulnerabilities and allow the AI community to counter those threats before they occur.”

The nonprofit corporation said Wednesday the report's first two recommendations are mandating that independent parties perform AI red teaming on high-risk AI systems before executive branch acquisition, and using AI red teaming regularly to ensure continued security and safety.

Other recommendations include promoting transparency and trust in AI-enabled systems used by the U.S. government through the release of public AI red teaming, assurance and testing reports, and adopting an AI science and technology intelligence approach to security.

For the first 100 days, the report recommends that the incoming administration evaluate existing AI red teaming capabilities across the federal government and industry to identify “centers for excellence,” and begin establishing AI red teaming mandates for industry and government contractors covering the development and use of AI systems across all U.S. agencies.

For the first six months, the administration should require federal agencies to implement independent AI red teaming and report on their efforts, and launch a National AI Center of Excellence, MITRE said.