Christine Lai and Jonathan Spring of the Cybersecurity and Infrastructure Security Agency said technology developers should ensure that artificial intelligence software is secure by design.
The AI engineering community should apply Secure by Design practices, on which other safety principles and guardrails depend, and adopt Common Vulnerabilities and Exposures, or CVE, and other vulnerability identifiers, Lai and Spring wrote in a blog post published Friday.
“Since AI is software, AI models – and their dependencies, including data – should be captured in software bills of materials. The AI system should also respect fundamental privacy principles by default,” they noted.
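To make the SBOM point concrete, the sketch below shows one way an AI model and its training data could be listed as components in a CycloneDX-style bill of materials. The component names, versions and hash are hypothetical, and the snippet is illustrative rather than an example from the CISA post.

```python
import json

# Hypothetical sketch of an AI model and its data dependency captured in a
# CycloneDX-style SBOM. Field names follow the CycloneDX 1.5 component schema,
# which added "machine-learning-model" and "data" component types; every
# concrete value here is made up for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "fraud-classifier",            # hypothetical model name
            "version": "2.1.0",
            "hashes": [{"alg": "SHA-256", "content": "..."}],  # placeholder digest
        },
        {
            "type": "data",
            "name": "transactions-train-2023",     # hypothetical training dataset
            "version": "2023-06",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```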
They also discussed AI-specific assurance issues, including the distinction between adversarial inputs that cause misclassifications and those that bypass security detection.
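A toy sketch of that distinction, assuming a hypothetical linear detector: the same small, crafted perturbation that can flip a classifier's label can also be aimed at a security model to push a malicious input below its alert threshold.

```python
import numpy as np

# Toy illustration of the distinction drawn in the post: an adversarial
# perturbation can force a misclassification, or it can be aimed at a
# security detector so malicious input slips past it. The linear "detector",
# its weights and its threshold are all hypothetical.
w = np.array([0.8, -0.5, 1.2])        # hypothetical detector weights
threshold = 0.2                        # score above threshold -> flagged

x = np.array([0.5, 0.1, 0.4])          # malicious input, correctly flagged
print(w @ x > threshold)               # True: the detector catches it

# Adversarial step: nudge each feature against the detector's gradient
# (for a linear model, the gradient of the score is just w).
eps = 0.3
x_adv = x - eps * np.sign(w)
print(w @ x_adv > threshold)           # False: the perturbed input bypasses detection
```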
Lai is AI security lead and Spring is a senior technical adviser at CISA.
Listen to public sector leaders and technology experts as they talk about the opportunities and risks associated with generative AI and related tools at ExecutiveBiz’s Trusted Artificial Intelligence and Autonomy Forum on Sept. 12.