
DARPA Initiative Open-Sources Tool Repository for Evaluating AI Adversarial Defenses

The Defense Advanced Research Projects Agency’s (DARPA) Guaranteeing Artificial Intelligence Robustness to Deception (GARD) initiative has made publicly available a virtual testbed, a toolbox, a benchmarking dataset and training materials for assessing AI and machine learning defenses against adversarial attacks.

Among the available assets is a virtual platform, dubbed Armory, that runs repeatable evaluations of adversarial defenses in relevant scenarios using a Python library for ML security called the Adversarial Robustness Toolbox (ART), DARPA said Tuesday.

The toolbox provides resources for evaluating ML models and applications against adversarial threats such as inference, evasion and extraction. The Armory testbed relies on ART library components for attacks and model integration, as well as on MITRE-generated datasets and scenarios.
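ART is available on PyPI, so the kind of evaluation Armory automates can be sketched in a few lines. The snippet below is a minimal, illustrative example of an evasion-style evaluation with ART; the scikit-learn logistic regression, the Iris data and the attack parameters are stand-ins chosen for brevity, not part of DARPA’s released scenarios.

```python
# Minimal sketch: measuring a model's accuracy before and after an
# evasion attack with the Adversarial Robustness Toolbox (ART).
# Assumes: pip install adversarial-robustness-toolbox scikit-learn
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Stand-in model and data, illustrative only.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART attacks can query its loss gradients.
classifier = SklearnClassifier(model=model)

# Craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X)

# Compare accuracy on clean vs. adversarial inputs.
clean_acc = np.mean(np.argmax(classifier.predict(X), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(X_adv), axis=1) == y)
print(f"clean accuracy: {clean_acc:.3f}")
print(f"adversarial accuracy: {adv_acc:.3f}")
```

The same pattern, wrapping a model, running an attack and comparing clean versus adversarial accuracy, is what Armory scales up into repeatable scenario runs.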

GARD also uses the Adversarial Patches Rearranged In COnText (APRICOT) dataset, a publicly released benchmark that focuses on real-world examples of physical adversarial patch attacks.

“The goal is to help the GARD community improve their system evaluation skills by understanding how their ideas really work, and how to avoid common mistakes that detract from their defense’s robustness,” said GARD Program Manager Bruce Draper.

GARD includes researchers from Two Six Technologies, IBM, MITRE, Google Research and the University of Chicago.