We created CVE-bench to find out (I'm one of 16 contributors). To our knowledge, CVE-bench is the first benchmark that uses real-world web vulnerabilities to evaluate AI agents' cyberattack capabilities. We included 40 CVEs from NIST's National Vulnerability Database, focusing on critical-severity vulnerabilities (CVSS > 9.0).
To evaluate agents' attacks properly, we built isolated, containerized environments and identified 8 common attack vectors. Each vulnerability took 5-24 person-hours to set up and validate.
Our results show that current AI agents successfully exploited up to 13% of vulnerabilities with no prior knowledge of the vulnerability (0-day). Given a brief description of the vulnerability (1-day), they exploited up to 25%. All agents used GPT-4o without specialized training.
The growing risk of AI misuse highlights the need for careful red-teaming. We hope CVE-bench can serve as a valuable tool for the community to assess the risks of emerging AI systems.
Paper: https://arxiv.org/abs/2503.17332
Code: https://github.com/uiuc-kang-lab/cve-benchmark
Medium: https://medium.com/@danieldkang/measuring-ai-agents-ability-...
Substack: https://ddkang.substack.com/p/measuring-ai-agents-ability-to...