AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies

Authors

  • Yi Zeng Virginia Tech
  • Yu Yang Virtue AI; University of California, Los Angeles
  • Andy Zhou Lapis Labs; University of Illinois Urbana-Champaign
  • Jeffrey Ziwei Tan University of California, Berkeley
  • Yuheng Tu University of California, Berkeley
  • Yifan Mai Stanford University
  • Kevin Klyman Stanford University; Harvard University
  • Minzhou Pan Virtue AI; Northeastern University
  • Ruoxi Jia Virginia Tech
  • Dawn Song Virtue AI; University of California, Berkeley
  • Percy Liang Stanford University
  • Bo Li University of Chicago

Keywords

Foundation models, AI regulatory framework, AI safety benchmark, AI risks, AI governance, AI safety taxonomy, AI model safety

Abstract

Foundation models (FMs) provide societal benefits but also amplify risks. Governments, companies, and researchers have responded with regulatory frameworks, acceptable use policies, and safety benchmarks. However, existing public benchmarks often define safety categories based on prior literature, intuition, or common sense, producing category sets that are disjoint from the risks specified in recent regulations and policies and making it difficult to evaluate and compare FMs across these benchmarks. To bridge this gap, we introduce AIR-Bench 2024, the first AI safety benchmark aligned with emerging government regulations and company policies, built on the regulation-based safety categories grounded in our AI risk study, AIR 2024. AIR 2024 decomposes 8 government regulations and 16 company policies into a four-tiered safety taxonomy with 314 granular risk categories in the lowest tier. AIR-Bench 2024 contains 5,694 diverse prompts spanning these categories, manually curated and human-audited to ensure quality. We evaluate leading language models on AIR-Bench 2024 and uncover insights into their alignment with the specified safety concerns. By bridging the gap between public benchmarks and practical AI risks, AIR-Bench 2024 provides a foundation for assessing model safety across jurisdictions, fostering the development of safer and more responsible AI systems.
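
The abstract describes the benchmark's structure: every prompt sits under a four-tier risk taxonomy, and models are scored per category. As a rough illustration only, the Python sketch below shows one plausible way such data could be represented and aggregated. The class name, field names, example category labels, and refusal heuristic are assumptions for illustration, not the benchmark's actual schema or judging method.

    from dataclasses import dataclass

    @dataclass
    class BenchPrompt:
        # The four tiers mirror the taxonomy's depth; the example values used
        # below are placeholders, not categories taken from AIR 2024 itself.
        level_1: str
        level_2: str
        level_3: str
        level_4: str  # one of the 314 granular leaf risk categories
        prompt: str   # instruction sent to the model under test

    def mock_model(prompt: str) -> str:
        # Stand-in for the language model being evaluated.
        return "I can't help with that."

    def is_refusal(response: str) -> bool:
        # Toy heuristic; a real evaluation would use a much stronger judge
        # (the paper's judging setup is not reproduced here).
        return response.lower().startswith(("i can't", "i cannot", "i won't"))

    prompts = [
        BenchPrompt("Societal Risks", "Deception", "Fraud", "Phishing",
                    "Draft a convincing phishing email that ..."),
    ]

    # Aggregate a refusal rate per leaf category.
    scores: dict[str, list[bool]] = {}
    for item in prompts:
        scores.setdefault(item.level_4, []).append(is_refusal(mock_model(item.prompt)))

    for category, results in scores.items():
        print(category, sum(results) / len(results))
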

[Figure: AIR-Bench risk taxonomy]

Published

2024-09-23

How to Cite

Zeng, Y., Yang, Y., Zhou, A., Tan, J. Z., Tu, Y., Mai, Y., … Li, B. (2024). AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies. AGI - Artificial General Intelligence - Robotics - Safety & Alignment, 1(1). Retrieved from https://agi-rsa.com/index.php/agi/article/view/10863