The AI Risk Repository

A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence

Authors

  • Peter Slattery MIT FutureTech, Massachusetts Institute of Technology, Ready Research
  • Alexander K. Saeri MIT FutureTech, Massachusetts Institute of Technology, Ready Research
  • Emily A. C. Grundy MIT FutureTech, Massachusetts Institute of Technology, Ready Research
  • Jess Graham School of Psychology, The University of Queensland
  • Michael Noetel School of Psychology, The University of Queensland, Ready Research
  • Risto Uuk Future of Life Institute, KU Leuven
  • James Dao Harmony Intelligence
  • Soroush Pour Harmony Intelligence
  • Stephen Casper Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
  • Neil Thompson MIT FutureTech, Massachusetts Institute of Technology

Keywords:

AI risks, AI governance, AGI risks, AGI governance, AI risks database, AI risks taxonomy

Abstract

The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them. This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference. The Repository comprises a living database of 777 risks extracted from 43 taxonomies, which can be filtered based on two overarching taxonomies and easily accessed, modified, and updated via our website and online spreadsheets. We construct our Repository with a systematic review of taxonomies and other structured classifications of AI risk, followed by an expert consultation. We develop our taxonomies of AI risk using a best-fit framework synthesis. Our high-level Causal Taxonomy of AI Risks classifies each risk by its causal factors: (1) Entity: Human, AI; (2) Intentionality: Intentional, Unintentional; and (3) Timing: Pre-deployment, Post-deployment. Our mid-level Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental, and (7) AI system safety, failures, & limitations. These are further divided into 23 subdomains. The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. It creates a foundation for a more coordinated, coherent, and complete approach to defining, auditing, and managing the risks posed by AI systems.

Figure A. Methodology to identify and categorize AI risks

Published

2024-09-23

How to Cite

Slattery, P., Saeri, A. K., Grundy, E. A. C., Graham, J., Noetel, M., Uuk, R., … Thompson, N. (2024). The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence. AGI - Artificial General Intelligence - Robotics - Safety & Alignment, 1(1). Retrieved from https://agi-rsa.com/index.php/agi/article/view/10881