Models That Prove Their Own Correctness

Authors

  • Noga Amit University of California, Berkeley
  • Shafi Goldwasser University of California, Berkeley
  • Orr Paradise University of California, Berkeley
  • Guy N. Rothblum Massachusetts Institute of Technology

Keywords:

AI model safety, Self-Proving models, verification algorithm, soundness property, interactive proof, bounded probabilistic proof, software validation, software verification

Abstract

How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically measured *on average* over a distribution of inputs, giving no guarantee for any fixed input. This paper proposes a theoretically-founded solution to this problem: to train *Self-Proving models* that prove the correctness of their output to a verification algorithm V via an Interactive Proof. Self-Proving models satisfy that, with high probability over a random input, the model generates a correct output *and* successfully proves its correctness to V. The *soundness* property of V guarantees that, for *every* input, no model can convince V of the correctness of an incorrect output. Thus, a Self-Proving model proves correctness of most of its outputs, while *all* incorrect outputs (of any model) are detected by V. We devise a generic method for learning Self-Proving models, and we prove convergence bounds under certain assumptions. The theoretical framework and results are complemented by experiments on an arithmetic capability: computing the greatest common divisor (GCD) of two integers. Our learning method is used to train a Self-Proving transformer that computes the GCD *and* proves the correctness of its answer.
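For concreteness, below is a minimal sketch, in Python, of one sound verifier V for the GCD capability described in the abstract (an illustrative assumption, not necessarily the Interactive Proof used in the paper): the model outputs a candidate d for gcd(a, b) and, as a one-round certificate, Bézout coefficients (x, y); V accepts iff d is a positive common divisor of a and b with a·x + b·y = d. Any d that passes these checks must equal gcd(a, b), so no model can convince this V of an incorrect output.

```python
# A minimal sketch (an illustrative assumption, not the paper's exact
# Interactive Proof): a verifier V for the GCD capability. The model outputs
# a candidate d for gcd(a, b) together with a one-round certificate of
# Bezout coefficients (x, y); V accepts iff d is a positive common divisor
# of a and b with a*x + b*y == d.
#
# Soundness: any common divisor of a and b divides a*x + b*y, so an accepted
# d is divisible by gcd(a, b); being itself a common divisor, d = gcd(a, b).

from math import gcd


def verify_gcd(a: int, b: int, d: int, proof: tuple[int, int]) -> bool:
    """Accept the claimed GCD `d` iff the Bezout certificate `proof` checks out."""
    x, y = proof
    return d > 0 and a % d == 0 and b % d == 0 and a * x + b * y == d


if __name__ == "__main__":
    a, b = 240, 46
    # Honest completion: gcd(240, 46) = 2, certified by 240*(-9) + 46*47 = 2.
    assert verify_gcd(a, b, 2, (-9, 47))
    # Incorrect output: 4 does not divide 46, so V rejects regardless of the
    # certificate -- no prover can convince V of this wrong answer.
    assert not verify_gcd(a, b, 4, (1, -5))
    assert gcd(a, b) == 2
    print("verifier accepts the correct GCD and rejects the incorrect one")
```

In the paper's general framework, V may also be randomized and interactive; the Self-Proving model is trained so that, for most inputs, it both outputs the correct answer and produces a transcript that V accepts.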

[Figure: Self-Proving models schematic]

Published

2024-09-23

How to Cite

Amit, N., Goldwasser, S., Paradise, O., & Rothblum, G. N. (2024). Models That Prove Their Own Correctness. AGI - Artificial General Intelligence - Robotics - Safety & Alignment, 1(1). Retrieved from https://agi-rsa.com/index.php/agi/article/view/10867