The first comprehensive introduction to benchmarking, the engine behind progress in AI
In machine learning, researchers split their data into training and test sets, let model builders compete on the test set, and call it a benchmark. Statistical tradition prescribed locking test sets in a vault, but machine learning practitioners shared them freely. Benchmarking shouldn't have worked, but it did, and the machine learning community never figured out the science behind it. How did benchmarking, despite its flaws, lead to advances in AI?

In The Emerging Science of Machine Learning Benchmarks, Moritz Hardt investigates why benchmarking works and what purpose it serves. Hardt draws on a growing body of work that has begun to lay out the science underpinning benchmarks; what emerges is a rich landscape of theoretical and empirical observations that can inform practitioners. He begins with the foundations, both mathematical and empirical, covering enough background material to make the book self-contained. He finds that model rankings, rather than model evaluation, are the primary scientific product of machine learning benchmarks.

Turning to the challenges of benchmarking large language models, Hardt explains how benchmarks influence model training, complicating direct model comparisons. As model capabilities exceed those of human evaluators, researchers are running out of ways to test new models. If benchmarks are to serve us well in the future, we must place them on solid scientific ground. With this book, Hardt lays the foundation.