Science & Baseball
Back in the 1990s, the Oakland A’s were a mediocre team. In baseball, the traditional way to improve a team has been to hire a bunch of superstar players. The trouble with being a mediocre team, though, is that it’s hard to sell tickets, which makes it tough to pay for expensive talent. To make matters worse, the A’s were one of the poorest teams in baseball, with a total payroll only a third that of the richest team, the Yankees. So the A’s were stuck.
Desperate to change things, Billy Beane, the A’s general manager, embarked on a new and strange strategy: he started using ideas from an obscure, mimeographed fanzine on baseball statistics in his hiring decisions. There was tremendous push-back against the new approach, and indeed, the new hires were a strange lot. Instead of the young, athletic guys the recruiters were used to courting, the team was now signing older players, overweight players, even a pitcher with a club foot. But the new players were cheap, and Oakland started winning, and within a few years they were making deep postseason runs, falling only to the Yankees.
What happened? It turns out that the amateur baseball statisticians Beane started listening to were on to something. The traditional measures of baseball performance miss some important things. For example, batting average, the traditional way of rating hitters, gives no credit to players who choose not to swing at wild pitches and instead get walked. Beane’s new ways of evaluating players proved better at measuring their actual contribution to the game, and that gave him a huge advantage: he could identify players who were valuable in ways nobody else could see, and he could hire them at a fraction of the price of more conspicuous talent.
Michael Lewis tells the full story of the A’s rise in his excellent book Moneyball: The Art of Winning an Unfair Game. Even if you don’t care for baseball, I highly recommend the book. The reason: it’s not really about baseball. It’s about metrics, why they matter, and how difficult it is to get people to pay attention to the right ones.
Science funding is heavily influenced by metrics, in particular the NSF’s Science and Engineering Indicators. Are we measuring the right things? Or are there things we should be doing better, like the Oakland A’s, to direct resources to untapped opportunities?
I’ve been invited to participate in a National Academies assessment of existing science metrics, so over the next two years or so, we’ll see.