January 11th, 2011

“The mathematics of narcissism”


Posted by: Geoff Davis

Fellow mathematician Jordan Ellenberg has an unusual take on the NRC’s rankings: in Slate he compares the NRC’s approach to ranking graduate programs to a new method psychologists are using for classifying mental illnesses.

The article is worth reading in full, but the gist is that there are two standard approaches to dealing with high-dimensional data sets: you can cluster items into groups, or you can use statistical techniques to reduce dimensionality, typically by discarding the dimensions that carry the least information. The NRC uses one method, and Ellenberg thinks it might benefit from using the other.
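To make the two approaches concrete, here’s a minimal sketch using scikit-learn on made-up data – the measurement matrix and all numbers below are invented for illustration, not drawn from the NRC’s data, and nothing here reproduces the NRC’s actual computation:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    # Invented data: 120 programs, each described by 20 measurements
    # (publications per faculty member, time to degree, etc.).
    rng = np.random.default_rng(0)
    programs = rng.normal(size=(120, 20))

    # Approach 1: clustering. Group similar programs together; each
    # program gets a discrete label, with no ordering implied.
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(programs)

    # Approach 2: dimensionality reduction. Project the 20 measurements
    # onto the single direction of greatest variance, yielding one
    # number per program that can then be sorted into a ranking.
    scores = PCA(n_components=1).fit_transform(programs).ravel()
    ranking = np.argsort(-scores)

The clusterer hands back unordered group labels; the reduction hands back a single number per program that can be sorted into a ranking – which is precisely the choice at issue here.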

The forthcoming Diagnostic and Statistical Manual of Mental Disorders (the DSM-V) is switching from a cluster-centric approach to a dimension-reducing one, replacing clusters like “narcissistic personality disorder” with a set of six measurements (“negative emotionality, introversion, antagonism, disinhibition, compulsivity, and schizotypy”). This is apparently prompting grumblings from psychologists who find more value in the familiar clusters than in abstract six-dimensional vectors.

The NRC has also chosen a dimensionality-reduction approach, boiling 20 program measurements down to a single quality dimension. Ellenberg suggests that a clustering approach might be more helpful, and cites a recent experiment:

The NRC, on the other hand, might have done better to toss the idea of rankings entirely, and just clustered the departments into natural groupings. The statistician Leland Wilkinson ran a quick and dirty clustering on the NRC data for math departments. He found that the departments broke up into five clusters: 10 elite departments, a big group of 59 upper-tier departments, 47 lower-tier departments, and two smaller clusters whose meaning, if any, isn’t clear to me. This is much coarser information than a full ranking—but it has the advantage of not depending on politically contentious choices as to which criteria matter most.

It’s an interesting idea, and I think there’s some value to the approach. Indeed, the Carnegie Foundation already does something similar for universities, though probably not in a particularly statistically rigorous fashion. Well-chosen clusters would allow for saner comparisons – it doesn’t really make sense to compare some kinds of programs directly, as they cater to very different audiences with different goals.

That said, I very much doubt that the clustering approach would prove any more satisfactory than what the NRC actually did. Do you think that a prospective student or department chair would be any happier to learn that a program fell into a cluster of 59 “upper-tier departments” than to know that the program ranked between 16th and 27th on the NRC’s quality scale?

While a clustering approach sidesteps the need to explicitly choose important criteria, there is very much a devil-in-the-details problem. Different clustering approaches can yield very different clusters. Even the simplest methods involve many choices – at the very least you have to choose a measure of similarity, and that in turn will emphasize and de-emphasize different program characteristics. You’re essentially trading an explicit, principled choice about what’s important for an implicit, opaque one.
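To illustrate how much these implicit choices matter, here’s a small sketch (again with invented numbers): merely standardizing the measurements before running k-means – which is effectively a change of similarity measure – can produce a substantially different clustering of the very same programs.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score
    from sklearn.preprocessing import StandardScaler

    # Two invented measurements on very different scales: research
    # funding in dollars and mean GRE scores. Under plain Euclidean
    # distance, funding dominates completely.
    rng = np.random.default_rng(1)
    funding = rng.normal(5e6, 2e6, size=(120, 1))
    gre = rng.normal(700, 30, size=(120, 1))
    data = np.hstack([funding, gre])

    raw = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)

    # Standardizing first says funding and GRE scores matter equally --
    # a different implicit similarity measure, baked in before the
    # algorithm even runs.
    scaled = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
        StandardScaler().fit_transform(data))

    # A score of 1.0 would mean the two clusterings agree perfectly;
    # on data like this it typically falls well below that.
    print(adjusted_rand_score(raw, scaled))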

Regardless, I’d be curious to see more details of Wilkinson’s approach. I imagine he just did some kind of k-means clustering – simple, but likely interesting.
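For what it’s worth, if it was something like k-means, the quick-and-dirty version is only a few lines. The data below is an invented stand-in for the NRC’s per-program measurements, and the loop over k is one crude way to ask whether the data naturally breaks into about five groups, as Wilkinson found:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Invented stand-in for the NRC data: 127 departments x 20 measurements.
    rng = np.random.default_rng(0)
    X = StandardScaler().fit_transform(rng.normal(size=(127, 20)))

    # Sweep the number of clusters and watch the within-cluster sum of
    # squares (inertia); a pronounced "elbow" hints at a natural number
    # of groupings.
    for k in range(2, 9):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        print(k, round(km.inertia_, 1))

    # With a choice of k in hand, the cluster sizes are the headline
    # result -- on the real NRC data, one would hope to see something
    # like Wilkinson's 10 / 59 / 47 split plus two small clusters.
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(labels))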