I have heavily revised this post. The new version appears on New Apps on November 2, 2014 under the same title.
According to a recent post on Brian Leiter’s Blog, you will soon be receiving surveys to fill out for the PGR.
My co-editor Brit Brogaard (Miami) and her RA have done a great job finishing the evaluator and faculty list spreadsheets, and the IT professionals here should have a testable version of the survey ready for us to try out during the weekend. If all goes well, Brit will send out the invitations to evaluators early next week (Monday or Tuesday is our goal). We agreed to a somewhat shorter window for responses (two weeks, rather than three weeks) due to the late start date this year and our goal of getting the results out in time for students applying in the current cycle.
UPDATE: The IT folks are still working out certain bugs in the survey program, so we won’t be able to test it before Monday. That means, at the soonest, Prof. Brogaard will be sending out invitations on Tuesday or perhaps Wednesday of next week (Oct. 21 or Oct. 22).
I am sure you are aware of the controversy surrounding the PGR’s rankings, which carry an air of legitimacy because philosophers produce them. Many persuasive pieces have been written about the biases inherent in surveys of this sort. I write as someone convinced that rankings do more harm than good; my own preference would be a comprehensive informational web site with a sophisticated search engine. But I will not try to convince you of that here. Instead, I want to run some numbers by you and ask that you consider them before filling out this year’s survey. My concerns are not original, but they are worth weighing as you decide whether to participate. Many philosophers do not fill out the survey when they receive it, and there are good reasons for you to take a pass on it this year. Here’s why.
According to Leiter, he is currently working from a list of 560 nominees to serve as evaluators for the 2014-2015 PGR. During the last go-around in 2011, 271 philosophers filled out the part of the survey dealing with overall rankings, and a total of 300 filled out the overall and specialty rankings. Leiter claims that in 2011 the on-line survey was sent to 500 philosophers. So some 200 philosophers who received it decided NOT to fill it out.
Let’s consider some of the numbers. Three hundred may seem a reasonable number of evaluators, but that total obscures crucial details, and one doesn’t need any sophisticated statistical analysis to see how problematic they are. Of the thirty-three specializations evaluated in the PGR, slightly more than 60% have twenty or fewer evaluators. That’s right, twenty or fewer. Think about that for a moment: twenty or fewer philosophers, in one case as few as three, are responsible for ranking 60% of the specializations, which many consider to be the most important feature of the PGR.
But it is actually worse than this. Some areas have far fewer evaluators than others. For example, the PGR lists nine specializations under the History of Philosophy rubric. Six of the nine have twenty or fewer evaluators, and one of them, American Pragmatism, has only seven. In fact, the only general category in which a majority of specializations have more than twenty evaluators is “Metaphysics and Epistemology”: five of its seven specialties clear that bar. None of the other rubrics (Philosophy of Science and Mathematics, Value Theory, and the History of Philosophy) has a majority of specializations with more than twenty evaluators. And in the three specializations outside these rubrics we find eleven evaluators for feminist philosophy, three for Chinese philosophy, and four for philosophy of race. (Yes, the PGR actually provides rankings for Chinese Philosophy on the basis of three evaluators.)
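The arithmetic behind the 60% figure is easy to check. Here is a minimal sketch; only the four per-specialization counts the post names (seven, eleven, three, four) are taken from it, and the remaining counts are placeholders chosen purely to reproduce the stated proportions, not actual PGR data.

```python
def share_small_panels(counts, threshold=20):
    """Fraction of specializations ranked by `threshold` or fewer evaluators."""
    small = sum(1 for n in counts if n <= threshold)
    return small / len(counts)

# Hypothetical counts for the 33 PGR specializations; only 7, 11, 3, and 4
# come from the post itself, the rest are illustrative placeholders.
counts = [7, 11, 3, 4] + [15] * 16 + [30] * 13  # 33 entries, 20 at or below 20
print(f"{share_small_panels(counts):.0%} of specializations have <= 20 evaluators")
# prints: 61% of specializations have <= 20 evaluators
```

On any plausible fill-in of the unstated counts, the conclusion is the same: a clear majority of the specialty rankings rest on panels of twenty or fewer.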
But don’t take my word for this problem. Here is what Leiter says on the 2011 survey site:
Because of the relatively small number of raters in each specialization, students are urged not to assign much weight at all to small differences (e.g., being in Group 2 versus Group 3). More evaluators in the pool might well have resulted in changes of .5 in rounded mean in either direction; this is especially likely where the median score is either above or below the norm for the grouping.
I’m sorry, but urging students “not to assign much weight at all to small differences” does not solve the problem. No weight should be assigned to specializations ranked by so few people. This is not rocket science; it is common sense. You cannot evaluate the quality of specializations with so many facets using so few people, who were themselves selected by another small group, the Board, which clearly favors certain specializations given the distribution of evaluators. (This is especially true when there has been no public discussion of what the standards for ranking specializations in philosophy should be.) Yet Leiter’s advice makes it appear that one should take the specialization rankings seriously, provided one just doesn’t assign too much weight to small differences. This is a shady rhetorical move.
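Leiter’s own caveat, that “more evaluators in the pool might well have resulted in changes of .5 in rounded mean,” is easy to illustrate. The sketch below assumes, as the quote suggests, that means are rounded to the nearest 0.5; the panel scores are hypothetical, not actual PGR data.

```python
def rounded_mean(scores, step=0.5):
    """Mean rounded to the nearest `step` (0.5, per the quoted caveat)."""
    return round(sum(scores) / len(scores) / step) * step

panel = [4, 4, 3, 4, 3, 3, 4]   # seven evaluators, hypothetical scores
print(rounded_mean(panel))      # 3.5
extra = panel + [2, 2]          # two more evaluators join the panel
print(rounded_mean(extra))      # 3.0
```

With only seven raters, the addition of just two dissenting voices moves the rounded mean a full half-point, enough to shift a program between groups. That volatility is a property of the tiny panels, and no amount of cautionary advice to readers removes it.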
I honestly don’t know how one could fill out the survey in good faith knowing that so few people are participating in ranking so many specializations. When you fill out the survey you are making a statement: you are lending your expertise to support this enterprise. The fact that you might be an evaluator in M & E, which has more evaluators than the other areas, does not lift the responsibility that comes with involvement. At a minimum, you are tacitly endorsing the whole project.
Ah, you say, but perhaps this year’s crop of evaluators will be more balanced. The way the PGR is structured undermines this hope. The evaluators are nominated by the Board, which has roughly fifty members, most of whom are the same as last time. But here’s the kicker: Brian asks those leaving the Board to suggest a replacement. The obvious move for a departing Board member is to nominate a replacement in his or her own area, probably from his or her own circle of experts. In Leiter’s words, “Board members nominate evaluators in their areas of expertise, vote on various policy issues (including which faculties to add to the surveys), serve as evaluators themselves and, when they step down, suggest replacements.” So there is no reason to believe that the makeup of the pool of evaluators has markedly changed since the last go-around.
The 2014-2015 PGR survey will be in place for at least the next two years, maybe more, given the difficulties the PGR faces. Many young people will be influenced by it. Please consider taking a pass on filling out the survey. If enough of you do, the PGR will have to change or go out of business. Given the recent and continuing publicity surrounding the PGR, we should try to avoid the embarrassment that is likely when those outside philosophy, especially those who know about survey methods, discover our support for such a compromised rating system.
1) I have purposely kept the statistics in this post as simple and straightforward as possible in order to raise basic questions about imbalances and sample size in the current PGR. Based on these and other considerations, I ask prospective evaluators to reconsider filling out the survey. Gregory Wheeler has a nice series on some of the more in-depth statistical work at “Choice & Inference”; see the series and its concluding piece, “Two Reasons for Abolishing the PGR.”
2) If there is public content regarding changes to the PGR that is available, and that I somehow missed, I would appreciate being informed about it. As far as I know, no fundamental change is taking place in this year’s PGR.
3) I counted the number of evaluators in the different categories myself. Of course I could have made an error somewhere, but the numbers are certainly accurate enough to back up my concerns.