A preliminary. I did not expect, nor did I desire, to write (yet) another post on the Philosophical Gourmet Report. I thought it probably best for philosophers to ignore the PGR and watch it pass quietly into the night, unloved and unwanted. For those who find its prestige-driven myopia to their liking, I would just say, go with God.
But then I saw the notice for the new PGR on the Daily Nous. I looked at the site, and while I managed to avoid being turned to stone, what I saw was in some ways more disturbing than PGRs of the past, this time all dressed up in a new web suit. I worried that younger people, people unaware of the well-documented problems with the rankings, might find the new PGR on the internet and innocently consume its results. I felt an obligation to address at least a few of the problems with the new iteration, problems which are manifest in the editors’ description of what the PGR has to offer, as well as in the constitution of the new Board.
___________________
The PGR is back, boasting a shiny new website with conspicuous ads from Wiley Blackwell. Leaving aside its financial benefits (for Wiley Blackwell, for Leiter’s blog, for faculty and departments at well-ranked programs), at some level it’s just strange that we keep getting updated PGRs, for there isn’t much that changes between the reports. As the PGR site notes about the current PGR, “the results were remarkably stable from prior years.” And you can check out how little change there has been between the 2006 PGR’s overall rankings and the current survey in an analysis done by one of the co-editors of the PGR, Christopher Pynes, here. As a matter of fact, you can go back to Leiter’s original PGR, discussed here, to see how little has changed in the overall rankings of the so-called top-tier schools over the last 20 years. Seriously, can the most recent PGR pass a basic pragmatic test? Is there, between versions, a difference that makes a difference?
Before and after the brouhaha that preceded the publication of the last Philosophical Gourmet Report (2014-15), I and others issued many, many essays and analyses addressing the PGR’s methodological limitations and biases, and the problems internal to the PGR’s pool of evaluators: for example, the limited number of evaluators in certain areas, the lack of women and philosophers of color serving as evaluators, and the use of the same people to evaluate in multiple areas, sometimes straining credulity as to whether these people actually possessed expertise in all of the areas they were ranking. Links to many of these articles and posts can be found at “A User’s Guide to Philosophy Without Rankings.”
But this time around it may require fewer posts and less effort to highlight the problems with the PGR. Leiter and company have given us a gift here, namely, a new Board that in many ways encapsulates the provinciality, the insularity, and the other general limitations of the PGR.
One of the new Board’s important functions was assisting in the selection of the current evaluators. Here is how this activity is described on the PGR site.
Evaluators were selected with an eye to balance, in terms of area, age and educational background–though since, in all cases, the opinions of research-active faculty were sought, there was, necessarily, a large number of alumni of the top programs represented. Approximately half those surveyed were philosophers who had filled out the surveys in previous years; the other half were nominated by members of the Advisory Board, who picked research-active faculty in their fields. (Emphasis added.)
The new Board had a serious responsibility here: nominating half of the evaluators. Since one of the criticisms of the PGR has been the lack of intellectual diversity among evaluators, letting the Board nominate half of the evaluators certainly looked promising, even if the other half came from the past pool. So let’s take a look at this new Board.
The previous Board had 48 members. Instead of maintaining this number, which might have allowed for the addition of people with a greater variety of backgrounds, the PGR went in the other direction. The new Board has only 20 members, including Brian Leiter. Of the 20 current members, 16 were on the previous Board. (These 16 do not include Leiter himself.) Three new voices were added to the Board; four if you include Leiter. This doesn’t bode well for any significant change. But perhaps the academic backgrounds of the members of the Board will suggest otherwise. It’s commonly assumed that where someone does his/her/their graduate work can have a significant impact on how that person views philosophy–even what counts as good or acceptable philosophy–and can help define that person’s professional network. How much diversity do we have in professional training here? Out of the 16 members of the new Board who received their doctoral degrees in the United States:
- 13 received their doctorates from schools that are in the top 10 in the new PGR’s overall rankings.
- 2 in the top 20.
- 1 in the top 40.
For those outside the U.S., two received their doctorates from Oxford, one from the Sorbonne (now teaching at Pittsburgh), and one from Munich. Oxford is the PGR’s top ranked UK school, and #2 in the English-speaking world. The European schools are, of course, unranked.
This, however, reflects the current rankings. What if we were to look at the PGR in the past? Thanks to the work done by Christopher Pynes, this is easy enough, so after a little back-to-the-futuring, here are the results, attesting to the remarkable stability of the PGR.
- 12 received their doctorates from the top ten schools in the 2006 PGR.
- 3 in the top 20.
- 1 in the top 40.
If there isn’t much diversity in this regard, maybe there is significant diversity in the schools that people attended within these tiers. Out of the 16 Board members who received degrees in the United States:
- 5 received their doctorates from Princeton (and at least two others went to Princeton as undergrads, including Leiter). That’s almost 1/3 of all the Board members with U.S. doctorates.
- Three programs, Princeton, Pittsburgh, and Michigan, provided the doctorates of 11 of the 16 members with U.S. degrees–that is, 69% of that group, or 55% of the full 20-member Board. (The quick recomputation below checks the arithmetic.)
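Since these percentages do real work in the argument, here is a short, purely illustrative recomputation in Python. The counts are simply those from the lists above; nothing here is new data.

```python
# Recomputes the Board-composition percentages cited above.
# Counts come from the lists in this post.

us_doctorates = 16   # Board members whose doctorates are from U.S. programs
full_board = 20      # total Board members, including Leiter

princeton = 5        # U.S. doctorates from Princeton alone
big_three = 11       # doctorates from Princeton, Pittsburgh, and Michigan combined

print(f"Princeton share of U.S. doctorates: {princeton / us_doctorates:.0%}")  # 31%
print(f"Big-three share of U.S. doctorates: {big_three / us_doctorates:.0%}")  # 69%
print(f"Big-three share of the full Board:  {big_three / full_board:.0%}")     # 55%
```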
Okay, let’s not give up yet. Maybe among the four new people, who include Leiter, the PGR editors made a real effort to achieve diversity of background.
- 2 of the new Board members received their degrees from Princeton.
- 2 from Michigan.
The story of why there is so much year-to-year stability in the PGR doesn’t end with the lack of change in the Board’s personnel and their backgrounds. In the “Description of the Report,” we are told that
[t]he survey presented 91 faculty lists, from the United States, Canada, United Kingdom, and Australia and New Zealand. Note that there are some 110 PhD-granting programs in the U.S. alone, but it would be unduly burdensome for evaluators to ask them to evaluate all these programs each year. The top programs in each region were selected for evaluation, plus a few additional programs are included each year to “test the waters.” (Emphasis added.)
And in the “Overall Rankings” section we are informed that
[a]ll programs in the top 50 in the U.S., the top 15 in the U.K., and the top 5 in Canada and Australasia from the prior survey were included in this year’s survey. Based on this and past year results, we have reason to think that no program not included in the survey would have ranked ahead of these programs. Other programs evaluated this year are listed unranked afterwards; there may well have been programs not surveyed this year that could have fared as well.
Full stop. Yes, you read that correctly. Someone–or some group–decided which programs would be evaluated this year based on the previously ranked top programs. And someone–or some group–decided, and will decide in the future, which few additional programs may get a shot at their place in the sun in PGRs to come. Criteria, anyone? Well, if you have already made it, then you have made it. You are included by virtue of having been at the top in the past. There’s a criterion for you! (Perhaps we should let the wealthy 1% determine the country’s economic policies in perpetuity, since they have already proven their mettle, just like the members of our top-tier schools.) Going forward, are we at the mercy of Leiter and the two current co-editors in determining which departments get a chance at being ranked, or does the Board decide–you know, that Board with the wonderful range of training?
Speaking of criteria, as we have noted in the past (here, for example), the editors appear, again, to have provided no guidance about the criteria evaluators should use to produce the overall rankings. I am not making this up. Here is a statement on this from the current PGR.
Different respondents had different “centers of gravity” in their scoring: some gave no 5s, others gave no score lower than a 2. It was also clear that respondents had different philosophies of evaluation: some clearly tried to consider the breadth of strength in a department, while others ranked a program highly or lowly based simply on its strength in his or her fields. (Emphasis added.)
And yet, remarkably, even with these different mixes of criteria–somehow, year to year–the overall rankings don’t change much. It’s miraculous. Or it’s confirmation bias run amok. Or perhaps the rankings really do provide an objective picture of the quality of departments. But here’s the thing: even if the latter were true, the PGR’s methodology wouldn’t warrant that assertion. A toy example of the scoring problem is sketched below.
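To make the “centers of gravity” worry concrete, here is a small, hypothetical sketch in Python. It is emphatically not the PGR’s actual procedure (the report does not say how, or whether, scores were normalized), and the evaluators, departments, and scores are invented. It shows how pooled raw averages punish a department whose only evaluator grades harshly, and how per-evaluator z-scoring, one standard survey-methodology remedy, removes the scale difference.

```python
# Hypothetical illustration: why evaluators with different "centers of
# gravity" distort pooled raw means, and how z-scoring each evaluator's
# ballot corrects for it. All names and numbers are invented.
from statistics import mean, pstdev

# The harsh evaluator saw departments A and B; the generous evaluator saw
# A and C. Within each ballot, B and C sit equally far below A.
ballots = {
    "harsh":    {"A": 3.0, "B": 2.0},
    "generous": {"A": 5.0, "C": 4.0},
}

def zscore(ballot):
    """Rescale one evaluator's scores to mean 0, population std 1."""
    mu, sigma = mean(ballot.values()), pstdev(ballot.values())
    return {dept: (s - mu) / sigma for dept, s in ballot.items()}

def pooled(bs):
    """Average each department's scores across all ballots that include it."""
    depts = {d for b in bs.values() for d in b}
    return {d: mean(b[d] for b in bs.values() if d in b) for d in depts}

raw = pooled(ballots)
normalized = pooled({name: zscore(b) for name, b in ballots.items()})

for d in sorted(raw):
    print(f"{d}: raw mean = {raw[d]:.2f}, z-scored mean = {normalized[d]:+.2f}")
```

On the raw means, C (4.00) ties A and sits far above B (2.00) purely because C’s evaluator grades generously; after z-scoring, B and C land on equal footing (-1.00 each), reflecting that each sat equally far below A on its own evaluator’s scale. Nothing in the PGR’s description indicates that any such correction is applied.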
Also, please note the serious bad faith of the PGR’s editors:
As in the past, we did not include the name of the university with the faculty lists. This has proved beneficial in forcing evaluators to respond to the current faculty. As one respondent put it a few years ago: “surprisingly tough to say what I think, without the institutional halo effect front loaded.”
Please. Are you really going to suggest that not mentioning the names of departments, but listing their faculty, is going to prove beneficial, as if doing so would lead to a more impartial evaluation? On the contrary. It may further distort the results. “Top” departments will be immediately recognized, but others won’t be, leading to relatively lower scores for them. Why? As the PGR notes above, “surprisingly tough to say what I think, without the institutional halo effect front loaded.” Result: halo effect city for the PGR.
One last word: when Leiter originally published the PGR, he took special care to call it a ranking of graduate programs in analytic philosophy:
THE PHILOSOPHICAL GOURMET REPORT, JUNE EDITION, 1995-1996
A Ranking of U.S. Graduate Programs in Analytic Philosophy*
Why has this title been dropped over the years? It’s obvious that this is in fact what the PGR is. Virtually no departments that focus on continental, American, or non-Western philosophy (that is, departments that aren’t primarily analytic) make it into the top tiers of the overall rankings. The unfairness and bias here are palpable. And yet one modest remedy, calling the PGR what it actually is, eludes the editors. Why, indeed?
____________________
*The term “analytic” is in bold in Leiter’s original title. In Leiter’s own description of the 1995-1996 Report, the word “analytic” is in bold and underlined, which I assume meant that it was to be italicized. Ditto for “tenured.”