“[T]here isn’t any fact in the world that can prove or disprove the quality of particular philosophical work. All there is in philosophy is the opinion of experts. Research universities–in their hiring and tenure decisions–are based on the premise that the opinion of experts is what matters. We have nothing else to go on.”–Brian Leiter*
As I have established in a series of recent posts, the problems with the Philosophical Gourmet Report are legion. These flaws have nothing whatever to do with the controversy surrounding its editor. They are native to the Report and are symptoms of its poor methodology, in particular, its use of snowball sampling, its faulty assumptions about consensus in the profession, its use of pools of evaluators with too-similar professional backgrounds, its dearth of women evaluators, its small to very small pools of evaluators for the specializations, its marginalization of many specializations, its favoring of sub-specializations within certain specializations, etc. Problems with the PGR have been masked to a degree by statements that Professor Leiter has made to defend it, which often involve exaggerated or misleading claims about its virtues, for example, his recent claim that there is a “remarkable convergence in overall evaluations of programs across almost all areas of philosophical specialization.” There isn’t, as we saw in the recent post, “The 2014 Philosophical Gourmet Report by the Numbers.”
In the “By the Numbers” post, I promised to address the fact that there are evaluators in specializations who are not experts or specialists in these areas. Leiter has insisted time and again that his evaluators are experts, indeed experts in their specializations. But before we get there we need some context for this issue. The pools of evaluators for the PGR are relatively small, and in many cases very small. Here is a breakdown for the 2014 PGR, excluding Feminist Philosophy:**
- 72% of the specializations have 20 or fewer evaluators.
- 38% have 10 or fewer.
- 28% have 8 or fewer.
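To put those percentages in terms of actual head counts, here is a minimal sketch. It assumes the figure of 32 specializations (Feminist Philosophy excluded) given in the second footnote below; the rounded counts are my illustration, not numbers published in the Report.

```python
# Rough head counts implied by the percentages above. The figure of 32
# specializations (Feminist Philosophy excluded) is taken from the footnote;
# the rounded counts are an illustration, not the Report's own figures.
SPECIALIZATIONS = 32

shares = [
    ("20 or fewer evaluators", 0.72),
    ("10 or fewer evaluators", 0.38),
    ("8 or fewer evaluators", 0.28),
]

for label, share in shares:
    count = round(share * SPECIALIZATIONS)
    print(f"{label}: roughly {count} of {SPECIALIZATIONS} specializations")
```

In other words, roughly 23 of the 32 specializations were ranked by 20 or fewer people, and roughly 9 by 8 or fewer.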
Given how few people are involved, it would seem that at minimum every voice should be that of an expert. And given how hard Professor Leiter has sold the accomplishments of his evaluators, I think it is fair to expect specialists to be currently active in the areas they are evaluating, because this is, after all, a ranking, and one would need to be as up-to-date as possible about the field in question.
On the 2014 Philosophical Gourmet Report’s website, in the first line under “Breakdown of Programs by Specialties,” Leiter states flatly that the specializations (or specialties) use experts, in a statement that has been on the site for years.
The rankings of programs in the specialty areas are based on surveys by experts in those specialties (emphasis in the original).
The language here is important. He is saying that the evaluators are experts in the specializations (or specialties). Bear with me here: this may seem obvious, but it is crucial to avoid any confusion about this. It is clear that this has been and still is Leiter’s public position. We can see it, for example, in a discussion of this year’s ranking in the philosophy of physics:
98% of undergraduates and their advisors wouldn’t know to recommend these programs as top choices in philosophy of physics–how could they, but for the PGR? They’d recommend Oxford, of course, which happens to be excellent in this area, but also Harvard, Yale, MIT, Stanford, which are not. But the PGR makes available to students everywhere the opinion of leading experts. It makes available the judgment of philosophers like Princeton’s Hans Halvorson (who teaches at another top ten philosophy of physics program, as it happens), and Huw Price (the Bertrand Russell Professor of Philosophy at Cambridge [another top 10 program in philosophy of physics] and a Fellow of the British Academy), and Jill North (a leading young philosopher of physics at Cornell University), and Lawrence Sklar (Michigan Professor, former John Locke Lecturer at Oxford and Fellow of the American Academy of Arts & Sciences), and David Wallace (leading young philosopher of physics at Oxford), among others. (Remember: evaluators can’t evaluate their own departments.) (Emphasis added.)
It’s slightly crazy that some people think making these assessments available to everyone somehow “harms” students and the profession. But as we’ve seen, some “philosophers” are slightly crazy.
First, there is no question that Leiter is referring here to experts in the philosophy of physics. In addition, he is making a case for using the PGR by saying, in effect, that this is what we do in the PGR, we use experts, in fact, leading experts. But perhaps only some of the evaluators in the philosophy of physics are experts, leaving him some wiggle room. However, the last phrase, “among others,” implies that everyone else evaluating in this specialization is an expert. (N.b.: only ten people evaluated in philosophy of physics in 2014 and he lists five of them before resorting to the phrase “among others.”) Second, the ranking in philosophy of physics, with its accompanying plug for the merits of the PGR, was the first specialization that Leiter posted on his blog this year, the first in a series of early releases in advance of the publication of the 2014 PGR on December 8th. At the time I was puzzled about why he released the philosophy of physics ranking first. But I now have a hypothesis: it was his strongest case, or one of his strongest cases, for advertising the discipline-specific expertise of every evaluator in the specializations, and, therefore, the value of the PGR, which would have been important for Leiter at the time, given the attrition in the ranks of evaluators for the 2014 PGR and the other events of the fall.
Be that as it may, this use of real experts in the philosophy of physics is not something we see consistently in the other specializations of the PGR. There are evaluators who are clearly not experts in particular specializations, and by expert I mean nothing arcane or excessively demanding, merely, someone who specializes (claims an AOS, not only an interest or an AOC) in an area and publishes regularly, repeatedly, and, at minimum, recently in that area. (The “recently” is important here since these folks are ranking departments in the present.) Whether we are dealing with real experts is no doubt important to know. If this were not the case, it would make the rankings in different specializations less reliable. In addition, inconsistencies in the level of expertise between and among different specializations would be a serious methodological problem for the PGR, and God knows it can’t afford any more.
However, I was left with something of a dilemma regarding how to proceed. I would not willingly single out any individual in public for participating in the PGR. I did not want to name names or embarrass anyone. I thought that I might work around this problem by describing the evaluators’ specializations in their own words, and saying something about their publication records. But of course in the age of Google, people would be able to figure out who everyone is very quickly. I discussed this dilemma with colleagues and there didn’t appear to be any clear solution. I decided to hold off and try to find another way.
As it turns out, Professor Leiter’s avidity for public defense of the PGR has relieved me of having to establish that there are evaluators who aren’t experts in certain specializations, because Professor Leiter himself declared three years ago that the PGR’s evaluators are often not experts in the specializations. Yes, really! He announced this in an on-line exchange during a discussion of French philosophy and the 2011 PGR. John Protevi had written a post on NewAPPS discussing evaluators in 20th Century Continental philosophy. Leiter, as is his wont, joined the discussion in the comments section. I begin with Protevi’s remarks in the original post, discussing whether any of the evaluators specialize in 20th Century French philosophy. (I have deleted comments in whole or part that did not pertain to this discussion. The title of the original post was “A brief look at 2011 PGR 20th-Century Continental Philosophy evaluators”.) Here is Protevi:
These are however only mentions of secondary interests; a quick, non-systematic, but I believe accurate survey of their posted CVs show that none have consistent publishing records on these figures – at most an article here and there. Such consistent publishing is I think a good indicator of those cognizant of the relevant contemporary secondary literature – that is, that produced by people actively working in contemporary French philosophy (emphasis added).
Your “quick” survey of the CVs isn’t very accurate, and I invite you to do a more careful one. And you might try this same exercise for any of the other specialties; most evaluators are asked to evaluate more than one area, and inevitably that means they evaluate areas in which they don’t necessarily work primarily. The one point I do agree with you on is that none of these people are interested in Irigaray, Kristeva, Badiou et al. (some are interested in Deleuze). But I think anyone interested mainly in those figures should probably not be going to a philosophy department anyway for a PhD (emphasis added).
Okay, pause, full stop. Yes, Leiter just said, “most evaluators are asked to evaluate more than one area, and inevitably that means they evaluate areas in which they don’t necessarily work primarily.” One of the observations I made in “By the Numbers” was that people often evaluated multiple times. You can see it in the proportion of evaluators who rank in multiple areas. So this is not news.** But it is news when Leiter claims that he in fact asks evaluators to do this and the result is that “most evaluators are asked to evaluate areas in which they don’t necessarily work primarily.” At the time, Protevi was surprised by this admission:
John Protevi said in reply to Brian…
A systematic review of what’s web-available turned up 7 articles by 5 evaluators on French figures. So “an article here and there” seems an apt way to describe the research of 22 of the 24 evaluators.
On your other point, I was surprised to hear that in other parts of the PGR people evaluate areas outside their primary research (emphasis added).
Brian said in reply to John Protevi…
John, you can’t really believe that unless one does “primary research” on 20th-century Continental philosophy that one can’t, therefore have a reasonably informed opinion about where to work on 20th-century Continental philosophy. In any case, that’s a side issue–the evaluators here have rather obviously unimpeachable credentials as scholars, senior and junior, in the Continental traditions in philosophy. If someone really doesn’t care what Pierre Keller, Michael Rosen, Beatrice Han-Pile think, then Godspeed to them! (emphasis added).
Okay, another stop. So–now we have moved from “in which they don’t necessarily work primarily” to a “reasonably informed opinion.” This is a far cry from the claim on the PGR’s web site, above, or the way Leiter characterizes the philosophy of physics evaluators, again, leaving the impression in that case that this is what we do in the PGR, we use experts. (Oh, and by the way, not just ordinary, everyday experts, but those with named chairs. At least this is what Leiter says when he is trying to make the case for the PGR and calling those who can’t see it “slightly crazy.”) Notice that Brian also tries to change the focus of the discussion by saying that “this is a side issue.” He wants to get back to bragging about the general excellence of his evaluators in the Continental traditions in philosophy. But that wasn’t the question. The issue is whether certain evaluators are in fact experts in the area. That someone publishes on Kierkegaard and German Idealism doesn’t make him or her an expert in 20th Century Continental philosophy, no matter how excellent their work is in these other areas. Protevi picks right up on this:
John Protevi said in reply to Brian…
Brian, nothing in my post reflects in any way on the obvious excellence of the evaluators. I was analyzing your choices as to the composition of the board*** (emphasis added).
[Here another commenter weighs in:]
Commentator said in reply to Brian…
Obviously one can have an “informed opinion” about where to work on an area without being expert in that area. I have an informed opinion about where to work on any area of philosophy whatsoever. But I was under the impression that the goal of the PGR is to give expert opinion. (You have certainly described it that way many times.) If not, then why bother with multiple area groups at all? Everyone on the general list is more than capable of giving an “informed opinion” on any area. In fact, John has simply pointed out that expertise in French philosophy is significantly under-represented on the continental list. (I am confident that the philosophers making up the list would agree.) (Emphasis added.)
Brian Leiter said…
[Commentator] et al: one can have an expert opinion without being a full-time specialist in an area. That was the only point (emphasis added).
Leiter is now claiming one can have an “expert opinion,” which might sound better than a “reasonably informed opinion,” but, really, this sleight of hand won’t work. Using “expert” as an adjective is not the same thing as using it as a noun, at least not in this situation, especially given what Leiter had already conceded about his evaluators, namely, that “most evaluators are asked to evaluate more than one area, and inevitably that means they evaluate areas in which they don’t necessarily work primarily.”
Leiter tells us that “the opinion of experts is what matters. We have nothing else to go on.” If so, all of Leiter’s evaluators should be experts in their areas, but, as we’ve seen, for Leiter they need not be. Professor Leiter and the PGR have not met their own standards, and the Philosophical Gourmet Report needs no “smear campaign” to discredit it. It does very well on its own, thank you.
*This quotation is from Leiter’s post entitled “The five most common objections to the PGR,” under #2. In a later post, in which he attacks critics of the PGR (“Why the visceral and largely irrational response to being evaluated?“), Leiter links back to the “five most common objections” post. The “irrational response” argument of the later post is a red herring. People do not object to being evaluated. They object to being evaluated or ranked by a system that quantifies in methodologically unsound ways. The fact is that Leiter has given up defending the PGR by means of anything anybody would call cogent arguments. He now cites his own past statements as a one-size-fits-all defense of the PGR, or resorts to ad hominems. In an update to the “irrational response” post, here is one way that he dismisses his critics, after referring to their “ranting and raving about the PGR in Cyberspace” in the original post:
UPDATE: A note of caution to students: do not be misled by philosophers who teach at or took their PhDs from poorly ranked departments (or unranked departments) who profess with great certainty and earnestness that there are serious methodological problems with the PGR: there are not, and no one serious who has studied it thinks there are.
**If one doesn’t include Feminist Philosophy–because Leiter and Brogaard combined the evaluators for 2011 and 2014, and we don’t know who evaluated solely in 2014–there were a total of 548 evaluations done in 32 specializations. Leiter has told us that there were nearly 200 evaluators for the overall rankings, and 230 when the overall and the specializations are combined. This might mean that there were only 30+ persons ranking for the specializations. But let’s be generous. Let’s assume that 200 different individuals participated in the specialty rankings. This would mean an average of 2.74 evaluations per person. If you go through the PGR, you will find numerous instances of people ranking 3 and 4 times. For example, 44 members of the Advisory Board participated in the specialty rankings. Of these, 11 ranked 4 times and 17 ranked 3 times.
***Protevi uses “board” to refer to the list of evaluators in 20th Century Continental philosophy.
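The arithmetic in the second footnote is easy to check. A minimal sketch, using only the figures quoted there (548 total evaluations, and the deliberately generous assumption of 200 distinct evaluators):

```python
# Figures from the second footnote: 548 evaluations across 32 specializations,
# and the generous assumption that 200 distinct individuals performed them.
total_evaluations = 548
assumed_evaluators = 200

average = total_evaluations / assumed_evaluators
print(f"Average evaluations per person: {average:.2f}")  # 2.74
```

Note that the assumption of 200 distinct evaluators is an upper bound; if fewer individuals participated, the average number of evaluations per person would be correspondingly higher.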