“[T]here isn’t any fact in the world that can prove or disprove the quality of particular philosophical work. All there is in philosophy is the opinion of experts. Research universities–in their hiring and tenure decisions–are based on the premise that the opinion of experts is what matters. We have nothing else to go on.”–Brian Leiter*
As I have established in a series of recent posts, the problems with the Philosophical Gourmet Report are legion. These flaws have nothing whatever to do with the controversy surrounding its editor. They are native to the Report and are symptoms of its poor methodology: in particular, its use of snowball sampling, its faulty assumptions about consensus in the profession, its use of pools of evaluators with too-similar professional backgrounds, its dearth of women evaluators, its small to very small pools of evaluators for the specializations, its marginalization of many specializations, its favoring of sub-specializations within certain specializations, etc. Problems with the PGR have been masked to a degree by statements that Professor Leiter has made in its defense, which often involve exaggerated or misleading claims about its virtues, for example, his recent claim that there is a “remarkable convergence in overall evaluations of programs across almost all areas of philosophical specialization.” There isn’t, as we saw in the recent post, “The 2014 Philosophical Gourmet Report by the Numbers.”
In the “By the Numbers” post, I promised to address the fact that there are evaluators in specializations who are not experts or specialists in these areas. Leiter has insisted time and again that his evaluators are experts, and experts in the specializations they evaluate. But before we get there, we need some context for this issue. The pools of evaluators for the PGR are relatively small, and in many cases very small. Here is a breakdown for the 2014 PGR, excluding Feminist Philosophy:**
- 72% of the specializations have 20 or fewer evaluators.
- 38% have 10 or fewer.
- 28% have 8 or fewer.
Given how few people are involved, it would seem that at minimum every voice should be that of an expert. And given how hard Professor Leiter has sold the accomplishments of his evaluators, I think it is fair to expect specialists to be currently active in the areas they are evaluating, because this is, after all, a ranking, and one would need to be as up-to-date as possible about the field in question.
On the 2014 Philosophical Gourmet Report’s website, in the first line under “Breakdown of Programs by Specialties,” Leiter states flatly that the specializations (or specialties) use experts, in a statement that has been on the site for years:
The rankings of programs in the specialty areas are based on surveys by experts in those specialties (emphasis in the original).
The language here is important. He is saying that the evaluators are experts in the specializations (or specialties). Bear with me here: this may seem obvious, but it is crucial to avoid any confusion about this. It is clear that this has been and still is Leiter’s public position. We can see it, for example, in a discussion of this year’s rankings in physics:
98% of undergraduates and their advisors wouldn’t know to recommend these programs as top choices in philosophy of physics–how could they, but for the PGR? They’d recommend Oxford, of course, which happens to be excellent in this area, but also Harvard, Yale, MIT, Stanford, which are not. But the PGR makes available to students everywhere the opinion of leading experts. It makes available the judgment of philosophers like Princeton’s Hans Halvorson (who teaches at another top ten philosophy of physics program, as it happens), and Huw Price (the Bertrand Russell Professor of Philosophy at Cambridge [another top 10 program in philosophy of physics] and a Fellow of the British Academy), and Jill North (a leading young philosopher of physics at Cornell University), and Lawrence Sklar (Michigan Professor, former John Locke Lecturer at Oxford and Fellow of the American Academy of Arts & Sciences), and David Wallace (leading young philosopher of physics at Oxford), among others. (Remember: evaluators can’t evaluate their own departments.) (Emphasis added.)
It’s slightly crazy that some people think making these assessments available to everyone somehow “harms” students and the profession. But as we’ve seen, some “philosophers” are slightly crazy.
First, there is no question that Leiter is referring here to experts in the philosophy of physics. In addition, he is making a case for using the PGR by saying, in effect, that this is what we do in the PGR, we use experts, in fact, leading experts. But perhaps only some of the evaluators in the philosophy of physics are experts, leaving him some wiggle room. However, the last phrase, “among others,” implies that everyone else evaluating in this specialization is an expert. (N.b.: only ten people evaluated in philosophy of physics in 2014 and he lists five of them before resorting to the phrase “among others.”) Second, the ranking in philosophy of physics, with its accompanying plug for the merits of the PGR, was the first specialization that Leiter posted on his blog this year, the first in a series of early releases in advance of the publication of the 2014 PGR on December 8th. At the time I was puzzled about why he released the philosophy of physics ranking first. But I now have a hypothesis: it was his strongest case, or one of his strongest cases, for advertising the discipline-specific expertise of every evaluator in the specializations, and, therefore, the value of the PGR, which would have been important for Leiter at the time, given the attrition in the ranks of evaluators for the 2014 PGR and the other events of the fall.
Be that as it may, this use of real experts in the philosophy of physics is not something we see consistently in the other specializations of the PGR. There are evaluators who are clearly not experts in particular specializations, and by expert I mean nothing arcane or excessively demanding: merely someone who specializes in an area (claims an AOS, not only an interest or an AOC) and publishes regularly, repeatedly, and, at minimum, recently in that area. (The “recently” is important, since these folks are ranking departments as they stand in the present.) Whether we are dealing with real experts matters: if we are not, the rankings in the affected specializations are less reliable. In addition, inconsistencies in the level of expertise between and among different specializations would be a serious methodological problem for the PGR, and God knows it can’t afford any more.
However, I was left with something of a dilemma regarding how to proceed. I would not willingly single out any individual in public for participating in the PGR. I did not want to name names or embarrass anyone. I thought that I might work around this problem by describing the evaluators’ specializations in their own words, and saying something about their publication records. But of course in the age of Google, people would be able to figure out who everyone is very quickly. I discussed this dilemma with colleagues and there didn’t appear to be any clear solution. I decided to hold off and try to find another way.
As it turns out, Professor Leiter’s avidity for public defense of the PGR has relieved me of having to establish that there are evaluators who aren’t experts in certain specializations, because Professor Leiter himself declared three years ago that the PGR’s evaluators are often not experts in the specializations. Yes, really! He announced this in an on-line exchange during a discussion of French philosophy and the 2011 PGR. John Protevi had written a post on NewAPPS discussing evaluators in 20th Century Continental philosophy. Leiter, as is his wont, joined the discussion in the comments section. I begin with Protevi’s remarks in the original post, discussing whether any of the evaluators specialize in 20th Century French philosophy. (I have deleted comments in whole or part that did not pertain to this discussion. The title of the original post was “A brief look at 2011 PGR 20th-Century Continental Philosophy evaluators”.) Here is Protevi:
These are however only mentions of secondary interests; a quick, non-systematic, but I believe accurate survey of their posted CVs show that none have consistent publishing records on these figures – at most an article here and there. Such consistent publishing is I think a good indicator of those cognizant of the relevant contemporary secondary literature – that is, that produced by people actively working in contemporary French philosophy (emphasis added).
[Leiter replied in the comments:]

Your “quick” survey of the CVs isn’t very accurate, and I invite you to do a more careful one. And you might try this same exercise for any of the other specialties; most evaluators are asked to evaluate more than one area, and inevitably that means they evaluate areas in which they don’t necessarily work primarily. The one point I do agree with you on is that none of these people are interested in Irigaray, Kristeva, Badiou et al. (some are interested in Deleuze). But I think anyone interested mainly in those figures should probably not be going to a philosophy department anyway for a PhD (emphasis added).
Okay, pause, full stop. Yes, Leiter just said, “most evaluators are asked to evaluate more than one area, and inevitably that means they evaluate areas in which they don’t necessarily work primarily.” One of the observations I made in “By the Numbers” was that people often evaluated multiple times; you can see it in the proportion of evaluators who rank in multiple areas. So this is not news.** But it is news when Leiter acknowledges that he himself asks evaluators to do this, with the inevitable result that they “evaluate areas in which they don’t necessarily work primarily.” At the time, Protevi was surprised by this admission:
John Protevi said in reply to Brian…
A systematic review of what’s web-available turned up 7 articles by 5 evaluators on French figures. So “an article here and there” seems an apt way to describe the research of 22 of the 24 evaluators.
On your other point, I was surprised to hear that in other parts of the PGR people evaluate areas outside their primary research (emphasis added).
Brian said in reply to John Protevi…
John, you can’t really believe that unless one does “primary research” on 20th-century Continental philosophy that one can’t, therefore have a reasonably informed opinion about where to work on 20th-century Continental philosophy. In any case, that’s a side issue–the evaluators here have rather obviously unimpeachable credentials as scholars, senior and junior, in the Continental traditions in philosophy. If someone really doesn’t care what Pierre Keller, Michael Rosen, Beatrice Han-Pile think, then Godspeed to them! (emphasis added).
Okay, another stop. So now we have moved from “in which they don’t necessarily work primarily” to a “reasonably informed opinion.” This is a far cry from the claim on the PGR’s website, above, or the way Leiter characterizes the philosophy of physics evaluators, again leaving the impression in that case that this is what we do in the PGR: we use experts. (Oh, and by the way, not just ordinary, everyday experts, but those with named chairs. At least this is what Leiter says when he is trying to make the case for the PGR and calling those who can’t see it “slightly crazy.”) Notice that Brian also tries to change the focus of the discussion by saying that this is “a side issue.” He wants to get back to bragging about the general excellence of his evaluators in the Continental traditions in philosophy. But that wasn’t the question. The issue is whether certain evaluators are in fact experts in the area. That someone publishes on Kierkegaard and German Idealism doesn’t make him or her an expert in 20th Century Continental philosophy, no matter how excellent his or her work is in those other areas. Protevi picks right up on this:
John Protevi said in reply to Brian…
Brian, nothing in my post reflects in any way on the obvious excellence of the evaluators. I was analyzing your choices as to the composition of the board*** (emphasis added).
[Here another commenter weighs in:]
Commentator said in reply to Brian…
Obviously one can have an “informed opinion” about where to work on an area without being expert in that area. I have an informed opinion about where to work on any area of philosophy whatsoever. But I was under the impression that the goal of the PGR is to give expert opinion. (You have certainly described it that way many times.) If not, then why bother with multiple area groups at all? Everyone on the general list is more than capable of giving an “informed opinion” on any area. In fact, John has simply pointed out that expertise in French philosophy is significantly under-represented on the continental list. (I am confident that the philosophers making up the list would agree.) (Emphasis added.)
Brian Leiter said…
[Commentator] et al: one can have an expert opinion without being a full-time specialist in an area. That was the only point (emphasis added).
Leiter is now claiming one can have an “expert opinion,” which might sound better than a “reasonably informed opinion,” but, really, this sleight of hand won’t work. Using “expert” as an adjective is not the same thing as using it as a noun, at least not in this situation, especially given what Leiter had already conceded about his evaluators, namely, that “most evaluators are asked to evaluate more than one area, and inevitably that means they evaluate areas in which they don’t necessarily work primarily.”
Leiter tells us that “the opinion of experts is what matters. We have nothing else to go on.” If so, all of Leiter’s evaluators should be experts in their areas, but, as we’ve seen, for Leiter they need not be. Professor Leiter and the PGR have not met their own standards, and the Philosophical Gourmet Report needs no “smear campaign” to discredit it. It does very well on its own, thank you.
*This quotation is from Leiter’s post entitled “The five most common objections to the PGR,” under #2. In a later post, in which he attacks critics of the PGR (“Why the visceral and largely irrational response to being evaluated?”), Leiter links back to the “five most common objections” post. The “irrational response” argument of the later post is a red herring. People do not object to being evaluated. They object to being evaluated or ranked by a system that quantifies in methodologically unsound ways. The fact is that Leiter has given up defending the PGR by means of anything anybody would call a cogent argument. He now cites his own past statements as a one-size-fits-all defense of the PGR, or resorts to ad hominems. In an update to the “irrational response” post, here is one way that he dismisses his critics, after referring to their “ranting and raving about the PGR in Cyberspace” in the original post:
UPDATE: A note of caution to students: do not be misled by philosophers who teach at or took their PhDs from poorly ranked departments (or unranked departments) who profess with great certainty and earnestness that there are serious methodological problems with the PGR: there are not, and no one serious who has studied it thinks there are.
**If one doesn’t include Feminist Philosophy–because Leiter and Brogaard combined the evaluators for 2011 and 2014, and we don’t know who evaluated solely in 2014–there were a total of 548 evaluations done in 32 specializations. Leiter has told us that there were nearly 200 evaluators for the overall rankings, and 230 when the overall and the specializations are combined. This might mean that as few as 30 or so additional persons did the rankings for the specializations. But let’s be generous. Let’s assume that 200 different individuals participated in the specialty rankings. This would mean an average of 2.74 evaluations per person. If you go through the PGR, you will find numerous instances of people ranking 3 and 4 times. For example, 44 members of the Advisory Board participated in the specialty rankings. Of these, 11 ranked 4 times and 17 ranked 3 times.
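The arithmetic in this footnote is easy to check. Here is a minimal sketch using only the post’s own figures; the 200-person count is the generous assumption stated above, not a verified number:

```python
# Figures taken from the footnote above (the post's own numbers, not independently verified).
total_evaluations = 548    # specialty evaluations across 32 specializations (excl. Feminist Philosophy)
combined_evaluators = 230  # overall + specialty evaluators combined
overall_evaluators = 200   # "nearly 200" evaluators for the overall rankings

# Lower-bound reading: people who may have ranked ONLY in the specializations.
specialty_only = combined_evaluators - overall_evaluators
print(specialty_only)  # 30

# Generous assumption: 200 distinct individuals did the specialty rankings.
assumed_specialty_evaluators = 200
avg_per_person = total_evaluations / assumed_specialty_evaluators
print(round(avg_per_person, 2))  # 2.74
```

The fewer distinct individuals one assumes, the higher the average number of areas each person evaluated, which is the point of the footnote.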
***Protevi uses “board” to refer to the list of evaluators in 20th Century Continental philosophy.
Have they mentioned anywhere why the Feminist Philosophy rankings were combined? Was the number of evaluators so embarrassingly low that they couldn’t do the un-ranked version that is done for other subjects with few evaluators (like Phil. of Race, with three)? It’s really quite strange.
Michael, the only explanation that I have seen is the one on the PGR itself under Feminist Philosophy, in which they say that it is due to the small number of evaluators in 2014. It probably was embarrassingly low. We know that they let other specializations go with three, as you mention, so it is reasonable to suppose that it may have been fewer than that. Or, because Leiter knows that there has been an issue about the participation of women, maybe they had three or four, but it just looked too bad. In any case, what they have done makes no sense at all if we are supposed to be getting a picture of the profession as it stands.
Your definition of being an “expert” is tendentious. Is Kripke not a qualified expert evaluator in mathematical logic because he hasn’t published in mathematical logic in decades? Can one really not have an expert and well-informed opinion in an area that is not one’s AOS but is part of one’s AOC, in which, for example, one teaches regularly?
Frege, the definition I gave was in the context of discussing a ranking system, as I think is completely clear from the context of my remarks. Based on Leiter’s own assumptions about the up-to-date nature of the PGR, an expert needs to be someone on top of current research and publications. So in this situation, having an expert and well-informed opinion about a SUBJECT is not enough to be an expert about a FIELD.
The University of Miami will rank highly in Applied Ethics, despite the queer fact that none of the faculty teach, specialize, or otherwise have fine reputations in Applied Ethics. Curious.
It seems like the main upshot of the post is that some evaluators have an AOC, rather than an AOS, in the areas they evaluate. That may be a little disappointing, and Brian may need to dial back his rhetoric a little, but this does not seem too scandalous to me. I would assume that many people with an AOC in an area can evaluate departments in that area competently.
The term “expert” seems fluid and contextual to me, so it does not seem downright dishonest to call the area evaluators “experts”. Someone might not be expert enough in political philosophy to accept a job that is AOS political, but may still be an expert for the purpose of refereeing articles, writing op-eds to the newspaper, or advising undergrads on where to study political philosophy in grad school.
The one thing I would see as a problem is if the evaluators in a given area were heavily skewed toward one or more specific AOSs other than the area being evaluated. For example, if most of the evaluators for political philosophy were AOS ethics, AOC political, I could see how this might produce distortions. Similarly, if most of the evaluators for early modern philosophy were AOS ancient or AOS medieval, AOC early modern, I can see how biases could creep in. I don’t know enough about the area evaluators for 20th Century Continental to know if something like this has happened there, so I won’t comment further on that.
My main point, though, is that the use of people with an AOC in an area as evaluators does not strike me as intrinsically problematic. Maybe Brian should just be clearer about when this is occurring.
To put things in a slightly different way, if I were an undergrad looking to specialize in political philosophy, I would ask the faculty at my school who have an AOS or AOC in political philosophy what schools they think are good for this, so as to get the broadest sample. So, maybe that is what the PGR is meant to simulate.
Chris, thanks for your two comments. They provide an opportunity for me to clarify further why Brian’s “flexibility” here is a problem. First, there are actually people without AOCs evaluating in certain areas, so this is not just a problem of using people with AOCs instead of AOSs. Second, using evaluators with only AOCs would undermine a major selling point for the PGR. One of the main arguments that Brian and his supporters have used is that the PGR provides an alternative to students seeking advice only from mentors. It does this through a ranking system that employs experts, presumably people who know more than professors with just AOCs. (There are always lots of AOCs among faculties in any but the very smallest departments.) This is not a small matter that Brian can fix by tweaking the language. To now say that people with AOCs can provide comparable advice cuts against the spirit of what the PGR is supposed to be about. Third, using people with AOCs raises another serious question: what are Brian’s standards for selecting evaluators? It’s difficult enough to figure out in certain cases why he is selecting some people with AOSs over other people with AOSs, since there are no criteria offered. But why would he ever pick people with AOCs over people with genuine AOSs, especially given how hard he has worked to promote the PGR by referring to the phenomenal people/experts who rank for it? There is a transparency issue here, and it echoes other problems with transparency.
You also give examples of situations in which imbalances among the evaluators could be problematic. I agree. Imbalances of the sort you describe would be problematic. The post you are commenting on is one in a series. It is a direct complement to another recent post, A Portrait of the PGR by the Numbers. There you will find figures on large overlaps between evaluators in different specialties, which raise the question of whether some specialties are weighted too heavily toward evaluators with certain sets of interests or expertise. Also, there are instances of specializations with too many evaluators in sub-specialties. Take one of Brian’s own specialties, 19th Century Continental. A good case can be made that 5 of its 22 evaluators are not experts in 19th Century Continental philosophy. Of the remaining 17 evaluators, 8 are Nietzsche experts, including Brian, leading to serious imbalances: for example, 8 Nietzsche experts and only 1 expert on Kierkegaard.
The problems with the PGR are legion, and they often dovetail with each other. One needs to see the whole package to get an idea of how compromised the PGR is.