UP@NIGHT

……………

Early to bed, and early to rise,

Makes a man healthy, wealthy and wise

- Benjamin Franklin.

I don’t see it.

- George Washington

Now both of these are high authorities – very high and respectable authorities – but I am with General Washington first, last, and all the time on this proposition.

Because I don’t see it, either. . . .

Put no trust in the benefits to accrue from early rising, as set forth by the infatuated Franklin – but stake the last cent of your substance on the judgment of old George Washington, the Father of his Country, who said “he couldn’t see it.”

And you hear me endorsing that sentiment.  

Mark Twain, “Early Rising, As Regards Excursions to the Cliff House,” MARK TWAIN IN THE GOLDEN ERA 1863-1866.

……………

A Portrait of the 2014 Philosophical Gourmet Report by the Numbers

 

In a recent post I asked why Professor Leiter had decided to replace reputational rankings with impact studies in his law school rankings, while sticking with reputational surveys in philosophy. In response Professor Leiter made the following claim:

The other big difference between academic law and academic philosophy is that in the former there is far less consensus on scholarly paradigms than in the latter.

More than a few people in philosophy reacted with amazement to the notion that there is far less consensus in law than there is in philosophy. In order to explain his claim, Leiter posted an addendum:

As I pointed out to [a colleague], and is perhaps worth sharing, Kieran Healy’s research found remarkable convergence in overall evaluations of programs across almost all areas of philosophical specialization–that’s the evidence about consensus I had in mind [evidence not refuted by noting that at the margins there are dissenters from the consensus, obviously].  {Emphasis added, square brackets in original.}

This is a false statement. Professor Healy did not find a “remarkable convergence…across almost all areas of philosophical specialization.” In a post on Leiter’s blog comparing “the level of consensus or disagreement between specialists” regarding the overall rankings, Healy discovered that there was “a relatively high degree of consensus around the top seven or eight departments,” out of ninety-nine departments, but varying degrees of disagreement for other ranked schools, with those at the bottom showing more consensus than those in the middle. (Gregory Wheeler has argued that Healy’s results show that, except for the top six schools and a handful at the bottom, “rankings vary quite a lot”*.)

Leiter’s adamance about the “remarkable convergence” is no small matter.  He insists that there is a consensus and that the evidence for the consensus is the PGR. The criticism of colleagues, some with considerable training in statistics and survey methodology, over many years has not budged him. In his most recent remark it is clear that he’s doubling down. He cannot or will not see the circularity of his position. The PGR is philosophy and philosophy is the PGR. QED.

Using the data of the 2014 PGR I show here that the consensus Leiter insists on is artificial.  But make no mistake: this is not just about the PGR. It is about whether our vision of philosophy is like Raphael’s “The School of Athens,” in which there is room for multiple and widely different ways of doing philosophy, or whether a particular style, method, and set of concerns should crowd out current and future philosophical diversity.

There are too many problems with the current PGR to address in one post. This is an overview of data that call into question assumptions about the PGR’s reliability and its ostensible consensus. While I focus on the specializations here, note that the problems with this evaluator pool bear on the overall rankings because the pools are so similar.  For the sake of clarity and interest, I proceed by presenting a series of issues, asking the reader to decide–without any obligation to rank them–which are worse.

Which is worse?

  • That the current PGR, because there were too few evaluators this year, lumped together the 2011 and 2014 rankings of Feminist Philosophy–unheard of in any other specialization–and still ended up with only 12 evaluators?
  • Or that in 32 specializations–leaving Feminist Philosophy aside because we don’t know how many or who ranked in 2014–there are only 32 women?  (That’s correct, 32 women philosophers for all 32 specializations. It may seem like more because the same evaluators often rank in multiple areas.)
  • Or that in the Philosophy of Language, Philosophy of Mind, Metaphysics, Epistemology, and Philosophical Logic, there are only 11 different women evaluators?
  • Or that in the 27 other specializations not listed immediately above, there are a total of 21 women?
  • Or that 75% of all specializations have three or fewer women philosophers evaluating, including those who evaluate multiple times? (Or that 44% have 0 or 1, or that 63% have 0, 1, or 2?)

Which is worse?

  • That 72% of the specializations have 20 or fewer evaluators, up from approximately 60% of the specializations for the 2011 PGR?
  • Or that 38% have 10 or fewer?
  • Or that 28% have 8 or fewer?
  • Or that 8 out of 9 areas, 89%, in the History of Philosophy have 20 or fewer evaluators, up from 6 out of 9 for 2011?
  • Or that, as troubling as these figures are regarding the small number of evaluators in so many areas, they only tell part of the story, because evaluators frequently rank multiple times? (So there are actually fewer different individuals doing the ranking in these specializations. See next.)

Which is worse? (Keeping in mind Leiter’s claim that there is a “remarkable convergence in overall evaluations of programs across almost all areas of philosophical specialization”.)

  • That 41% of the Philosophy of Language evaluators were also Philosophy of Mind evaluators (11/27)?
  • Or that 69% of the Philosophy of Law evaluators were also evaluators for Political Philosophy (11/16)?
  • Or that 59% of Political Philosophy evaluators were also evaluators for Ethics (20/34)?
  • Or that 79% of Cognitive Science evaluators were also evaluators for the Philosophy of Mind (22/28)?
  • Or that 37% of Metaphysics evaluators were also Philosophy of Mind evaluators (11/30)?
  • Or that 63% of Metaethics evaluators were also Ethics evaluators (10/16)?
  • Or that 85% of the 20th Century Continental evaluators were also Kant or 19th Century evaluators (11/13)?
  • Or that 46% of 20th Century Continental evaluators were also Kant evaluators (6/13)?
  • Or that 67% of those who ranked in Medieval also ranked in Philosophy of Religion (4/6)?
  • Or trumpeting the convergence between the specialty rankings and the overall rankings of the PGR without mentioning overlaps like these in the 2014 PGR?

Which is worse? (Bearing in mind that in four specializations, including Feminist Philosophy, the PGR recognizes that adjustments should be made for the small numbers of evaluators.**  Remarkably, none of the specializations below falls into this category, even though some have few evaluators.)

  • That only 6 evaluators in the Philosophy of Mathematics ranked 41 departments, while 39 evaluators in Ethics ranked 42 departments?
  • Or that 7 evaluators in the History of Analytic Philosophy ranked 42 departments, while 42 evaluators in Philosophy of Mind ranked 37?
  • Or that 10 evaluators in Applied Ethics ranked 59 departments, while 41 evaluators in Epistemology ranked 37?
  • Or that 12 evaluators in Mathematical Logic ranked 43 departments, while 30 evaluators in Metaphysics ranked 37 departments?

Which is worse?

  • That the Philosophy of Religion has only 2 evaluators currently at Catholic universities—both at the same one, St. Louis University—effectively excommunicating a large number of philosophers who work in the Philosophy of Religion?
  • Or that in the Philosophy of Law only 7 of its 16 evaluators, 44%, are at US institutions, for a rankings report in which the vast majority of ranked programs are located in the US, and which, strangely, is spearheaded by Brian Leiter, a law and philosophy professor at a US university?
  • Or that American Pragmatism, a specialization whose experts have ties to both analytic and continental philosophy, is represented by only 3 evaluators, two-thirds of whom are not at universities in the United States?***
  • Or that the Philosophy of Race has only three evaluators?
  • Or that Chinese Philosophy only has four?
  • Or that there isn’t any other non-Western philosophy represented–for example, Japanese philosophy or Indian philosophy?
  • Or that there isn’t a separate category for Latin American Philosophy?
  • Or that Feminist Philosophy didn’t have enough evaluators this year so that 2011 and 2014 were combined?  (Wait . . . this is where we came in.)

Enough.  In my next post I will establish that there are evaluators who are not leading experts, or experts at all, in the specializations they are ranking. (I will not embarrass individual evaluators, or list names, etc.)  I will also show that there are imbalances in the evaluation of specializations, for example, that eight of the experts in 19th Century Continental Philosophy are Nietzsche scholars, while other major figures are hardly represented–Kierkegaard gets one expert.  There is also the issue of whether there are more highly qualified people who have been overlooked or dismissed because they do not fit the confines of The Consensus.

I began this post by raising concerns about Brian Leiter’s lack of appreciation for philosophy’s diversity and about the PGR’s aspiration to mold and hold the profession to Leiter’s vision of it. However, his assertion about consensus is only one piece of the puzzle of why Leiter continues to argue so vehemently for the PGR, often attacking those who disagree with him–personally and sometimes brutally–on his blog and elsewhere. I have wondered whether there is something in his philosophical outlook, beyond mere personal idiosyncrasy, that leads him to conduct himself like this, and that shapes both the PGR and his defense of it.  And then I happened across a quotation apparently very important to Leiter: a passage from Nietzsche that, he makes clear, he has quoted more than once. The quotation appears in his essay “How to Rank Law Schools.” ****  The quotation and its context occur near the very end of the article.  He introduces the quotation with these words:

 Academic rankings that provide actual information on matters of educational value have a useful role to play for students, quite obviously, but they also have a constructive role to play for faculty. Professor Korobkin suggests that in ranking schools we want to discourage “status competition.” I guess my own view is more Nietzschean, and so let me close with a quote I have used before. This is Nietzsche from his early essay on “Homer’s Contest”:

 [Then Leiter quotes Nietzsche:]

[J]ealousy, hatred, and envy, spurs [sic] men to activity: not to the activity of fights of annihilation but to the activity of fights which are contests. The Greek is envious, and he does not consider this quality a blemish but the gift of a beneficient [sic] godhead . . . . The greater and more sublime a Greek is, the brighter the flame of ambition that flares out of him, consuming everybody who runs on the same course.

. . . .

Every talent must unfold itself in fighting: that is the command of Hellenic popular pedagogy, whereas modern educators dread nothing more than the unleashing of so-called ambition . . . . And just as the youths were educated through contests, their educators were also engaged in contests with each other. [All ellipses in original.]

Whatever one may think of the sentiment–or its employment in this way, or Leiter’s evident attraction to it–its invocation in a discussion of the right way to rank academic programs should give us pause.  “The greater and more sublime a Greek is, the brighter the flame of ambition that flares out of him, consuming everybody who runs on the same course.”  For people who see philosophy as a contest and vanquished opponents as fit for nothing but the flames, well, Leiter and his PGR are the way to go.   But in “The School of Athens,” all we’d see is that this guy can really empty a room.

 

__________

I compiled all of the statistics in this post.  Of course I may have made errors.  But I believe any errors would be minor and would not undermine the basic points or patterns under discussion; there are simply too many similar results.  If a reader finds any errors, please notify me.

___________

* Commenting on Healy’s work, “Ratings and Specialties,” on Leiter’s blog, Wheeler says, “Those rankings were then aggregated to see how much variation there is across the specialties, which gives a sense of  how much (or how little) consensus there is across specialties. The box and whisker plots (top 25: .png.pdf; total population: .png.pdf) give a picture of this.  Except for the top 6 departments, and a handfull rounding out the bottom, the answer is that the rankings vary quite a lot:  people vote according to whom they recognize, and invariably those are the people working in their area(s) of specialization(s).” Gregory Wheeler, “Manufactured Assent: The Philosophical Gourmet Report’s Sampling Problem,” in Choice and Inference.

**“Due to the small number of evaluators, we are not printing the rounded mean scores, but just a list of programs, broken into two groups based on the scores received.”

*** It’s not as if there isn’t a wealth of experts to draw on in the US for American Pragmatism.  Perhaps Leiter should consider contacting the Society for the Advancement of American Philosophy, which has hundreds of members, to get some names.  Oh, wait, this is one of the organizations that is outside of the consensus.

**** “Commentary: How to Rank Law Schools,” Indiana Law Journal, Vol. 81, 2006.

 

 

Why Did Leiter Give Up Reputational Surveys in Law, but Not in Philosophy? The Mystery Deepens

Two days ago I compared Brian Leiter’s philosophy and law school rankings in a post (“Before You Consult the 2014 Philosophical Gourmet Report, Consider Leiter’s Words: ‘Reputation tends to be yesterday’s news’”).  I referred to comments Leiter made in 2010 about the relative value of reputational surveys and scholarly impact studies.  Yesterday Leiter declared on his blog that I had taken his quotation out of context and misrepresented his views.  This is what Leiter says in his post:

The “vitriolic criticsms” [sic] are understandable and also unrepresentative–obviously the critics have tremendous incentives to be a salient presence on social media, given the huge influence the PGR actually has in the real world.  As I’ve noted before, it’s important not to be misled by the volume and persistence of the critics–pay attention to who they are, where they teach, where they earned their PhD, this will usually tell you more about what’s really going on.  Not all of them have self-serving motives*, to be sure–some just have no judgment (vide Velleman). . . .

*Some are also pathologically dishonest, and are getting increasingly desperate now that the PGR is out.  The most amusing is the former SPEP Advocacy Committee member who purports to quote me saying, “Reputation tends to be yesterday’s news–what happened 25 years ago,” without noting that I was discussing the awful U.S. News surveys of law schools, which are random surveys (not suveys [sic] of experts) and which provide the respondents with no information at all–of course, those kinds of surveys are yesterday’s news.  The other big difference between academic law and academic philosophy is that in the former there is far less consensus on scholarly paradigms than in the latter.

I made two basic claims in my original post: first, that Leiter gave up reputational surveys for impact studies in his law rankings, and, second, that he thinks impact studies are superior to reputational surveys.  The second claim was based both on his decision to abandon reputational surveys in favor of impact studies and on his comments about the superiority of impact studies.

The problem that Leiter faces in trying to confine his comments to criticism of the U.S. News reputational survey is that he doesn’t qualify his claim in this manner in the article I cited.  Here once again are the remarks that I quoted in my post.

Most of the law schools in the top twenty are not surprising.  But Leiter and Sisk agree that the study is a good indicator for future reputation.

“[Scholarly] Impact tells you things that reputation doesn’t,” Leiter said. “Reputation tends to be yesterday’s news–what happened 25 years ago.  I think [this study] is useful for students who care about the academic experience” [brackets in original].

This quotation states that “[Scholarly] Impact tells you things that reputation doesn’t.”  There is no qualification here: he doesn’t except his own expert-driven reputational surveys.  In addition, in his criticism of my post he drops this portion of the quotation and quotes only the second sentence: “Reputation tends to be yesterday’s news–what happened 25 years ago.”  This allows him to suggest that he is only talking about the U.S. News survey, or at least to make that reading more plausible.  Restore the rest of the quotation and my point is all the stronger.

But there’s another difficulty with Leiter’s story:

The last reputational survey Leiter did for his law school rankings was in 2003.  As I point out in my original post, the reputational rankings were later replaced by impact studies based on citations. (Leiter has also relied on lists of inductees to the American Academy of Arts and Sciences to rank faculty quality in the same period.) The shift to impact studies started in 2005, and is beyond dispute.  What remains a mystery is why the shift, especially since Leiter was quite happy with the reputational survey approach for his law school rankings, and championed it in the same terms as he now describes the methods of the PGR–look at all of the fabulous people participating, etc.  Here is Leiter in his “Introduction” to the 2003-2004 reputational survey results for his law school rankings:

Since high-quality survey data may ultimately be more informative than “objective” measures, it is my intent, for now, to rely on this data (emphasis added).

Further down the page he says this:

The quality of evaluators in this survey is unparalleled: it includes the President and President-elect of the Association of American Law Schools; a dozen members of the American Academy of Arts & Sciences, the nation’s most prestigious learned society; dozens of the most frequently cited legal scholars in numerous fields; and leading figures, junior and senior, in corporate law, criminal law, health law, constitutional law, jurisprudence, international law, comparative law, legal history, feminist legal theory, and many other fields.

Sounds like he has a pretty good thing going.  But then the reputational surveys stop, and Leiter begins using impact studies based on citations.  So the other difficulty that his story faces is that he abandoned reputational surveys for impact studies in his law school rankings.  Why would he do this if he didn’t believe that impact studies were superior to reputational surveys for law school rankings?**   The quotation I cited is what one would expect to hear after Leiter decided to switch.

Finally, a word about the last line in his criticism of my post.   Leiter claims that there is “far less consensus on scholarly paradigms” in law than there is in academic philosophy.***  He appears to hope that people will believe that this is the reason that he stopped doing reputational surveys in law while continuing to use them for philosophy.  The problem is, it’s not believable.  Philosophy certainly does not enjoy a greater consensus about “scholarly paradigms” than academic law(!), and asserting that it does is completely unconvincing as a reason for assessing the two fields differently.  At best it’s a piece of wishful thinking on Leiter’s part, with the PGR itself something of a fantasy.

_________

**As I noted in my post, the U.S. News rankings are mentioned only at the start of the article, in order to contrast them with what Sisk and his colleagues were doing, with a very brief mention that part of what U.S. News does is reputational.  The article then moves on, with Leiter’s words coming near the very end.  I should add that in the middle of the piece there is another quotation from Leiter regarding how “Scholarly impact is a measure of the intellectual quality of the faculty…”  Nothing more is said about the U.S. News rankings.

*** UPDATE:  12:00 PM, December 10.  I inadvertently switched law and academic philosophy in this sentence when I first published the post early this morning.  It’s been corrected.  Thanks to John Protevi for catching it.

Before You Consult the 2014 Philosophical Gourmet Report, Consider Leiter’s Words: “Reputation tends to be yesterday’s news”

“[Scholarly] Impact tells you things that reputation doesn’t. Reputation tends to be yesterday’s news–what happened 25 years ago.” –Brian Leiter*

Could it be? Is this the same Brian Leiter who has argued vociferously for years that his reputational survey-based rankings of philosophers are essential to the well-being of the profession? Indeed it is. It seems Leiter doesn’t believe that the best way to do rankings is through a reputational survey, or that rankings must be done in this way, but he’s convinced a lot of people in the profession that it’s the only way for philosophy.

While Leiter has left philosophers mired in arguments about the merits and faults of the PGR, he dropped survey-based reputational rankings from his law schools enterprise a decade ago and replaced them with scholarly impact rankings based primarily on citations.  It’s confusing at first, because he uses the term “reputation” both narrowly, to mean reputational surveys as distinguished from impact studies, and broadly, as a marker of quality, however established.**  Since Leiter has dropped reputational surveys in favor of impact studies for his law school rankings, this post will focus on the replacement of reputational surveys with scholarly impact studies.

As far as I can tell, the last time Leiter did a reputational survey for law schools was 2003. He had 150 or so responses to his survey, which was sent out to 292 people.  His next edition no longer involved a reputational survey, as we can see from the list of reports from his Law School Rankings website (below).  First, however, let’s look at  the Welcome statement to Leiter’s Law School Rankings.

Faculty, students, and parents are, quite reasonably, interested in comparative information on the quality of different law schools. This site compiles such information about many (but, unfortunately, not all) U.S. law schools, along dimensions like faculty quality and job placement. Five or six new studies are posted each year, and are listed under “Newest Rankings.” Prior studies, some going back a decade, are listed under each of the subject-area categories. Each study begins with an explicit statement of the measure being utilized, often with caveats that should affect the interpretation of the results. No attempt is made to aggregate different measures, since no weighting of different elements could be justified in a principle [sic] way. Students are invited to consider measures important to them and to utilize those in selecting schools to which to apply and, ultimately, in deciding which among the schools to which you are accepted warrant closer, personalized investigation. Faculty will hopefully find the measures here useful benchmarks of institutional performance. The measurement criteria emphasized here pertain solely to academic excellence and professional outcomes, and rely on data in the public domain. Suggestions for new studies and improvements in the existing measures are welcome. (Emphasis added)

Several points of interest here: there are multiple measures, “students are invited to consider measures important to them” (as many of us have suggested for philosophy), there is no attempt to aggregate different measures into an overall ranking, and measurement of academic excellence relies on “data in the public domain.”  What are these multiple measures?  At the top of Leiter’s law rankings, there are three tabs, little doors to a storehouse of treasures: Faculty Quality, Student Placement, and Job Placement.  As a philosopher living in the world according to the PGR, I marvel at the bounty and feel like the kid who finds that Santa left him a piece of coal in his Christmas stocking.  Are we so badly behaved, in philosophy, that we get only the PGR?

When we click the tab that says “Faculty Quality,” we get information, ostensibly, on Scholarly Reputation.**

Scholarly Reputation**

According to his site, the last time Leiter used a survey for “Scholarly Reputation” was in 2003-2004.   The method employed will sound familiar to the PGR’s fans (and critics):

The Survey and the Method
Between March 3 and March 21, 2003, more than 150 leading legal scholars from around the nation completed the most thorough evaluation of American law faculty quality ever undertaken.  Scholars were invited to participate in the EQR survey based on the following criteria:

  1. Only active and distinguished scholars were invited.  These are the scholars most likely to have informed opinions about faculty quality.
  2. Multiple faculty from every school evaluated were invited.
  3. Diversity in terms of seniority was sought in the evaluator pool.
  4. Diversity in terms of fields and approaches was sought in the evaluator pool.

Evaluators were not permitted to evaluate either their own institution or the institution from which they had received the highest law degree.

What we have here is Leiter’s version of the Philosophical Gourmet Report for law schools, or vice versa.   However, the next entry regarding faculty quality, from 2006, is based not on a reputational survey but on faculty membership in The American Academy of Arts & Sciences.  And for those of us who have worried that the pool of evaluators for the PGR may not be sufficiently diverse, we learn that Leiter actually shares this concern–that is, when he is discussing the AAAS, but not when he’s defending the PGR:

The American Academy of Arts & Sciences each year elects members based on their distinguished contributions to scholarship, the arts, education, business, or public affairs. In reality, the Academy tends to be a bit “chummy”—schools already “rich” with members get “richer,” not always on the merits—though the sins tend to be of omission rather than inclusion.

Be that as it may, it is something of a sideshow compared to what Leiter has in store: rankings based on citations, a method that some fans of Leiter’s PGR have dismissed as inferior for ranking departments.  When we click the link to the Top 70 Law Faculties in Scholarly Impact, 2007-2011, we find Professor Leiter commenting on work that has been done to update his earlier citation-based studies.

Professor Gregory Sisk and his colleagues in the law library at the University of St. Thomas (Minnesota) have used the methodology of my prior scholarly impact studies to produced an updated study of the 70 law faculties with the highest scholarly impact, based on both average and median impact as measured by citations.   Professor Sisk and his colleagues give a detailed explanation of the project here; I also encourage those utilizing this information to note the caveats I have highlighted previously.

When you click on the link to “the methodology of my [Leiter’s] prior scholarly impact studies,” the following  appears.

This is a study that aims to identify the 25 law faculties with the most “scholarly impact” as measured by citations during roughly the past FIVE years.  The methodology is the same as used in the 2007 study, though now excluding, per suggestion from many colleagues and as we did last year, untenured faculty from the count, since their citation counts are, for obvious reasons, always lower.   The study also excludes judges who still do some teaching (like Guido Calabresi at Yale and Richard Posner at Chicago).
The study was conducted in early January of 2010 (the search parameters were date aft [sic] 2004 and bef [sic] January 15 2010), so incorporates some articles published in early 2010, but the bulk of the sample is made up of articles published in the years 2005, 2006, 2007, 2008, and 2009.

I wanted to quote this in full so that there is no question that Leiter has been using a citation method and not a reputational survey for law schools since 2003-04.  Now let’s look under the tree:

Recall the quotation with which I began, Leiter’s claim that reputation “tends to be yesterday’s news.”   If you hit the link that is meant to explain the project in detail, the one marked “here” (above), you are taken to the article from which I drew the quotation, “Scholarly Impact of Law School Faculties in 2012: Applying Leiter Scores to Rank the Top Third.”  This article contains the quotation in a footnote but also includes a reference to it in the main body of the article, as we shall see in a moment.  First let’s look at the original occurrence of the quotation, in an article entitled “Top Scholarly Faculties,” which appeared in the National Jurist in 2010 and dealt with Sisk and his colleagues’ study.   The quotation appears near the end of the article, in this context:

Most of the law schools in the top twenty are not surprising.  But Leiter and Sisk agree that the study is a good indicator for future reputation.

“[Scholarly] Impact tells you things that reputation doesn’t,” Leiter said. “Reputation tends to be yesterday’s news–what happened 25 years ago.  I think [this study] is useful for students who care about the academic experience” [brackets in original].

Now here is how the claim appears in the article, “Scholarly Impact of Law School Faculties in 2012,” that Leiter links to on his law school rankings site:

In their pioneering work evaluating law faculties through per capita citations to their scholarly writings, Professors Theodore Eisenberg and Martin Wells asserted that scholarly impact ranking “assesses not what scholars say about schools’ academic reputations but what they in fact do with schools’ output.” As Professor Brian Leiter puts it, reputational surveys for law schools, such as that incorporated in the U.S. News ranking, tend to reflect “yesterday’s news.”  Scholarly impact studies focus on the present-day reception of a law faculty’s work by the community of legal scholars.

So in the article Leiter links to by way of information about his law school rankings methodology, the claim appears not only in a footnote but also in the main body.  If Leiter thought there was a problem with this claim, or with the quotation, he would have alerted his readers to this fact when he recommended the article.  Naturally, Leiter thinks that his reputational survey was (for law schools) and is (for philosophy) better than that element of the U.S. News rankings dealing with reputation, because, for example, the U.S. News survey ranks whole schools rather than departments, which is what Leiter’s surveys rank.  But bear in mind that his claim about reputation and “yesterday’s news” is not confined to that element of the U.S. News rankings.  The reputational element of the U.S. News rankings is merely an example in the quotation from “Scholarly Impact of Law School Faculties in 2012,” while in “Top Scholarly Faculties” the U.S. News rankings are mentioned only briefly at the beginning, in order to contrast their results with those of Sisk and his colleagues.  The quotation from Leiter appears at the end of the article.

The kind of reputational survey that Leiter gave up in favor of rankings based on public data for law schools was virtually the same as the sole method currently implemented–and by some, fiercely defended–for the PGR.  Why did Leiter abandon it ten years ago if it was so distinctive and valuable in comparison with other law school rankings–of which there are many–including the element of the U.S. News rankings based on a reputational survey?  Whatever the reasons–and it would be interesting to hear them–it appears that, for Leiter, what’s good enough for philosophy won’t do for law schools.

___________

 

*Brian Leiter, quoted by Jack Crittenden, “Top Scholarly Faculties,” National Jurist, Nov. 2010, at 5. Also quoted in Gregory Sisk, Valerie Aggerbeck, Debby Hackerson & Mary Wells, “Scholarly Impact of Law School Faculties in 2012: Applying Leiter Scores to Rank the Top Third,” University of St. Thomas Law Journal, Vol. 9:3, p. 5.

**The confusion comes in large part from the way Leiter sets out his law school rankings on the site.  For these rankings he stopped doing reputational surveys and now generally reports citation data and learned-society membership data.  He presents these “studies,” some of which are “scholarly impact” reports, as evidence of “Scholarly Reputation” and, ultimately, of “Faculty Quality.” The problem is that he’s using the general heading “Scholarly Reputation” for his “Scholarly Impact” studies, which obscures the replacement of reputational surveys with reports based on public data.

Leiter’s taxonomy of these categories on his law school rankings site is, shall we say, exuberant: under the tab “Faculty Quality” he lists five main headings: “Scholarly Reputation” (nine reports of various sorts in various years, non-continuous, including impact studies, and for 2003-04 “Scholarly Reputation” surveys for both overall and specialty rankings), “Scholarly Productivity” (four reports for 2002-03 and one for 2008), “Scholarly Impact” (nine miscellaneous reports in various years, non-continuous), “Teaching Quality” (one report, 2003-04), and “Faculty Moves” (one report for 1995-2004).

With a Bang–The PGR in Free Fall

On November 17, 2014 Brian Leiter posted a teaser, or trailer, for the 2014 PGR’s overall rankings.  We have now had a total of four posts on overall rankings, including the last thirty of the top fifty departments on December 2nd.  Schools and departments are already trumpeting their success in the rankings based on these previews–apparently they can’t resist advertising the good news to the world, like companies reporting an uptick in market share.  But wait!  We still don’t know how many evaluators participated in the overall rankings.  Why all the mystery?  Well, now we may have the answer.

Leiter has now posted a preview of the rankings for the Philosophy of Language.  The results catapult this specialization to the number one spot in my own ranking of the rankings, “Not With a Bang But With a Whimper—Falling Rates of Participation in the Philosophical Gourmet Report.”  The PGR has lost evaluators in all eleven of the specializations posted thus far by Leiter, and some of these losses have been substantial.   Philosophy of Language is a core area of contemporary philosophy.   If the PGR is losing big here, it’s a good bet that it has lost everywhere.

In 2011 there were 52 evaluators in the Philosophy of Language.

Of these 52 evaluators, 31 did not participate in the Philosophy of Language rankings in 2014.

This is a 60% drop from 2011 to 2014.

There were a total of 27 evaluators in 2014.

This is a net drop of 48% from 2011 to 2014.

In other words, the Philosophy of Language lost 31 evaluators in 2014 and found only 6 replacements.
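
For readers who want to check the arithmetic, here is a minimal sketch in Python (my own illustration, not anything published by the PGR) showing how the 60% drop, the 48% net drop, and the 6 replacements all follow from the raw counts quoted above:

```python
# A minimal arithmetic check of the Philosophy of Language figures above.
# The raw counts (52, 31, 27) are taken from the post itself.
evaluators_2011 = 52                    # evaluators in 2011
did_not_return = 31                     # 2011 evaluators absent in 2014
evaluators_2014 = 27                    # evaluators in 2014

gross_drop = did_not_return / evaluators_2011                         # 31/52 ≈ 0.60
net_drop = (evaluators_2011 - evaluators_2014) / evaluators_2011      # 25/52 ≈ 0.48
replacements = evaluators_2014 - (evaluators_2011 - did_not_return)   # 27 - 21 = 6

print(f"gross drop: {gross_drop:.0%}, net drop: {net_drop:.0%}, replacements: {replacements}")
# gross drop: 60%, net drop: 48%, replacements: 6
```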

Finally, loss in the total number of evaluators is just one of the problems here.  For example, the Philosophy of Language had only 7 women evaluators in 2011, or 14%.   This time around, 3 women served as evaluators, or 11%.  The PGR’s long-term methodological flaws mean that it was not a viable tool even before the recent losses.   Between the loss of evaluators in the specializations we’ve seen so far and the limited participation of women philosophers, among others, in many of the specializations, the PGR should not be used this year as a guide for students–or as a feather in departmental caps.

 

Not With a Bang But With a Whimper—Falling Rates of Participation in the Philosophical Gourmet Report

Professor Leiter has yet to tell us how many people served as evaluators or the size of the pool from which they were drawn for the 2014 Philosophical Gourmet Report. This information requires only counting and reporting. Nevertheless, without giving us this pertinent information, and within hours of the survey deadline, Leiter managed to produce some preliminary overall rankings on his blog, and the next day he gave us a list of the “overall” top departments in the US and the UK. Since then we have been offered information on several specializations, but still no data on how many people participated.  We also don’t know why Leiter put out these specializations first.  There doesn’t appear to be any methodological reason for doing so.  (One can speculate–what are the odds that Leiter would put his worst feet forward first?)  In any event, the information that we have been offered is telling in terms of rates of participation.

In analyzing the rate of participation for this year’s PGR it’s important to bear in mind that Leiter invites past evaluators to participate. He recently reaffirmed this policy in a blog post defending Peter Ludlow’s participation in the 2014 PGR.

Ludlow has been a regular respondent to the surveys for many years; we have always invited past participants (except when they ask to be removed), and we did so this year as well (including, for example, those who signed the boycott statement–many of them did, in the end, participate happily).

Given this policy I thought it would be interesting to do a preliminary analysis of participation rates based on the areas of specialization posted to date on Leiter’s blog.  (I am not sure why he is posting the results on his blog this time around.  I was under the impression that the close connection between the PGR and Leiter’s blog was one of the things that was supposed to be different going forward, but perhaps I misunderstood.)

In addition to the information that I present here, I plan to report in future posts about some unsettling findings.  By way of preview, and for example:

  • 19th Century Continental Philosophy lost eleven of its twenty-eight evaluators this year. Examination of the CVs, websites, and personal statements of the evaluators for 2014 shows that five aren’t specialists in 19th Century Continental Philosophy.  In addition, out of the twenty-two evaluators who did participate in 2014, eight are Nietzsche specialists. (Well, that’s really eight out of seventeen, since five aren’t 19th Century Continental experts.) In contrast, there was one Kierkegaard expert.
  • Nine out of the sixteen evaluators for Metaethics in 2014 received their degrees from two schools, Princeton and Michigan. Leiter argues that the Report looks for balance in educational background.
  • A significant number of women who served as evaluators in 2011 chose not to participate this year. For example, Philosophy of Mind lost seven of the eight women who participated in the 2011 PGR.

In this post, I rank specializations–hey, turnabout is fair play–based on how many evaluators for a given specialization chose not to participate in this year’s PGR for that specialization, after having participated in 2011.  Reading the columns from left to right you will find: the total number of participants for a given specialization in 2011, how many did not participate in the 2014 PGR, what percentage did not participate, the total number of evaluators for 2014, and, lastly, the percent net loss of evaluators in each specialization from 2011 to 2014.  (Leiter has added some new evaluators–there also perhaps hangs a tale.)  The information was gathered from the Philosophical Gourmet Report 2011 and posts on Leiter’s blog, starting with his first report on specializations, the Philosophy of Physics, November 20, 2014.   (If you find any errors in the numbers, please let me know.   This is preliminary but to the best of my knowledge accurate.)
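
To make the bookkeeping behind the table concrete, here is a minimal sketch in Python of how each row’s five columns can be computed by comparing a specialization’s 2011 evaluator list with its 2014 list; the names below are placeholders of my own, not actual PGR evaluators:

```python
# Sketch of how a table row is derived: compare a specialization's 2011
# evaluator list with its 2014 list. Names are hypothetical placeholders.
def participation_row(evaluators_2011, evaluators_2014):
    """Return (2011 total, loss in 2014, percent drop, 2014 total, percent net drop)."""
    loss = len(set(evaluators_2011) - set(evaluators_2014))   # 2011 evaluators who did not return
    total_2011, total_2014 = len(set(evaluators_2011)), len(set(evaluators_2014))
    pct_drop = round(100 * loss / total_2011)
    pct_net_drop = round(100 * (total_2011 - total_2014) / total_2011)
    return total_2011, loss, pct_drop, total_2014, pct_net_drop

# Hypothetical example: five evaluators in 2011, two of whom returned, plus one new evaluator in 2014.
print(participation_row({"A", "B", "C", "D", "E"}, {"A", "B", "F"}))
# (5, 3, 60, 3, 40)
```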

Rankings of Specializations according to what percentage of evaluators in 2011 did not participate in 2014 (all percentages rounded;  *=tie;  updates in brackets)

2011 evaluators   Loss in 2014   Percent Drop   Total 2014   Net drop

Group One (>50% loss)

1)  Philosophy of Language  [added 12/5/2014]

52                              31                     60%                  27                 48%

2) Philosophy of Action

17                               10                     59%                    8                 53%

3) *Ethics

58                              30                    52%                   39                 33%

3) *Metaethics

25                              13                     52%                   16                  36%

 

Group Two (30%-50% loss)

1)  Philosophy of Mind [added 12/5/2014]

52                               22                     42%                  42                19%

2)  *Political Philosophy [added 11/30/2014]

47                               19                     40%                  34                28%

2)  *Early Modern Philosophy, 17th Century  [added 12/1/2014]

20                                8                     40%                 18                 10%

4) *Kant

18                                 7                     39%                  14                 22%

4) *19th Century Continental

28                                11                    39%                  22                21%

6) Philosophy of Physics

11                                 4                      36%                  10                   9%

 7)  Philosophy of Law [added 12/5/2014]

20                                7                      35%                  16                 20%**

 

Group Three   (15%-30% loss)

1) Ancient Philosophy

20                               3                        15%                  18                  10%

 

Group Four (less than 15% loss)

No specializations thus far.  (Note: there have been no gains in the total number of evaluators for any specialization posted to date.)

____________

UPDATE   11/30/2014   Political Philosophy added.  For the 2014 PGR there are four women philosophers in Political Philosophy, 12% of the total.

UPDATE  12/1/2014   Early Modern Philosophy, 17th Century, added.   For the 2014 PGR there are two women philosophers in 17th Century, 11% of the total.   Half of all of the 17th Century evaluators in 2014, nine, went to four Ph.D. programs.

UPDATE  12/2/2014   For the ten specializations ranked thus far, 43% of evaluators who participated in these specializations in 2011 did not participate in them in 2014.  The total net loss for all ten specializations is 25%.

UPDATE  12/5/2014   The Philosophy of Language has three women evaluators, or 11%.  In 2011, it had seven women evaluators, or 14%.

UPDATE  12/5/2014   For Philosophy of Law there were three women evaluators in 2014, 19%.  In 2011, there were four, 20%.    Brian Leiter did not evaluate in this category in 2011.  He did this year.  Without him the net loss would have been 25%. **    Also of note, for 2014, nine out of sixteen evaluators are not at U.S. institutions; five evaluators from U.S. institutions who participated in 2011 did not do so this time around.

UPDATE  12/6/2014   For the twelve specializations ranked thus far, 45% of evaluators who participated in these specializations in 2011 did not participate in them in 2014.   The total net loss for all twelve specializations is 28%.

More Than Funny Great Jokes

I hereby inaugurate a new series for UP@NIGHT, “More Than Funny Great Jokes.”  I begin with one of my favorite jokes in this vein.  (Please feel free to send in suggestions for this series.)

On Yom Kippur, the rabbi stops in the middle of the service, prostrates himself beside the bema, and cries out, “Oh, God. Before You, I am nothing!”

Saul Rosenberg, president of the temple, is so moved by this demonstration of piety that he immediately throws himself to the floor beside the rabbi and cries, “Oh, God!  Before You, I am nothing!”

Then Chaim Pitkin, a tailor, jumps from his seat, prostrates himself in the aisle and cries, “Oh God! Before You, I am nothing!”

Rosenberg nudges the rabbi and whispers, “So look who thinks he’s nothing.”

(Thanks to Jewish Sight Seeing and Bruce Lowitt for this version of a pretty famous punch line.)