
“[Scholarly] Impact tells you things that reputation doesn’t. Reputation tends to be yesterday’s news–what happened 25 years ago.” –Brian Leiter*

Could it be? Is this the same Brian Leiter who has argued vociferously for years that his reputational survey-based rankings of philosophers are essential to the well-being of the profession? Indeed it is. It seems Leiter doesn’t actually believe that a reputational survey is the best way to do rankings, or even that rankings must be done that way; he has simply convinced a lot of people in the profession that it’s the only way for philosophy.

While Leiter has left philosophers mired in arguments about the merits and faults of the PGR, he dropped survey-based reputational rankings from his law school enterprise a decade ago and replaced them with scholarly impact rankings based primarily on citations. It’s confusing at first, because he uses the term “reputation” both narrowly, for reputational surveys as distinguished from impact studies, and broadly, as a marker of quality however established.** This post will therefore focus on that replacement of reputational surveys with scholarly impact studies in his law school rankings.

As far as I can tell, the last time Leiter did a reputational survey for law schools was 2003. He received about 150 responses to a survey sent out to 292 people. His next edition no longer involved a reputational survey, as we can see from the list of reports on his Law School Rankings website (below). First, however, let’s look at the Welcome statement to Leiter’s Law School Rankings.

Faculty, students, and parents are, quite reasonably, interested in comparative information on the quality of different law schools. This site compiles such information about many (but, unfortunately, not all) U.S. law schools, along dimensions like faculty quality and job placement. Five or six new studies are posted each year, and are listed under “Newest Rankings.” Prior studies, some going back a decade, are listed under each of the subject-area categories. Each study begins with an explicit statement of the measure being utilized, often with caveats that should affect the interpretation of the results. No attempt is made to aggregate different measures, since no weighting of different elements could be justified in a principle [sic] way. Students are invited to consider measures important to them and to utilize those in selecting schools to which to apply and, ultimately, in deciding which among the schools to which you are accepted warrant closer, personalized investigation. Faculty will hopefully find the measures here useful benchmarks of institutional performance. The measurement criteria emphasized here pertain solely to academic excellence and professional outcomes, and rely on data in the public domain. Suggestions for new studies and improvements in the existing measures are welcome. (Emphasis added)

Several points of interest here: there are multiple measures, “students are invited to consider measures important to them” (as many of us have suggested for philosophy), there is no attempt to aggregate different measures into an overall ranking, and measurement of academic excellence relies on “data in the public domain.”  What are these multiple measures?  At the top of Leiter’s law rankings, there are three tabs, little doors to a storehouse of treasures: Faculty Quality, Student Placement, and Job Placement.  As a philosopher living in the world according to the PGR, I marvel at the bounty and feel like the kid who finds that Santa left him a piece of coal in his Christmas stocking.  Are we so badly behaved, in philosophy, that we get only the PGR?

When we click the tab that says “Faculty Quality,” we get information, ostensibly, on Scholarly Reputation.**

Scholarly Reputation

According to his site, the last time Leiter used a survey for “Scholarly Reputation” was in 2003-2004. The method employed will sound familiar to the PGR’s fans (and critics):

The Survey and the Method
Between March 3 and March 21, 2003, more than 150 leading legal scholars from around the nation completed the most thorough evaluation of American law faculty quality ever undertaken.  Scholars were invited to participate in the EQR survey based on the following criteria:

  1. Only active and distinguished scholars were invited.  These are the scholars most likely to have informed opinions about faculty quality.
  2. Multiple faculty from every school evaluated were invited.
  3. Diversity in terms of seniority was sought in the evaluator pool.
  4. Diversity in terms of fields and approaches was sought in the evaluator pool.

Evaluators were not permitted to evaluate either their own institution or the institution from which they had received the highest law degree.

What we have here is Leiter’s version of the Philosophical Gourmet Report for law schools, or vice versa. However, the next entry regarding faculty quality, from 2006, is based not on a reputational survey but on faculty membership in the American Academy of Arts & Sciences. And for those of us who have worried that the pool of evaluators for the PGR may not be sufficiently diverse, we learn that Leiter actually shares this concern, at least when he is discussing the AAAS, though not when he’s defending the PGR:

The American Academy of Arts & Sciences each year elects members based on their distinguished contributions to scholarship, the arts, education, business, or public affairs. In reality, the Academy tends to be a bit “chummy”—schools already “rich” with members get “richer,” not always on the merits—though the sins tend to be of omission rather than inclusion.

Be that as it may, it is something of a sideshow compared to what Leiter has in store: rankings based on citations, a method some fans of Leiter’s PGR have dismissed as inferior for ranking departments. When we click the link to the Top 70 Law Faculties in Scholarly Impact, 2007-2011, we find Professor Leiter commenting on work done to update his earlier citation-based studies.

Professor Gregory Sisk and his colleagues in the law library at the University of St. Thomas (Minnesota) have used the methodology of my prior scholarly impact studies to produced [sic] an updated study of the 70 law faculties with the highest scholarly impact, based on both average and median impact as measured by citations. Professor Sisk and his colleagues give a detailed explanation of the project here; I also encourage those utilizing this information to note the caveats I have highlighted previously.

When you click on the link to “the methodology of my [Leiter’s] prior scholarly impact studies,” the following appears.

This is a study that aims to identify the 25 law faculties with the most “scholarly impact” as measured by citations during roughly the past FIVE years.  The methodology is the same as used in the 2007 study, though now excluding, per suggestion from many colleagues and as we did last year, untenured faculty from the count, since their citation counts are, for obvious reasons, always lower.   The study also excludes judges who still do some teaching (like Guido Calabresi at Yale and Richard Posner at Chicago).
The study was conducted in early January of 2010 (the search parameters were date aft [sic] 2004 and bef [sic] January 15 2010), so incorporates some articles published in early 2010, but the bulk of the sample is made up of articles published in the years 2005, 2006, 2007, 2008, and 2009.
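
Strip away the Westlaw search parameters and what the quoted methodology describes is simple arithmetic: count the citations to each tenured member of a faculty over roughly a five-year window, then summarize the faculty by the mean and the median of those counts. Here is a minimal sketch of that calculation in Python; the names and citation figures are invented for illustration, and the code is my reconstruction of the quoted description, not Leiter’s or Sisk’s actual data or full method.

```python
from statistics import mean, median

# Hypothetical citation counts for one faculty over a five-year window.
# Format: name -> (citation_count, is_tenured). Numbers are invented.
faculty_citations = {
    "Prof. A": (410, True),
    "Prof. B": (275, True),
    "Prof. C": (130, True),
    "Prof. D": (90, False),  # untenured: excluded, per the quoted method
}

# Exclude untenured faculty from the count, as the methodology specifies.
counts = [c for c, tenured in faculty_citations.values() if tenured]

# The studies report both average and median impact per faculty.
print(f"mean citations:   {mean(counts):.1f}")
print(f"median citations: {median(counts):.1f}")
```

Ranking the faculties is then just a matter of sorting schools by these summary figures; nothing in the procedure requires polling anyone’s opinion.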

I wanted to quote the methodology in full so that there is no question that Leiter has been using a citation method, not a reputational survey, for law schools since 2003-04. Now let’s look under the tree.

Recall the quotation with which I began, Leiter’s claim that reputation “tends to be yesterday’s news.” If you hit the link that is meant to explain the project in detail, the one marked “here” (above), you are taken to the article from which I drew the quotation, “Scholarly Impact of Law School Faculties in 2012: Applying Leiter Scores to Rank the Top Third.” This article contains the quotation in a footnote but also refers to it in the main body, as we shall see in a moment. First let’s look at the original occurrence of the quotation, in an article entitled “Top Scholarly Faculties,” which appeared in the National Jurist in 2010 and dealt with the study by Sisk and his colleagues. The quotation appears near the end of the article, in this context:

Most of the law schools in the top twenty are not surprising.  But Leiter and Sisk agree that the study is a good indicator for future reputation.

“[Scholarly] Impact tells you things that reputation doesn’t,” Leiter said. “Reputation tends to be yesterday’s news–what happened 25 years ago.  I think [this study] is useful for students who care about the academic experience” [brackets in original].

Now here is how the claim appears in the article, “Scholarly Impact of Law School Faculties in 2012,” that Leiter links to on his law school rankings site:

In their pioneering work evaluating law faculties through per capita citations to their scholarly writings, Professors Theodore Eisenberg and Martin Wells asserted that scholarly impact ranking “assesses not what scholars say about schools’ academic reputations but what they in fact do with schools’ output.” As Professor Brian Leiter puts it, reputational surveys for law schools, such as that incorporated in the U.S. News ranking, tend to reflect “yesterday’s news.”  Scholarly impact studies focus on the present-day reception of a law faculty’s work by the community of legal scholars.

So in the article Leiter links to by way of information about his law school rankings methodology, the claim appears not only in a footnote but also in the main body. If Leiter thought there was a problem with this claim, or with the quotation, he would have alerted his readers to it when he recommended the article. Naturally, Leiter thinks that his reputational survey was (for law schools) and is (for philosophy) better than the reputational element of the U.S. News rankings, because, for example, the U.S. News survey ranks whole schools rather than departments, as Leiter’s surveys do. But bear in mind that his claim about reputation and “yesterday’s news” is not confined to that element of the U.S. News rankings. The reputational element is merely an example in the quotation from “Scholarly Impact of Law School Faculties in 2012,” while in “Top Scholarly Faculties” the U.S. News rankings are mentioned only briefly at the beginning, to contrast their results with those of Sisk and his colleagues. The quotation from Leiter appears at the end of the article.

The kind of reputational survey that Leiter gave up for law schools, in favor of rankings based on public data, was virtually the same as the sole method currently implemented for the PGR (and, by some, fiercely defended). Why did Leiter abandon it ten years ago if it was so distinctive and valuable in comparison with other law school rankings, of which there are many, including the element of the U.S. News rankings based on a reputational survey? Whatever the reasons (and it would be interesting to hear them), it appears that, for Leiter, what’s good enough for philosophy won’t do for law schools.

___________

*Brian Leiter, quoted by Jack Crittenden, “Top Scholarly Faculties,” National Jurist, Nov. 2010, at 5. Also quoted in Gregory Sisk, Valerie Aggerbeck, Debby Hackerson & Mary Wells, “Scholarly Impact of Law School Faculties in 2012: Applying Leiter Scores to Rank the Top Third,” University of St. Thomas Law Journal, Vol. 9:3, p. 5.

**The confusion comes in large part from the way Leiter sets out his law school rankings on the site. For these rankings he stopped doing reputational surveys and now generally reports citation data and learned-society membership data. He presents these “studies,” some of which are “scholarly impact” reports, as evidence of “Scholarly Reputation” and, ultimately, of “Faculty Quality.” The problem is that he uses the general heading “Scholarly Reputation” for his “Scholarly Impact” studies, which obscures the replacement of reputational surveys with reports based on public data.

Leiter’s taxonomy of these categories on his law school rankings site is, shall we say, exuberant: under the tab “Faculty Quality” he lists five main headings: “Scholarly Reputation” (nine reports of various sorts in various years, non-continuous, including impact studies and, for 2003-04, “Scholarly Reputation” surveys for both overall and specialty rankings), “Scholarly Productivity” (four reports for 2002-03 and one for 2008), “Scholarly Impact” (nine miscellaneous reports in various years, non-continuous), “Teaching Quality” (one report, 2003-04), and “Faculty Moves” (one report for 1995-2004).
