[Photo: Chippy at 5]

If you are interested in finding the most suitable dog for you and your family, and you are okay with pure breeds, you can find websites that help you decide. They allow you to select various preferences, for example, coat length and upkeep requirements, how much the dog will bark, how it is with children, genetic predispositions to illness, size, and so on. Not only can you select preferences, you can also weight each one on a scale in terms of how important it is relative to the others. You then hit enter. And voilà, a list of breeds appears, with detailed information about each one. In contrast, the most extensive ranking system in Philosophy is not even in the same ballpark in terms of assisting students with the selection of graduate programs. We provide better selection platforms for people looking for an animal companion than for our students researching philosophy programs.
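(A brief aside for the technically inclined: the machinery behind such a site is nothing exotic. Here is a minimal sketch, in Python, of weighted-preference matching; the breeds, attributes, and numbers are invented purely for illustration, not drawn from any actual site.)

```python
# A minimal sketch of weighted-preference matching.
# All breed data and attribute names are made up for illustration.

BREEDS = {
    "Border Collie":   {"coat_upkeep": 3, "barking": 4, "good_with_kids": 5, "size": 3},
    "Basset Hound":    {"coat_upkeep": 2, "barking": 5, "good_with_kids": 4, "size": 3},
    "Standard Poodle": {"coat_upkeep": 5, "barking": 2, "good_with_kids": 4, "size": 4},
}

def match_breeds(preferences, weights):
    """Score each breed by how closely it matches the user's preferences,
    weighting each attribute by how much the user says it matters."""
    scored = []
    for breed, traits in BREEDS.items():
        # Smaller weighted distance from the preferred values = better fit.
        penalty = sum(
            weights[attr] * abs(traits[attr] - preferences[attr])
            for attr in preferences
        )
        scored.append((penalty, breed))
    return [breed for _, breed in sorted(scored)]

# This user cares most about low barking and kid-friendliness.
print(match_breeds(
    preferences={"coat_upkeep": 2, "barking": 1, "good_with_kids": 5, "size": 3},
    weights={"coat_upkeep": 1, "barking": 3, "good_with_kids": 3, "size": 1},
))
```

The point is simply that a handful of preferences, each with a weight, plus a scoring rule is all it takes to turn a database into a personalized ranked list.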

In our zeal to address the pros and cons of the current ranking system in Philosophy, Brian Leiter’s “Philosophical Gourmet Report,” as well as to discuss the future of rankings, it seems that we have trapped ourselves in something of a box, an old box, one that is more 20th century than 21st. Or to be more explicit, the kind of ranking system that currently exists in Philosophy is old hat. It has limited value because it offers few or no data points about a host of factors that matter to students. After conversations with my wife, Cathy Kemp, also a philosopher, and after reading suggestions on the web, I would like to propose an alternative. This alternative is by no means original. As a matter of fact, I will quote extensively from a colleague, Noëlle McAfee, regarding this idea below. But what I would like to do here is frame the alternative and suggest why moving in this direction is, well, a no-brainer. First, some comments and assumptions.

There has been a good deal of discussion about why a ranking system is a good thing for Philosophy. I have heard claims that it is helpful to philosophy departments seeking funding. These claims are debatable. And even if true, they have to be weighed against possible harms. We can sidestep this debate for the present, because the general consensus among those who support rankings is that they are done for the benefit of prospective graduate students. This has been Leiter’s position.

Assumption #1. Whatever kind of information service we offer should have as its primary goal assisting prospective graduate students in choosing the best graduate program(s) for them.

One of the bones of contention in the profession has been about whether rankings accurately report quality or prestige (or both).

Assumption #2. Our goal should be to assist students in finding the best quality graduate education, not the one with the most prestige. It may be that the two go hand in hand, but they may not. Prestige should only be viewed as a possible efficient cause, not as a final one.

Perhaps the biggest bone of contention is the notion of quality. Often the debate about rankings comes down to whether we can assume a univocal definition of quality. A definition of this sort appears to be implied in the overall rankings of departments. (If this assumption were not made, then there would be little point in doing overall rankings.) However, in the actual practices of philosophers, while we may have standards that we like to think of as universal for the discipline, it is impossible to find a univocal definition that is robust and concrete enough to reach all sub-fields, styles, traditions, etc.

Assumption #3. There are multiple reasonable ways to demonstrate quality in Philosophy, and different philosophers and departments will do so in different ways.

Prospective graduate students presumably want the highest quality education they can get, given their interests. They also have lives outside of philosophy that will require them to live in different locations and pay for their education in different ways. In addition, they will have different kinds of desires about where they would like to work and at what kind of institution. (Here I think we must give up the notion–or the illusion–that the ultimate goal of all prospective graduate students is to teach at a Research I institution.)

Assumption #4. Any service that is provided to prospective graduate students should take into consideration a wide range of factors such as those mentioned above. For example, it should contain placement records, type of school, size of school, as well as geography.

With these assumptions in mind, I ask the obvious question: how best can we serve the next generation of philosophers? And I submit that once we start considering the actual needs of students, the limitations of even an extensive ranking system like the PGR become readily apparent. We are privileging the perceptions of a subset of philosophers about quality over offering students a full range of ways of judging quality, as well as suitability, for themselves. One way to put this: we are still using mid-twentieth-century technology when we could be providing a twenty-first-century alternative, one that could be much more helpful to students than any ranking system. Further, if Philosophy put half of the energy it now puts into the PGR into such a service, we could create a system that would be the envy of other disciplines. (Certainly we could use the information contained in it in other ways, but I won’t address these here.) What would such a system look like? Noëlle McAfee made the following suggestion recently on her blog, “gonepublic: philosophy, politics, & public life”:

The APA has been collecting data from philosophy PhD programs for a few years now for its Guide to Programs on placement rates, etc. What if more information were collected, such as numbers of books published with university presses, faculty citation and Google Scholar analytics, peer-reviewed conference papers, faculty areas of specialization, etc? And then what if that information were turned into a search engine such that a prospective graduate student (or anyone) could go there and search by key words for programs that offered what she or he was wanting to study? Programs that were more research productive (with faculty being cited more) would show up higher on the list than those that weren’t. So the student could create a customized ranking of programs that would meet his or her interests. Anyone could use that data to generate rankings of any particular specialty.

Citations, publications, etc. are a better measure than perceived reputation. Not only are they more objective, they factor in the careful scrutiny that goes into the peer-review process—as opposed to top-of-the-head perceptions of faculty lists by those that may be unfamiliar with those faculty members’ work.

To this list could be added some of the factors that I mentioned earlier, geographic distribution, for example. A student using a search engine of this sort could create a truly individualized set of schools to apply to.
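(Again, for the technically inclined: what McAfee describes is, at bottom, a filter plus a sort. Here is a minimal sketch in Python; the program records, field names, and numbers are entirely hypothetical, meant only to show how a student could filter by specialization and region and then rank by whichever measure, citations or placement, matters most to her or him.)

```python
# A minimal sketch of the kind of search McAfee describes: filter by area of
# specialization (and optionally by region), then order the results by a
# research-productivity or placement signal. All records are hypothetical.

PROGRAMS = [
    {"name": "University A", "areas": {"ethics", "political philosophy"},
     "region": "Northeast", "citations_per_faculty": 210, "placement_rate": 0.62},
    {"name": "University B", "areas": {"philosophy of mind", "ethics"},
     "region": "Midwest", "citations_per_faculty": 340, "placement_rate": 0.48},
    {"name": "University C", "areas": {"ancient philosophy", "ethics"},
     "region": "West", "citations_per_faculty": 150, "placement_rate": 0.71},
]

def search_programs(area, region=None, sort_key="citations_per_faculty"):
    """Return programs matching a student's area (and, optionally, region),
    ranked by whichever measure the student cares most about."""
    hits = [p for p in PROGRAMS
            if area in p["areas"] and (region is None or p["region"] == region)]
    return sorted(hits, key=lambda p: p[sort_key], reverse=True)

# One student ranks ethics programs by faculty citations; another, by placement.
for p in search_programs("ethics"):
    print(p["name"], p["citations_per_faculty"])
for p in search_programs("ethics", sort_key="placement_rate"):
    print(p["name"], p["placement_rate"])
```

Notice that the same underlying data yields a different ordering for each student; no single, univocal ranking ever has to be imposed.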

But wait! The supporters of rankings will say we still need to provide students with “objective” information about quality, meaning the perceptions of those in the field. I say, we can debate from today until the apocalypse and we may never agree on what these “objective” perceptions amount to. In this case, the best “objectivity” lies in the data we provide. Let’s give future graduate students some credit and not assume, paternalistically, that without our perceptions of “quality” in the form of rankings they will be lost at sea. They are a sophisticated bunch, well accustomed to making all kinds of decisions with information presented on web-based platforms. Philosophy should tailor its information outreach to the prospective graduate students of this century, not the last.

(For more information on the current controversy in Philosophy about rankings, see “Archive of the Meltdown.”)

2 thoughts

  1. Excellent ideas. Why not include blog posts? Today, blogs reflect current thought; presses print what we were contemplating last year. Let us also broaden this application, which includes all the items mentioned, as well as student feedback, to apply to ALL university programs. If we are to think outside the box, define the box, then leave it behind. Education itself is in trouble; perhaps it is time to recognize that testing companies, which profit from their products, should not be writing curricula for public education. That task should belong to experienced educators. Philosophers should be defining their own programs, with the help of students, not allowing those who do not even understand the questions to be defining and rating programs, which to them must seem like a fog bank.

  2. Thanks, Mitchell (and Noëlle)! Love the idea of moving into a multi-dimensional quality space. There will still be problems of bias (associated with, e.g., publications, citations, and so on), but as you say, they are in any case better than the sort of top-of-the-head assessments of quality that we are presently relying on.
