upnight.com

NYC looking north from the Empire State Building. Photo (proportions altered) by David Iliff, used under license.

UP@NIGHT


……………

Early to bed, and early to rise,

Makes a man healthy, wealthy and wise

- Benjamin Franklin.

I don’t see it.

- George Washington

Now both of these are high authorities – very high and respectable authorities – but I am with General Washington first, last, and all the time on this proposition.

Because I don’t see it, either. . . .

Put no trust in the benefits to accrue from early rising, as set forth by the infatuated Franklin – but stake the last cent of your substance on the judgment of old George Washington, the Father of his Country, who said “he couldn’t see it.”

And you hear me endorsing that sentiment.

Mark Twain, “Early Rising, As Regards Excursions to the Cliff House,” MARK TWAIN IN THE GOLDEN ERA 1863-1866.

……………

Ten Excuses for Not Filling Out the PGR Survey


Given the ever-increasing number of problems related to the PGR that are presently being highlighted–for example, the criteria for evaluating departments are inadequately defined, leaving individual evaluators to fall back on their own “different philosophies of evaluation” in order to rank–I thought I would provide the PGR’s invited evaluators with a list of excuses for bowing out. Visitors to UP@NIGHT are welcome to add to the list in the Comments. By the way, the phrase “different philosophies of evaluation” is found on the PGR’s site under Methods and Criteria. See “The Dog Ate My (Philosophical Gourmet) Report” for discussion of this phrase and other gems.

 

Ten Excuses for Not Filling Out the Philosophical Gourmet Report Survey

 

10.  I was attending a Star Trek Convention.

 

9.  I couldn’t find the “Like” button.


8.  I became confused when no one asked us to use a #2 lead pencil.


7.  I was counting cars on the NJ Turnpike.

 

6.  I couldn’t find photos of the dishes to rate on the Gourmet’s menu.

 

5.  Hegel’s Logic started to make sense and I lost track of time.

 

4.  Bruce Springsteen asked me not to fill out the survey.

 

3.  I was waiting for iOS 8.0 to download on my iPad.

 

2.  I thought PGR stood for Państwowe Gospodarstwo Rolne, collective farms that existed in communist Poland.

 

1.  I was serving as a judge at an American Kennel Club dog show.

 

Other excuses welcome. Please feel free to leave them in the Comments Section.

The Dog Ate My (Philosophical Gourmet) Report


TO RANK OR NOT TO RANK….in the year 2014?

Consider some necessary but not sufficient conditions for the evaluation of individuals or groups in order to rank them: (1) there should be explicit standards—criteria—on which to base the evaluations, and (2) there should be a public code of conduct for the evaluators—judges—to follow, as well as a detailed description of their function as judges. Currently neither of these exists for the PGR, but both exist for organizations well-known in the U.S.

Here is the way that the PGR addresses #1.

Please give your opinion of the attractiveness of the faculty for a prospective student, taking in to account (and weighted as you deem appropriate) the quality of philosophical work and talent on the faculty, the range of areas the faculty covers, and the availability of the faculty over the next few years (emphasis added).

In addition, the PGR provides the following guidance to the evaluators regarding criteria and their respective weights.

“Faculty quality” should be taken to encompass the quality of philosophical work and talent represented by the faculty and the range of areas they cover, with the two weighted as you think appropriate. Since the rankings are used by prospective students, about to embark on a multi-year course of study, you may also take in to account, as you see fit, considerations like the status (full-time, part-time) of the faculty; the age of the faculty (as a somewhat tenuous guide to prospective availability, not quality); and the quality of training the faculty provide, to the extent you have information about this (emphasis added).

Hmmm. Not much here by way of any definitive and well-defined criteria: “attractiveness,” “weighted as you deem appropriate,” “quality,” “talent,” “weighted as you think appropriate,” “may take into account as you see fit,” and “to the extent you have information about this.” * Judges can decide which criteria to use and in what combination. Note that the PGR’s criteria for evaluation are run together with the standards for the judicial function (#2).

Compare this with the way that the Westminster Kennel Club discusses the breed standards by which individual dogs should be evaluated. For breeds you might think of specializations; for individual dogs and their attributes, perhaps departments and their faculties. The important thing is the way in which an enterprise develops and provides criteria for judgment of its objects, not the objects themselves, dogs or departments.

Each breed’s parent club creates a STANDARD, a written description of the ideal specimen of that breed. Generally relating form to function, i.e., the original function that the dog was bred to perform, most standards describe general appearance, movement, temperament, and specific physical traits such as height and weight, coat, colors, eye color and shape, ear shape and placement, feet, tail, and more. Some standards can be very specific, some can be rather general and leave much room for individual interpretation by judges. This results in the sport’s subjective basis: one judge, applying his or her interpretation of the standard, giving his or her opinion of the best dog on that particular day. Standards are written, maintained and owned by the parent clubs of each breed (CAPS in original).

Notice both the attention to detail and the willingness to acknowledge that there will still be room for interpretation by judges, or evaluators. We have here both reasonably objective criteria and a bow to the interpretive dimension in one paragraph. Impressive! In addition, and perhaps most importantly, the Club refers to the relations of form and function (“the original function that the dog was bred to perform”); that is, there is some agreed-upon standard/function for each breed. And the WKC expects each breed’s parent club to provide a written standard for its breed.

Now consider how the PGR handles #2, that is, how judges should understand their roles and rules of conduct.

Different respondents had different “centers of gravity” in their scoring: some gave no 5s, others gave no score lower than a 2. It was also clear that respondents had different philosophies of evaluation: some clearly tried to consider the breadth of strength in a department, while others ranked a program highly or lowly based simply on its strength in his or her fields. The range of evaluations for single departments should be a cautionary note to all undergraduates about relying too much on the advice of just one or two faculty advisors. Idiosyncrasy abounds, even at top departments! (emphasis added).

Yes, indeed, idiosyncrasy abounds. Now, for contrast, let’s look to the hounds: the American Kennel Club has a thirty-page document detailing the expectations for proper judging and the requirements for judges. Here is the way the AKC introduces the document:

Judging at AKC® shows should be enjoyable for the judge and beneficial to the sport of purebred dogs. In this publication, you will find Rules, Policies and suggested Guidelines. The Policies and Rules will be clearly designated as such. The suggestions have been developed over the years based on the experience of many seasoned judges and the AKC staff. You will find them most helpful in learning the judging process. Policies are adopted by the Board of Directors, and Rules are approved by the Delegate body. Compliance with these is mandatory.

“Policies are adopted by the Board of Directors, and Rules are approved by the Delegate body. Compliance with these is mandatory.” Seems eminently sensible. The PGR, on the other hand, says virtually nothing about the role of judges, except the catch-as-catch-can points mentioned under #1 above and the acknowledgement, “It was also clear that respondents had different philosophies of evaluation.” (Different philosophies of evaluation to rank different philosophy departments. No comment.) And the AKC provides a staff to assist those learning the judging process.

I know, you can say that we philosophers don’t need any such guidelines because, well, we all know how to serve as judges. However, the problem is that without agreed-upon expectations for our judges we can’t be sure that they are actually judging in the same way, with the same care, and with the same set of assumptions. As far as I know, neither the PGR’s editors nor its Advisory Board provides detailed policies, or any policy at all, in this regard. The criteria and the guidelines for judgment should be explicit and known not only to the judges but to the judged.

This is embarrassing. The dominant rankings system in philosophy, the “Philosophical Gourmet Report,” doesn’t provide its evaluators with anything like the detailed or comprehensive guidance that judges in dog shows receive.

I’ve heard complaints that the PGR is equivalent to a dog and pony show. Wrong! The dog fanciers have us beat, paws down.  Or they’re eating our lunch, I mean, our Report.

So, the question remains: To Rank or Not to Rank in the year 2014? **

* How many philosophers does it take to change a quality light bulb? Not going to happen, my friend. The philosophers, any number of them, will first have to agree that the object in front of them is a quality light bulb.

** Coming Soon:  A  list of ten reasons that evaluators can use to excuse themselves from this year’s PGR rankings.  [UPDATE: Now Available here.]

An Open Letter to Prospective Evaluators for the 2014-2015 Philosophical Gourmet Report


Dear Colleagues,

According to a recent post on Brian Leiter’s blog, you will soon be receiving surveys to fill out for the PGR.

My co-editor Brit Brogaard (Miami) and her RA have done a great job finishing the evaluator and faculty list spread sheets, and the IT professionals here should have a testable version of the survey ready for us to try out during the weekend.  If all goes well, Brit will send out the invitations to evaluators early next week (Monday or Tuesday is our goal).  We agreed to a somewhat shorter window for responses (two weeks, rather than three weeks) due to the late start date this year and our goal of getting the results out in time for students applying in the current cycle.

UPDATE:  The IT folks are still working out certain bugs in the survey program, so we won’t be able to test it before Monday.  That means, at the soonest, Prof. Brogaard will be sending out invitations on Tuesday or perhaps Wednesday of next week (Oct. 21 or Oct. 22).

I am sure that you are aware of the controversy surrounding the PGR’s rankings, which gain an air of legitimacy because philosophers are responsible for them. There have been many persuasive pieces written about the biases inherent in surveys of this sort. I write as someone convinced that rankings do more harm than good; a comprehensive informational web site with a sophisticated search engine would be my personal preference. But I will not try to convince you of this here. I write to run some numbers by you and to ask that you consider them before filling out this year’s survey. I am not claiming my concerns are original, but I do want to highlight some of them as you consider whether to fill out the survey. Many philosophers do not fill out the survey when they receive it, and there are good reasons for you to take a pass on it this year. Here’s why.

According to Leiter, he is currently working from a list of 560 nominees to serve as evaluators for the 2014-2015 PGR. During the last go-around in 2011, 271 philosophers filled out the part of the survey dealing with overall rankings, and a total of 300 filled out the overall and specialty rankings. Leiter claims that in 2011 the on-line survey was sent to 500 philosophers. So a substantial number of philosophers (roughly 200 of the 500) decided NOT to fill it out even after receiving it.

Let’s consider some of the numbers. Three hundred may seem to be a reasonable number of evaluators, but the total obscures crucial details, and one doesn’t need any sophisticated form of statistical analysis to judge how problematic they are. If you look at the thirty-three specializations that are evaluated in the PGR, slightly more than 60% have twenty or fewer evaluators. That’s right, twenty or fewer. Please think about this for a moment: twenty or fewer philosophers, in one case as few as three, are responsible for ranking 60% of the specializations found in the PGR, and the specialty rankings are what many consider to be the most important feature of the report.

But it is actually worse than this. Certain areas have many fewer evaluators than others. For example, the PGR lists nine specializations under the History of Philosophy rubric. Six of the nine have twenty or fewer evaluators. And one of the specializations, American Pragmatism, has only seven. As a matter of fact, the only general category in which the majority of specializations have more than twenty evaluators is “Metaphysics and Epistemology”: five of its seven specialties have more than twenty. None of the others (Philosophy of Science and Mathematics, Value Theory, and the History of Philosophy) has a majority of specializations with more than twenty evaluators. And in the three specializations outside of these rubrics we find: eleven evaluators for feminism, three for Chinese, and four for philosophy of race. (Yes, the PGR actually provides rankings for Chinese Philosophy with three evaluators.)
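(For readers who want to check this sort of tally for themselves, here is a minimal sketch in Python. Only the four counts below are cited in this post; the rest of the table would have to be transcribed from the 2011 survey pages, so the dictionary is deliberately incomplete.)

```python
# Minimal sketch: tally how many PGR specializations have "thin" evaluator pools.
# Only the four counts cited in this post appear below; a full check would
# transcribe all thirty-three specializations from the 2011 survey pages.
evaluator_counts = {
    "American Pragmatism": 7,
    "Feminist Philosophy": 11,
    "Chinese Philosophy": 3,
    "Philosophy of Race": 4,
    # ... the other twenty-nine specializations would go here ...
}

THRESHOLD = 20  # "twenty or fewer evaluators"

thin = sorted((count, name) for name, count in evaluator_counts.items()
              if count <= THRESHOLD)
share = len(thin) / len(evaluator_counts)
print(f"{len(thin)} of {len(evaluator_counts)} specializations "
      f"({share:.0%}) have {THRESHOLD} or fewer evaluators:")
for count, name in thin:
    print(f"  {name}: {count}")
```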

But don’t take my word for this problem. Here’s what Leiter says on the 2011 survey site.

Because of the relatively small number of raters in each specialization, students are urged not to assign much weight at all to small differences (e.g., being in Group 2 versus Group 3).   More evaluators in the pool might well have resulted in changes of .5 in rounded mean in either direction; this is especially likely where the median score is either above or below the norm for the grouping.

I’m sorry. The urging of students “not to assign much weight at all to small differences” does not solve the problem. No weight should be assigned to specializations ranked by so few people. This is not rocket science. This is common sense. You can’t evaluate the quality of specializations that have so many facets with so few people, who themselves were selected by another small group of people, the Board, which clearly favors certain specializations given the distribution of evaluators. (This is especially true when there hasn’t even been a public discussion about what should constitute standards for rankings of specializations in philosophy.) Yet Leiter’s advice makes it appear that one should take the specialization rankings seriously, that is, if one just doesn’t assign too much weight to small differences.  This is a shady rhetorical move.

I honestly don’t know how one could fill out the survey in good faith knowing that so few people are participating in ranking so many specializations. When you fill out the survey you are making a statement. You are providing your expertise to support this enterprise. The fact that you might be an evaluator in M & E, which has more evaluators than the other areas, doesn’t relieve you of responsibility for your involvement. At minimum, you are tacitly endorsing the whole project.

Ah, you say, but perhaps this year’s crop of evaluators will be more balanced. However, the way that the PGR is structured undermines this hope. The evaluators are nominated by the Board, which has roughly fifty members. Most of the same people are on the Board this time around as last time. But here’s the kicker: Brian asks those leaving the Board to suggest a replacement. The obvious move for a Board member here is to nominate a replacement in his or her own area, probably from his or her own circle of experts. In Leiter’s words, “Board members nominate evaluators in their areas of expertise, vote on various policy issues (including which faculties to add to the surveys), serve as evaluators themselves and, when they step down, suggest replacements.” So there is no reason to believe that the makeup of the pool of evaluators will have markedly changed since the last go-around.

The 2014-2015 PGR survey will be in place for at least the next two years, maybe more, given the difficulties that the PGR faces. There are a lot of young people who will be influenced by it. Please consider taking a pass on filling out the survey. If enough of you do so, the PGR will have to change or go out of business.  Given the recent and continuing publicity surrounding the PGR, we should try to avoid embarrassment, which is likely to occur when those outside of philosophy, especially those who know about survey methods, discover our support for such a compromised rating system.

 

Three disclaimers:

1) I purposely sought to keep the statistics as simple and as straightforward as possible in this post in order to raise basic questions about imbalances and sample size in the current PGR. Based on these and other considerations I ask prospective evaluators to reconsider filling out the survey. Gregory Wheeler has a nice series on some of the more in-depth statistical work at “Choice & Inference.” See the series and its concluding piece, “Two Reasons for Abolishing the PGR.”

2) If there is public content regarding changes to the PGR that is available, and that I somehow missed, I would appreciate being informed about it. As far as I know, no fundamental change is taking place in this year’s PGR.

3) I counted the number of evaluators in the different categories. Of course I could have made an error in the count somewhere. But the numbers are certainly correct enough to back up my concerns.

The Mars Rover Update

For those interested in the progress of the Mars Rover Opportunity, I offer an update to a previous post on our exploration of Mars.  Well, not actually much of an update.  It’s the same old story.  Martians 21, Earthlings and Rover 0.


Photo: Bentan Z Rana

The Halo Culture: Taking the Rankings Challenge


In a recent post, “Rank and Yank,” I argued that we philosophers have not lived up to our reputation as critical thinkers with regard to rankings. We have not engaged in any sustained or comprehensive debate about their virtues and vices. Instead we have allowed rankings to take on a life of their own. In a second post, “Thinking Outside of the Box,” I suggested an alternative to rankings that would be more helpful for prospective graduate students, without the biases that are built into a ranking system. Here I would like to consider the consequences of rankings for those entering the job market.

Supporters of rankings often claim that they provided a real service by supplanting an old-boys hiring network. Whether they actually helped in this regard, I do not know. But they have undoubtedly created an alternative type of old-boys network, more insidious in certain ways than the old one, because this one claims to be based on meritocratic and objective principles, when in fact it diminishes opportunity for individuals and does not promote merit for the profession as a whole. There is a rather simple thought experiment that can prove my point. Or if it doesn’t prove my point, it should at least give pause to any fair-minded individual in his or her support for rankings.

There is little doubt that a halo effect exists with regard to academic institutions. Tell someone that you received your undergraduate or graduate degree from an Ivy League school and you will be looked at differently than someone who graduated from Public U. This is ancient history, both inside and outside of academia, but appears to be more pervasive of late as our culture becomes ever more status conscious, which is no doubt related to deepening economic class stratification.   Associate yourself with a certain brand, Apple or Harvard, and you’ve got a better shot at winning the prestige game. Although we might hope that it would be different in Philosophy, it’s difficult to deny that rankings play into this prestige and halo culture.

Here I offer a personal anecdote, one that I believe reflects very deep biases in the profession, nay prejudices. These unfairly mark individuals in certain ways, especially in the academic job market in Philosophy.

More than two decades ago I was at a conference with philosophers trained in various traditions. One very eminent analytic philosopher—someone known the world over—and I hit it off. We had lunch, then dinner, and then lunch together. We talked a lot about philosophy, as well as other matters. Well into our second day of conversation my new friend asked where I had gone to graduate school. I said, Boston College. And without skipping a beat, or reflecting on what he was about to say, my new friend looked at me, somewhat in shock, and said, “Oh, you’re much too smart to have gone there.”

I tell this story because it reflects widespread assumptions in our profession about people who attend schools with which others may be unfamiliar, or schools that are not part of a sanctified, rankings-based list. In the situation I described there was no harm done because, well, I was employed, and my new friend was not sitting on a hiring committee. But if people less candid than he, yet sharing some of the same institutional prejudices, were on such a committee, I would not have gotten past the first round in the application process. This would not have been right. I would not have been judged on my competence but on my lack of a halo (or my anti-halo).

And now my challenge.

Some of us may believe that rankings do not affect our ability to judge job candidates, that we can remain objective in spite of our biases about institutions. I simply do not believe this to be the case. Here is a simple thought experiment to support my point. I believe that anyone who honestly reflects on it, and takes it seriously, will agree that we have a problem, a serious one in terms of basic fairness to job candidates.

The next time you do a job search break your committee into two groups. Have one group evaluate the candidates without reference to the institution from which they graduated and have the other evaluate the candidates with all of the institutional information included. I can almost guarantee that the short lists will not be the same. And I believe that anyone who is honest with him or herself about how this process works will agree. If you think I’m wrong, try it. Or at least try it as a thought experiment.

Oh, but you will say, we can never get rid of institutional biases. X and Y universities will always have a special cachet, just because they are X and Y. No doubt! But this isn’t the question. The question is whether we as philosophers are not only aware of institutional biases, but whether we are comfortable actively promoting them, entrenching them, lending our good name to furthering them. We should be doing everything in our power to level the playing field when it comes to hiring in Philosophy. And this means diminishing as much as possible the halo effect, not supporting a system that creates halos and then—all too often—trumpets them to the world.

Thinking Outside the Box (or, A Real Alternative to Rankings)


If you are interested in finding the most suitable dog for you and your family, and you are okay with purebreds, you can find websites that help you decide. They allow you to select various preferences, for example, coat length and upkeep requirements, how much the dogs will bark, how they are with children, genetic predispositions to illness, size, etc. Not only can you select preferences, but you can weight each one on a scale in terms of how important it is relative to the others. You then hit enter. And voilà, a list of breeds appears, with detailed information about each one. In contrast, the most extensive rankings system in Philosophy is not even in the same ballpark in terms of assisting students with the selection of graduate programs. We provide better selection platforms for people looking for an animal companion than for our students researching philosophy programs.

In our zeal to address the pros and cons of the current ranking system in Philosophy, Brian Leiter’s “Philosophical Gourmet Report,” as well as to discuss the future of rankings, it seems that we have trapped ourselves in something of a box, an old box, one that is more 20th than 21st Century.   Or to be more explicit, the kind of ranking system that exists currently in Philosophy is old hat. It has limited value because there are too few or no data points about a host of factors that would be important to students. After conversations with my wife, Cathy Kemp, also a philosopher, and reading suggestions on the web, I would like to propose an alternative. This alternative is by no means original. As a matter of fact, I will quote extensively from a colleague, Noëlle McAfee, regarding this idea below. But what I would like to do here is frame the alternative and suggest why moving in this direction is, well, a no-brainer. First, some comments and assumptions.

There has been a good deal of discussion about why a ranking system is good for Philosophy. I have heard claims that it’s helpful to philosophy departments seeking funding. These claims are debatable. And even if true, they have to be weighed against possible harms. We can sidestep this debate for the present, because the general consensus among those who support rankings is that they are done for the benefit of prospective graduate students. This has been Leiter’s position.

Assumption #1. Whatever kind of information service we offer should have as its primary goal assisting prospective graduate students in choosing the best graduate program(s) for them.

One of the bones of contention in the profession has been about whether rankings accurately report quality or prestige (or both).

Assumption #2. Our goal should be to assist students in finding the best quality graduate education, not the one with the most prestige. It may be that the two go hand in hand, but they may not. Prestige should only be viewed as a possible efficient cause, not as a final one.

Perhaps the biggest bone of contention is the notion of quality. Often the debate about rankings comes down to whether we can assume a univocal definition of quality. A definition of this sort appears to be implied in the overall rankings of departments. (If this assumption were not made, then there would be little point in doing overall rankings.) However, in the actual practices of philosophers, while we may have standards that we like to think of as universal for the discipline, it is impossible to find a univocal definition that is robust and concrete enough to reach all sub-fields, styles, traditions, etc.

Assumption #3. There are multiple reasonable ways to demonstrate quality in Philosophy, and different philosophers and departments will do so in different ways.

Prospective graduate students presumably want the highest quality education they can get, given their interests. They also have lives outside of philosophy that will require them to live in different locations and pay for their education in different ways. In addition, they will have different kinds of desires about where they would like to work and at what kind of institution. (Here I think we must give up the notion–or the illusion–that the ultimate goal of all prospective graduate students is to teach at a Research I institution.)

Assumption #4. Any service that is provided to prospective graduate students should take into consideration a wide range of factors such as those mentioned above. For example, it should contain placement records, types of school, size of school, as well as geography.

With these assumptions in mind, I ask the obvious question: how best can we serve the next generation of philosophers? And I submit that once we start considering the actual needs of students, the limitations of even an extensive ranking system like the PGR become readily apparent. We are privileging the perceptions of a subset of philosophers about quality over offering students a full range of ways of judging quality, as well as suitability for themselves. One way to put this: we are still using mid-twentieth-century technology when we could be providing a twenty-first-century alternative, one that could be much more helpful to students than any ranking system. Further, if Philosophy put into such an alternative half of the energy it now puts into the PGR, we could create a system that would be the envy of other disciplines. (Certainly we could use the information contained in it in other ways, but I won’t address these here.) What would such a system look like? Noëlle McAfee made the following suggestion recently on her blog, “gonepublic: philosophy, politics, & public life”:

The APA has been collecting data from philosophy PhD programs for a few years now for its Guide to Programs on placement rates, etc. What if more information were collected, such as numbers of books published with university presses, faculty citation and Google Scholar analytics, peer-reviewed conference papers, faculty areas of specialization, etc? And then what if that information were turned into a search engine such that a prospective graduate student (or anyone) could go there and search by key words for programs that offered what she or he was wanting to study? Programs that were more research productive (with faculty being cited more) would show up higher on the list than those that weren’t. So the student could create a customized ranking of programs that would meet his or her interests. Anyone could use that data to generate rankings of any particular specialty.

Citations, publications, etc. are a better measure than perceived reputation. Not only are they more objective, they factor in the careful scrutiny that goes into the peer-review process—as opposed to top-of-the-head perceptions of faculty lists by those that may be unfamiliar with those faculty members’ work.

To this list could be added some of the factors that I mentioned earlier, for example, geographic distribution, etc. A student using a search engine of this sort could create a truly individualized set of schools to apply to.
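To make the proposal concrete, here is a minimal sketch, in Python, of the kind of student-weighted search McAfee describes. Everything in it is a placeholder of my own devising: the program records, field names, and weights are hypothetical, and a real service would be populated from sources like the APA’s Guide to Programs.

```python
# Minimal sketch of a student-customizable program search. All records,
# field names, and weights are hypothetical placeholders; a real service
# would draw its data from sources like the APA's Guide to Programs.
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    placement_rate: float         # fraction of graduates placed, 0.0-1.0
    citations_per_faculty: float  # e.g., Google Scholar counts
    specialties: set
    region: str

PROGRAMS = [
    Program("University A", 0.72, 310.0, {"american pragmatism", "ethics"}, "Northeast"),
    Program("University B", 0.55, 480.0, {"philosophy of mind", "ethics"}, "West"),
    Program("University C", 0.64, 150.0, {"american pragmatism"}, "Midwest"),
]

def score(p, wanted, weights):
    """Weighted sum of normalized factors; the student chooses the weights."""
    fit = len(wanted & p.specialties) / len(wanted) if wanted else 0.0
    return (weights["placement"] * p.placement_rate
            + weights["citations"] * min(p.citations_per_faculty / 500.0, 1.0)
            + weights["fit"] * fit)

# A student who cares most about fit with her interests, then placement:
weights = {"placement": 0.3, "citations": 0.2, "fit": 0.5}
wanted = {"american pragmatism"}
for p in sorted(PROGRAMS, key=lambda p: score(p, wanted, weights), reverse=True):
    print(f"{p.name}: {score(p, wanted, weights):.2f}")
```

The point of the sketch is only that the ordering falls out of the student’s own weights and interests, not out of a single editorial judgment of “quality.”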

But wait! The supporters of rankings will say we still need to provide students with “objective” information about quality, meaning the perceptions of those in the field. I say: we can debate from today until the apocalypse and we may never agree on what these “objective” perceptions amount to. In this case, the best “objectivity” lies in the data we provide. Let’s give future graduate students some credit and not assume, paternalistically, that without our perceptions of “quality” in the form of rankings they will be lost at sea. They are a sophisticated bunch, well accustomed to making all kinds of decisions with information presented on web-based platforms. Philosophy should tailor its information outreach to the prospective graduate students of this century, not the last.

(For more information on the current controversy in Philosophy about rankings, see “Archive of the Meltdown.”)