
NYC looking North from the EMPIRE STATE BUILDING. Photo (proportions altered) by DAVID ILIFF, license HERE.

UP@NIGHT

 


……………

Early to bed, and early to rise,

Makes a man healthy, wealthy and wise

- Benjamin Franklin.

I don’t see it.

- George Washington

Now both of these are high authorities – very high and respectable authorities – but I am with General Washington first, last, and all the time on this proposition.

Because I don’t see it, either. . . .

Put no trust in the benefits to accrue from early rising, as set forth by the infatuated Franklin – but stake the last cent of your substance on the judgment of old George Washington, the Father of his Country, who said “he couldn’t see it.”

And you hear me endorsing that sentiment.

Mark Twain, “Early Rising, As Regards Excursions to the Cliff House,” MARK TWAIN IN THE GOLDEN ERA 1863-1866.

……………

A Fun Way to Handle the PGR Mess

Okay, here is a fun way we can handle the PGR rankings mess.  Members of the Board and evaluators agree to meet opponents of the PGR at this pub, the Philosophers Club, in San Francisco.  Never been there, but people seem to like it.

And someone at the place appears to have a good sense of humor.  If you click this photo on Yelp’s site for the Club, we are told that this placemat (?) is also a mural on the ceiling.  A veritable Sistine Chapel for philosophers.

And there seems to be some real history here.


So let’s do it and see who makes the best arguments and jokes.  Winners take all: either no survey or an eternal return of the PGR survey.  The Club is open until 2:00 AM.  (I promise I will still be standing if it’s only 2:00 AM.  Early hours for UP@NIGHT.)

 

 

 

Leiter Posts Response to Criticism of Rankings—A Response to the Response


Dear Philosophy Colleagues,

Today Brian Leiter posted what he called “an amusing (and also insightful) e-mail ‘rant’ about the PGR and the recent campaign against rankings,” written by one Michael Bramley, “a longtime reader” of Leiter’s blog. This is clearly some sort of rebuttal directed at criticisms—I’ve made some here on UP@NIGHT—of the PGR, of rankings of graduate programs in philosophy, of Leiter himself. Leiter has been uncharacteristically restrained in the last couple of weeks and has not responded to critics, but since he chose to post Mr. Bramley’s messages under the banner “insightful,” it’s reasonable to suppose that Leiter shares Mr. Bramley’s views. Leiter calls this a “rant,” perhaps a sign of some reservations about Mr. Bramley’s remarks, perhaps a term of affection. But nowhere does Leiter say that he disagrees with Mr. Bramley. He goes out of his way at the end of the post to add a follow-up from Mr. Bramley, and the general line of argument is similar to that of defenders of the PGR.

The post is a hearty brew of straw man and ad hominem arguments. I will not go through the entire thing here, but I must address a few of the most illuminating passages, to get us all started. From the first paragraph of Mr. Bramley’s missive:

Please allow me to express my support over the recent rankings nonsense by venting my frustration at the campaign to remove you from the PGR and the campaign to stop all rankings in philosophy.  A move which, it is obvious, is for the benefit of those who do not score highly and not for the benefit of students.

So, all of those who are concerned about the PGR’s methodological problems—for example, the lack of defined criteria, which leaves judges in the position of introducing their own “philosophies of evaluation” (Leiter’s words) into the survey—and those of us who worry about biases in the rankings and the creation of a halo effect in hiring that is unfair to candidates, are criticizing the PGR out of crass self-interest. On the other hand, since this is a defense of Leiter’s rankings, I assume that he, in contrast, acts out of the goodness of his heart. This is not the first time we have heard accusations of this sort. Let’s call it the resentment dismissal, and hope that Nietzsche is not rolling over in his grave at this trivialization of ressentiment.

From the next paragraph:

Talk about the perfect being the enemy of the good.  Plato could not have done a better job of convincing everyone that everything is worthless and shit until and unless we can all apprehend the Form of the Good Ranking System.

Speaking for myself, as a pragmatist, I must say that I don’t generally advance views that can be tagged with making the perfect the enemy of the good. And I assume that most, if not all, of my non-pragmatist colleagues are not criticizing the PGR from the realm of The Good, even aspirationally. This is a straw man on fire, a sort of Burning Man without the arts. This is ludicrous. We have suggested doable alternatives to the kind of rankings that we find in the PGR, for example, a comprehensive information site with a sophisticated search engine, which would in effect allow each prospective graduate student to create his or her own “rankings” based on his or her own interests and needs. Real data would be used in such a site. In addition, as I have noted in other posts, although I would prefer no rankings, if we are to have them, they should not suffer from the kind of obvious methodological problems found in the PGR.  The PGR is so far from The Good that those who criticize it have plenty of room to maneuver between it and The Good.

Next, from the third paragraph:

The PGR is largely an informed-opinion poll: what do the philosophy professionals think of certain philosophy departments?  This is interesting and good to know (emphasis added).

I am tempted to lead off here with: EXTRA, EXTRA, READ ALL ABOUT IT: THE PGR IS AN OPINION POLL!! But let’s look at what in fact it is polling: “what do the philosophy professionals think of certain philosophy departments?” This would be an acceptable statement with the following modification: “what do CERTAIN, SELECTED philosophy professionals think of certain philosophy departments?”  I mean, isn’t this precisely the heart of the problem? The PGR presents itself as speaking for the profession as a whole, when at best it speaks for a slice of the profession. We don’t even know whether a majority of those in the profession would agree with the idea that it speaks for them. But we surely know that it doesn’t speak for many people. Just listen to all of the critical voices. (To suggest it speaks for the profession is real chutzpah. And I believe that this reflects Leiter’s view. If not, he is more than welcome to say so here or in any other venue.)

Continuing in the third paragraph:

If those for whom the PGR is intended are unable to understand what an opinion poll is, then they should demand a refund from their undergraduate education for having failed to teach them basic critical thinking.

Again, we hear that the PGR is an opinion poll, and that we who oppose rankings, in failing to understand that it’s an opinion poll, haven’t learned basic critical thinking skills. Of course the issue is not whether it is an opinion poll–although I am happy to hear a PGR fan characterize it so honestly–it’s that we think it is a poorly constructed poll/survey, one which, much more importantly, pretends to be more than a mere opinion poll. Opinion polls capture snapshots of views held by members of the group being polled. They are time sensitive. So a disclaimer at the top of the PGR’s rankings might be nice: this is an opinion poll and, like any poll, its results are confined to the polling period.  But we won’t get this, in part because, let’s be honest here, the PGR doesn’t present itself as an ordinary poll. Why? Because it claims to reflect durable judgments about the “quality” of graduate programs, judgments that are not as variable as mere opinion.

Now the last line from the same paragraph:

And if the professors who oppose it do so because they think opinion polls/reputational surveys do not capture adequately the real picture, then they are free to construct ways to capture this ‘real picture’ that they are so worried about missing with the PGR.

I am not opposed to the PGR because it doesn’t capture the “real picture.” This is not a debate about who gets to present reality. No, the problem with the PGR is that its methods can produce inadequate and distorted information.  As it stands, it misleads readers about its mission.   We can do better—just better!—without seeking the ideal of The Good.

And for now, just a word about the follow-up from Mr. Bramley that Leiter includes at the end of his post.

Before I take any more of your time I must say just this: the PGR is a collection of a large number of informed – expert – opinions of department reputation.  Students will ask their professors for advice about where to go for graduate school.  Unless these professors will refuse to even answer their students’ questions, then professional opinions on departments re. graduate training are legitimate.  So it seems this whole thing amounts to exactly this: ‘by all means have your opinions and even offer them to students – but for the love of humanity, do not put them in one place and record them on paper!’

Unbelievable.

Yes, unbelievable, indeed! Just look at the “logic” here. It seems that Mr. Bramley—and presumably Leiter himself, since he went out of his way to post this as a follow-up—don’t understand how radically different getting information from individual professors is from the claims of a survey like the PGR. To be more specific, Bramley (& Leiter) see the PGR as just a collection of the opinions of professors in one place. Not only is this an impossible claim—when was the last time you gave advice to a prospective grad student by offering ordinal rankings of his or her choices, as opposed to discussing the pros and cons of programs, etc.?—but it once again misses the most basic point: the PGR does not claim to be the mere opinions of a bunch of individual philosophers. Its overall and specialty rankings pretend to tell us something about the state of graduate education as a whole. The idea that the PGR seriously marginalizes whole swaths of the profession still hasn’t gotten through to Brian Leiter (or perhaps it is his idée fixe and has been driving the enterprise from the start).

If Leiter doesn’t share Bramley’s views, perhaps he can say more about what it is he finds “insightful” about the messages to which he’s given so much space today.

Ten Excuses for Not Filling Out the PGR Survey


Given the ever-increasing number of problems related to the PGR that are presently being highlighted–for example, the criteria for evaluating departments are inadequately defined, leaving individual evaluators in the position of falling back on their own “different philosophies of evaluation” in order to rank–I thought I would provide the PGR’s invited evaluators with a list of excuses for bowing out.  Visitors to UP@NIGHT are welcome to add to the list in the Comments.  By the way, the phrase “different philosophies of evaluation” is found on the PGR’s site under Methods and Criteria.  See “The Dog Ate My (Philosophical Gourmet) Report” for discussion of this phrase and other gems.

 

Ten Excuses for Not Filling Out the Philosophical Gourmet Report Survey

 

10.  I was attending a Star Trek Convention.

9.  I couldn’t find the “Like” button.

8.  I became confused when no one asked us to use a #2 lead pencil.

7.  I was counting cars on the NJ Turnpike.

6.  I couldn’t find photos of the dishes to rate on the Gourmet’s menu.

5.  Hegel’s Logic started to make sense and I lost track of time.

4.  Bruce Springsteen asked me not to fill out the survey.

3.  I was waiting for iOS 8.0 to download on my iPad.

2.  I thought PGR stood for Państwowe Gospodarstwo Rolne, collective farms that existed in communist Poland.

1.  I was serving as a judge at an American Kennel Club dog show.

 

Other excuses welcome. Please feel free to leave them in the Comments Section.

The Dog Ate My (Philosophical Gourmet) Report


TO RANK OR NOT TO RANK….in the year 2014?

Consider some necessary but not sufficient conditions for the evaluation of individuals or groups in order to rank them: (1) there should be explicit standards—criteria—on which to base the evaluations, and (2) there should be a public code of conduct for the evaluators—judges—to follow, as well as a detailed description of their function as judges. Currently neither of these exists for the PGR, but both exist for organizations well-known in the U.S.

Here is the way that the PGR addresses #1.

Please give your opinion of the attractiveness of the faculty for a prospective student, taking in to account (and weighted as you deem appropriate) the quality of philosophical work and talent on the faculty, the range of areas the faculty covers, and the availability of the faculty over the next few years (emphasis added).

In addition, the PGR provides the following guidance to the evaluators regarding criteria and their respective weights.

“Faculty quality” should be taken to encompass the quality of philosophical work and talent represented by the faculty and the range of areas they cover, with the two weighted as you think appropriate. Since the rankings are used by prospective students, about to embark on a multi-year course of study, you may also take in to account, as you see fit, considerations like the status (full-time, part-time) of the faculty; the age of the faculty (as a somewhat tenuous guide to prospective availability, not quality); and the quality of training the faculty provide, to the extent you have information about this (emphasis added).

Hmmm.  Not much here by way of any definitive and well-defined criteria: “attractiveness,” “weighted as you deem appropriate,” “quality,” “talent,” “weighted as you think appropriate,” “may take into account as you see fit,” and “to the extent you have information about this.” *  Judges can decide which criteria to use and in what combination. Note that the PGR’s criteria for evaluation are run together with the standards for the judicial function (#2).

Compare this with the way that the Westminster Kennel Club discusses the breed standards by which individual dogs should be evaluated. For breeds you might think of specializations; for individual dogs and their attributes, perhaps departments and their faculties. The important thing is the way in which an enterprise develops and provides criteria for judgment of its objects, not the objects themselves, dogs or departments.

Each breed’s parent club creates a STANDARD, a written description of the ideal specimen of that breed. Generally relating form to function, i.e., the original function that the dog was bred to perform, most standards describe general appearance, movement, temperament, and specific physical traits such as height and weight, coat, colors, eye color and shape, ear shape and placement, feet, tail, and more. Some standards can be very specific, some can be rather general and leave much room for individual interpretation by judges. This results in the sport’s subjective basis: one judge, applying his or her interpretation of the standard, giving his or her opinion of the best dog on that particular day. Standards are written, maintained and owned by the parent clubs of each breed (CAPS in original).

Notice both the attention to detail and the willingness to acknowledge that there will still be room for interpretation by judges, or evaluators. We have here both reasonably objective criteria and a bow to the interpretive dimension in one paragraph. Impressive! In addition, and perhaps most importantly, the Club refers to the relations of form and function (“the original function that the dog was bred to perform”), that is, there is some agreed upon standard/function for each breed. And WKC expects each breed’s parent club to provide a written standard for its breed.

Now consider how the PGR handles #2, that is, how judges should understand their roles and rules of conduct.

Different respondents had different “centers of gravity” in their scoring: some gave no 5s, others gave no score lower than a 2. It was also clear that respondents had different philosophies of evaluation: some clearly tried to consider the breadth of strength in a department, while others ranked a program highly or lowly based simply on its strength in his or her fields. The range of evaluations for single departments should be a cautionary note to all undergraduates about relying too much on the advice of just one or two faculty advisors. Idiosyncrasy abounds, even at top departments! (emphasis added).

Yes, indeed, idiosyncrasy abounds. Now, for contrast, let’s look to the hounds: the American Kennel Club has a thirty-page document detailing the expectations for proper judging and the requirements for judges. Here is the way the AKC introduces the document:

Judging at AKC® shows should be enjoyable for the judge and beneficial to the sport of purebred dogs. In this publication, you will find Rules, Policies and suggested Guidelines. The Policies and Rules will be clearly designated as such. The suggestions have been developed over the years based on the experience of many seasoned judges and the AKC staff. You will find them most helpful in learning the judging process. Policies are adopted by the Board of Directors, and Rules are approved by the Delegate body. Compliance with these is mandatory.

“Policies are adopted by the Board of Directors, and Rules are approved by the Delegate body. Compliance with these is mandatory.” Seems eminently sensible.  The PGR, on the other hand, says virtually nothing about the role of judges, except the catch-as-catch-can points mentioned under #1 above and the acknowledgement, “It was also clear that respondents had different philosophies of evaluation.” (Different philosophies of evaluation to rank different philosophy departments. No comment.)  And the AKC provides a staff to assist those learning the judging process.

I know, you can say that we philosophers don’t need any such guidelines because, well, we all know how to serve as judges. The problem, however, is that without agreed-upon expectations for our judges we can’t be sure that they are actually judging in the same way, with the same care, and with the same set of assumptions. As far as I know, neither the PGR’s editors nor its Advisory Board provides detailed policies, or any policy at all, in this regard. The criteria and the guidelines for judgment should be explicit and known not only to the judges but to the judged.

This is embarrassing. The dominant rankings system in philosophy, the “Philosophical Gourmet Report,” doesn’t provide its evaluators with anything like the detailed or comprehensive guidance that judges in dog shows receive.

I’ve heard complaints that the PGR is equivalent to a dog and pony show. Wrong! The dog fanciers have us beat, paws down.  Or they’re eating our lunch, I mean, our Report.

So, the question remains: To Rank or Not to Rank in the year 2014? **

* How many philosophers does it take to change a quality light bulb?  Not going to happen, my friend.  The philosophers, any number, will first have to agree that the object in front of them is a quality light bulb.

** Coming Soon:  A list of ten reasons that evaluators can use to excuse themselves from this year’s PGR rankings.  [UPDATE: Now available here.]

An Open Letter to Prospective Evaluators for the 2014-2015 Philosophical Gourmet Report


Dear Colleagues,

According to a recent post on Brian Leiter’s blog, you will soon be receiving surveys to fill out for the PGR.

My co-editor Brit Brogaard (Miami) and her RA have done a great job finishing the evaluator and faculty list spread sheets, and the IT professionals here should have a testable version of the survey ready for us to try out during the weekend.  If all goes well, Brit will send out the invitations to evaluators early next week (Monday or Tuesday is our goal).  We agreed to a somewhat shorter window for responses (two weeks, rather than three weeks) due to the late start date this year and our goal of getting the results out in time for students applying in the current cycle.

UPDATE:  The IT folks are still working out certain bugs in the survey program, so we won’t be able to test it before Monday.  That means, at the soonest, Prof. Brogaard will be sending out invitations on Tuesday or perhaps Wednesday of next week (Oct. 21 or Oct. 22).

I am sure that you must be aware of the controversy surrounding the PGR’s rankings, which gain an appearance of legitimacy because philosophers are responsible for them. There have been many persuasive pieces written about the biases inherent in surveys of this sort. I write as someone convinced that rankings do more harm than good; a comprehensive informational web site with a sophisticated search engine would be my personal preference. But I will not try to convince you of this here. Instead, I write to run some numbers by you and to ask that you consider them before filling out this year’s survey. I am not claiming my concerns are original, but I do want to highlight some of them as you consider whether to fill out the survey. Many philosophers do not fill out the survey when they receive it, and there are good reasons for you to take a pass on it this year.  Here’s why.

According to Leiter, he is currently working from a list of 560 nominees to serve as evaluators for the 2014-2015 PGR. During the last go-around in 2011, 271 philosophers filled out the part of the survey dealing with overall rankings, and a total of 300 filled out the overall and specialty rankings.  Leiter claims that in 2011 the on-line survey was sent to 500 philosophers, which means that roughly 200 of them, about 40 percent, decided NOT to fill it out even after receiving it.

Let’s consider some of the numbers. Three hundred may seem to be a reasonable number of evaluators, but the total obscures crucial details, and one doesn’t need any sophisticated form of statistical analysis to judge how problematic they are. If you look at the thirty-three specializations that are evaluated in the PGR, slightly more than 60% have twenty or fewer evaluators. That’s right, twenty or fewer. Please think about this for a moment: twenty or fewer philosophers, in one case as few as three, are responsible for ranking 60% of the specializations found in the PGR, and these specialty rankings are what many consider to be the most important feature of the PGR.

But it is actually worse than this.   There are certain areas that have many fewer evaluators than other areas. For example, the PGR lists nine specializations under the History of Philosophy rubric. Six of the nine have twenty or fewer evaluators. And one of the specializations, American Pragmatism, has only seven. As a matter of fact, the only general category to have the majority of specializations with more than twenty evaluators is “Metaphysics and Epistemology.” Five of its seven specialties have more than twenty.   But none of the others–Philosophy of Science and Mathematics, Value Theory, and the History of Philosophy—have a majority of specializations with more than twenty evaluators. And in the three specializations outside of these rubrics we find: eleven evaluators for feminism, three for Chinese, and four for philosophy of race. (Yes, the PGR actually provides rankings for Chinese Philosophy with three evaluators.)
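To make the tally concrete, here is a minimal sketch in Python of how one might check which specializations fall at or below the twenty-evaluator line. Only the counts cited above are filled in; the remaining specializations and their evaluator numbers would have to be copied from the 2011 PGR pages, so the figures here are illustrative, not a reproduction of the full data.

```python
# Minimal tally sketch (illustrative only). Only the evaluator counts cited in this
# post are filled in; the other specializations would have to be added from the
# 2011 PGR pages before the overall percentage means anything.
evaluator_counts = {
    "American Pragmatism": 7,
    "Feminist Philosophy": 11,
    "Chinese Philosophy": 3,
    "Philosophy of Race": 4,
    # ... the remaining specializations and their evaluator counts go here ...
}

THRESHOLD = 20
small_panels = {name: n for name, n in evaluator_counts.items() if n <= THRESHOLD}

print(f"{len(small_panels)} of {len(evaluator_counts)} listed specializations "
      f"have {THRESHOLD} or fewer evaluators:")
for name, n in sorted(small_panels.items(), key=lambda item: item[1]):
    print(f"  {name}: {n}")
```

With the full table filled in, the printed fraction is the “slightly more than 60%” figure discussed above.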

But don’t take my word for this problem. Here’s what Leiter says on the 2011 survey site.

Because of the relatively small number of raters in each specialization, students are urged not to assign much weight at all to small differences (e.g., being in Group 2 versus Group 3).   More evaluators in the pool might well have resulted in changes of .5 in rounded mean in either direction; this is especially likely where the median score is either above or below the norm for the grouping.

I’m sorry. The urging of students “not to assign much weight at all to small differences” does not solve the problem. No weight should be assigned to specializations ranked by so few people. This is not rocket science. This is common sense. You can’t evaluate the quality of specializations that have so many facets with so few people, who themselves were selected by another small group of people, the Board, which clearly favors certain specializations given the distribution of evaluators. (This is especially true when there hasn’t even been a public discussion about what should constitute standards for rankings of specializations in philosophy.) Yet Leiter’s advice makes it appear that one should take the specialization rankings seriously, that is, if one just doesn’t assign too much weight to small differences.  This is a shady rhetorical move.
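Leiter’s own caveat, that more evaluators “might well have resulted in changes of .5 in rounded mean,” is easy to illustrate. The following is a rough simulation, not PGR data: it assumes raters score on the 0-5 scale around a fixed underlying quality with modest noise (the quality level and noise are chosen only for illustration) and shows how widely the rounded mean can scatter when a panel has only a handful of evaluators.

```python
# Rough simulation (not PGR data) of how a rounded mean behaves with small panels.
# Assumptions: raters score on a 0-5 scale, each rater's score varies around a fixed
# underlying quality, and the mean is rounded to the nearest 0.5.
import random
import statistics

def rounded_mean(scores):
    """Round the mean of the scores to the nearest 0.5."""
    return round(statistics.mean(scores) * 2) / 2

def spread_of_rounded_means(panel_size, true_quality=3.5, noise=0.8,
                            trials=10_000, seed=0):
    """All rounded means observed across many hypothetical panels of a given size."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(trials):
        scores = [min(5.0, max(0.0, rng.gauss(true_quality, noise)))
                  for _ in range(panel_size)]
        seen.add(rounded_mean(scores))
    return sorted(seen)

for n in (3, 7, 20, 50):
    print(f"{n:2d} evaluators -> possible rounded means: {spread_of_rounded_means(n)}")
# With 3 or 7 evaluators the rounded mean wanders over a wide band; with larger
# panels it settles near the underlying quality. That instability is exactly what
# the caveat concedes.
```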

I honestly don’t know how one could fill out the survey in good faith knowing that so few people are participating in ranking so many specializations. When you fill out the survey you are making a statement. You are providing your expertise to support this enterprise. The fact that you might be an evaluator in M & E, which has more evaluators than the other areas, doesn’t lift the responsibility of involvement. At minimum, you are tacitly endorsing the whole project.

Ah, you say, but perhaps this year’s crop of evaluators will be more balanced. The way the PGR is structured, however, undermines this hope. The evaluators are nominated by the Board, which has roughly fifty members. Most of the same people are on the Board this time around as last time. But here’s the kicker: Brian asks those leaving the Board to suggest a replacement.  The obvious move for a Board member here is to nominate a replacement in his or her own area, probably from his or her own circle of experts. In Leiter’s words, “Board members nominate evaluators in their areas of expertise, vote on various policy issues (including which faculties to add to the surveys), serve as evaluators themselves and, when they step down, suggest replacements.” So there is no reason to believe that the makeup of the pool of evaluators will have markedly changed since the last go-around.

The 2014-2015 PGR survey will be in place for at least the next two years, maybe more, given the difficulties that the PGR faces. There are a lot of young people who will be influenced by it. Please consider taking a pass on filling out the survey. If enough of you do so, the PGR will have to change or go out of business.  Given the recent and continuing publicity surrounding the PGR, we should try to avoid embarrassment, which is likely to occur when those outside of philosophy, especially those who know about survey methods, discover our support for such a compromised rating system.

 

Three disclaimers:

1) I purposely kept the statistics in this post as simple and as straightforward as possible in order to raise basic questions about imbalances and sample size in the current PGR.  Based on these and other considerations, I ask prospective evaluators to reconsider filling out the survey.  Gregory Wheeler has a nice series on some of the more in-depth statistical work at “Choice & Inference”; see the series and its concluding piece, “Two Reasons for Abolishing the PGR.”

2) If there is public content regarding changes to the PGR that is available, and that I somehow missed, I would appreciate being informed about it. As far as I know, no fundamental change is taking place in this year’s PGR.

3) I counted the number of evaluators in the different categories. Of course I could have made an error in the count somewhere. But the numbers are certainly correct enough to back up my concerns.

The Mars Rover Update

For those interested in the progress of the Mars Rover Opportunity, I offer an update to a previous post on our exploration of Mars.  Well, not actually much of an update.  It’s the same old story.  Martians 21, Earthlings and Rover 0.


Photo: Bentan Z Rana