UP@NIGHT

……………

Early to bed, and early to rise,

Makes a man healthy, wealthy and wise

- Benjamin Franklin.

I don’t see it.

- George Washington

Now both of these are high authorities – very high and respectable authorities – but I am with General Washington first, last, and all the time on this proposition.

Because I don’t see it, either. . . .

Put no trust in the benefits to accrue from early rising, as set forth by the infatuated Franklin – but stake the last cent of your substance on the judgment of old George Washington, the Father of his Country, who said “he couldn’t see it.”

And you hear me endorsing that sentiment.

Mark Twain, “Early Rising, As Regards Excursions to the Cliff House,” MARK TWAIN IN THE GOLDEN ERA 1863-1866.

……………

An Open Letter to Prospective Evaluators for the 2014-2015 Philosophical Gourmet Report


Dear Colleagues,

According to a recent post on Brian Leiter’s blog, you will soon be receiving surveys to fill out for the PGR. He writes:

My co-editor Brit Brogaard (Miami) and her RA have done a great job finishing the evaluator and faculty list spread sheets, and the IT professionals here should have a testable version of the survey ready for us to try out during the weekend.  If all goes well, Brit will send out the invitations to evaluators early next week (Monday or Tuesday is our goal).  We agreed to a somewhat shorter window for responses (two weeks, rather than three weeks) due to the late start date this year and our goal of getting the results out in time for students applying in the current cycle.

UPDATE:  The IT folks are still working out certain bugs in the survey program, so we won’t be able to test it before Monday.  That means, at the soonest, Prof. Brogaard will be sending out invitations on Tuesday or perhaps Wednesday of next week (Oct. 21 or Oct. 22).

I am sure that you are aware of the controversy surrounding the PGR’s rankings, which appear legitimate because philosophers are responsible for them. There have been many persuasive pieces written about the biases inherent in surveys of this sort. I write as someone convinced that rankings do more harm than good; my own preference would be a comprehensive informational web site with a sophisticated search engine. But I will not try to convince you of that here. Instead, I want to run some numbers by you and ask that you consider them before filling out this year’s survey. My concerns are not original, but they are worth highlighting as you decide whether to participate. Many philosophers do not fill out the survey when they receive it, and there are good reasons for you to take a pass on it this year. Here’s why.

According to Leiter, he is currently working from a list of 560 nominees to serve as evaluators for the 2014-2015 PGR. During the last go-around in 2011, 271 philosophers filled out the part of the survey dealing with overall rankings, and a total of 300 filled out the overall and/or specialty rankings. Leiter claims that in 2011 the on-line survey was sent to 500 philosophers. In other words, many philosophers decided NOT to fill it out even after receiving it.

Let’s consider some of the numbers. Three hundred may seem a reasonable number of evaluators, but the total obscures crucial details, and one doesn’t need any sophisticated statistical analysis to see how problematic they are. Of the thirty-three specializations evaluated in the PGR, slightly more than 60% have twenty or fewer evaluators. That’s right, twenty or fewer. Please think about this for a moment: twenty or fewer philosophers, in one case as few as three, are responsible for ranking more than 60% of the specializations in the PGR, which many consider its most important feature.

But it is actually worse than this. Certain areas have far fewer evaluators than others. For example, the PGR lists nine specializations under the History of Philosophy rubric. Six of the nine have twenty or fewer evaluators, and one of them, American Pragmatism, has only seven. In fact, the only general category in which a majority of specializations have more than twenty evaluators is “Metaphysics and Epistemology”: five of its seven specialties have more than twenty. None of the others (Philosophy of Science and Mathematics, Value Theory, and the History of Philosophy) has a majority of specializations with more than twenty evaluators. And in the three specializations outside of these rubrics we find eleven evaluators for feminist philosophy, three for Chinese philosophy, and four for philosophy of race. (Yes, the PGR actually provides rankings for Chinese Philosophy with three evaluators.)

But don’t take my word for this problem. Here’s what Leiter says on the 2011 survey site.

Because of the relatively small number of raters in each specialization, students are urged not to assign much weight at all to small differences (e.g., being in Group 2 versus Group 3).   More evaluators in the pool might well have resulted in changes of .5 in rounded mean in either direction; this is especially likely where the median score is either above or below the norm for the grouping.

I’m sorry. The urging of students “not to assign much weight at all to small differences” does not solve the problem. No weight should be assigned to specializations ranked by so few people. This is not rocket science. This is common sense. You can’t evaluate the quality of specializations that have so many facets with so few people, who themselves were selected by another small group of people, the Board, which clearly favors certain specializations given the distribution of evaluators. (This is especially true when there hasn’t even been a public discussion about what should constitute standards for rankings of specializations in philosophy.) Yet Leiter’s advice makes it appear that one should take the specialization rankings seriously, that is, if one just doesn’t assign too much weight to small differences.  This is a shady rhetorical move.
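
To make the instability concrete, here is a minimal simulation. It is not based on PGR data; the pool of scores is an invented placeholder, and the point is only to show how much a specialty’s mean can swing depending on which handful of evaluators happens to end up on a small panel.

    import random
    import statistics

    # Illustrative only (not PGR data): how much one program's mean score can
    # wander simply because of who happens to be on a small panel.
    # The score pool below is an invented placeholder distribution.
    random.seed(0)
    score_pool = [2.5, 3.0, 3.0, 3.5, 3.5, 3.5, 4.0, 4.0, 4.5, 5.0] * 20

    def panel_means(panel_size, trials=10_000):
        """Mean score one program would receive from randomly composed panels."""
        return [statistics.mean(random.sample(score_pool, panel_size))
                for _ in range(trials)]

    for n in (3, 7, 20, 50):
        means = panel_means(n)
        print(f"panel of {n:2d} evaluators: mean ranges "
              f"{min(means):.2f}-{max(means):.2f}, std dev {statistics.stdev(means):.2f}")

In this toy setup, panels of three or seven evaluators can shift a program’s mean by half a point or more through the luck of the draw alone, which is exactly the kind of noise Leiter’s own caveat gestures at.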

I honestly don’t know how one could fill out the survey in good faith knowing that so few people are participating in ranking so many specializations. When you fill out the survey you are making a statement: you are lending your expertise to support this enterprise. The fact that you might be an evaluator in M & E, which has more evaluators than the other areas, does not relieve you of responsibility for your involvement. At a minimum, you are tacitly endorsing the whole project.

Ah, you say, but perhaps this year’s crop of evaluators will be more balanced. However, the way the PGR is structured undermines this hope. The evaluators are nominated by the Board, which has roughly fifty members, and most of the same people are on the Board this time around as last time. But here’s the kicker: Brian asks those leaving the Board to suggest a replacement. The obvious move for a departing Board member is to nominate a replacement in his or her own area, probably from his or her own circle of experts. In Leiter’s words, “Board members nominate evaluators in their areas of expertise, vote on various policy issues (including which faculties to add to the surveys), serve as evaluators themselves and, when they step down, suggest replacements.” So there is no reason to believe that the make-up of the pool of evaluators has markedly changed since the last go-around.

The 2014-2015 PGR survey will be in place for at least the next two years, maybe more, given the difficulties that the PGR faces. There are a lot of young people who will be influenced by it. Please consider taking a pass on filling out the survey. If enough of you do so, the PGR will have to change or go out of business.  Given the recent and continuing publicity surrounding the PGR, we should try to avoid embarrassment, which is likely to occur when those outside of philosophy, especially those who know about survey methods, discover our support for such a compromised rating system.

 

Three disclaimers:

1) I purposely kept the statistics in this post as simple and straightforward as possible in order to raise basic questions about imbalances and sample sizes in the current PGR. Based on these and other considerations I ask prospective evaluators to reconsider filling out the survey. Gregory Wheeler has a nice series on some of the more in-depth statistical work at “Choice & Inference”; see the series and its concluding piece, “Two Reasons for Abolishing the PGR.”

2) If there is publicly available information about changes to the PGR that I have somehow missed, I would appreciate being informed about it. As far as I know, no fundamental change is taking place in this year’s PGR.

3) I counted the number of evaluators in the different categories. Of course I could have made an error in the count somewhere. But the numbers are certainly correct enough to back up my concerns.

The Mars Rover Update

For those interested in the progress of the Mars Rover Opportunity, I offer an update to a previous post on our exploration of Mars.  Well, not actually much of an update.  It’s the same old story.  Martians 21, Earthlings and Rover 0.


Photo Bentan Z Rana

The Halo Culture: Taking the Rankings Challenge


In a recent post, “Rank and Yank,” I argued that we philosophers have not lived up to our reputation as critical thinkers with regard to rankings. We have not engaged in any sustained or comprehensive debate about their virtues and vices. Instead we have allowed rankings to take on a life of their own. In a second post, “Thinking Outside the Box,” I suggested an alternative to rankings that would be more helpful to prospective graduate students, without the biases that are built into a ranking system. Here I would like to consider the consequences of rankings for those entering the job market.

Supporters of rankings often claim that they provided a real service by supplanting an old boys’ hiring network. Whether they actually helped in this regard, I do not know. But they have undoubtedly created an alternative type of old boys’ network, more insidious in certain ways than the last, because this one claims to be based on meritocratic and objective principles when in fact it diminishes opportunity for individuals and does not promote merit for the profession as a whole. There is a rather simple thought experiment that can prove my point. Or if it doesn’t prove my point, it should at least give any fair-minded individual pause in his or her support for rankings.

There is little doubt that a halo effect exists with regard to academic institutions. Tell someone that you received your undergraduate or graduate degree from an Ivy League school and you will be looked at differently than someone who graduated from Public U. This is ancient history, both inside and outside of academia, but appears to be more pervasive of late as our culture becomes ever more status conscious, which is no doubt related to deepening economic class stratification.   Associate yourself with a certain brand, Apple or Harvard, and you’ve got a better shot at winning the prestige game. Although we might hope that it would be different in Philosophy, it’s difficult to deny that rankings play into this prestige and halo culture.

Here I offer a personal anecdote, one that I believe reflects very deep biases in the profession, nay prejudices. These unfairly mark individuals in certain ways, especially in the academic job market in Philosophy.

More than two decades ago I was at a conference with philosophers trained in various traditions. One very eminent analytic philosopher—someone known the world over—and I hit it off. We had lunch, then dinner, and then lunch together. We talked a lot about philosophy, as well as other matters. Well into our second day of conversation my new friend asked where I had gone to graduate school. I said, Boston College. And without skipping a beat, or reflecting on what he was about to say, my new friend looked at me, somewhat in shock, and said, “Oh, you’re much too smart to have gone there.”

I tell this story because it reflects widespread assumptions in our profession about people who attend schools with which others may be unfamiliar, or schools that are not part of a sanctified list based on rankings. In the situation I described there was no harm done because, well, I was employed, and my new friend was not sitting on a hiring committee. But if those less candid than he, but sharing some of the same institutional prejudices, were on such a committee, I would not have gotten past the first round of the application process. This would not have been right. I would have been judged not on my competence, but on my lack of a halo (or my anti-halo).

And now my challenge.

Some of us may believe that rankings do not affect our ability to judge job candidates, that we can remain objective in spite of biases about institutions. I simply do not believe this to be the case. Here is a simple thought experiment to support my point. I believe that anyone who honestly reflects on it, and takes it seriously, will agree that we have a problem, and a serious one in terms of basic fairness to job candidates.

The next time you do a job search, break your committee into two groups. Have one group evaluate the candidates without reference to the institutions from which they graduated, and have the other evaluate the candidates with all of the institutional information included. I can almost guarantee that the short lists will not be the same. And I believe that anyone who is honest with him or herself about how this process works will agree. If you think I’m wrong, try it. Or at least try it as a thought experiment.

Oh, but you will say, we can never get rid of institutional biases. X and Y universities will always have a special cachet, just because they are X and Y. No doubt! But this isn’t the question. The question is not whether we as philosophers are aware of institutional biases, but whether we are comfortable actively promoting them, entrenching them, lending our good name to furthering them. We should be doing everything in our power to level the playing field when it comes to hiring in Philosophy. And this means diminishing the halo effect as much as possible, not supporting a system that creates halos and then, all too often, trumpets them to the world.

Thinking Outside the Box (or, A Real Alternative to Rankings)

Chippy at 5

If you are interested in finding the most suitable dog for you and your family, and you are okay with purebreds, you can find websites that help you decide. They allow you to select various preferences, for example, coat length and upkeep requirements, how much the dogs will bark, how they are with children, genetic predispositions to illness, size, and so on. Not only can you select preferences, but you can weight each one on a scale according to how important it is relative to the others. You then hit enter. And voilà, a list of breeds appears, with detailed information about each one. In contrast, the most extensive rankings system in Philosophy is not even in the same ballpark when it comes to assisting students with the selection of graduate programs. We provide better selection platforms for people looking for an animal companion than for our students researching philosophy programs.

In our zeal to address the pros and cons of the current ranking system in Philosophy, Brian Leiter’s “Philosophical Gourmet Report,” as well as to discuss the future of rankings, it seems that we have trapped ourselves in something of a box, an old box, one that is more 20th than 21st Century.   Or to be more explicit, the kind of ranking system that exists currently in Philosophy is old hat. It has limited value because there are too few or no data points about a host of factors that would be important to students. After conversations with my wife, Cathy Kemp, also a philosopher, and reading suggestions on the web, I would like to propose an alternative. This alternative is by no means original. As a matter of fact, I will quote extensively from a colleague, Noëlle McAfee, regarding this idea below. But what I would like to do here is frame the alternative and suggest why moving in this direction is, well, a no-brainer. First, some comments and assumptions.

There has been a good deal of discussion about why a ranking system is good for Philosophy. I have heard claims that it is helpful to philosophy departments seeking funding. These claims are debatable. And even if true, they have to be weighed against possible harms. We can sidestep this debate for the present, because the general public consensus among those who support rankings is that they are done for the benefit of prospective graduate students. This has been Leiter’s position.

Assumption #1. Whatever kind of information service we offer should have as its primary goal assisting prospective graduate students in choosing the best graduate program(s) for them.

One of the bones of contention in the profession has been whether rankings accurately report quality or prestige (or both).

Assumption #2. Our goal should be to assist students in finding the best quality graduate education, not the one with the most prestige. It may be that the two go hand in hand, but they may not. Prestige should only be viewed as a possible efficient cause, not as a final one.

Perhaps the biggest bone of contention is the notion of quality. Often the debate about rankings comes down to whether we can assume a univocal definition of quality. A definition of this sort appears to be implied in the overall rankings of departments. (If this assumption were not made, then there would be little point in doing overall rankings.) However, in the actual practices of philosophers, while we may have standards that we like to think of as universal for the discipline, it is impossible to find a univocal definition that is robust and concrete enough to reach all sub-fields, styles, traditions, etc.

Assumption #3. There are multiple reasonable ways to demonstrate quality in Philosophy, and different philosophers and departments will do so in different ways.

Prospective graduate students presumably want the highest quality education they can get, given their interests. They also have lives outside of philosophy that will require them to live in different locations and pay for their education in different ways. In addition, they will have different kinds of desires about where they would like to work and at what kind of institution. (Here I think we must give up the notion–or the illusion–that the ultimate goal of all prospective graduate students is to teach at a Research I institution.)

Assumption #4. Any service provided to prospective graduate students should take into consideration a wide range of factors such as those mentioned above. For example, it should contain placement records, type of school, size of school, and geography.

With these assumptions in mind, I ask the obvious question: how best can we serve the next generation of philosophers? And I submit that once we start considering the actual needs of students, the limitations of even an extensive ranking system like the PGR become readily apparent. We are privileging the perceptions of a subset of philosophers about quality over offering students a full range of ways of judging quality, as well as suitability for themselves. One way to put this: we are still using mid-twentieth-century technology when we could be providing a twenty-first-century alternative, one that could be much more helpful to students than any ranking system. Further, if Philosophy put into such a system half of the energy it now puts into the PGR, we could create a resource that would be the envy of other disciplines. (Certainly we could use the information contained in it in other ways, but I won’t address these here.) What would such a system look like? Noëlle McAfee made the following suggestion recently on her blog, “gonepublic: philosophy, politics, & public life”:

The APA has been collecting data from philosophy PhD programs for a few years now for its Guide to Programs on placement rates, etc. What if more information were collected, such as numbers of books published with university presses, faculty citation and Google Scholar analytics, peer-reviewed conference papers, faculty areas of specialization, etc? And then what if that information were turned into a search engine such that a prospective graduate student (or anyone) could go there and search by key words for programs that offered what she or he was wanting to study? Programs that were more research productive (with faculty being cited more) would show up higher on the list than those that weren’t. So the student could create a customized ranking of programs that would meet his or her interests. Anyone could use that data to generate rankings of any particular specialty.

Citations, publications, etc. are a better measure than perceived reputation. Not only are they more objective, they factor in the careful scrutiny that goes into the peer-review process—as opposed to top-of-the-head perceptions of faculty lists by those that may be unfamiliar with those faculty members’ work.

To this list could be added some of the factors I mentioned earlier, such as geographic distribution. A student using a search engine of this sort could create a truly individualized set of schools to apply to, as in the sketch below.
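
To give a sense of what such a tool might look like under the hood, here is a minimal, hypothetical sketch. The program records, field names, and weights are all invented for illustration; a real version would be populated from data such as the APA’s Guide to Programs. The student supplies a specialty and weights for the factors they care about, and gets back a personalized ordering.

    from dataclasses import dataclass

    # Hypothetical sketch only: records, fields, and weights are invented
    # placeholders, not real data about any program.

    @dataclass
    class Program:
        name: str
        specialties: set
        placement_rate: float        # share of graduates placed, 0.0-1.0
        citations_per_faculty: float # e.g., citation counts per faculty member
        region: str

    def rank_for_student(programs, specialty, weights, region=None):
        """Return programs offering the specialty, ordered by the student's own
        weighted mix of factors rather than by anyone's reputational ranking."""
        matches = [p for p in programs
                   if specialty in p.specialties
                   and (region is None or p.region == region)]
        def score(p):
            return (weights.get("placement", 0.0) * p.placement_rate
                    + weights.get("citations", 0.0) * p.citations_per_faculty)
        return sorted(matches, key=score, reverse=True)

    # Example query: a student interested in American Pragmatism who cares most
    # about placement and wants to stay in the Midwest.
    programs = [
        Program("Example University", {"American Pragmatism", "Ethics"}, 0.80, 12.0, "Midwest"),
        Program("Sample State", {"American Pragmatism"}, 0.65, 20.0, "Northeast"),
    ]
    for p in rank_for_student(programs, "American Pragmatism",
                              weights={"placement": 2.0, "citations": 0.05},
                              region="Midwest"):
        print(p.name)

The point of the sketch is not the particular scoring rule but the inversion of control: the student, not a panel of evaluators, decides which factors count and how much.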

But wait! The supporters of rankings will say we still need to provide students with “objective” information about quality, meaning, the perceptions of those in the field. I say, we can debate from today until the apocalypse and we may never agree on what these “objective” perceptions amount to. In this case, the best “objectivity” lies in the data we provide. Let’s give future graduate students some credit and not assume, paternalistically, that without our perceptions of “quality” in the form of rankings they will be lost at sea. They are a sophisticated bunch, well-accustomed to making all kinds of decisions with information presented on web-based platforms.  Philosophy should tailor its information outreach to the prospective graduate students of this century, not the last.

(For more information on the current controversy in Philosophy about rankings, see “Archive of the Meltdown.”)

Rank-and-Yank: What’s Next for the Philosophy Rankings Game?


Jack Welch, former CEO of GE, is known for advocating a system he calls differentiation.  Others call it Rank-and-Yank.  Here’s how he describes the system as he attempts to defend it against critiques.

Another criticism of differentiation is that it requires managers to let every employee know where he or she stands—how they’re doing today, both quantitatively and qualitatively, and what their future with the company looks like. Are they a star in terms of both results and values (say, in the top 20% of the team), about average (say, about 70%), or not up to expectations (the bottom 10%)? Note: The 20-70-10 distribution is not set in stone. Some companies use A, B, and C grades, and there are other approaches as well. . . . Yes, I realize that some believe the bell-curve aspect of differentiation is “cruel.”  That always strikes me as odd. We grade children in school, often as young as 9 or 10, and no one calls that cruel.  But somehow adults can’t take it?  Explain that one to me.  WSJ

This is not a phenomenon that exists only in the corporate world.  I am here to tell you that I have been part of an established Research I department that required every faculty member in the department to be ranked, yearly.   And I dare say that there are others that require this and more might be on the way.  I say this not to scare, but to alert, for we are paving the way for this mentality every time we cheer the rankings game in philosophy.   We are actually champs here.  There are few other disciplines in the humanities (any?) that have decided to rank their own departments.  But philosophy, with all of its skeptical and critical minds, dove right in over the years.  And it did so without any organized public debate.  Without any democratic process. Without any serious evaluation of the pros and cons by the profession as a whole.   It was the doing of one fellow, Brian Leiter, and those who were willing and able to collaborate.

I don’t know many colleagues in philosophy who advocate further corporatizing of universities, but I do know many who are, shall we say, taken with the notion that quality and prudence demand that we rank philosophy departments. Currently, amidst the brouhaha over the bad behavior of the king of philosophy’s rankings, Brian Leiter, many philosophers have taken to the net to express their outrage and to demand that the king, who has in some ways already lost his head, step down from his post as rankings meister. But many of these same folks appear willing to dive right back in to continue the rankings game.

I and others have argued over the years, many years, that these systems are flawed and that, given the current make-up of the profession, they are bound to harm members of the community. They have been divisive, and they could be replaced by a sparkling new web site filled with extensive information on philosophy departments, including statistical data. Others view rankings as essential to the profession, but both as a professor and as an administrator, I have never been convinced that such a need actually exists. This, however, is all anecdotal. And I am not here to argue about the pros and cons. I do want to make a positive suggestion. But before doing so I want to get something off my chest.

I have been in this profession for over 35 years, 40 including grad school. I am embarrassed. I don’t know any other word for it. Perhaps shame. I know that I am not my horse, to borrow from Epictetus, that I am not my profession. My profession’s faults do not fall on me. But I still feel shame. I can’t believe that we philosophers have allowed Leiter and the rankings to happen without rising up and demanding that all those who wish to participate in a genuine and meaningful debate get a chance to do so. Yes, there have been voices raised against Leiter’s rankings. But the machine has rolled on and critical voices have thus far been marginalized. And it looks like it may happen again, even though there are those discussing alternatives. I think my shame may have to do with the fact that I always expected something more from philosophers, perhaps foolishly: I expected them to think more critically than most. Yet, like children following the pied piper, we let Brian Leiter, and a mentality he imported from law schools, entrance large segments of our profession and entrench something we never adequately discussed or debated.

Ah, but you will say I am assuming what I haven’t proved.  Perhaps those who wish to maintain the rankings are not entranced.  Perhaps those who believe in them are hardheaded realists, who worry about how the humanities are being undermined and accept the culture of rankings (or branding), a handmaiden of the corporatizing of higher ed, if it can help us save ourselves from budget cuts and other nastiness that can be doled out by Corporate U.

However, I am not asking that those who believe in rankings accede to my feelings or brief critical comments.  Of course this would never happen.  I am asking them to be willing to behave like philosophers, that is, engage in a real public debate.   Show your cards in public.  Be willing to strut your stuff.  Forcing those of us who disagree to accept rankings, because the powers that be have accepted them, means letting the god Thrasymachos win, once again.  Force, not the force of the better argument, will carry the day.

Let’s have a full public debate about the topic, perhaps in a series of meetings at the APA. Some people have already gotten into the swing of things. (See “Archive of the Meltdown.”) Let’s see those much-vaunted debating and analytic skills on display. Let’s see some fireworks before we agree to drown in a version of the status quo. Let’s behave like philosophers, not like this fellow.

An ostrich with its head in the sand