In two previous posts, I discussed the recent ranking of Canadian law schools in Maclean's magazine.  Before I make a post with my suggestions for improving the ranking methodology, however, I think it would be helpful to establish the spirit in which I will make these suggestions.  To set the stage, let me explain what I think are some of the legitimate roles of ranking systems.

The appropriate motivation behind constructing a ranking of anything, including law schools, I take it, is to systematically collect and analyze information about an array of elements in a way that provides some output that is meaningful for the users of the ranking.  Various ranking systems with meaningful output exist in other contexts.  Elo ratings in chess provide information about an individual's playing ability (and about a chess program's playing ability) by summarizing the relative performance of players in a more telling way than a pure Win-Loss-Draw tabulation.  They can do this because the Elo system takes into account the playing ability of each player's opponents.  The World Golf Rankings rank golfers based on each player's performance over the past two years in certain golf tournaments; the points awarded for any given result are based on a combination of the strength of the tournament's field and its proximity in time (more remote results are given less weight).  Ranking systems are also in use in other sporting contexts, such as tennis, squash (indeed most, perhaps all, racquet sports), bridge, and so on.  There is some consensus, I believe, that in each context in which such rankings are used they are imperfect predictors of future performance.  But it does not follow that they are useless and should be abandoned.  To the extent that they are well designed, and collect and summarize information that can be useful in predicting future performance, they are useful.  In a similar way, a law school ranking methodology can be worthwhile if it has the practical effect of consolidating, summarizing, and brokering information about a range of institutions in a way that reduces noise, increases the availability of meaningful information about an institution's results, and predicts (albeit necessarily imperfectly) future results.
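To make the opponent-adjustment point concrete, here is a minimal sketch in Python of how an Elo-style update works.  The logistic expected-score curve and the K-factor of 32 are conventional defaults I am assuming for illustration; nothing here is specific to any particular chess federation's implementation.

```python
# Minimal Elo-style update sketch (conventional logistic curve, K = 32).

def expected_score(rating_a, rating_b):
    """Expected score for player A against player B (between 0 and 1)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a, rating_b, score_a, k=32):
    """Return new ratings after one game; score_a is 1 (win), 0.5 (draw), or 0 (loss)."""
    expected_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Beating a much stronger opponent moves a rating far more than beating a
# much weaker one; this is how opponent strength enters the system.
print(update(1500, 1800, 1.0))   # large gain for the underdog
print(update(1500, 1200, 1.0))   # small gain for the favourite
```

The same result (a win) is worth more or less depending on who it was earned against, which is precisely the information a raw win-loss tally throws away.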

The central danger in any ranking system is poor design, either conceptually or as a matter of operationalization.  If ranking methodologies are poorly designed at a conceptual level, they can do damage by failing to consolidate, summarize, and broker information in a way that results in a meaningful ranking.  Ranking systems in sports do not exhibit this problem because they have a well-defined goal and are (at least to some extent) empirically verifiable.  The goal: predict who will prevail in future sporting events.  Everyone (or nearly everyone) agrees on what it means to "win" in chess (checkmate the opponent's king), in golf (complete 72 holes with the fewest strokes), or in squash (win three games to 11 points before your opponent does).  As a result, everyone knows how to test the validity of the ranking systems: see whether there is a way to improve the accuracy of the predictions made by the ranking system by tweaking its parameters.
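To illustrate what that kind of empirical check could look like, here is a rough sketch that replays a game history, asks how often the higher-rated player went on to win the next game, and sweeps the Elo K-factor to see whether any setting predicts better.  The game log below is entirely invented toy data, used only to show the shape of the exercise.

```python
import random

def expected_score(ra, rb):
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def prediction_accuracy(games, k):
    """games: list of (player_a, player_b, score_a) in chronological order."""
    ratings = {}
    correct = total = 0
    for a, b, score_a in games:
        ra = ratings.get(a, 1500.0)
        rb = ratings.get(b, 1500.0)
        if score_a != 0.5:                       # skip draws when scoring accuracy
            correct += (ra >= rb) == (score_a == 1.0)
            total += 1
        exp_a = expected_score(ra, rb)
        ratings[a] = ra + k * (score_a - exp_a)
        ratings[b] = rb + k * ((1 - score_a) - (1 - exp_a))
    return correct / total if total else 0.0

# Fabricated game log: "alice" is the strongest player, "carol" the weakest.
random.seed(0)
players = ["alice", "bob", "carol"]
strength = {"alice": 0.8, "bob": 0.5, "carol": 0.2}
games = []
for _ in range(200):
    a, b = random.sample(players, 2)
    p_a = strength[a] / (strength[a] + strength[b])
    games.append((a, b, 1.0 if random.random() < p_a else 0.0))

for k in (8, 16, 32, 64):
    print(k, round(prediction_accuracy(games, k), 3))
```

The sports case is tractable precisely because there is an unambiguous outcome against which the ranking's predictions can be scored in this way.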

The question underlying law school rankings (and university rankings more generally) is whether there is a definition of "winning" or "success" for institutions of higher education that all can agree upon.  In the case of law schools, there is no all-encompassing answer that every reasonable person would accept.  This does not mean that the process is futile, however; it means simply that one must carefully assess the ingredients of the ranking to ensure that it takes into consideration those factors that one regards as important and relevant indicators of a law school's success.  If irrelevant or misleading indicators of an institution's success are used in constructing a ranking, institutions may be tempted to "game" the rankings by deliberately manipulating the misleading or irrelevant attributes the ranking takes into account.

Brian Leiter is, of course, aware of these concerns and has attempted to construct a ranking methodology for Canadian law schools that is immune to gaming and provides meaningful information about the performance of law schools.  To his credit, the rankings take into account both of the main outputs of law schools: research and graduates.

The question I will address in the next post is this: in light of the limitations and concerns I've highlighted so far, how might the methodology be refined to do an even better job of measuring these outputs in a useful and meaningful way?