As mentioned in the previous post, the Maclean's overall ranking of Canadian law schools is based on four elements:

  • Faculty quality (50%)
  • Student (more aptly, graduate) quality (50% total)
    • elite firm hiring (25%)
    • national reach (15%)
    • Supreme Court Clerkship hiring (10%)

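Maclean's does not publish exactly how these four components are combined into the overall ranking. A minimal sketch, assuming each component is normalized to a 0–1 scale and the overall score is a simple weighted sum using the stated weights (the component values in the example are invented for illustration):

```python
# Hypothetical illustration: Maclean's does not disclose its aggregation
# method. This assumes each component score is normalized to [0, 1] and
# the overall score is a weighted sum using the published weights.

WEIGHTS = {
    "faculty_quality": 0.50,
    "elite_firm_hiring": 0.25,
    "national_reach": 0.15,
    "scc_clerkships": 0.10,
}

def overall_score(components: dict) -> float:
    """Weighted sum of normalized component scores (assumed method)."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

# Example: a school strong on faculty citations, middling elsewhere.
score = overall_score({
    "faculty_quality": 0.9,
    "elite_firm_hiring": 0.5,
    "national_reach": 0.4,
    "scc_clerkships": 0.3,
})
print(score)  # 0.9*0.50 + 0.5*0.25 + 0.4*0.15 + 0.3*0.10 = 0.665
```

Note that under any weighting like this, half of a school's overall score rides on the citation-based faculty measure discussed below.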
At this general level, it is difficult to quibble with the concept of assessing law schools on the basis of performance in faculty research and in teaching outputs.  Taking a closer look at how each of these four components is operationalized and measured, however, suggests a number of important limitations to the rankings.

1. "Faculty Quality [...] measures how often faculty members at each school are cited by other academics in 33 Canadian legal journals found in the Quicklaw journals database."

One of the central problems with how Faculty quality is measured is that it ignores influence in publications other than the 33 Canadian law journals. As an initial matter, I think it is fair to say that academics seek to publish where the most active audiences for their type of research are; for example, the best venues for law and economics research are likely American peer-reviewed journals such as the Journal of Legal Studies or the American Law and Economics Review, or even professional economics journals. While I am not well-versed in the most attractive journals for other types of interdisciplinary legal research, I strongly suspect that similar observations could be made about legal theory, where certain peer-reviewed journals out of the US and UK are preeminent, and some of the best work appears in peer-reviewed philosophy journals rather than in publications specifically focused on jurisprudence. For related reasons, the citation counts would be more complete if citations to the work of Canadian scholars in US and UK journals were counted as well. Of course, this would raise questions about the type of research that is most valued (e.g. predominantly Canadian topics such as Charter jurisprudence or Canadian legal history versus research that travels better, such as legal theory or law and economics). It would be difficult to design a system of citation counting that tidily and adequately addresses this problem, but it is important not to dismiss its existence.

Another limitation of the citation counts as a measure of Faculty quality is that frequency of citation is unlikely to be a perfect proxy for quality; for example, provocative papers are sometimes cited precisely for being provocative (and/or for defending positions that are outliers in the literature). Provocative papers are frequently of high quality, but not always.

Yet another possible limitation of the citation counts is that they will be affected by the size of a law school's graduate program, and by the number of faculty members more generally, giving a leg up to larger schools. I strongly suspect that graduate students are more likely (not necessarily strategically or malignly) to cite their supervisor's previous work, because they will be more familiar with it and will be steered to it by their supervisors. They will also more frequently be steered to relevant work by other members of the same faculty, since colleagues at the same law school tend to be more familiar with each other's work, especially when they are active in the same area of research. In other words, there are economies of scale in the citation measure: larger law schools have an advantage because colleagues (and graduate students who subsequently publish their work) are more likely to be familiar with the work of other members of the same faculty, and to read and comment on works in progress. All else equal, this will lead to more intra-faculty citations; law schools with a single scholar working in a particular area will miss out on these economies of scale. Having colleagues active in the same area can, of course, also lead to higher quality work, since frequent interaction and cooperation tend to improve scholarship. To the extent that publication quality and citation of colleagues' work are correlated, the above may overstate the limitations of citation counts as an indicator of Faculty quality.

Despite Professor Leiter's claim to the contrary, quantity will probably matter to some extent in the citation counts. Prolific scholars will have an advantage to the extent that they cite their own earlier work in subsequent publications. There is no suggestion that self-citations were excluded from the analysis, yet it is difficult to argue that self-citation is an accurate measure of one's own scholarly impact. I don't know how many of the citations counted were to scholars' own previous work, but I suspect it had at least some influence on the rankings.

2. "Elite Firm Hiring [...] uses the Lexpert list of leading Canadian law firms as its basis. Maclean’s examined the website of each of these leading law firms, and counted the number of associates from each law school at each firm. "

Some more information about the identity of these firms would be welcome; it is not clear which firms were actually included. As far as I can tell (I may be wrong on this), Lexpert does not produce a global list of "leading Canadian law firms", but instead publishes lists of leading firms in different practice areas in different regions of the country. Many practice areas have no “leading firms” at all, only “leading practitioners”. And where a practice area does have “leading firms” listed in a particular region, they are further separated first into “major full service” versus “mid-sized & litigation specialty”, and then again into “most frequently recommended”, “consistently recommended” and “repeatedly recommended”. It is unclear what threshold Maclean's used for counting a firm among the “elite firms”. More guidance on this would be appreciated (and perhaps will appear in the print version of the magazine, which I will be able to look at this afternoon).

It might make more sense to use a measure that is more sensitive to differences among the elite firms; for example, one could count the number of practitioners at each firm who are among Lexpert's elite, and build an index of eliteness for each firm. As the measure stands, an associate at the most elite law firm in its most elite practice group appears to count the same as an associate at the least elite firm in its weakest practice area. There are other problems, too, relating to how one compares a law firm with just one leading practice area (which thereby meets the threshold for being "elite") with firms that are leaders in a majority (or all) of their practice areas.

There is also a problem with counting only Canadian law firms. Massachusetts and New York allow Canadian law graduates to sit directly for those states' bar examinations, and Boston and New York firms regularly recruit at several Canadian law schools. A significant proportion of the graduating class at some Canadian schools prefers to work at elite American firms rather than elite Canadian ones. The students who capitalize on these opportunities are often among the most talented, and many have turned down offers from elite Canadian law firms. Not counting these students as working at "elite firms" is a significant limitation of this measure.

UPDATE: I have been told that associates at the top five Vault New York law firms were included in the rankings. This wasn't originally described in the methodology. It is a step in the right direction, but several more steps would be desirable: it is odd that only the top five Vault New York firms were included rather than a greater number (say 10 or 15), and that no Boston or London firms were included.

3. "National Reach [...] looks at how widely spread are the graduates from each school. The idea is to get a sense of whether a law school is able to place its grads at leading firms beyond its region and beyond a small network of firms. The elite firm hiring count from the previous metric was examined to determine what percentage of a school’s graduates are at elite firms other than the three elite firms that have the most associates from that school. If School X has 100 associates at elite firms, and 45 associates at its top three firms, it would have a “reach quotient” of 55/100 = 0.55."
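The quoted formula can be written out directly; a short sketch reproducing the magazine's own worked example (the per-firm breakdown for "School X" is invented, but is constructed to match the stated totals of 100 associates with 45 at the top three firms):

```python
# The "reach quotient" as described: the share of a school's elite-firm
# associates who are NOT at its top three firms by headcount.

def reach_quotient(associates_by_firm: dict) -> float:
    """Map of elite firm name -> number of the school's associates there."""
    total = sum(associates_by_firm.values())
    if total == 0:
        return 0.0
    top_three = sum(sorted(associates_by_firm.values(), reverse=True)[:3])
    return (total - top_three) / total

# Hypothetical breakdown matching the quoted example:
# 100 associates overall, 45 at the top three firms.
school_x = {"F1": 20, "F2": 15, "F3": 10, "F4": 10, "F5": 10,
            "F6": 10, "F7": 10, "F8": 10, "F9": 5}
print(reach_quotient(school_x))  # 0.55
```

One property worth noticing: a school whose elite-firm associates sit at three or fewer firms scores exactly zero, no matter how elite those firms are, which connects to the objection below.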

This measure helpfully counteracts one of the deficiencies of the previous measure: schools whose graduates cluster in a few regional firms with strong school connections (firms counted as “leading” by Lexpert in a particular region and practice group) will fare less well here than they did on the elite firm hiring count.

On the other hand, it is unclear why having many of your graduates concentrated in the same elite firms, provided those firms truly are the best, should be penalized. By analogy, if one were ranking private high schools and counted all Ivy League universities as “elite”, it would make little sense to exclude the most elite of them (e.g. Harvard, Yale, Princeton) from the measure: if a given private school placed more of its graduates at universities with higher admission standards, that concentration should not be counted against it. It is unclear how much of a problem this is in the present context, but constructing some kind of more nuanced ranking of the elite law firms might provide a way out of this potential jam.

Of course, this measure also misses "international reach" of the law schools that regularly place students in the excluded top New York and Boston firms, in international NGOs, and in various other attractive positions.

4. "Supreme Court hiring [...] Clerks are hired by the Supreme Court for a term that is usually one year; the clerks are selected by the judges and are generally chosen from the country’s top graduating students. We measured clerkship hiring over the past six years."

This measure is better than the other two measures of student quality, but also has limitations.  For example, because appeals to the Supreme Court of Canada are heard in both French and English, and involve both common and civil law, each justice usually selects at least one law clerk who has studied civil law and is fluently bilingual.  This tends to favour graduates of Quebec law schools and the University of Ottawa.

In addition, it is difficult for students of some law schools, regardless of how talented they may be, to obtain clerkships, because some schools place few if any clerks on a regular basis. Justices are likely to favour clerkship candidates who are not perceived as overly "risky". Candidates who come highly recommended by professors and Deans who have recommended effective clerks in the past will be given a leg up in the competition, because they will be perceived (perhaps rightly) as less of a gamble. This may lead to strong persistence in the performance of certain schools in the SCC clerkship hiring process, and may exacerbate the difficulty that even extremely talented students from less familiar law schools have in landing a position.

Comments are welcome.  I'm certain that there are other limitations that ought to be pointed out, and also other advantages to this methodology that I've missed. 

I will post again shortly with some thoughts about how the rankings might be improved for the next time around.