Ideally, rankings should be based on aspects of law school performance that can garner wide acceptance and cannot easily be manipulated.  That is, for a law school to improve its position in the rankings, it should have to actually improve its performance as an institution.  The two main activities of a law school are research and teaching.  Therefore, in measuring the performance of law schools in order to rank them, the goal should be to measure research contributions and teaching effectiveness in an unbiased way.  This is not easy to do.

First, some ideas about improving the measure of "faculty quality."  The Maclean's ranking measured "faculty quality" by determining the per capita number of citations to faculty work in the 33 Canadian law reviews found in the Quicklaw journals database.  This was intended to measure the "impact" of the research produced by members of each faculty.  One limitation of this measure is that it ignores important non-Canadian journals.  One way to improve the measure, then, would be to add a list of such journals to the 33 Canadian ones already included.  A (possibly) relatively uncontroversial way of compiling this list of "other journals" would be to include any electronically searchable academic journal that has published at least X articles (say, at least three, in order to avoid the especially thin part of the distribution) by current faculty members at Canadian law schools.  This would probably not dramatically increase the number of academic journals included in the measure, but it would produce a better targeted list of journals to scour for citations.  For example, it would pick up journals like The New England Journal of Medicine, the Oxford Journal of Legal Studies, Philosophy and Public Affairs, the American Law and Economics Review, the Modern Law Review, the Law Quarterly Review, The Journal of Legal Studies, and an undetermined number of others.  I would also exclude self-citations from the measure, since counting them rewards self-promotion rather than true scholarly impact.  It might also be desirable to scale citations according to the quality of the journal in which they appear, though this would probably be very difficult to do without arousing considerable controversy (for this reason, I would probably avoid trying).  My guess is that this extended list of academic journals would not dramatically affect the ordering of the law schools with respect to faculty quality, but it would at least counter the incentive that counting only Canadian journals creates with respect to decisions about where to publish one's research (and what to write about).
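To make the proposed refinement concrete, here is a minimal sketch of how the adjusted measure could be computed.  The data structures, field names, and function names are illustrative assumptions, not part of the Maclean's methodology; the three-article threshold is the one suggested above.

```python
from collections import defaultdict

def eligible_journals(articles, min_articles=3):
    """Compile the list of 'other journals': any searchable academic journal
    that has published at least `min_articles` pieces by current faculty
    members at Canadian law schools (the threshold suggested above)."""
    counts = defaultdict(int)
    for article in articles:  # each article: {"journal": ..., "author": ...}
        counts[article["journal"]] += 1
    return {journal for journal, n in counts.items() if n >= min_articles}

def per_capita_citations(citations, faculty_size, journals):
    """Per-capita citation count for one school, restricted to the eligible
    journals and excluding self-citations, as proposed above."""
    usable = sum(
        1
        for c in citations  # each: {"journal", "citing_author", "cited_author"}
        if c["journal"] in journals
        and c["citing_author"] != c["cited_author"]  # drop self-citations
    )
    return usable / faculty_size
```

Note that the journal filter should be computed over faculty from all Canadian law schools before any individual school's score is calculated, so that every school is measured against the same list of journals.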

Second, with respect to assessing graduates: as currently calculated, the aim seems to be to measure how consistently graduates of each law school are able to secure the "most attractive" positions.  One problem with this as a measure of a law school is that it may misleadingly give the impression that the law school itself is responsible for, or is unduly "taking credit" for, the subsequent success of its students.  It would probably be considered rather uncontroversial to claim that law graduates are only partly hired for what they know about the law at the time they graduate.  They are hired also for their talent and their commitment to nurturing that talent as they develop into effective counsel (or business persons, or NGO staff members, or wherever their ambitions take them).  To the extent this is true, looking at the employment experience of graduates doesn't necessarily reveal what the learning experience is like at a particular school, but rather something about the ability of the members of past classes of graduates.  That is, measuring employment and placement outcomes provides a noisy signal about the distribution of talent and ambition in law schools over several years.  There is probably also some effect of the substance of what law students learn during law school, but it would be quite difficult to tease apart the two effects empirically given the state of the data.
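To illustrate why the two effects are so hard to separate, consider a minimal simulation sketch.  The school names, effect sizes, and noise levels are entirely hypothetical assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical model: a graduate's placement outcome mixes the school's
# contribution with the student's own talent and ambition, plus noise.
school_effect = {"School A": 0.30, "School B": 0.10}    # assumed teaching effect
incoming_talent = {"School A": 1.00, "School B": 0.90}  # assumed class talent

for school in ("School A", "School B"):
    talent = rng.normal(incoming_talent[school], 0.5, size=500)
    noise = rng.normal(0.0, 0.5, size=500)
    outcome = school_effect[school] + talent + noise
    print(f"{school}: mean placement outcome = {outcome.mean():.2f}")
```

An observer who sees only the mean placement outcomes cannot tell how much of the gap between the two schools comes from teaching and how much from the incoming talent of the class; without data on student ability at admission, the two are confounded.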

A potentially persuasive response to the argument that "good inputs lead to good outputs" (and thus that the measure of an institution's performance should be the difference between the quality of the incoming class and the performance of its graduates) concerns student choice.  To the extent that students base matriculation decisions on their own view of the quality of the educational experience, the quality of the incoming class will itself reflect students' perception of the institution's performance.  Relatedly, it would be difficult (though not impossible) to argue that the competitiveness of admission standards at law schools doesn't indirectly provide information about the market's assessment of a law school's quality.  Of course, the competitiveness of admission might also pick up other things, such as lower tuition fees or more generous financial aid, a school's presence in a relatively desirable geographical location (e.g. large urban areas where spouses or partners can more easily find opportunities to study or work), etc.

With these caveats in mind, the measures of graduate placement are both under- and over-inclusive as they now stand.  Let's assume that we care about graduate placement because it captures the talent and ambition of students as well as the influence of the legal education itself.  I don't claim to know what the "most attractive" positions are in a global sense, but I do think that clerkships per capita at the Supreme Court of Canada is a strong measure and should be retained (despite the weaknesses I outlined in an earlier post).  More thought needs to be put into the "elite firm employment" and "national reach" categories, because they include positions that are not all equally "elite" and exclude other types of very attractive public service employment.  It is unclear which firms were considered "elite," but if the list came from Lexpert's list of Canada's largest law firms in different regions, it is quite imperfect.  One obvious improvement would be to include all the member firms of the Vault 100, since even the lowliest member of the Vault 100 is likely to be more "elite" than the least among the Canadian firms included in the Lexpert list.  A Canadian law school graduate hired by a Vault 100 firm very likely could have worked at an "elite" Canadian firm had they pursued that option.  Another refinement would be to count only associates in practice areas that Lexpert lists as "recommended," instead of counting associates in all of a firm's practice areas regardless of reputation.
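To make these suggested refinements concrete, here is a minimal sketch of a per-capita placement score.  The firm names, data fields, and the weight given to clerkships are hypothetical assumptions for illustration, not part of the Maclean's methodology:

```python
# Hypothetical reference data (in practice, compiled from Vault and Lexpert).
VAULT_100 = {"Firm X LLP", "Firm Y LLP"}
LEXPERT_RECOMMENDED = {("Firm Z LLP", "Securities")}  # (firm, practice area)

def placement_score(graduates, scc_clerkships, class_size, clerkship_weight=2.0):
    """Per-capita placement score reflecting the refinements above: SCC
    clerkships, plus hires at Vault 100 firms or in Lexpert-'recommended'
    practice groups.  The clerkship weight is an illustrative assumption."""
    elite_hires = sum(
        1
        for g in graduates  # each graduate: {"firm": ..., "practice_area": ...}
        if g["firm"] in VAULT_100
        or (g["firm"], g["practice_area"]) in LEXPERT_RECOMMENDED
    )
    return (clerkship_weight * scc_clerkships + elite_hires) / class_size
```

The relative weighting of clerkships against firm placements is the obviously contestable choice here; any published version of such a score would need to justify, or at least disclose, it.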

I conclude with a general observation: if one must have rankings (and I assume rankings are inevitable, since demand for them among prospective students is high), then it is better to have rankings that at least measure meaningful things.  Despite the limitations of the methodology used by Maclean's, it is probably superior to the methodologies used in other rankings of Canadian law schools.  The results obtained by adopting some (perhaps even all) of the above suggestions would probably be strongly correlated with the results reported in the most recent issue, largely because the refinements suggested are just that: refinements rather than wholesale changes.