This piece was originally published on Bloomberg.
Earlier this month, at the Association of American Law Schools’ annual meeting in New York, the AALS’s Section for the Law School Dean hosted a panel on law school rankings. During a Q&A, Nebraska Law School Dean Susan Poser posed a series of questions to Bob Morse, chief architect of the U.S. News law school rankings.
“I don’t know anything about schools except the one I went to and the one I’m at now. How do you justify asking us to rank the prestige of other schools, and how do you justify giving this component such a large weight?”
Blake Edwards, writing for Big Law Business, has more details on the panel here. I want to spark a discussion about some ways to improve the reputation metric.
The Current Reputation Metrics
U.S. News measures “quality” by surveying two distinct groups: a defined group of faculty and deans at each law school (peers) and a range of lawyers and judges. U.S. News, through a third party, asks each person to rate law school programs from 1 (marginal) to 5 (outstanding). All 200+ ABA-approved law schools are on the survey. If an individual does not know enough about a school to evaluate it fairly, they are instructed not to select a rating.
Last year, 58% of contacted legal academics responded to the survey. Lawyers and judges respond at a far lower rate, so for that group U.S. News averages and weights responses from the three most recent years of surveys. The average peer assessment score counts for 25% of a school’s total rank; the average assessment score by lawyers and judges counts for 15%.
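The blending of several survey years can be sketched in a few lines. This is an illustration only: U.S. News does not publish its exact year-by-year weights, so the equal weights below are an assumption.

```python
# Hypothetical sketch of combining the three most recent years of
# lawyer/judge survey averages into one assessment score.
# ASSUMPTION: equal weights per year; the real weighting is unpublished.

def blended_assessment(yearly_scores, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Return the weighted average of per-year survey scores (1-5 scale)."""
    if len(yearly_scores) != len(weights):
        raise ValueError("need one weight per year of scores")
    return sum(score * w for score, w in zip(yearly_scores, weights))

# A school rated 3.0, 3.2, and 3.4 over three years blends to roughly 3.2:
score = blended_assessment([3.0, 3.2, 3.4])
```

Under any such scheme, a single unusually good or bad survey year moves the blended score only modestly, which is presumably the point of averaging.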
The Problems With These Metrics
A lot of ink has been spilled over problems with the current reputation metrics. They cause law schools to spend heavily on “law school porn” in attempts (usually failed) to boost scores. Quality is not defined for survey respondents. Self-interest may lead respondents to strategically inflate the relative standing of their alma mater or employer. Scores are affected by an “echo effect” from the previous year’s ranking; insofar as they are not, scores are likely based on exposure to just a few people associated with the school being rated. Additionally, the peer assessment score may capture only the quality of legal scholarship (whatever that means to the respondent) rather than teaching quality or learning outcomes.
Improving These Metrics
Accurately representing school reputations matters for students looking to join a profession collectively obsessed with prestige and ordering. If the reputational survey does not or cannot serve its purpose, U.S. News should change it.
What follows is a first draft of sorts on how to better measure school reputations among lawyers and judges in a way that serves a valid consumer purpose. (Improving the peer reputation score is a topic for a different day.) All but a handful of law schools have a regional, in-state, or even just local reach. By evaluating every school on a national scale, U.S. News misjudges how reputation actually works throughout the legal profession. A survey designed to reflect regional differences would give students a better tool for distinguishing among schools.
U.S. News should distribute state-specific surveys to lawyers and judges based on where they work. Rather than including 200+ law schools on the survey, it would include only schools with an objective connection to the state.
The entry-level hiring data that schools already collect and publish makes this easy. Examine each school’s geographic employment outcomes and include it on the survey of every state in which 5% or more of its graduates worked. The New York survey would include many schools; the Wyoming survey would include just one. Either way, response rates would improve because the surveys would be far less overwhelming, and survey data would improve because recipients are likely to have encountered alumni from the schools listed.
Sure, the University of Wyoming would probably earn a much higher rating than it does under the current scheme, but why is that a problem? Measuring reputation is, to some extent, measuring bias. A state’s insularity tells us something meaningful about the value of a degree from a school in that state. The larger concern is that a significantly better rating would elevate the school’s rank and lead someone to choose the school for its higher ranking despite not wanting to work in Wyoming. But that problem exists now.
One way to understand—and perhaps react to—gaming of reputation scores would be to ask survey recipients to report their own law school. Depending on the results, U.S. News could adjust the final score or simply report the differences that do or do not exist.
The more fundamental problem is that ranking law schools nationally is nonsense. (My organization uses a different system for helping prospective law students choose whether and where to go to school.) Yet incremental change matters. As long as the U.S. News rankings influence decision-making by schools and students, we should encourage the ranking gurus to optimize their methodology by providing alternatives. Simply complaining serves little valuable purpose.