Critics of legal education raise two key questions about our scholarship: (1) How much value does it offer? And (2) do law schools have to spend so much money to produce that value?
The answer to the second question is easy: No. We used to produce plenty of superb scholarship with typewriters and four-course teaching loads. Now that we have laptops, tablets, high-powered statistical software, and 24/7 online libraries, our productivity has leaped. Law schools could easily restore teaching loads to four courses a year while still facilitating plenty of good research. The resulting reduction in faculty size could help fund scholarships and reduce tuition.
The answer to the value question is harder. Do we mean immediate payoff or long-term influence? Do we care about value to judges, legislators, practicing attorneys, clients, teachers, students, or some other group? Does each article have to demonstrate value? Or do we recognize that trial and error is part of scholarship, as it is of other endeavors?
Those are difficult questions, and they deserve a series of posts. For now, I’ll limit my discussion to a recent paper by Jeffrey Harrison and Amy Mashburn, which has already provoked considerable commentary. I agree with some of Harrison and Mashburn’s observations, but the empirical part of their paper goes badly astray. Without better methods, their conclusions can’t stand. In fact, as I note below, some of their findings seem at odds with their recommendations.
Measuring Citation Strength
Harrison and Mashburn decided to measure the strength of citations to scholarly work, rather than simply count the number of citations. That was an excellent idea; scholars in other fields have done this for decades. There’s a good review of that earlier work in Bornmann & Daniel, What Do Citation Counts Measure? A Review of Studies on Citing Behavior, 64 Journal of Documentation 45 (2008). (By the way, isn’t that an amazing name for a journal?)
If Harrison and Mashburn had consulted this literature, they would have found some good guideposts for their own approach. Instead, the paper’s method will make any social scientist cringe. There’s a “control group” that is nothing of the sort, and the method used for choosing articles in that group is badly flawed.* There is little explanation of how they developed or applied their typology (written protocol? inter-rater agreement? training periods?). Harrison and Mashburn tell us only that the distinctions were “highly subjective,” the lines were “difficult to draw,” and “even a second analysis by the current researchers could result in a different count.” Ouch.
Is it possible to make qualitative decisions about citation strength in a thoughtful, documented way? Absolutely. Here’s an example of a recent study of citation types that articulates a rigorous method: Stefan Stremersch et al., Unraveling Scientific Impact: Citation Types in Marketing Journals, 32 Int’l Journal of Research in Marketing 64 (2015). Harrison and Mashburn might choose a different design than previous scholars have used, but they need to develop their parameters, articulate them to others, and apply them in a controlled way.
Influence and Usefulness
Harrison and Mashburn conclude that most legal scholarship “is not regarded as useful.” Even when a judge or scholar cites an article, they find, most of the cited articles “serve no useful function in helping the citing author advance or articulate a new idea, theory or insight.” Application of this standard, however, leads to some troubling results.
The authors point, for example, to an article by John Blume, Ten Years of Payne: Victim Impact Evidence in Capital Cases, 88 Cornell L. Rev. 257 (2003). A court cited this article for the seemingly banal fact that “the federal government, the military, and thirty-three of the thirty-eight states with the death penalty have authorized the use of victim impact evidence in capital sentencing.” Harrison and Mashburn dismiss this citation as “solely to the descriptive elements of the article.”
That’s true in a way, but this particular “description” didn’t exist until Blume researched all of that state and federal law to create it. The court wanted to know the state of the law, and Blume provided the answer. This answer may not have “advance[d] . . . a new idea, theory or insight,” but most cases don’t require that level of theory. Disputes do require information about the existing state of the law, and Blume assembled information that helped advance resolution of this dispute. Why isn’t that a worthwhile type of influence?
I suspect that judges and practitioners appreciate the type of survey that Blume provided; analyzing the law of 40 jurisdictions requires both time and professional judgment. Blume, of course, did more than just survey the law: he also pointed out crevices and problems in the existing law. But dismissing a citation to the survey portion of his article seems contrary to the authors’ desire to create scholarship that will be more useful.
A reworked method might well distinguish citations to descriptive/survey research from those that adopt a scholar’s new theory. Asking scholars to limit their work to the latter, however, seems counterproductive. A lot of people need to know what the law is, not just what it might be.
Judges and Scholars
One statistic in the Harrison and Mashburn article blew me away. On page 25, they note that 73 out of 198 articles from their “top 100” group of journals were cited by courts. That’s more than a third (36.9%) of the articles! I find that a phenomenally high citation rate. I know from personal experience that judges do pay attention to law review articles. When I clerked for Justice O’Connor, for example, she asked us to give her a shelf of law review articles for each of the bench memos we wrote. She didn’t want just our summaries of the articles; she wanted the articles themselves.
But I never would have guessed that the judicial citation rate was as high as 36.9% for professional articles, even for journals from the top 100 schools. At least in judicial circles, there’s a big drop-off between learning from an article and citing the article. Most judges try to keep their opinions lean, and there’s no cultural pressure to cite scholarly works.
I’m not sure how to square the judicial citation statistic with the tone of Harrison and Mashburn’s article. More than a third sounds like a high citation rate to me, as does the one-quarter figure for journals in the 15-100 group.
Ongoing Discussion
Harrison and Mashburn urge critical debate over the value and funding of legal scholarship, and I back them all the way on that. I wrote this post in that spirit. As I note above, I don’t think law schools need to spend as much money as they do to produce excellent scholarship. I also applaud efforts to replace citation counting with more nuanced measures of scholarly value. But we need much stronger empirical work to examine claims like the ones advanced in this paper. Are Harrison and Mashburn right that most legal scholarship “is not regarded as useful”? I don’t know, but I was put off by strong claims resting on weak empirical evidence.
__________________________
* Harrison and Mashburn chose the first article from each volume. That’s a textbook example of non-random selection: the first article in a volume almost certainly differs, on average, from other articles.