Equating, Scaling, and Civil Procedure

April 16th, 2015

Still wondering about the February bar results? I continue that discussion here. As explained in my previous post, NCBE premiered its new Multistate Bar Exam (MBE) in February. That exam covers seven subjects, rather than the six tested on the MBE for more than four decades. Given the type of knowledge tested by the MBE, there is little doubt that the new exam is harder than the old one.

If you have any doubt about that fact, try this experiment: Tell any group of third-year students that the bar examiners have decided to offer them a choice. They may study for and take a version of the MBE covering the original six subjects, or they may choose a version that covers those subjects plus Civil Procedure. Which version do they choose?

After the students have eagerly indicated their preference for the six-subject test, you will have to apologize profusely to them. The examiners are not giving them a choice; they must take the harder seven-subject test.

But can you at least reassure the students that NCBE will account for this increased difficulty when it scales scores? After all, NCBE uses a process of equating and scaling scores that is designed to produce scores with a constant meaning over time. A scaled score of 136 in 2015 is supposed to represent the same level of achievement as a scaled score of 136 in 2012. Is that still true, despite the increased difficulty of the test?

Unfortunately, no. Equating works only for two versions of the same exam. As the word “equating” suggests, the process assumes that the exam drafters attempted to test the same knowledge on both versions of the exam. Equating can account for inadvertent fluctuations in difficulty that arise from constructing new questions that test the same knowledge. It cannot, however, account for changes in the content or scope of an exam.

This distinction is widely recognized in the testing literature–I cite numerous sources at the end of this post. It appears, however, that NCBE has attempted to “equate” the scores of the new MBE (with seven subjects) to older versions of the exam (with just six subjects). This treated the February 2015 examinees unfairly, leading to lower scores and pass rates.

To understand the problem, let’s first review the process of equating and scaling.

Equating

First, remember why NCBE equates exams. To avoid security breaches, NCBE must produce a different version of the MBE every February and July. Testing experts call these different versions “forms” of the test. For each of the MBE forms, the designers attempt to create questions that impose the same range of difficulty. Inevitably, however, some forms are harder than others. It would be unfair for examinees one year to get lower scores than examinees the next year, simply because they took a harder form of the test. Equating addresses this problem.

The process of equating begins with a set of “control” questions or “common items.” These are questions that appear on two forms of the same exam. The February 2015 MBE, for example, included a subset of questions that had also appeared on some earlier exam. For this discussion, let’s assume that there were 30 of these common items and 160 new questions that counted toward each examinee’s score. (Each MBE also includes 10 experimental questions that do not count toward the test-taker’s score but that help NCBE assess items for future use.)

When NCBE receives answer sheets from each version of the MBE, it is able to assess the examinees’ performance on the common items and new items. Let’s suppose that, on average, earlier examinees got 25 of the 30 common items correct. If the February 2015 test-takers averaged only 20 correct answers to those common items, NCBE would know that those test-takers were less able than previous examinees. That information would then help NCBE evaluate the February test-takers’ performance on the new test items. If the February examinees also performed poorly on those items, NCBE could conclude that the low scores were due to the test-takers’ abilities rather than to a particularly hard version of the test.

Conversely, if the February test-takers did very well on the new items–while faring poorly on the common ones–NCBE would conclude that the new items were easier than questions on earlier tests. The February examinees racked up points on those questions, not because they were better prepared than earlier test-takers, but because the questions were too easy.

The actual equating process is more complicated than this. NCBE, for example, can account for the difficulty of individual questions rather than just the overall difficulty of the common and new items. The heart of equating, however, lies in this use of “common items” to compare performance over time.
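For readers who want to see the mechanics, here is a bare-bones sketch (in Python) of the common-item logic just described. The 25-of-30 and 20-of-30 figures come from the hypothetical above; the percentages for each form's unique questions, and the function itself, are invented for illustration. NCBE's actual equating rests on item response theory, so treat this as a sketch of the reasoning rather than the method.

```python
# A toy version of common-item equating. The common-item averages (25/30 and
# 20/30) come from the hypothetical in this post; the unique-item percentages
# are invented. NCBE's real process uses item response theory.

def new_item_difficulty(anchor_old, unique_old, anchor_new, unique_new):
    """All arguments are mean proportions correct.
    Returns a rough difficulty gap for the new form's unique questions:
    negative means those questions were harder than the old form's unique
    questions, after adjusting for the ability gap the common items reveal."""
    ability_gap = anchor_old - anchor_new           # positive = weaker new group
    expected_unique_new = unique_old - ability_gap  # what equally hard items would predict
    return unique_new - expected_unique_new

# Earlier examinees: 25/30 on the common items, 70% on their form's unique items.
# February group: 20/30 on the common items, 50% on the new items.
gap = new_item_difficulty(25/30, 0.70, 20/30, 0.50)
print(f"After adjusting for ability, the new questions look "
      f"{'harder' if gap < 0 else 'easier'} by about {abs(gap):.0%}.")
```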

Scaling

Once NCBE has compared the most recent batch of exam-takers with earlier examinees, it converts the current raw scores to scaled ones. Think of the scaled scores as a rigid yardstick; these scores have the same meaning over time. 18 inches this year is the same as 18 inches last year. In the same way, a scaled score of 136 has the same meaning this year as last year.

How does NCBE translate raw points to scaled scores? The translation depends upon the results of equating. If a group of test-takers performs well on the common items, but not so well on the new questions, the equating process suggests that the new questions were harder than the ones on previous versions of the test. NCBE will “scale up” the raw scores for this group of exam takers to make them comparable to scores earned on earlier versions of the test.

Conversely, if examinees perform well on new questions but poorly on the common items, the equating process will suggest that the new questions were easier than ones on previous versions of the test. NCBE will then scale down the raw scores for this group of examinees. In the end, the scaled scores will account for small differences in test difficulty across otherwise similar forms.
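Pictured crudely, that last step is a linear conversion applied to raw scores. In the sketch below, the slope, intercept, and difficulty adjustment are all invented for illustration; NCBE derives its actual conversion from the equating analysis, not from fixed constants like these.

```python
# A minimal sketch of raw-to-scaled conversion. The constants are invented;
# NCBE derives its conversion through IRT-based equating, not a fixed formula.

REF_SLOPE = 0.7       # hypothetical scaled points per raw point
REF_INTERCEPT = 48.0  # hypothetical offset

def scale_raw_score(raw, difficulty_adjustment):
    """Convert a raw MBE score (out of 190) to the roughly 200-point scale.
    `difficulty_adjustment` comes from equating: positive when this form's
    questions were harder than the reference form (scores are scaled up),
    negative when they were easier (scores are scaled down)."""
    return REF_SLOPE * (raw + difficulty_adjustment) + REF_INTERCEPT

print(round(scale_raw_score(128, difficulty_adjustment=3.0), 1))   # harder form: about 139.7
print(round(scale_raw_score(128, difficulty_adjustment=-3.0), 1))  # easier form: about 135.5
```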

Changing the Test

Equating and scaling work well for test forms that are designed to be as similar as possible. The processes break down, however, when test content changes. You can see this by thinking about the data that NCBE had available for equating the February 2015 bar exam. It had a set of common items drawn from earlier tests; these would have covered the six original subjects. It also had answers to the new items (160 in our hypothetical); these would have included both the original subjects and the new one (Civil Procedure).

With these data, NCBE could make two comparisons:

1. It could compare performance on the common items. It undoubtedly found that the February 2015 test-takers performed less well than previous test-takers on these items. That’s a predictable result of having a seventh subject to study. This year’s examinees spread their preparation among seven subjects rather than six. Their mastery of each subject was somewhat lower, and they would have performed less well on the common items testing those subjects.

2. NCBE could also compare performance on the new Civil Procedure items with performance on old and new items in other subjects. NCBE won't release those comparisons, because it no longer discloses raw scores for subject areas. I predict, however, that performance on the Civil Procedure items was comparable to performance on Evidence, Property, and the other subjects. Why? Because Civil Procedure is not intrinsically harder than those subjects, and the examinees studied all seven.

Neither of these comparisons, however, would address the key change in the MBE: Examinees had to prepare seven subjects rather than six. As my previous post suggested, this isn’t just a matter of taking all seven subjects in law school and remembering key concepts for the MBE. Because the MBE is a closed-book exam that requires recall of detailed rules, examinees devote 10 weeks of intense study to this exam. They don’t have more than 10 weeks, because they’re occupied with law school classes, extracurricular activities, and part-time jobs before mid-May or mid-December.

There’s only so much material you can cram into memory during ten weeks. If you try to memorize rules from seven subjects, rather than just six, some rules from each subject will fall by the wayside.
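Here is a toy model of that argument, with invented numbers. It assumes that study time is fixed at ten weeks, that mastery of each subject is roughly proportional to the time devoted to it, and that the two cohorts have identical underlying ability.

```python
# A toy model of the argument above. All numbers are invented, and the
# proportionality assumption is a deliberate simplification.

STUDY_WEEKS = 10
BASE_PCT = 0.85  # hypothetical % correct when ten weeks cover only six subjects

def common_item_performance(num_subjects):
    """Common items test the original six subjects, so performance on them
    tracks how much study time each of those subjects actually received."""
    weeks_per_subject = STUDY_WEEKS / num_subjects
    six_subject_weeks = STUDY_WEEKS / 6
    return BASE_PCT * (weeks_per_subject / six_subject_weeks)

six = common_item_performance(6)    # 85%
seven = common_item_performance(7)  # about 73%
print(f"Common-item performance: {six:.0%} with six subjects, {seven:.0%} with seven.")
# Equating sees only this drop. It cannot tell that the cohort is equally able
# and simply had more law to memorize, so it scales everyone's scores down.
```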

When Equating Doesn’t Work

Equating is not possible for a test like the new MBE, which has changed significantly in content and scope. The test places new demands on examinees, and equating cannot account for those demands. The testing literature is clear that, under these circumstances, equating produces misleading results. As Robert L. Brennan, a distinguished testing expert, wrote in a prominent guide: “When substantial changes in test specifications occur, either scores should be reported on a new scale or a clear statement should be provided to alert users that the scores are not directly comparable with those on earlier versions of the test.” (See p. 174 of Linking and Aligning Scores and Scales, cited more fully below.)

“Substantial changes” is one of those phrases that lawyers love to debate. The hypothetical described at the beginning of this post, however, seems like a common-sense way to identify a “substantial change.” If the vast majority of test-takers would prefer one version of a test over a second one, there is a substantial difference between the two.

As Brennan acknowledges in the chapter I quote above, test administrators dislike re-scaling an exam. Re-scaling is both costly and time-consuming. It can also discomfort test-takers and others who use those scores, because they are uncertain how to compare new scores to old ones. But when a test changes, as the MBE did, re-scaling should take the place of equating.

The second best option, as Brennan also notes, is to provide a “clear statement” to “alert users that the scores are not directly comparable with those on earlier versions of the test.” This is what NCBE should do. By claiming that it has equated the February 2015 results to earlier test results, and that the resulting scaled scores represent a uniform level of achievement, NCBE is failing to give test-takers, bar examiners, and the public the information they need to interpret these scores.

The February 2015 MBE was not the same as previous versions of the test, it cannot be properly equated to those tests, and the resulting scaled scores represent a different level of achievement. The lower scaled scores on the February 2015 MBE reflect, at least in part, a harder test. To the extent that the test-takers also differed from previous examinees, it is impossible to separate that variation from the difference in the tests themselves.

Conclusion

Equating was designed to detect small, unintended differences in test difficulty. It is not appropriate for comparing a revised test to previous versions of that test. In my next post on this issue, I will discuss further ramifications of the recent change in the MBE. Meanwhile, here is an annotated list of sources related to equating:

Michael T. Kane & Andrew Mroch, Equating the MBE, The Bar Examiner, Aug. 2005, at 22. This article, published in NCBE’s magazine, offers an overview of equating and scaling for the MBE.

Neil J. Dorans et al., Linking and Aligning Scores and Scales (2007). This is one of the classic works on equating and scaling. Chapters 7-9 deal specifically with the problem of test changes. Although I’ve linked to the Amazon page, most university libraries should have this book. My library has the book in electronic form so that it can be read online.

Michael J. Kolen & Robert L. Brennan, Test Equating, Scaling, and Linking: Methods and Practices (3d ed. 2014). This is another standard reference work in the field. Once again, my library has a copy online; check for a similar ebook at your institution.

CCSSO, A Practitioner’s Introduction to Equating. This guide was prepared by the Council of Chief State School Officers to help teachers, principals, and superintendents understand the equating of high-stakes exams. It is written for educated lay people, rather than experts, so it offers a good introduction. The source is publicly available at the link.


Old Ways, New Ways

April 14th, 2015

For the last two weeks, Michael Simkovic and I have been discussing the manner in which law schools used to publish employment and salary information. The discussion started here and continued on both that blog and this one. The debate, unfortunately, seems to have confused some readers because of its historical nature. Let’s clear up that confusion: We were discussing practices that, for the most part, ended four or five years ago.

Responding to both external criticism and internal reflection, today’s law schools publish a wealth of data about their employment outcomes; most of that information is both user-friendly and accurate. Here’s a brief tour of what data are available today and what the future might still hold.

ABA Reports

For starters, all schools now post a standard ABA form that tabulates jobs in a variety of categories. The ABA also provides this information on a website that includes a summary sheet for each school and a spreadsheet compiling data from all of the ABA-accredited schools. Data are available for classes going back to 2010; the 2014 data will appear shortly (and are already available on many school sites).

Salary Specifics

The ABA form does not include salary data, and the organization warns schools to “take special care” when reporting salaries because “salary data can so easily be misleading.” Schools seem to take one of two approaches when discussing salary data today.

Some provide almost no information, noting that salaries vary widely. Others post their “NALP Report” or tables drawn directly from that report. What is this report? It’s a collection of data that law schools have gathered for about forty years but did not disclose publicly until the last five. The NALP Report for each school summarizes the salary data that the school has gathered from graduates and other sources. You can find examples by googling “NALP Report” along with the name of a law school. NALP reports are available later in the year than ABA ones; you won’t find any 2014 NALP Reports until early summer.

NALP’s data gathering process is far from perfect, as both Professor Simkovic and I have discussed. The report for each school, however, has the virtue of both providing some salary information and displaying the limits of that information. The reports, for example, detail how many salaries were gathered in each employment category. If a law school reports salaries for 19/20 graduates working for large firms, but just 5/30 grads working in very small firms, a reader can make note of that fact. Readers also get a more complete picture of how salaries differ between the public and private sector, as well as within subsets of those groups.

Before 2010, no law school shared its NALP Report publicly. Instead, many schools chose a few summary statistics to disclose. A common approach was to publish the median salary for a particular law school class, without further information about the process of obtaining salary information, the percentage of salaries gathered, or the mix of jobs contributing to the median. If more specific information made salaries look better, schools could (and did) provide that information. A school that placed a lot of graduates in judicial clerkships, government jobs, or public interest positions, for example, often would report separate medians for those categories–along with the higher median for the private sector. Schools had a lot of discretion to choose the most pleasing summary statistic, because no one reported more detailed data.

Given the brevity of reported salary data, together with the potential for these summary figures to mislead, the nonprofit organization Law School Transparency (LST) began urging schools to publish their “full” NALP Reports. “Full” did not mean the entire report, which can be quite lengthy and repetitive. Instead, LST defined the portions of the report that prospective students and others would find helpful. Schools seem to agree with LST’s definition, publishing those portions of the report when they choose to disclose the information.

Today, according to LST’s tracking efforts, at least half of law schools publish their NALP Reports. There may be even more schools that do so; although LST invites ongoing communication with law schools, the schools don’t always choose to update their status for the LST site.

Plus More

The ABA’s standardized employment form, together with greater availability of NALP Reports, has greatly changed the information available to potential law students and other interested parties. But the information doesn’t stop with these somewhat dry forms. Many law schools have built upon these reports to convey other useful information about their graduates’ careers. Although I have not made an exhaustive review, the contemporary information I’ve seen seems to comply with our obligation to provide information that is “complete, accurate and not misleading to a reasonable law school student or applicant.”

In addition to these efforts by individual schools, the ABA has created two websites with consumer information about law schools: the employment site noted above and a second site with other data regularly reported to the ABA. NALP has also increased the amount of data it releases publicly without charge. LST, finally, has become a key source for prospective students who want to sort and compare data drawn from all of these sources. LST has also launched a new series of podcasts that complement the data with a more detailed look at the wide range of lawyers’ work.

Looking Forward

There’s still more, of course, that organizations could do to gather and disseminate data about legal careers. I like Professor Simkovic’s suggestion that the Census Bureau expand the Current Population Survey and American Community Survey to include more detailed information about graduate education. These surveys were developed when graduate education was relatively uncommon; now that post-baccalaureate degrees are more common, it seems critical to have more rigorous data about those degrees.

I also hope that some scholars will want to gather data from bar records and other online sources, as I have done. This method has limits, but so do larger initiatives like After the JD. Because of their scale and expense, those large projects are difficult to maintain–and without regular maintenance, much of their utility falls.

Even with projects like these, however, law schools undoubtedly will continue to collect and publish data about their own employment outcomes. Our institutions compete for students, US News rank, and other types of recognition. Competition begets marketing, and marketing can lead to overstatements. The burden will remain on all of us to maintain professional standards of “complete, accurate and not misleading” information, even as we talk with pride about our schools. Our graduates face similar obligations when they compete for clients. Although all of us chafe occasionally at duties, they are also the mark of our status as professionals.


The February 2015 Bar Exam

April 12th, 2015

States have started to release results of the February 2015 bar exam, and Derek Muller has helpfully compiled the reports to date. Muller also uncovered the national mean scaled score for this February’s MBE, which was just 136.2. That’s a notable drop from last February’s mean of 138.0. It’s also lower than all but one of the means reported during the last decade; Muller has a nice graph of the scores.

The latest drop in MBE scores, unfortunately, was completely predictable–and not primarily because of a change in the test takers. I hope that Jerry Organ will provide further analysis of the latter possibility soon. Meanwhile, the expected drop in the February MBE scores can be summed up in five words: seven subjects instead of six. I don’t know how much the test-takers changed in February, but the test itself did.

MBE Subjects

For reasons I’ve explained in a previous post, the MBE is the central component of the bar exam. In addition to contributing a substantial amount to each test-taker’s score, the MBE is used to scale answers to both essay questions and the Multistate Performance Test (MPT). The scaling process amplifies any drop in MBE scores, leading to substantial drops in pass rates.

In February 2015, the MBE changed. For more than four decades, that test has covered six subjects: Contracts, Torts, Criminal Law and Procedure, Constitutional Law, Property, and Evidence. Starting with the February 2015 exam, the National Conference of Bar Examiners (NCBE) added a seventh subject, Civil Procedure.

Testing examinees’ knowledge of Civil Procedure is not itself problematic; law students study that subject along with the others tested on the exam. In fact, I suspect more students take a course in Civil Procedure than in Criminal Procedure. The difficulty is that it’s harder to memorize rules drawn from seven subjects than to learn the rules for six. For those who like math, that’s an increase of 16.7% in the body of knowledge tested.

Despite occasional claims to the contrary, the MBE requires lots of memorization. It’s not solely a test of memorization; the exam also tests issue spotting, application of law to fact, and other facets of legal reasoning. Test-takers, however, can’t display those reasoning abilities unless they remember the applicable rules: the MBE is a closed-book test.

There is no other context, in school or practice, where we expect lawyers to remember so many legal principles without reference to codes, cases, and other legal materials. Some law school exams are closed-book, but they cover a single subject that has just been studied for a semester. The “closed book” moments in practice are much fewer than many observers assume. I don’t know any trial lawyers who enter the courtroom without a copy of the rules of evidence and a personalized cribsheet reminding them of common objections and responses.

This critique of the bar exam is well known. I repeat it here only to stress the impact of expanding the MBE’s scope. February’s test takers answered the same number of multiple choice questions (190 that counted, plus 10 experimental ones) but they had to remember principles from seven fields of law rather than six.

There’s only so much that the brain can hold in memory–especially when the knowledge is abstract, rather than gained from years of real-client experience. I’ve watched many graduates prepare for the bar over the last decade: they sit in our law library or clinic, poring constantly over flash cards and subject outlines. Since states raised passing scores in the 1990s and early 2000s, examinees have had to memorize many more rules in order to answer enough questions correctly. From my observation, their memory banks were already full to overflowing.

Six to Seven Subjects

What happens, then, when the bar examiners add a seventh subject to an already challenging test? Correct answers will decline, not just in the new subject, but across all subjects. The February 2015 test-takers, I’m sure, studied just as hard as previous examinees. Indeed, they probably studied harder, because they knew that they would have to answer questions drawn from seven bodies of legal knowledge rather than six. But their memories could hold only so much information. Memorized rules of Civil Procedure took the place of some rules of Torts, Contracts, or Property.

Remember that the MBE tests only a fraction of the material that test-takers must learn. It’s not a matter of learning 190 legal principles to answer 190 questions. The universe of testable material is enormous. For Evidence, a subject that I teach, the subject matter outline lists 64 distinct topics. On average, I estimate that each of those topics requires knowledge of three distinct rules to answer questions correctly on the MBE–and that’s my most conservative estimate.

It’s not enough, for example, to know that there’s a hearsay exemption for some prior statements by a witness, and that the exemption allows the fact-finder to use a witness’s out-of-court statements for substantive purposes, rather than merely impeachment. That’s the type of general understanding I would expect a new lawyer to have about Evidence, permitting her to research an issue further if it arose in a case. The MBE, however, requires the test-taker to remember that a grand jury session counts as a “proceeding” for purposes of this exemption (see Q 19). That’s a sub-rule fairly far down the chain. In fact, I confess that I had to check my own book to refresh my recollection.

In any event, if Evidence requires mastering 200 sub-principles of this detail, and the same is true of the other five traditional MBE subjects, that’s 1,200 very specific rules to memorize and keep in memory–all while trying to apply those rules to new fact patterns. Adding a seventh subject upped the ante to 1,400 or more detailed rules. How many things can one test-taker remember without checking a written source? There’s a reason why humanity invented writing, printing, and computers.

But They Already Studied Civil Procedure

Even before February, all jurisdictions (to my knowledge) tested Civil Procedure on their essay exams. So wouldn’t examinees have already studied those Civ Pro principles? No, not in the same manner. Detailed, comprehensive memorization is more necessary for the MBE than for traditional essays.

An essay allows room to display issue spotting and legal reasoning, even if you get one of the sub-rules wrong. In the Evidence example given above, an examinee could display considerable knowledge by identifying the issue, noting the relevant hearsay exemption, and explaining the impact of admissibility (substantive use rather than simply impeachment). If the examinee didn’t remember the correct status of grand jury proceedings under this particular rule, she would lose some points. She wouldn’t, however, get the whole question wrong–as she would on a multiple-choice question.

Adding a new subject to the MBE hit test-takers where they were already hurting: the need to memorize a large number of rules and sub-rules. By expanding the universe of rules to be memorized, NCBE made the exam considerably harder.

Looking Ahead

In upcoming posts, I will explain why NCBE’s equating/scaling process couldn’t account for the increased difficulty of this exam. Indeed, equating and scaling may have made the impact worse. I’ll also explore what this means for the ExamSoft discussion and what (if anything) legal educators might do about the increased difficulty of the MBE. To start the discussion, however, it’s essential to recognize that enhanced level of difficulty.


Clueless About Salary Stats

April 11th, 2015

Students and practitioners sometimes criticize law professors for knowing too little about the real world. Often, those criticisms are overstated. But then a professor like Michael Simkovic says something so clueless that you start to wonder if the critics are right.

Salaries and Response Rates

In a recent post, Simkovic tries to defend a practice that few other legal educators have defended: reporting entry-level salaries gathered through the annual NALP process without disclosing response rates to the salary question. Echoing a previous post, Simkovic claims that this practice was “an uncontroversial and nearly universal data reporting practice, regularly used by the United States Government.”

Simkovic doesn’t seem to understand how law schools and NALP actually collect salary information; the process is nothing like the government surveys he describes. Because of the idiosyncrasies of the NALP process, the response rate has a particular importance.

Here are the two keys to the NALP process: (1) law schools are allowed–even encouraged–to supplement survey responses with information obtained from third parties; and (2) NALP itself is one of those third parties. Each year NALP publishes an online directory with copious salary information about the largest, best-paying law firms. Smaller firms rarely submit information to NALP, so they are almost entirely absent from the Directory.

As a result, as NALP readily acknowledges, “salaries for most jobs in large firms are reported” by law schools, while “fewer than half the salaries for jobs in small law firms are reported.” That’s “reported” as in “schools have independent information about large-firm salaries.”

For Example

To see an example of how this works in practice, take a look at the most recent (2013) salary report for Seton Hall Law School, where Simkovic teaches. Ten out of the eleven graduates who obtained jobs in firms with 500+ lawyers reported their salaries. But of the 34 graduates who took jobs in the smallest firms (those with 2-10 lawyers), just nine disclosed a salary. In 2010, 2011, and 2012, no graduates in the latter category reported a salary.

If this were a government survey, the results would be puzzling. The graduates working at the large law firms are among those “high-income individuals” that Simkovic tells us “often value privacy and are reluctant to share details about their finances.” Why are they so eager to disclose their salaries, when graduates working at smaller (and lower-paying) firms are not? And why do the graduates at every other law school act the same way? The graduates of Chicago’s Class of 2013 seem to have no sense of privacy: 149 out of 153 graduates working in the private sector happily provided their salaries, most of which were $160,000.

The answer, of course, is the NALP Directory. Law schools don’t need large-firm associates to report their salaries; the schools already know those figures. The current Directory offers salary information for almost 800 offices associated with firms of 200+ lawyers. In contrast, the Directory includes information about just 14 law firms employing 25 or fewer attorneys. That’s 14 nationwide–not 14 in New Jersey.

For the latter salaries, law schools must rely upon graduate reports, which seem difficult to elicit. When grads do report these salaries, they are much lower than the BigLaw ones. At Seton Hall, the nine small-firm salaries that graduates reported yielded a mean of just $51,183.

What Was the Problem?

I’m able to give detailed data in the above example because Seton Hall reports all of that information. It does so, moreover, for years going back to 2010. Other schools have not always been so candid. In the old days, some law schools merged the large-firm salaries provided by NALP with a handful of small-firm salaries collected directly from graduates. The school would then report a median or mean “private practice salary” without further information.
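Here is a hypothetical illustration of why that merged figure misleads. All of the salary numbers are invented; only the pattern, complete large-firm data alongside a handful of small-firm reports, mirrors the practice described above.

```python
# Hypothetical illustration of a merged "private practice" median. All salary
# figures are invented; the point is the mix of known and unknown data.
from statistics import median

large_firm_known = [160_000] * 20            # all 20 large-firm salaries known (NALP Directory)
small_firm_known = [48_000, 50_000, 55_000]  # only 3 of 30 small-firm salaries reported

reported = median(large_firm_known + small_firm_known)
print(f"Reported private-practice median: ${reported:,.0f}")   # $160,000

# If the 27 missing small-firm salaries resemble the reported ones, the median
# for all 50 private-practice graduates is far lower.
plausible_full_class = large_firm_known + [50_000] * 30
print(f"Plausible class-wide median: ${median(plausible_full_class):,.0f}")  # $50,000
```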

Was this “an uncontroversial and nearly universal data reporting practice, regularly used by the United States Government”? Clearly not–unless the government keeps a list of salaries from high-paying employers that it uses to supplement survey responses. That would be a nifty way to inflate wage reports, but no political party seems to have thought of this just yet.

Law schools, in other words, were not just publishing salary information without disclosing response rates. They were disclosing information that they knew was biased: they had supplemented the survey information with data drawn from the largest firms. The organization supervising the data collection process acknowledged that the salary statistics were badly skewed; so did any dean I talked with during that period.

The criticism of law schools for “failing to report response rates” became a polite shorthand for describing the way in which law schools produced misleading salary averages. Perhaps the critics should have been less polite. We reasoned, however, that if law schools at least reported the “response” rates (which, of course, included “responses” provided by the NALP data), graduates would see that reported salaries clustered in the largest firms. The information would also allow other organizations, like Law School Transparency, to explain the process further to applicants.

This approach gave law schools the greatest leeway to continue reporting salary data and, frankly, to package it in ways that may still overstate outcomes. But let’s not pretend that law schools have been operating social science surveys with an unbiased method of data collection. That wasn’t true in the past, and it’s not true now.


Law School Statistics

April 8th, 2015

Earlier this week, I noted that even smart academics are misled by the manner in which law schools traditionally reported employment statistics. Steven Solomon, a very smart professor at Berkeley’s law school, was misled by the “nesting” of statistics on NALP’s employment report for another law school.

Now Michael Simkovic, another smart law professor, has proved the point again. Simkovic rather indignantly complains that Kyle McEntee “suggests incorrectly that The New York Times reported Georgetown’s median private sector salary without providing information on what percentage of the class or of those employed were working in the private sector.” But it is Simkovic who is incorrect–and, once again, it seems to be because he was misled by the manner in which law schools report some of their employment and salary data.

Response Rates

What did McEntee say that got Simkovic so upset? McEntee said that a NY Times column (the one authored by Solomon) gave a median salary for Georgetown’s private sector graduates without telling readers “the response rate.” And that’s absolutely right. The contested figures are here on page two. You’ll see that 362 of Georgetown’s 2013 graduates took jobs in the private sector. That constituted 60.3% of the employed graduates. You’ll also see a median salary of $160,000. All of that is what Solomon noted in his Times column (except that he confused the percentage of employed graduates with the percentage of the graduating class).

The fact that Solomon omitted, and that McEntee properly highlighted, is the response rate: the number of graduates who actually reported those salaries. That number appears clearly on the Georgetown report, in the same line as the other information: 362 graduates obtained these private sector jobs, but only 293 of them disclosed salaries for those jobs. Salary information was unavailable for about one-fifth of the graduates holding these positions.

Why does this matter? If you’ve paid any attention to the employment of law school graduates, the answer is obvious. NALP acknowledged years ago that reported salaries suffer from response bias. To see an illustration of this, take a look at the same Georgetown report we’ve been examining. On page 4, you’ll see that salaries were known for 207 of the 211 graduates (98.1%) working in the largest law firms. For graduates working in the smallest category of firms, just 7 out of 27 salaries (25.9%) were available. For public interest jobs that required bar admission, just 15 out of 88 salaries (17.0%) were known.

Simkovic may think it’s ok for Solomon to discuss medians in his Times column without disclosing the response rate. I disagree–and I think a Times reporter would as well. Respected newspapers are more careful about things like response rates. But whether or not you agree with Solomon’s writing style, McEntee is clearly right that he omitted the response rate on the data he discussed.

So Simkovic, like Solomon, seems to be confused by the manner in which law schools report information on NALP forms. 60% of the employed graduates held private sector jobs, but that’s not the response rate for salaries. And there’s a pretty strong consensus that the salary responses on the NALP questionnaire are biased–even NALP thinks so.

Misleading By Omission

The ABA’s standard employment report has brought more clarity to reporting entry-level employment outcomes. Solomon and Simkovic were not confused by data appearing on that form, but by statistics contained in NALP’s more outmoded form. Once again, their errors confirm the problems in old reporting practices.

More worrisome than this confusion, Solomon and Simkovic both adopt a strategy that many law schools followed before the ABA intervened: they omit information that a reader (or potential student) would find important. The most mind-boggling fact about Georgetown’s 2013 employment statistics is that the school itself hired 83 of its graduates–12.9% of the class. For 80 of those graduates, Georgetown provided a full year of full-time employment.

Isn’t that something you would want to know in evaluating whether “[a]t the top law schools, things are returning to the years before the financial crisis”? That’s the lead-in to Solomon’s upbeat description of Georgetown’s employment statistics–the description that then neglects to mention how many of the graduates’ jobs were funded by their own law school.

I’m showing my age here, but back in the twentieth century, T14 schools didn’t fund jobs for one out of every eight graduates. Nor was that type of funding common in those hallowed years more immediately preceding the financial crisis.

I’ll readily acknowledge that Georgetown funds more graduate jobs than most other law schools, but the practice exists at many top schools. It’s Solomon who chose Georgetown as his example. Why, then, are he and Simkovic so silent about these school-funded jobs?

Final Thoughts

I ordinarily wouldn’t devote an entire post to a law professor’s errors in reading an employment table. We all make too many errors for that to be newsworthy. But Simkovic is so convinced that law schools have never misled anyone with their employment statistics–and here we have two examples of smart, knowledgeable people misled by those same statistics.

Speaking of which, Simkovic defends Solomon’s error by suggesting that he “simply rounded up” from 56% to 60% because four percent is a “small enough difference.” Rounded up? Ask any law school dean whether a four-point difference in an employment rate matters. Or check back in some recent NALP reports. The percentage of law school graduates obtaining nine-month jobs in law firms fell from 50.9% in 2010 to 45.9% in 2011. Maybe we could have avoided this whole law school crisis thing if we’d just “rounded up” the 2011 number to 50%.


Compared to What?

April 7th, 2015

Some legal educators have a New Yorker’s view of the world. Like the parochial Manhattanite in Saul Steinberg’s famous illustration, these educators don’t see much beyond their own fiefdom. They see law graduates out there in the world, practicing their profession or working in related fields. And there are doctors, who (regrettably) make more money than lawyers do. But really, what else is there? What do people do if they don’t go to law school?

Michael Simkovic takes this position in a recent post, declaring (in bold) that: “The question everyone who decides not to go to law school . . . must answer is–what else out there is better?” In a footnote, Simkovic concedes that “[a]nother graduate degree might be better than law school for a particular individual,” but he clearly doesn’t think much of the idea.

People, of course, work in hundreds of occupations other than law. Some of them even enjoy their work. Simkovic’s concern lies primarily with the financial return on college and graduate degrees. Even here, though, the contemporary options are much broader than many legal educators realize.

Time Was: The 1990s

Financially, the late twentieth century was a good time to be a lawyer. When the Bureau of Labor Statistics (BLS) published its first Occupational Employment Statistics (OES) in 1997, the four occupations with the highest salaries were medicine, dentistry, podiatry, and law. Those four occupations topped the salary list (in that order) whether sorted by mean or median salary. [Note that OES collects data only on salaries; it does not include self-employed individuals like solo practitioners or partners–whether in law or medicine. For more on that point, see the end of this post.]

Law was a pretty good deal in those days. The graduate program was just three years, rather than four. There were no college prerequisites and no post-graduate internships. Knowledge of math was optional, and exposure to bodily fluids minimal. Imagine earning a median salary of $109,987 (in 2014 dollars) without having to examine feet! Then again, a willingness to spend four years of graduate school studying feet, along with a lifetime of treating them, would have netted you a 28% increase in median salary.

But let’s not dally any longer in the twentieth century.

Time Is: 2014

BLS just released its latest survey of occupational wages, and the results show how much the economy has changed. Law practice has slipped to twenty-second place in a listing of occupations by mean salary, and twenty-sixth place when ranked by median. One subset of lawyers, judges and magistrates, holds twenty-fifth place on the list of median salaries, but practicing lawyers have slipped a notch lower.

About half the slippage in law’s salary prominence stems from the splintering of medical occupations, both in the real world and as measured by BLS. We no longer visit “doctors”; we see pediatricians, general practitioners, internists, obstetricians, anesthesiologists, surgeons, and psychiatrists–often in that order. These medical specialists, along with the dentists and podiatrists, all enjoy a higher median salary than lawyers.

There are two other health-related professions, meanwhile, that have moved ahead of lawyers in wages: nurse anesthetists and pharmacists. Both of these fields require substantial graduate education: at least two years for nurse anesthetists and two to four years for pharmacists. But the training pays off with a median salary of $153,780 for nurse anesthetists and $120,950 for pharmacists.

Today’s college graduates, furthermore, don’t have to deal with teeth, airways, or medications to earn more than lawyers do. The latest BLS survey includes nine other occupations that top lawyers’ median salary: financial managers, airline pilots, natural sciences managers, air traffic controllers, marketing managers, computer and information systems managers, petroleum engineers, architectural and engineering managers, and chief executives.

How much do salaried lawyers earn in their more humble berth on the OES list? They collected a median salary of $114,970 in 2014. That’s good, but it’s only 4.5% higher (in inflation-adjusted dollars) than the median salary in 1997. Pharmacists enjoyed a whopping 28% increase in median real wages to reach $120,950 in 2014. And the average nurse anesthetist earned a full third more than the average lawyer that year.
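For readers who want to check the arithmetic, those comparisons follow directly from the medians already cited. A quick sketch (all figures in 2014 dollars):

```python
# Quick arithmetic behind the comparisons above, using only the medians cited
# in this post (all already expressed in 2014 dollars).
lawyer_1997 = 109_987   # 1997 median for salaried lawyers, in 2014 dollars
lawyer_2014 = 114_970
pharmacist_2014 = 120_950
nurse_anesthetist_2014 = 153_780

print(f"Lawyers' real median growth, 1997-2014: {lawyer_2014 / lawyer_1997 - 1:.1%}")              # 4.5%
print(f"Nurse anesthetists vs. lawyers in 2014: +{nurse_anesthetist_2014 / lawyer_2014 - 1:.1%}")  # about a third more
print(f"Pharmacists vs. lawyers in 2014: +{pharmacist_2014 / lawyer_2014 - 1:.1%}")                # about 5% more
```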

If you’re a college student willing to set your financial sights just a bit lower than the median salary in law practice, there are lots of other options. Here are some of the occupations with a 2014 median salary falling between $100,000 and $114,970: sales manager, physicist, computer hardware engineer, computer and information research scientist, compensation and benefits manager, purchasing manager, astronomer, aerospace engineer, political scientist, mathematician, software developer for systems software, human resources manager, training and development manager, public relations and fundraising manager, optometrist, nuclear engineer, and prosthodontist (those are the folks who will soon be fitting baby boomers for their false teeth).

Law graduates could apply their education to some of these jobs; with a few more years of graduate education, a savvy lawyer could offer the aging boomers a package deal on a will and a new pair of choppers. But the most common themes in these salary-leading occupations do not revolve around law. Instead, the themes are math, science, and management–none of which we teach very well in law school.

Twenty-first Century Humility

Lawyers will not disappear. Even Richard Susskind, who asked about “The End of Lawyers?” in a provocative book title, doesn’t think lawyers are done for. We still need lawyers to fill both traditional roles and new ones. Lawyers, however, will not have the same economic and social dominance that they enjoyed in the late twentieth century.

Some lawyers will still make a lot of money. As the American Lawyer proclaimed last year, the “super rich” are getting richer. But the prospects for other lawyers are less certain, and the appeal of competing fields has increased.

If law schools want to understand their decline in talented applicants, they need to look more closely at the competition. What do today’s high school students and middle schoolers think about law? Those students will choose their majors soon after arriving at college. Once they choose engineering, computer science, business, or health-related courses, a legal career will seem even less appealing. If we want potential students to find law attractive, we need to know more about their alternatives and preferences.

We also need to be realistic about how many students ultimately will–or should–pursue a law degree. As citizens of a healthy economy, we need doctors, nurse anesthetists, pharmacists, managers, and software developers. We even need the odd astronomer or two. Law is just one of the many occupations that make a society thrive. The twenty-first century is a time of interdependence that should bring a sense of humility.

Notes

Here are some key points about the method behind the OES survey. For more information, see this FAQ page, which includes the information I summarize here:

1. OES obtains wage data directly from establishments. This method eliminates bias that may occur when individuals report their own wages. The survey, however, includes only wage data for salaried employees. Solo practitioners (in any field) are excluded, as are individuals who draw their income entirely from partnerships or other forms of profit sharing.

2. “Wages” include production bonuses and tips, but not end-of-year bonuses, profit-sharing, or benefits.

3. Although BLS publishes OES data every year, the data are gathered on a rolling basis. Income for “1997” or “2014” reflects data gathered over three years, including the reference year. BLS adjusts wage figures for the two older years, using the Employment Cost Index, so the reported wages appear in then-current dollars. The three-year collection period, however, can mask sudden shifts in employment trends.

4. BLS cautions against using OES data to compare changes in employment data over time, unless the user offers necessary context. In particular, it is important for readers to understand that short-term comparisons are difficult (because of the point in the previous paragraph) and that occupational categories change frequently. For those reasons, I have limited my cross-time comparisons and have noted the splintering of occupational categories. The limited comparison offered here, however, seems helpful in understanding the relationship of law practice to other high-paying occupations.

5. For the data used in this post, follow this link and download the spreadsheets. The HTML versions are prettier, but they do not include all of the data.


ExamSoft and NCBE

April 6th, 2015

I recently found a letter that Erica Moeser, President of the National Conference of Bar Examiners (NCBE) wrote to law school deans in mid-December. The letter responds to a formal request, signed by 79 law school deans, that NCBE “facilitate a thorough investigation of the administration and scoring of the July 2014 bar exam.” That exam suffered from the notorious ExamSoft debacle.

Moeser’s letter makes an interesting distinction. She assures the deans that NCBE has “reviewed and re-reviewed” its scoring, equating, and scaling of the July 2014 MBE. Those reviews, Moeser attests, revealed no flaw in NCBE’s process. She then adds that, to the extent the deans are concerned about “administration” of the exam, they should “note that NCBE does not administer the examination; jurisdictions do.”

Moeser doesn’t mention ExamSoft by name, but her message seems clear: If ExamSoft’s massive failure affected examinees’ performance, that’s not our problem. We take the bubble sheets as they come to us, grade them, equate the scores, scale those scores, and return the numbers to the states. It’s all the same to NCBE if examinees miss points because they failed to study, law schools taught them poorly, or they were groggy and stressed from struggling to upload their essay exams. We only score exams, we don’t administer them.

But is the line between administration and scoring so clear?

The Purpose of Equating

In an earlier post, I described the process of equating and scaling that NCBE uses to produce final MBE scores. The elaborate transformation of raw scores has one purpose: “to ensure consistency and fairness across the different MBE forms given on different test dates.”

NCBE thinks of this consistency with respect to its own test questions; it wants to ensure that some test-takers aren’t burdened with an overly difficult set of questions–or conversely, that other examinees don’t benefit from unduly easy questions. But substantial changes in exam conditions, like the ExamSoft crash, can also make an exam more difficult. If they do, NCBE’s equating and scaling process actually amplifies that unfairness.

To remain faithful to its mission, it seems that NCBE should at least explore the possible effects of major blunders in exam administration. This is especially true when a problem affects multiple jurisdictions, rather than a single state. If an incident affects a single jurisdiction, the examining authorities in that state can decide whether to adjust scores for that exam. When the problem is more diffuse, as with the ExamSoft failure, individual states may not have the information necessary to assess the extent of the impact. That’s an even greater concern when nationwide equating will spread the problem to states that did not even contract with ExamSoft.

What Should NCBE Have Done?

NCBE did not cause ExamSoft’s upload problems, but it almost certainly knew about them. Experts in exam scoring also understand that defects in exam administration can interfere with performance. With knowledge of the ExamSoft problem, NCBE had the ability to examine raw scores for the extent of the ExamSoft effect. Exploration would have been most effective with cooperation from ExamSoft itself, revealing which states suffered major upload problems and which ones experienced more minor interference. But even without that information, NCBE could have explored the raw scores for indications of whether test takers were “less able” in ExamSoft states.

If NCBE had found a problem, there would have been time to consult with bar examiners about possible solutions. At the very least, NCBE probably should have adjusted its scaling to reflect the fact that some of the decrease in raw scores stemmed from the software crash rather than from other changes in test-taker ability. With enough data, NCBE might have been able to quantify those effects fairly precisely.

Maybe NCBE did, in fact, do those things. Its public pronouncements, however, have not suggested any such process. On the contrary, Moeser seems to studiously avoid mentioning ExamSoft. This reveals an even deeper problem: we have a high-stakes exam for which responsibility is badly fragmented.

Who Do You Call?

Imagine yourself as a test-taker on July 29, 2014. You’ve been trying for several hours to upload your essay exam, without success. You’ve tried calling ExamSoft’s customer service line, but can’t get through. You’re worried that you’ll fail the exam if you don’t upload the essays on time, and you’re also worried that you won’t be sufficiently rested for the next day’s MBE. Who do you call?

You can’t call the state bar examiners; they don’t have an after-hours call line. If they did, they probably would reassure you on the first question, telling you that they would extend the deadline for submitting essay answers. (This is, in fact, what many affected states did.) But they wouldn’t have much to offer on the second question, about getting back on track for the next day’s MBE. Some state examiners don’t fully understand NCBE’s equating and scaling process; those examiners might even erroneously tell you “not to worry because everyone is in the same boat.”

NCBE wouldn’t be any more help. They, as Moeser pointed out, don’t actually administer exams; they just create and score them.

Many distressed examinees called law school staff members who had helped them prepare for the bar. Those staff members, in turn, called their deans–who contacted NCBE and state bar examiners. As Moeser’s letters indicate, however, bar examiners view deans with some suspicion. The deans, they believe, are too quick to advocate for their graduates and too worried about their own bar pass rates.

With NCBE and the state bar examiners each refusing to respond, or shifting responsibility to the other, we reached a stand-off: no one was willing to take responsibility for flaws in a very high-stakes test administered to more than 50,000 examinees. That is a failure as great as the ExamSoft crash itself.


The Ethics of Academia

April 2nd, 2015

What obligations, if any, do academic institutions owe potential students? When soliciting these “customers,” how candid should schools be in discussing graduation rates, scholarship conditions, or the employment outcomes of recent graduates? Do the obligations differ for a professional school that will teach students about the ethics of communicating with their own future customers?

New Marketing/New Concerns

Once upon a time, we marketed law schools with a printed brochure or two. That changed with the advent of the new century and the internet. Now marketing is pervasive: web pages, emails, blog posts, and forums.

With increased marketing, some educators began to worry about how we presented ourselves to students. As a sometime social scientist, I was particularly concerned about the way in which some law schools reported median salaries without disclosing the number of graduates supplying that information. A school could report that it had employment information from 99% of its graduates, that 60% were in private practice, and that the median salary for those private practitioners was $120,000. Nowhere did the reader learn that only 45% of the graduates reported salary information. [This is a hypothetical example; it does not represent any particular law school.]

I also noticed that, although law schools know only the average “amount borrowed” by their students, schools and the media began to represent that figure as the average “debt owed.” Interest, unfortunately, accumulates while a student is in law school, so the “amount borrowed” significantly understates the “debt owed” when loans fall due.

Other educators worried about a lack of candor when schools offered scholarships to students. A school might offer an attractive three-year scholarship to an applicant, with the seemingly easy condition that the student maintain a B average. The school knew that it tightly controlled curves in first-year courses, so that a predictable number of awardees would fail that condition, but the applicants didn’t understand that. This isn’t just a matter of optimism bias; undergraduates literally do not understand law school curves. A few years ago, one law school hopeful said to me: “What’s the big deal about grade competition in law school? It’s not like there’s a limit on the number of A’s or anything.” When I explained the facts of law school life, she went off to pursue a Ph.D. in botany.

And then there was the matter of nested statistics. Schools would report the number of employed graduates, then identify percentages of those graduates working in particular job categories. Categories spawned sub-categories, and readers began to lose sight of the denominator. Even respected scholars like Steven Solomon get befuddled by these statistics. Yesterday, Solomon misinterpreted Georgetown’s 2013 employment statistics due to this type of nesting: he mistook 60% of employed graduates for 60% of the graduating class. (Georgetown, to its credit, provides clearer statistics on a different page than the one Solomon used.)

Educators, of course, weren’t the only ones who noticed these problems. We were slow–much too slow–to address our lapses, and we suffered legitimate criticism from the media and organizations like Law School Transparency. Indeed, the criticisms continue, as professors persist in making misleading statements.

For me, these are ethical issues. I believe that educators do have a special obligation to prospective students; they are not just “customers,” but people who depend upon us for instruction and wise counsel. At law schools, prospective students are also future colleagues in the legal profession; even while we teach, we are an integral part of the profession.

With that in mind, I communicate with prospective students as I would talk to a colleague asking about an entry-level teaching position or a potential move to another school. I tell students what I would want to know if I were in their position. And, consistent with my role as a teacher and scholar, I try to present the information in a manner that is straightforward and easy to understand. For the last few years, most law schools have followed the same golden rules–albeit with considerable prodding from Law School Transparency, the ABA, and the media.

Revisionist History

Now that law schools have become more careful in their communications with potential students, revisionist history has appeared. Ignoring all of the concerns discussed above (although they appear in sources he cites), Michael Simkovic concludes that “The moral critique against law schools comes down to this: The law schools used the same standard method of reporting data as the U.S. Government.”

Huh? When the government publishes salaries in SIPP, a primary source for Simkovic’s scholarship, I’m pretty sure they disclose how many respondents refused to provide that information. Reports on the national debt, likewise, include interest accrued rather than just the original amounts borrowed–although I will concede that there’s plenty of monkey business in that reporting. I’ll also concede that welfare recipients probably don’t fully understand the conditions in the contracts they sign.

Simkovic, of course, doesn’t mean to set the government up as a model on these latter points. Instead, he ignores those issues and pretends that the ethical critique of law schools focused on just one point: calculation of the overall employment rate. On this, Simkovic has good news for law schools: they can ethically count a graduate as employed as long as the graduate was paid for a single hour of work during the reporting week–because that’s the way the government does it.

I don’t think any law school has ever been quite that audacious, and the ABA certainly would not approve. The implications of Simkovic’s argument, however, illuminate a key point: law schools communicate for a different purpose, and to a different audience, than the Bureau of Labor Statistics. The primary consumers of our employment statistics are current and potential students. We draft our employment statistics for that audience, and the information should be tailored to them.

As for scholarship, I will acknowledge that the U.S. government owns the word “unemployment.” I used a non-standard definition of that concept in a recent paper, and clearly designated it as such. But this seems to distract some readers, so I’ll refer to those graduates as “not working.” I suspect it’s all the same to them.

What Use Is the BLS?

March 28th, 2015 / By

What is the Bureau of Labor Statistics (BLS), and what can it do for you? The BLS is an independent statistical agency that measures “labor market activity, working conditions, and price changes in the economy.” You’ve sampled BLS wares if you’ve relied upon the Consumer Price Index, unemployment rates, or average wages.

One program within BLS tries to project employment growth for hundreds of different occupations. The Bureau issues these forecasts every two years, with each projection spanning a decade. The most recent projections, released in December 2013, attempt to forecast occupational growth between 2012 and 2022.

Why does BLS spend your tax dollars trying to do this? Most parents can’t predict what their teenagers will do next week. How does the BLS think it can predict the behavior of an entire economy, including growth rates in so many different occupations?

The truth is that it can’t, at least not with the level of accuracy that some users would like. There are just too many variables, not to mention acts of god and war. The latest evaluation of BLS’s occupational projections found that, when BLS projected occupational growth rates between 1996 and 2006, it failed to foresee the following:

* Immigration would be higher than the Census Bureau predicted
* Women’s labor force participation would decline
* Terrorists would hijack 4 jets, level the WTC, and damage the Pentagon
* The United States would go to war with both Afghanistan and Iraq
* A housing bubble would double home prices over the decade
* Internet-based services would cut the number of travel agents by a third

It was a tumultuous decade, but so are most decades. Given the twists and turns of human history, which affect the type of work that humans do, why does BLS even bother with occupational projections?

Better Than the Alternatives

Like democracy, BLS’s projections seem to be better than the alternatives. In particular, these forecasts are better than ones that rely solely on historical trends. In 2010, the Bureau tested its model against four different “naive models” that drew solely on historical data. A common naive model (and one that the Bureau tested) predicts each occupation’s growth rate based on that occupation’s rate of growth during the previous 10 years. Another variation, also tested by the Bureau, uses the most recent five years to project future growth.

On three out of four measures, the Bureau’s predictions outperformed all of the naive models. Predicting the future is difficult, especially when that future includes human actions. The Bureau’s experience, however, suggests that past performance alone is not the best guide to occupational growth; adding other ingredients to the forecast improves its accuracy.
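For readers curious what those naive models look like, here is a minimal sketch of the ten-year version, using invented employment figures rather than BLS data:

```python
# Invented employment levels for two occupations; not BLS data. The naive
# ten-year model simply applies each occupation's growth rate over the prior
# decade to its most recent employment level.

employment_1996 = {"paralegals": 100_000, "travel agents": 120_000}
employment_2006 = {"paralegals": 125_000, "travel agents": 80_000}

def naive_projection(earlier, later):
    """Project the next decade by repeating each occupation's last-decade growth."""
    return {occ: later[occ] * (later[occ] / earlier[occ]) for occ in earlier}

print(naive_projection(employment_1996, employment_2006))
# {'paralegals': 156250.0, 'travel agents': 53333.3...}
```

The Bureau’s own model folds in additional ingredients, such as projected industry output and staffing patterns, which presumably explains why it usually outperforms this kind of straight extrapolation.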

Who Needs It?

Even if BLS predictions are better than naive models, who needs these predictions? Why engage in such an imprecise exercise? BLS began projecting occupational growth after World War II in order to help returning veterans identify promising career paths. The program persisted as a way to serve “individuals seeking career guidance,” as well as “policymakers, community planners, and educational authorities who need information for long-term policy planning purposes.”

If BLS wants students to use its occupational projections for “career guidance,” then why does it warn against using the projections to predict labor shortages or surpluses? Don’t students examine these projections precisely to determine which occupations are growing and which ones are declining? How is occupational “growth” different from a labor “shortage” in that occupation?

The two concepts are related, yet different. Remember that BLS projects (however imperfectly) the number of people who will actually fill an occupation a decade later. The Bureau doesn’t estimate how many people will want to work in that field or how many will prepare to do so; that’s not its task. The Bureau also assumes that the labor market will “clear.” In other words, if demand falls for workers in a particular field, those workers will go elsewhere. They won’t simply hang around the edges of the occupation, constituting a surplus labor supply.

This doesn’t mean, however, that the number of workers preparing to enter an occupation is irrelevant to predicting job and salary prospects for that occupation. If the pipeline of aspiring workers is easy to quantify, and if the occupation itself is tightly defined, then comparing the worker supply to job projections can yield useful information. If labor supply greatly exceeds likely job openings, then one of three things is likely to happen: (1) some of the workers will take other jobs; (2) wages in the occupation will decline; or (3) both.

What About Law?

The worker pipeline is relatively easy to specify in law. Almost no one becomes a lawyer without obtaining a JD, and there is evidence (p. 72) that most law graduates want to practice law at least for a while. The occupation itself is also well defined. Law graduates can apply their education to a range of law-related jobs, but there is widespread consensus on which jobs are “lawyering” jobs that require bar admission. These are the same jobs that graduates, on the whole, prefer.

Under those conditions, it is useful to compare the number of law school graduates to projected job openings for lawyers. That is what I did several years ago. At that time, the number of students progressing through the law school pipeline greatly exceeded the number of lawyering positions that BLS projected. A substantial number of those graduates, I predicted, would have to find work outside of law practice. Wages for entry-level lawyers might also fall.

That is, in fact, what happened. My recent study of new lawyers admitted to the Ohio bar confirms that, four and a half years after graduation, one quarter of licensed lawyers were working in jobs that did not require a law degree. After accounting for graduates who didn’t take or pass the bar exam, it appears that a full third of recent law school graduates are not practicing as lawyers.

The good news is that my study suggests there may be more job openings for lawyers than BLS projected. Not enough to satisfy all of the graduates who want those jobs, but more than BLS estimated.

Meanwhile, there is also evidence that wages have declined for entry-level lawyers. The median starting salary reported to NALP for the Class of 2008 was $72,000; five years later, the median reported salary for the Class of 2013 was $62,467. The comparison looks even worse after adjusting for inflation: if the 2008 median had simply kept pace with inflation, it would have reached almost $78,000 by 2013. The real median wage for new lawyers fell by 19.8% over those five years.
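Here is the arithmetic behind that 19.8% figure, using approximate annual-average CPI-U values; more precise index numbers change the result only slightly:

```python
# NALP medians from the text; CPI-U annual averages are approximate.
median_2008 = 72_000
median_2013 = 62_467

cpi_2008 = 215.3
cpi_2013 = 233.0

median_2008_in_2013_dollars = median_2008 * cpi_2013 / cpi_2008
real_decline = 1 - median_2013 / median_2008_in_2013_dollars

print(f"2008 median in 2013 dollars: ${median_2008_in_2013_dollars:,.0f}")  # ~$77,900
print(f"Real decline over five years: {real_decline:.1%}")                  # ~19.8%
```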

Will law graduates who were unable to find a lawyering job find satisfaction in other jobs? They might; probably some will and some won’t. Will they prosper financially from their law degree, regardless of occupation? They might, if historical patterns hold. To the extent their wage losses represent effects of the recession, will they make up those differences later in their careers? Again, they might if historical patterns hold. But for students investing more than $100,000 in a legal education, it’s worth considering as much information as possible. That includes BLS projections for their desired occupation.

These projections are also useful–when combined with other available information–for legal educators to consider. The career prospects of our graduates should inform the educational programs we design, as well as the information we offer potential applicants. BLS projections represent only a small piece of this puzzle, but they offer one perspective on how the labor market for lawyers is performing.

What About Those New Projections?

The BLS recently changed the way in which it measures occupational “separations.” That’s an estimate of the number of people who will leave a particular occupation. This measure, in turn, affects the projection of job openings; when a worker leaves an occupation, that departure often creates a job opening. Under this new method, BLS will project more lawyering jobs than it did in the past. That sounds like good news for aspiring lawyers, and it is–in part. The change also reveals some unsettling trends in our profession, which I’ll explore in a future post.

ExamSoft: By the Numbers

March 26th, 2015 / By

Earlier this week I explained why the ExamSoft fiasco could have lowered bar passage rates in most states, including some states that did not use the software. But did it happen that way? Only ExamSoft and the National Conference of Bar Examiners have the data that will tell us for sure. But here’s a strong piece of supporting evidence:

Among states that did not experience the ExamSoft crisis, the average bar passage rate for first-time takers from ABA-accredited law schools fell from 81% in July 2013 to 78% in July 2014. That’s a drop of 3 percentage points.

Among the states that were exposed to the ExamSoft problems, the average bar passage rate for the same group fell from 83% in July 2013 to 78% in July 2014. That’s a 5-point drop, 2 percentage points more than the drop in the “unaffected” states.

Derek Muller did the important work of distinguishing these two groups of states. Like him, I count a state as an “ExamSoft” state if it used that company’s software and its exam takers wrote their essays on July 29 (the day of the upload crisis). There are 40 states in that group. The unaffected states are the other 10 plus the District of Columbia; these jurisdictions either did not contract with ExamSoft or their examinees wrote essays on a different day.

The comparison between these two groups is powerful. What, other than the ExamSoft debacle, could account for the difference between the two? A 2-point difference is not one that occurs by chance in a population this size. I checked, and the probability of this happening by chance (that is, by separating the states randomly into two groups of this size) is so small that it registered as 0.00 on my probability calculator.
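One conventional way to run that check is a permutation test: randomly re-split the 51 jurisdictions into groups of 40 and 11 and ask how often the gap in average pass-rate change is at least as large as the observed 2 points. A sketch follows; the per-jurisdiction values are placeholders, not the actual state-by-state changes, which of course vary around their group averages.

```python
import random

# Placeholder values: 40 "ExamSoft" jurisdictions and 11 unaffected ones.
# Real data would show state-by-state variation around these averages.
pass_rate_changes = [-5.0] * 40 + [-3.0] * 11
observed_gap = 2.0   # 5-point average drop versus 3-point average drop

def mean(values):
    return sum(values) / len(values)

def permutation_p_value(changes, n_affected=40, trials=100_000):
    """Share of random 40/11 splits producing a gap at least as large as observed."""
    extreme = 0
    for _ in range(trials):
        shuffled = random.sample(changes, len(changes))
        affected, unaffected = shuffled[:n_affected], shuffled[n_affected:]
        if mean(unaffected) - mean(affected) >= observed_gap:
            extreme += 1
    return extreme / trials

print(permutation_p_value(pass_rate_changes))   # effectively 0 for data like these
```

With placeholders this tidy, the answer is trivially near zero; the point is only the logic of the random re-split.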

It’s also hard to imagine another factor that would explain the difference. What do Arizona, DC, Kentucky, Louisiana, Maine, Massachusetts, Nebraska, New Jersey, Virginia, Wisconsin, and Wyoming have in common other than that their test takers were not directly affected by ExamSoft’s malfunction? Large states and small states; Eastern states and Western states; red states and blue states.

Of course, as I explained in my previous post, examinees in 10 of those 11 jurisdictions ultimately suffered from the glitch; that effect came through the equating and scaling process. The only jurisdiction that escaped completely was Louisiana, which used neither ExamSoft nor the MBE. That state, by the way, enjoyed a large increase in its bar passage rate between July 2013 and July 2014.

This is scary on at least four levels:

1. The ExamSoft breakdown affected performance sufficiently that states using the software suffered a drop in bar passage averaging 2 percentage points more than the drop in other states.

2. The equating and scaling process amplified the drop in raw scores. These processes dropped pass rates as much as three more percentage points across the nation. In states where raw scores were affected, pass rates fell an average of 5 percentage points. In other states, the pass rate fell an average of 3 percentage points. (I say “as much as” here because it is possible that other factors account for some of this drop; my comparison can’t control for that possibility. It seems clear, however, that equating and scaling amplified the raw-score drop and accounted for some–perhaps all–of this drop.)

3. Hundreds of test takers–probably more than 1,500 nationwide–failed the bar exam when they should have passed.

4. ExamSoft and NCBE have been completely unresponsive to this problem, despite the fact that these data have been available to them.

One final note: the comparisons in this post are a conservative test of the ExamSoft hypothesis, because I created a simple dichotomy between states exposed directly to the upload failure and those with no direct exposure. It is quite likely that states in the first group differed in the extent to which their examinees suffered. In some states, most test takers may have successfully uploaded their essays on the first try; in others, a large percentage of examinees may have struggled for hours. Those differences could account for variations within the “ExamSoft” states.

ExamSoft and NCBE could make those more nuanced distinctions. From the available data, however, there seems little doubt that the ExamSoft wreck seriously affected results of the July 2014 bar exam.

* I am grateful to Amy Otto, a former student who is wise in the ways of statistics and who helped me think through these analyses.
