
Deborah Merritt is the John Deaver Drinko/Baker & Hostetler Chair in Law at The Ohio State University's Moritz College of Law. She has received multiple teaching awards for her work in both clinical and podium courses. With Ric Simmons, she developed an "uncasebook" for teaching the basic evidence course. West Academic has adopted their template to create a series of texts that reduce the traditional focus on appellate opinions. Deborah writes frequently about changes in legal education and the legal profession.

Notable Change in the ABA Questionnaire

July 8th, 2013 / By Deborah J. Merritt

Last week the ABA notified law school deans that it will no longer request annual information about each school’s expenditures. Schools will report three years of expenditures in connection with site visits, but the annual reporting of expenditures has been eliminated (see p. 4).

H/t to TaxProf and Brian Leiter for this breaking news. Now, what does the change mean for ABA data collection, legal education, and the US News rankings?

Background: The Annual Questionnaire

The ABA collects data from law schools every year through its annual questionnaire. That instrument, revised annually by the Council’s Data Policy & Collection Committee, gathers information about enrollment, courses, faculty composition, and other issues related to legal education. At least within recent years, the questionnaire has asked schools about both revenues and expenditures. The 2013 questionnaire will ask only about overall revenues, not overall expenditures.

The revised instrument still asks about two specific expenditures: money spent on library operations and money spent for student scholarships, grants, or loans. It does not, however, require schools to report other expenditures–such as money spent on salaries, conferences, coffee, and all of the other matters that make up a law school budget.

Going Forward: Data, the ABA, and Legal Education

I’m puzzled that the ABA has chosen to eliminate expenditures from the annual questionnaire, especially given the contemporary budget crunch at many law schools. Responding to the questionnaire tormented me when I was an associate dean, so I don’t advocate mindless data collection. The information collected by the ABA, however, seems to serve numerous valuable purposes. Questionnaire results help track the overall health of legal education, inform accreditation standards, and offer perspectives on policy issues related to law schools. The instructions to the fiscal portion of the questionnaire also suggest that the ABA uses this information to monitor the fiscal health of individual schools. Given the ABA’s role in protecting students, that is an important goal.

Given this range of objectives, why will the ABA continue to collect annual information about law school revenues, but not expenditures? Law schools seem to be facing unprecedented budgetary strain. In times like this, wouldn’t the ABA want to know both revenues and expenditures–so that it could gauge the financial course of legal education? As the Task Force on the Future of Legal Education finalizes its recommendations, wouldn’t it want to know how badly law schools are hurting? And as the Standards Review Committee considers the costs imposed by some accreditation measures, wouldn’t it be useful to know whether law schools are operating in the red?

I’m not suggesting that the ABA should distribute scorecards revealing the financial health of each law school. But wouldn’t aggregate data on revenue, expenditures, and the gap between the two be particularly useful right now? Annual reports of revenue give us some measure of our industry’s health, but expenditure figures are just as important. How else will we know whether schools are able to adapt to flat or declining revenues?

There’s also the matter of protecting students at individual schools. Each school will have to demonstrate its financial health during site visits, but those visits occur every seven years. Seven years is a long time–plenty long enough for a school to sustain significant financial damage and endanger the education of enrolled students. If the ABA is going to monitor anything, shouldn’t it check both revenues and expenditures on an annual basis?

I understand that many educators are celebrating elimination of the expenditures section, largely because of the US News effect discussed below. I assume, however, that the questionnaire once served purposes other than generating data for US News. Are we sure that we want to reduce our information about the financial health of legal education? Now?

Going Forward: US News

Against all reason, US News has long used expenditures as a significant component of its law school rankings. Expenditures currently account for 11.25% of the ranking formula. This component of the rankings has rightly provoked criticism from dozens, if not hundreds, of legal educators. The ABA’s elimination of expenditures from its annual questionnaire might be an attempt to discourage US News from incorporating this information.

If that’s the ABA’s motive, will the gambit work? It seems to me that US News has at least four options:

1. Continue to ask law schools to supply expenditure data. US News already asks for information that the ABA doesn’t request; it has no obligation to track the ABA’s questionnaire. Calculating expenditures takes time if you’re trying to game the system (or at least keep up with other schools that are gaming the system); the school has to think of every possible expenditure to include. Gamesmanship aside, however, it would be hard for a dean to claim with a straight face that a request for expenditures was too burdensome to meet. If a school isn’t tracking its annual expenditures, and doesn’t have a computer program that will spit those numbers out on demand, that’s really all we need to know about the school.

I hope US News doesn’t pursue this approach. I agree with all of the critics that expenditures serve no useful purpose in a ranking of law schools (even assuming that a ranking itself serves some useful purpose). It seems to me, however, that US News could easily maintain its ranking system without the ABA’s question on school expenditures.

2. Reconfigure the ranking formula to include just library and student aid expenditures. The ABA questionnaire, rather curiously, continues to ask for data on library and student aid expenditures. US News, therefore, could decide to plug just these expenditures into its ranking formula. The formula already does count student aid expenditures separately, so there’s precedent for that.

This approach would be even worse than the first option. Giving library expenditures extra weight would tempt law schools to increase spending in a part of the budget that many critics already think is too large. Creating incentives for additional student aid sounds beneficent, but it would fuel the already heated arms race to snare credentials with scholarship money. We need to wind that race down in legal education, not extend it further.

3. Replace expenditures with revenues. Since the ABA questionnaire still asks for each school’s annual revenue, US News could incorporate that figure into its ranking formula. This approach might be marginally more rational than the focus on expenditures: Schools with more money may be able to provide more opportunities to their students. Focusing on revenues, furthermore, would not penalize schools that saved some of their revenue for a rainy day.

On the other hand, this criterion would continue to bias the rankings in favor of wealthy, well established, and private schools. It would also invite the same type of gamesmanship that schools have demonstrated when reporting expenditures.

4. Eliminate money as a factor. This is my preferred outcome, and I assume that it is the one most educators would prefer. Expenditures don’t have a role in judging the quality of a law school, and they’re a source of endless manipulation. Both law schools and their consumers would be better off if we rid the rankings of the expenditures factor.

Conclusion

US News will do whatever it chooses to do. Years of entreaties, rants, and denunciation haven’t stopped it from incorporating expenditures into its law school ranking. I’m doubtful that the ABA’s change will suddenly bring US News to its senses. Meanwhile, I’m very worried about how we’re going to inform legal educators, regulators, and potential students about the financial health of law schools. Revenues are fun to count, but running a law school requires expenditures as well.


New Salary Data

July 7th, 2013 / By Deborah J. Merritt

Law school critics have pressed schools to produce better information about the salaries earned by their graduates. Existing sources, as we know, provide incomplete or biased information. The Bureau of Labor Statistics (BLS) gathers data about lawyers’ salaries, but those reports omit solo practitioners, law firm partners, and law graduates who don’t practice law. Nor can we break down the BLS data to identify earnings by new lawyers or by graduates of particular schools.

The salary information gathered by NALP, in contrast, focuses on new graduates, includes graduates in non-practice jobs, and can be tied to particular schools (if a school chooses to publish its data). But these figures suffer from significant selection bias; NALP warns that these salaries “are biased upwards.”

Better salary information, however, is on the way. Researchers in other fields have found a new way to gather salary data about graduates of degree programs. The method hinges on the fact that employers pay unemployment taxes for each individual they employ. These taxes fund the pools used to support unemployment compensation. The government wants to make sure that it gathers its fair share of taxes, so employers report the wages they pay each individual. State unemployment compensation agencies, therefore, possess databanks of social security numbers linked to wages.

Educational institutions, similarly, possess the social security numbers of their graduates. It is possible, therefore, to use SSNs to link graduates with their salaries. The researchers doing this, of course, don’t examine the salaries of individual graduates. Instead, this “linked-records” approach allows them to generate aggregate salary data about graduates by college, major, year of degree, and several other criteria. The method also allows researchers to track salaries over time, both to see how entry-level salaries change and to track income as graduates gain workplace experience. For a brief overview of the method, see this paper from Berkeley’s Center for Studies in Higher Education.
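To make the mechanics concrete, here is a minimal sketch of the linkage in Python. Everything about it (the table layouts, the column names, the suppression threshold) is my own assumption for illustration, not the design of any actual state system.

```python
import pandas as pd

# Hypothetical graduate records: one row per graduate, keyed by a hashed SSN.
graduates = pd.DataFrame({
    "ssn_hash": ["a1", "b2", "c3", "d4"],
    "school": ["State U Law"] * 4,
    "degree": ["JD"] * 4,
    "grad_year": [2010] * 4,
})

# Hypothetical wage records: one row per employer per worker. A graduate
# holding two jobs appears twice; wages are summed before averaging.
wage_records = pd.DataFrame({
    "ssn_hash": ["a1", "a1", "b2", "c3"],
    "annual_wages": [30_000, 15_000, 60_000, 48_000],
})

# Sum each individual's wages across jobs, then link to graduate records.
totals = wage_records.groupby("ssn_hash", as_index=False)["annual_wages"].sum()
linked = graduates.merge(totals, on="ssn_hash", how="inner")

# Report only aggregates, suppressing small cells to preserve anonymity.
MIN_CELL = 3  # real systems use a larger threshold
report = (linked.groupby(["school", "degree", "grad_year"])["annual_wages"]
                .agg(n="count", avg_salary="mean")
                .reset_index())
print(report[report["n"] >= MIN_CELL])
```

Notice that the fourth graduate, who has no in-state wage record, silently drops out of the join. As discussed under the limits below, the method cannot tell whether that graduate is unemployed or simply working in another state.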

The linked-record approach has the potential to generate very nuanced information about the financial pay-off of different educational programs. Salary information, in fact, is already available for several law schools. Before we get to that, however, let’s look more closely at the method’s wider application and its current limits.

Applications

California has used this research method to generate an extensive database of salary outcomes for graduates of its community college programs. Using the online “salary surfer,” you can discover that the highest earning graduates from those programs are individuals who earn a certificate in electrical systems and power transmission. Those graduates average $93,410 two years after certification and $123,174 five years out.

If you’re not willing to climb utility poles or hang out with high voltage wires, a plumbing certificate also pays off reasonably well in California, generating an average salary of $65,080 two years after graduation. That certificate, however, doesn’t seem to add more value with time–at least not during the early years of a career. Average salary for certified plumbers rises to just $65,299 five years after graduation.

Community college degrees in health-related work also generate substantial salaries. Degrees in the humanities, fine and applied arts, cosmetology, and travel services, on the other hand, are poor bets financially. Paralegal training falls in the middle: A paralegal degree from a California school yields an average salary of $38,191 two years after graduation and $42,332 five years out. Paralegal certificates, notably, generate higher wages. Those paralegals average $41,546 two years after certification and $47,674 after five years. I suspect that premium occurs because the certificate earners already hold four-year college degrees; they combine the paralegal certificate with a BA to earn more in the workplace.

You can spend hours with the California database, exploring the many subjects that community colleges teach and the varied financial pay-offs for those degrees. Let’s move on, however, to a much broader database.

The research organization College Measures is working with several states to identify salary outcomes for all types of post-secondary degrees. This database, like the one for California community colleges, relies upon the linked-records data collection method described above. The College Measures site currently includes schools in Arkansas, Colorado, Tennessee, Texas, and Virginia–with Florida and Nevada coming soon. The database doesn’t include every school or degree program in these states, but coverage is growing. Here are just a few findings to illustrate the detail available on the site:

* Chicken farming is a staple of the Arkansas economy, and the University of Arkansas’s main campus offers a BA in poultry science. Those degree holders average $37,251 during their first year after college–a little more than accounting BA’s from the same campus can expect to earn ($36,681).

* Arkansas, however, teaches much more than poultry science and accounting. Some of the highest earning graduates major in chemical engineering ($56,655), physics ($48,820), computer engineering ($45,589), and economics ($43,739). If you want to maximize income after graduation, on the other hand, stay away from majors in audiology ($20,417), classics ($20,842), and drama ($22,629).

* Moving to the Texas portion of the site, you won’t be surprised to discover that the most remunerative BA offered by the University of Texas at Austin is in Petroleum Engineering. Those graduates average $115,777 during their first year out of school.

* The least financially rewarding BA’s from the UT-Austin campus, at least initially, are general music performance ($11,098), Arabic Language and Literature ($17,192), and General Visual and Performing Arts ($17,749).

You can find similar results for other majors and schools in these states, as well as for schools in Colorado, Tennessee, and Virginia. Before continuing, however, let’s examine several key limits on the currently available data.

Limits

1. One State at a Time. The linked-records databases currently operate only within a single state: they can only identify salaries for graduates who work in the same state where they attended school. The Colorado database, for example, includes both of the state’s ABA-accredited law schools–but it reports only salaries for graduates who worked in Colorado the year after graduation.

This constraint will understate salaries for law schools that send a large number of graduates to other states for high-paying jobs. If Connecticut creates a database, for example, Yale Law School will receive no credit for the salaries of graduates who work in Massachusetts, New York, the District of Columbia, and other states. The University of Texas’s law school, currently included in the College Measures database, receives credit for salaries earned at BigLaw firms in Dallas or Houston–but not for those earned in Chicago, Los Angeles, or New York.

Researchers are working to overcome this limit by linking databases nationally. I suspect that will happen within the next year or two, making the linked-records method much more comprehensive. Meanwhile, the “one state” limit casts doubt on salary results for schools with a large number of graduates who leave the state.

For many law schools, however, even single-state salary reports can yield useful information. Most law schools place the majority of their graduates in entry-level jobs within the same state. All of the Texas law schools place more than half of their graduates with Texas employers. The same is true for the Arkansas law schools, Colorado schools, and two of the three Tennessee schools. Among the states for which linked-records data are currently available, only the Virginia law schools send a majority of their graduates out of state.

For law schools that place a majority of their graduates in-state, the linked-record databases provide a welcome perspective on a wide range of salaries. These databases include jobs with small law firms, local government, and small businesses. They will also identify law graduates with jobs outside of law practice. That’s a much wider scope than the salaries reported to NALP, which disproportionately represent large law firm jobs. Even if some of a school’s graduates leave the state, this in-state salary slice is likely to give prospective students a realistic perspective on the range of salaries earned by a school’s graduates.

2. Rolling Five-Year Averages. The linked-records databases report five-year averages, rather than average salaries for a single graduating class. This feature preserves anonymity in small programs and makes the data less “noisy.” The technique, however, can also mask dramatic market shifts.

This is particularly problematic in law, because average salaries rose dramatically from 2005 through 2009, and then plunged just as precipitously. Most of the states included in the College Measures database report the average salary for students who graduated in 2006 through 2010. For law graduates, those years include at least three high-earning years (2007 through 2009) and just one post-recession year (2010). The outdated averages on the College Measures site almost certainly overstate the amounts earned by more recent law school classes.
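A toy calculation shows how the blending works. The cohort averages below are invented; they simply mimic a run-up through 2009 followed by a post-recession plunge.

```python
# Invented cohort averages mimicking a boom through 2009, then a bust.
cohort_avg = {2006: 78_000, 2007: 85_000, 2008: 92_000,
              2009: 95_000, 2010: 70_000}

# The single figure reported for the 2006-2010 window:
window = range(2006, 2011)
five_year_avg = sum(cohort_avg[y] for y in window) / len(window)

print(f"Reported five-year average: ${five_year_avg:,.0f}")    # $84,000
print(f"Most recent cohort (2010):  ${cohort_avg[2010]:,.0f}")  # $70,000
# The blended figure overstates what the newest class actually earned.
```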

This problem, in my opinion, makes the salaries currently reported for law schools unreliable as predictors of current salaries. On the other hand, the data could be useful for other purposes. It would be instructive, for example, to compare each school’s linked-record average with an average of the salaries that school reported to NALP over the same five years. That comparison might indicate the extent to which NALP-reported salaries skew high. Within a few years, meanwhile, the linked-records databases will offer more useful salary projections for students considering law school. They will also help us see the extent to which salaries for law graduates have shifted over time.

3. Un- and Under-Employed Graduates. The linked-records databases do not reveal how many graduates are unemployed. Graduates who are missing from a state’s records may be unemployed or they may be working in another state. Researchers currently have no way to distinguish those two statuses.

As the research becomes more sophisticated, and especially if researchers are able to link records nationally, this problem will decrease. For now, users of the database have to remember that salaries reflect averages for employed graduates. Users need to search separately for the number of a school’s unemployed graduates.

For law schools, those figures are relatively easy to obtain because they appear on each school’s ABA employment summary. By combining that resource with the College Measures information, prospective students and others can determine the percentage of a law school’s graduates who were employed nine months after graduation, as well as the average salaries earned by graduates who worked in the same state as the school.

Underemployed graduates, those working in part-time or temporary jobs, do appear in most of the linked-record databases. This is a major advantage of the linked-record method: the method calculates each graduate’s annual earnings, even if those wages came from part-time or temporary work. If a graduate worked at more than one job, the linked records will aggregate wages from each of those jobs. The results won’t reveal how hard graduates had to work to generate their income, but database users will be able to tell how much they earned on average.

4. Excluded Workers. In addition to the caveats discussed above, the linked-records databases omit two important categories of workers. Most lack information about federal employees, although some states have started adding that information. Within a year or two, federal salaries should be fully integrated with other wages. For law school graduates, meanwhile, salaries for the most common federal jobs are already well known.

More significant, the linked-record databases do not include information about the self-employed. This omission matters more in some fields than others. Utility companies employ the workers who repair high-voltage power lines; you won’t find many freelancers climbing utility poles. Plumbers, on the other hand, are more likely to set up shop for themselves.

For recent law graduates, the picture is mixed. Relatively few of them open solo practices immediately after graduation, but a growing number may work as independent contractors. The latter group, notably, may include graduates who receive career exploration grants from their schools. Depending on how those grants are structured, the graduates may not count as “employees” of either the school or the organization where they work; instead, they may be independent contractors. If that’s the case, their wages will not appear in the linked-record databases.

As experience grows with linked-record databases, it will be possible to determine how many law graduates fall outside of those records. It should be possible, for example, to compare the number of graduates who report in-state jobs to their schools with the number of in-state salaries recorded in a linked-record database. The difference between the two numbers will represent graduates who work as solos or independent contractors. The researchers creating these databases may also find ways to incorporate earnings data about self-employed graduates.
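The comparison itself is simple arithmetic. With invented numbers:

```python
# Hypothetical counts for one school's graduating class:
reported_in_state_jobs = 180   # in-state jobs graduates reported to the school
linked_record_salaries = 165   # in-state salaries found in the wage database

# Graduates invisible to the wage records: solos and independent contractors
# (career-grant recipients paid as contractors would land here too).
estimated_self_employed = reported_in_state_jobs - linked_record_salaries
print(estimated_self_employed)  # 15
```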

What About Law Schools?

Tomorrow, I will discuss salary information reported for the fifteen law schools currently included in the College Measures database. If you’re impatient, just follow the links. Those specific results, however, matter less than the overall scope of this salary-tracking method. The linked-record method promises much more sophisticated salary information than educational institutions have ever gathered on their own. The salaries can be tied to specific schools and degree programs. We (as well as prospective students and policymakers) will be able to compare financial outcomes across fields, schools, and states. As the databases grow in size, we will also be able to track salaries five, ten, fifteen, or twenty years after graduation. That amount of information is breathtaking–and a little scary.


Bar Passage and Accreditation

July 4th, 2013 / By Deborah J. Merritt

The Standards Review Committee of the ABA’s Section of Legal Education has been considering a change to the accreditation standard governing graduates’ success on the bar examination. The heart of the current standard requires schools to demonstrate that 75% of graduates who attempt the bar exam eventually pass that exam. New Standard 315 would require schools to show that 80% of their graduates (of those who take the bar) pass the exam by “the end of the second calendar year following their graduation.”

I support the new standard, and I urge other academics to do the same. The rule doesn’t penalize schools for graduates who decide to use their legal education for purposes other than practicing law; the 80% rate applies only to graduates who take the bar exam. The rule then gives those graduates more than two years to pass the exam. Because the rule measures time by calendar year, May graduates would have five opportunities to pass the bar before their failure would count against accreditation. As a consumer protection provision, this is a very lax rule. A school that can’t meet this standard is not serving its students well: It is either admitting students with too little chance of passing the bar or doing a poor job of teaching the students that it admits.
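The five-opportunity arithmetic is easy to verify. Assuming the standard February and July administrations, this sketch lists every exam a May graduate could sit for before the deadline:

```python
# Administrations available to a May graduate before the end of the
# second calendar year following graduation (standard Feb/July schedule).
def administrations(grad_year: int) -> list[str]:
    exams = []
    for year in range(grad_year, grad_year + 3):  # grad year plus two more
        for month in ("February", "July"):
            if year == grad_year and month == "February":
                continue  # February of the graduation year precedes graduation
            exams.append(f"{month} {year}")
    return exams

print(administrations(2013))
# ['July 2013', 'February 2014', 'July 2014', 'February 2015', 'July 2015']
```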

The proposal takes on added force given the plunge in law school applications. As schools attempt to maintain class sizes and revenue, there is a significant danger that they will admit students with little chance of passing the bar exam. Charging those students three years of professional-school tuition, when they have little chance of joining the profession, harms the students, the taxpayers who support their loans, and the economy as a whole. Accreditation standards properly restrain schools from overlooking costs like those.

Critics of the proposal rightly point out that a tougher standard may discourage schools from admitting minority students, who pass the bar at lower rates than white students. This is a serious concern: Our profession is still far too white. On the other hand, we won’t help diversity by setting minority students up to fail. Students who borrow heavily to attend law school, but then repeatedly fail the bar exam, suffer devastating financial and psychological blows.

How can we maintain access for minority students while protecting all students from schools with low bar-passage rates? I discuss three ideas below.

The $30,000 Exception

When I first thought about this problem, I considered suggesting a “$30,000” exception to proposed Standard 315. Under this exception, a school could exclude from the accreditation measure any student who failed the bar exam but paid less than $10,000 per year ($30,000 total) in law school tuition and fees.
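To see how the exception would change a school’s accreditation math, consider this sketch with invented numbers (the data layout is hypothetical):

```python
# Each tuple: (passed_bar_eventually, total_tuition_paid). Everyone listed
# attempted the bar, so all would count under proposed Standard 315.
takers = [
    (True, 120_000), (True, 120_000), (True, 30_000),
    (False, 120_000), (False, 25_000), (False, 28_000),
]

# Standard rule: the share of takers who eventually pass.
plain_rate = sum(passed for passed, _ in takers) / len(takers)

# With the exception: failures by students who paid under $30,000 total
# drop out of both the numerator and the denominator.
counted = [(p, t) for p, t in takers if p or t >= 30_000]
excepted_rate = sum(p for p, _ in counted) / len(counted)

print(f"Plain pass rate:         {plain_rate:.0%}")     # 50%
print(f"Rate with the exception: {excepted_rate:.0%}")  # 75%
```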

An exception like this would encourage schools to give real opportunities to minority students whose credentials suggest a risk of bar failure. Those opportunities would consist of a reasonably priced chance to attend law school, achieve success, and qualify for the bar. Law schools can’t claim good karma for admitting at-risk students who pay high tuition for the opportunity to prove themselves. That opportunity benefits law schools as much, or more, than the at-risk students. If law schools want to support diversification of our profession–and we should–then we should be willing to invest our own dollars in that goal.

A $30,000 exception would allow schools to make a genuine commitment to diversity, without worrying about an accreditation penalty. The at-risk students would also benefit by attending school at a more reasonable cost. Even if those students failed the bar, they could more easily pay off their modest loans with JD Advantage work. A $30,000 exception could be a win-win for both at-risk students and schools that honestly want to create professional access.

I hesitate to make this proposal, however, because I’m not sure how many schools genuinely care about minority access–rather than about preserving their own profitability. A $30,000 exception could be an invitation to admit a large number of at-risk students and then invest very little in those students. Especially with declining applicant pools, schools might conclude that thirty students paying $10,000 apiece is better than thirty empty seats. Since those students would not count against a school’s accreditation, no matter how many of them failed the bar exam, schools might not invest the educational resources needed to assist at-risk students.

If schools do care about minority access, then a $30,000 exception to proposed Standard 315 might give us just the leeway we need to admit and nurture at-risk students. If schools care more about their profitability, then an exception like that would be an invitation to take advantage of at-risk students. Which spirit motivates law schools today? That’s a question for schools to reflect upon.

Adjust Bar Passing Scores

One of the shameful secrets of our profession is that we raised bar-exam passing scores during the last three decades, just as a significant number of minority students were graduating from law school. More than a dozen states raised the score required to pass their bar exam during the 1990’s. Other states took that path in more recent years: New York raised its passing score in 2005; Montana has increased the score for this month’s exam takers; and Illinois has announced an increase that will take effect in July 2015.

These increases mean that it’s harder to pass the bar exam today than it was ten, twenty, or thirty years ago. In most states, grading techniques assure that scores signal the same level of competence over time. This happens, first, because the National Conference of Bar Examiners (NCBE) “equates” the scores on the Multistate Bar Exam (MBE) from year to year. That technique, which I explain further in this paper, assures that MBE scores reflect the same level of performance each year. An equated score of 134 on the February 2013 MBE reflects the same performance as a score of 134 did in 1985.

Most states, meanwhile, grade their essay questions in a way that similarly guards against shifting standards. These states scale essay scores to the MBE scores achieved by examinees during the same test administration. This means that the MBE (which is equated over time) sets the distribution of scores available for the essay portion of the exam. If the July 2013 examinees in Ohio average higher MBE scores than the 2012 test-takers, the bar examiners will allot them correspondingly higher essay scores. Conversely, if the 2013 examinees score poorly on the MBE (compared to earlier testing groups in Ohio), they will receive lower essay scores as well. You can read more about this process in the same paper cited above.
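In its usual form, scaling is a linear transformation: raw essay scores are mapped onto the MBE distribution from the same administration by matching means and standard deviations. Here is a minimal sketch of that standard formula; the scores are invented.

```python
from statistics import mean, stdev

def scale_essays(raw_essays: list[float], mbe_scores: list[float]) -> list[float]:
    """Linearly map raw essay scores onto this administration's MBE distribution.

    Because the MBE is equated over time, scaled essay scores inherit that
    stability: a cohort with stronger MBE performance receives correspondingly
    higher essay scores, and vice versa.
    """
    e_mean, e_sd = mean(raw_essays), stdev(raw_essays)
    m_mean, m_sd = mean(mbe_scores), stdev(mbe_scores)
    return [m_mean + m_sd * (e - e_mean) / e_sd for e in raw_essays]

# Invented numbers: raw essays on a 1-6 scale, MBE scaled scores.
raw = [3.0, 4.0, 5.0, 4.5, 2.5]
mbe = [128.0, 140.0, 151.0, 145.0, 122.0]
print([round(s, 1) for s in scale_essays(raw, mbe)])
```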

These two techniques mean that scores neither inflate nor deflate over time; the measuring stick within each state remains constant. A score of 264 on the July 2013 Illinois bar exam will represent the same level of proficiency as a score of 264 did in 2003 or 1993.

When a state raises its passing score, therefore, it literally sets a higher hurdle for new applicants. Beginning in 2015, Illinois will no longer admit test-takers who score 264 on the exam; instead it will require applicants to score 272–eight points more than applicants have had to score for at least the last twenty years.

Why should that be? Why do today’s racially diverse applicants have to achieve higher scores than the largely white applicants of the 1970s? Law practice may be harder today than it was in the 1970s, but the bar exam doesn’t test the aspects of practice that have become more difficult. The bar exam doesn’t measure applicants on their mastery of the latest statutes, their ability to interact with clients and lawyers from many cultures, or their adeptness with new technologies. The bar exam tests basic doctrinal principles and legal analysis. Why is the minimum level of proficiency on those skills higher today than it was thirty or forty years ago?

If we want to diversify the profession, we have to stop raising the bar as the applicant pool diversifies. I do not believe that states acted with racial animus when increasing their passing scores; instead, the moves seem more broadly protectionist, occurring during times of recession in the legal market and as the number of law school graduates has increased. Those motives, however, deserve no credit. The bottom line is that today’s graduates have to meet a higher standard than leaders of the profession (those of us in our fifties and sixties) had to satisfy when we took the bar.

Some states have pointed to the low quality of bar exam essays when voting to raise their passing score. As I have explained elsewhere, these concerns are usually misplaced. Committees convened to review a state’s passing score often harbor unrealistic expectations about how well any lawyer–even a seasoned one–can read, analyze, and write about a new problem in 30 minutes. Bad statistical techniques have also tainted these attempts to recalibrate minimum passing scores.

Let’s roll back passing scores to where they stood in the 1970s. Taking that step would diversify the profession by allowing today’s diverse graduates to qualify for practice on the same terms as their less-diverse elders. Preserving accreditation of schools that produce a significant percentage of bar failures, in contrast, will do little to promote diversity.

Work Harder to Support Students’ Success

Teaching matters. During my time in legal education, I have seen professors improve skills and test scores among students who initially struggled with law school exams or bar preparation. These professors, notably, usually were not tenure-track faculty who taught Socratic classes or research seminars. More often, they were non-tenure-track instructors who were willing to break the law school box, to embrace teaching methods that work in other fields, to give their students more feedback, and to learn from their own mistakes. If one teaching method didn’t work, they would try another one.

If we want to improve minority access to the legal profession, then more of us should be willing to commit time to innovative teaching. Tenure-track faculty are quick to defend their traditional teaching methods, but slow to pursue rigorous tests of those methods. How do we know that the case method or Socratic questioning is the best way to educate students? Usually we “know” this because (a) it worked for us, (b) it feels rigorous and engaging when we stand at the front of the classroom, (c) we’ve produced plenty of good lawyers over the last hundred years, and (d) we don’t know what else to do anyway. But if our methods leave one in five graduates unable to pass the bar (the threshold set by proposed Standard 315), then maybe there’s something wrong with those methods. Maybe we should change our methods rather than demand weak accreditation standards?

Some faculty will object that we shouldn’t have to “teach to the bar exam,” that schools must focus on skills and knowledge that the bar doesn’t test. Three years, however, is a long time. We should be able to prepare students effectively to pass the bar exam, as well as build a foundation in other essential skills and knowledge. The sad truth is that these “other” subjects and skills are more fun to teach, so we focus on them rather than on solid bar preparation.

It is disingenuous for law schools to disdain rigorous bar preparation, because the bar exam’s very existence supports our tuition. Students do not pay premium tuition for law school because we teach more content than our colleagues who teach graduate courses in history, classics, mathematics, chemistry, or dozens of other subjects. Nor do we give more feedback than those professors, supervise more research among our graduate students, or conduct more research of our own. Students pay more for a law school education than for graduate training in most other fields because they need our diploma to sit for the bar exam. As long as lawyers limit entry to the profession, and as long as law schools serve as the initial gatekeeper, we will be able to charge premium prices for our classes. How can we eschew bar preparation when the bar stimulates our enrollments and revenue?

If we want to diversify the legal profession, then we should commit to better teaching and more rigorous bar preparation. We shouldn’t simply give schools a pass if more than a fifth of their graduates repeatedly fail the bar. If the educational deficit is too great to overcome in three years, then we should devote our energy to good pipeline programs.

Tough Standards

Some accreditation standards create unnecessary costs; they benefit faculty, librarians, or other educational insiders at the expense of students. Comments submitted to the ABA Task Force on the Future of Legal Education properly question many of those standards. The Standards Review Committee likewise has questioned onerous standards of that type.

Proposed Standard 315, however, is tough in a different way. That standard holds schools accountable in order to protect students, lenders, and the public. Private law schools today charge an average of $120,000 for a JD. At those prices, schools should be able to assure that at least 80% of graduates who choose to take the bar exam will pass that exam within two calendar years. If schools can’t meet that standard, then they shouldn’t bear the mark of ABA accreditation.


Selling the Academy

June 30th, 2013 / By Deborah J. Merritt

The American Academy of Arts and Sciences has released a Report stressing the need to deepen education in the humanities and social sciences. The Report declares that these disciplines “teach us to question, analyze, debate, evaluate, interpret, synthesize, compare evidence, and communicate—skills that are critically important in shaping adults who can become independent thinkers.” (p. 17) I agree with that assertion, which is why I’m so disturbed by the way this Report analyzes and communicates evidence.

The Report’s Introduction sounds an ominous tone: “we are confronted with mounting evidence, from every sector, of a troubling pattern of inattention that will have grave consequences for the nation.” (p. 19). The Report then cites three pieces of evidence to illustrate the “grave consequences” we face. The Executive Summary stresses the same three warning signs and concludes: “Each of these pieces of evidence suggests a problem; together, they suggest a pattern that will have grave, long-term consequences for the nation.”

What are these three pieces of evidence? Presumably they were the most persuasive and best documented points that the Report’s authors could find. As I explain below, however, each claim is incorrect, incomplete, or misleading. That’s an embarrassing record for a blue-ribbon commission of distinguished educators, humanists, and social scientists.

Equally troubling, the misstatements represent much of what I hear from higher education today: a constant refrain of exaggerated claims about the academy’s worth, buttressed by misleading interpretations of the factual record. This Report is selling the academy, not analyzing or synthesizing evidence. Let me walk you through the three claims to show you what I mean.

1. “For a variety of reasons, parents are not reading to their children as frequently as they once did.”

As a parent, that statement immediately resonated with me. Parents are not reading to their children? What’s wrong with our society?!? I immediately envisioned children huddled in front of television sets or computers, ignored by their parents for weeks at a time. The claim, however, is incorrect or misleading in several respects.

First, the data stem from a survey that measured the percentage of children being read to, not the percentage of parents reading. The difference matters, because household composition varies over time. If the parents who are most likely to read to their children have fewer offspring, then the percentage of children being read to will decline. Confusing parents with children is sloppy (and potentially misleading) data analysis.

Second, the survey reveals how many preschoolers had a parent read to them every day in the week before the survey was administered. That’s a pretty high standard for single parents, working couples, and parents with multiple children. A family with three children, like the one I grew up in, wouldn’t meet the standard if the oldest sibling substituted for a busy parent one or two times a week.

Applying the survey’s tough standard, how deficient are contemporary parents? The Report language made me worry that today’s parents were reading to their children just a few times a month. It turns out that in 2007, the most recent year measured by this survey, more than half of all preschoolers (55.3%) experienced a parent reading to them seven full days a week.

Most important, that 2007 percentage is higher than the one reported for 1993, the earliest year of the study. Let me repeat that: The percentage of children listening daily to a parent read increased over the fifteen years tracked by this survey. The Report’s contrary claim hinges on the fact that the percentage decreased between 2005 and 2007, the two most recent years studied. The import of that decline, however, is unclear. Although the percentage clearly increased over the years studied, the figures varied somewhat from year to year; like many data trends, the path is not completely smooth.

The Report’s language does not convey this pattern. The reference to what parents “once did” suggests a long-term decline, rather than the latest move in an oscillating pattern that has moved upward with time. An accurate statement, based on the data cited by the Report, would be: “Between 1993 and 2007, the percentage of children who heard their parents read to them every day grew from 52.8% to 55.3%. The percentage rose and fell during that period with the highest level (60.3%) occurring in 2005 and the lowest levels (52.8% and 53.5%) registering in 1993 and 1999. The trend over time, however, is positive.”

2. “Humanities teachers, particularly in k-12 history, are less well-trained than teachers in other subject areas.”

This second claim is just wrong. The cited data show that music and art teachers are the most highly credentialed teachers in both middle and high schools. In 2007-08, the most recent year studied by the National Center for Education Statistics (NCES), 85.1% of high school music teachers had both majored in their field and earned a teaching certificate. NCES uses that combination as the best available evidence of teacher quality. Art teachers were the next best educated, with 81.7% of them holding both certification and a major in their subject. A respectable 71.7% of English teachers had the same top training, just a shade under the 72.9% of teachers in the natural sciences–and well above the 64.4% of high school math teachers. Foreign language teachers varied widely in their training: 71.2% of French teachers held both certification and a major in their field, a percentage comparable to teachers in English and the natural sciences. Only 57.5% of Spanish teachers, in contrast, held those qualifications.

For middle schools, NCES’s most recent data stem from 2000 and lack information about teaching credentials. The patterns, however, were similar. Music and art teachers outshone colleagues in other fields, with 89.4% of them majoring in their subject. Natural sciences teachers ranked next, with 49.3% having majored in their field. Teachers of English (46.3%) and foreign languages (48.8%) were not far behind. Only a third of middle-school math teachers (33.8%), in contrast, had majored in their field.

These statistics belie the Report’s claim that “Humanities teachers . . . are less well-trained than teachers in other subject areas.” The qualifications of humanities teachers vary, with some subjects showing the highest level of training, others matching or exceeding levels in math and the natural sciences, and some falling below.

The embedded clause in the Report’s claim, “particularly in k-12 history,” has more truth. None of the cited statistics relate to elementary education, but history teachers in both middle schools and high schools do lag behind most of their peers. NCES reports that only 28.8% of high school history teachers hold both certification and a major in their field. That particularly low percentage stems more from lack of teaching certification than from lack of a history major; 62.0% of history teachers did, in fact, major in history. Still, it is true that high school history teachers have less overall training than teachers in other subjects.

Similarly, middle-school history teachers were less likely than peers in other fields to major in their subject; just 31.3% of them did. This percentage is very close to the expertise level of middle-school math teachers (with just 33.8% having majored in math), but it is the lowest reported figure.

The Report could have focused on the relatively low preparation of history teachers; instead, it makes an exaggerated (and incorrect) claim about all humanities teachers. A correct statement, again based on the data cited by the Report, would be: “In both middle and high schools, humanities teachers in music and the arts are better trained than teachers in any other subject. English and natural science teachers rank next in training; these two groups have similar credentials at each educational level. Math teachers lag behind all of these groups, with lower levels of training at both the middle and high school level. History teachers have the weakest training, with the poorest showing in high schools and one that is comparable to math teachers in middle schools.”

3. “And even as we recognize that we live in a shrinking world and participate in a global economy, federal funding to support international training and education has been cut by 41 percent in four years.”

This one is literally true: The federal government cut funding for foreign language study and area study centers at universities, as well as for Fulbright-Hays programs. The statistic, however, offers an isolated (and rather faculty-centric) measure of the vitality of international study and foreign languages on college campuses.

International study is booming among college students. The number of U.S. college students studying abroad almost tripled between the 1996-97 academic year and the 2009-10 one, growing from 99,448 students to 270,604. The number of foreign students enrolled at American universities, meanwhile, grew by more than 50% over a similar period, from 453,787 in 1995-96 to 690,923 in 2009-10. In 2010-11, the most recent year available on the latter measure, the number reached 723,277.

The study of foreign languages is also on the rise at colleges and universities. The most recent study by the Modern Language Association found that “[c]ourse enrollments in languages other than English reached a new high in 2009. Enrollments grew by 6.6% between 2006 and 2009, following an expansion of 12.9% between 2002 and 2006. This increase continues a rise in enrollment in languages other than English that began in 1995.”

Students, furthermore, are learning a more diverse set of languages. The MLA reported that enrollment in “less commonly taught languages,” those outside the top fifteen, surged from 2002 through 2009. Between 2006 and 2009, U.S. colleges added 35 new languages to their offerings, bringing the total of “less commonly taught” languages to 217 tongues nationwide. That’s in addition to the fifteen most popular languages, which today include Arabic, Chinese, Japanese, Korean, Ancient Greek, and Biblical Hebrew.

Even two-year colleges participated in the upward trend of language study. These colleges registered increased enrollment in such diverse languages as Arabic, ASL, Chinese, Hawaiian, Italian, Japanese, Latin, Portuguese, Spanish, and Vietnamese. Two of those languages, Hawaiian and Vietnamese, do not rank among the top fifteen languages studied in four-year colleges, suggesting that community colleges play a special role in teaching some languages.

We certainly could do much more to teach foreign languages and encourage international understanding at all educational levels. The Report’s isolated reference to cuts in federal funding, however, paints a very one-sided picture of the status of these subjects in the United States.

Analyze, Evaluate, Interpret, Communicate

This Report, like so many other products of higher education, exhorts citizens to examine data carefully, think critically, and write precisely. Yet the Report itself falls far short of these goals. This is not a thoughtful document; it is one determined to sell the social sciences and humanities. I agree with many of the Report’s recommendations, but we can’t rest those recommendations on faulty interpretations of the factual record or misleading statements. The academy should lead by example, not just exhortation.

There is a final irony to the misstatements in this Report. Respected commenters like Verlyn Klinkenborg and David Brooks have cited the Report while deploring a shift in college majors from the humanities to more “vocational” studies. High on the list of those dreaded vocational majors is Business, where we fear that students learn to sell things rather than to think. But what behavior are we in the academy modeling?


What Does Fisher Mean?

June 26th, 2013 / By Deborah J. Merritt

What does the Supreme Court’s enigmatic Fisher opinion mean for the daily operation of an admissions office? In particular, what does it herald for law school admissions? At first glance, Fisher seems to extend the status quo for affirmative action. The Court did not strike down the University of Texas’s race-conscious plan. Nor did it overturn the 2003 Grutter decision approving Michigan Law School’s approach to affirmative action during the 1990’s. A school that follows a procedure like the one Michigan successfully defended in Grutter, therefore, must be on safe ground–right? I’m not so sure.

Fisher Basics

On the surface, Fisher’s holding is easy to digest. The Court held that:

(1) Courts must apply strict scrutiny to explicit consideration of race in university admissions.

(2) An “interest in the educational benefits that flow from a diverse student body” counts as a compelling interest. Indeed, the Court has recognized no other interest that counts in this context.

(3) Courts will defer to a university’s decision that it needs the educational benefits of diversity. A court must “ensure that there is a reasoned, principled explanation for the academic decision,” so academic institutions would be wise to build a record supporting their need for educational diversity. The courts, however, will not second-guess this conclusion if it is articulated and supported properly.

(4) Courts, on the other hand, must closely examine a university’s claim that its race-conscious program is narrowly tailored to achieve educational diversity. The university bears the burden of persuasion on this point, and courts will “examine with care” the university’s assertion.

What does an academic institution have to show to meet the narrowly-tailored prong of strict scrutiny? This is the key issue raised by Fisher. This is also the area in which the opinion includes more nuance than a first reading might suggest.

Narrowly Tailored

The Fisher majority pointed to two different showings that a university might have to make under the narrowly-tailored prong. The first is that the race-conscious process was “necessary” to achieve the university’s goal of attaining a diverse student body. If “workable race-neutral alternatives would produce the educational benefits of diversity,” then the university must embrace those methods instead of race-conscious ones.

This part of the ruling addresses a point that the Fisher plaintiffs stressed: Since the University of Texas had already achieved significant racial diversity through a legislatively imposed “top ten percent” plan, did the university really need additional race-conscious measures?

The Court didn’t answer this question, leaving it to the lower courts on remand. Texas, however, offered pretty persuasive justifications for its “add on” plan in the brief it submitted to the Court. The University pointed out that the ten-percent plan overlooks minority students who attend schools with no class rank; some of the state’s top private schools fall in that category. The ten-percent plan also omits minorities who perform well (but not within the top decile) at magnet schools and other challenging high schools. The University wasn’t aiming simply to enroll a given number of minority students; it wanted diverse students from a variety of backgrounds. The ten-percent plan couldn’t deliver that nuanced diversity.

I think Texas may prevail on this point in the lower courts–and even win affirmance from the Supreme Court if the case returns to that docket. But this portion of the narrowly-tailored discussion is irrelevant to law schools. We don’t have a “top ten percent” alternative, and there are no “workable race neutral” mechanisms that seem likely to create the racially diverse student bodies that law schools seek–and that seem more educationally valuable with every passing year.

Law schools may be able to show that race-conscious admissions policies are necessary to produce racially diverse classes. It’s the second, less noticed part of Fisher’s “narrowly tailored” discussion that law schools should worry about.

Evaluating Applicants as Individuals

Justice Kennedy, writing for the Court in Fisher, stressed that a race-conscious admissions plan is narrowly tailored only if the “admissions processes ‘ensure that each applicant is evaluated as an individual and not in a way that makes an applicant’s race or ethnicity the defining feature of his or her application.'” This language stems from Grutter’s majority opinion, and ultimately from Justice Powell’s opinion in Bakke, so law schools may assume that they’re on safe turf as long as they’re following processes upheld in the past–like the one Michigan successfully defended in Grutter.

The picture, unfortunately, is more complicated. Remember that Justice Kennedy dissented from Grutter, complaining that Michigan wasn’t really evaluating applicants as individuals. The school did review files holistically, without assigning a specific numerical plus to minority status. This “individualized” assessment, however, yielded a percentage of enrolled minority students that Justice Kennedy found suspiciously constant from year to year. The school’s admissions director also acknowledged that his office generated daily reports that tracked the racial composition of the evolving class. Especially during the final stage of the admissions season, this composition could affect the students chosen for admission.

Justice Kennedy concluded that this process preserved “individual evaluation” only during the early stages of admission. At later stages in the process, “an applicant’s race or ethnicity” might well become “the defining feature of his or her application.” The five Justices in the Grutter majority were willing to grant Michigan this leeway. As Justice O’Connor wrote for the Court, “‘some attention to numbers,’ without more, does not transform a flexible admissions system into a rigid quota.” (quoting Justice Powell’s opinion in Bakke). She also perceived more variation in the racial composition of Michigan’s entering classes than Justice Kennedy did.

But Justice O’Connor is no longer on the Court; Justice Kennedy is the necessary fifth vote to support any form of race-conscious admissions in higher education. And Justice Kennedy is very serious about the need for individualized evaluation throughout the admissions process. That’s what he said in Grutter and that’s what he affirmed in Fisher, this time for the Court.

Sticking to Individual Evaluation

Fisher is bad news for any school that tracks racial composition while admitting a class. As Justice Kennedy wrote in Grutter, it is hard for a school to claim that race plays a small, contextual, highly individualized role in assessing applicants when it tracks that characteristic carefully during the final stages of admission. But don’t all schools do this if they care about matriculating a critical mass of minority students?

Apparently not. In fact, this is the ultimate irony of Fisher. UT has structured its admissions program to avoid any consideration (or even knowledge) of race during the final stages of its admissions process. Race is one factor that may affect an applicant’s “Personal Achievement Index (PAI),” but race plays a small role in generating that score. More important, once a file-reader generates the PAI score for an applicant, later decision-makers don’t know the basis for that score. Each applicant’s PAI represents a combination of work experience, extracurricular activities, community service, socioeconomic condition, minority race, and several other factors. The University extends admissions offers to applicants with PAI’s and Academic Indexes above chosen levels. When choosing those levels, however, the admissions officers do not know the races of the students they are including.
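The two-stage structure is worth spelling out, because it is precisely what insulates the final decisions from race. The sketch below is my own abstraction of the process as the opinion describes it; the factor names and weights are placeholders, not UT’s actual method.

```python
from dataclasses import dataclass

# Stage 1: a file reader folds many factors, race among them, into a single
# Personal Achievement Index. These weights are placeholders for illustration.
PAI_WEIGHTS = {"work_experience": 1.0, "service": 1.0,
               "socioeconomic": 1.0, "race_context": 0.5}

def score_pai(factors: dict[str, float]) -> float:
    return sum(PAI_WEIGHTS.get(name, 0.0) * value
               for name, value in factors.items())

@dataclass(frozen=True)
class ScoredFile:
    """What stage 2 sees: two numbers, with no record of how the PAI arose."""
    academic_index: float
    pai: float

def stage_two_admit(files: list[ScoredFile],
                    ai_cut: float, pai_cut: float) -> list[ScoredFile]:
    # Decision-makers set cutoffs without knowing any applicant's race, so
    # race cannot become "the defining feature" of a file at this stage.
    return [f for f in files if f.academic_index >= ai_cut and f.pai >= pai_cut]

# Usage: race contributes modestly at stage 1, then disappears from view.
factors = {"work_experience": 2.0, "service": 1.0,
           "socioeconomic": 1.0, "race_context": 1.0}
files = [ScoredFile(academic_index=3.6, pai=score_pai(factors))]
print(stage_two_admit(files, ai_cut=3.5, pai_cut=3.0))
```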

This system seems tailor-made to satisfy Justice Kennedy’s strict requirement of individual evaluation. Race truly is just one factor that the school considers in the context of an applicant’s full record. Once race has contributed to the PAI, it disappears from the decision-making process. The system furthers the school’s compelling interest in matriculating a racially diverse class, because it counts an applicant’s contribution to that end among other contributions. The system, however, does not allow race to become “the defining feature” of an applicant’s file.

Law schools that use a system like this may comply with Fisher’s exacting standard. Schools that track race, as Michigan did before Grutter, may not fare as well.

But Isn’t Grutter Still Good Law?

It may seem odd to suggest that a school following the admissions process upheld in Grutter runs a risk today. The Fisher Court, after all, reaffirmed Grutter’s principles and noted several times that the parties had not asked it to reexamine Grutter. Certainly a lower court might uphold a Grutter-like plan with that rationale. “Since the Court upheld this type of plan in Grutter,” a lower court judge might reason, “and the Court hasn’t overruled Grutter, then this plan must be constitutional.”

Justice Kennedy’s majority opinion in Fisher, however, offers support for a different conclusion. The opinion focuses on the need for lower courts to conduct searching scrutiny on the question of whether a race-conscious plan is, in fact, narrowly tailored. The Court specifically rebuked the Fifth Circuit and district court judge for being too deferential to the university. Lower court judges don’t like to be reversed that way; they’ll be reading Fisher closely for what the Court really wants.

On that score, there are two key sentences in Fisher. The first appears on page 12 of the majority’s slip opinion, where the Court declares: “Strict scrutiny does not permit a court to accept a school’s assertion that its admissions process uses race in a permissible way without a court giving close analysis to the evidence of how the process works in practice.” The final words–“how the process works in practice”–are the essential ones; they hark straight back to Justice Kennedy’s dissent in Grutter. Lower courts cannot just accept a university’s description of multiple factors and holistic review; the courts must examine what the university actually does throughout its admissions process. If the university does things like track racial composition of the evolving class, Fisher raises a red flag.

The second sentence appears on the same page, in the preceding paragraph. There, Justice Kennedy notes that the Grutter Court “approved the plan at issue upon concluding that it was not a quota [and] was sufficiently flexible.” The paragraph continues, however, to observe that “the parties do not challenge, and the Court therefore does not consider, the correctness of that determination.” This reservation differs from the Court’s earlier observation that it would not reexamine the legal principles in Grutter. Here, the Court notes that it has not been asked to reconsider the factual correctness of Grutter–a sure sign that the majority harbors some doubts about that result. And, of course, we know that at least five Justices would disagree with that result.

What’s a Law School To Do?

All of this is problematic for law schools, because we try to shape our classes so closely. I haven’t served on a law school’s admissions committee for twenty years, so I may be out of date. My impression as a faculty member, however, is that law schools track LSAT scores, GPAs, race, and perhaps some other indicators (such as gender or in-state status) quite closely throughout the admissions process. Schools are trying to meet certain targets for those criteria. The targets may be soft ones, rather than strict quotas, but they may not satisfy the “individual evaluation” that Justice Kennedy fervently demands.

I’m a long-time advocate of affirmative action in university admissions. I authored an amicus brief in Grutter and wrote most recently about the issues here. So I’m not trying to read Fisher in a way that would restrict the flexibility of educational institutions. Instead, I’m concerned that schools may assume that Fisher’s “compromise” decision holds little of note. I think that would be a mistake. I would assess any race-conscious program through the prism outlined above.

This is also a good time for law schools, bar associations, and courts to invest in more pipeline programs. Race-conscious admission policies have helped diversify law school classes, but they operate at the margins. Pipeline programs reach students much earlier in their educational lives, giving them both the tools and inspiration to prepare themselves for a professional career. Students who participate in pipeline programs, furthermore, carry their ambitions back to their schools and neighborhoods, where they may inspire other students.

It’s important, finally, for law schools to continue exploring new ways of measuring and valuing diversity. Michigan Law School’s current application asks students to provide race and ethnicity for reporting purposes, but declares that the information “will have no bearing on the Law School’s admission decision.”

Instead, the school seems to gauge diversity through an open-ended personal statement and a series of optional essays. Applicants may address any issue in the required personal statement, including “significant life experiences; meaningful intellectual interests and extracurricular activities; . . . significant obstacles met and overcome; . . . issues of sexual or gender identity; . . . socioeconomic challenges; . . . or experiences and perspectives relating to disadvantage, disability, or discrimination.” These topics allow applicants of any race to paint a holistic picture of themselves, including ways in which they might contribute to diversity of the student body.

Similarly, one of Michigan’s optional essays invites applicants to “describe an experience that speaks to the problems and possibilities of diversity in an educational or work setting,” while another optional prompt asks: “How might your perspectives and experiences enrich the quality and breadth of the intellectual life of our community or enhance the legal profession?” These portions of the application, I assume, enable Michigan to assemble a diverse class (on many dimensions) without allowing race to become a “defining feature” of an application.

Skeptics might claim that schools using essays like these are “really” looking just for racial identity. That, though, underestimates both the goals of admissions committees and the complexity of race. Race has never been a simple, binary concept, but its complexities have compounded. Increased immigration, a growing number of citizens who identify as multiracial, majority-minority urban centers, and a host of other factors mean that today’s minority Americans experience race in hundreds of different ways. Schools that want to enroll racially diverse classes–as well as white students who perceive the pervasive impacts of race–need to probe racial experience more deeply than just asking students to tick off a box. Pushing ourselves to think beyond those boxes, ironically, may help us preserve the racial diversity that the boxes initially helped us achieve.


Old Tricks

June 23rd, 2013 / By

From time to time, I like to read real books instead of electronic ones. During a recent ramble through my law school’s library, I stumbled across an intriguing set of volumes: NALP employment reports from the late nineteen seventies. These books are so old that they still have those funny cards in the back. It was the content, though, that really took my breath away. During the 1970s, NALP manipulated data about law school career outcomes in a way that makes more contemporary methods look tame. Before I get to that, let me give you the background.

NALP compiled its first employment report for the Class of 1974. The data collection was fairly rudimentary. The association asked all ABA-accredited schools to submit basic data about their graduates, including the total number of class members, the number employed, and the number known to be still seeking work. This generated some pretty patchy statistics. Only 83 schools (out of about 156) participated in the original survey. Those schools graduated 17,188 JDs, but they reported employment data for just 13,250. More than a fifth of the graduates (22.9%) from this self-selected group of schools failed to share their employment status with the schools.

NALP’s early publications made no attempt to analyze this selection bias; the reports I’ve examined (for the Classes of 1977 and 1978) don’t even mention the possibility that graduates who neglect to report their employment status might differ from those who provide that information. The reports address the representativeness of participating schools, but in a comical manner. The reports divide the schools by institutional type (e.g., public or private) and geographic region, then present a cross-tabulation showing the number and percentage of schools participating in each category. For the Class of 1977, participation rates varied from 62.5% to 100%, but the report gleefully declares: “You will note the consistently high percentage of each type of institution, as well as the large number of schools sampled. I believe we can safely say that our study is, in fact, representative!” (p. 7)

Anyone with an elementary grasp of statistics knows that’s nonsense. The question isn’t whether the percentages were “high,” it’s how they varied across categories. Ironically, at the very time that NALP published the quoted language, I was taking a first-year elective on “Law and Social Science” at my law school. It’s galling that law schools weren’t practicing the quantitative basics that they were already teaching.

NALP quickly secured more participating schools, which mooted this particular example of bad statistics. By 1978, NALP was obtaining responses from 150 of the 167 ABA-approved law schools. Higher levels of school participation, however, did not solve the problem of missing graduates. For the Classes of 1974 through 1978, NALP was missing data on 19.4% to 23.7% of the graduates from reporting schools. Blithely ignoring those graduates, NALP calculated the employment rate each year simply by dividing the number of graduates who held any type of job by the number whose employment status was known. This misleading method, which NALP still uses today, yielded an impressive employment rate of 88.1% for the Class of 1974.
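To see how much work the denominator does, here is a minimal sketch of the calculation in Python, using the Class of 1974 figures quoted above. The employed count is implied by the 88.1% rate rather than reported directly, so treat it as an approximation:

```python
# NALP's early method vs. a rate computed over all graduates, using
# the Class of 1974 figures reported above. The employed count
# (~11,673) is back-solved from the 88.1% rate, not reported directly.
total_grads = 17188    # graduates of the 83 reporting schools
status_known = 13250   # graduates whose employment status was known
employed = round(0.881 * status_known)  # ~11,673

nalp_rate = employed / status_known  # NALP's method: drop non-reporters
full_rate = employed / total_grads   # same numerator, full denominator

print(f"NALP's method:  {nalp_rate:.1%}")  # ~88.1%
print(f"All graduates:  {full_rate:.1%}")  # ~67.9%
```

The twenty-point gap between those two figures is the price of assuming that graduates who never reported their status found jobs at the same rate as those who did.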

But even that wasn’t enough. Starting with the Class of 1975, NALP devised a truly ingenious way to raise employment rates: It excluded from its calculation any graduate who had secured neither a job nor bar admission by the spring following graduation. As NALP explained in the introduction to its 1977 report: “The employment market for new attorneys does not consist of all those that have graduated from ABA-approved law schools. In order for a person to practice law, there is a basic requirement of taking and passing a state bar examination. Those who do not take or do not pass the bar examination should therefore be excluded from the employment market….” (p. 1)

That would make sense if NALP had been measuring the percentage of bar-qualified graduates who obtained jobs. But here’s the kicker: At the same time that NALP excluded unemployed graduates without bar admission from its calculation, it continued to include employed ones. Many graduates in the latter category held jobs that we call “JD Advantage” ones today. NALP’s 1975 decision gave law schools credit for all graduates who found jobs that didn’t require a law license, while allowing them to disown (for reporting purposes) graduates who didn’t obtain a license and remained jobless.

I can’t think of a justification for that–other than raising the overall employment rate. Measure employment among all graduates, or measure it among all grads who have been admitted to the bar. You can’t use one criterion for employed graduates and a different one for unemployed graduates. Yet the “NALP Research Committee, upon consultation with executive committee members and many placement directors from throughout the country” endorsed this double standard. (id.)

And the trick worked. By counting graduates who didn’t pass the bar but nonetheless secured employment, while excluding those who didn’t take the bar and failed to get jobs, NALP produced a steady rise in JD employment rates: 88.1% in 1974 (under the original method), 91.6% in 1975, 92.5% in 1976, 93.6% in 1977, and a remarkable 94.2% in 1978. That 94.2% statistic ignored 19.5% of graduates who didn’t report any employment status, plus another 3.7% who hadn’t been admitted to the bar and were known to be unemployed–but, whatever.
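The double standard is easiest to see with invented numbers. Here is a minimal sketch; the counts are hypothetical, and only the method follows NALP’s description:

```python
# A hypothetical class of 1,000 graduates, all with known status,
# to isolate the effect of NALP's post-1975 exclusion rule.
employed_licensed = 800     # employed and admitted to the bar
employed_unlicensed = 100   # employed without bar admission (still counted)
unemployed_licensed = 50    # bar-admitted, still seeking work
unemployed_unlicensed = 50  # no license, no job -- dropped from the math

employed = employed_licensed + employed_unlicensed

# NALP's mixed method: every employed graduate stays in the numerator,
# but the unemployed-and-unlicensed vanish from the denominator.
nalp_rate = employed / (employed + unemployed_licensed)

# Two internally consistent alternatives:
all_grads_rate = employed / 1000
bar_only_rate = employed_licensed / (employed_licensed + unemployed_licensed)

print(f"NALP's mixed method: {nalp_rate:.1%}")      # 94.7%
print(f"All graduates:       {all_grads_rate:.1%}")  # 90.0%
print(f"Bar-admitted only:   {bar_only_rate:.1%}")   # 94.1%
```

Either consistent measure yields a lower rate than the mixed method; combining the two criteria can only push the number up.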

NALP was very pleased with its innovation. The report for the Class of 1977 states: “This revised and more realistic picture of the employment market for newly graduated and qualified lawyers reveals that instead of facing unemployment, the prospects for employment within the first year of graduation are in fact better than before. Study of the profile also reveals that there has been an incremental increase in the number of graduates employed and a corresponding drop in unemployment during that same period.” (p. 21) Yup, unemployment rates will fall if you ignore those pesky graduates who neither found jobs nor got admitted to the bar–while continuing to count all of the JD Advantage jobs.

I don’t know when NALP abandoned this piece of data chicanery. My library didn’t order any of the NALP reports between 1979 and 1995, so I can’t trace the evolution of NALP’s reporting method. By 1996, NALP was no longer counting unlicensed grads with jobs while ignoring those without jobs. Someone helped them come to their senses.

Why bring this up now? In part, I’m startled by the sheer audacity of this data manipulation. Equally important, I think it’s essential for law schools to recognize our long history of distorting data about employment outcomes. During the early years of these reports, NALP didn’t even have a technical staff: these reports were written and vetted by placement directors from law schools. It’s a sorry history.


NALP Numbers

June 20th, 2013 / By

NALP, the National Association for Law Placement, has released selected findings about employment for the Class of 2012. The findings and accompanying press release don’t tell us much more than the ABA data published in late March, but there are a few interesting nuggets. Here are my top ten take-aways from the NALP data.

1. Law school leads to unemployment. I’m sorry to put that so bluntly, but it’s true. Even after adopting a very generous definition of employment–one that includes any work for pay, whatever the nature of the work, the number of hours worked per week, or the permanence (or lack thereof) of the position–only 84.7% of graduates from ABA-accredited schools were employed nine months after graduation. Almost one in six graduates had no job at all nine months out. That statistic is beyond embarrassing.

Some of those graduates were enrolled in other degree programs, and some reported that they were not seeking work. Neither of those categories, however, should offer much comfort to law schools or prospective students. It’s true that yet another degree (say an LLM in tax or an MBA) may lead to employment, but those degrees add still more time and money to a student’s JD investment. Graduates who are unemployed and not seeking work, meanwhile, often are studying for the February bar exam–sometimes after failing on their first attempt. Again, this is not a comforting prospect for students considering law school.

Even if we exclude both of those categories, moreover, 10.7% of 2012 graduates–more than one in every ten–were completely unemployed and actively seeking work in February 2013. The national unemployment rate that month was just 7.7%. Even among 25-to-29-year-olds, a group that faces higher than average unemployment, the most recent reported unemployment rate (for 2012) was 8.9%. Recent graduates of ABA-accredited law schools are more likely to be unemployed than other workers their age–most of whom have far less education.

2. Nine months is a long time. When responding to these dismal nine-month statistics, law schools encourage graduates to consider the long term. Humans, however, have this annoying need to eat, stay warm, and obtain health care in the present. Most of us would be pretty unhappy if we were laid off and it took more than nine months to find another job. How would we buy food, pay our rent, and purchase prescriptions during those months? For new graduates it’s even worse. They don’t have the savings that more senior workers may have as a cushion for unemployment; nor can they draw unemployment compensation. On the contrary, they need to start repaying their hefty law school loans six months after graduation.

When we read nine-month statistics, we should bear those facts in mind. Sure, the unemployed graduates may eventually find work. But most of them already withdrew from the workforce for three years of law school; borrowed heavily to fund those years; borrowed still more to support three months of bar study; sustained themselves (somehow) for another six months; and have been hearing from their loan repayment companies for three months. If ten percent are still unemployed and seeking work the February after graduation, what are they living on?

3. If you want to practice law, the outlook is even worse. Buried in the NALP releases, you’ll discover that only 58.3% of graduates secured a full-time job that required bar admission and would last at least a year. Even that estimate is a little high, because NALP excludes from its calculation over 1,000 graduates whose employment status was unknown. Three years of law school, three months of bar study, six months of job hunting, and more than two out of every five law graduates still had not found steady, full-time legal work. If you think those two wanted JD Advantage jobs, read on.

4. Many of the jobs are stopgap employment. Almost a quarter of 2012 graduates with jobs in February 2013 were actively looking for other work. The percentage of dissatisfied workers was particularly high among those with JD Advantage positions: forty-three percent of them were seeking another job. JD Advantage positions offer attractive career options for some graduates, but for many they are simply a way to pay the bills while continuing the hunt for a legal job.

5. NALP won’t tell you what you want to know. When the ABA reported similar employment statistics in March, it led with the information that most readers want to know: “Law schools reported that 56.2 percent of graduates of the class of 2012 were employed in long-term, full-time positions where bar passage was required.” The ABA’s press release followed up with the percentage of graduates in long-term, full-time JD Advantage positions (9.5%) and offered comparisons to 2011 for both figures. Bottom line: Nine months after graduation, about two-thirds of 2012 graduates had full-time, steady employment related to their JD.

You won’t find that key information in either of the two reports that NALP released today. You can dig out the first of those statistics (the percentage of the class holding full-time, long-term jobs that required bar admission), but it’s buried at the bottom of the second page of the Selected Findings. You won’t find the second statistic (the percentage of full-time, long-term JD Advantage jobs) anywhere; NALP reports only a more general percentage (including temporary and part-time jobs) for that category.

NALP’s Executive Director, James Leipold, laments disclosing even that much. He tells us that the percentage of full-time, long-term jobs requiring bar passage “is certainly not a fair measure of the value of a legal education or the return on investment, or even a fair measure of the success of a particular graduating class in the marketplace.” Apparently forgetting the ABA’s attention to this employment measure, Leipold dismisses it as “the focus of so much of the media scrutiny of legal education.”

What number does NALP feature instead? That overall employment rate of 84.7%, which includes non-professional jobs, part-time jobs, and temporary jobs. Apparently those jobs are a more “fair measure of the value of a legal education.”

6. Law students are subsidizing government and nonprofits. NALP observes that the percentage of government and public interest jobs “has remained relatively stable for more than 30 years, at 26-29%.” At the same time, it reports that most law-school-funded jobs lie in this sector. If the percentage of jobs has remained stable, and law schools are now funding some of those spots, then law schools are subsidizing the government and public interest work. “Law schools,” of course, means students who pay tuition to those schools. Even if schools support post-graduate fellowships with donor money, those contributions could have been used to defray tuition costs.

I’m all in favor of public service, but shouldn’t the taxpayers and charitable donors pay for that work? In the current scheme, law students are borrowing significant sums from the government, at high interest rates, so that they can pay tuition that is used to subsidize government and nonprofit employees. Call me old fashioned, but that seems like a complicated (and regressive) way to pay for needed services. Why not raise taxes on people like me, who actually earn money, rather than issue more loans to people who hope someday to earn money?

7. Don’t pay much attention to NALP’s salary figures. NALP reports some salary information, which the ABA eschews. Those tantalizing figures draw some readers to the NALP report–and hype the full $95 version it will release in August. But the salary numbers are more misleading than useful. NALP reports salary information only for graduates who hold full-time, long-term positions and who report their salaries. That’s a minority of law graduates: Last year NALP reported salaries for just 18,639 graduates, from a total class of 44,495. Reported salaries, therefore, represented just 41.9% of the class. The percentage this year is comparable.

That group, furthermore, disproportionately represents the highest salaries. As NALP itself recognizes, salaries are “disproportionately reported for those graduates working at large firms,” so median salaries are “biased upward.” Swallow any salary reports, in other words, with a tablespoon of salt.

8. After accounting for inflation, today’s reported salaries are lower than ones from the last century. Although NALP’s reported salaries skew high, they offer some guidance to salary trends over time. Unfortunately, those trends are negative. During the early nineteen nineties, the country was in recession and law firms hadn’t yet accelerated pay for new associates. The median reported salary for 1991 graduates was just $40,000. Accounting for inflation, that’s equivalent to a 2012 median salary of $67,428. The actual reported median for that class, however, was just $61,245. Even when today’s graduates land a full-time, steady job, they’re earning 9.2% less than graduates from the last century.

9. The lights of BigLaw continue to dim. NALP acknowledges the “‘new normal’ in which large firm hiring has recovered some but remains far below pre-recession highs.” The largest firms, those with more than 500 lawyers, hired more than 3,600 members of the Class of 2012, a total that modestly exceeded the number hired from the Class of 2011. Current employment, however, remains well shy of the 5,100 graduates hired from the Class of 2009. Meanwhile, a growing percentage of those BigLaw hires are staff attorneys rather than associates. These lower-status, lower-paid lawyers currently comprise 4% of new BigLaw hires, and they are “more common than just two years ago.”

Inflation, meanwhile, has eroded salaries for even the best paid associates in BigLaw. In 2000, NALP reported a median salary of $125,000 for graduates joining firms that employed more than 500 lawyers. Adjusting for inflation, that would be $166,662 for the Class of 2012. BigLaw associates won’t starve on the median $160,000 they’re actually earning, but they’re taking home less in real dollars than the associates who started at the turn of the century.

For associates joining the second tier of BigLaw, firms that employ 251-500 lawyers, the salary news is even worse. In 2000, those associates also reported a median salary of $125,000, which would translate to $166,662 today. The actual median, however, appears to be $145,000 (the same figure reported for 2011). That’s a decline of 13% in real dollars.
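Points 8 and 9 rest on the same arithmetic. Here is a minimal sketch; the inflation-adjusted figures are the ones quoted above (for example, $40,000 in 1991 dollars as $67,428 in 2012 dollars), so the multipliers are approximations derived from the text rather than official CPI values:

```python
# Real-dollar decline: how far the actual 2012 median falls short of
# the inflation-adjusted equivalent of an earlier class's median.
def real_decline(adjusted_2012, actual_2012):
    return (adjusted_2012 - actual_2012) / adjusted_2012

print(f"{real_decline(67_428, 61_245):.1%}")    # ~9.2%: 1991 overall median
print(f"{real_decline(166_662, 160_000):.1%}")  # ~4.0%: BigLaw, 500+ lawyers
print(f"{real_decline(166_662, 145_000):.1%}")  # ~13.0%: firms of 251-500 lawyers
```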

10. It goes almost without saying that these 2012 graduates paid much more for their law school education than students did in 1991, 2000, or almost any other year. Law school tuition has far outpaced inflation over the last three decades. It’s scant comfort to this class–or to the classes of 2010, 2011, or 2013–that heavy discounts are starting to ease tuition. These are classes that bought very high and are selling very low. There’s little that law schools can do to make up the difference to these graduates, but we shouldn’t forget the financial hardship they face. If nothing else, the tuition-jobs gap for these classes should make us commit to the boldest possible reforms of legal education.


An Employment Puzzle

June 18th, 2013 / By

Pedagogically and professionally, it makes sense for law schools to teach practical skills along with theory and doctrine. New lawyers should know how to interview clients, file simple legal documents, and analyze real-world problems, just as new doctors should know how to interview patients, use a stethoscope, and offer a diagnosis. Hands-on work can also deepen knowledge received in the classroom. Law students who apply classroom theories to real or simulated clients develop stronger intellectual skills, as well as new practical ones.

Employers say they are eager to hire these better-trained, more rounded, more “practice ready” lawyers–and they should be. That’s why the employment results for Washington & Lee’s School of Law are so troubling. Washington & Lee pioneered an experiential third-year program that has won accolades from many observers. Bill Henderson called Washington & Lee’s program the “biggest legal education story of 2013.” The National Jurist named the school’s faculty among the twenty-five most influential people in legal education. Surely graduates of this widely praised program are reaping success in the job market?

Sadly, the statistics say otherwise. Washington & Lee’s recent employment outcomes are worse than those of similarly ranked schools. The results are troubling for advocates of experiential learning. They should also force employers to reflect on their own behavior: Does the rhetoric of “practice ready” graduates align with the reality of legal hiring? Let’s look at what’s happening with Washington & Lee graduates.

Employment Outcomes

I used the law-job calculator developed by Educating Tomorrow’s Lawyers to compare Washington & Lee’s employment outcomes with those of other schools. Drawing upon ABA data that reports job outcomes nine months after graduation, the calculator allows users to choose their own formulas for measuring outcomes. I chose two formulas that I believe resonate with many observers:

(a) The number of full-time, long-term jobs requiring bar admission, minus (i) any of those jobs funded by the law school and (ii) any solo positions; all divided by the total number of graduates.

(b) The number of full-time, long-term jobs requiring bar admission or for which the JD provided an advantage, minus (i) any of those jobs funded by the law school and (ii) any solo positions; all divided by the total number of graduates.

[Note: These are not the only formulas for measuring job outcomes; other formulas may be appropriate in other contexts. These formulas work here because they allow the most straightforward comparison of employment outcomes across schools. These formulas also make the best case for Washington & Lee’s outcomes, because that school did not report any long-term, full-time solos or school-funded jobs in 2011 or 2012.]
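For readers who want to replicate the comparison, here is a minimal sketch of the two measures in Python. The field names are hypothetical stand-ins for the ABA’s actual column labels, and the counts in the example are invented:

```python
# Measures (a) and (b) above, applied to one school's nine-month data.
def outcome_rate(school, include_jd_advantage=False):
    """Full-time, long-term bar-required jobs (plus JD Advantage jobs
    for measure (b)), net of school-funded and solo positions, divided
    by total graduates."""
    jobs = school["ft_lt_bar_required"]
    if include_jd_advantage:
        jobs += school["ft_lt_jd_advantage"]
    jobs -= school["ft_lt_school_funded"] + school["ft_lt_solo"]
    return jobs / school["total_grads"]

# Invented numbers, for illustration only:
school = {"ft_lt_bar_required": 110, "ft_lt_jd_advantage": 20,
          "ft_lt_school_funded": 5, "ft_lt_solo": 2, "total_grads": 200}
print(f"Measure (a): {outcome_rate(school):.1%}")                             # 51.5%
print(f"Measure (b): {outcome_rate(school, include_jd_advantage=True):.1%}")  # 61.5%
```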

Using those two measures, Washington & Lee’s employment outcomes for 2011 were noticeably mediocre. By nine months after graduation, only 55.0% of the school’s graduates had obtained full-time, long-term jobs that required bar admission. That percentage placed Washington & Lee 76th among ABA-accredited schools for job outcomes. Using the second, broader metric, 64.3% of Washington & Lee’s class secured full-time, long-term positions. But that only nudged the school up a few spots compared to other schools–to 73rd place.

In 2012, the numbers were even worse. Only 49.2% of Washington & Lee’s 2012 graduates obtained full-time, long-term jobs that required a law license, ranking the school 119th compared to other accredited schools. Including JD Advantage jobs raised the percentage to 57.7%, but lowered Washington & Lee’s comparative rank to 127th.

These numbers are depressing by any measure; they are startling when we remember that Washington & Lee currently is tied for twenty-sixth place in the US News ranking. Other schools of similar rank fare much better on employment outcomes.

The University of Iowa, for example, holds the same US News rank as Washington & Lee and suffers from a similarly rural location. Yet Iowa placed 70.8% of its 2012 graduates in full-time, long-term jobs requiring bar admission–more than twenty percentage points better than Washington & Lee. The College of William & Mary ranks a bit below Washington & Lee in US News (at 33rd) and operates in the same state. After excluding solos and school-funded positions (as my formula requires), William & Mary placed 55.9% of its 2012 graduates in full-time, long-term jobs requiring bar admission–significantly better than Washington & Lee’s results.

What’s the Explanation?

Law school employment outcomes vary substantially. Geography, school size, and local competition all seem to play a role. But Washington & Lee’s outcomes are puzzling given both the prominence of its third-year program and the stridency of practitioner calls for more practical training. Just last week, California’s Task Force on Admissions Regulation Reform suggested: “If, in the future, new lawyers come into the profession more practice-ready than they are today, more jobs will be available and new lawyers will be better equipped to compete for those jobs.” (p. 14) If that’s true, why isn’t the formula working for Washington & Lee?

I think we need to explore at least four possibilities. First, and most important, the connection between practical training and jobs may be much smaller than practitioners and bar associations assert. Employers like practice-ready graduates because those new lawyers are cheaper to train; an employer thus might be more likely to hire a practice-ready graduate than a clueless one. Most of those hiring decisions, however, involve choosing among applicants, not creating new positions. A few employers might hire a practice-ready graduate when they wouldn’t have otherwise hired any lawyer, but those job-market gains are likely to be small.

Practice-readiness can even reduce the number of available jobs. If a practice-ready lawyer handles more work than a less-experienced one, her employer may need fewer entry-level lawyers. Even the best-trained new lawyer is unlikely to grow the client base immediately. The number of legal jobs depends much more on client demand and employer entrepreneurship than on the experience that new graduates possess. Maybe the employers recruiting at Washington & Lee have recognized that truth.

Second, even when allocating existing jobs, employers may care less about practical training than they claim. Law school clinicians have noted for years that legal employers rarely demand “clinical experience” as a prerequisite for on-campus interviews. Instead, their campus interviewing forms are more likely to list “top ten percent” or “law review.” Old habits die hard. Employers have maintained for the last few years that “this time we really mean it when we ask for practical skills,” but maybe they don’t.

Third, employers may care about experience, but want to see that experience in the area for which they’re hiring. This possibility is particularly troubling for law schools that are trying to expand clinical and other client-centered offerings. As a professor who teaches both a criminal defense clinic and a prosecution one, I can see the ways in which these experiences apply to other practice areas. A student who learns to discern the client’s individual needs, as our defense lawyers do, can transport that lesson to any practice area. A student who weighs competing interests in deciding whether to prosecute can apply similar skills for any employer.

Unfortunately, however, I don’t think employers always share my impression. Over the years, I’ve had the sense that students from the criminal defense clinic are stereotyped as public defenders, do-gooders, or (worse) anti-establishment radicals–even if they took the clinic for the client counseling, negotiation, and representation experience. Prosecution students don’t encounter the same negative images, but they sometimes have trouble persuading law firms and corporations that they’re serious about practicing corporate law.

No matter how many clinics and simulations a law school offers–and Washington & Lee provides a lot–each student can only schedule a few of these experiences. If a student chooses experiential work in entertainment law and intellectual property, does the student diminish her prospects of finding work in banking or family law? Does working in the Black Lung Legal Clinic create a black mark against a student applying to work later for corporate clients?

I wonder, in other words, if the menu of clinical choices we offer students actually operates against them. Would it be better to cycle all students through a series of required clinical experiences? That’s the way that medical school rotations work. Under that system, would employers better understand that all clinical experience has value for a new lawyer? Would they be less likely to lock graduates into particular career paths based on the clinical experiences they chose? These are questions we need to pursue as we expand experiential education in law schools.

A fourth possible explanation for Washington & Lee’s disappointing employment outcomes is that the students themselves may have developed higher or more specialized career ambitions than their peers at other schools. Some students may have been so excited by their clinical work that they were unwilling to accept jobs in other areas. Others, buoyed by employers’ enthusiasm for practice-ready graduates, may have held out for the most attractive positions on the market. If this explanation has power, then Washington & Lee’s graduates may fare better as more months pass. Maybe practice-ready graduates get better jobs, and perform better for their employers, but the matches take longer to make.

What Do We Learn?

What lessons should we take from Washington & Lee’s 2011 and 2012 employment outcomes? First, the school still deserves substantial credit for its willingness to innovate–as well as for the particular program it chose. If law school remains a three-year, graduate program, then experiential work should occupy a larger segment of the curriculum than it has at most schools in the past. That makes pedagogic sense and, even if experiential learning doesn’t expand the job market, it should produce more thoughtful, well-rounded attorneys.

Second, legal employers should take a hard look at the factors they actually value in hiring. What role does clinical experience really play? Do grades and law review membership still count more? Are employers discounting clinical work done outside their practice area? Are they even holding that work against a candidate? Law schools are engaging in significant introspection about the education they provide; it is time for employers to critically examine their own actions and hiring assumptions.

Third, law schools and employers should work together to design the best type of experiential education–one that prepares graduates for immediate employment as well as long-term success. If employers value a 4-credit externship with their own organization more than 12 credits of clinical work in a different area, we need to grapple with that fact. Schools might decide not to accommodate that desire; we might worry that externships are too narrow (or too exploitative of students) and encourage employers to value other clinical training more highly. On the other hand, we might agree that the best experiential education relates directly to a student’s post-graduate job. Unless we work together, we won’t figure out either the hurdles or the solutions.

Washington & Lee’s employment outcomes are a puzzle that we all need to confront. Graduates from most law schools, even high-ranking ones, are struggling to find good jobs. Experiential education can work pedagogic magic and prepare better lawyers, but it’s not a silver bullet for employment woes or heavy debt. On those two issues, we need to push much harder for remedies.


Hippocrates and the Law

June 15th, 2013 / By

As readers of this blog know, I advocate many changes in legal education. Law schools, however, are not my only source of concern; practitioners and bar associations have also neglected their obligations to new lawyers and under-served clients. On that score, I’m pretty fed up with state bar reports that deplore a loss of “professionalism” and then burden aspiring lawyers with mandatory skills training, apprenticeships, and pro bono service. When are senior lawyers going to take responsibility for unmet legal needs and untrained junior lawyers?

I’ll leave unmet legal needs for another post; this one is about training. Car salesmen have no duty to train other salesmen. Restaurant owners have no obligation to tutor the next generation of restaurateurs. In businesses governed by the free market, newcomers must find (or purchase) education where they can. Professions, however, are different. Because we claim the right to govern ourselves, we assume a responsibility to educate our junior colleagues. That obligation is part of what distinguishes us as professionals.

Hippocrates, a father of professionalism as well as medicine, understood this. The Hippocratic Oath binds a doctor to “impart a knowledge of the art to my own sons, and those of my teachers, and to students bound by this contract and having sworn this Oath to the law of medicine . . . .” A more modern version, recited by many medical school graduates, states: “I will respect the hard-won scientific gains of those physicians in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.”

This intergenerational commitment lies at the heart of professionalism. Unfortunately, however, I don’t hear that commitment from many lawyers today. Instead, I hear that “clients will no longer pay for training.” Clients ultimately pay for everything we do, including the huge sums we spend on legal education, but that’s not the point. The lawyers who blame miserly clients are really saying “I won’t realize as much profit if I spend time training new lawyers, so I’m not going to do it–unless, maybe, the lawyers have to work free for me.” That’s a repudiation of their professional duties.

Look, for example, at this week’s report from the California Bar’s Task Force on Admissions Regulation Reform. The report opens with noble language recognizing that the training of new lawyers “must involve a collaborative effort in which the law school community, practicing lawyers, and the Bar each have a role.” (p. 2) Similarly, the report acknowledges that educating new lawyers “must be a shared endeavor in which burdens are shared and responsibility is shared as well.” (id.)

What, however, will make the practicing bar shoulder these burdens–a task that professionalism already mandates? The report rather artfully squirms around this issue. It notes first that “clients no longer want to pay for that training and are refusing to do so.” (p. 5) “Changes in the economics of the profession,” therefore, “are making it more and more difficult for new lawyers to find the training, hands-on guidance and mentoring that is necessary for a successful transition into practice.” (id.) Senior lawyers, in other words, aren’t willing to compromise profits in order to offer the same levels of training that they received as newcomers to the profession.

Faced with this reality, the California task force doesn’t require senior lawyers to mentor their junior colleagues. Instead, it suggests that “the right incentives and support from the State Bar” might foster voluntary mentoring programs. (p. 7) In particular, the task force recommends offering CLE credit to mentors as a “potentially valuable tool to incent their participation.” (id.)

If we have to “incent” lawyers to mentor junior colleagues, are we still a profession? Isn’t mentoring a responsibility that complements the “privilege of holding a law license”? (id.) The California task force, like similar committees in other states, is very eager for new lawyers to receive “orientation in the values of professionalism.” (id.) But how is that possible when senior lawyers are not voluntarily shouldering their own professional obligation to train their junior colleagues?

In addition to discussing these incentives, the California report focuses on the “administrative capacity” that the bar will need to oversee training, apprenticeship, and other programs imposed on new lawyers. (p. 15) The report says very little about how the bar will train the senior lawyers to do a more effective job training the new ones. One of the problems we have in our profession is that many senior lawyers don’t know how to mentor. When a new lawyer fails to do exactly what the senior lawyer wanted–but may have failed to convey–the senior lawyer often gets exasperated, decides it’s easier to do the task herself, and labels the junior lawyer “worthless.”

For this, I do blame law schools–not because we neglect to train students fully for practice, but because we fail to provide adequate models for professional training. The Socratic method may have some uses in the classroom, but it’s useless in practice. Yet that is the primary model we give our graduates:

Senior Lawyer: Ms. Newbie, file an answer to this complaint!
Newbie: Of course, Mr. Senior. Just tell me, how would I go about doing that?
Senior Lawyer: How do you think you should do it, Ms. Newbie? I’m not going to spoonfeed you.
Newbie: Oh, I see. This is just like law school. I’ll get right on it, sir. [Newbie goes on desperate search for an appropriate nutshell or tips from a slightly more experienced lawyer.]

If we’re going to preserve law as a profession, then we all have to act as professionals. Legal educators must recognize their obligation to train future members of a profession: we must give graduates foundational training in a full range of skills, as well as the ability to mentor themselves and others. Practicing lawyers, meanwhile, must meet their responsibilities to continue that training, even if clients won’t pay directly for the work. That intergenerational compact is essential to any profession. Without it, we’re simply businesses hiding behind anti-competitive restraints of trade.


Organizational Form for Postgraduate Law Firms

June 13th, 2013 / By

John Colombo has posted a useful paper examining the best organizational form for postgraduate law firms created by law schools. Several law schools are exploring that type of firm; we discussed the general idea in several earlier posts. Professor Colombo probes the important tax consequences of organizing these entities, an issue that no school would want to ignore.

Colombo’s analysis suggests, first, that a firm operating as a division of a law school would not endanger the school’s tax-exempt status. Even if the firm charged clients for representation, paid graduates employed by the firm, and generated net revenue, it would not jeopardize that status as long as its activities remained “functionally related to the educational mission of the underlying school.” Colombo offers more detail on meeting that and related IRS tests, but concludes that postgraduate firms should readily pass muster.

Some schools, however, might prefer to establish a law firm as a separate non-profit entity. In particular, schools (and their governing universities) might prefer to isolate the school from liabilities incurred by the firm. Professor Colombo’s analysis, however, shows that it would be difficult for a separate non-profit to qualify for tax-exempt status. The precedents conflict, but “the bulk of these precedents indicate that organizations conducting commercial-like businesses as their primary activity will face considerable hostility from the IRS in seeking exempt status.” Even if a school-related law firm ultimately won the day, few law schools would want a new project like this to face IRS opposition.

Fortunately, there is a solution for schools located in states that allow law practices to function as limited liability companies (LLCs). If the law school creates an LLC to house the firm, with the school as the LLC’s only member, then “the law school will receive the state-law liability protection of a limited-liability entity, while having the tax exemption issues analyzed as though the firm were operated as a ‘division’ of the law school.” The firm, in other words, would share the school’s tax-exempt status.

I can’t pretend to evaluate Professor Colombo’s assessment; I’ve figured out relevant parts of the personal income tax, but don’t have a clue about the taxation of businesses or other organizations. Colombo, however, is a pro in this area, and his analysis is cogent–even readable for those of us who don’t commune daily with the Internal Revenue Code. Tax treatment is only one factor in choosing organizational form, but it’s a significant one. Any law school considering creation of a postgraduate law firm should read Colombo’s concise perspective on organizational form and tax exemption.

