By a narrow vote of 10-9, the ABA’s Legal Education Council has approved a proposal to move back the reporting date for new-graduate employment from nine months after graduation to ten. Kyle and I have each written about this proposal, and we each submitted comments opposing the change. The decision, I think, tells prospective students and the public two things.
First, the date change loudly signals that the entry-level job market remains very difficult for recent graduates, and that law schools anticipate those challenges continuing for the foreseeable future. This was the rationale for the proposal: that large firms are hiring “far fewer entry level graduates,” that “there is a distinct tendency of judges” to seek experienced clerks, and that other employers are reluctant to hire graduates until they have been admitted to the bar.
The schools saw these forces as ones that were unfairly, and perhaps unevenly, affecting their employment rates; they wanted to make clear that their educational programs were as sound as ever. From a prospective student’s viewpoint, however, the source of job-market changes doesn’t matter. An expensive degree that leads to heavy debt, ten months of unemployment, and the need to purchase still more tutoring for the bar is not an attractive degree. Students know that the long-term pay-off, in job satisfaction or compensation, may be high for some graduates. But this is an uncertain time in both the general economy and the regulation of law practice; early-career prospects matter to prospective students with choices.
Second, and more disappointing to me, the Council’s vote suggests a concern with the comparative status of law schools, rather than with the very real changes occurring in the profession. The ABA’s Task Force on the Future of Legal Education has just issued a working paper that calls upon law faculty to “reduce the role given to status as a measure of personal and institutional success.” That’s a hard goal to reach without leadership from the top.
Given widespread acknowledgement that the proposal to shift the reporting date stemmed from changes in the US News methodology, we aren’t getting that leadership. Nor are we getting leadership on giving students the information they need, when they need it. This is another black eye for legal education.
I haven’t been surprised by the extensive discussion of the recent paper by Michael Simkovic and Frank McIntyre. The paper deserves attention from many readers. I have been surprised, however, by the number of scholars who endorse the paper–and even scorn skeptics–while acknowledging that they don’t understand the methods underlying Simkovic and McIntyre’s results. An empirical paper is only as good as its method; it’s essential for scholars to engage with that method.
I’ll discuss one methodological issue here: the small sample sizes underlying some of Simkovic and McIntyre’s results. Those sample sizes undercut the strength of some claims that Simkovic and McIntyre make in the current draft of the paper.
What Is the Sample in Simkovic & McIntyre?
Simkovic and McIntyre draw their data from the Survey of Income and Program Participation (SIPP), a very large survey of U.S. households. The authors, however, don’t use all of the data in the survey; they focus on (a) college graduates whose highest degree is the BA, and (b) JD graduates. SIPP provides a large sample of the former group: Each of the four panels yielded information on 6,238 to 9,359 college graduates, for a total of 31,556 BAs in the sample. (I obtained these numbers, as well as the ones for JD graduates, from Frank McIntyre. He and Mike Simkovic have been very gracious in answering my questions.)
The sample of JD graduates, however, is much smaller. Those totals range from 282 to 409 for the four panels, yielding a total of 1,342 law school graduates. That’s still a substantial sample size, but Simkovic and McIntyre need to examine subsets of the sample to support their analyses. To chart changes in the financial premium generated by a law degree, for example, they need to examine reported incomes for each of the sixteen years in the sample. Those small groupings generate the uncertainty I discuss here.
Confidence Intervals
Statisticians quantify the uncertainty created by samples, especially small ones, by generating confidence intervals. The confidence interval, sometimes referred to as a “margin of error,” does two things. First, it reminds us that numbers plucked from samples are just estimates; they are not precise reflections of the underlying population. If we collect income data from 1,342 law school graduates, as SIPP did, we can then calculate the means, medians, and other statistics about those incomes. The median income for the 1,342 JDs in the Simkovic & McIntyre study, for example, was $82,400 in 2012 dollars. That doesn’t mean that the median income for all JDs was exactly $82,400; the sample offers an estimate.
Second, the confidence interval gives us a range in which the true number (the one for the underlying population) is likely to fall. The confidence interval for JD income, for example, might be plus-or-minus $5,000. If that were the confidence interval for the median given above, then we could be relatively sure that the true median lay somewhere between $77,400 and $87,400. ($5,000 is a ballpark estimate of the confidence interval, used here for illustrative purposes; it is not the precise interval.)
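To make that idea concrete, here is a minimal sketch, in Python, of one common way to put a confidence interval around a sample median: the bootstrap. The incomes below are simulated, not the actual SIPP records, and the procedure is my illustration rather than the method Simkovic and McIntyre used; their intervals come from regression models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: simulated, right-skewed "incomes" centered near the
# $82,400 median reported for the 1,342 JDs -- NOT the actual SIPP data.
incomes = rng.lognormal(mean=np.log(82_400), sigma=0.8, size=1_342)

# Bootstrap the median: resample with replacement many times and look at
# the spread of the resulting medians.
boot_medians = np.array([
    np.median(rng.choice(incomes, size=incomes.size, replace=True))
    for _ in range(5_000)
])

low, high = np.percentile(boot_medians, [2.5, 97.5])
print(f"sample median:    {np.median(incomes):,.0f}")
print(f"95% bootstrap CI: ({low:,.0f}, {high:,.0f})")
```

Even with more than 1,300 simulated incomes, the interval spans a few thousand dollars on either side of the median, which is why a figure like $82,400 should be read as “roughly $82,400.”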
Small samples generate large confidence intervals, while larger samples produce smaller ones. That makes intuitive sense: the larger our sample, the more precisely it will reflect patterns in the underlying population. We have to exercise particular caution when interpreting small samples, because they are more likely to offer a distorted view of the population we’re trying to understand. Confidence intervals make sure we exercise that caution.
Our brains, unfortunately, are not wired for confidence intervals. When someone reports the estimate from a sample, we tend to focus on that particular reported number–while ignoring the confidence interval. Considering the confidence interval, however, is essential. If a political poll reports that Dewey is leading Truman, 51% to 49%, with a 3% margin of error, then the race is too close to call. Based on this poll, actual support for Dewey could be as low as 48% (3 points lower than the reported value) or as high as 54% (3 points higher than the reported value). Dewey might win decisively, the result might be a squeaker, or Truman might win.
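The polling example also shows how the margin of error depends on sample size. The usual back-of-the-envelope formula for a 95% margin of error on a proportion is 1.96 times the square root of p(1-p)/n. Here is a short sketch; the numbers are illustrative, not drawn from any particular poll.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of roughly 1,000 voters carries about a 3-point margin of error;
# quadrupling the sample only cuts that margin in half.
for n in (100, 1_000, 4_000):
    print(f"n = {n:>5}: +/- {100 * margin_of_error(0.51, n):.1f} points")
```

The same square-root relationship explains why the per-year and per-age-group estimates discussed below, which rest on far fewer observations, carry much wider intervals.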
Is the Earnings Premium Cyclical?
Now let’s look at Figure 5 in the Simkovic and McIntyre paper. This figure shows the earnings premium for a JD compared to a BA over a range of 16 years. The shape of the solid line is somewhat cyclical, leading to the Simkovic/McIntyre suggestion that “[t]he law degree earnings premium is cyclical,” together with their observation that recent changes in income levels are due to “ordinary cyclicality.” (pp. 49, 32)
But what lies behind that somewhat cyclical solid line in Figure 5? The line ties together sixteen points, each of which represents the estimated premium for a single year. Each point draws upon the incomes of a few hundred graduates, a relatively small group. Those small sample sizes produce relatively large confidence intervals around each estimate. Simkovic & McIntyre show those confidence intervals with dotted lines above and below the solid line. The estimated premium for 1996, for example, is about .54, but the confidence interval stretches from about .42 to about .66. We can be quite confident that JD graduates, on average, enjoyed a financial premium over BAs in 1996, but we’re much less certain about the size of the premium. The coefficient for this premium could be as low as .42 or as high as .66.
So what? As long as the premiums were positive, how much do we care about their size? Remember that Simkovic and McIntyre suggest that the earnings premium is cyclical. They rely on that cyclicality, in turn, to suggest that any recent downturns in earnings are part of an ordinary cycle.
The results reported in Figure 5, however, cannot confirm cyclicality. The specific estimates look cyclical, but the confidence intervals urge caution. Figure 5 shows those intervals as lines that parallel the estimated values, but the confidence intervals belong to each point–not to the line as a whole. The real premium for each year most likely falls somewhere within the confidence interval for each year, but we can’t say where.
Simkovic and McIntyre could supplement their analysis by testing the relationship among these estimates; it’s possible that, statistically, they could reject the hypothesis that the earnings premium was stable. They might even be able to establish cyclicality with more certainty. We can’t reach those conclusions from Figure 5 and the currently reported analyses, however; the confidence intervals are too wide for certain interpretation. All of the internet discussion of the cyclicality of the earnings premium has been premature.
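A toy simulation illustrates the worry. The sketch below is not the authors’ regression; it simply assumes a perfectly flat premium of 0.54, draws about eighty-four observations per year (1,342 graduates spread across sixteen years), and reports each year’s estimate and interval. The noise level is made up, chosen only so the intervals come out roughly as wide as the ones in Figure 5.

```python
import numpy as np

rng = np.random.default_rng(1)

true_premium = 0.54   # assume a premium that never changes
n_per_year = 84       # roughly 1,342 JDs spread over sixteen years
noise_sd = 0.55       # hypothetical spread of individual JD-vs-BA differences

for year in range(1996, 2012):
    draws = rng.normal(true_premium, noise_sd, size=n_per_year)
    est = draws.mean()
    half_width = 1.96 * draws.std(ddof=1) / np.sqrt(n_per_year)
    print(f"{year}: estimate {est:.2f}, "
          f"95% CI ({est - half_width:.2f}, {est + half_width:.2f})")
```

Run it a few times and the year-to-year estimates rise and fall even though the underlying premium never moves; an apparent cycle, by itself, is not evidence of cyclicality.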
Recent Graduates
Similar problems affect Simkovic and McIntyre’s statements about recent graduates. In Figure 6, they depict the earnings premium for law school graduates aged 25-29 in four different time periods. The gray bars show the estimated premium for each time period, with the vertical lines indicating the confidence interval. Notice how wide those confidence intervals are: The interval for 1996-1999 stretches from about 0.04 through about 0.54. The other periods show similarly extended intervals.
Those large confidence intervals reflect very small sample sizes. The 1996 panel offered income information on just sixteen JD graduates aged 25-29; the 2001 panel included twenty-five of those graduates; the 2004 panel, seventeen; and the 2008 panel, twenty-six graduates. With such small samples, we have very little confidence (in both the everyday and statistical senses) that the premium estimates are correct.
It seems likely that the premium was positive throughout this period–although the very small sample sizes and possible bimodality of incomes could undermine even that conclusion. We can’t, however, say much more than that. If we take confidence intervals into account, the premium might have declined steadily throughout this period, from about 0.54 in the earliest period to 0.33 in the most recent one. Or it might have risen, from a very modest 0.05 in the first period to a robust 0.80 more recently. Again, we just don’t know.
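A rough calculation shows why such tiny samples produce such wide intervals. If a period’s estimate rests on roughly sixteen graduates, and the individual JD-versus-BA differences have a spread of about 0.5 (a hypothetical figure, used only for scale), the 95% interval extends about 0.25 on either side of the estimate; that is the same ballpark as the 0.04-to-0.54 range reported for 1996-1999. The sketch below shows how quickly that half-width would shrink with more graduates in the sample.

```python
import math

def ci_half_width(sd: float, n: int, z: float = 1.96) -> float:
    """Approximate half-width of a 95% confidence interval for a mean."""
    return z * sd / math.sqrt(n)

sd = 0.5  # hypothetical spread of individual differences; only the scaling matters
for n in (16, 25, 100, 1_342):
    print(f"n = {n:>5}: estimate +/- {ci_half_width(sd, n):.2f}")
```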
It would be useful for Simkovic and McIntyre to acknowledge the small number of recent law school graduates in their sample; that would help ground readers in the data. When writing a paper like this, especially for an interdisciplinary audience, it’s difficult to anticipate what kind of information the audience may need. I’m surprised that so many legal scholars enthusiastically endorsed these results without noting the large confidence intervals.
Onward
There has been much talk during the last two weeks about Kardashians, charlatans, and even the Mafia. I’m not sure any legal academic leads quite that exciting a life; I know I don’t. As a professor who has taught Law and Social Science, I think the critics of the Simkovic/McIntyre paper raised many good questions. Empirical analyses need testing, and it is especially important to examine the assumptions that lie behind a quantitative study.
The questions weren’t all good. Nor, I’m afraid, were all of the questions I’ve heard about other papers over the years. That’s the nature of academic debate and refining hypotheses: sometimes we have to ask questions just to figure out what we don’t know.
Endorsements of the paper, similarly, spanned a spectrum. Some were thoughtful; others seemed reflexive. I was disappointed at how few of the paper’s supporters engaged fully with the paper’s method, asking questions like the ones I have raised about sample size and confidence intervals.
I hope to write a bit more on the Simkovic and McIntyre paper; there are more questions to raise about their conclusions. I may also try to offer some summaries of other research that has been done on the career paths of law school graduates and lawyers. We don’t have nearly enough research in the field, but there are some other studies worth knowing.
I was busy with several projects this week, so didn’t have a chance to comment on the new paper by Michael Simkovic and Frank McIntyre. With the luxury of weekend time, I have some praise, some caveats, and some criticism for the paper.
First, in the praise category, this is a useful contribution to both the literature and the policy debates surrounding the value of a law degree. Simkovic and McIntyre are not the first to analyze the financial rewards of law school–or to examine other aspects of the market for law-related services–but their paper adds to this growing body of work.
Second, Simkovic and McIntyre have done all of us a great service by drawing attention to the Survey of Income and Program Participation. This is a rich dataset that can inform many explorations, including other studies related to legal education. The survey, for example, includes questions about grants, loans, and other assistance used to finance higher education. (See pp. 307-08 of this outline.) I hope to find time to work with this dataset, and I hope others will as well.
Now I move to some caveats and criticisms.
Sixteen Years Is Not the Long Term
Simkovic and McIntyre frequently refer to their results as representing “long-term” outcomes or “historic norms.” A central claim of the study, for example, is that the earnings premium from a law degree “is stable over the long term, with short term cyclical fluctuations.” (See slide 26 of the PowerPoint overview.) These representations, however, rest on a “term” of just sixteen years, from 1996-2011. Sixteen years is less than half the span of a typical law graduate’s career; it is too short a period to embody long-term trends.
This is a different caveat from the one that Simkovic and McIntyre express, that we can’t know whether contemporary changes in the legal market will disrupt the trends they’ve identified. We can’t, in other words, know that the period from 2012-2027 will look like the one from 1996-2011. Equally important, however, the study doesn’t tell us anything about the years before 1996. Did the period from 1980-1995 look like the one from 1996-2011? What about the period from 1964-1979? Or 1948-1963?
The SIPP data can’t tell us about those periods. The survey began during the 1980s, but the instrument changed substantially in 1996. Nor do other surveys, to my knowledge, give us the type of information we need to perform those historical analyses. Simkovic and McIntyre didn’t overlook relevant data, but they claim too much from the data they do have.
Note that SIPP does contain data about law graduates of all ages. This is one of the strengths of the database, and of the Simkovic/McIntyre analysis. This study shows us the earnings of law graduates who have been practicing for decades, not just those of recent graduates. That analysis, however, occurs entirely within the sixteen-year window of 1996-2011. Putting aside other flaws or caveats for now, Simkovic and McIntyre are able to describe the earnings premium for law graduates of all ages during that sixteen-year window. They can say, as they do, that the premium has fluctuated within a particular band over that period. That statement, however, is very different from saying that the premium has been stable over the “long term” or that this period sets “historic norms.” To measure the long term, we’d want to know about a longer period of time.
This matters, because saying something has been “stable over the long term” sounds very reassuring. Sixteen years, however, is less than half the span of a typical law graduate’s career. It’s less, even, than the time that many graduates will devote to repaying their law school loans. The widely touted Pay As You Earn program extends payments over twenty years, while other plans structure payments over twenty-five years. Simkovic and McIntyre’s references to the “long term” suggest a stability that their sixteen years of data can’t support.
What would a graph of truly long-term trends show? We can’t know for sure without better data. The data might show the same pattern that Simkovic and McIntyre found for recent years. On the other hand, historic data might reveal periods when the economic premium from a law degree was small or declining. A study of long-term trends might also identify times when the JD premium was rising or higher than the one identified by Simkovic and McIntyre. A lot has changed in higher education, legal education, and the legal profession over the last 25, 50, or 100 years. That past may or may not inform the future, but it’s important to recognize that Simkovic and McIntyre tell us only about the recent past–a period that most recognize as particularly prosperous for lawyers–not about the long term.
Structural Shifts
Simkovic and McIntyre discount predictions that the legal market is undergoing a structural shift that will change lawyer earnings, the JD earnings premium, or other aspects of the labor market. Their skepticism does not stem from examination of particular workplace trends; instead it rests largely on the data they compiled. This is where Simkovic and McIntyre’s claim of stability “over the long term” becomes most dangerous.
On pp. 36-37, for example, Simkovic and McIntyre list a number of technological changes that have affected law practice, from “introduction of the typewriter” to “computerized and modular legal research through Lexis and Westlaw; word processing; electronic citation software; electronic document storage and filing systems; automated document comparison; electronic document search; email; photocopying; desktop publishing; standardized legal forms; will-making and tax-preparing software.” They then conclude (on p. 37) that “[t]hrough it all, the law degree has continued to offer a large earnings premium.”
That’s clearly hyperbole: We have no idea, based on the Simkovic and McIntyre analysis, how most of these technological changes affected the value of a law degree. Today’s JD, based on a three-year curriculum, didn’t exist when the typewriter was introduced. Lexis, Westlaw, and word processing have been around since the 1970s; photocopying dates further back than that. A study of earnings between 1996 and 2011 can’t tell us much about how those innovations affected the earnings of law graduates.
It is true (again, assuming for now no other flaws in the analysis) that legal education delivered an earnings premium during the period 1996-2011, which occurred after all of these technologies had entered the workforce. Neither typewriters nor word processors destroyed the earnings that law graduates, on average, enjoyed during those sixteen years. That is different, however, from saying that these technologies had no structural effect on lawyers’ earnings.
The Tale of the Typewriter
The lowly typewriter, in fact, may have contributed to a major structural shift in the legal market: the creation of three-year law schools and formal schooling requirements for bar admission. Simkovic and McIntyre (at fn 84) quote a 1901 statement that sounds like a melodramatic indictment of the typewriter’s impact on law practice. Francis Miles Finch, the Dean of Cornell Law School and President of the New York State Bar Association, told the bar association in 1901 that “current conditions are widely and radically different from those existing fifty years ago . . . the student in the law office copies nothing and sees nothing. The stenographer and the typewriter have monopolized what was his work . . . and he sits outside of the business tide.”
Finch, however, was not wringing his hands over new technology or the imminent demise of the legal profession; he was pointing out that law office apprentices no longer had the opportunity to absorb legal principles by copying the pleadings, briefs, letters, and other work of practicing lawyers. Finch used this change in office practices to support his argument for new licensing requirements: He proposed that every lawyer should finish four years of high school, as well as three years of law school or four years of apprenticeship, before qualifying to take the bar. These were novel requirements at the turn of the last century, although a movement was building in that direction. After Finch’s speech, the NY bar association unanimously endorsed his proposal.
Did the typewriter single-handedly lead to the creation of three-year law schools and academic prerequisites for the bar examination? Of course not. But the changing conditions of apprentice work, which grew partly from changes in technology, contributed to that shift. This structural shift, in turn, almost certainly affected the earnings of aspiring lawyers.
Some would-be lawyers, especially those of limited economic means, may not have been able to delay paid employment long enough to satisfy the requirements. Those aspirants wouldn’t have become lawyers, losing whatever financial advantage the profession might have conferred. Those who complied with the new requirements, meanwhile, lost several years of earning potential. If they attended law school, they also transferred some of their future earnings to the school by paying tuition. In these ways, the requirements reduced earnings for potential lawyers.
On the other hand, by raising barriers to entry, the requirements may have increased earnings for those already in the profession–as well as for those who succeeded in joining. Finch explicitly noted in his speech that “the profession is becoming overcrowded” and it would be a “benefit” if the educational requirements reduced the number of lawyers. (P. 102.)
The structural change, in other words, probably created winners and losers. It may also have widened the gap between those two groups. It is difficult, more than a century later, to trace the full financial effects of the educational requirements that our profession adopted during the first third of the twentieth century. I would not, however, be as quick as Simkovic and McIntyre to dismiss structural changes or their complex economic impacts.
Summary
I’ve outlined here both my praise for Simkovic and McIntyre’s article and my first two criticisms. The article adds to a much-needed literature on the economics of legal education and the legal profession; it also highlights a particularly rich dataset for other scholars to explore. On the other hand, the article claims too much by referring to long-term trends and historic norms; this article examines labor market returns for law school graduates during a relatively short (and perhaps distinctive) recent period of sixteen years. The article also dismisses too quickly the impact of structural shifts. That is not really Simkovic and McIntyre’s focus, as they concede. Their data, however, do not provide the type of long-term record that would refute the possibility of structural shifts.
My next post related to this article will pick up where I left off, with winners and losers. My policy concerns with legal education and the legal profession focus primarily on the distribution of earnings, rather than on the profession’s potential to remain profitable overall. Why did law school tuition climb aggressively from 1996 through 2011, if the earnings premium was stable during that period? Why, in other words, do law schools reap a greater share of the premium today than they did in earlier decades?
Which students, meanwhile, don’t attend law school at all, forgoing any share in law school’s possible premium? For those who do attend, how is that premium distributed? Are those patterns shifting? I’ll explore these questions of winners and losers, including what we can learn about the issues from Simkovic and McIntyre, in a future post.
Clearly, Simkovic and McIntyre’s article has given new life to those who would defend the status quo. However, even assuming the statistical methodology is sound (which I do, as I have no reason to believe otherwise and no time to recreate it), the study suffers from a number of crucial weaknesses.
First, Part IV makes the assumption that current market challenges reflect no more than the historically cyclical nature of the legal market. If you do not agree with this assumption (and I do not–I think Susskind’s view on this issue is far more sound), then the entire study is fundamentally flawed. However, even if you buy this assumption, there remain further issues with the study.
The title itself, the “Million-Dollar Law Degree,” is misleading at best. This million-dollar figure reflects the mean value, where the mean is skewed significantly higher than the median. Thus, it overstates the value for significantly more than half of all JD grads. It also reflects “pre-tax” value, a point that the authors do not address until near the end of the article at Part V.C. There, the authors acknowledge that their calculated benefit must be divided between private “after-tax” earnings and public tax revenues.
I won’t spend much time summarizing the new paper by Michael Simkovic, an associate law professor at Seton Hall University School of Law, and Frank McIntyre, an assistant professor of finance and economics at Rutgers University Business School. Inside Higher Ed summarized the report just fine.
Instead, I want to comment on what I see as a misguided attempt to quell critics claiming that the law school investment is not a sound choice for many people. I hope Professor Simkovic and Professor McIntyre are correct that, on average and even down to the 25th percentile, the law school investment makes financial sense.
It just completely misses the point and grossly under-appreciates the human element.
Rather than viewing law degree holders in isolation, we can get better estimates of the causal effect of education by comparing the earnings of individuals with law degrees to the earnings of similar individuals with bachelor’s degrees while being mindful of the statistical effects of selection into law school.
Unfortunately, law degree holders are individuals who are sometimes (perhaps often) hurt by going to law school. Talking about groups necessarily smooths over the stories underneath the data—the ones that make you feel good and the ones that make you sick to your stomach. The reality is that there are many people who have been hurt and are hurt right now as a direct consequence of the costs associated with entering the legal profession (or trying to). These graduates very well may make more money in the long run. But this is hardly comforting to those considering law school and those who care about the people who do.
As I told Inside Higher Ed, law schools have made a habit out of capturing as much value out of their students as possible—and for a long time, used deceptive and immoral marketing tactics to do so. The dynamics are changing and should change because of the outrageously high price of obtaining a legal education. Even if an analysis shows an investment has a positive net present value in the long run, people are not like corporations. The short-term matters more for real people. Tens of thousands of law graduates leave school each year wondering how they’re going to manage to pay off their six-figure loans. That’s what motivates critics and frightens prospective law students.
Long-term value is not irrelevant, but using it to argue that education isn’t priced too high troubles me. If we think our society and our country are better for having an educated population, as these two authors do, then we had better stop pricing people out of education.
I wrote last week about a group of states that are using a “linked-records” method to collect detailed salary information for graduates of higher education. The method has some flaws, but it is improving rapidly. The databases, meanwhile, already contain information about graduates of fifteen law schools spread over five states. Let’s take a look, starting alphabetically with Arkansas.
Arkansas has two ABA-accredited law schools: the University of Arkansas at Fayetteville and the University of Arkansas at Little Rock. Both schools place a substantial majority of their graduates with employers in Arkansas, making them excellent candidates for the linked-records system. For the class of 2012, according to ABA data, 81 of Fayetteville’s 119 employed graduates (68.1%) took their first jobs in Arkansas. For the Little Rock campus, the figure was 85.3% (93 out of 109 employed graduates).
Average Salaries in Law
What salaries did those graduates earn? The College Measures database doesn’t have figures yet for 2012 graduates–or even for 2011 ones in Arkansas. But it does report the average first-year earnings of graduates from the classes of 2006 through 2010 who stayed in-state to work. For the Fayetteville campus, the average was $45,745, and for the Little Rock campus it was $47,060.
Those averages come with all of the caveats I mentioned in my earlier post: They exclude graduates working out of state, graduates holding federal jobs, and self-employed graduates. Perhaps most important, those averages include the legal market’s boom years of 2006 through 2009, along with just one down year. When the database incorporates salaries for the classes of 2011 through 2013, the averages may be lower.
Comparisons with Other Programs
Even including those boom years, however, the salaries of Arkansas law graduates suffer in comparison to starting salaries in other advanced degree programs. The database included sufficient salary data for three of the Little Rock campus’s PhD programs: higher education administration, educational leadership, and physical sciences. The average starting salary in each of those programs was higher than in law, ranging from $52,726 in physical sciences to $72,134 in educational leadership.
To be fair, doctoral candidates in educational leadership or higher education administration often have significant workplace experience; they’re less likely than law students to move directly from college to graduate school. The salaries for these PhDs, therefore, may partly reflect their workplace experience–not just the value of the degree. Still, eight of Little Rock’s undergraduate programs produced higher starting salaries than its law school did, topping out at $65,978 for registered nurses.
The story is similar at the Fayetteville main campus. There, five of seven doctoral programs produced higher starting salaries than law–and a sixth came within $500 of law. I was surprised to see that the starting salaries of Arkansas law graduates compare unfavorably with those of graduates holding doctorates in adult and continuing education (average starting salary of $58,013), educational leadership ($85,245), and public policy analysis ($68,425). Even a master’s degree in political science produced an average starting salary ($44,202), within shouting distance of a law salary.
Equally depressing comparisons come from the University of Arkansas’s medical sciences campus. Dental hygienists with just an associate’s degree averaged higher starting salaries ($49,644) than law graduates from either Arkansas campus. A master’s in public health garnered, on average, $56,074. And doctors of pharmacy out-earned almost everyone with an average starting salary of $104,977.
Some of these careers, of course, may reach salary plateaus; it’s possible that Arkansas’s law graduates will earn more as their experience mounts. Even at the entry level, an Arkansas law degree continues to produce higher earnings than most undergraduate degrees. College graduates from the Fayetteville campus averaged just $33,956 during their first year in the workforce.
NALP Data
How do the linked-records salaries compare to ones reported to NALP? I couldn’t find salary information on either Arkansas law school’s website, but NALP’s Jobs and JDs book, available in hard cover, offers some interesting data. In 2007, law graduates working full-time in Arkansas reported an average salary of $49,966. That’s higher than the rolling averages compiled through the linked-records method, but not too far off. (Note that the NALP figures refer to all law graduates working in Arkansas, while the linked-records data include all Arkansas law graduates working in Arkansas. The salary pools, however, should be comparable.)
For 2011, on the other hand, NALP’s reported salaries seem quite high for Arkansas jobs. The reported mean is $52,806–more than five thousand dollars higher than either linked-records average for the boom years. It’s possible that the highest paying legal jobs in Arkansas are going to graduates of out-of-state schools. But it’s also quite likely, as NALP and law schools acknowledge, that the NALP-reported salaries skew high. That’s a good reason to support continued development of other methods for tracking salaries.
Below Minimum Wage
The last piece of information from the Arkansas linked-records database is particularly interesting. When calculating average salaries, Arkansas excluded any graduates who earned less than $13,195 per year, which is the state’s minimum wage threshold. Most employees earning less than that threshold are part-time or temporary workers. Including those salaries in a calculation of average full-time earnings would unfairly depress the average, so the researchers excluded these “below minimum wage” workers from the calculations.
Arkansas, however, does report the number of these “below minimum wage” workers for each degree program. Those numbers are depressingly high for the two law schools. Fifty-two of Little Rock’s graduates, 8.4% of all students who graduated between 2006 and 2010, earned less than $13,195 for the year that started six months after their graduation date. The percentage was the same for the Fayetteville campus: fifty-five graduates, or 8.4% of those who graduated between 2006 and 2010, earned less than the minimum-wage threshold once they entered the workforce. That’s one in every twelve law graduates.
A few of these graduates may have worked in Arkansas for a few months and then moved to another state; that would produce a small amount of earnings in the Arkansas database. Others may have worked part-time for employers to supplement a solo practice or freelance work. The one in twelve figure, on the other hand, doesn’t include graduates who subsisted entirely on freelance wages or who found no paying work at all; those graduates don’t appear at all in the linked-records database.
Observations
What do we make of these data? The linked-records databases, like other sources of employment information, are incomplete. It is particularly difficult to distinguish unemployed graduates from those who have moved to other states–or to determine salary levels for the latter group of graduates. If researchers ultimately link databases across states, those connections would greatly improve the available information.
This brief examination of Arkansas data, meanwhile, illustrates the kind of comparisons facilitated by linked-records databases. Starting salaries for law graduates exceed those for most (although not all) college majors, but they lag behind salaries for many other advanced-degree holders. As we continue to debate reforms in legal education, we have to remember the options available to prospective students. Starting salaries are an important element in that calculus, one that students will be able to track more easily with databases like the ones available through College Measures.
Legal educators on several blogs have been discussing the ABA’s decision to eliminate expenditure data from the annual questionnaire completed by law schools. I called Scott Norberg, Deputy Consultant to the ABA’s Section of Legal Education and Admissions to the Bar, to find out more about the change.
Professor Norberg noted that the expenditure elimination is part of a larger project to slim down the annual questionnaire. Most of the changes went into effect last year, but the Section’s Council waited a year to implement elimination of the expenditure section. No objections arose to the proposed change, so the Council adopted it for this fall’s questionnaire.
Although the annual questionnaire will no longer ask explicitly about expenditures, it does request information about a law school’s reserve funds and debt (p. 7). These questions will allow the ABA to identify schools that may be in financial trouble, without needing more detailed expenditure data every year.
That’s a relief from a consumer protection perspective. But do we have to worry now that US News will incorporate financial reserves or debt level into its ranking scheme? I’m not sure I even want to think about that one.
Last week the ABA notified law school deans that it will no longer request annual information about each school’s expenditures. Schools will report three years of expenditures in connection with site visits, but the annual reporting of expenditures has been eliminated (see p. 4).
H/t to TaxProf and Brian Leiter for this breaking news. Now, what does the change mean for ABA data collection, legal education, and the US News rankings?
Background: The Annual Questionnaire
The ABA collects data from law schools every year through its annual questionnaire. That instrument, revised annually by the Council’s Data Policy & Collection Committee, gathers information about enrollment, courses, faculty composition, and other issues related to legal education. At least within recent years, the questionnaire has asked schools about both revenues and expenditures. The 2013 questionnaire will ask only about overall revenues, not overall expenditures.
The revised instrument still asks about two specific expenditures: money spent on library operations and money spent for student scholarships, grants, or loans. It does not, however, require schools to report other expenditures–such as money spent on salaries, conferences, coffee, and all of the other matters that make up a law school budget.
Going Forward: Data, the ABA, and Legal Education
I’m puzzled that the ABA has chosen to eliminate expenditures from the annual questionnaire, especially given the contemporary budget crunch at many law schools. Responding to the questionnaire tormented me when I was an associate dean, so I don’t advocate mindless data collection. The information collected by the ABA, however, seems to serve numerous valuable purposes. Questionnaire results help track the overall health of legal education, inform accreditation standards, and offer perspectives on policy issues related to law schools. The instructions to the fiscal portion of the questionnaire also suggest that the ABA uses this information to monitor the fiscal health of individual schools. Given the ABA’s role in protecting students, that is an important goal.
Given this range of objectives, why will the ABA continue to collect annual information about law school revenues, but not expenditures? Law schools seem to be facing unprecedented budgetary strain. In times like this, wouldn’t the ABA want to know both revenues and expenditures–so that it could gauge the financial course of legal education? As the Task Force on the Future of Legal Education finalizes its recommendations, wouldn’t it want to know how badly law schools are hurting? And as the Standards Review Committee considers the costs imposed by some accreditation measures, wouldn’t it be useful to know whether law schools are operating in the red?
I’m not suggesting that the ABA should distribute scorecards revealing the financial health of each law school. But wouldn’t aggregate data on revenue, expenditures, and the gap between the two be particularly useful right now? Annual reports of revenue give us some measure of our industry’s health, but expenditure figures are just as important. How else will we know whether schools are able to adapt to flat or declining revenues?
There’s also the matter of protecting students at individual schools. Each school will have to demonstrate its financial health during site visits, but those visits occur every seven years. Seven years is a long time–plenty long enough for a school to sustain significant financial damage and endanger the education of enrolled students. If the ABA is going to monitor anything, shouldn’t it check both revenues and expenditures on an annual basis?
I understand that many educators are celebrating elimination of the expenditures section, largely because of the US News effect discussed below. I assume, however, that the questionnaire once served purposes other than generating data for US News. Are we sure that we want to reduce our information about the financial health of legal education? Now?
Going Forward: US News
Against all reason, US News has long used expenditures as a significant component of its law school rankings. Expenditures currently account for 11.25% of the ranking formula. This component of the rankings has rightly provoked criticism from dozens, if not hundreds, of legal educators. The ABA’s elimination of expenditures from its annual questionnaire might be an attempt to discourage US News from incorporating this information.
If that’s the ABA’s motive, will the gambit work? It seems to me that US News has at least four options:
1. Continue to ask law schools to supply expenditure data. US News already asks for information that the ABA doesn’t request; it has no obligation to track the ABA’s questionnaire. Calculating expenditures takes time if you’re trying to game the system (or at least keep up with other schools that are gaming the system); the school has to think of every possible expenditure to include. Gamesmanship aside, however, it would be hard for a dean to claim with a straight face that a request for expenditures was too burdensome to meet. If a school isn’t tracking its annual expenditures, and doesn’t have a computer program that will spit those numbers out on demand, that’s really all we need to know about the school.
I hope US News doesn’t pursue this approach. I agree with all of the critics that expenditures serve no useful purpose in a ranking of law schools (even assuming that a ranking itself serves some useful purpose). It seems to me, however, that US News could easily maintain its ranking system without the ABA’s question on school expenditures.
2. Reconfigure the ranking formula to include just library and student aid expenditures. The ABA questionnaire, rather curiously, continues to ask for data on library and student aid expenditures. US News, therefore, could decide to plug just these expenditures into its ranking formula. The formula already does count student aid expenditures separately, so there’s precedent for that.
This approach would be even worse than the first option. Giving library expenditures extra weight would tempt law schools to increase spending in a part of the budget that many critics already think is too large. Creating incentives for additional student aid sounds beneficent, but it would fuel the already heated arms race to snare credentials with scholarship money. We need to wind that race down in legal education, not extend it further.
3. Replace expenditures with revenues. Since the ABA questionnaire still asks for each school’s annual revenue, US News could incorporate that figure into its ranking formula. This approach might be marginally more rational than the focus on expenditures: Schools with more money may be able to provide more opportunities to their students. Focusing on revenues, furthermore, would not penalize schools that saved some of their revenue for a rainy day.
On the other hand, this criterion would continue to bias the rankings in favor of wealthy, well established, and private schools. It would also invite the same type of gamesmanship that schools have demonstrated when reporting expenditures.
4. Eliminate money as a factor. This is my preferred outcome, and I assume that it is the one most educators would prefer. Expenditures don’t have a role in judging the quality of a law school, and they’re a source of endless manipulation. Both law schools and their consumers would be better off if we rid the rankings of the expenditures factor.
Conclusion
US News will do whatever it chooses to do. Years of entreaties, rants, and denunciation haven’t stopped it from incorporating expenditures into its law school ranking. I’m doubtful that the ABA’s change will suddenly bring US News to its senses. Meanwhile, I’m very worried about how we’re going to inform legal educators, regulators, and potential students about the financial health of law schools. Revenues are fun to count, but running a law school requires expenditures as well.
Law school critics have pressed schools to produce better information about the salaries earned by their graduates. Existing sources, as we know, provide incomplete or biased information. The Bureau of Labor Statistics (BLS) gathers data about lawyers’ salaries, but those reports omit solo practitioners, law firm partners, and law graduates who don’t practice law. Nor can we break down the BLS data to identify earnings by new lawyers or by graduates of particular schools.
The salary information gathered by NALP, in contrast, focuses on new graduates, includes graduates in non-practice jobs, and can be tied to particular schools (if a school chooses to publish its data). But these figures suffer from significant selection bias; NALP warns that these salaries “are biased upwards.”
Better salary information, however, is on the way. Researchers in other fields have found a new way to gather salary data about graduates of degree programs. The method hinges on the fact that employers pay unemployment taxes for each individual they employ. These taxes fund the pools used to support unemployment compensation. The government wants to make sure that it gathers its fair share of taxes, so employers report the wages they pay each individual. State unemployment compensation agencies, therefore, possess databanks of social security numbers linked to wages.
Educational institutions, similarly, possess the social security numbers of their graduates. It is possible, therefore, to use SSNs to link graduates with their salaries. The researchers doing this, of course, don’t examine the salaries of individual graduates. Instead, this “linked-records” approach allows them to generate aggregate salary data about graduates by college, major, year of degree, and several other criteria. The method also allows researchers to track salaries over time, both to see how entry-level salaries change and to track income as graduates gain workplace experience. For a brief overview of the method, see this paper from Berkeley’s Center for Studies in Higher Education.
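Conceptually, the linking step is just a join and an aggregation. The sketch below is my own simplified illustration, not any state’s actual system: the records and field names are hypothetical, and the real programs de-identify Social Security numbers and release only aggregate figures.

```python
from collections import defaultdict

# Hypothetical records -- not real data. Real systems de-identify SSNs.
graduates = [
    # (ssn, school, degree_program, graduation_year)
    ("111-11-1111", "State U Law", "JD", 2010),
    ("222-22-2222", "State U Law", "JD", 2010),
    ("333-33-3333", "State U", "BA Accounting", 2010),
]

wage_records = [
    # (ssn, calendar_year, wages one employer reported for UI taxes)
    ("111-11-1111", 2011, 48_000),
    ("222-22-2222", 2011, 20_000),
    ("222-22-2222", 2011, 9_500),   # a second job: wages are summed per person
    ("333-33-3333", 2011, 36_000),
]

# Total each person's wages for the year, then average by school and program.
person_wages = defaultdict(float)
for ssn, year, wages in wage_records:
    if year == 2011:
        person_wages[ssn] += wages

by_program = defaultdict(list)
for ssn, school, program, grad_year in graduates:
    if ssn in person_wages:
        by_program[(school, program)].append(person_wages[ssn])

for (school, program), totals in by_program.items():
    print(f"{school} / {program}: first-year average {sum(totals)/len(totals):,.0f}")
```

Note that the second graduate’s two jobs are combined into one annual total; that detail matters for the underemployment discussion further below.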
The linked-record approach has the potential to generate very nuanced information about the financial pay-off of different educational programs. Salary information, in fact, is already available for several law schools. Before we get to that, however, let’s look more closely at the method’s wider application and its current limits.
Applications
California has used this research method to generate an extensive database of salary outcomes for graduates of its community college programs. Using the online “salary surfer,” you can discover that the highest earning graduates from those programs are individuals who earn a certificate in electrical systems and power transmission. Those graduates average $93,410 two years after certification and $123,174 five years out.
If you’re not willing to climb utility poles or hang out with high voltage wires, a plumbing certificate also pays off reasonably well in California, generating an average salary of $65,080 two years after graduation. That certificate, however, doesn’t seem to add more value with time–at least not during the early years of a career. Average salary for certified plumbers rises to just $65,299 five years after graduation.
Community college degrees in health-related work also generate substantial salaries. Degrees in the humanities, fine and applied arts, cosmetology, and travel services, on the other hand, are poor bets financially. Paralegal training falls in the middle: A paralegal degree from a California school yields an average salary of $38,191 two years after graduation and $42,332 five years out. Paralegal certificates, notably, generate higher wages. Those paralegals average $41,546 two years after certification and $47,674 after five years. I suspect that premium occurs because the certificate earners already hold four-year college degrees; they combine the paralegal certificate with a BA to earn more in the workplace.
You can spend hours with the California database, exploring the many subjects that community colleges teach and the varied financial pay-offs for those degrees. Let’s move on, however, to a much broader database.
The research organization College Measures is working with several states to identify salary outcomes for all types of post-secondary degrees. This database, like the one for California community colleges, relies upon the linked-records data collection method described above. The College Measures site currently includes schools in Arkansas, Colorado, Tennessee, Texas, and Virginia–with Florida and Nevada coming soon. The database doesn’t include every school or degree program in these states, but coverage is growing. Here are just a few findings to illustrate the detail available on the site:
* Chicken farming is a staple of the Arkansas economy, and the University of Arkansas’s main campus offers a BA in poultry science. Those degree holders average $37,251 during their first year after college–a little more than accounting BAs from the same campus can expect to earn ($36,681).
* Arkansas, however, teaches much more than poultry science and accounting. Some of the highest earning graduates major in chemical engineering ($56,655), physics ($48,820), computer engineering ($45,589), and economics ($43,739). If you want to maximize income after graduation, on the other hand, stay away from majors in audiology ($20,417), classics ($20,842), and drama ($22,629).
* Moving to the Texas portion of the site, you won’t be surprised to discover that the most remunerative BA offered by the University of Texas at Austin is in Petroleum Engineering. Those graduates average $115,777 during their first year out of school.
* The least financially rewarding BAs from the UT-Austin campus, at least initially, are general music performance ($11,098), Arabic Language and Literature ($17,192), and General Visual and Performing Arts ($17,749).
You can find similar results for other majors and schools in these states, as well as for schools in Colorado, Tennessee, and Virginia. Before continuing, however, let’s examine several key limits on the currently available data.
Limits
1. One State at a Time. The linked-records databases currently operate only within a single state: they can only identify salaries for graduates who work in the same state where they attended school. The Colorado database, for example, includes both of the state’s ABA-accredited law schools–but it reports only salaries for graduates who worked in Colorado the year after graduation.
This constraint will understate salaries for law schools that send a large number of graduates to other states for high-paying jobs. If Connecticut creates a database, for example, Yale Law School will receive no credit for the salaries of graduates who work in Massachusetts, New York, the District of Columbia, and other states. The University of Texas’s law school, currently included in the College Measures database, receives credit for salaries earned at BigLaw firms in Dallas or Houston–but not for those earned in Chicago, Los Angeles, or New York.
Researchers are working to overcome this limit by linking databases nationally. I suspect that will happen within the next year or two, making the linked-records method much more comprehensive. Meanwhile, the “one state” limit casts doubt on salary results for schools with a large number of graduates who leave the state.
For many law schools, however, even single-state salary reports can yield useful information. Most law schools place the majority of their graduates in entry-level jobs within the same state. All of the Texas law schools place more than half of their graduates with Texas employers. The same is true for the Arkansas law schools, Colorado schools, and two of the three Tennessee schools. Among the states for which linked-records data are currently available, only the Virginia law schools send a majority of their graduates out of state.
For law schools that place a majority of their graduates in-state, the linked-record databases provide a welcome perspective on a wide range of salaries. These databases include jobs with small law firms, local government, and small businesses. They will also identify law graduates with jobs outside of law practice. That’s a much wider scope than the salaries reported to NALP, which disproportionately represent large law firm jobs. Even if some of a school’s graduates leave the state, this in-state salary slice is likely to give prospective students a realistic perspective on the range of salaries earned by a school’s graduates.
2. Rolling Five-Year Averages. The linked-records databases report five-year averages, rather than average salaries for a single graduating class. This feature preserves anonymity in small programs and makes the data less “noisy.” The technique, however, can also mask dramatic market shifts.
This is particularly problematic in law, because average salaries rose dramatically from 2005 through 2009, and then plunged just as precipitously. Most of the states included in the College Measures database report the average salary for students who graduated in 2006 through 2010. For law graduates, those years include at least three high-earning years (2007 through 2009) and just one post-recession year (2010). The outdated averages on the College Measures site almost certainly overstate the amounts earned by more recent law school classes.
This problem, in my opinion, makes the salaries currently reported for law schools unreliable as predictors of current salaries. On the other hand, the data could be useful for other purposes. It would be instructive, for example, to compare each school’s linked-record average with an average of the salaries that school reported to NALP over the same five years. That comparison might indicate the extent to which NALP-reported salaries skew high. Within a few years, meanwhile, the linked-records databases will offer more useful salary projections for students considering law school. They will also help us see the extent to which salaries for law graduates have shifted over time.
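A toy example shows how a five-year pool can mask a sharp drop. The class-by-class figures below are invented for illustration; the real databases pool individual wage records rather than class averages, but the masking effect is the same.

```python
# Hypothetical first-year salaries by graduating class (not real data):
# strong years through 2009, then a sharp post-recession drop.
class_averages = {2006: 50_000, 2007: 52_000, 2008: 53_000,
                  2009: 51_000, 2010: 40_000}

pooled = sum(class_averages.values()) / len(class_averages)
print(f"2006-2010 pooled average: {pooled:,.0f}")   # about 49,200
print(f"class of 2010 average:    {class_averages[2010]:,.0f}")
```

A prospective student looking only at the pooled figure would overestimate what the most recent class actually earned.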
3. Un- and Under-Employed Graduates. The linked-records databases do not reveal how many graduates are unemployed. Graduates who are missing from a state’s records may be unemployed or they may be working in another state. Researchers currently have no way to distinguish those two statuses.
As the research becomes more sophisticated, and especially if researchers are able to link records nationally, this problem will decrease. For now, users of the database have to remember that salaries reflect averages for employed graduates. Users need to search separately for the number of a school’s unemployed graduates.
For law schools, those figures are relatively easy to obtain because they appear on each school’s ABA employment summary. By combining that resource with the College Measures information, prospective students and others can determine the percentage of a law school’s graduates who were employed nine months after graduation, as well as the average salaries earned by graduates who worked in the same state as the school.
Underemployed graduates, those working in part-time or temporary jobs, do appear in most of the linked-record databases. This is a major advantage of the linked-record method: the method calculates each graduate’s annual earnings, even if those wages came from part-time or temporary work. If a graduate worked at more than one job, the linked records will aggregate wages from each of those jobs. The results won’t reveal how hard graduates had to work to generate their income, but database users will be able to tell how much on average they earned.
4. Excluded Workers. In addition to the caveats discussed above, the linked-records databases omit two important categories of workers. Most lack information about federal employees, although some states have started adding that information. Within a year or two, federal salaries should be fully integrated with other wages. For law school graduates, meanwhile, salaries for the most common federal jobs are already well known.
More significant, the linked-record databases do not include information about the self-employed. This omission matters more in some fields than in others. Utility companies employ the workers who repair high-voltage power lines; you won’t find many freelancers climbing utility poles. Plumbers, on the other hand, are more likely to set up shop for themselves.
For recent law graduates, the picture is mixed. Relatively few of them open solo practices immediately after graduation, but a growing number may work as independent contractors. The latter group, notably, may include graduates who receive career exploration grants from their schools. Depending on how those grants are structured, the graduates may not count as “employees” of either the school or the organization where they work; instead, they may be independent contractors. If that’s the case, their wages will not appear in the linked-record databases.
As experience grows with linked-record databases, it will be possible to determine how many law graduates fall outside of those records. It should be possible, for example, to compare the number of graduates who report in-state jobs to their schools with the number of in-state salaries recorded in a linked-record database. The difference between the two numbers will represent graduates who work as solos or independent contractors. The researchers creating these databases may also find ways to incorporate earnings data about self-employed graduates.
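Here is a minimal sketch of that comparison, again with hypothetical counts for an imaginary school.

```python
# Hypothetical counts for one imaginary school: in-state jobs that graduates
# reported to the school versus salaries matched in the state's linked records.
reported_in_state_jobs = 150     # from school-reported employment data (hypothetical)
matched_in_state_salaries = 135  # from the linked-record database (hypothetical)

# The difference suggests graduates working as solos or independent contractors.
unmatched_graduates = reported_in_state_jobs - matched_in_state_salaries
print(f"Graduates outside the linked records: {unmatched_graduates}")  # 15
```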
What About Law Schools?
Tomorrow, I will discuss salary information reported for the fifteen law schools currently included in the College Measures database. If you’re impatient, just follow the links. Those specific results, however, matter less than the overall scope of this salary-tracking method. The linked-record method promises much more sophisticated salary information than educational institutions have ever gathered on their own. The salaries can be tied to specific schools and degree programs. We (as well as prospective students and policymakers) will be able to compare financial outcomes across fields, schools, and states. As the databases grow in size, we will also be able to track salaries five, ten, fifteen, or twenty years after graduation. That amount of information is breathtaking–and a little scary.
The American Academy of Arts and Sciences has released a Report stressing the need to deepen education in the humanities and social sciences. The Report declares that these disciplines “teach us to question, analyze, debate, evaluate, interpret, synthesize, compare evidence, and communicate—skills that are critically important in shaping adults who can become independent thinkers.” (p. 17) I agree with that assertion, which is why I’m so disturbed by the way this Report analyzes and communicates evidence.
The Report’s Introduction sounds an ominous tone: “we are confronted with mounting evidence, from every sector, of a troubling pattern of inattention that will have grave consequences for the nation.” (p. 19). The Report then cites three pieces of evidence to illustrate the “grave consequences” we face. The Executive Summary stresses the same three warning signs and concludes: “Each of these pieces of evidence suggests a problem; together, they suggest a pattern that will have grave, long-term consequences for the nation.”
What are these three pieces of evidence? Presumably they were the most persuasive and best documented points that the Report’s authors could find. As I explain below, however, each claim is incorrect, incomplete, or misleading. That’s an embarrassing record for a blue-ribbon commission of distinguished educators, humanists, and social scientists.
Equally troubling, the misstatements represent much of what I hear from higher education today: a constant refrain of exaggerated claims about the academy’s worth, buttressed by misleading interpretations of the factual record. This Report is selling the academy, not analyzing or synthesizing evidence. Let me walk you through the three claims to show you what I mean.
1. “For a variety of reasons, parents are not reading to their children as frequently as they once did.”
As a parent, that statement immediately resonated with me. Parents are not reading to their children? What’s wrong with our society?!? I immediately envisioned children huddled in front of television sets or computers, ignored by their parents for weeks at a time. The claim, however, is incorrect or misleading in several respects.
First, the data stem from a survey that measured the percentage of children being read to, not the percentage of parents reading. The difference matters because household composition varies over time. If the parents who are most likely to read to their children have fewer offspring, then the percentage of children being read to will decline even if parents’ reading habits stay exactly the same. Confusing parents with children is sloppy (and potentially misleading) data analysis.
Second, the survey reveals how many preschoolers had a parent read to them every day in the week before the survey was administered. That’s a pretty high standard for single parents, working couples, and parents with multiple children. A family with three children, like the one I grew up in, wouldn’t meet the standard if the oldest sibling substituted for a busy parent one or two times a week.
Judged by the survey’s tough standard, how deficient are contemporary parents? The Report’s language made me worry that today’s parents were reading to their children just a few times a month. It turns out that in 2007, the most recent year measured by this survey, more than half of all preschoolers (55.3%) experienced a parent reading to them seven full days a week.
Most important, that 2007 percentage is higher than the one reported for 1993, the earliest year of the study. Let me repeat that: The percentage of children listening daily to a parent read increased over the fifteen years tracked by this survey. The Report’s contrary claim hinges on the fact that the percentage decreased between 2005 and 2007, the two most recent years studied. The import of that decline, however, is unclear. Although the percentage clearly rose over the full period, the figures varied somewhat from year to year; like many data series, the path is not completely smooth.
The Report’s language does not convey this pattern. The reference to what parents “once did” suggests a long-term decline, rather than the latest move in an oscillating pattern that has trended upward over time. An accurate statement, based on the data cited by the Report, would be: “Between 1993 and 2007, the percentage of children who heard their parents read to them every day grew from 52.8% to 55.3%. The percentage rose and fell during that period, with the highest level (60.3%) occurring in 2005 and the lowest levels (52.8% and 53.5%) registering in 1993 and 1999. The trend over time, however, is positive.”
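For readers who want the arithmetic behind that correction, here is a minimal sketch using only the four percentages cited above; the survey’s other years are omitted.

```python
# Percentages cited in the post: share of preschoolers read to by a parent
# every day during the week before the survey (other survey years omitted).
cited_percentages = {1993: 52.8, 1999: 53.5, 2005: 60.3, 2007: 55.3}

change_1993_to_2007 = cited_percentages[2007] - cited_percentages[1993]
change_2005_to_2007 = cited_percentages[2007] - cited_percentages[2005]
print(f"1993 to 2007: {change_1993_to_2007:+.1f} percentage points")  # +2.5
print(f"2005 to 2007: {change_2005_to_2007:+.1f} percentage points")  # -5.0
```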
2. “Humanities teachers, particularly in k-12 history, are less well-trained than teachers in other subject areas.”
This second claim is just wrong. The cited data show that music and art teachers are the most highly credentialed teachers in both middle and high schools. In 2007-08, the most recent year studied by the National Center for Education Statistics (NCES), 85.1% of high school music teachers had both majored in their field and earned a teaching certificate. NCES uses that combination as the best available evidence of teacher quality. Art teachers were the next best educated, with 81.7% of them holding both certification and a major in their subject. A respectable 71.7% of English teachers had the same top training, just a shade under the 72.9% of teachers in the natural sciences–and well above the 64.4% of high school math teachers. Foreign language teachers varied widely in their training: 71.2% of French teachers held both certification and a major in their field, a percentage comparable to that of English and natural science teachers. Only 57.5% of Spanish teachers, in contrast, held those qualifications.
For middle schools, NCES’s most recent data stem from 2000 and lack information about teaching credentials. The patterns, however, were similar. Music and art teachers outshone colleagues in other fields, with 89.4% of them majoring in their subject. Natural sciences teachers ranked next, with 49.3% having majored in their field. Teachers of English (46.3%) and foreign languages (48.8%) were not far behind. Only a third of middle-school math teachers (33.8%), in contrast, had majored in their field.
These statistics belie the Report’s claim that “Humanities teachers . . . are less well-trained than teachers in other subject areas.” The qualifications of humanities teachers vary, with some subjects showing the highest level of training, others matching or exceeding levels in math and the natural sciences, and some falling below.
The embedded clause in the Report’s claim, “particularly in k-12 history,” has more truth. None of the cited statistics relate to elementary education, but history teachers in both middle schools and high schools do lag behind most of their peers. NCES reports that only 28.8% of high school history teachers hold both certification and a major in their field. That particularly low percentage stems more from lack of teaching certification than from lack of a history major; 62.0% of history teachers did, in fact, major in history. Still, it is true that high school history teachers have less overall training than teachers in other subjects.
Similarly, middle-school history teachers were less likely than peers in other fields to major in their subject; just 31.3% of them did. This percentage is very close to the expertise level of middle-school math teachers (with just 33.8% having majored in math), but it is the lowest reported figure.
The Report could have focused on the relatively low preparation of history teachers; instead, it makes an exaggerated (and incorrect) claim about all humanities teachers. A correct statement, again based on the data cited by the Report, would be: “In both middle and high schools, humanities teachers in music and the arts are better trained than teachers in any other subject. English and natural science teachers rank next in training; these two groups have similar credentials at each educational level. Math teachers lag behind all of these groups, with lower levels of training in both middle and high schools. History teachers have the weakest training, with the poorest showing in high schools and a showing in middle schools comparable to that of math teachers.”
3. “And even as we recognize that we live in a shrinking world and participate in a global economy, federal funding to support international training and education has been cut by 41 percent in four years.”
This one is literally true: The federal government cut funding for foreign language study and area studies centers at universities, as well as for Fulbright-Hays programs. The statistic, however, offers an isolated (and rather faculty-centric) measure of the vitality of international study and foreign languages on college campuses.
International study is booming among college students. The number of U.S. college students studying abroad almost tripled between the 1996-97 and 2009-10 academic years, growing from 99,448 students to 270,604. The number of foreign students enrolled at American universities, meanwhile, grew by more than 50% over a similar period, from 453,787 in 1995-96 to 690,923 in 2009-10. In 2010-11, the most recent year available on the latter measure, the number reached 723,277.
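A quick check of those growth claims, using only the enrollment figures quoted above:

```python
# Checking the growth claims against the enrollment figures quoted above.
study_abroad_1996_97 = 99_448
study_abroad_2009_10 = 270_604
foreign_students_1995_96 = 453_787
foreign_students_2009_10 = 690_923

print(f"Study-abroad growth: {study_abroad_2009_10 / study_abroad_1996_97:.2f}x")               # about 2.72x
print(f"Foreign-student growth: {foreign_students_2009_10 / foreign_students_1995_96 - 1:.0%}")  # about 52%
```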
The study of foreign languages is also on the rise at colleges and universities. The most recent study by the Modern Language Association found that “[c]ourse enrollments in languages other than English reached a new high in 2009. Enrollments grew by 6.6% between 2006 and 2009, following an expansion of 12.9% between 2002 and 2006. This increase continues a rise in enrollment in languages other than English that began in 1995.”
Students, furthermore, are learning a more diverse set of languages. The MLA reported that enrollment in “less commonly taught languages,” those outside the top fifteen, surged from 2002 through 2009. Between 2006 and 2009, U.S. colleges added 35 new languages to their offerings, bringing the total of “less commonly taught” languages to 217 tongues nationwide. That’s in addition to the fifteen most popular languages, which today include Arabic, Chinese, Japanese, Korean, Ancient Greek, and Biblical Hebrew.
Even two-year colleges participated in the upward trend of language study. These colleges registered increased enrollment in such diverse languages as Arabic, ASL, Chinese, Hawaiian, Italian, Japanese, Latin, Portuguese, Spanish, and Vietnamese. Two of those languages, Hawaiian and Vietnamese, do not rank among the top fifteen languages studied in four-year colleges, suggesting that community colleges play a special role in teaching some languages.
We certainly could do much more to teach foreign languages and encourage international understanding at all educational levels. The Report’s isolated reference to cuts in federal funding, however, paints a very one-sided picture of the status of these subjects in the United States.
Analyze, Evaluate, Interpret, Communicate
This Report, like so many other products of higher education, exhorts citizens to examine data carefully, think critically, and write precisely. Yet the Report itself falls far short of these goals. This is not a thoughtful document; it is one determined to sell the social sciences and humanities. I agree with many of the Report’s recommendations, but we can’t rest those recommendations on faulty interpretations of the factual record or misleading statements. The academy should lead by example, not just exhortation.
There is a final irony to the misstatements in this Report. Respected commentators like Verlyn Klinkenborg and David Brooks have cited the Report while deploring a shift in college majors from the humanities to more “vocational” studies. High on the list of those dreaded vocational majors is Business, where we fear that students learn to sell things rather than to think. But what behavior are we in the academy modeling?