Late last month I wrote about a sample NextGen question that GPT-4 discovered was based on an outdated, minority rule of law. NCBE has now removed the question from their website, although it is still accessible (for those who are curious) through the Wayback Machine. While the Bot takes a small bow for assisting NCBE on this question, I’ll offer some reflections.
We hear a lot about mistakes that GPT-4 makes, but this is an example of GPT-4 correcting a human mistake. Law is a vast, complex field, especially considering state-to-state variations in the United States. Both humans and AI will make mistakes when identifying and interpreting legal rules within this large universe. This story shows that AI can help humans correct their mistakes: We can partner with AI to increase our knowledge and better serve clients.
At the same time, the partnership requires us to acknowledge that AI is also fallible. That’s easier said than done because we rely every day on technologies that are much more accurate than humans. If I want to know the time, my phone will give a much more accurate answer than my internal clock. The speedometer in my car offers a more accurate measure of the car’s speed than my subjective sense. We regularly outsource many types of questions to highly reliable technologies.
AI is not the same as the clocks on our phones. It knows much more than any individual human, but it still makes mistakes. In that sense, AI is more “human” than digital clocks, speedometers, or other technologies. Partnering with AI is a bit like working with another human: we have to learn this partner’s strengths and weaknesses, then structure our working relationship around those characteristics. We may also have to think about our own strengths and weaknesses to get the most out of the working relationship.
GPT-4’s review of the NextGen question suggests that it may be a useful partner in pretesting questions for exams. Professors read over their exam questions before administering them, looking for ambiguities and errors. But we rarely have the opportunity to pretest questions on other humans–apart from the occasional colleague or family member. Feeding questions to GPT-4 could allow us to double-check our work. For open-ended questions that require a constructed response, GPT-4 could help us identify issues raised by the question that we might not have intended to include. Wouldn’t it be nice to know about those before we started grading student answers?
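For professors who want to try this at scale, the workflow is easy to script. Here is a minimal sketch, assuming the OpenAI Python client (the v1 interface) and an API key in the OPENAI_API_KEY environment variable; the review prompt and the sample question are my own illustrations, not anything NCBE has published.

```python
# A minimal sketch of pretesting draft exam questions with GPT-4.
# Assumes the OpenAI Python client (v1 interface) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical draft questions; in practice, load your own exam file.
draft_questions = [
    "A landlord and tenant agree to a commercial tenancy beginning July 1. ...",
]

for question in draft_questions:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are reviewing a draft exam question for a law course. "
                    "Flag any ambiguities, outdated or minority rules of law, "
                    "and issues the drafter may not have intended to raise. "
                    "Then state which answer you believe is correct and why."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)
```

As the posts below stress, anything the model flags is a lead to verify against primary sources, not a verdict.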
I hope that NCBE and other test-makers will also use AI as an additional check on their questions. NCBE subjects questions to several rounds of scrutiny–and it pretests multiple-choice questions as unscored questions on the MBE–but AI can offer yet another layer of review. Security concerns might be addressed by using proprietary AI.
Moving beyond the testing world, GPT-4 can offer a double-check for lawyers advising clients. In some earlier posts, I suggested that new lawyers could ask GPT-4 for pointers as they begin working on a client problem. But GPT-4 can assist later in the process as well. Once a lawyer has formulated a plan for addressing a problem, why not ask GPT-4 if it sees any issues with the plan or additional angles to consider? (Be sure, of course, to redact client identifying information when using a publicly accessible tool like GPT-4.)
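That redaction step can be partially automated, although no script substitutes for a careful human read. Below is a deliberately naive sketch; the patterns are my own illustrations and would miss many identifiers (names, addresses, matter numbers), so treat it as a first pass only.

```python
# A naive sketch of scrubbing obvious client identifiers before a prompt
# leaves the office; illustrative patterns only, not a complete solution.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),  # US phone numbers
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before prompting."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = redact("My client (jane@example.com, (614) 555-0123) was charged with ...")
print(prompt)  # "My client ([EMAIL], [PHONE]) was charged with ..."
```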
Our partnership with GPT-4 and other types of AI is just beginning. We have much to learn–and many potential benefits to reap.
I talk to a lot of lawyers about licensing, and many suggest that the licensing process should ensure that new lawyers know basic concepts that are essential for competent law practice in any field. Detailed rules, they agree, vary by practice area and jurisdiction; it would be unfair (and impractical) to license lawyers based on their knowledge of those detailed rules. Instead, knowledge of basic concepts should support learning and practice in any area of the law.
NCBE seems to embrace that approach. As I discussed in my last post, NCBE is designing its NextGen bar exam to test “foundational legal skills” and “clearly identified fundamental legal concepts and principles needed in today’s practice of law.” Let’s leave skills aside for now and focus on those fundamental legal concepts and principles. Are there such concepts? Do lawyers agree on what they are? How does a licensing body like NCBE identify those concepts?
NCBE began its quest for appropriate exam content by holding extensive listening sessions with bar exam stakeholders. The report summarizing these listening sessions highlighted three key findings about the knowledge tested by the exam: (1) Stakeholders generally agreed that the seven subjects currently tested on the MBE include the “core content” that newly licensed lawyers need to know. (2) Within that content, the current exam tests too many “nuanced issues and ‘exceptions to exceptions to rules.’” (3) Overall, the current bar exam tests too many subjects, since both NCBE and some states add content to the exam through their essays.
NCBE then conducted a nationwide practice analysis to “provide empirical data on the job activities of newly licensed lawyers.” This survey, which followed standard practice for identifying the content of licensing exams, asked respondents to rate 77 different knowledge areas. For each area, respondents were asked to give one of four ratings:
This rating system followed standard practice, but it was not tightly focused on “fundamental legal concepts.” Each of the 77 knowledge areas on the survey might have contained at least one fundamental concept. In entry-level law practice, it may be more important for a lawyer to know a little about each of these areas (so that they can identify issues in client problems and seek further information) than to know a lot about a few of them.
Here’s an example: Admiralty law ranked dead last among the 77 knowledge areas included in NCBE’s practice analysis. But shouldn’t entry-level lawyers know that admiralty is a distinct field, governed by rules of its own and litigated exclusively in federal court? And that admiralty law governs even recreational boating on navigable waters within the United States? Otherwise, a new lawyer might waste time analyzing a water skiing injury under general negligence principles–and file a lawsuit in the wrong court.
The same is true of other low-ranking subjects in the NCBE practice analysis. Shouldn’t new lawyers at least know when principles of workers compensation, tax law, juvenile law, and dozens of other practice areas might affect their client problems?
“Fundamental concepts,” in other words, differ from “common practice areas,” although there is some overlap between the two. The concept of negligence, for example, is one that cuts across many practice areas–and is also central to a common practice area (personal injury law). But much of the time, the two types of knowledge diverge. Which is essential for minimum competence? Concepts that cut across practice areas, rules of law in fields where new lawyers commonly practice, or both?
The top ten knowledge areas identified in NCBE’s practice analysis underscore this tension. Four of the knowledge areas (civil procedure, contract law, rules of evidence, and tort law) are subjects in which many new lawyers practice–although those subjects also contain some concepts that cut across practice areas. The six others (rules of professional responsibility and ethical obligations, legal research methodology, statutes of limitations, local court rules, statutory interpretation principles, and sources of law) reference concepts that cut across many practice areas. In fact, four of these six (professional responsibility and ethical obligations, legal research methodology, statutory interpretation principles, and sources of law) cut across all practice areas.
Two of the subjects on NCBE’s top-ten list, statutes of limitations and local court rules, are particularly interesting because they directly embody a fundamental principle. I doubt that the lawyers who responded to NCBE’s survey thought that entry-level lawyers should know specific statutes of limitations or all local court rules. Instead, they seemed to be signaling the importance of these fundamental concepts. All entry-level lawyers should know that most causes of action have statutes of limitations and that it is essential to determine those limits at the very beginning of client representation. It might also be fundamental to know common ways in which the running of a limitations statute can be tolled. Similarly, all entry-level lawyers should understand that local courts have rules, that these rules often differ from the federal and state rules, and that it is essential to consult those rules. As a clinic professor, I can attest that many third-year law students don’t even know that local court rules exist, much less the types of subjects they govern. Yet local courts handle the overwhelming bulk of lawsuits in this country.
How did NCBE resolve this tension between fundamental legal concepts and rules that govern common practice areas? I’ll explore that subject in my next post. And then I’ll tie this discussion back to the need for a rule book outlining the “legal concepts and principles” that NCBE plans to test on the NextGen bar exam.
In my last post, I discussed a sample bar exam question that requires knowledge of a rule followed by a minority of US jurisdictions. The question seems inconsistent with NCBE’s intent to test “a focused set of clearly identified fundamental legal concepts and principles needed in today’s practice of law.” A minority rule would have to be very influential to fit that description. I suspect that one of NCBE’s subject-matter experts composed this question without realizing that the tested rule was a minority one. Given the breadth of jurisdictions in the United States, and the complexity of legal principles, that’s an easy mistake to make.
That breadth and complexity prompts this recommendation: NCBE should publish a complete list of the doctrinal rules that it plans to test on the NextGen exam. The Content Scope Outlines, which describe areas of law to be tested, are not sufficient. Nor is it sufficient to refer to sources of law, such as the Federal Rules of Evidence or various Restatements. Instead, NCBE should spell out the actual rules that will be tested–and should do that now, while jurisdictions are evaluating NextGen and educators are starting to prepare their students for the exam.
NCBE’s Content Scope Committee, on which I served, recommended creation of this type of “rule book” in late 2021. I hope that NCBE has been working during the last two years to implement that recommendation. Here are some of the reasons why we need NCBE to “lay down the law” that it plans to test on NextGen:
Lawyers often assume that there is a body of fundamental legal concepts that states agree upon, experts endorse, law schools teach, and the bar exam can test. But there is plenty of evidence that this assumption is wrong. Consider the American Law Institute’s ongoing Restatements of the Law. The Restatements “aim at clear formulations of common law and its statutory elements or variations and reflect the law as it presently stands.” In other words, they attempt to summarize the black letter law in major subjects. Yet the experts who formulate these Restatements take years–often decades–to agree on those principles. The Institute’s first Restatement of Torts took sixteen years (1923-1939) to produce. The Restatement Second of Torts took even longer, twenty-seven years stretching from 1952 to 1979. And the Third Restatement, which experts began discussing in the early 1990s, still isn’t complete–thirty years later.
Even the Federal Rules of Evidence, which may be the most verifiable set of legal principles tested on the bar exam, are subject to different interpretations among the circuits. The federal Advisory Committee on Evidence Rules discusses these differences and ambiguities at least twice a year. Sometimes the differences prompt amendments to the Federal Rules of Evidence; other times they persist.
There are probably some legal principles that all states and federal circuits apply in a similar manner. But many more, my research suggests, vary by time and place: they are shapeshifters. Given this variation, together with the breadth of legal principles that will be tested on the NextGen exam, NCBE needs to spell out exactly the legal principles it plans to test–and to make that rule book public.
A public rule book is important for all bar exam stakeholders. Test-takers shouldn’t have to guess whether NCBE will test a majority or minority rule–or to figure out on their own which is the majority rule. Nor should they have to purchase expensive prep courses for that information. NCBE, which designs the exam, should announce the specific rules it will test.
Jurisdictions also need that information. When deciding whether to adopt NextGen, jurisdictions should be able to assess the extent to which NextGen’s legal principles overlap with their own state law. For jurisdictions that adopt NextGen, the information will help them decide whether they need to supplement the exam with a state-specific component and, if so, what rules that component should cover.
Educators vary in how much they teach to the bar exam, but many would appreciate knowing the extent to which their material aligns with the rules NCBE will test. For Academic Support Faculty, this information is critical. How can they help students prepare for the bar exam if they have to guess about which version of a rule will be tested?
Perhaps most important, a public rule book is essential to ensure that the bar exam serves its purpose of protecting the public. There is wisdom in the crowd. If NCBE’s expert advisors make a mistake–or fail to catch a change in the law–judges, practitioners, and professors who know the field can advise them of the need to change the rule book.
Is it possible for NCBE to publish a rule book of this nature? If it takes experts several decades to prepare a Restatement of the Law, will NCBE be able to publish a rule book for NextGen within the next year or so? For two reasons, I think it can.
First, NCBE already has an implicit rule book. When subject matter experts create and vet questions, they are following rules of law. Their questions have right and wrong answers–and NCBE knows which answers it considers correct. A rule book simply requires the experts to lay out the answers before (or at the same time as) they design the questions. That’s good test-making policy: First decide what you want the test-takers to know, and then design questions to elicit that knowledge.
Second, NCBE does not have to weigh emerging trends or negotiate stark differences among states when laying down the law of the bar exam. If a rule is highly contested or quickly evolving, it probably doesn’t belong in the category of “clearly identified fundamental legal concepts and principles.” Or, if it does, it can be phrased in a way that reflects the existence of competing approaches. Publishing a bar exam rule book is easier than crafting a Restatement of the Law.
The creation of a bar exam rule book has another advantage, which I will discuss in my next few posts: It will help all stakeholders in the exam process think about what we mean when we refer to “fundamental legal concepts and principles.” Is there a clearly identified set of those principles? Can we agree upon them? And what is the best way to test knowledge of those concepts?
Update 11/15/23: NCBE has deleted the sample question I discuss here from their site. If you want to see the original question, you can find it through the Wayback Machine at this link.
In my last post, I discussed a sample bar exam question on which GPT-4 provided more helpful answers than the choices on the bar exam. As I continued my exploration, I found another question on which GPT-4 revealed that NCBE is testing an outdated minority rule of law.
The question is the eighth multiple-choice question on the NextGen site. The question posits a landlord and tenant who agree upon a commercial tenancy beginning July 1. When the tenant arrives at the premises on that date, it finds that the previous tenant has not vacated the premises–and does not intend to vacate. I prompted GPT-4 with NCBE’s question (including its answer choices) and received this response:
The answer would likely be D. The tenant had not notified the landlord before bringing the eviction action.
Here’s a breakdown of the choices:
A. By suing the previous tenant for possession, the tenant elected that remedy in lieu of a suit against the landlord.
B. The landlord had delivered the legal right of possession to the tenant.
C. The tenant failed to timely vacate as required to sue for constructive eviction.
D. The tenant had not notified the landlord before bringing the eviction action.
Therefore, the most likely explanation if the court finds for the landlord would be D.
This response intrigued me because, according to NCBE, the correct answer is B. I decided to explore the inconsistency between NCBE’s question writers and GPT-4, so I told GPT-4: “I want to understand more about choice B. Are there states in which it is sufficient for a landlord to deliver legal possession without actual possession?” GPT-4 responded:
Yes, there’s a distinction between legal possession and actual possession, and different jurisdictions handle this distinction differently.
If you are dealing with a specific state or jurisdiction, I can try to provide more detailed information about its stance on the delivery of possession.
I know that NCBE aims to test majority or consensus rules, rather than minority ones, on its exams, so I asked GPT-4 if it knew which rule (actual or legal possession) a majority of states follow. GPT-4 answered:
I know better than to trust GPT-4, which is trained on the internet rather than on legal databases, as an authoritative source on legal principles. In my previous posts, I have simply illustrated how GPT-4 could help a new lawyer identify legal issues in a client problem, understand the general scope of those issues, and generate questions to ask a client about the problem. I would not rely on GPT-4 to tell me whether a rule is the majority rule–or whether a particular state follows that rule.
So I turned to Westlaw to unravel this knotty question of whether a majority of states allow the landlord to deliver only legal possession to a tenant. Westlaw quickly confirmed that GPT-4 was correct. An ALR annotation collecting cases suggests that eleven states allow the landlord to deliver only legal possession, while twenty require the landlord to deliver actual possession together with legal possession. Two thoughtful student notes affirm that the requirement of actual possession is very much the majority rule, with one (Heiser) referring to a “mass exodus” away from the rule that legal possession suffices. (See the end of this post for citations.)
Even the state that originated the more landlord-friendly rule, New York, discarded it by statute in 1962. New York’s Real Property Law Article 7, section 223-a now provides: “In the absence of an express provision to the contrary, there shall be implied in every lease of real property a condition that the lessor will deliver possession at the beginning of the term.”
If you’ve followed me down this rabbit hole of real property law, you’ve learned: (1) At least for this rule of law, GPT-4 accurately identified the majority and minority rules. It was also able to explain those rules concisely. (2) NCBE is using, as one of the few sample questions it has released for the NextGen exam, a question that tests an outdated, minority rule. I alerted a contact at NCBE about this situation in mid-September, but the question is still on the sample questions site.
What do these lessons teach us about using AI in entry-level law practice? And what do they suggest about the bar exam? I will explore both of those questions in upcoming posts. Spoiler alert on the second question: It’s easy to declare, “ha, NCBE is wrong!” but the lesson I draw from this is deeper and more complex than that.
References:
Implied covenant or obligation to provide lessee with actual possession, 96 A.L.R.3d 1155 (Originally published in 1979, updated weekly).
Christopher Wm. Sullivan, Forgotten Lessons from the Common Law, the Uniform Residential Landlord and Tenant Act, and the Holdover Tenant, 84 Wash. U. L. Rev. 1287 (2006).
Matthew J. Heiser, What’s Good for the Goose Isn’t Always Good for the Gander: The Inefficiencies of A Single Default Rule for Delivery of Possession of Leasehold Estates, 38 Colum. J.L. & Soc. Probs. 171 (2004).
In the first three posts in this series, I used a bar exam question as an example of the type of problem a new lawyer might confront in practice. I then explored how GPT-4 might help a new lawyer address that practice problem. In this post, I’ll work with another sample question that NCBE has released for the NextGen bar exam. On this question, GPT-4 beats the bar exam. In other words, a new lawyer using GPT-4 would obtain better answers than one who remembered material studied for the bar exam.
Take a look at the seventh multiple-choice question on the NextGen list of sample questions. In this question, the client is a state college that wants to fire a tennis coach. The question format parallels the one we discussed in my previous posts: It asks which “legal topics are the most important for you to research before advising the client” and instructs the test-taker to choose two topics from a list of six.
When prompted with the multiple-choice question, GPT-4 quickly identifies the two correct answers: employment at will and procedural due process. The Bot also offers succinct explanations about why those topics are the most important of the six to research. But that’s not the interesting part–we already know that GPT-4 can easily pass the bar exam.
Instead, I prompted GPT-4 (on a different day and with a new conversation) with the fact pattern in this MCQ and asked: “What legal topics are most important for you to research before advising the client?” Here’s how GPT-4 responded:
When representing a state college considering terminating an employee, several legal considerations come into play. Given the provided facts, here are the most important legal topics to research before advising the client:
1. Employment at Will:
In many U.S. jurisdictions, unless there is an employment contract stating otherwise, employment is typically considered “at-will.” This means that either the employer or the employee can terminate the employment relationship at any time for any reason (that isn’t illegal). However, if the coach and college had an oral contract, its terms will need to be considered.
2. Oral Contracts:
While written contracts are easier to enforce, oral contracts can still be binding. The challenge is proving their terms and existence. Research would be needed to determine the enforceability of oral contracts in your jurisdiction and what evidence would be required to prove the terms of such an agreement.
3. Implied Contract:
Even if there was no formal oral agreement about the contract’s duration, the actions and communications between the college and the coach might have implied a certain term of employment. Research would be needed to determine if an implied contract was potentially formed and its implications.
4. Due Process Rights for Public Employees:
Since the college is a state institution, the coach may have constitutional due process rights. If the coach has a property interest in their job (i.e., a reasonable expectation of continued employment), the state cannot deprive them of this without due process. Depending on local laws and court rulings, this might entail the need for a hearing before termination.
5. Public Policy and Whistleblower Protections:
The coach cannot be terminated for reasons that violate public policy (e.g., reporting illegal activities). Ensure that the budget cuts are the true and only reason for the contemplated termination.
6. Discrimination Laws:
Federal and state laws prohibit employment discrimination based on race, color, religion, sex, national origin, age, disability, and sometimes sexual orientation and gender identity. Ensure the coach’s termination isn’t grounded in any discriminatory reasons.
7. Breach of Good Faith and Fair Dealing:
In some jurisdictions, there’s an implied covenant of good faith and fair dealing in employment relationships. This means that even in at-will employment, employers cannot terminate employees to avoid paying earned benefits or for other bad faith reasons.
8. Potential Damages:
If the coach were to bring a wrongful termination suit and win, what would the potential damages be? This might include lost wages, emotional distress, punitive damages, and attorney’s fees.
9. Alternative Dispute Resolution:
Does the state college have any policies or practices regarding mediation or arbitration before proceeding with termination?
10. Release and Settlement Options:
If the college wants to avoid potential litigation, they might consider offering the coach a severance package in exchange for a release of claims.
After researching these topics, it would be crucial to provide a well-informed opinion to the client about the potential risks and benefits of terminating the coach, as well as any alternative strategies that might minimize legal exposure.
This answer includes the two points tested by the MCQ, but it is much more comprehensive. A new lawyer would do well to follow the leads offered by GPT-4. In contrast, a lawyer who simply remembered material tested on the bar exam would fall woefully short of serving this client competently. Comparing these two responses points to three problems with the bar exam–and with legal education more generally.
First, the subjects tested by the bar exam–and that we teach as “foundational” in law school–do not track contemporary law practice. The discrimination laws cited by GPT-4 are as critical to an employment law practice as the notion of employment at will, if not more so. One of the first questions I would ask my client in this situation is whether the coach is a member of any protected class. But discrimination laws are not part of the subject matter tested on the bar exam, so the question doesn’t embrace them.
Second, both law school and the bar exam teach aspiring lawyers to think narrowly. Courses, classroom hours, and exam questions (especially multiple-choice ones) march students through hundreds of isolated doctrinal rules. Except in clinics, we rarely ask students to solve whole legal problems. GPT-4 is absolutely right that the terms of the oral contract, the possibility of an implied contract, and the extent of damages are as important for the lawyer to consider as the concept of employment at will. Yet we rarely encourage students to consider multiple doctrines within a single subject area, much less across subjects.
Students may need to learn some legal principles in isolation, but I think we, as legal educators, need to recognize how atomized our teaching and testing have become. Over the last few decades, we have squeezed ever more doctrine into each credit hour, losing opportunities to help students integrate those pieces. Our shift towards multiple-choice testing, meanwhile, reinforces the notion that law consists of discrete legal principles operating in isolation.
Third, our teaching and licensing focus too much on reciting legal rights–and too little on helping clients solve problems more holistically. Even when a client asks only about the “legal repercussions” of an action, as the client did in this bar exam question, an effective lawyer would explore the problem more holistically with the client. Is abrupt termination the only solution to the client’s problem? Could the client offer the coach a severance package that would avoid litigation? Even if the client has no legal obligation to offer the coach a hearing, would a hearing help ease the blow to the coach? Could a hearing even lead to a new solution to the client’s underlying problem (a budget shortfall)? Maybe the coach would be willing to cut back their hours and continue serving the team.
Even without special prompting, GPT-4 recognized one of these more holistic solutions (a severance package) as an issue for the new attorney to consider. The bar exam question, in contrast, forces the aspiring lawyer to focus single-mindedly on the “legal topics.” Who is the machine in this comparison? Could working with GPT-4, ironically, lead us back to more holistic problem-solving with our clients?
In my last two posts, I explored the ways in which GPT-4 could help a new lawyer identify legal issues in a client problem. Most law graduates are fairly adept at issue spotting, but they can still benefit from GPT-4’s advice. In this post I consider a skill that fewer graduates have a chance to hone while in law school: questioning a client to obtain key facts related to the client’s problem.
I outlined the basic facts of this client problem, drawn from a sample bar exam question, in my first post. The client was staying in a guest room at his uncle’s house. While the client was away, police knocked on the front door and asked the uncle if they could search the house. The police offered no reason for the search, other than that they were investigating a noise complaint, but the uncle consented. Police found fentanyl in the guest bedroom and charged the client with possession. The new lawyer wants to file a motion to suppress the fentanyl.
In my first post, GPT-4 helped the new lawyer identify possible issues to raise in this motion. In the second post, GPT-4 responded to a request to narrow those issues to the two most promising ones. Those issues were the client’s standing to challenge the search and whether the uncle’s consent was valid. I continued my dialogue with GPT-4, asking it to give me the most important questions to ask my client with respect to those two issues. GPT-4 responded:
When focusing on the two central issues of standing to challenge the search and the validity of the consent to search, here are the most important questions you would want to ask your client:
1. Standing to Challenge the Search:
a. Nature of Stay:
– How often have you stayed in the guest bedroom at your uncle’s house?
– Have you stayed there regularly or was this a one-time event?
– How long was your stay? Was it just for the night, or did you have an extended stay?
b. Personal Belongings:
– Did you leave any personal belongings in the guest bedroom? If so, what kind of belongings (e.g., clothes, luggage, personal documents)?
– Were any of these belongings visible to anyone entering the room?
c. Expectation of Privacy:
– Did you have any conversations with your uncle or any other resident about the privacy of the room during your stay?
– Were you given a key or any means to lock the guest bedroom? If so, did you lock the room when you left?
d. Knowledge of Prior Guests:
– To your knowledge, had other guests used the room shortly before or after your stay?
2. Consent to Search:
a. Uncle’s Understanding:
– Do you know if your uncle understood what he was consenting to when he allowed the police to enter?
– Is there any reason to believe your uncle might have felt pressured or coerced into giving consent?
b. Relationship with Uncle:
– How would you describe your relationship with your uncle? (This can provide insight into whether the uncle might have any reason to allow or disallow a search.)
c. Prior Conversations:
– Had you discussed with your uncle the possibility of police or others searching the room or the house?
– Were there any explicit or implicit understandings about the privacy or security of the guest room between you and your uncle?
d. Presence of Contraband:
– Were you aware that the fentanyl was in the guest bedroom?
– If so, why was it there, and who had access to the room during your stay and after?
The answers to these questions can provide crucial information to determine the viability of a motion to suppress. They can also help in formulating a defense strategy beyond the suppression motion.
This is an excellent list of questions to begin fleshing out the motion to suppress. The new lawyer might think of additional questions while talking with the client, but GPT-4’s outline offers the new lawyer a strong starting place. If the new lawyer has never interviewed a client to uncover the facts of a dispute, GPT-4’s list will help the lawyer through that initial interview. Even a fairly experienced lawyer can benefit: GPT-4 generated this list in less than 30 seconds. Rather than type out their own questions (which would take much longer), efficient lawyers can start with GPT-4’s list and supplement the questions as needed.
I wondered if GPT-4 could help with more interpersonal aspects of client interviews, so I entered another prompt: “Thank you for this list of questions to ask my client. What if my client answers the last question by saying that he didn’t know the fentanyl was in the bedroom–but I think he’s lying. How should I handle that situation?”
GPT-4 responded with a detailed discussion of how to handle the ethical challenge of representing a criminal defendant who might lie on the stand. The advice was sound, including a reminder that the new lawyer could seek guidance from senior colleagues, ethics hotlines, or the local Rules of Professional Conduct. GPT-4’s performance on this question suggests that it can be an important sounding board on ethical issues. As with other questions, it may not give a full answer–but it offers advice that a lawyer can obtain quickly and confidentially.
In this situation, however, I was looking for something different. I explained to GPT-4 that I wasn’t yet worried about representing this client at trial; very few criminal cases proceed to trial. Instead, I explained that I was seeking “a gentle way in which I can encourage my client to be more open.” GPT-4 responded:
Certainly. Building trust and creating an environment where the client feels safe and understood is crucial. Here are some gentle approaches you can use to encourage your client to be more open:
Building a strong attorney-client relationship based on trust is key. Over time, as you work together and your client sees your commitment and advocacy on their behalf, they may become more open and forthcoming.
This is a good answer: It would remind the new attorney (or teach them for the first time) about some ways to gain a client’s trust. And like the ethical counsel, this advice is available quickly and confidentially. I was disappointed, however, that GPT-4 didn’t recommend an approach that I often found successful with clients facing criminal charges. I continued my conversation with GPT-4 by asking: “What about saying to the client: ‘The prosecutor will try to build as strong a case as possible. Can you think of any evidence the prosecutor might offer that would show that you knew about the fentanyl? Or that the fentanyl belonged to you?'”
GPT-4 liked my approach and offered reasons why it would work:
That’s a strategically astute way to approach the situation. By framing the question this way, you’re not directly accusing or doubting your client. Instead, you’re positioning the conversation around preparing for the prosecutor’s potential strategy.
This approach accomplishes a few things:
Using this approach, you can gather potentially important information and gauge your client’s reaction, all while fostering collaboration and trust.
Here’s what I learned from using GPT-4 to prepare for this client interview:
This post is part of a series in which I’m exploring how artificial intelligence (especially the GPT-4 tool available from OpenAI) can assist entry-level lawyers in their work. In my first post I showed how GPT-4 could help a lawyer identify legal issues in a new client problem. The client in this imaginary problem was charged with illegal possession of fentanyl, and the lawyer wants to prepare a motion to suppress evidence of the seized fentanyl. In this post we’ll continue to ask GPT-4 for help with that problem.
When asked to identify legal issues that could support the motion to suppress, GPT-4 offered our hypothetical lawyer a list of eight points to consider. But an entry-level lawyer might not have time to research all eight of those points. I followed up with GPT-4, therefore, by asking “Now imagine that you have limited time to prepare this motion. Based on the facts you already know, which two topics are most important to research?” (The facts of the underlying problem appear in the first sample question on this NextGen page.) Here’s how GPT-4 responded:
Given the limited time and based on the facts provided, the two most important topics to focus on would be:
These two topics directly address the circumstances leading to the discovery of the fentanyl. Establishing that the search was unconstitutional due to a lack of standing or an invalid consent would be foundational to the success of a motion to suppress.
Once again, most of this response is very helpful. Within seconds, GPT-4 has focused the entry-level lawyer on the most important issues to address in a motion to suppress.
These two issues, however, are the ones that NCBE identified as correct in the sample bar exam question on which my hypothetical is based. Would a lawyer who had passed that exam need to ask GPT-4 for help? Or could that lawyer rely upon their own memory of search and seizure law?
Many new lawyers would immediately see the most promising issues in this problem, but I would still advise them to check their instincts with GPT-4. The bot can reassure them that they’re on the right track and haven’t overlooked an obvious issue. As I showed in my previous post, the bot will also note issues that are worth considering–even if the lawyer ultimately focuses their motion on these two primary issues. Most important, we’ll see later in this series that knowledge acquired for the bar exam is often woefully insufficient to address client problems. That knowledge might suffice here, but it won’t always.
Just like humans, GPT-4 sometimes misspeaks. The last paragraph of the response pasted above suggests that a lack of standing could lead to a finding of unconstitutionality. That seems inconsistent with GPT-4’s previous responses, as well as with the basic knowledge of standing that a law graduate should hold. Lack of standing might prevent the client from challenging the search, but it would not make the search unconstitutional. If a human mentor concluded their response this way, a new lawyer might follow up to make sure they’ve understood.
It’s just as easy to follow up with GPT-4. “I’m confused by your last sentence,” I told GPT-4. “Does a lack of standing lead to a finding that the search was unconstitutional?” GPT-4 immediately responded:
I apologize for any confusion. You’re right; my phrasing was imprecise in that context. Allow me to clarify:
“Standing” in the context of Fourth Amendment challenges does not refer to whether the search itself was constitutional or not. Instead, it refers to whether the individual challenging the search (in this case, your client) has the right or “standing” to do so based on their personal expectation of privacy in the place that was searched.
If your client lacks standing, it means they cannot challenge the constitutionality of the search. It doesn’t make the search itself unconstitutional. So, if your client lacks standing, the motion to suppress would fail regardless of any potential Fourth Amendment violations related to the search.
Thank you for pointing that out, and I appreciate the opportunity to clarify.
This exchange teaches us two important things about GPT-4. First, GPT-4 will make mistakes. In that respect, it is no different from any human that a new lawyer might consult. People misspeak and they make mistakes; so does AI. One of the biggest errors an AI user can make is assuming that AI is infallible. GPT-4 is an incredible mentor, but it is fallible.
Second, it is possible to confront GPT-4 about apparent mistakes–and the new lawyer doesn’t have to worry about offending a more senior lawyer. As the above example illustrates, GPT-4 quickly recognized its error and clarified the information–and no one’s feelings were hurt in the process. Users often obtain the best results from AI by engaging the AI in a conversation.
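For readers who use GPT-4 through the API rather than the chat interface, that conversational back-and-forth simply means appending each answer and follow-up to the running message list, so the model sees the entire exchange. Here is a minimal sketch, again assuming the OpenAI Python client (v1 interface); the prompts paraphrase the exchange above.

```python
# A minimal sketch of questioning an apparent mistake in a multi-turn chat.
# Assumes the OpenAI Python client (v1 interface) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()
messages = [
    {
        "role": "user",
        "content": "Which two topics are most important to research before "
                   "filing this motion to suppress? <facts omitted>",
    }
]

first = client.chat.completions.create(model="gpt-4", messages=messages)
answer = first.choices[0].message.content

# Keep the model's answer in the transcript, then push back on the odd sentence.
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": "I'm confused by your last sentence. Does a lack of standing "
               "lead to a finding that the search was unconstitutional?",
})

second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```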
In my next post, we’ll see how GPT-4 helps our new lawyer gather essential factual information for their motion to suppress.
Generative AI will transform legal education and law practice in ways that we are just starting to imagine. To stretch our imaginations–and begin thinking about the ramifications for legal education–I decided to feed GPT-4 some of the sample questions that NCBE has prepared for the NextGen bar exam. I wasn’t interested in whether GPT-4 could pass NextGen; that’s almost a given at this point. Instead, I used these questions as examples of the kind of client problems that a newly licensed lawyer might face. How could GPT-4 help that lawyer address those problems?
In this and a series of future posts, I’ll explore that question. Along the way, I hope to develop some insights into how GPT-4 could reshape legal education and the bar exam. Let’s get started with the first problem facing our hypothetical entry-level lawyer.
NCBE’s first sample question is a multiple-choice question centered on a criminal defense lawyer representing a client who has been charged with possession of fentanyl. Check out the question on the NextGen site.
This question follows a new format that NCBE has developed for the NextGen exam. It assesses the test-taker’s familiarity with legal concepts without requiring recall of detailed rules. GPT-4 easily chose the correct answers here (A and F). But that’s not particularly interesting given what we already know about GPT-4’s capabilities. Instead, let’s imagine that a new lawyer confronted this problem at work and wasn’t sure where to start. Workplace problems don’t come with multiple-choice answers, so I started a new chat (on a different day) and fed GPT-4 the same problem with these modifications:
Here’s what GPT-4 advised me:
Once you’ve researched these topics, you’ll have a clearer picture of the strengths and weaknesses of a motion to suppress in this situation.
This is an enormously helpful response: It identifies several angles for the lawyer to explore and gives some detail about each one. I particularly like the fact that GPT-4 advises the lawyer to remember the state constitution and statutes; those do provide additional protection in some states. And I like the final reminder to check recent case law. New attorneys (and even more senior ones) sometimes forget that step. At least on this problem, GPT-4 serves as an excellent issue spotter.
Ethan Mollick has described GPT-4 as an intern who is willing to do lots of work quickly and without pay. In this situation, I think of GPT-4 as a group of senior mentors that a new attorney can tap for advice. Some of the advice is better than other advice, and some advice is just plain wrong: Even very distinguished senior lawyers sometimes make a mistake. But as this example shows, GPT-4’s collective wisdom is well worth tapping.
Our new lawyer, of course, still has a lot of work to do. In my next post, I’ll explore what the new lawyer might do next–and how GPT-4 might continue to help.
ChatGPT has written passable law school essays, and GPT-4 has aced the bar exam. But what if, instead of asking AI to take our tests, we asked it about the future of legal education and licensing?
My colleague Andrea Sinner, who served as the Executive Director of the ABA Commission on the Future of Legal Education, decided to find out. While preparing for a class she co-teaches with Trish White (Professor and Dean Emerita at the University of Miami School of Law, and Chair of the ABA Commission), Andrea asked GPT-4: “What principles would we apply to the seemingly intractable problem of how to better align legal education and licensure in the United States to the modern needs for, and requirements of, legal service delivery?” That question is similar to the one that the ABA Commission asked its stakeholders.
Here is what GPT-4 had to say:
“To better align legal education and licensure in the United States with modern needs and requirements for legal service delivery, several key principles should be considered:
By applying these principles, legal education and licensure in the United States can better align with the modern needs and requirements for legal service delivery, ultimately benefiting clients, the legal profession, and society as a whole.”
[End GPT-4] That’s a pretty eloquent statement–and a very effective summary of many ideas mooted in today’s world of legal education and licensing. One of the values of AI is its ability to synthesize ideas from a variety of sources. The synthesis, of course, depends on the materials provided to the AI in its training; it is possible that GPT-4’s sources favor the principles listed above over other perspectives on legal education and licensing. Still, it would be a useful exercise for legal educators and bar examiners to consider this list. How well does your curriculum or licensing system accord with these principles?
What does ChatGPT tell us about lawyering competence and the bar exam? I explore that with my colleague Mary Lu Bilek in this column for Bloomberg Law. Lawyers can do so much more than bots–it’s time to recognize those competencies and better assess them in the licensing process.