Teaching Written Advocacy in the Digital Age

Davis G. Yee1

11 Stetson J. Advoc. & L. 129 (2024)

Contents
  I. Introduction
  II. What Technology Never Changes
    A. Modes of Persuasion
    B. Stories and Storytelling
    C. Structural Techniques
  III. What AI Can Currently Do
  IV. Problems with Using AI for Legal Writing
    A. Regulatory and Ethical Issues
    B. Originality and Style
  V. How to Use AI for Written Advocacy
  VI. Conclusion

I. Introduction

Our brief was due the next day, and we were using the latest technology to finish it. My moot court partner brought a computer disk. It contained her part of our brief. After combining both parts in a word processor, we waited at the law library for a computer terminal to free up. We knew how to find cases from the stacks of Federal Reporters. We even knew how to check whether they were still “good law.” On prior occasions, we had thumbed through pocket parts of Shepard’s Federal Citations, but it was just easier to use Lexis or Westlaw.

Technology has certainly advanced since my days as a law student. Today’s students can use online document editors like Google Docs to write briefs simultaneously. They can access legal databases through a web browser from a laptop at home. They can conduct Internet research. And they can email their briefs instead of mailing them through the U.S. Postal Service, as my moot court partner and I had to do.

What technology is available affects how written advocacy is taught and, to a lesser degree, what is taught. For the how, comments can be made on a SharePoint version of a draft before going over them with the student on Zoom. For the what, legal writing instructors rely on exemplars. Justice Elena Kagan’s opinions will be among them; after all, they are used to calibrate commercial legal editing software for flow, punchiness, and plain English.2

To further illustrate the effect of technology on what is taught, suppose printing and typography are limited to hot metal typesetting. This nineteenth-century technology involved injecting molten type metal into a mold that had the shape of one or more glyphs, which then resulted in sorts or slugs that were used to press ink onto paper.3 And so, last-minute edits were difficult to do. The ability to rearrange paragraphs was similarly limited. Accordingly, the drafting process, I would have advised my students, might have included using 5” x 8” index cards, with a separate handwritten argument paragraph per card.

In the twenty-first century, hot metal typesetting is no longer prevalent.4 It has been supplanted in the Digital Age, in which technology has advanced from power-driven machines to digital systems.5 Much of what is done in the Digital Age is done by computer and with the large amounts of information that computer technology makes available.6 In the Digital Age, the technology du jour is artificial intelligence, or AI.7

AI is defined as “the ability of machines to perform tasks that are usually associated with intelligent beings.”8 While still in its infancy in some arenas, it is becoming more mainstream.9 In the chess arena, for example, chess champions routinely use AI to help prepare for human-versus-human tournaments.10 In the legal arena, however, many note that a particular AI tool, ChatGPT, is currently not sophisticated enough to write a brief good enough to file.11 Even so, questions abound as to how AI can change how written advocacy is taught and done.12

With those questions in mind, this Article respectfully offers one response, particularly with respect to moot court briefs. But first, Part II of this Article highlights aspects of written advocacy that are immutable to technological change. Part III then provides an overview of what AI can currently do, and by implication, what it cannot do. Even if AI’s legal writing capabilities were to meet or exceed those of human beings, Part IV discusses problems specific to using this technology for written advocacy, problems related to regulatory and ethical issues, as well as those related to originality and style. Finally, Part V of this Article unveils how AI can be used ethically and creatively for written legal advocacy.

II. What Technology Never Changes

Some aspects of written advocacy never change. They are immutable because they have survived the test of time or because they are the results of proven experimental findings. They include the modes of persuasion, the story format, and structural techniques.

A. Modes of Persuasion

Aristotle’s Rhetoric hails from Ancient Greece in the fourth century BCE. It explained three modes of persuading an audience: logos, ethos, and pathos. Logos appeals to reason by using logic. Ethos appeals to the authority or credibility of the persuader. And pathos appeals to the emotions of the audience.13

For the written component of moot court competitions, ethos matters the least. Briefs are generally required to be submitted anonymously; the competition judges are not to know the name of the law school or the names of the team members.14 Logos, on the other hand, matters the most, especially in the argument section. Pathos matters when writing the statement of facts, which should be written to elicit the desired emotions from the reader.

Typography matters as well. Suppose a statement comes in two formats, bold and regular:

Oliver Wendell Holmes was born in 1843.

Oliver Wendell Holmes was born in 1839.

Studies show that more people believe the sentence in bold,15 even though both of them are incorrect, because Oliver Wendell Holmes was born in 1841.16 This finding does not mean that an entire brief should be submitted in bold font. Rather, bold only selected key statements.

Font style can also persuade. Unless the competition rules or court rules provide otherwise,17 use Baskerville. In 2012, readers of the New York Times unwittingly took a quiz. They read the same passage, but in one of six randomly assigned typefaces: Baskerville, Computer Modern, Georgia, Helvetica, Comic Sans, and Trebuchet. Of these typefaces, Baskerville swayed readers the most.18 It actually engendered a belief in the reader that the passage was true.

B. Stories and Storytelling

The caves of Sulawesi bear paintings of buffaloes being hunted by part-human, part-animal creatures holding spears and possibly ropes. The oldest of these cave paintings date back to at least 43,900 years ago.19 Some regard these prehistoric paintings as the earliest forms of storytelling. These paintings, according to archaeologists, “come in the form of narrative compositions . . . from which one can infer actions taking place among the figures.”20

Storytelling has obviously endured beyond prehistoric times. Neuroscience offers an explanation. The human brain seems hardwired for stories.21 People remember them much better than snippets of facts.22 And so, the statement of facts would be an ideal section of the brief for a nonfictional narrative.

The challenge is how to make that narrative interesting and compelling. Some suggest pacing.23 Others suggest using active voice, anaphora,24 and alliteration. Another technique is to vary not only sentence length, but also the narrative itself by incorporating dialogue.25 In a brief, that dialogue can come in the form of key deposition testimony, email statements, or contract terms at issue.

While there are other techniques, the one that contributes most to legal storytelling is diction. “Language,” as Chief Justice Roberts observed, “is the central tool of our trade.” To be sure, his observation highlighted the importance of words when construing a statute or the Constitution. They were, as he put it, “the building blocks of the law.”26 But words are much more than that. They are also one half of what Jonathan Swift defined as style: that is, “The proper words in the proper places.”

C. Structural Techniques

Structural techniques ensure that the proper words are put in their proper places. These proper places are threefold.

First, start strong and finish strong. This exhortation stems from the serial-position effect.27 When asked to recall a list of words, people tend to best recall the words at the end of the list, a tendency psychologists refer to as the recency effect. People also tend to recall the words at the beginning of the list more often than those in the middle, a tendency known as the primacy effect. What that means for written advocacy is that the first and last paragraphs should be the strongest ones in a brief.

Second, use IRAC or one of its variants. Every law student knows that IRAC stands for Issue, Rule, Application, and Conclusion. Professor Terri LeClercq referred to it as “the golden-rule acronym for organized legal discussions.” A search of the legal literature traces the first reference to IRAC back to 1961.28 In short, IRAC and its variants, such as CREAC, are time-tested structures for expressing legal reasoning.

Third, engage in what I refer to as a lovers’ quarrel structure: It is not enough to say why I am right; I also need to tell you why you are wrong. This is a common structure in which the majority opinion first raises arguments in support of its holding and then addresses arguments raised by the dissent.29 Briefs can employ a similar structure. When asked what she considered to be the most important part of a brief, the late Justice Ruth Bader Ginsburg responded:

If you’re on the petitioner’s side, to anticipate what is likely to come from the respondent and account for it in your brief. Make it part of your main argument. You know the vulnerable points, so deal with them. Don’t wait for the reply brief; just incorporate in the main brief as part of your affirmative statement answers to what you think you will most likely find in the responsive brief. …30

Her advice differs slightly from the lovers’ quarrel structure. Instead of making affirmative arguments before rebuttal ones in the same brief, she would combine them. Her underlying reasoning was the same, nonetheless. Whatever structural technique is employed should address why the other side’s arguments are incorrect. As Justice Ginsburg put it, “You know the vulnerable points, so deal with them.”

III. What AI Can Currently Do

A series of human-versus-machine contests illustrates what AI can do. In 1997, Deep Blue beat Garry Kasparov at chess.31 At the time, Grandmaster Kasparov was the highest-ranked human player.32 Over a decade later, in 2011, IBM Watson won the gameshow Jeopardy! against two human champion contestants.33 Then, in 2019, Project Debater challenged Harish Natarajan, a globally recognized debate champion.34

Just last year, OpenAI unveiled ChatGPT for public use.35 AI powers this natural language processing tool: the human user types in “prompts,” which generate written responses from ChatGPT. A sample prompt might be one from a standardized test by the Department of Education:

One morning a child looks out the window and discovers that a huge castle has appeared overnight. The child rushes outside to the castle and hears strange sounds coming from it. Someone is living in the castle!

The castle door creaks open. The child goes in.

Write a story about who the child meets and what happens inside the castle.

Continue this story at a 4th-grade reading level. Give the child a name and be descriptive about the castle.

In response, ChatGPT wrote an essay so convincing that a fourth-grade teacher had difficulty determining whether AI or a 9-year-old student wrote it.36

Law students and lawyers, however, need not worry yet. To be fair, ChatGPT is a C+ student37 that can pass the bar exam.38 However, as an example of its current capabilities for written advocacy, Dean Andrew Perlman prompted ChatGPT: “Draft a brief to the United States Supreme Court on why its decision on same-sex marriage should not be overturned.”39 ChatGPT responded not with a brief, but with a five-paragraph letter totaling only 282 words. It wrote the following three arguments without any citations to authority:

First, the Court’s decision in Obergefell is firmly rooted in the principle of equality under the law …

Second, the Court’s decision in Obergefell is consistent with a long line of precedent establishing the fundamental right to marry …

Third, the Court’s decision in Obergefell has been widely accepted and has had a positive impact on the lives of same-sex couples and their families …

The potential is there, but as one legal commentator noted, ChatGPT and AI tools like it are “not ready for prime time — at least, not quite yet.”40

IV. Problems with Using AI for Legal Writing

A. Regulatory and Ethical Issues

Even so, ChatGPT has caused a stir in academia. Some faculty are reevaluating their grading guidelines. Should more weight be given to oral arguments instead?41 Other faculty are considering revamping their curriculum.42 Can AI be woven into legal writing lessons?43 Whatever the answers to these questions may be, even AI enthusiasts acknowledge that law-related uses of AI present regulatory and ethical issues.44

In the legal arena, the regulatory rules include the ABA Model Rules of Professional Conduct. Those rules prohibit ghostwriting, except for pro se litigants.45 However, they do not apply to AI, the ghostwriter at issue. The focus of regulation thus has been on policies that apply to law students.46 Those policies fall into two categories.

The first category calls for banning a law student’s use of AI altogether.47 However, both Lexis and Westlaw are powered by AI.48 And even if the ban were narrowed to ChatGPT and the like, that policy would be hard to enforce. As AI improves, legal writing instructors will eventually be in the same predicament as the fourth-grade teacher who could not determine the true author of written work product.

The other policy category does not ban the use of AI, but rather focuses on plagiarism. However, this focus suffers from the same enforceability problem as an outright ban. Even before the advent of ChatGPT, it was not always easy to detect plagiarism, nor to prevent unethical human ghostwriting. What is more, focusing on plagiarism runs contrary to the fundamental purpose of the proscription, which is to prevent stealing. Black’s Law Dictionary defines the term to mean the “deliberate and knowing presentation of another person’s original ideas and creative expressions as one’s own.”49 Under this definition, AI is not a “person.” It is a tool, like an electric lawnmower. It does not own what it makes, no matter how sophisticated it is. Aside from these definitional issues, the creators of AI want law students to use their product. There is, in other words, implicit permission to take those ideas and expressions as the student’s own.

B. Originality and Style

The real problem with AI is not plagiarism. Law students can easily give attribution. The real problem is that even with attribution, AI can stifle originality and style.

Consider what happened after Deep Blue defeated a human grandmaster. As noted earlier, chess champions began routinely using AI to help prepare for human-versus-human tournaments.50 This preparation, if done improperly, ruined chess, according to Vladimir Kramnik. He had been a world champion who had dethroned the same human grandmaster whom Deep Blue defeated. His observation was that:

“[f]or quite a number of games on the highest levels, half of the game — sometimes a full game — [were] played out of memory.” He lamented, “You don’t even play your own preparation; you play your computer’s preparation.”51

What Grandmaster Kramnik longed for was chess brilliancies. A brilliancy, in chess parlance, is a “game that contains a spectacular, deep and beautiful strategic idea, combination, or original plan.”52 Chess tournaments do award brilliancy prizes,53 even nowadays.54 But overreliance on AI can diminish the number of games that contain especially original and imaginative combinations.

Besides originality, overreliance on AI can also stifle writing style. There is little doubt that AI can be programmed to write well. There is also little doubt that AI, when prompted in the future, will be able to write like the author of one’s choice. There is a difference, though, between the basic elements of good writing and writing style. The latter, as Thomas E. Spahn described it, is “as individual as a fingerprint.”55 But unlike a fingerprint, a human writer is not born with writing style. It only grows from that writer’s practiced and particular use of “[t]he proper words in the proper places.” “Plagiarizing” AI, or even quoting AI with attribution, does nothing to foster that growth.

V. How to Use AI for Written Advocacy

Despite these concerns, law students should be able to use AI. They already do so, without objection, for legal research with Lexis and Westlaw. And so, for written advocacy, the issue is how to define an acceptable use of AI, one that promotes originality in both argument and writing style.

Here is how: Use AI to vet arguments. Dean Perlman proved that this use is possible. As noted earlier, he prompted ChatGPT to draft a brief on why the United States Supreme Court should not overrule its decision on same-sex marriage. ChatGPT responded with three arguments.

Likewise, a student can ask AI to confirm the main arguments to make for a moot court problem. A student can also follow Justice Ginsburg’s advice by asking AI to identify “vulnerable points” and to vet the student’s counterarguments to those points. Thereafter, the student can exercise original thinking by coming up with arguments that AI did not make. The student can then express all these arguments in her moot court brief in a way that AI did not, that is, in her own writing style. Moot court judges may nevertheless want some way to verify whether a student “plagiarized” what AI wrote. They need not worry, though. The briefs that overly rely on AI will look the same. With every student having access to AI, the brief that stands out (positively) will be the one that has novel arguments or a distinctive writing style. That brief is the one that should earn the legal writing equivalent of a chess brilliancy prize.

Indeed, a law student’s use of AI to vet arguments is analogous to how chess students, as opposed to chess professionals, use AI. As Grandmaster Kramnik noted earlier, chess professionals use AI to analyze their human opponents’ games to find tendencies and weaknesses. AI then suggests 500 to 1,000 chess moves, which the professional memorizes, hoping that the human opponent will play one of the predicted moves.56 In contrast, students of the game arrange their pieces on the board to analyze their next move while simultaneously having a chess computer analyze the position. But one grandmaster strongly frowns upon the blind acceptance and memorization of the computer’s suggested move; for chess students to improve, he instead recommends independent analysis.57 The reason is that such analysis promotes creativity and a conceptual understanding of why a certain move was made.58

Suppose our chess board is filled with facts from a first-year torts case: Wagner v. International Railway Company. In that case, Individual A and his cousin, Individual B, boarded an overcrowded railway car. The conductor did not close the doors. After climbing an inclined trestle, the car lurched violently as it made a turn onto a bridge. Individual B was thrown out. After a cry of “Man overboard,” the car crossed the bridge before stopping. Individual A and railway employees then went to look for Individual B. By then, “[n]ight and darkness had come on.”59 Individual A went to the bridge to look for his cousin. He testified that the conductor had asked him to go there and that the conductor followed with a lantern. The conductor denied this testimony. Meanwhile, others went under the bridge and found Individual B. As they stood there, unbeknownst to them, Individual A had missed his footing on the bridge, fell, and struck the ground near them.

Based on these facts, what was not at issue in the Wagner case was whether the railway company was liable to Individual B, the person thrown from the car. The railway company was negligent. The doors were left open; the car violently lurched; and Individual B was thrown out, sustaining injuries as a result. Rather, what was at issue was whether the railway company was liable to Individual A for negligence. He went to search for, and tried to rescue, Individual B.

To resolve this issue, suppose the applicable tort law is the law as it stood before November 22, 1921, when Judge Benjamin N. Cardozo issued the Wagner opinion. If, under this law, AI were to argue that there were two separate chains of events — one from the boarding of the car to its stopping, and another from the beginning of the search to Individual A’s injury — then that argument would not be original. The trial judge had already thought of that. In addition, if AI were to argue that Individual A voluntarily assumed the risk in trying to rescue his cousin, then that argument would also not be original. Defense counsel had already thought of that. And finally, if AI were to argue proximate causation, then that argument would not be original either. Plaintiff’s counsel had already thought of that.60

To be clear, all these arguments are good ones and should be addressed. But Judge Cardozo did something different. He wrote, “Danger invites rescue.”61 In three words, he personified both danger and rescue, and he did it with style. If he had instead written an obvious “Danger invites bodily injury” or “Danger invites lawsuits,” neither statement would have been interesting. Also from a stylistic perspective, he “moved abruptly and without preliminaries from a statement of the facts to what, at that point, was an explanation.” In short, Judge Cardozo put the proper words in the proper places.

His now-canonical phrase — “Danger invites rescue” — is also original under the then-existing precedent. As Professor Abraham and Professor White explained, “in Wagner, Cardozo took a question on which there already was substantial precedent, and re-answered the question in a new, inimitable, and memorable way.” What others had analyzed in terms of proximate cause or “chain of causation,” he analyzed as relational risk — “the small risk of injury to a rescuer was sufficiently connected to the large risk of injury to a passenger.” The result is an opinion worthy of a brilliancy prize. The opinion is still celebrated today; it has been included in casebooks and in the first Restatement of Torts.62

One may wonder, though, whether AI — using pre-1921 tort law — would have been “intelligent” enough to come up with what Judge Cardozo did. Perhaps. But the larger point remains: While AI is a powerful tool that law students should use to vet arguments, divorcing that use from independent human analysis will deprive us of originality in argument and in style. Through proper usage, though, the hope is that AI will invite creativity, just as danger invited rescue a century ago.

VI. Conclusion

From hot metal typesetting to word processors to ChatGPT, technology affecting written advocacy will continue to advance. Perhaps one day we will have the legal writing equivalent of Data, the sentient AI android from Star Trek. Perhaps one day legal writing instructors will no longer be needed. And perhaps one day, because of AI, the written component of moot court competitions as we know it will change.

But until that day, we want to celebrate creative expressions of logos. We want to be moved by pathos. We want to marvel at our law students’ legal writing style. We can do all that, even if law students use AI, so long as their use is limited to vetting arguments and the counterarguments that the other side might make. Then, armed with this information, law students have an opportunity to come up with novel arguments and express them in a way that AI did not. Because in the end, as it was at the beginning of this Article, moot court coaches and legal writing instructors want a human being — and not a machine — to win the best brief award.
