Law professor outlines risks, encourages best practices for using AI in legal, academic writing
LAWRENCE — One of the biggest concerns regarding artificial intelligence is that people will use it as a writing tool, then pass off the results as their own work. But when Andrew Torrance and Bill Tomlinson tried to list AI as a co-author on a law review article, journals didn’t like that either.
That was just one step in the journey the legal scholars have taken while using artificial intelligence in academic writing, a journey that has included publishing a piece guiding others on best practices and mistakes to avoid. In their first citation, they noted that the paper was written not by AI, but with its assistance.
Torrance, the Paul E. Wilson Distinguished Professor of Law at KU, and Tomlinson of the University of California-Irvine have been longtime collaborators. Their early work using AI in scholarly writing has developed into several papers. “ChatGPT and Works Scholarly: Best Practices and Legal Pitfalls in Writing with AI,” written with Rebecca Black of the University of California-Irvine, was published in the SMU Law Review.
“We wrote a bunch of papers using AI and got them accepted. And along the way we learned a lot about what worked and what didn’t when using AI,” Torrance said. “It’s enhanced productivity a lot. Before, one paper a year or so would be good. Now you can do so much more. We edit ourselves to make sure those pitfalls don’t happen. In some cases, we consider AI to be a co-author. That’s one of the things we learned right away, is be explicit. We celebrate that we use it.”
The paper provides guidelines for those curious about using leading AI tools such as ChatGPT in their academic writing. They largely apply to any kind of writing, but the authors found that while AI can be a useful tool, a human touch is still necessary to avoid faulty work. The guidelines include:
- Using standardized approaches.
- Having AI form multiple outlines and drafts.
- Using plagiarism filters.
- Ensuring arguments make sense.
- Avoiding AI “hallucinations,” in which the tools simply make things up.
- Watching for repetition, which the models tend to produce.
Torrance is also an intellectual property scholar, so violating others’ copyrights would look especially bad, he said. Ensuring that citations of others’ work are accurate is vital as well.
The researchers provide step-by-step guidelines on usage as well as information about the ethics of AI in writing and its place in legal scholarship.
“It gives you a huge head start when using these tools,” Torrance said. “Remember, these are the absolute worst versions of these tools we’ll see in our lifetimes. We’re on the Model T now, but even the Model T is amazing. But you need to be sure you don’t drive it into a ditch.”
Tomlinson and Torrance also noted that using AI allows for “late-finding scholarship.” In traditional publishing, if the science or scholarship changed, that information would have to wait for a new edition. Now, as understanding evolves, writing can be continuously updated. That opens the door for publications that can be “dynamically definitive instead of statically definitive,” Torrance said, while simultaneously making knowledge more accessible.
Torrance, Tomlinson and collaborators Black and Don Patterson of UC-Irvine wrote that, regardless of what one thinks about AI, it can play an incredibly useful role in academic writing and that those who use it properly can have a decided advantage in productivity.
“We hope this paper allows or helps people to shift some of the mentality around AI. I’m sure we haven’t identified all the possible pitfalls,” Torrance said. “Frankly, a lot of these are mistakes you need to avoid, period. I think a lot of the same principles apply between a human writing and using AI. We thought, as a public service, we should put this out there.”
Regardless of how AI evolves, the authors have laid a foundation for how scholars could use the tool in legal and responsible ways. And a piece of advice Torrance offers students in his legal analytics class can apply to all, even if they’re not in the field of law.
“The tagline for the class is, ‘Be the lawyer who masters AI, not the one who is run over by it,’” he said.