Blog Post | 112 KY. L. J. ONLINE | January 16, 2024
Admitting Artificial: The Approaches to Admitting Generative AI in Court Settings
By: Turner Reynolds, Staff Editor, Vol. 112
On December 20, 2023, LexisNexis announced that after a brief test period with over 450 law school librarians, its generative artificial intelligence (“AI”) platform, Lexis+ AI, would be available to second- and third-year law students for immediate use.[1] According to LexisNexis, 78% of law school faculty plan to teach students to use generative AI tools in legal scholarship.[2] With the Lexis+ AI tool, student users can choose from two functions: “Ask a legal question” or “Generate a draft.”[3] There is no doubt that Lexis+ AI and similar generative AI platforms will continue to transform today’s legal landscape, and such innovative technology certainly gives rise to many important considerations, both predictable and unforeseen. In response to these technological developments, several jurisdictions have implemented new rules concerning the submission of AI-prepared materials.[4] Kentucky is not one of them.
Although generative AI tools can increase efficiency, clarity, and accuracy when properly used, their efficacy can also disguise many pitfalls awaiting cavalier lawyers. Beyond clearly implicating several of the American Bar Association’s Model Rules of Professional Conduct,[5] the use of AI raises confidentiality, plagiarism, and accuracy concerns.[6] These concerns were recently illuminated in a New York case in which attorneys submitted fictitious case law and citations to the court after inappropriately using OpenAI’s ChatGPT to prepare arguments.[7] The key is learning these pitfalls, and learning the technology itself; in the meantime, however, the legal profession must protect itself and its clients.[8] For the time being, the easiest way to do so is to encourage, or require, human oversight.[9]
In response to these issues, the American Bar Association adopted Resolutions 112[10] and 604,[11] and it has encouraged state bar associations to adopt resolutions of their own. Although the Kentucky Bar Association’s Rules of Ethics require lawyers to “keep abreast of changes in the law and practice,” Kentucky has issued no ethics opinions specifically addressing AI.[12]
Two types of rules are now quickly emerging from district judges and state bar associations. Some judges, like U.S. District Judge Michael J. Newman in the Southern District of Ohio, have issued standing orders or rules banning the submission of filings prepared by any form of AI.[13] More commonly, jurisdictions require disclosure of the use of generative AI in the court filing itself, identification of the AI tool used, and certification of the accuracy of the citations and legal authority included.[14] The Eastern District of Texas, for example, reminds litigants that they remain bound by the requirements of Rule 11[15] and responsible for the accuracy, or inaccuracy, of the information submitted to the court, even if it was generated by AI tools.[16]
A complete ban on the use of AI tools to generate court submissions and filings is likely not a long-term solution; the second class of rules, requiring disclosure of AI use, shows more promise. These disclosures alert judges and clerks to approach such submissions and filings with a more watchful eye while incentivizing those responsible for them to do the same. They also serve as a reminder that the human being submitting the work is responsible for it, not the computer or tool that wrote it.[17]
Although generative AI is a daunting innovation, if used correctly it can usher the legal field into a new era and propel us, as legal professionals, to new heights. As with any new technology, only time will reveal the specific pitfalls that emerge as we learn to use it. As generative AI develops, implementing ethical and legal constraints on its use in the legal profession is necessary, in Kentucky and everywhere.
[1] Press Release, LexisNexis, LexisNexis Collaborates with U.S. Law Schools to Roll Out Lexis+ AI, Marking First Widespread Use of Legal Generative AI Solution in Law School Education (Dec. 20, 2023), https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-collaborates-with-u-s-law-schools-to-roll-out-lexis-ai-marking-first-widespread-use-of-legal-generative-ai-solution-in-law-school-education.
[2] Id.
[3] Lexis+ AI, LexisNexis, https://plusai.lexis.com/ (last visited Jan. 12, 2024). The “Ask a Legal Question” function allows users to employ a conversational search for answers to complex legal questions, while the “Generate a Draft” function guides users through the legal drafting process. Id.
[4] See, e.g., Rules of the District Courts of the State of Hawai’i (RDCH) General Order 23-1 (2023) (requiring that a “Reliance on Unverified Source” document be submitted with filings or submissions generated by artificial intelligence); SBM Ethics Opinion JI-155 (2023) (instructing judges to maintain competence and understanding of AI’s “ethical implications”); S.D. Ohio Standing Order Governing Civil Cases (Newman, J.) (July 14, 2023), https://www.ohsd.uscourts.gov/sites/ohsd/files//MJN%20Standing%20Civil%20Order%2012.14.2023.pdf (banning any filing in which a portion is prepared or drafted by any form of AI); E.D. Tex. L.R. Civ. R. 11 (requiring review and verification that computer-generated content complies with necessary standards).
[5] See generally Model Rules of Pro. Conduct (Am. Bar Ass’n, Discussion Draft 1983) (enumerating model ethical obligations to guide the nation’s legal profession). For more information on the ethics of AI use in the legal field, see Karen Sloan, A Lawyer Used ChatGPT to Cite Bogus Cases. What Are the Ethics?, Reuters (May 30, 2023, 5:15 PM), https://www.reuters.com/legal/transactional/lawyer-used-chatgpt-cite-bogus-cases-what-are-ethics-2023-05-30/.
[6] Zachary Foster & Melanie Kalmanson, Litigators Should Approach AI Tools with Caution, Law360 (Feb. 2, 2023, 6:08 PM), https://www.lexisnexis.com/pdf/practical-guidance/ai/litigators-should-approach-ai-tools-wit-caution-l360.pdf.
[7] Larry Neumeister, Lawyers Submitted Bogus Case Law Created by ChatGPT. A Judge Fined Them $5,000, AP News (June 22, 2023, 6:16 PM), https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c.
[8] See Big Problems (and Benefits) of Generative AI Are Here, ABA (Aug. 9, 2023), https://www.americanbar.org/news/abanews/aba-news-archives/2023/08/problems-and-benefits-of-ai/.
[9] Id.
[10] “[T]he American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (‘AI’) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.” ABA Resolution 112 (Aug. 12–13, 2019).
[11] “[U]rges organizations that design, develop, deploy, and use artificial intelligence (“AI”) systems and capabilities to follow [specific] guidelines.” ABA Resolution 604 (Feb. 6, 2023).
[12] Michelle C. Fox & A. Riley Grant, Law 2.0: Artificial Intelligence Advancements May Change the Way We Practice Law, Louisville Bar Ass’n Bar Briefs (May 2023).
[13] S.D. Ohio Standing Order Governing Civil Cases (Newman, J.) (July 14, 2023), https://www.ohsd.uscourts.gov/sites/ohsd/files//MJN%20Standing%20Civil%20Order%2012.14.2023.pdf.
[14] See, e.g., RDCH General Order 23-1 (2023) (requiring that a “Reliance on Unverified Source” document be submitted with filings or submissions generated by artificial intelligence); E.D. Tex. L.R. Civ. R. 11 (requiring review and verification that computer-generated content complies with necessary standards); D.N.J. Judge Evelyn Padin’s General Pretrial and Trial Procedures (Nov. 13, 2023), https://www.njd.uscourts.gov/sites/njd/files/EPProcedures.pdf (adding required identification of the specific portions drafted using AI).
[15] See Fed. R. Civ. P. 11 (requiring certification that representations to the Court presented in filings and submissions are accurate, necessary, and warranted).
[16] E.D. Tex. L.R. Civ. R. 11.
[17] ABA House Adopts 3 Guidelines to Improve Use of Artificial Intelligence, ABA (May 24, 2023), https://www.americanbar.org/advocacy/governmental_legislative_work/publications/washingtonletter/may-23-wl/ai-0523wl/.