While a lack of internet regulation is the norm in the United States, generative artificial intelligence presents a series of new challenges, particularly in the legal field. Those trained in the law know to check their sources, whether those sources come from case law or a generative AI tool like ChatGPT, but the average consumer is not so discerning. When that consumer is in the midst of a legal dispute and must navigate it without a lawyer, they are less likely to pause and evaluate the information they are given, particularly if it appears bright, shiny, knowledgeable, and capable of helping them navigate the legal system quickly and efficiently. This lapse in judgment, whether conscious or subconscious, may deepen the justice gap and cause those unfamiliar with the legal system to become even more distrustful not only of the system itself, but of the resources meant to help self-represented litigants navigate that system in a meaningful way.
This Article will begin with a brief explanation and analysis of generative artificial intelligence and its current role in the legal field. It will then survey global regulatory frameworks surrounding artificial intelligence and compare those frameworks to current approaches in the United States. Part II will discuss access to justice in the United States, the ways in which technology currently is and is not filling the justice gap, and the regulations governing the industry. Part III will propose a scheme for regulating consumer-facing generative AI products and analyze the potential and pitfalls of such regulation. Part IV will discuss enforcement of regulations for any consumer-facing generative AI products that may be created to fill the justice gap, while Part V will step through the looking glass and offer predictions about whether meaningful consumer-facing generative AI will reach those in the justice gap, and whether regulating those products will become a reality.