All Roads Lead to Rome: Ethics and AI in Legal Practice

Angelina M. Vigliotti1

13 Stetson J. Advoc. & L. 271 (2026)

Contents
  I. Introduction: Artificial Intelligence in the Practice of Law
  II. AI in Legal Research
  III. The Rome Call
    A. Transparency: “In principle, AI systems must be explainable”
    B. Inclusion: “The needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop”
    C. Responsibility: “Those who design and deploy the use of AI must proceed with responsibility and transparency”
    D. Impartiality: “Do not create or act according to bias, thus safeguarding fairness and human dignity”
    E. Reliability: “AI systems must be able to work reliably”
    F. Security and privacy: “AI systems must work securely and respect the privacy of users”
  IV. Conclusion
  Footnotes

I. Introduction: Artificial Intelligence in the Practice of Law

When discussing modern legal research, it has become nearly impossible to have a conversation that does not begin or end with Artificial Intelligence (AI). As the major legal research databases, Westlaw, Lexis, and Bloomberg specifically, produce their own AI products for research and practice, law students and attorneys alike find themselves in a new technological landscape that points toward revolutionary opportunities in the practice of law and, simultaneously, a dystopian threat of disruption.2 Uncertainty is to be expected following such a rapid and prolific adoption of a new technology that even the American Bar Association (ABA) cannot definitively define. Regardless, AI is here, and the legal field must confront it as it is increasingly integrated into the practice of law. In particular, the legal field must seriously confront how the integration of AI will affect attorneys’ advocacy for their clients.

Legal advocacy requires professional dedication to the effective practice of law and the duty of an attorney to represent the best interests of their client. The Model Rules of Professional Conduct, and likewise the ethics rules of each jurisdiction, guide the professional behavior of attorneys and create a standardized foundation of ethics by which they conduct themselves. These standards are relevant in all aspects of their work, including their work as legal researchers. Whether performing complex and novel research into indeterminate areas of law or simply running standard practice searches, attorneys must act with the same professional dedication to competency and ethics in these tasks as they do in any task affecting a client’s outcomes. As a result, the adoption of Generative AI at different levels of practice necessitates critical analysis of the AI tools themselves in light of the standards of professional conduct. Attorneys, whenever presented with a new technology or program to support their practice, have always embraced the responsibility of ensuring compliance with professional standards while implementing changes. They are, after all, advocates in every aspect of practice, including as legal researchers and managers of legal technology. As such, they know that researching accurately, effectively, and ethically provides the cornerstone on which legal advocacy is built, and thus that the choice of specific research tools can likewise support, or hinder, the success of an advocate. To navigate such a rapidly evolving technological landscape, attorneys will need guidance that centers their duty as advocates for their clients while integrating Generative AI tools into their practice.

Though formal opinions addressing the use of AI in the practice of law have been made available to attorneys through the ABA3 and state bar associations,4 AI has become so prolific in so many aspects of practice that a more general framework is needed to proactively address ethics issues as they appear. Fortunately, international institutions have been developing exactly that. On February 28, 2020, in Rome, Italy, the Pontifical Academy for Life, with signatories from Microsoft, IBM, the Food and Agriculture Organization (FAO), and the Italian Ministry of Innovation, presented to the world a document titled the “Rome Call for AI Ethics” (henceforth the Rome Call). This document lists six guiding principles: transparency, inclusion, responsibility, impartiality, reliability, and security. These principles encourage an approach of ethics by design when engaging with AI. Though neither binding nor intended solely for a legal audience, the document offers its principles to guide AI users in upholding human dignity while utilizing generative AI, a concern particular to advocates who seek to do the same in their practice. The document thus presents a structured approach to ethical engagement with AI that is markedly useful for attorneys who recognize their duty as advocates to center the human dignity, interests, and needs of their clients. In embracing the six principles of the Rome Call and reflecting on their work through that lens, attorneys stand to satisfy their professional standards and strongly advocate for the interests of their clients in the midst of a rapidly developing field of legal research technology.5

II. AI in Legal Research

Recognizing the need to address such rapid changes in the legal technology landscape, the ABA has directly responded to these developments and opened communication with attorneys on how to proceed within this technological renaissance. Formal Opinion 512, authored by the Standing Committee on Ethics and Professional Responsibility, addressed the need for attorneys to remain firm in their commitment to their ethical standards while acknowledging that AI tools, and specifically generative AI tools, are developing so rapidly that continual updates are needed to help attorneys understand the appropriate role of such tools in legal advocacy. Concerned specifically with the ways in which the use of AI in the legal field will implicate the rules of conduct that bind all attorneys, the opinion recognizes that the use of legal technology does not exist in an ethical vacuum. The duties and responsibilities of attorneys provide context for the technological choices attorneys make.6

Comment 8 to Model Rule 1.1 addresses the competency requirements of attorneys as they pertain to legal technology, demonstrating how professional standards are pertinent when attorneys engage in research or otherwise utilize technology within their scope of practice. The comment specifically provides that competency includes understanding the benefits and risks associated with technology being used in legal practice.7 Moreover, as the ABA addresses in Formal Opinion 512, the use of technology further implicates the duty of attorneys to maintain confidentiality under Model Rule 1.6.8 Such ethical concerns should give attorneys pause when conducting legal research that involves utilizing commercial tools for research assistance and feeding data into such tools. This has only become more relevant as commercial databases produce their own legal research AI features and tools.9 As a result, integrating AI into a legal research framework requires attorneys who seek to abide by the rules of professional conduct to recognize the risks that ineffective use of such tools creates. Indeed, as advocates, attorneys have a unique incentive to engage with AI in their legal research to improve their effectiveness and efficiency and to strengthen their argumentation. However, this must be done with due caution to prevent an overdependence on such tools that might lead to inaccuracy, breaches of confidentiality, bias, or an undermining of the human dignity of both advocate and client.

III. The Rome Call

Recognizing the need for a human-centered approach to the AI renaissance, international organizations have undertaken the task of promoting a sense of responsibility among users “to create a future in which digital innovation and technological progress serve human genius and creativity and not their gradual replacement.”10 Following a meeting hosted by the Pontifical Academy for Life, a select group of international organizations collectively recognized a need to establish a framework of ethical engagement to address the prolific adoption of generative AI. Bringing together religious leaders, titans of the technology industry, and governmental representatives, the gathering demonstrated how the disruptive nature of AI is bound to implicate moral, legal, and commercial activities while requiring a deeper commitment to center humans when wading into such a novel technology landscape. The resulting publication was signed by Archbishop Vincenzo Paglia, President of the Pontifical Academy for Life; Dr. Brad Smith, President of Microsoft; Dr. John Kelly III, IBM Executive Vice President; Dr. Dongyu Qu, FAO Director-General; and Dr. Paola Pisano, Italian Minister of Innovation, as a demonstration of their commitment to promoting an ethical approach to AI. The Rome Call presents principles of ethical engagement with generative AI that uphold the dignity of human persons as the ultimate object of such engagement. Termed “algorethics,” these principles seek to ensure that the disruptive nature of the current AI renaissance does not supplant the human objective of seeking the good of human society, by centering humanity in such endeavors.11 While such principles may seem abstract, the concerns they address are poignantly real in the legal community, where AI has become a new frontier of possibility and danger.12

The value of the Rome Call is not in its authority, but in its objective usefulness for attorneys confronting engagement with AI in their professional role. Whereas Formal Opinion 512 recognizes the rules implicated by the use of AI in the legal field, as well as the need for state bar associations to keep abreast of changes and further implications, the Rome Call provides a guiding framework for using AI toward the benefit of humans, not their replacement. Such an approach unites both the practical concerns of an attorney and the ethical concerns of the advocate. The six principles in the Rome Call can thus support compliance with the standards of professional conduct and structure an ethical approach to investing in and utilizing generative AI in the practice of law, and, specifically, when conducting legal research.

A. Transparency: “In principle, AI systems must be explainable”

The first principle presents distinct challenges in the realm of legal research. Where Formal Opinion 512 recognized the difficulty in even defining AI, attorneys face the unique challenge of incorporating this uncertain technology into their research endeavors.13 Though familiar vendors such as Lexis and Westlaw have developed their own products to support attorneys specifically, attorneys still must be proactive in exploring the finer details of the tools they use, especially as they pertain to function, training, and the security of data. When using such tools, can you verify which sources the AI has access to, and thus the scope of its search? How is data stored, and for how long? How is the AI trained? And how will the answers to these questions affect your chosen AI tool’s responses to your prompts? These questions may lead to a review of the terms of service for such tools, or even a call to a vendor representative for clarity, but, ultimately, an attorney who seeks to employ such a tool when conducting sensitive research ought to know exactly what they are using.

B. Inclusion: “The needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop”

A particularly poignant principle in the Rome Call, which considers AI in service of the good, acknowledges that “these systems must not discriminate against anyone because every human being has equal dignity.”14 While the use of generative AI in the legal field has been touted as a major advancement toward closing the justice gap, disparate access to such tools raises the concern that AI might inadvertently widen that gap. The varying effectiveness of certain AI tools, and the cost-prohibitive nature of some of the more sophisticated legal AI products, mean that access to the tools themselves may not be inclusive, leading to disparate outcomes based on resource availability. Thus, while integrating AI into legal practice, and research specifically, stands to promote justice through access to information, a critical eye toward who truly has access will help prevent resource inequality from producing injustice.15

C. Responsibility: “Those who design and deploy the use of AI must proceed with responsibility and transparency”

Attorneys who serve as decision makers regarding the technology purchased and used in their legal environment must recognize their responsibility to maintain ethical standards while doing so. Ultimately, “there must always be someone who takes responsibility for what a machine does.”16 Legal advocates handle sensitive data pertaining to their practice and their clients. They also produce legal materials that will be submitted to courts under their name. Their choice to utilize an AI tool does not absolve them of the professional standards of competency and confidentiality. This reality has recently been thrown into sharp relief as attorneys have publicly faced sanctions for overdependence on AI tools. Attorneys have specifically been threatened with sanctions for uses of AI in legal research that led to hallucinated legal authorities being submitted to courts without verification of their relevance or existence. While AI has incredible potential to improve accuracy and efficiency in legal research, making it a valuable resource for attorneys to employ, advocates must still exercise their integrity and take responsibility for the choices made and the work product produced, knowing the risks and limits of such technology in its current form.17

D. Impartiality: “Do not create or act according to bias, thus safeguarding fairness and human dignity”

This principle directly addresses a serious concern that arises when utilizing AI for legal research. Generative AI and large language models (LLMs) generally operate by recognizing and perpetuating patterns and relationships to predict the appropriate or desired response to a prompt. This can lead to insidious results when such tools are used to research areas of law where the patterns of the past were intentionally broken. If the data used to train an AI or an LLM is biased, the AI will likely generate biased outcomes. A study from Tulane University considered this reality and noted that AI could be used to perpetuate patterns of bias in the legal field if allowed, suggesting that ethical oversight remains essential to keep AI from resurrecting intentionally abandoned patterns of behavior and belief. Because AI can recognize patterns but cannot assign them moral weight, it might perpetuate biases unintentionally; impartiality therefore places the onus on advocates to proactively prevent such biases and violations of human dignity from being perpetuated by or in their use of AI tools. Advocates must be knowledgeable enough about the tools they employ to recognize the risks of bias and either mitigate those risks or wholly reject the tools.18

E. Reliability: “AI systems must be able to work reliably”

Particularly significant in the practice of law is the need to understand the true scope of AI’s capability when choosing which tools, if any, to use and when. This requires critical examination of the accuracy of such tools, particularly when using them for research. Though multiple databases have touted their ability to prevent “hallucinations,” or fabricated information supplied in response to a prompt, a preprint study by Stanford RegLab and HAI researchers found that the major legal research databases do, in fact, still hallucinate.19 This is a serious concern for attorneys who turn to AI for assistance in finding relevant, legitimate authorities. Even efforts to prevent hallucinations have not removed the risk completely, and at times they introduce new shortcomings. Retrieval-augmented generation (RAG) is a two-step process that has been integrated into some legal research AI tools. First, materials relevant to a prompt are retrieved from a large dataset; then, a response is generated by an LLM that has been provided both the retrieved materials and the language of the prompt. Though this technique is intended to prevent hallucinations by limiting the data the AI draws on to specific, legitimate resources, it can fail to account for the complexity of legal research. Novel or indeterminate questions, which are common in legal research, can complicate, or even prevent, the retrieval of relevant sources, as these systems may struggle when there is no definitive resource answering such questions but rather a complex and nuanced interplay of various sources. The importance of context and jurisdictional authority in legal research may be undermined by retrieval systems that prioritize textual relevance.20 Advocates must therefore utilize these tools within their limits and recognize where their traditional research expertise is needed until the performance of such products improves.21
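For readers less familiar with how RAG operates, the two-step process described above can be sketched in miniature. The toy corpus, the keyword-overlap scoring, and the stubbed “generation” step below are all hypothetical simplifications for illustration only; commercial legal research tools use far more sophisticated retrieval (such as semantic embeddings and citator data) and an actual LLM to draft the response.

```python
import re

def tokenize(text):
    # Reduce text to a set of lowercase words (a crude stand-in for
    # the semantic representations real retrieval systems use).
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(prompt, corpus, k=2):
    """Step 1: rank documents by naive keyword overlap with the prompt
    and return the top k. Real systems rank by learned relevance."""
    return sorted(
        corpus,
        key=lambda doc: len(tokenize(prompt) & tokenize(doc)),
        reverse=True,
    )[:k]

def generate(prompt, retrieved):
    """Step 2: an LLM would draft an answer grounded in the retrieved
    texts; here we simply assemble the context an LLM would receive."""
    context = "\n".join(retrieved)
    return f"Question: {prompt}\nAuthorities considered:\n{context}"

# Hypothetical mini-corpus of case summaries.
corpus = [
    "Case A discusses negligence and duty of care.",
    "Case B discusses contract formation and offer.",
    "Case C discusses negligence per se in traffic law.",
]

question = "What is the standard for negligence?"
answer = generate(question, retrieve(question, corpus))
```

The sketch also illustrates the article’s caveat: a novel question sharing little vocabulary with the corpus retrieves poorly, and the generation step can only be as well grounded as the retrieval that feeds it.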

F. Security and privacy: “AI systems must work securely and respect the privacy of users”

Finally, advocates have a particular concern regarding the security of such tools and the privacy of sensitive information fed to them. Attorneys often handle confidential data on behalf of their clients, including everything from medical records to financial statements and personal details that are both relevant to the legal issue at hand and wholly private in nature. The duty to maintain confidentiality under Model Rule 1.6 was noted in Formal Opinion 512 as a significant consideration in response to the introduction of generative AI to the legal field. The ABA has emphasized this exact point by urging lawyers to consider how the information used to develop a prompt could be accessed by others outside their firm, undermining the confidentiality of that information. Generative AI and its learning capabilities thus present a unique risk to the security of sensitive data.22 Though privacy safeguards may develop as features of such products, attorneys must still be proactive and cautious in their choice of tools and in their development of prompts that are both appropriate to the research inquiry and sufficiently obscure to prevent a violation of attorney-client confidentiality. The Model Rules recognize the accountability of advocates in protecting sensitive data, and this accountability is not reduced by the adoption of novel technology.23

IV. Conclusion

Today’s advocates face a technological development that is set to alter the practice of law at every level. Regardless of the technology landscape that has accompanied the prolific introduction of generative AI, their responsibility to provide ethical and competent representation is constant. In complying with such standards of professionalism, attorneys will need assistance in staying abreast of both the opportunities and the risks presented by this rapidly advancing technology, particularly as it is integrated into firm, court, government, and academic technology infrastructures. Fortunately, guidance such as the Rome Call can direct attorneys toward compliance with the Model Rules and their state bar standards by orienting their decision-making and engagement toward a human-centric approach to AI. The six principles that structure this framework are particularly relevant to the legal field and provide a solid foundation on which advocates can build an ethical use of AI that serves humans first and foremost. As the legal field grapples with these historic changes, advocates must renew their commitment to upholding the human dignity of their clients and utilize generative AI always toward that end.
