The Honorable Michael C. Baggé-Hernández1
12 Stetson J. Advoc. & L. 225 (2025).
Contents
- I. Introduction to AI and Generative AI
- II. The Evolution of AI
- III. Impact of AI and the Practice of Law
- IV. Ethical and Legal Considerations — Florida Bar Ethics Opinion 24–1
- 1. Competence (Rule 4–1.1)
- 2. Confidentiality (Rule 4–1.6)
- 3. Oversight of Generative AI and Candor to the Tribunal (Rule 4–3.3)
- 4. Unlicensed Practice of Law
- V. Florida Supreme Court Opinion No. SC2024–0032
- VI. Practical Implications for Legal Practice
- VII. Conclusion
- Footnotes
I. Introduction to AI and Generative AI
The rise of artificial intelligence (AI) and specifically Generative AI represents a transformative shift in technology, comparable in impact to the steam engine and electricity. This technology enables computers to perform complex tasks typically requiring human intelligence, including speech recognition, language comprehension, and data-driven decision making. Generative AI, in particular, focuses on creating new content from existing data sets, which has profound implications for automating and enhancing tasks in the legal domain such as drafting documents and formulating legal arguments. As these tools become integrated into legal practices, it is crucial to address the accompanying ethical and legal challenges.
II. The Evolution of AI
AI has evolved significantly since its conceptual beginnings, transitioning through various phases marked by skepticism and breakthroughs. From the Turing Test to the establishment of neural networks, AI’s journey has been pivotal, with notable applications in healthcare and legal technology sectors.2 Despite periods known as “AI winters,”3 where progress slowed due to funding and interest drops, advancements in neural networks and data availability in the late 20th century led to significant developments, such as IBM’s Deep Blue and modern speech recognition technologies.4 The introduction of deep learning systems and generative adversarial networks has further expanded AI capabilities, making large language models like OpenAI’s GPT series and Google’s LaMDA possible.5 These technologies, while impressive, face challenges such as bias and unpredictability,6 challenges that are crucial to address as AI integrates more deeply into various sectors, including law.
III. Impact of AI and the Practice of Law
The integration of AI in the legal sector has transformed the landscape, beginning with the development of legal research databases in the 1970s. These early systems laid the groundwork for more sophisticated platforms like LexisNexis and Westlaw, revolutionizing legal research through advanced AI algorithms.7 These platforms utilize natural language processing and machine learning to enable precise, efficient searches across extensive legal databases.8 Additionally, predictive analytics tools have emerged, using historical data to forecast case outcomes, thereby enhancing strategic legal planning.9 AI’s role extends to document automation, management systems, and litigation support, particularly in e-discovery, where AI processes help identify, collect, and produce electronically stored information swiftly and accurately.10 The continual advancement of AI technologies promises to unlock even more sophisticated legal research, analysis, and client services capabilities.
IV. Ethical and Legal Considerations — Florida Bar Ethics Opinion 24–1
Incorporating AI in legal practices brings complex ethical challenges, partly addressed by Florida Bar Ethics Opinion 24–1, issued on January 19, 2024 (the opinion). This opinion underscores the importance of adhering to ethical standards when integrating AI tools, focusing on competence, confidentiality, and candor. It suggests that lawyers must maintain their legal expertise and develop a reasonable understanding of the AI technologies they employ. The opinion highlights the need for thorough due diligence in selecting AI technologies, ensuring they meet security standards and respect client confidentiality. Moreover, it calls for transparency in the use of AI, requiring lawyers to keep clients informed and obtain their consent when lawyers use AI tools in their cases.11
Recognizing the rapid advancement and increasing reliance on AI tools in various aspects of legal work, the opinion seeks to clarify how lawyers can uphold their ethical duties in this new technological context.12 The essence of this guidance revolves around ensuring that AI enhances the legal profession’s integrity without compromising the fundamental ethical obligations owed to clients and the justice system.
The primary purpose of Ethics Opinion 24–1 is to give lawyers a framework for ethically integrating AI technologies into their practice. It recognizes the potential of AI to enhance the efficiency and effectiveness of legal services but also warns against the uncritical adoption of AI tools. The opinion stresses the importance of understanding the capabilities and limitations of AI technologies to ensure their use complements the lawyer’s professional judgment and adheres to ethical standards.13
1. Competence (Rule 4–1.1)
The opinion extends the duty of competence under Rule 4–1.1 of the Florida Rules of Professional Conduct to include understanding AI technologies used in legal work. It emphasizes that competent representation now requires not only legal expertise but also mastery over AI tools relevant to a lawyer’s practice areas. According to Rule 4–1.1, lawyers must ensure their use of AI technologies adheres to the necessary standards of legal knowledge, skill, thoroughness, and preparation for client representation.14
The opinion advises lawyers to familiarize themselves with the operational mechanisms of AI to enhance the quality of legal services while managing potential risks. This responsibility includes thoroughly assessing AI-generated outputs, focusing specifically on identifying and mitigating any biases or errors to comply with the required standards of skill and thoroughness mandated by Rule 4–1.1.15
Furthermore, the opinion underscores the necessity for continuous education in AI technologies, acknowledging the rapid advancements in legal tech. It encourages lawyers to stay informed about new technological developments and their ethical implications, ensuring that their practice evolves appropriately with these advancements to maintain competent representation.16
The opinion notes that the now famous case of Mata v. Avianca, Inc. illustrates how a lawyer's misuse of generative AI can run afoul of Rule 4–1.1. In Mata, the respondents engaged in actions that led to significant ethical breaches. They submitted court filings that included citations to non-existent judicial opinions and fake quotes, which, unbeknownst to them, had been fabricated by the artificial intelligence tool ChatGPT. Opposing counsel and the court uncovered these actions when they could not locate the cases cited in the filings. Despite multiple opportunities to correct their submissions, the respondents persisted in their claims, compounding their initial errors. The situation escalated following several court orders requiring the production of the cited cases, ultimately revealing the use of fabricated legal precedents. This sequence of events culminated in a formal sanction hearing where the depth of the respondents' misconduct was fully exposed, leading to sanctions aimed at deterring similar future conduct.17
The court stated that the respondents violated their competence duty as outlined in the Federal Rules of Civil Procedure and the American Bar Association’s Model Rules of Professional Conduct. Specifically, Rule 11 of the Federal Rules of Civil Procedure requires attorneys to ensure their legal contentions are supported by existing law or a substantial argument for its extension, modification, or reversal. The respondents failed in this duty by submitting and defending fabricated judicial opinions generated by AI, thus basing their legal arguments on non-existent precedents.18
Additionally, Model Rule 1.1 of the ABA Rules, which mirrors Rule 4–1.1, mandates that lawyers provide competent representation, which includes the legal knowledge, skill, thoroughness, and preparation necessary for the representation. By relying on fictitious cases and failing to verify their authenticity, the respondents demonstrated a lack of thoroughness and preparation, undermining the competence expected in legal practice.19
The court found that this breach of duty not only misled the court but also degraded the quality of their legal service, showing stark neglect of the diligent and informed practice required of all lawyers. The sanctions imposed by the court aimed to address these violations and underscore the imperative of maintaining rigorous standards of professional competence, especially when integrating new technologies like AI into legal workflows.20
Guiding lawyers on the ethical use and selection of AI tools, the opinion provides a framework that aligns with the accuracy, security, and transparency criteria inherent in Rule 4–1.1. The opinion instructs lawyers to evaluate AI tools against these standards, ensuring any technology employed upholds the high expectations of client service and data protection that legal practice demands. This approach fulfills the competence mandate, reflecting a commitment to ethical integrity in using AI.21
2. Confidentiality (Rule 4–1.6)
The emphasis on confidentiality under Rule 4–1.6 of the Florida Rules of Professional Conduct receives significant attention in the opinion, highlighting its critical importance when integrating AI into legal practices. AI technologies, which can process and store large amounts of data, introduce complex challenges to safeguarding sensitive client information. The opinion observes that these challenges require lawyers to take proactive and meticulous steps to ensure the ethical handling of such data aligns with professional standards.22
The opinion directs lawyers to undertake rigorous due diligence to thoroughly examine AI technologies and their providers. This due diligence demands an in-depth review of the data security measures that AI providers implement, including evaluating encryption practices, security protocols for data transfer and access, and compliance with relevant data protection laws. The opinion asserts that this comprehensive evaluation helps ensure that the technologies lawyers adopt do not become vulnerabilities in the protection of client confidentiality.23
Additionally, the opinion highlights the necessity for lawyers to understand the specifics of data storage used by AI technologies. The opinion indicates that lawyers must know where data resides, how providers manage it, and how long they store it. It further asserts that lawyers need to assess potential risks associated with data breaches, including the steps providers take to mitigate these risks and their protocols for responding to breaches. With this knowledge, lawyers can make informed decisions about using AI tools, ensuring the security and confidentiality of client data.24
As outlined in the opinion, the principle of confidentiality also includes the lawyer’s duty to communicate transparently with clients about using AI tools in their cases. The opinion encourages lawyers to engage in open discussions with clients about the benefits and potential privacy implications of employing AI technologies. It suggests that this dialogue should educate clients on how their data will be used, stored, and protected, fostering trust and transparency. The opinion contends that obtaining informed consent becomes crucial in this process, ensuring that clients are fully aware of and agree to the use of AI technologies to handle sensitive information.25
According to the opinion, lawyers must choose AI tools that adhere to the highest data protection standards and implement best practices within their firms to safeguard client confidentiality. The opinion suggests developing internal policies on the use of AI, conducting regular security audits, and training legal staff on ethical considerations related to AI and data privacy. By embedding these practices into their firm's operations, the opinion asserts that lawyers can create a culture of confidentiality that aligns with the ethical obligations outlined in Rule 4–1.6.26
Rule 4–1.6’s focus on confidentiality in the Ethics Opinion underscores the multifaceted challenges and responsibilities lawyers face when integrating AI into legal practice. By conducting thorough due diligence, understanding data storage and security measures, communicating transparently with clients, and implementing best practices, lawyers can navigate these challenges, ensuring that the integration of AI technologies upholds the sacred trust between lawyer and client.27
3. Oversight of Generative AI and Candor to the Tribunal (Rule 4–3.3)
The opinion stresses the importance of candor towards the tribunal as mandated by Rule 4–3.3 of the Florida Rules of Professional Conduct, highlighting the critical need for honesty and truthfulness in court representations. Specifically, the opinion highlights that lawyers utilizing generative AI for tasks such as research, drafting, communication, and client intake are subject to similar risks associated with relying on inexperienced or overconfident nonlawyer assistants. To mitigate these risks, the opinion directs lawyers to establish firm policies ensuring that the deployment of generative AI aligns with their professional obligations.28
Practically, this guidance requires lawyers to meticulously verify the accuracy and sufficiency of all research conducted by generative AI. Neglecting this duty may result in breaches of core professional responsibilities, such as those outlined in Rule 4–3.3 related to candor towards the tribunal, and Rule 4–4.1 concerning truthfulness to others. Failure to adequately oversee the use of generative AI can lead not only to ethical violations but also to potential sanctions imposed by tribunals against both the lawyer and their client.29
Although not specifically mentioned in the opinion, the case of Park v. Kim serves as a pertinent example of how AI technologies can influence a lawyer's duty of candor towards the court. In Park, the United States Court of Appeals for the Second Circuit dealt with the dismissal of a medical malpractice lawsuit due to issues related to discovery non-compliance and professional misconduct involving AI-generated legal citations. The ethical concerns in this case arose from the plaintiff's attorney, who submitted a brief citing a fictitious case generated by ChatGPT. The court found that the attorney's actions violated Federal Rule of Civil Procedure 11 and Rule 3.3 of the Model Rules of Professional Conduct. Rule 11 mandates that legal filings must be well-grounded in fact and law, a standard the attorney violated by failing to verify the AI-generated citation. Rule 3.3, which mirrors Rule 4–3.3, requires attorneys to avoid making false statements of law to the court, which the attorney breached by including the non-existent case in her brief. As a result, the court referred the attorney to the Court's Grievance Panel, emphasizing the need for lawyers to rigorously verify AI-generated information to maintain the integrity of legal proceedings and adhere to professional ethical standards.30 This case highlights the challenges lawyers face when incorporating AI tools into practice and underscores the critical importance of accuracy and maintaining a duty of candor toward the tribunal.
The opinion emphasizes the importance of Rule 4–3.3, which stresses candor towards the tribunal, by underscoring the critical role of lawyers in integrating rapidly advancing AI technologies within the framework of traditional legal values. It highlights that lawyers must actively ensure the accuracy of AI-generated outputs and be prepared to defend their use in court. By doing so, the opinion indicates that lawyers fulfill their ethical duties and enhance the fair and effective administration of justice.31
A. Supervising Non-Attorneys
The opinion clarifies the duties of attorneys under Rule 4–5.3 of the Rules Regulating the Florida Bar, which addresses responsibilities regarding nonlawyer assistants, including artificial intelligence. It specifies that attorneys must ensure that the work delegated to nonlawyers or AI adheres to professional ethical standards. The opinion directs that this involves implementing rigorous oversight measures to maintain ethical compliance, particularly for tasks that require professional judgment.32
Rule 4–5.3 mandates that lawyers actively supervise any nonlawyer assistant, which the opinion extends to AI technologies. The opinion directs that this is to prevent unauthorized practice of law and to uphold the integrity of legal services. The opinion notes that lawyers are expected to regularly review and evaluate AI’s performance to ensure that it operates within the legal and ethical boundaries set by the profession.33
Additionally, the opinion highlights that lawyers must address any actions by nonlawyers or AI that could constitute a violation of the Rules of Professional Conduct if performed by a lawyer. It emphasizes the need for lawyers to monitor the outcomes produced by AI to avoid ethical issues such as the spread of misinformation or breaches of confidentiality.34
ABA Model Rule 5.3 also addresses nonlawyers assisting lawyers with legal work.35 However, to clarify that the rule encompasses all forms of nonlawyer assistance, whether by individuals or entities, the Commission recommended renaming Model Rule 5.3 from “Nonlawyer Assistants” to “Nonlawyer Assistance.” The commentary accompanying the rule notes that nonlawyer services are provided not only by individuals, such as investigators or freelance paralegals, but also by entities like electronic discovery vendors and cloud computing providers. The inclusion of “cloud computing” as an example in the first sentence of the proposed Comment [3] specifically highlights that the rule applies to services offered by entities, including those provided over the Internet, as well as services from individual providers.36
While it is not yet clear if Florida will adopt the same wording change from “Nonlawyer Assistants” to “Nonlawyer Assistance” as the ABA, the opinion seems to interpret the rule in a similar way, acknowledging that it applies to individual and entity-provided nonlawyer services.37
4. Unlicensed Practice of Law
Although the opinion does not directly address the unauthorized practice of law (UPL), the topic remains relevant to understanding the broader implications of AI in legal practice. The opinion provides comprehensive guidelines on how lawyers should ethically integrate generative AI, emphasizing the protection of client confidentiality, the verification of AI-generated content, and adherence to advertising rules. The principles it lays out may offer indirect guidance on managing UPL risks associated with the use of AI technologies.
The opinion underscores the necessity for lawyers to retain ultimate responsibility for their work product and professional judgment, even when employing AI tools.38 Moreover, the need for human oversight is implicitly supported by the directive that lawyers develop policies ensuring their use of generative AI aligns with their professional and ethical obligations, as cited in various related rules and comments such as RPC: 4–1.1; 4–1.1 Comment; and others noted within the opinion.
This perspective resonates with broader legal standards evident in cases such as Lola v. Skadden, Arps, Slate, Meagher & Flom LLP39 and The Florida Bar v. We The People Forms and Service Center of Sarasota, Inc., et al.,40 where the line between mere automation and substantive legal work is distinctly drawn. These cases illustrate the legal community’s ongoing effort to define the role of AI in practice, emphasizing that while AI can perform many tasks, it cannot replace the nuanced, judgment-based work of licensed professionals.
In this context, although the opinion does not expressly discuss the unauthorized practice of law, it establishes a framework potentially helpful in preventing such issues by detailing the responsibilities lawyers have when incorporating AI into their practice. This framework ensures that while lawyers can leverage AI’s efficiency and capabilities, they must do so in a way that aligns with ethical standards and respects the legal definition of professional practice.
V. Florida Supreme Court Opinion No. SC2024–0032
The Florida Supreme Court issued Opinion No. SC2024–0032 in response to the growing influence of AI technologies in the legal field and Florida Bar Ethics Opinion 24–1. Recognizing the transformative impact of generative AI tools, the Court amended several key rules within Chapter 4 of the Rules Regulating the Florida Bar. The amendments became effective on October 28, 2024, and specifically address professional competence, client confidentiality, and supervision of nonlawyer assistants. The Court explained that these changes were necessary to ensure lawyers adapt to the complexities of modern legal practice. As noted in the opinion, “a lawyer should keep abreast of changes in the law and its practice, engage in continuing study and education, including an understanding of the benefits and risks associated with the use of technology, including generative artificial intelligence.”41
1. Key Changes in the Rules and Their Implications
A. Competence (Rule 4–1.1)
What Changed: The Court amended Rule 4–1.1 to explicitly include a lawyer’s duty to understand and address the risks and benefits associated with the use of technology, particularly generative AI. The accompanying comment emphasizes the importance of continuing education in this area: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice… including an understanding of the benefits and risks associated with the use of technology, including generative artificial intelligence.”42
This amendment broadens the scope of competence to encompass technological proficiency. The Court highlighted that lawyers must not only understand the law but also the tools they use to apply it. For instance, in Mata v. Avianca, Inc., an attorney’s unverified reliance on AI-generated legal citations led to sanctions,43 underscoring the risks of using AI tools without proper oversight.
To comply with these requirements, lawyers must:
- Develop a foundational understanding of how AI tools process and analyze data.
- Recognize and address potential biases in AI-generated insights.
- Engage in continuing legal education (CLE) programs focused on AI technologies.
The Florida Supreme Court emphasized that ongoing education is essential for ensuring competent representation in an increasingly digital legal environment.44
B. Confidentiality (Rule 4–1.6)
What Changed: Rule 4–1.6 was amended to address the confidentiality risks posed by AI technologies. The updated comment states: “A lawyer should be aware that generative artificial intelligence may create risks to the lawyer’s duty of confidentiality.”45 Lawyers are now required to evaluate the security measures of AI providers, ensure compliance with data protection laws, and obtain informed consent from clients before using AI tools.
Generative AI tools often rely on cloud-based systems, raising concerns about data storage and potential breaches. The Court acknowledged these risks, emphasizing that lawyers must take reasonable precautions to safeguard client information. This includes:
- Conducting thorough due diligence on AI providers, including reviewing their encryption and security protocols.46
- Regularly auditing AI tools to ensure compliance with confidentiality standards.
- Communicating transparently with clients about the risks and safeguards associated with AI usage.
For example, the opinion notes that lawyers must assess whether AI tools store data in jurisdictions with weaker privacy protections.47 Failure to address these issues could lead to breaches of confidentiality, undermining client trust and exposing lawyers to ethical violations.
C. Candor to the Tribunal (Rule 4–3.3)
What Changed: The Court clarified that the duty of candor extends to AI-generated materials. Lawyers must verify the accuracy and reliability of all submissions derived from AI tools to ensure they are supported by credible sources. The opinion underscores this requirement by referencing prior cases where unverified AI outputs compromised the integrity of legal proceedings.48
The amendments to Rule 4–3.3 address the risk of misinformation in court filings. The case of Park v. Kim illustrates the consequences of failing to verify AI-generated citations, where reliance on fictitious cases led to significant sanctions.49
To meet this standard, lawyers should:
- Establish firm-wide protocols for reviewing AI-generated research.
- Validate the accuracy of AI-derived legal arguments using traditional research methods.
- Maintain detailed records of how AI tools are used to ensure accountability.50
D. Supervision of Nonlawyer Assistants (Rule 4–5.3)
What Changed: The Court explicitly extended Rule 4–5.3 to include AI tools within the definition of nonlawyer assistants. The updated comment states: “A lawyer should also consider safeguards when assistants use technologies such as generative artificial intelligence.”51 Lawyers are now required to supervise AI tools as rigorously as they would human assistants.
The amendments reflect the Court’s recognition that AI tools, while valuable, cannot replace human judgment. Lawyers retain ultimate accountability for the work product generated by AI. To fulfill their supervisory responsibilities, lawyers must:
- Monitor the accuracy and appropriateness of AI-generated outputs.
- Provide clear guidance on the ethical use of AI tools.
- Develop firm-wide policies for overseeing AI systems.52
These measures ensure that AI complements, rather than undermines, the ethical standards of the legal profession.
VI. Practical Implications for Legal Practice
The amendments to the Florida Rules of Professional Conduct represent a proactive approach to addressing the challenges posed by AI in the legal field. By emphasizing competence, confidentiality, candor, and supervision, these changes provide a framework for integrating AI responsibly into legal practice.
Recommendations for Lawyers:
- Education and Training:
  - Participate in CLE programs focused on AI technologies.
  - Stay informed about advancements in AI and their implications for legal practice.
- Data Security:
  - Conduct regular security audits of AI providers.
  - Implement robust encryption and data protection measures.
- Oversight and Accountability:
  - Develop standardized protocols for reviewing AI-generated materials.
  - Assign dedicated personnel to oversee the ethical use of AI tools within the firm.
VII. Conclusion
The Florida Supreme Court’s Opinion No. SC2024–0032 and the Florida Bar Ethics Opinion 24–1 establish a clear roadmap for navigating the ethical complexities of AI integration in legal practice. By updating the Rules of Professional Conduct, the Court has ensured that lawyers can leverage the benefits of AI while maintaining their ethical responsibilities. These changes underscore the importance of vigilance, education, and oversight in preserving the integrity of the legal profession in the face of rapid technological change.