ABA Formal Opinion 512 is the national floor on attorney use of generative AI. It is not the ceiling, and for any firm operating in California, New York, or Florida, it is not the controlling guidance either. The state bars in all three jurisdictions have now issued their own detailed frameworks, and those frameworks have moved faster, gone further, and in several respects diverged from the national baseline in ways that matter.
For a firm with offices in multiple states, the compliance question is not what Opinion 512 requires. It is what the specific state bar requires for each attorney, each matter, each client. The answers are not identical. The gaps between them are where ethics exposure lives.
California — the structural framework that set the template
On November 16, 2023, the State Bar of California approved the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, developed by its Committee on Professional Responsibility and Conduct (COPRAC). California was the first major jurisdiction to publish a structured framework, and the document has since served as a reference point for other state bars, including New York.
The California guidance organizes obligations around confidentiality, competence and diligence, communication with clients, compliance with the law, candor to the tribunal, prohibition on discrimination, professional responsibilities, supervision, and fees and billing. On confidentiality, the language is direct: a lawyer must not input any confidential client information into a generative AI solution unless the lawyer reasonably understands how the tool will use that information, including whether inputs are retained, shared, or used to train the model. The guidance specifically instructs lawyers to anonymize inputs where practical.
California also goes further than the ABA guidance on fees. A lawyer may charge for time actually spent refining AI inputs or reviewing AI outputs, but may not charge hourly fees for the time saved by using the technology. If AI assistance turns a six-hour drafting task into two hours of prompting and review, the lawyer may bill the two hours worked, not the six the task would otherwise have taken. That is a specific, auditable constraint that does not appear with the same precision in Opinion 512.
The California guidance states that the duty of competence requires more than the mere detection and elimination of false AI-generated results. The lawyer's professional judgment is non-delegable.
The framework remains non-binding in the sense that it is guidance rather than a rule, but California courts have begun to treat violations seriously. In September 2025, the California Court of Appeal in Noland v. Land of the Free, L.P. imposed a $10,000 sanction on an attorney for submitting appellate briefs containing fabricated AI-generated citations, and required that the sanction be reported to the client and the State Bar. The guidance provides the framework. The court provides the consequences.
Florida — the sharpest specific requirement
The Florida Bar's Ethics Opinion 24-1, issued January 19, 2024, is the document most worth reading line by line. It is brief, specific, and states a concrete requirement that does not appear in the California or New York materials with the same clarity.
Opinion 24-1 requires that a lawyer using a generative AI tool sufficiently understand the technology, including whether the program is self-learning. Where the tool's use involves the potential disclosure of confidential client information to a third-party provider, Opinion 24-1 directs the lawyer to obtain the client's informed consent before proceeding. The opinion explicitly contemplates scenarios where the vendor's terms of service are ambiguous or where the tool's data handling cannot be fully verified, and in those scenarios the default position is consent.
This is a sharper requirement than the analogous language in Opinion 512, which frames informed consent as something the lawyer should consider in certain circumstances. Florida's Opinion 24-1 is more prescriptive: where the analysis produces uncertainty, the lawyer's obligation is to seek consent. The opinion also explicitly addresses chatbots and AI-driven client intake, requiring clear identification of non-human systems and limits on what an automated tool can do without attorney review.
Florida's framework also treats generative AI the way the Florida Rules treat nonlawyer assistance. Rule 4-5.3's supervision duties apply, and the opinion spells out that a lawyer's duty to review AI-generated work product is equivalent to the duty to review the work of a paralegal — which is to say, personal, substantive, and non-delegable.
New York — layered guidance with the clearest disclosure expectation
New York's framework is distributed across two sources, and both matter. In April 2024, the New York State Bar Association's Task Force on Artificial Intelligence published an 85-page Report and Recommendations approved by the House of Delegates. In July 2024, the New York City Bar Association's Professional Ethics Committee issued Formal Opinion 2024-5 on generative AI in the practice of law.
The NYSBA Report is the broader document. It covers competence, confidentiality, supervision, fees, bias, solicitation, and candor to tribunals, and it recommends expanding Comment 8 to Rule 1.1 to explicitly include AI competence. Where it goes further than the ABA guidance is on disclosure: the Report advises lawyers to disclose AI use to clients, including through engagement letter language, and treats this as a meaningful element of the lawyer's communication obligations rather than a discretionary practice.
NYCBA Formal Opinion 2024-5 follows the California format closely but calibrates the analysis to New York's Rules of Professional Conduct. It addresses confidentiality, conflicts, competence, advertising, supervision, client consultation, candor to tribunals, and discrimination. On confidentiality, the opinion requires that lawyers understand the specific AI tool in use and its handling of inputs, and that client consent be obtained where the tool's use involves anything more than the kind of incidental AI now embedded in ordinary software like Word or Westlaw.
The New York materials are the most explicit in treating disclosure as a routine expectation, not only where confidentiality risk is unclear but as a general practice in client communication. A New York firm that follows the NYSBA Report's framework yet does not address AI use in its engagement letters is operating below the published standard.
Where the variance produces real exposure
For a firm operating in only one of these jurisdictions, the compliance path is relatively clean: read the applicable framework, conduct the tool-specific analysis, document the supervision and confidentiality protocols, and reassess periodically. The analysis is demanding but self-contained.
For a firm operating in multiple states — and particularly for firms with offices in California, New York, and Florida, or attorneys licensed across those jurisdictions — the compliance path is materially more complex. The same AI tool used by the same attorney on the same matter produces different disclosure expectations, different supervision framings, and different informed-consent triggers depending on which state's framework governs. A single generative AI deployment across a multi-jurisdiction firm may satisfy Opinion 512 while falling short of Florida's specific consent expectation for certain tool categories, and while not meeting the NYSBA's disclosure practice for engagement letters.
This is not a hypothetical concern. Texas Opinion 705, issued in February 2025, added another set of specific requirements on human oversight of AI-generated work product. North Carolina's 2024 Formal Ethics Opinion 1 and Pennsylvania's Joint Formal Opinion 2024-200 have added further variations. Roughly half of U.S. state bars have now issued some form of AI guidance, and the direction is clear: the frameworks are diverging, not converging.
What a sophisticated compliance posture looks like in 2026
For a firm that takes this seriously, the work is not finished when a single AI policy has been adopted and circulated. The posture that survives scrutiny has four elements.
First, tool-specific analysis. The firm identifies each AI tool in active use, documents how client information moves through that tool, and maps the handling against the controlling ethics framework in each jurisdiction where the tool is deployed. This is not vendor-questionnaire work. It is the kind of analysis a partner can sign.
Second, jurisdiction-specific calibration. Where the firm operates in more than one state, the compliance documentation reflects the strictest applicable standard, not the national floor and not the laxest state framework.
Third, client communication practices. Engagement letters, matter-opening procedures, and consent protocols are updated to address AI use at the level of specificity that the applicable jurisdiction's framework requires. For New York matters, that is closer to a default disclosure practice. For Florida matters involving tools where confidentiality handling is uncertain, that is specific consent.
Fourth, ongoing supervision protocols. Each jurisdiction's analogues to Model Rules 5.1 and 5.3 apply to AI use the same way they apply to nonlawyer assistance. The supervision must be documented and specific, not implicit in general firm policy.
The question worth sitting with
If a client in California asked the firm to walk through specifically how its current AI use satisfies the California Practical Guidance for the specific matter the client is paying for, could the firm answer in the next meeting? The same question for a Florida client referencing Opinion 24-1. The same question for a New York client referencing the NYSBA Report and NYCBA Formal Opinion 2024-5.
A firm that can answer all three, specifically and without caveat, is ahead of the 2026 compliance curve. A firm that can answer one well but would have to reconstruct the others is where most of the profession currently sits. The state bars have done the work of defining the standards. The firms that address the variance first are the ones whose compliance posture will hold up under the scrutiny that 2027 is going to bring.