In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512. The opinion addresses attorney use of generative artificial intelligence tools under the Model Rules of Professional Conduct. On a surface reading, it restates familiar obligations under competence, confidentiality, communication, and supervision. Read carefully, it establishes a framework — not a checklist — that most firms have not yet worked through against the specific legal AI tools they have deployed.
Two years in, the more productive conversation is not whether firms are aware of Opinion 512. They are. The question is whether the due diligence that preceded their current AI deployments was designed to answer the questions the opinion actually asks. In most cases it was not, because most vendor review processes were built for a different category of technology.
What the opinion actually requires
Opinion 512's confidentiality analysis proceeds from Rule 1.6(c), which requires lawyers to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. The opinion is explicit that this obligation extends to AI tools and that a reasonable-efforts analysis requires the attorney to understand, specifically:
- How the AI tool uses, stores, and protects client information during processing
- Whether the tool's handling of that information is consistent with the firm's confidentiality obligations
- Whether circumstances require obtaining informed client consent before inputting matter-related information into the tool
- How the firm's supervision duties under Rules 5.1 and 5.3 extend to attorney and non-attorney use of AI
This is a framework, not a binary test. It produces different answers for different tools, different engagements, and different risk profiles. The opinion does not declare any class of tool compliant or non-compliant. It assigns the analysis to the lawyer.
Where most due-diligence processes fall short
The majority of firms deploying legal AI in 2024 and 2025 ran those deployments through a vendor review process that had been refined across a decade of cloud software adoption. Those processes are well-calibrated to questions like data center location, encryption standards, breach notification timelines, and third-party certifications. They produce reliable answers to a set of questions that matter — but not to the set of questions Opinion 512 is asking.
A SOC 2 Type II report confirms that a vendor operates controls appropriate to the assurance criteria it selected. It does not, and was not designed to, answer whether a specific use of a generative AI tool satisfies Rule 1.6(c).
The distinction matters because Opinion 512's analysis concerns the handling of client information in motion — during processing — not only the handling of information at rest. When a query containing matter-relevant details is submitted to a cloud-hosted AI platform, that information transits infrastructure operated by parties outside the firm. The reasonable-efforts analysis must account for that transit, not only for what happens to the information after it arrives.
A firm can hold a vendor's SOC 2 report, a signed data processing agreement with zero-retention terms, and a current certification against the relevant ISO standards — and still have an open question under Rule 1.6(c) if no one has specifically analyzed what happens to privileged information during the processing chain for the particular workflows the firm has deployed. The analysis is specific to the tool, the use case, and the matter.
State bars have moved faster than the ABA
Opinion 512 sets the national floor. The ABA's Model Rules are not binding law, but they supply the baseline most jurisdictions work from, and several state bars have issued guidance of their own — some of it predating the opinion — that is more specific and, in some respects, more demanding.
The Florida Bar's Opinion 24-1 addressed generative AI directly and flagged that reviewing vendor security documentation does not, on its own, establish compliance. California's State Bar Practical Guidance on generative AI, issued in November 2023, goes further on vendor data practices and explicitly addresses whether attorney use of AI may require client consent. New York has issued layered guidance through the NYCBA and the NYSBA, with particular attention to supervision duties under Rules 5.1 and 5.3 when attorneys use AI in substantive work.
A firm operating in multiple jurisdictions does not satisfy its obligations by mapping its practices to federal guidance alone. The state variance is material, and it is widening.
The agentic shift changes the reasonable-efforts calculation
Opinion 512 was drafted against a fact pattern in which attorneys use AI tools to assist with discrete tasks — drafting, summarizing, research — with a human attorney reviewing each output. That fact pattern has changed. By 2026, the dominant product direction across legal AI is autonomous agent capability: multi-step workflows planned and executed by AI, with human review at checkpoints rather than on every step.
The reasonable-efforts analysis becomes more demanding, not less, when a human is not reviewing every output. The attorney's supervisory obligations under Rule 5.3 do not diminish because the tool is more capable. If anything, the opposite is true: greater autonomy requires more rigorous architectural safeguards, clearer documentation of how the system handles privileged information, and more specific attention to what the tool does without direct human oversight.
This is not an argument against autonomous AI in legal work. It is an observation that the governance framework has to keep pace with the capability, and that most firms' governance frameworks were calibrated for a simpler fact pattern than the one they are now operating under.
The reasonable-efforts analysis is easier to complete, document, and defend when the firm has direct visibility into how privileged information moves through the system. That is true regardless of vendor model — enterprise SaaS, dedicated tenancy, or custom-built infrastructure. The firms that have done the cleanest compliance work are the firms that can answer the opinion's architectural questions without a call to their vendor.
The question worth asking internally
For a firm that has deployed one or more legal AI tools over the past two years, a useful internal exercise is straightforward:
For each tool currently in use, can the firm produce — without contacting the vendor — a written analysis that (1) identifies the specific workflows in which the tool is used, (2) describes how client information moves through the tool during those workflows, (3) confirms that the handling is consistent with Rule 1.6(c) under the current state of the vendor's contractual terms, and (4) documents the supervisory mechanisms the firm has in place to satisfy Rules 5.1 and 5.3?
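For firms that want to track this exercise in structured form rather than in memos, the four elements above map naturally onto a simple per-tool record. The sketch below is purely illustrative: every field and method name is an assumption of this article, not drawn from Opinion 512 or any bar guidance, and a real implementation would be shaped by the firm's own records systems.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolComplianceRecord:
    """Hypothetical per-tool audit record mirroring the four questions above.
    Field names are illustrative, not drawn from any bar guidance."""
    tool_name: str
    workflows: list[str]               # (1) specific workflows in which the tool is used
    data_flow_description: str         # (2) how client information moves through the tool
    rule_1_6c_consistent: bool         # (3) handling consistent with Rule 1.6(c) under current vendor terms
    supervision_mechanisms: list[str]  # (4) mechanisms in place to satisfy Rules 5.1 and 5.3
    analysis_date: date = field(default_factory=date.today)  # the analysis is only good as of this date

    def is_complete(self) -> bool:
        """A record is complete only when every element is actually documented."""
        return bool(self.workflows
                    and self.data_flow_description
                    and self.supervision_mechanisms)
```

The point of the structure is the same as the point of the prose: a record that cannot be filled in without a call to the vendor is itself the finding.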
A firm that can produce this documentation has likely satisfied the reasonable-efforts standard for that tool, at least as of the date of the analysis. A firm that cannot has an open compliance question. That is not an indictment. It is a starting point for the work that Opinion 512 assigned to the profession two years ago.
The firms that will be best positioned to answer client inquiries, regulatory requests, and carrier questionnaires in 2027 are the firms completing this analysis carefully now. The question is not whether to do the work. The question is what the work produces when it is done.