ABA Formal Opinion 512 — The Starting Point for Every Jurisdiction
The short version: Attorneys must understand AI before using it, protect client confidentiality when using it, and bill honestly for the time it saves.
In July 2024, the American Bar Association issued its first formal guidance on generative AI in legal practice. Formal Opinion 512 does not prohibit AI use — it sets the ethical floor. Four Model Rules are squarely implicated:
- Rule 1.1 — Competence. Attorneys must understand the benefits and risks of AI tools they use, including how the tool handles client data, whether it hallucinates, and what its limitations are. Choosing a tool without understanding it is not competent practice.
- Rule 1.6 — Confidentiality. Any client information processed by an AI tool must be protected under the same standard as any other confidential communication. This requires understanding the platform's privacy policy, data retention practices, and whether inputs are used for model training.
- Rule 1.4 — Communication. Attorneys may have an obligation to disclose AI use to clients, particularly where it affects how their matter is being handled or what costs are being incurred.
- Rule 1.5 — Fees. Efficiencies gained through AI must be passed on to clients on hourly matters. Billing the full pre-AI time for work that actually took a fraction of it is an ethics violation.
Opinion 512 is the national baseline. Whether or not your state bar has issued its own guidance, these four obligations apply. Nearly every state bar AI opinion issued since has cited it, treating it as settled foundation rather than optional guidance.
EIQ is purpose-built for Rule 1.6 compliance. Client documents are processed through private AWS infrastructure, never used for model training, and encrypted at rest with AES-256-GCM. The tool is designed for attorney-directed use on specific matters — not general-purpose AI where confidentiality cannot be guaranteed.
California Practical Guidance — Understanding LLMs Is Not Optional
The short version: California attorneys must understand how large language models actually work before using them on client matters — including the specific risks of the tool they choose.
The California State Bar's Practical Guidance, approved by the Board of Trustees in November 2023 and developed further through 2024, goes further than most states on the competence requirement. It is not enough to know that AI exists or that it can help — attorneys must understand the model they are using, including its hallucination risks, its data handling practices, and whether its outputs can be verified.
- Understand the tool specifically. Generic AI literacy is not sufficient. An attorney using a legal AI tool must understand that specific tool's data policy, model architecture at a basic level, and known limitations.
- Evaluate confidentiality before using. Before inputting any client data, attorneys must evaluate whether the platform's privacy policy and terms of service are compatible with California's confidentiality obligations under Business and Professions Code § 6068(e) and Rule 1.6 of the California Rules of Professional Conduct.
- Supervise AI output. AI-generated work product must be reviewed by the attorney before use. Delegation without supervision is not ethically permissible.
California's guidance is the most demanding on the competence side. Attorneys cannot simply adopt a tool their firm has approved — they bear individual responsibility for understanding the tool they are using and evaluating its compatibility with client confidentiality obligations.
EIQ is designed for attorney evaluation. The privacy policy and data handling practices are specific and verifiable: private cloud routing, no training on client data, encryption at rest. The tool produces cited output linked directly to source documents — every answer is verifiable against the record, not a hallucinated summary.
Florida Opinion 24-1 — The First State Bar Formal AI Ethics Opinion
The short version: Florida attorneys can use AI, but confidentiality, competence, billing, and advertising obligations all apply — and billing clients for hours you did not actually work because AI did the work is a violation.
Florida's Opinion 24-1, issued January 2024, was the first comprehensive state bar ethics opinion on AI in legal practice in the country. Its four key areas:
- Confidentiality first. Lawyers using AI must educate themselves on the tool's policies for data handling, sharing, and self-learning before use. Inputting confidential client information into a platform that trains on user data or shares with third parties is a potential Rule 4-1.6 violation.
- Competence requires verification. AI output must be independently verified before use. Submitting AI-generated work product without review exposes the attorney to sanctions, malpractice liability, and bar discipline.
- Billing must reflect reality. If AI cuts a four-hour research task to forty-five minutes, the client on hourly billing is entitled to the benefit of that efficiency. Billing the full four hours is unethical. Opinion 24-1 makes clear that AI savings belong to the client, not the firm.
- Advertising rules apply. AI-generated marketing materials, client communications, and website content are subject to the same advertising rules as attorney-drafted content. Attorneys are responsible for AI-generated content they publish.
Florida's opinion is the clearest on the billing issue, which is the one most attorneys underestimate. The efficiency argument for AI cuts both ways — it is a reason to adopt AI and a reason to reduce hourly billing simultaneously. On confidentiality, Florida's standard tracks the ABA: the platform's data practices must be compatible with Rule 4-1.6 before any client data goes in.
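The billing rule above reduces to simple arithmetic. A minimal sketch, using a hypothetical $400 hourly rate and the four-hour research example from Opinion 24-1 (the numbers and function names are illustrative, not from the opinion):

```python
# Illustrative only: why AI time savings belong to the client on hourly matters.

HOURLY_RATE = 400.00  # hypothetical rate in USD


def client_bill(hours_actually_worked: float, rate: float = HOURLY_RATE) -> float:
    """The compliant bill reflects hours actually spent -- not what the
    task 'would have' taken without AI."""
    return round(hours_actually_worked * rate, 2)


# A research task that historically took 4 hours now takes 45 minutes with AI.
pre_ai_bill = client_bill(4.0)    # billing this full amount is the violation
with_ai_bill = client_bill(0.75)  # the compliant bill
savings_to_client = pre_ai_bill - with_ai_bill  # efficiency owed to the client
```

Under Florida's reading, `savings_to_client` is not margin the firm may quietly keep on an hourly matter; it is the client's benefit from the efficiency.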
EIQ's pricing model is a flat monthly subscription, not hourly billing — eliminating the billing ethics tension entirely. On confidentiality, the platform meets Florida's standard: documented data practices, no training on client data, verifiable privacy commitments.
Texas Opinion 705 — Vet Your Vendor Before You Use It
The short version: Texas attorneys must thoroughly vet AI tools for confidentiality safeguards before using them on client matters — and must train staff to comply with the same obligations.
Texas Opinion 705, issued February 2025, addresses four core obligations. The most operationally significant for firms adopting AI is the vendor vetting requirement.
- Understand how the tool functions. Rule 1.01 of the Texas Disciplinary Rules requires competence in the tools attorneys use. For AI, this means understanding how the model generates outputs, what its known failure modes are, and how to identify errors.
- Vet the vendor specifically for confidentiality. Opinion 705 requires attorneys to thoroughly evaluate AI tools for confidentiality safeguards before use — not to assume compliance. The evaluation should include: whether data is shared with third parties, whether inputs are used for training, what the data retention policy is, and whether the platform provides contractual confidentiality commitments.
- Train staff. The obligation extends to non-attorney staff using AI at the attorney's direction. Supervisory attorneys are responsible for ensuring subordinates understand and comply with confidentiality rules when using AI tools.
- Verify all output. No AI output may be submitted to a court or provided to a client without independent attorney verification. The hallucination risk is explicitly noted as a professional responsibility concern.
Texas Opinion 705 is the most operationally demanding on the vendor side. Attorneys cannot rely on a vendor's marketing representations — they must review the actual data policy and evaluate it against their confidentiality obligations. Firms should document their vendor evaluation process as part of their AI policy.
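One way to document that evaluation process is as a structured record the firm archives with its AI policy. A minimal sketch: the field names, the `passes` logic, and the example vendor are illustrative assumptions mapped onto Opinion 705's four vetting criteria, not a standard form.

```python
# Illustrative vendor-vetting record modeled on Texas Opinion 705's criteria.
from dataclasses import dataclass, asdict
import json


@dataclass
class VendorEvaluation:
    vendor: str
    shares_data_with_third_parties: bool
    trains_on_client_inputs: bool
    retention_policy_reviewed: bool
    contractual_confidentiality_commitment: bool

    def passes(self) -> bool:
        """Every criterion must be satisfied before client data goes in."""
        return (not self.shares_data_with_third_parties
                and not self.trains_on_client_inputs
                and self.retention_policy_reviewed
                and self.contractual_confidentiality_commitment)


# Example record a firm might keep ("ExampleAI" is a hypothetical vendor):
evaluation = VendorEvaluation(
    vendor="ExampleAI",
    shares_data_with_third_parties=False,
    trains_on_client_inputs=False,
    retention_policy_reviewed=True,
    contractual_confidentiality_commitment=True,
)
record = json.dumps(asdict(evaluation), indent=2)  # archive with the AI policy
```

A dated record like this is exactly the kind of documentation that shows the firm reviewed the actual data policy rather than relying on marketing claims.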
EIQ is designed to survive vendor vetting. The privacy commitments are specific and contractually grounded: AWS Bedrock routing with documented no-training guarantees, AES-256-GCM encryption, zero telemetry collection. The data practices are stated precisely so attorneys can evaluate them — not summarized in marketing language.
NYC Bar Formal Opinion 2025-6 — AI Notetakers and Client Calls
The short version: Before using any AI tool to record, transcribe, or summarize client conversations, attorneys must obtain client consent, evaluate privilege implications, and understand the tool's data practices.
NYC Bar Opinion 2025-6, issued December 2025, addresses a specific and rapidly expanding practice: using AI notetakers and transcription tools on calls with clients. It is the most recent major bar opinion and directly relevant to any attorney using tools like Otter.ai, Zoom AI, or similar platforms on client calls.
- Client consent is required. Before recording any client call with an AI tool, the attorney must obtain the client's informed consent — not just notify them, but ensure they understand the implications.
- Privilege analysis is mandatory. Attorneys must consider whether recording the conversation creates discoverable material. A preserved AI transcript of a client's candid remarks may be usable by opposing counsel. The attorney must evaluate whether recording serves the client's interest in the specific matter.
- Understand the data practices of the tool. Attorneys must evaluate where recordings are stored, how long they are retained, whether the data is used for training, whether there is a right to deletion, and how the data might be retrieved through discovery.
- Accuracy verification is required. AI transcripts and summaries must be checked for accuracy before being relied upon or preserved as work product.
Opinion 2025-6 is a direct response to the widespread adoption of AI notetaking tools without adequate attorney oversight. Many firms adopted these tools during the remote-work period without evaluating their ethical implications. The opinion makes clear that the confidentiality analysis applies to every tool that touches client communications — not just document review platforms.
EIQ is a document analysis platform, not a call recording tool — so Opinion 2025-6 does not directly govern its use. However, the underlying principle is the same: any AI tool that processes client information must meet the confidentiality standard. EIQ's architecture is built to meet that standard across all the guidance above.
The Through-Line Across All Five Opinions
Every major ethics opinion issued in the last two years points to the same framework for compliant AI use:
- The tool must process client data in a private environment.
- The platform must contractually commit to not training on user data.
- The attorney must direct the use — not the client independently.
- The output must be verified before reliance.
Consumer AI platforms fail the first two conditions by design. Enterprise tools built for legal use, with documented privacy commitments and private cloud infrastructure, are what the guidance is pointing toward.
EvidentIQ is built to meet this standard.
Private AWS infrastructure. Zero AI training on your documents. AES-256-GCM encryption. Cited output linked directly to source documents. Built for attorney-directed use on specific matters.