AI has officially moved from “interesting new tech” to an “indispensable legal assistant,” especially for plaintiff firms navigating complex medical records, rushed deadlines, and high case volumes. It is fast, efficient, and, unlike most humans, doesn’t panic when asked to summarize 3,000 pages of radiology notes. That speed is exactly why understanding legal AI ethics has become essential.
But with great efficiency comes great responsibility. And while AI won’t ask for a raise or take a vacation, it does introduce a range of ethical and malpractice risks that no attorney can afford to overlook.
The legal profession is discovering that AI is less like magic and more like having a very bright but occasionally overconfident intern: capable of incredible work but in need of consistent supervision. Understanding the ethical expectations surrounding AI isn’t optional anymore; it’s part of being a competent lawyer in today’s landscape.
Below is a practical, plain-English guide to the most important issues your firm needs to understand.
1. The New Reality: Technological Competence Means Knowing Your AI Tools

More than forty states now impose an explicit duty of technological competence, and whether or not bar regulators say so directly, that duty extends to AI. This doesn’t require you to understand the mathematical architecture of a language model or decode machine-learning jargon. But you do need to know the fundamentals: what your AI tool does, what its limits are, and when it requires human intervention.
Think of it this way: if you wouldn’t let a paralegal draft a demand letter without first training and supervising them, you certainly shouldn’t let a machine generate one without understanding how it works. AI can analyze, summarize, and draft with astonishing speed, but it still relies heavily on the lawyer’s judgment, review, and guidance. The better you understand the tool, the safer and more effective it becomes.
2. Confidentiality Is Still Everything, Even When a Machine Is Doing the Work
One of the biggest ethical risks with AI is the simplest: many publicly available AI tools collect, store, or use your data in ways that would make any malpractice carrier faint. The duty of confidentiality under Rule 1.6 doesn’t change just because the work is being done by a computer. If anything, it becomes more critical.
Before using an AI tool, lawyers must consider whether client information is being stored, transmitted, or reused. Some tools use prompts to train future models; some send inputs to remote servers with unclear security protocols; and some don’t explain what happens to your data at all.
For plaintiff firms handling medical records and sensitive client histories, this is a flashing-red warning sign. If you wouldn’t hand a stranger a box of HIPAA-protected documents, you shouldn’t upload them into an AI platform whose privacy standards are vague or nonexistent. The safest systems are those that are built specifically for legal practice, that clearly state they do not train on your data, and that follow secure storage and deletion protocols.
3. AI Must Be Supervised, Just Like Any Nonlawyer Assistant

Ethics committees increasingly agree that AI functions like a nonlawyer assistant. That means lawyers have a duty under Rule 5.3 to supervise the work AI performs, just as they would supervise a paralegal or researcher. The lawyer remains responsible for the final work product, no matter how much assistance technology provides.
This expectation is not a hypothetical one. Courts have already sanctioned attorneys for filing briefs containing fabricated citations that AI tools created. The problem wasn’t the use of AI; it was the failure to check the output. AI can be extraordinarily helpful, but it cannot replace human review. The final responsibility for accuracy, clarity, and truthfulness always rests with the lawyer.
Successful firms treat AI as a powerful assistant, not an autonomous decision-maker. They maintain consistent review practices, ensure staff understand when AI is appropriate, and document the lawyer’s oversight of the work.
4. Emerging Disclosure Rules: What Some Courts Now Expect
While disclosure of AI use isn’t universally required, a growing number of judges and jurisdictions are beginning to address it. Some courts now require attorneys to certify whether AI assisted in drafting a filing; others require lawyers to explicitly state that all citations and authorities have been checked manually. A handful prohibit the use of certain generative AI tools unless the lawyer can guarantee accuracy.
These rules vary dramatically from one jurisdiction to another, but the trend is unmistakable: transparency is becoming more important. Plaintiff lawyers, especially those filing in multiple states, should stay alert to local rules and standing orders that mention AI. Even in jurisdictions without formal requirements, documenting your internal review process is simply smart practice.
5. Malpractice Risks: Old Duties, New Pathways to Violate Them
AI does not create entirely new malpractice risks, but it does create new ways to fall into existing traps. For plaintiff attorneys, the biggest areas of exposure tend to involve medical summaries, factual analysis, and legal citations.
If an AI system misreads a medical chart, misses a timeline issue, or misstates a diagnosis, and the attorney relies blindly on that information, the attorney, not the AI, bears the responsibility. The same holds true for legal research. An AI-generated brief may sound persuasive, but if the citations are wrong or nonexistent, the consequences are very real.
Another emerging risk involves deadlines. Some firms rely heavily on AI-powered automation for calendaring, drafting, or workflow management. If those systems fail or produce delays, the firm may face exposure for missed statutes or filing deadlines.
Finally, security issues remain one of the most significant malpractice risks. An insecure AI vendor that mishandles PHI or client information can expose the firm to lawsuits, regulatory actions, and ethical violations.
6. Building a Safe and Sustainable AI Practice in Your Firm

The good news is that mitigating AI-related ethical risks doesn’t require overhauling your entire practice. It simply requires clear processes. The most successful firms take time early on to develop an internal AI-use policy that outlines which tools are approved, how they should be used, when human review is required, and what information should never be uploaded into non-approved systems.
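To make that concrete, here is a minimal, hypothetical sketch of what an AI-use policy can look like when it is expressed as data rather than only as a memo, so the rules can actually be checked in software. The tool names, data categories, and the check_use helper are all illustrative assumptions, not real products or a complete policy.

```python
# A hypothetical AI-use policy expressed as data, so compliance can be
# checked programmatically. Every name here is illustrative only.

APPROVED_TOOLS = {
    # tool name -> whether attorney review is required before work product ships
    "firm-approved-drafting-assistant": {"human_review_required": True},
    "firm-approved-records-summarizer": {"human_review_required": True},
}

# Categories of information that must never go into non-approved systems.
PROHIBITED_UPLOADS = {"PHI", "SSN", "client_financials", "privileged_communications"}

def check_use(tool: str, data_categories: set[str]) -> list[str]:
    """Return a list of policy violations for a proposed AI use (empty = OK)."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"'{tool}' is not on the approved-tools list")
        # Sensitive categories are only prohibited in non-approved systems.
        violations += [f"prohibited category '{c}' uploaded to a non-approved system"
                       for c in data_categories & PROHIBITED_UPLOADS]
    return violations

# Example: uploading PHI to an unapproved public chatbot flags two violations.
print(check_use("random-public-chatbot", {"PHI", "case_notes"}))
```

Even a simple structure like this forces the firm to answer the policy questions explicitly: which tools are approved, what review they require, and what data is off-limits elsewhere.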
Training is equally important. Staff should understand not only how to use the tool, but also how to identify potential errors or red flags in AI output. The goal is not to eliminate mistakes entirely (no system, human or technological, can do that) but to catch them before they impact a case.
Documentation is another underrated safeguard. Recording when AI was used, what it produced, and who reviewed it can protect your firm if a question arises years later. Think of it as the digital version of keeping good notes in a case file.
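As an illustration, the sketch below shows one way that digital case-file note might work in practice: a single append-only log line per AI-assisted task. The file name, field names, and the log_ai_use helper are hypothetical; a real firm would fold this into its matter-management system.

```python
# A minimal sketch of AI-use documentation: one JSON line per use, recording
# when AI was used, what it produced, and who reviewed it.
import json
from datetime import datetime, timezone

def log_ai_use(matter_id: str, tool: str, task: str,
               output_summary: str, reviewed_by: str,
               log_path: str = "ai_use_log.jsonl") -> None:
    """Append one auditable record of an AI-assisted task to a log file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "tool": tool,
        "task": task,
        "output_summary": output_summary,
        "reviewed_by": reviewed_by,  # the lawyer who checked the output
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a demand-letter draft, reviewed by the supervising attorney.
log_ai_use("2024-PI-0137", "firm-approved-drafting-assistant",
           "demand letter first draft", "4-page draft, citations verified",
           reviewed_by="J. Smith")
```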
Finally, firms should periodically audit their AI usage. An audit can reveal patterns, accuracy issues, or areas where human review needs strengthening. It also demonstrates a good-faith effort to maintain ethical and professional standards, something regulators and courts increasingly expect.
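Continuing the same hypothetical, a basic audit can be as simple as reading that log back and flagging any AI use that was never marked as human-reviewed. This sketch assumes the ai_use_log.jsonl file from the previous example exists; it shows only the mechanical part, since a real audit would also sample outputs for accuracy.

```python
# A minimal audit pass over the hypothetical AI-use log: surface every
# recorded AI use that lacks a named reviewer.
import json

def audit_log(log_path: str = "ai_use_log.jsonl") -> list[dict]:
    """Return log entries that were never marked as human-reviewed."""
    unreviewed = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            if not entry.get("reviewed_by"):
                unreviewed.append(entry)
    return unreviewed

for entry in audit_log():
    print(f"Unreviewed AI use on matter {entry['matter_id']}: {entry['task']}")
```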
7. The Road Ahead: AI Will Become a Standard Part of Legal Practice
In the coming years, the legal profession will likely see more formalized AI ethics rules, more disclosure requirements, and more pressure to use secure, well-documented systems rather than casual, public AI tools. Insurance carriers may adjust malpractice premiums based on a firm’s AI policies. Courts may expand local rules related to AI-assisted drafting. Clients may even begin asking how firms use AI to improve efficiency and accuracy.
The takeaway is simple: AI isn’t a passing trend. It’s becoming woven into the very fabric of legal practice. And firms that adopt it responsibly will gain a real advantage, both ethically and competitively.
Final Thought: AI Isn’t the Risk, Unsupervised AI Is
AI can streamline your workflow, strengthen your case preparation, and help your team handle more cases with greater accuracy. But no AI tool, no matter how advanced, can replace the lawyer’s role as supervisor, reviewer, strategist, and ethical decision-maker.
Use AI. Embrace it. Leverage its strengths. But never forget that you, not the machine, are ultimately responsible for the work produced. Good ethics and good technology aren’t in conflict; together, they create a safer, stronger, more efficient plaintiff practice.