Tech titans are telling you that AI bots can be your friend, your lover and your whip-smart tutor or assistant.
But you can’t trust them to do your homework. Just ask a growing list of attorneys who have faced sanctions or been grilled by judges for filing inaccurate documents produced with artificial intelligence.
Our colleague Maggie Prosser wrote recently about an experienced Dallas corporate attorney who faces potential sanctions for citing fake cases in a court filing that was allegedly prepared using generative AI. The attorney, Heidi Hafer, was representing herself in a legal matter, and it’s not clear whether she might have used AI technology without realizing it. Hafer told a panel of judges that she used Google to look up case law and didn’t recall using any other AI-powered tool, Prosser reported. Google can offer AI-generated summaries in response to people’s search queries.
Hafer took responsibility for the error, according to her attorney.
This isn’t an isolated case. News headlines are quickly piling up about attorneys who presented false information after relying on faulty research from ChatGPT and other tools.
Attorneys might not be trying to fool courts on purpose, but the stakes are too high for the legal profession to be so blasé about the use of AI. In many cases, courts are deciding the fate of people’s families, livelihoods and freedoms.
The American Bar Association issued a formal ethics opinion last year that urged attorneys to consider the ethical implications of AI use in their work. Among other things, it stated that if AI helped attorneys do their work more efficiently, they shouldn’t bill for more time than what they actually spent entering information into a tool or reviewing the resulting draft for accuracy.
But the opinion is guidance, and it’s not always straightforward. For instance, it stated that depending on the circumstances, “client disclosure [of generative AI tools] may be unnecessary.”
Clearly, more robust guidance on disclosure and privacy protection is needed, especially since the confidentiality of client information that is entered into AI tools could be compromised.
The State Bar of Texas is working on its own framework for the ethical use of AI and how its use should be billed. In a survey of 651 bar members last year, 30% said they or their firm use AI in their practice.
“AI has become so pervasive in most technology applications that it is not feasible for attorneys to eliminate the use of AI, even if that were desirable,” reads a 2024 report by the State Bar’s AI task force.
All the more reason for judges and attorneys to act more urgently to set firm guardrails around its use.