Plenty of people have written about whether AI is going to replace lawyers. Leaving that question aside, not enough people have focused on the way widespread use of AI is going to be used as evidence in legal proceedings.
Throughout modern times, when people had legal questions, they asked lawyers. And if they formed an attorney-client relationship with those lawyers, those conversations would be protected from being used against them in the legal process.
Now, the most powerful form of pseudo-intelligence in history has arisen over the course of just a few years, and people are already starting to use this tool en masse as a cost-effective substitute for legal advice. But unlike with not-always-cost-effective lawyers, everything that people type into these super-tools can and will be used against them.
Stated another way, your conversations with ChatGPT are neither privileged nor confidential. If you ask ChatGPT a question about whether you’re violating a law, and you later are accused of having violated that law, that can be used against you in a variety of ways.
And so smart plaintiffs’ lawyers and prosecutors will be able to use ChatGPT and other AI tools to establish liability and show wrongdoing and intentional misconduct. In a great many cases, ChatGPT conversations will make that burden much easier to meet.
ChatGPT is going to save a lot of people money on legal fees. But it’s also going to get a lot of people in trouble who arguably “confess” to illegal conduct in conversations with an artificial intelligence.
To be clear, search results have always been admissible, but the conversational nature of AI chats and prompts is a game-changer. GPT prompts much more closely resemble the kind of questions and inquiries you would ask an actual lawyer. And the answers are much more precise. A search query may return a variety of results, some of which directly answer your question and some of which don't. But ChatGPT and other LLMs answer your question in a way that makes it much easier for someone to argue that you knew or should have known that what you were doing was illegal.
I’ve already experienced a few awkward and problematic moments with clients involving the discoverability of AI-related conversations. As one example, I had a client who was having daily conversations with me as we prepared for litigation last year. All those conversations were privileged, of course. Around that time, one of the principals of the company had a call with a family member where he vented about the litigation and divulged most of the details of our conversations. And he had an AI-transcription service summarize the entirety of that call for his records. That call and its contents were not privileged. That means the other side got a sneak peek at our litigation strategy.
You can imagine thousands of permutations of this, given all the ways people are starting to use AI in their daily lives.
There are many legal claims where intent, knowledge, and notice are elements of the claim. With discoverable AI queries and prompts, recorded conversations, and integrations into all parts of our day, defendants will be doing the dirty work for plaintiffs and prosecutors at rates perhaps never seen before.
Smart people have already started incorporating AI into their legal processes. But very smart people will know that there are certain types of questions and processes you shouldn't be discussing with an AI, even if you are confident that the AI can tell you the correct answer.