I have published a few blog posts regarding the emerging use of artificial intelligence in the legal industry, and I have mostly taken an extremely skeptical view that AI will have a significant impact on the law anytime soon. ChatGPT, however, may signal a game changer - for the worse.
In short, ChatGPT is a chat-based interface to an AI program created by OpenAI. You can ask it to explain what it is and how it works, and it will answer you far better than I can. It seems to me that it is exceptionally good at digesting questions and giving intelligent, flowing answers, and it is even able to maintain a dialogue or conversation - at least compared to its predecessors.
The problem, however, is that people are going to start interfacing with programs like ChatGPT, believe they are getting sufficiently competent legal, medical, or other advice, and act on it. Of course, I can only really give an opinion on its legal results, but the problem is so obvious that the creators of ChatGPT include warnings in basically any response that could remotely lead someone to rely on its potentially bad advice.
I would say, however, that those warnings are not going to stop people from, as the adage goes, knowing just enough to be dangerous.
Take, for example, the input of "What is the code section in Virginia for libel's statute of limitations?"
I had to try a few different inputs to get a result; the response was Va. Code § 8.01-243, which is the personal injury section. Defamation's limitations period (libel, slander, etc.) is actually provided in Va. Code § 8.01-247.1. This is not an end-zone dance over the mean AI getting an answer wrong; it is an examination of how it gets the answer wrong. I have a great deal of respect for the ChatGPT program and its programmers - it is a fascinating development, and I believe it will be a revolutionary tool over the next ten years.
And, to reiterate, the AI is programmed to explain that you should consult with an attorney for these types of questions. The fascinating part, to me, is how it got the wrong answer, because it had to be exceptionally intuitive and intelligent to arrive at this particular wrong answer.
I have observed that many people think defamation is not an injury to a person; the law, however, generally views a defamation injury as a personal injury. Thus, the AI had to examine the law, figure out that defamation is classified as a personal injury, and then return the general personal injury section instead of, for example, treating it as an injury to the person's property interest in their reputation.
Even more fascinating, I challenged ChatGPT by telling it the correct answer; it apologized and incorporated the correct information into that discussion! A limitation of the current implementation, however, is that it seems intentionally designed not to retain, and not to cross-pollinate, new information learned from users. When I started a new session and asked the same question, it again supplied the wrong answer.
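For the technically curious, this behavior is consistent with how chat models are typically served: each request is stateless, and the "conversation" exists only because the client resends the prior messages with every request. Here is a minimal sketch, assuming the OpenAI Python library's chat API as a stand-in (ChatGPT's actual implementation is not public, and the model name and API key below are placeholders):

```python
# A minimal sketch, assuming the OpenAI Python library's chat API as a
# stand-in for ChatGPT's (non-public) implementation. The model itself is
# stateless: within a session it "remembers" a correction only because the
# prior messages are resent with every request.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

question = "What is the code section in Virginia for libel's statute of limitations?"

# Within one session, the user's correction rides along in the message
# history, so the model can apologize and use it going forward.
history = [
    {"role": "user", "content": question},
    {"role": "assistant", "content": "Va. Code § 8.01-243."},  # the wrong answer
    {"role": "user", "content": "That is the personal injury section; "
                                "defamation is Va. Code § 8.01-247.1."},
]
corrected = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)

# A fresh session starts with an empty history. Nothing learned from other
# users (or from the earlier session) carries over, so the same wrong
# answer can come back.
fresh = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)
print(fresh["choices"][0]["message"]["content"])
```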
I would anticipate that is a feature and not a bug; there have been examples of similar AIs going absolutely crazy as they learned from the public at large.
It is fascinating how far this interface has come. It will refuse to generate a Will, for example, but it will give very powerful, if sometimes specious, answers to questions. Over the next few years, I fear, we are going to see people relying on things like ChatGPT, with horrible, or perhaps funny, results.
I'm just hopeful that as our AI overlords take over the world, they'll view us humans as we view cute kittens instead of ants or bacteria.