Matthew Crist

AI, CHATGPT, AND THE RULES OF EVIDENCE

Updated: Aug 3, 2023


An average person has probably never considered the Rules of Evidence and how they may impact one's life. Hopefully, almost none of you will ever need to learn them, because, if you do, it likely means you’re in a legal battle that may result in incarceration or significant liability. As discussions swirl about Artificial Intelligence and ChatGPT, I am constantly considering how these incredible tools are going to substantially destroy the current conception of the Rules of Evidence.


A basic primer on the Rules of Evidence may be in order.


I recall a quote from Star Trek that I had hoped originated with some jurist of historical significance, but, no, it seems it comes only from Captain Picard in a Star Trek episode: “Your honor, the courtroom is a crucible. In it we burn away irrelevancies until we are left with a pure product, the truth.”


This line, likely barely remembered by the legal community, does not miss the mark by much. A trial does meet that description in many ways. However, a trial burns away not only irrelevancies but also documents and testimony that are cumulative, prejudicial, or unlikely to be accurate.


All day long in a trial, many very relevant documents may be ruled inadmissible because they have some problem that the court finds should preclude their admission into the record for consideration by the jury. Take, for example, a car accident case, in which the crash report and certain DMV records will be precluded.


Clearly, a report on the crash made by a police officer who arrived within minutes is relevant; it has a tendency to make the existence of the matters it describes more probable. But there is a hard rule in Virginia precluding those relevant documents. Va. Code § 46.2-379. This statutory preclusion was likely made on the grounds that the jury would give such a report great weight even though it may contain significantly misleading or incorrect opinions, or carry a prejudicial effect that cannot be overcome. This likely happened so often that the General Assembly just said, enough, and created the statutory preclusion.


Other core issues involve hearsay, an extremely complex topic. Hearsay is an out-of-court statement that is presented to prove that the content of the statement (whether verbal or documentary) is true. Hearsay could fill its own year in law school, so I will not go too far down the rabbit hole. Suffice it to say, if someone said something while not on the witness stand in front of the jury today, then it is likely hearsay and likely to be precluded.


This rule comes from a policy position that the person on the receiving end of such a statement is fundamentally unable to challenge its truth and, therefore, no matter how relevant or probative the statement may be, it will be precluded (keep in mind there are many exceptions, but this is the general idea).


Hearsay, authentication, foundation, and many other concepts in the Rules of Evidence set up a system whereby we burn away inadmissible evidence to attempt to arrive at the truth. True, accurate, relevant, and complete sets of documents and witness testimony that put before the jury a framework of what happened, who caused a harm, what resulted from the harm, and how much the Plaintiff should be compensated – these are, hopefully, the pure product at the end of a trial.


Enter: AI, stage left.


Even just seven years ago, making a movie with a dead actor was a feat requiring sophisticated and expensive computers and extremely talented artists, and it wowed crowds who lined up to see it. Rogue One was released in 2016 and featured the character Grand Moff Tarkin, played by Peter Cushing, who died in 1994.


Today, children who were born when Rogue One was released are competent enough to use AI programs to make exceptionally convincing videos and audio clips. The results of these remarkably simple programs are so convincing, and the programs so easy to use, that the foundation of the Rules of Evidence is shaking.


Forging a signature is a classic way a bad actor may pull a fast one on the judicial system. Producing fake text messages is a few years old now and, in my estimation, has not been too damaging. But being able to make video or audio clips that can pass expert examination is devastating.


The Rules of Evidence and Courts are not prepared.


Many trials bear the vituperative din of one party making a claim as simple as “water is wet” while the opposing party claims, nope, it is dry. These polar-opposite views of the same topic are resolved by a jury or judge who views the evidence, watches the parties make their case on the stand, and makes a judgment call. One party walks away happy; the other exclaims that an injustice has occurred (often, no one walks away happy, but that is another story).


At present, the system is built upon rules of evidence and procedure that attempt to permit only evidence that is most likely credible and fairly presented. No one experienced in the law is naïve enough to believe that the result of every trial is in perfect alignment with the facts of the matter, but the rules attempt to get us as close as possible to that goal.


With tools presently available, claims of falsified evidence, both true and false, and each equally important, are going to quickly make their way into the courts.


In this video, Mike Boyd, who is not a particularly advanced programmer, shows the whole process he used, 3 months ago, to make several artificial videos.






While there are still clear faults in his results, in just the last three months there has been significant progress on these tools. What will the next year bring? What about the next ten years?


What should we do about it?


One fact that may surprise many: courts generally proceed from a presumption that the litigants before them are acting in good faith, obeying the law, and following the court’s instructions. That presumption is sound, and it explains why many judges hit bad-faith litigants pretty hard when their bad conduct is proved. But rules built upon generations of wisdom may quickly be turned on their head as AI spreads.


And, indeed, the genie is out of the bottle. At this stage, I have some basic ideas about proposed rule and procedure changes that may resolve some issues, but I think, at a minimum, industry leaders and legislators need to examine the implications of falsified audio and video clips and how best to evolve the rules of evidence to thwart the bad actors who will use these tools.


One major boy-who-cried-wolf problem is going to appear quickly: people on the wrong end of falsified evidence are going to be ignored, or falsified evidence is going to be wrongly trusted, outcomes that are equally problematic. I have seen cases of forged signatures in which opposing expert witnesses were able both to authenticate and to discredit the same signature. It then comes down to a credibility roll of the dice, and the jury might get it right.


Fundamentally, I think the rules of evidence should be relaxed after a claim that evidence which might play a significant, even dispositive, role in a case has been falsified. One possibility, for example, comes from the dead man’s statute in Virginia, which permits certain statements from a dead person, otherwise inadmissible hearsay, to be admitted into evidence only after a showing of some evidence corroborating the dead person’s statement.


Similarly, I believe the rules can be modified to admit otherwise inadmissible hearsay for the limited purpose of challenging an audio clip alleged to have been falsified through AI.


Also, video and audio clips that are otherwise admissible should face additional authentication requirements, such as production, by the proponent, of the device that recorded the video or audio to a court-appointed independent forensic examiner, paid equally by the parties, to corroborate the recording. Certain metadata can be extracted from the device that took them to authenticate audio and video files.
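As one illustration of the kind of corroboration an independent examiner might record, the short Python sketch below computes a cryptographic hash of a media file and collects basic filesystem details. This is only a sketch of the general idea of fixing a file’s contents and timestamps at the moment of production for later comparison: the function name `fingerprint_recording` and the fields it returns are hypothetical, and real forensic authentication would rely on dedicated tooling and examination of the recording device itself.

```python
import hashlib
import os
from datetime import datetime, timezone

def fingerprint_recording(path: str) -> dict:
    """Collect basic corroborating details for a media file.

    A hypothetical first step for an examiner: the SHA-256 hash fixes
    the file's contents as of the date of production, and the filesystem
    timestamp offers a weak (and easily altered) hint about when the
    file was last modified on the producing device.
    """
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large video files do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
    stat = os.stat(path)
    return {
        "sha256": sha256.hexdigest(),
        "size_bytes": stat.st_size,
        "modified_utc": datetime.fromtimestamp(
            stat.st_mtime, tz=timezone.utc
        ).isoformat(),
    }
```

Recomputing the same hash later and comparing it to the value recorded at production would at least show whether the file changed between production and trial; it cannot, by itself, show whether the recording was genuine to begin with.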

There are going to be manifold problems with any proposed solution, and some will disagree; what I do not believe is up for debate, however, is that AI software available today already poses problems for the Rules of Evidence. Courts and practitioners must meet these challenges head on rather than waiting to react.


I suspect, very soon, we’re going to see evidence of a murder that is AI generated – and we’re not going to know the difference. This problem cannot be overstated, but the law seems to be standing still, at best.
