Lawyer busted for using ChatGPT to write brief after fictitious court cases cited

An attorney, representing a client in a lawsuit against an airline, has been called out for using ChatGPT after fictitious court cases were cited. 

That attorney could now face sanctions. 

The client, Roberto Mata, is currently suing Avianca Airlines after he claimed an employee struck him "in his left knee with a metal serving cart" causing him "to suffer severe personal injuries." The incident reportedly happened in August 2019 on a flight from San Salvador to New York. 

Mata is being represented by the law firm Levidow, Levidow & Oberman. Avianca is being represented by Condon & Forsyth. 

When the airline filed a motion to dismiss, Mata's attorneys opposed it, citing several court cases and decisions as to why the lawsuit should go forward. They cited Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, among others. 

That's when things took a turn: the cited cases didn't exist. 

According to court documents obtained by FOX Television Stations, attorneys for the airline said they could not locate the cases and decisions cited by Mata's attorneys. 

The airline took the matter to the judge. 

In an affidavit, one of Mata's attorneys, Steven A. Schwartz, responded that he "consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed."

ChatGPT appeared to have given Schwartz nonexistent cases as part of the research. Schwartz admitted that he did not verify the sources ChatGPT provided and apologized for his actions, saying he never intended to deceive. 

However, Schwartz could still face sanctions. A judge will take up the matter next month.

FOX Television Stations has reached out to attorneys on both sides for comment. 

ChatGPT launched on Nov. 30, 2022, but is part of a broader set of technologies developed by the San Francisco-based startup OpenAI, which has a close relationship with Microsoft.

It’s part of a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they’ve learned from a vast database of digital books, online writings and other media.

Millions of people have played with it over the past month, using it to write silly poems or songs, to try to trick it into making mistakes, or for more practical purposes such as helping compose an email. All of those queries are also helping it get smarter.

However, ChatGPT has been criticized for producing information that is not factual. 

Its launch came with little guidance for how to use it, other than a promise that ChatGPT will admit when it’s wrong, challenge "incorrect premises" and reject requests meant to generate offensive answers. Since then, however, its popularity has led its creators to try to lower some people’s expectations.

"ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness," OpenAI CEO Sam Altman said on Twitter in December.

Altman added that "it’s a mistake to be relying on it for anything important right now."

The Associated Press contributed to this report. This story was reported from Los Angeles.