Jerome Dewald sat with his legs crossed and his hands folded in his lap before an appellate panel of New York State judges, ready to argue for the reversal of a lower court's ruling in his dispute with a former employer.
The court had allowed Mr Dewald, who is not a lawyer and was representing himself, to accompany his argument with a prerecorded video presentation.
As the video began to play, it showed a man who appeared younger than Mr Dewald's 74 years, wearing a blue shirt and a beige sweater and standing in front of what seemed to be a blurry virtual background.
A few seconds into the video, one of the judges, confused by the image on the screen, asked Mr Dewald whether the man was his lawyer.
“I created this,” Mr Dewald replied. “This is not a real person.”
The judge, Justice Sallie Manzanet-Daniels of the Appellate Division's First Judicial Department, paused for a moment. It was clear she was unhappy with his answer.
“It would have been nice to know that when you made your application,” she snapped.
“I don’t appreciate being misled,” she added, before shouting for someone to turn off the video.
What Mr Dewald had failed to disclose was that he had created the digital avatar using artificial intelligence software, the latest example of AI entering the US legal system in potentially alarming ways.
The hearing at which Mr Dewald made his presentation, on March 26, was filmed by court cameras and reported earlier by the Associated Press.
Reached on Friday, Mr Dewald, the plaintiff in the case, said he had been overwhelmed by embarrassment during the hearing. He said he had sent the judges a letter of apology shortly afterwards, expressing his deep regret and acknowledging that his actions had “misled” the court.
He said he had turned to the software after stumbling over his words in previous legal proceedings. Using AI for the presentation, he thought, might ease the pressure he felt in the courtroom.
He said he had planned to make a digital version of himself but had run into “technical difficulties”, which prompted him to create a fake person for the recording instead.
“My intention was never to deceive, but to present my arguments in the most effective way possible,” he said in his letter to the judges. “However, I acknowledge that proper disclosure and transparency must always prevail.”
A self-styled entrepreneur, Mr Dewald was appealing an earlier ruling in a contract dispute with a former employer. He eventually presented an oral argument at the appellate hearing, stammering and taking frequent pauses to regroup and read prepared remarks from his cellphone.
As embarrassing as it may have been, Mr Dewald could take some comfort in the fact that real lawyers have got into trouble over the use of AI in court.
In 2023, a New York lawyer faced serious consequences after using ChatGPT to create a legal brief riddled with fake judicial opinions and legal citations. The case highlighted the flaws of artificial intelligence and reverberated throughout the legal profession.
That same year, Michael Cohen, a former lawyer and fixer for President Trump, provided his own lawyer with fake legal citations he had obtained from Google Bard, an artificial intelligence program. Mr Cohen ultimately pleaded for mercy from the federal judge presiding over his case, stressing that he had not known the generative text service could produce false information.
Some experts say that artificial intelligence and large language models can help people who have legal problems to deal with but cannot afford lawyers. Even so, the technology's risks remain.
“They can still hallucinate and produce very compelling-looking information” that is in fact “either fake or nonsensical”, said Daniel Shin, assistant director of research at the Center for Legal and Court Technology at William & Mary Law School. “That risk has to be addressed.”