The case Walters v. OpenAI

By Iñaki Viggers.

The defamation case Mark Walters v. OpenAI is a reminder of everyone's duty to make judicious use of Artificial Intelligence. Although I am mindful that AI is largely devised for subduing humankind, my view is that OpenAI will, and should, prevail in this controversy.

Background

According to Walters's complaint, journalist Fred Riehl notified Walters of content fabricated by OpenAI's ChatGPT that is blatantly false and defamatory of Walters. The alleged context was Riehl's so-called research on the matter of The Second Amendment Foundation v. Robert Ferguson in federal court, a case that does not involve Walters at all.

The complaint Walters filed in Georgia state court provides details of ChatGPT's fabrication, aka hallucination in the crappy (and creepy) AI hell. Walters seeks relief for libel per se. OpenAI sought removal to federal court, where Walters's attorney filed an amended complaint. The amended complaint in federal court mostly added a history of ChatGPT's unreliability in the form of hallucinations unrelated to Walters's claim. Media coverage of some of those hallucinations predates the filing of Walters's initial complaint. OpenAI contends that Riehl insisted on retrieving certain output from ChatGPT despite the software's multiple disclosures of its limitations and unreliability.

The case has been remanded to state court after OpenAI repeatedly failed to establish federal jurisdiction over it.

Assessment of the controversy

Months prior to Walters filing suit, I addressed a question that bears a striking resemblance to Walters's claim. The timing and accuracy of the key points I outlined there show that the essentials of my analysis here do not merely copy the developments of the ensuing litigation.

The main reason I foresee OpenAI ultimately prevailing in court is its conspicuous warning regarding the risk of inaccuracies in its services' output. OpenAI's terms of use explicitly assign to the user the responsibility to "evaluate the accuracy of any Output [...], including by using human review of the Output".

OpenAI readily raised its disclaimer in the court proceedings. This prompted Walters's attorney to cite, in opposition to OpenAI's motion to dismiss, Harcrow v. Struhar, 236 Ga. App. 403, 511 S.E.2d 545 (1999) (holding that the defendant's sole disclaimer "I'm not saying that [plaintiffs] are responsible for this atrocious act" does not preclude a finding that the defendant falsely imputed a crime to the plaintiffs).

My take is that the Harcrow precedent is not controlling in Walters's matter. First, the quoted excerpt from Harcrow arose in the context of distinguishing between a statement of opinion and a statement of fact, a distinction that is misapplied in Walters's claim. The notion that a computer program can harbor an opinion is a non sequitur, which preempts that distinction altogether. Although the opinion-versus-fact distinction frequently arises in US defamation law, it seems moot in Walters's suit.

Second, the placement of Harcrow's disclaimer is distinguishable from OpenAI's. Harcrow's disclaimer was sandwiched between false and defamatory statements, thereby tending to dilute its "clarificatory" effect on readers' minds. By contrast, OpenAI's disclaimer of unreliability precedes any and all output from ChatGPT, in that users are supposed to have read the terms of use prior to using the software.

And third, Harcrow's purported deference to "the Smyrna Police" helped reinforce in his audience an impression of criminality about matters that the police are to ascertain and that laypeople generally cannot corroborate by themselves. This is in stark contrast with OpenAI's explicit direction that the user himself be in charge of "evaluat[ing] the accuracy of any output".

By citing in the amended complaint a history of ChatGPT's inaccuracies, Walters's attorney presumably had in mind Federal Rule of Evidence 404 and/or its state equivalent. That rule makes admissible evidence of a defendant's other wrongs, even if unrelated to the case, for purposes such as proving the defendant's system or method of doing an act. But this litigation strategy is likely to backfire because it essentially depicts the unreliability of ChatGPT's output as common knowledge and, by implication, its inability to defame a person. Furthermore, the unreliability of technology such as ChatGPT is nothing new. The grossly misleading nature of this technology has been widely discussed for several years now. One of the most renowned sources raising awareness of this issue is the book Weapons of Math Destruction, by Cathy O'Neil.

Technology that the public knows, or is reasonably expected to know, to be unreliable must never be used as a source of information. For this reason it is alarming that a journalist would resort to ChatGPT allegedly in order to obtain a summary of the SAF's court case. That approach is too lazy to qualify as research, whether journalistic or otherwise. It spells incompetence. Any journalist is supposed to bear in mind the abundance of ineptitude and garbage on the Internet and in the media overall. The ongoing prevalence of Generative AI further highlights a person's need for discernment. The same criticism applies to anyone who makes choices based on increasingly automated misinformation, or who cites it as a pretext for sloppiness or deliberate misconduct.

People need to take responsibility for their use of technology. Ruling against OpenAI in this controversy would fuel people's sense of entitlement to incompetence and recklessness, whereas the babbling about the "need" for AI legislation will worsen overregulation. That would hinder reasonableness and our autonomy, in part because enactments of mistaken protectionism are usually plagued with flaws and work to our detriment. Case in point: the EU's General Data Protection Regulation (Regulation (EU) 2016/679) and the ensuing series of laws in EU member states are framed as protecting people's "freedom", but in reality these sloppy statutes ultimately amount to a disguised, detrimental, and multi-faceted suppression of a person's identity.

A few additional remarks are relevant here, although they are not as crucial as the disclaimer issue.

The complaint does not detail how (if at all) journalist Riehl is related to Walters, specifically whether any prior arrangements between them took place. OpenAI's motion to dismiss reflects suspicion in that regard. It is not far-fetched to conceive of OpenAI conducting discovery on how exactly references to Walters came up during Riehl's use of ChatGPT. Hypothetically, if OpenAI proves (1) that Walters was the first to become aware of the defamatory falsehoods, and (2) that he then directly or indirectly prompted Riehl to reproduce that output, the claim would be defeated on the ground that the defamatory falsehoods are tantamount to self-publication.

OpenAI alleges that ChatGPT's statements to Riehl were not published in the legal sense and that, as paraphrased, ChatGPT is "only a drafting tool, not a publishing tool". That reductionist depiction fails because Riehl's request to ChatGPT sounds more like "Hey ChatGPT, I don't feel like reading a full document. Do the job for me. Here's the link to the website", which is quite a stretch from merely requesting assistance with wording a topic. ChatGPT's eventual compliance, as conveyed in the complaint, contradicts the "only a drafting tool" allegation, although this inconsistency will hardly determine the outcome of the case.

Procedural law and the facts of the case do not entitle OpenAI to dismissal. Trial is warranted unless Walters himself withdraws the claim, be it because the parties reach a settlement out of court or because weighing litigation costs against the actual merits of the case changes his mind.

Apropos of litigation costs, I for one would hate to get attorney bills for an unnecessary amendment [of the complaint] that essentially shoots itself in the foot, although this is just my personal assessment of that litigation strategy. Obviously I consider this suit a misdirected effort when it comes to fighting the harm that AI entails. We need to take effective steps to preclude the progression and effects of AI, something that judges and legislators will be unable to accomplish.

Conclusion

Besides the spiraling domination of the masses, AI is on its way to eradicating dignity and various skills that humans have been developing for millennia. But the enablers of this process are users themselves. Users' laziness and poor judgment just make more room for AI's pervasiveness. They are placing primarily themselves, and to a great extent the rest of us, in an increasingly vulnerable position.
