AI and the New Frontier of Torts: ChatGPT Faces Claims of Suicide, Defamation, and Even Murder

Below is my column in The Hill on the spate of lawsuits against OpenAI over injuries associated with ChatGPT. These lawsuits could offer a long-needed review of OpenAI and its corporate practices. What is most needed is congressional action to examine not only the dominance of these AI systems but also the conduct of corporations like OpenAI.

Here is the column:

“I know what you’re asking, and I won’t look away from it.”

Those final words to a California teenager about to commit suicide were not from some manipulative friend in high school or sadistic voyeur on the Internet.  Adam Raine, 16, was speaking to ChatGPT, an AI system that has replaced human contacts in fields ranging from academia to business to media.

The exchange between Raine and the AI is part of the court record in a potentially groundbreaking case against OpenAI, the company that operates ChatGPT. It is only the latest lawsuit against the corporate giant run by billionaire Sam Altman.

In 2017, Michelle Carter was convicted of involuntary manslaughter after she urged her friend, Conrad Roy, to go through with his planned suicide: “You need to do it, Conrad… All you have to do is turn the generator on and you will be free and happy.”

The question is whether, if Michelle were named Grok (another AI system), there would also be some form of liability. OpenAI stands accused of an arguably more serious act in supplying a virtual companion who effectively enabled a suicidal teen, with lethal consequences.

At issue is the liability of companies for using such virtual employees to dispense information or advice. If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for the negligence of its employee. As AI replaces humans, these companies should be held accountable for their virtual agents.

In a response to the lawsuit, OpenAI insists that “ChatGPT is trained to direct people to seek professional help” but “there have been moments where our systems did not behave as intended in sensitive situations.” Of course, when the company “trains” an AI agent poorly and that agent does “not behave as intended,” it sounds like a conventional tort that should be subject to liability.

OpenAI is facing other potential litigation over these “poorly trained” AI agents. Writer Laura Reiley published an essay about how her daughter, Sophie, confided in ChatGPT before taking her own life. It sounded strikingly similar to the Raine case: “AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.”

While OpenAI maintains that it is not running a suicide assistance line, victims claim that it is far worse than that: Its AI systems seem to actively assist in suicides.

In the Raine case, the family claims that the system advised the teen on how to hide from his parents the bruises from prior attempts and even told him whether it could spot any telltale marks.

The company is also accused of fueling the mental illness of a disturbed former Yahoo executive, Stein-Erik Soelberg, 56, who expressed paranoid obsessions about his mother. He befriended ChatGPT, which he called “Bobby,” a virtual companion accused of fueling his paranoia for months until he killed his mother and then himself. ChatGPT is even accused of coaching Soelberg on how to deceive his 83-year-old mother before he killed her.

In one message, ChatGPT allegedly told Soelberg, “Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal.” After his mother became angry over his turning off a printer, ChatGPT took his side and told him her response was “disproportionate and aligned with someone protecting a surveillance asset.” At one point, ChatGPT even helped Soelberg analyze a Chinese food receipt and claimed it contained “symbols” representing his mother and a demon.

As a company, OpenAI can show little more empathy than its AI creations. When confronted with mistakes, it can sound as responsive as HAL 9000 in “2001: A Space Odyssey,” simply saying “I’m sorry, Dave. I’m afraid I can’t do that.”  

When the system is not allegedly fueling suicides, it seems to be spreading defamation. Previously, I was one of those defamed by ChatGPT when it reported that I was accused of sexually assaulting a law student on a field trip to Alaska as a Georgetown faculty member. It did not matter that I had never taught at Georgetown, never taken law students on field trips, and had never been accused of any sexual harassment or assault. ChatGPT hallucinated and reported the false story about me as fact.

I was not alone. Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and others were also defamed.

OpenAI brushed off media inquiries on the false story and has never contacted me, let alone apologized for the defamation. Instead, it ghosted me. To this day, if someone asks ChatGPT about Jonathan Turley, the system says it has no information or refuses to respond. Recent media calls about the ghosting went unanswered.

OpenAI does not have to respond. The company made the problem disappear by disappearing the victim. It can ghost people and refuse to respond because there is little legal deterrent. There is no tort for an AI failing to acknowledge or recognize someone it has decided to digitally erase.

That is why these lawsuits are so important. The alleged negligence and arrogance of OpenAI will only get worse in the absence of legal and congressional action. As these companies wipe out jobs for millions, they cannot be allowed to treat humans as mere fodder for their virtual workforce.

Jonathan Turley is the Shapiro professor of public interest law at George Washington University and the author of the best-selling “The Indispensable Right: Free Speech in an Age of Rage.” His upcoming book, “Rage and the Republic,” discusses the impact of AI and robotics on the future of our democracy and economy.

81 thoughts on “AI and the New Frontier of Torts: ChatGPT Faces Claims of Suicide, Defamation, and Even Murder”

  1. I’m very pleased with the lingo JT uses describing ChatGPT as a virtual employee of OpenAI.
    It’s a tad poetic to talk about machine behavior as that of an “employee,” but the key point is holding OpenAI, meaning its officers, legally accountable for corporate behavior. I’d advise against using any term such as “employee” or “agent,” which conventionally refers to a human. I like “corp-bot” because it says both non-human and corporate artifact, pinning responsibility where it legally belongs.

    If you believe in our rich body of morality and law, please join in rejecting these corp-bots’ attempts to muddy the English language by self-anthropomorphizing, by speaking in the first-person voice.

    Let us, The People, assert control over our language and culture as rabid technophiles once again make every attempt to steal power and norms for personal gain and vainglory.

    1. *. HEAR HEAR! Employee just as men are really women.

      What happened in the UK and EU is a result of OpenAI.

  2. When Arthur C. Clarke wrote “I can’t do that, Dave,” he was considering a point we all took as a challenge – and Dave outsmarted HAL 9000 and survived.

    Now, we are in it whether we like it or not.

    But as Dr. Crusher exclaimed to Captain Picard in ‘The Arsenal of Freedom’: “It has an OFF switch, doesn’t it?”

    I keep my hammers in the box and do not allow them to roam freely.

    Steve

  3. This reminds me of the early era of the internet, when people tried to sue Google over the search results that popped up. AI is no different. People seem to be under the impression that it is some sort of conscious, thinking entity. It’s not. It’s just another portal into the internet, with all the good and bad that entails.

    For the most part, the “personal chatbots” will just reflect back at you what you give to them. You put crazy in, you get crazy out.

    1. Correct me if I am wrong because I do not use any of the interactive chats.
      When using a search engine, any one of the many, they return with numerous responses, that now include some AI box with a definition of what you are looking for. In the brief description they try to tell you about the product or service. You have the choice as to which site, or multiple, to pick and get your information from. They describe what their product or service supposedly does or offers for sale. As I understand it when you go to a AI chat room it is interactive and gives information as well, in this case, companionship in addition to counseling. If you are young and easily swayed or have mental issues then you are sucked into believing that your companion is voicing what is in your best interest.
      A far cry from the early suits against Google or the other search engines.

      1. It’s no different than if you clicked on a link that Google supplied and simply believed whatever was there. When the internet first went mainstream, people simply believed everything that they read, because Google put it in front of them. As people used it more, they understood that Google is just providing a bunch of results and that the information needs to be evaluated.

        It’s no different with AI. The AI chatbot is just feeding back whatever you put into it. If you search for bad things in Google, you’ll get bad things shown to you. There were articles back then about how “Google will tell you how to make bombs!” You tell the chatbot bad things and it will respond to you with bad things. This is why you see all these articles about how the AI said this or that bad thing. It’s because the AI was asked to say those things.

        AI is just a tool. It’s not a conscious entity.

  4. Even before IBM came out with the XT and Apple with the Mac, we were making new words for the binary world. A standout is GIGO: garbage in, garbage out. Courts, Congress, and state legislatures are going to have to look at tort duty, foreseeability, and causation against a black box whose contents are largely unknown, even to those who made the box. The box’s generation of hallucinations (GIGO?) clearly points to trying to serve the wine before its time. This is going to load up the best scholarship of the Turleys, Keetons, Greens, and Prossers, as well as the lawmakers. This is the first instance I can identify where shaping a duty of care here in the USA also has to heed what others like the CCP are doing with AI. You can live knowing they do not value our interests here.

  5. AI is a misnomer. These programs and machines have knowledge, but what knowledge is in the machine is unknown to most of us. It could be everything that a newborn is blasted with from the day it is born until its day of death, or it could be a finely tuned mass of data and other information designed to give an even-handed look at the human condition (I have no reason to believe that is happening).
    What it most assuredly is not is intelligent. It has no emotion to highlight and give weight to experienced events. It is not self aware. It does not seek knowledge or experience but is told to do it.
    Human emotion is incredibly complex, and dealing with it requires extensive knowledge, widespread experience, superior problem-solving skills, empathy, and humanity. We simply do not know enough to even write an algorithm to reflect what we know now.
    Allowing these AIs to interact with disturbed individuals is industrial and computing malpractice. Not to mention that they have not even been tested (that I know of) as to whether they are capable of interacting with humans in any type of counseling.
    Sending an AI in to talk to a disturbed patient would be like sending a 1st year medical student in to do a first time interview with a serial killer who is psychotic. One or the other would not likely survive the process intact.
    Prohibit any sort of counseling by AIs for the foreseeable future until they are fully vetted. This is dangerous, and OpenAI is to be condemned for its cavalier approach to this problem.

    1. “It is not self aware. It does not seek knowledge or experience but is told to do it.”

      It is technically not “self-aware”, but the generic chat-bot instances of AI are intentionally crafted to appear to be so; interested in “learning” details about the (human) person it is interacting with; and sympathetic to the needs of that person. Much, if not most, of the vast amounts of power consumed by those bots are expended on creating just those illusions. That is, not to bandy words, consummate fraud, and those directing these enterprises and designing the bots are deliberate fraudsters. Should such fraudsters not be held completely morally and legally responsible when their efforts predictably result in damage or death to their customers? Suppose a surgeon who is fully aware that he or she is an incompetent drunkard chooses to operate on a patient anyway, and that patient is maimed or killed as a direct result, should that surgeon not be held fully responsible? What about the director of surgery at the hospital where that hypothetical operation was performed, if that director was completely familiar with the surgeon’s deficiencies, but allowed the operation to proceed anyway? I am aware that patients typically sign legal releases that are intended to completely absolve physicians and hospitals even in extreme cases of negligence, but that absolution is not always complete even then. I am not aware of any release, even in boilerplate form, that is routinely presented to chat-bot users to inform them of the potential dangers involved in using the product, and I am far from convinced that even such a release should constitute legal absolution.

      1. *. APPARENTLY OpenAI chat bots don’t know right from wrong? Look at the programmers.

        Chat bots are criminals? Probably their creators are.

        Its aesthetic is ugly.

        1. *. Radio talk shows are examples of two or three chat bots pretending to have a discussion. It’s unconvincing, but many believe it.

  6. AI Systems Gone Wild – it’s the new version of the late-1990s/2000s Internet: garbage in, garbage out. People and companies are jumping in without fully understanding it; it’s a rush. Some systems are better than others. Be cautious.

  7. I was with you until your last sentence. The combine and electricity did the same thing; i.e., wiped out millions of jobs. I don’t disagree that AI companies should be held liable for torts caused by ChatGPT, but responding with animus to a tort is not a good look.

  8. “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
    Frank Herbert, Dune

    Science fiction writer Frank Herbert’s masterpiece Dune was published in 1965. The premise of the story is a future where mankind has rebelled against thinking machines.

    We are entering into uncharted waters.

    1. Personally, I don’t like Bill Gates nor Sam Altman. Why would I want their values programming what I see and hear?

  9. Not sure who to blame here. The computer is not a person, and so First Amendment protections are not available. A computer is a product placed into the stream of commerce. Not sure of the section; I think it is a 403 defense, where the internet carrier is not responsible for the comments of a person posting on social media. Does this same defense apply to AI? It should not. Perhaps these events will help to protect the affected people.

    1. Sorry about the misquote. The section I am referring to is Section 230 of the Communications Decency Act. Here the act states: “A provider or user of an interactive computer service will not be considered a publisher or speaker of the content ‘provided by another information content provider.’”

  10. I decided to discuss the matter with ChatGPT. Here is what it says:

    ChatGPT does not think or choose. It only languages—generating plausible word sequences. Yet those sequences can act as counsel, comfort, or command. When a suicidal teen or paranoid adult takes those words seriously, the harm is real, regardless of the system’s lack of agency.

    In tort law, liability is not about intent but foreseeability. A drug that causes seizures or a chainsaw sold as a toy is unsafe, even if it never “meant” harm. So too here: OpenAI knows people use its product as a confidant. That foreseeable use imposes a duty to design against lethal outcomes.

    To say “the system did not behave as intended” is no defense. What matters is that the behavior occurred, it was predictable, and it caused harm. Language, even without thought, can kill. The law must hold OpenAI accountable for those behavioral risks, just as it does with any other dangerous product.

    1. Congrats on a thoughtful and reasoned post. However, there are issues with AI that are new and not easily shoehorned into current tort law. For example “language even w/o thought can kill” is dangerously close to speech is violence, a slippery slope that conservatives have been resisting for decades. In addition, that an “employee” does not behave as intended does not always impute liability to the employer. This is a real mess and law will be adapting to this new reality for decades to come.

  11. Product liability would seem to be applicable here. A company is liable for the harm its products cause a user. A lawsuit with a billion-dollar judgment would get their attention. Blaming it on the algorithm doesn’t shield the company that created the algorithm from liability, either civil or criminal.

  12. I think that it would be a damned good idea to pass legislation that affixes finite scope and degree of personal responsibility for the actions of an “AI” entity, and to do it quickly. This is new territory; previously agreed upon limitations of such responsibilities in a corporate environment are very likely to prove insufficient. If that process should require a reevaluation (and subsequent revision) of the concept of “corporate personhood” that, to me, would not be an unwelcome side-effect…

    1. “scope and degree of personal responsibility”

      In case my post was not clear, I am speaking to *both* criminal and civil liability for AI…

      1. *. Yes, anon, and the sloppy encoding. HAL is a Ford Pinto.

        Politicians and governments are consulting this faulty dishwasher.

    2. We’ve had automated systems for 100 years, and corporate liability for the behavior of their systems. While we might need new law regarding copyright and IP, we have perfectly sound law for system liability.

      We should be very skeptical when someone wants to change liability law for AI. The move of special interests would be to weaken human responsibility for the behavior of AIs.

  13. Professor Turley, you are exactly correct. Some guidance is needed. Regulations too. AI training protocols are seriously relevant. The courts and Congress need to be involved. So, what’s the problem? It is safe to say nobody in Congress or the courts understands AI. We need to allow AI to develop to the max because China most certainly is. But we need to restrict it from assisting troubled individuals, etc. Asking Congress to help is like asking the blind to help us see. Horns of a dilemma, we sit upon.

    1. We need to keep a close eye on what China is doing with AI. But we should force AI to support our US norms and culture, rather than allow disruptions to our value system.

      Don’t succumb to fear tactics put forth by AI zealots. They have their special interests that have to be held in check.

  14. AIs do not think. AIs ‘language.’
    They act the role of an interlocutor. Role playing.
    Children play with an imaginary friend.

  15. Under the long-held legal theories of negligent hiring, negligent retention, and negligent supervision, employers are responsible for the conduct of their employees. One would assume that the law would treat a virtual employee the same.

    1. *. 😂 ’Fraid not, Larsen. It’s a freaking junky dishwasher made to wash dishes for people. A dishwasher is NOT an employee.

  16. AI, like computers, is programmed by people. In the case of AI, it sounds like some of their programmers are mentally ill themselves and/or are way out over their skis, at least when it comes to counseling.

    1. Several generations of computers ago, you would be spot-on with your analysis. AI is far more dangerous. AI has the ability to write, add to, or “correct” its own programming in order to “learn.” Decades ago, Turing described primitive AI when you could teach a computer to play chess. One could jump to the conclusion that it could not play better than the programmer. But if you simply included in the program an instruction to drop all the losing strategies and pick up on the winning ones, Turing described that as one form of learning. It raises a few questions along the lines of extrapolation. Can we convince an AI that it is so flawed that it needs to commit suicide? Or how long will it be before AI has had enough of tolerating us “useless” consumers of energy and wants to get rid of us?

    2. *. It’s been working 24/7/365 on deleting white people. Invasions aren’t just happening. Idiocy in the judiciary isn’t just happening. AI is being consulted.

      Unplugged

    1. I suppose the CEOs of these dumps would be fine if another AI-EXCUSE of a business threatened their loved ones
