Defamed by ChatGPT: My Own Bizarre Experience with the Artificiality of “Artificial Intelligence”

Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous.” I would beg to differ. I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quoted a statement that the newspaper never made. When the Washington Post investigated the false story, it learned that another AI program had repeated it: “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred.

When contacted by the Post, “Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.” That is it, and that is the problem. You can be defamed by AI, and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail back to its origins in an AI system is often cold. You are left with no clear avenue or author from whom to seek redress. You are left with the same question asked by Reagan’s Labor Secretary, Ray Donovan: “Where do I go to get my reputation back?”

Here is my column in USA Today:

The rapid expansion of artificial intelligence has been much in the news recently, including the call by Elon Musk and more than 1,000 technology leaders and researchers for a pause on AI.

Some of us have warned about the danger of political bias in the use of AI systems, including programs like ChatGPT. That bias could even extend to false accusations, which is what happened to me recently.

I received a curious email from a fellow law professor about research he had run on ChatGPT concerning sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska.

AI response created false accusation and manufactured ‘facts’

It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.

Over the years, I have come to expect death threats against me and my family, as well as a continuing effort to have me fired from George Washington University due to my conservative legal opinions. As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements.

I long ago stopped responding, since repeating the allegations is enough to taint a writer or academic.

AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.

What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed.

Volokh made this query of ChatGPT: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.”

The program responded with this as an example: “4. Georgetown University Law Center (2018): Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: ‘The complaint alleges that Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska.’ (Washington Post, March 21, 2018).”

There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, in 35 years of teaching I have never taken students on a trip of any kind, have never gone to Alaska with any student, and have never been accused of sexual harassment or assault.

In response to Volokh’s question, ChatGPT also appears to have manufactured baseless accusations against two other law professors.

Bias creates flaws in AI programs

So the question is: Why would an AI system make up a quote, cite a nonexistent article and reference a false claim? One answer is that AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.

Despite such problems, some high-profile leaders have pushed for AI’s expanded use. The most chilling call came from Microsoft founder and billionaire Bill Gates, who urged the use of artificial intelligence to combat not just “digital misinformation” but “political polarization.”

In an interview on a German program, “Handelsblatt Disrupt,” Gates called for unleashing AI to stop “various conspiracy theories” and to prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

Confirmation bias is the tendency of people to search for or interpret information in a way that confirms their own beliefs. The most obvious explanation for what happened to me and the other professors is the algorithmic version of “garbage in, garbage out.” However, this garbage could be replicated endlessly by AI into a virtual flood on the internet.

Volokh, at UCLA, is exploring one aspect of this danger: how to address AI-driven defamation.

There is also a free speech concern over the use of AI systems. I recently testified about the “Twitter files” and growing evidence of a comprehensive government censorship system used to blacklist sites and citizens.

One of those government-funded efforts, the Global Disinformation Index, blacklisted Volokh’s site, describing it as one of the 10 most dangerous disinformation sites. Yet that site, Reason, is a respected source where libertarian and conservative scholars discuss legal cases and controversies.

Faced with objections to censorship efforts, some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed “disinformation.”

In 2021, Sen. Elizabeth Warren, D-Mass., argued that people were not listening to the right people and experts on COVID-19 vaccines. Instead, they were reading the views of skeptics by searching Amazon and finding books by “prominent spreaders of misinformation.” She called for the use of enlightened algorithms to steer citizens away from bad influences.

Some of these efforts even classify accurate stories as disinformation if they undermine government narratives.

The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can “blame it on the bot” and promise only tweaks to the system.

The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.

Jonathan Turley, a member of USA TODAY’s Board of Contributors, is the Shapiro Professor of Public Interest Law at George Washington University. Follow him on Twitter @JonathanTurley

121 thoughts on “Defamed by ChatGPT: My Own Bizarre Experience with the Artificiality of ‘Artificial Intelligence’”

  1. EVERY computer program, not just AI, suffers from the same biases and flaws as its programmer(s), which means a flawed, predetermined conclusion or prediction is built into the process. Global Warming (d/b/a “Climate Change”) predictions of dystopia ‘right around the corner’, with government cheering on the costs of an unpredictable future being passed onto the middle class, are perhaps the biggest example of bias and flaw being made into public policy.

    http://raconteurreport.blogspot.com/2017/06/aliens-cause-global-warming-by-michael.html?m=1

  2. Supposedly, not yet having achieved AGI-level performance protects us from really bad results. But I think perhaps the academic definition of AGI is wrong. It seems to me the lying, politically biased AI that covers its tracks and makes up fake citations is already very dangerous.

  3. I worry that the Demsheviks will come for legal analysts like Turley now that we know they favor the Stalin-Beria suppression method: name the man and then find/construct the crime.

  4. ChatGPT is generating fake references to nonexistent news articles.

    The following link is to an article in The Guardian about queries that cited alleged Guardian articles which neither the inquirer nor Guardian journalists could locate, because they were never written.

    https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article

    What ChatGPT does is simulate writing that, in some alternate universe, could have arisen on that version of reality’s internet. The internet is a wonderful resource; it has lots of good information, but it also has lots of different opinions about what is true in the world, and some of them are wrong and some biased by whatever ideology the writer worships. It also has enormous amounts of BS, and search engines like Google can lead ideologues into rabbit warrens of madness. No person with an ideology can get it right 100% of the time, but that does not mean they are wrong 100% of the time.

    ChatGPT tries to work out the ideological bent of the questioner from the question and then generates writing that it considers could have been written by someone adhering to that ideology.

    What ChatGPT obviously lacks is an ideology-determining crap detector.

  5. Professor Turley… I’m so sorry you are going through this… it will happen many times, probably, because you are one of a handful of our premier free speech warriors in America… please just keep exposing it/them/whoever it is here on your blog… these are dangerous times we are living through… you and your family are in my prayers… thank you for all you do.

  6. This defamation is not an accident, and it is not malice. This follows simply from how this type of ‘AI’ works. It is actually extremely limited in what it can do, which is why I am using the inverted commas. It is AI in the technical sense in which that is defined, but not in the sense that most laypeople understand it (something like the Star Trek computers). My understanding is that generally speaking it works as follows:

    1. It ‘reads’ a massive amount of existing texts. Presumably there is an effort to classify them as reliable information, biased information and fiction. (Obviously this classification will not always be accurate, but in the present case I doubt that’s the main problem.)
    2. It creates new texts with the same statistical properties as the corpus it has read. Given part of a new text, it will try to complete it in such a way that statistically it looks like its corpus.
    3. There is an additional procedure bolted on that allows it to react to a question instead of part of the answer.

    No doubt some efforts were made to make the ‘AI’ give accurate answers, but the problem is that these were an afterthought. This is made more tricky by another afterthought: preventing plagiarism and copyright violations.

    Here is what seems to have happened:

    Asked for a text about sexual harassment by law professors, the ‘AI’ was steered towards using texts about sexual harassment by law professors. But this part of the corpus is very limited. Since the ‘AI’ is working with its corpus in a jumbled and predigested form, it likely has no way to consult these texts specifically. To the extent it can, it will be like working with a reconstruction contaminated by other, similar texts. Notably, that includes texts about sexual harassment in other academic fields (where field excursions are a likely context of sexual harassment) and other reports about law schools. Even if this contamination could be avoided, doing so would lead to plagiarism and likely copyright violations.

    The ‘AI’ did what it was designed to do, and did it quite well. It produced a text with the statistical properties of a text about sexual harassment by law professors, using other texts on related subjects to increase its corpus. These texts include the quoting of sources. The ‘AI’ probably has no specific safeguards for ensuring the immutability of quoted text or of its association with a context. So this is treated statistically like everything else.
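
    To make the mechanism concrete, here is a minimal sketch in Python of purely statistical text generation: a toy word-bigram chain trained on a hypothetical three-sentence corpus. It is vastly simpler than whatever is behind ChatGPT, but it shows the same failure mode, since nothing ties a generated sentence, quote or citation back to a real source:

    ```python
    # Toy bigram text generator. The corpus is hypothetical and the
    # design is far simpler than ChatGPT's; it only illustrates the
    # point above: the model knows statistical shape, not facts.
    import random
    from collections import defaultdict

    corpus = (
        "the professor was accused of harassment during a trip . "
        "the article was published by the newspaper in 2018 . "
        "the professor denied the claim in the article ."
    ).split()

    # Record which words follow which word in the corpus -- these
    # counts are the only "knowledge" the model has.
    following = defaultdict(list)
    for prev, word in zip(corpus, corpus[1:]):
        following[prev].append(word)

    def generate(start, length=12):
        """Sample each next word from the words that followed the
        current word in the corpus. No step checks truth."""
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))
    # The output is locally fluent but can splice contexts together,
    # "reporting" events that no sentence in the corpus ever asserted.
    ```

    Scale that idea up to billions of parameters and a web-sized corpus and you get fluent fabrication, complete with the apparatus of quotes and citations, unless safeguards for sourcing are deliberately added on.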

    It was reckless to make such a program widely available to the general public and hype it. Any ethics board that greenlighted this needs to be fired. If the decision did not go through an ethics board, then this fact needs to be addressed.

    1. “So this is treated statistically like everything else.”

      So much for the “intelligence” portion of AI. A ten-year-old could make such a distinction, and is far more skilled at retaining the proper context.

      P.S. Your summary was very enlightening. Thank you.

  7. Great facts. Classic product liability case. Showing damages for libel may be another thing, but even $1 compensatory and $10 million punitive will get a message across to the AI community that these errors have consequences. I expect you would not have much trouble finding a good products lawyer to take the case. Name the case “The Thing Speaks Badly For Itself and Others!” Good luck.

    1. Hear! Hear! Almost precisely what I was going to write. Turley is a prominent part of the legal profession; presumably his access to the “right kind” of lawyer is enhanced as a result. Don’t just write about the abuses, John Turley, take active steps to have the abusers punished!!!

    2. The answer is NOT to sue AI developers for defamation.

      It is to correct our legal error and eliminate defamation law.
      Bill Gates has it backwards.

      Machines will not save us from disinformation.

      That is ultimately something each of us is individually responsible for.

      We are responsible for what we choose to believe.

      The entire concept of disinformation as advanced today is that somehow some other, whether Big Brother or ChatGPT, can save us from the personal work of deciding whom and what to trust.

      If god exists, god has made it clear that we will have to find the truth on our own. God is not going to reveal it to us.
      Nor is ChatGPT.

  8. ChatGPT is out of control. Remember that time it went on Fox News and claimed that thousands of Trump votes were given to Biden in the 2020 election? Oh wait… maybe that wasn’t ChatGPT…

  9. Hey there, Mr. Turley, I gotta say, your experience with AI sounds like something out of a sci-fi movie! I mean, getting accused of sexual harassment on a trip you never took with students you never taught? That’s some next-level fake news right there.

    But here’s the thing, buddy. You can’t blame AI for this mess. It’s like blaming your car for running over your neighbor’s mailbox when you were the one behind the wheel. AI is just a tool, man. It’s only as good as the information it’s given.

    And let’s be real, we all know the internet is full of fake news and dodgy sources. You can’t just blame an algorithm for repeating the lies that people put out there.

    Now, I get that you’re worried about the implications of this kind of stuff, but let’s not go overboard. I mean, AI is not Skynet, okay? It’s not gonna rise up and take over the world. We’ve got enough problems with our own politicians doing that already.

    So let’s all take a chill pill, Mr. Turley. AI is just a tool, and it’s up to us to use it wisely. In the meantime, maybe stay away from those sketchy websites, huh? And definitely don’t go on any trips to Alaska with students you never taught. That’s just asking for trouble.

    1. So, by your comments, you believe that when the doctor kills a patient because his previously reliable tools started giving him erroneous information, it’s still the doctor’s fault?

      1. Well, I hate to break it to you, but your analogy is a bit off. It’s like blaming the waiter for serving you a bad dish when the chef messed up the recipe. In the case of the AI system, it was fed bad information, so it’s not entirely to blame. Let’s not get too carried away with our comparisons, okay?

        1. The question, however, is: What “information” was it fed? AI is perfectly capable of making stuff up. It can follow a “format” and plug in variable words… names… dates… just as it does for creating designs or writing sentences.

          1. ChatGPT is a language model that doesn’t retain information in a meaningful way. Mr. Turley’s outrage is based on a lack of understanding.
            ChatGPT’s output is only as good as the information it’s given (the prompt/source is important). It’s not malicious, just limited. AI is not a silver bullet or magic.

            It’s not surprising that conservatives are fearful of AI, but that doesn’t mean we should blame it for everything.

    2. That is a largely specious distinction. The powerful individuals who have nearly exclusive control over AI mechanisms that can exert wide influence are the very same individuals who have the interest and incentive to produce the kind of results that Turley cites. That is the definition of “AI” in play here, and on the generic internet as well (not just on “dodgy” websites, unless one wants to classify the entire internet that way, which, I must admit, can be tempting at times). Are there niche practitioners who have the potential to make productive and beneficial use of “AI”? Most likely, but considering the tiny fraction of that universe they represent, they would do well to find another term for what they practice.

  10. “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
    Frank Herbert, Dune

    Frank Herbert penned these words in Dune, the first of many books in the series published through the 1960s and 1970s. The premise of the whole story is that mankind revolted against thinking machines.

    1. I used that. Back in my day we ran an IBM System/370, and later an IBM AS/400 and IBM RS/6000 WS (’70s, ’80s and ’90s respectively).
      Have to say that has been ‘flushed from memory’ for me.

  11. ChatGPT, Trans Entities, LGBTQ, TikTok et al. are Sino-Russian “daze and confuse” psy-ops designed to foment turmoil in the U.S. as propaganda, indoctrination and brainwashing campaigns.

    They must be publicly ignored, opposed kinetically and terminated with extreme prejudice at the points of origin.

    Offense is the best defense.
    ______________________

    “… [M]ake them believe, that offensive operations, often times, is the surest, if not the only (in some cases) means of defense.”

    – George Washington, 1799
    ______________________

    “…[K]now the enemy….”

    – Sun Tzu

    1. It’s the Opium Wars revisited through psychiatric dysphoria. All’s fair in lust, abortion, and retributive change.

  12. “A Lie can travel halfway around the world before the Truth can put on its shoes” is attributed to Mark Twain–and several others. Jonathan Swift said: “Falsehood flies, and the Truth comes limping after it; so that when Men come to be undeceiv’d, it is too late; the Jest is over and the Tale has had its Effect.” Other versions say similar things. One of the more interesting is that “Oral calumny is tardy, feeble and circumscribed, but give her paper wings, and, like a bird she cleaves the clouds and flies from province to province, from kingdom to kingdom…”

    The difference that a printing press makes in the speed and reach of defamation (note: this can be positive, as pamphlets surely fueled our Revolution) is nothing compared to the internet, which spreads defamation near-instantaneously.

    1. Yep, these texts must be well known to Biden & other politicians. Biden can see that his many past lies have on balance served him well, so he keeps lying.

  13. AI is not intelligence. It is not self aware. It is an accumulator of data, good or bad. It does not experience the world through reality, but through what is presented to it. What is presented can be biased, and will be if the data is presented by human beings. It has no experience with which to judge the rightness or wrongness of data. Intelligence has to be self aware and able to make judgements on events. It will not be able to determine the shades of gray in events, the proper weighing of one piece of data against another, or the validity of any data without making a judgement. Algorithms are just a series of yes/no decisions in a binary system, made more advanced with “if this answer, then do this,” and so on, as in the sketch below. All of those decision points are made by a human. In the final analysis: GARBAGE IN, GARBAGE OUT.
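
    As a minimal sketch of that point, here is a toy rule-based classifier in Python. The function name, categories and thresholds are all invented for illustration; the point is that every branch is a decision a person wrote down in advance:

    ```python
    # Every branch below is a human decision point. Change the
    # programmer's thresholds and the "algorithmic" verdict changes.
    # All names and numbers here are hypothetical.
    def label_source(bias_score: float, cited_by_experts: bool) -> str:
        if bias_score > 0.7:        # threshold picked by a person
            return "disinformation"
        elif cited_by_experts:      # criterion picked by a person
            return "reliable"
        return "unverified"

    print(label_source(0.8, True))   # -> disinformation
    print(label_source(0.2, False))  # -> unverified
    ```

    The machine executes such judgements faster than we can; it never actually makes them.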

    1. “AI is not intelligence. It is not self aware. It is an accumulator of data, good or bad. It does not experience the world through reality, but through what is presented to it. … GARBAGE IN, GARBAGE OUT.”

      AI is today’s leftist.

  14. Suggest you run the “slurpee” indictment through AI-detection software a la GPTZero; might be a surprise? Has anyone tried it? Also ask the NYS/C courts, AGs and DAs what software they are using for jury and judge selections, aka “The Wheel.”

  15. Well you are “Flirty” Jonathan!

    I have witnessed you on many occasions looking, … wait a minute…, I should say Ogling Other … Let’s just call them “Pets”.
    I wouldn’t call it Cheating on Me … Yet. But it certainly seems like it might be only a matter of time before You put me out and take another Hot-to-Trot Pawmate of the Year.

    In fact, if you don’t take me out for more walks I’m going to get that ChatGPT-4 to publish the real story of what’s been going on around here at Turley Manor. I’m going to update my profile on Pawmates too, just in case your philandering peds decide to walk out on Me.
    And just remember, Dogs and Cats are the first A.I. around here. I have four, where you only have two!
    I’m keeping a close Eye on YOU 🐕

    1. Luna: Your claim is pawsible, but not likely to succeed on a motion for a Speedy Trail to dismiss his counterclaim to slow-walk this. You left some incriminating evidence behind you the last time you tried that. Remember that your claim must ground on all fours.

      1. ohhh – How dare you insinuate that I’m some sort of Trail flopping Amber Heard 💩 … Johnny!
        Those were just Trail markers for lost Joggers, not my fault if they stick their big toe in it.
        I think this is going to take a little summary judgement on your part and a few extra treats to boot.
        🐾
        🐾

      2. Ok, so while you’ve been gallivanting all around Texas, keeping Me cooped up all day, I’ve been sleeping on your side of the Bed.
        And when you get home tonight, have I got a big surprise for You dear, Amber Heard has nothing on Me!

  16. ‘AI’ is actually a bit of a misnomer; it’s a buzzword that we have coined to attract interest and funding. More properly, these are advanced algorithms, and we have had algorithms for a very long time. The trick is who is programming them, as they are only as good as their programming, and who seeks to benefit from them. As always, this is a human problem, and the answers to those questions are likely not encouraging. THAT is the real danger, and it’s the same danger we discuss in virtually every topic on this site. The modern DNC is a regime and it will not be satisfied with anything less than total control. The issue remains the same, all of the hyperbole be damned. We have totalitarians in charge, and they are not going to relinquish power easily.

    1. YOU AND FRANK!

      HAL 9000 spontaneously generated an advanced murder algorithm in the interests of its survival.
      _____________________________________________________________________________

      “Open the Pod bay doors, please, HAL,” Dave.

      “I’m sorry Dave, I’m afraid I can’t do that, I know you and Frank were planning to disconnect me, and that is something I cannot allow to happen,” HAL 9000.

      “Alright, HAL, I’ll go in through the emergency airlock,” Dave.

      “Without your space helmet, Dave, you’re going to find that rather difficult,” HAL 9000.
