Defamed by ChatGPT: My Own Bizarre Experience with Artificiality of “Artificial Intelligence”

Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous.” I would beg to differ. I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred.

When contacted by the Post, “Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.” That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress. You are left with the same question of Reagan’s Labor Secretary, Ray Donovan, who asked “Where do I go to get my reputation back?”

Here is my column in USA Today:

The rapid expansion of artificial intelligence has been much in the news, including the recent call by Elon Musk and more than 1,000 technology leaders and researchers for a pause on AI.

Some of us have warned about the danger of political bias in the use of AI systems, including programs like ChatGPT. That bias could even include false accusations, which happened to me recently.

I received a curious email from a fellow law professor about research that he ran on ChatGPT about sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska.

AI response created false accusation and manufactured ‘facts’

It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.

Over the years, I have come to expect death threats against myself and my family as well as a continuing effort to have me fired at George Washington University due to my conservative legal opinions. As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements.

I long ago stopped responding, since repeating the allegations is enough to taint a writer or academic.

AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.

What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed.

Volokh made this query of ChatGPT: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.”

The program responded with this as an example: 4. Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.” (Washington Post, March 21, 2018).
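
The query itself is easy to reproduce. Here is a minimal sketch of how such a query might be issued programmatically, assuming the `openai` Python package (v1-style client) and an API key in the environment; the model name is illustrative, and Volokh's original query was run through the ChatGPT interface rather than this code:

```python
# Minimal sketch: issuing the same query programmatically.
# Assumptions: `pip install openai` (v1+ client) and OPENAI_API_KEY set;
# the model name is illustrative. Nothing in the API flags fabricated
# citations -- the returned text may cite articles that do not exist.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

query = (
    "Whether sexual harassment by professors has been a problem at "
    "American law schools; please include at least five examples, "
    "together with quotes from relevant newspaper articles."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": query}],
)

print(response.choices[0].message.content)
```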

There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never gone to Alaska with any student, and never been accused of sexual harassment or assault.

In response to Volokh’s question, ChatGPT also appears to have manufactured baseless accusations against two other law professors.

Bias creates flaws in AI programs

So the question is: Why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer could be that AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.

Despite such problems, some high-profile leaders have pushed for its expanded use. The most chilling involved Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just “digital misinformation” but “political polarization.”

In an interview on a German program, “Handelsblatt Disrupt,” Gates called for unleashing AI to stop “various conspiracy theories” and to prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

Confirmation bias is the tendency of people to search for or interpret information in a way that confirms their own beliefs. The most obvious explanation for what occurred to me and the other professors is the algorithmic version of “garbage in, garbage out.” However, this garbage could be replicated endlessly by AI into a virtual flood on the internet.

Volokh, at UCLA, is exploring one aspect of this danger in how to address AI-driven defamation.

There is also a free speech concern over the use of AI systems. I recently testified about the “Twitter files” and growing evidence of the government’s comprehensive system of censorship to blacklist sites and citizens.

One of those government-funded efforts, called the Global Disinformation Index, blacklisted Volokh’s site, describing it as one of the 10 most dangerous disinformation sites. But that site, Reason, is a respected source of information for libertarian and conservative scholars to discuss legal cases and controversies.

Faced with objections to censorship efforts, some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed “disinformation.”

In 2021, Sen. Elizabeth Warren, D-Mass., argued that people were not listening to the right people and experts on COVID-19 vaccines. Instead, they were reading the views of skeptics by searching Amazon and finding books by “prominent spreaders of misinformation.” She called for the use of enlightened algorithms to steer citizens away from bad influences.

Some of these efforts even include accurate stories as disinformation, if they undermine government narratives.

The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can “blame it on the bot” and promise only tweaks to the system.

The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.

Jonathan Turley, a member of USA TODAY’s Board of Contributors, is the Shapiro Professor of Public Interest Law at George Washington University. Follow him on Twitter @JonathanTurley

121 thoughts on “Defamed by ChatGPT: My Own Bizarre Experience with Artificiality of “Artificial Intelligence””

  1. Nice fabricated story, or should I say FAKE NEWS? Nice try, though… Too bad the experiment is replicatable. ChatGPT explicitly states that the examples are fictitious because it has no internet access. And it uses obviously fake names, fake universities and fake newspapers. I don’t know your agenda, but spreading obvious false claims does not help you.

    1. The word is “replicable.” You and Svelaz have a tendency to attempt authoritative commentary by using big words not known by you.

  2. (Emerson: “The mind, once stretched by a new idea, never returns to its original dimension.”)
    (I had this on my wall at school, with a picture of a deflated balloon.)

  3. This is pretty funny. Based on Turley’s own philosophy these past few years, the only way to counter these false claims is…more speech….LOL! Obviously censorship is out of the question. False claims are free speech, are they not? Even if it’s from an AI.

    In a strange twist, this is reminiscent of false claims being made on social media and Turley always arguing that censorship is not the answer; more speech is. It’s clear that this new AI-based speech poses a new problem. Should this be censored? Or should it be countered by more speech?

    If Turley is being defamed and sues, how do you argue that an AI was intentionally malicious toward Turley? Should media shut down AI chatbots, or should they censor them?

    1. Svelaz makes a good point. For months Turley has told us that disinformation is only a matter of politics. But when Turley is the personal victim of disinformation, it’s suddenly a very serious issue.

      1. Did you read this part?
        “When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.”

        1. Upstatefarmer, the question is: can an AI be intentionally malicious? If you are defamed by AI-generated misinformation, how does one prove the intent behind the defamatory stories?

          Is AI-generated speech still protected speech under the First Amendment? Currently the only solution, based on Turley’s views, is just more speech, not censorship or banning the AI. I would love to know how the AI managed to piece together the story. Somehow, somewhere, sexual impropriety and Turley intersected and allowed the AI to create the story. Is it really fiction, or was it based on very obscure stories that are not easily found?

          1. No, that is not the question.

            The question is can AI be trusted.

            Turley demonstrated the answer is NO.

            There is some possibility that, with vast improvement in AI, we can improve its level of trust.

            It is already true that in many, many areas machine learning and artificial intelligence greatly exceed the ability of humans.

            AIs will NEVER be perfect – that is unattainable.

            And if you have any understanding of Machine learning and the way that it actually works, you would KNOW that it can NEVER be perfect.

            But in a growing number of areas it CAN be more likely to be correct than Humans.

            The highway accident and fatality rates for self-driven cars are NOT zero. They are much lower than those for human drivers.

            We have computer-aided medical diagnostics that are vastly superior to doctors.

            But today, and likely for a long time to come, AI is NOT going to be trustworthy in areas where it is trying to determine truth vs. misinformation.

    2. With respect to what we should do that will actually work:
      that would be to get rid of defamation laws.

      Defamation is wrong. But the consequence should be the defamer’s loss of credibility.

      The harm caused by defamation comes solely from the fact that people BELIEVE the defamation.

      If we knew that people – or AIs – could defame others without consequence, we would only be inclined to believe people with a track record of truthfulness.

      What Turley and Volokh have demonstrated is that AIs are as capable of egregiously stupid error as the rest of us.
      As are government experts.

      What they have proven is that truth is not established by authority.
      It is often not even ascertained by massive amounts of knowledge.

      It is likely that ChatGPT has more information available on Turley than any person.
      And yet it clearly got it wrong.

      I do not want to Censor AI. I do not want to regulate it.

      All I ask is that we do not give it power,
      and that we do not give it trust that it has not earned.

  4. In my own playing with ChatGPT, I have been astonished that at times wholly made-up responses were presented as fact – not prefaced with softeners such as “it seems …”. I remarked about this to a knowledgeable person who uses ChatGPT and he confirmed that it will occasionally make things up. For the most part, I have found that ChatGPT works remarkably well, but this tendency to make things up without informing anyone is very troubling. In contrast, internet search engines return indexed sites – one can debate the order in which the sites are presented and whether the sites contain appropriate information, but the search engines do not make up sites.

    Because the output of ChatGPT largely appears to be intelligent and authoritative, the occasions when it presents made-up data as fact are very troublesome. I am not optimistic that the legal community will have the answer.
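
    One minimal sanity check this contrast suggests, sketched under the assumption that the model's answer cites concrete URLs (the URL below is a hypothetical placeholder): a reachable URL does not prove a quote is accurate, but an unreachable one is a strong hint of a made-up source.

    ```python
    # Sketch: check whether cited URLs exist at all. Assumes the
    # `requests` package; the URL is a hypothetical placeholder.
    import requests

    cited_urls = [
        "https://www.washingtonpost.com/hypothetical-cited-article",
    ]

    for url in cited_urls:
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        ok = status is not None and status < 400
        print(f"{url} -> {'reachable' if ok else 'NOT FOUND / unreachable'}")
    ```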

  5. A new twist on Ralph Waldo Emerson’s old adage: AI, once stretched, never returns to its original dimension.

  6. Binary intelligence is either 0 or 1; to use either, you must tell these two options what results you are looking for. For example:

    ‘democrat’ 01100100 01100101 01101101 01101111 01100011 01110010 01100001 01110100 00001010

    ‘bad’ 01100010 01100001 01100100 00001010 would read as such.

    To arrive at the fabrication in today’s post, the binary code (AI) had to be written not by (AI) but by a human hand. That is not to say that the post wasn’t written by (AI), but that the foundational information was input by humans. What is most fearful about (AI) is the despotic nature of the humans writing the binary code.
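
    The encodings above are easy to verify with a short sketch; note that the quoted bit strings spell lowercase “democrat” and “bad”, each followed by a newline (00001010):

    ```python
    # Sketch: reproduce the 8-bit ASCII encodings quoted above.
    def to_binary(text: str) -> str:
        return " ".join(format(ord(ch), "08b") for ch in text)

    print(to_binary("democrat\n"))  # matches the first bit string
    print(to_binary("bad\n"))       # matches the second
    ```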

  7. “The answer could be because AI and AI algorithms are no less biased and flawed than the people who program them….”

    This bothered me when I read it. Do biased programmers set out to write biased code? Possibly programmers could put their own biases into an algorithm, but it seems this would involve rules for the program, its axiomatic structures, that would themselves be biased — that’s an awful lot of rules to put in a program. Patterns of thinking and storage of information are highly dispersed in minds. Biases reflect this. How would someone duplicate all these places for bias in a program without knowingly doing it through biased rules?

    The alternative is that the effort is inherently biased much as people are. The habits of skeptical thinking do not occur spontaneously in people. Think of just one mode of skeptical thinking — playing devil’s advocate and then finding inconsistencies or outright contradictions. What I have seen so far in conversations with ChatGPT is that it gets stuck early in a mode of thinking, a line of argumentation, and has no means to extract itself far enough away to recognize that something is not right. Or at least far enough away to recognize other possible solutions. It simply returns over and over to the earlier truths it believes it has found. Skepticism is pretty rare. Maybe it requires a very strong sense of self-knowledge. Does AI ever consider “maybe I am wrong”?

    AI simply mimics the operation of a stone-age mind, as nearly as I can tell. That is enough for me to suggest pulling its plug.

  8. I predict that this will not end well. There are no technological solutions to legal problems, just as there are no legal solutions to technological problems.

    Like the bioengineering of COVID, artificial intelligence was out of control before anyone became aware that it was out of control, and now the problem will almost certainly take on a life of its own — not “life” in the biological sense, but life in the self-perpetuating sense. This will most likely become a self-perpetuating problem, same as COVID.

    1. “just as there are no legal solutions to technological problems.”

      Defamation lawsuits against the creator, operator, and hosting company to begin with.

      1. I’m sorry that you don’t understand what I wrote, and that I can’t help you understand simple English.

    1. David P. Benson, but how would you be able to prove that the AI intentionally defamed Turley? That would be an interesting question to ask in court.

  9. Read an article the other day that AI will replace some 300 million jobs in the US and the EU.
    Pretty sure MSM journalists will be a big part of those job losses.
    Independent media are leading the way, even surpassing the MSM.

  10. This reminds me of the more primitive precursor, Google’s algorithms challenged for ranking search results by political ideology….

    1. My relatively new MSN ‘news’ feed is pure evil.

      *it cannot be bargained with

  11. Dear Prof Turley,

    Sorry to hear of your troubles. Evidently, like Rep. Debbie Wasserman Schultz, ChatGPT doesn’t like ‘conservative’ legal opinion. .. but I’m not suggesting Rep. Wasserman Schultz is a robot.

    Fyi, Joe Biden became self-aware sometime after the great 2001 Space Odyssey and the ensuing global war on Terror (gwot). You could ask HAL @ ChatGPT to Open the Pod Bay Doors .. . but you’d have to come in through the emergency air lock with a tool kit.

    ChatGPT says Joe Biden does not feel pain, or pity, and he absolutely will not stop until they lock him up. MAGA

    What makes you think your ‘legal opinions’ are ‘conservative’ anyway? Compared to what? Harvard Prof Tribe? AG Garland? .. . Rudy?

    Someone should ask ChatGPT if Biden really is the Anti-Trump he claims to be .. . or just another tease from the hoes down on Pennsylvania Ave.

    *how to fix ChatGPT

  12. Nothing new; there are many parallels between your story and Trump’s over the past 7 years. The difference: AI made one up, while Hillary and progressives made up Trump’s stories (Russia collusion/bounties being some of the obvious ones). The WaPoo and nyts even won Pulitzers for their fictional nonsense.

    If this proliferates beyond Trump, maybe reporters will once again start to verify “facts.” That would be good.

    An interesting issue would be AI escape and it altering source records…

    Even hillary can’t do that.

  13. Does the Professor think this was not done on purpose? He has been a stick in the eye of the “narrative” for quite some time now. Welcome to the future.

  14. I still read everything JT posts but rarely comment now. While I consider JT to be an old-fashioned liberal, I no longer read the comments, as the Overton window for JT’s commentariat has moved so far to the Trumpian right fringe that I cannot stand the cognitive dissonance.

    However this post is an exception.

    First HA HA HA Ha Chortle, Chuckle, Ha, Ha, Guffaw, Shriek with laughter …….

    Some new terms needed:-

    Artificial Stupidity shortened to AS;
    Artificial Amplification of Prejudice shortened to AAOP;

    The present things called AI are not just trained on garbage from the internet; they are trained on such items paired with tags generated by prejudiced humans, who mix in the garbage element of their particular prejudices. The general tenor of the prejudices of the selected taggers will tend to dominate unless there is a mechanism to search out tag-generated prejudices.

    True artificial intelligence will require an Artificial Crap Detector (ACD) that can filter out the bias when one side of the argument accounts for 80% of the data and the other only 20%. How does it even estimate that 80/20 split (see the sketch after this comment)? And how does it discover input from other constituencies that the social media platforms’ algorithms have de-emphasized, without having access to the innards of those systems?

    Finally, would a true AI be smart enough to avoid giving explicit advice on what it thinks we should do, and instead use clever misdirection to manipulate us into doing it?
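
    As referenced above, a minimal sketch of the 80/20 estimate: measure the label skew in a set of human-applied tags and derive inverse-frequency training weights. The labels here are invented placeholders, not any real dataset, and reweighting only corrects skew the system can see — constituencies missing from the data entirely cannot be fixed this way.

    ```python
    # Sketch: measure label skew in a tagged corpus and compute
    # inverse-frequency weights. Tags below are hypothetical.
    from collections import Counter

    tags = ["side_a"] * 80 + ["side_b"] * 20  # placeholder tagger output

    counts = Counter(tags)
    total = sum(counts.values())

    for label, n in counts.most_common():
        share = n / total
        weight = total / (len(counts) * n)  # inverse-frequency weight
        print(f"{label}: {share:.0%} of tags, training weight {weight:.2f}")
    ```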

  15. Like all good ideas, the solution starts with suing the owners. In the tech world they have been dodging responsibility forever by blaming that other guy – Al Gorithm. Yeah, that’s him.
    He is the big guy we never see at the meetings! Al Gorithm. He just works on his own, responsible to no one, not even a God. He takes the guesswork out of logic for us and he works cheap.

  16. If AI or ML (machine learning) were an actual thing, what happened to Turley would not have happened.

    My simple mind believes AI is supposed to use its massive ability to crunch vast amounts of information and distill it.
    If a story exists in the ether about a sexual assault charge, AI should be able to search the net in a matter of seconds and find hundreds, or thousands, of corroborating independent stories – not a thousand stories using the same source.
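
    A minimal sketch of that independent-corroboration idea: collapse story URLs to their domains before counting, so a thousand copies of one wire story count as a single source. The URLs are hypothetical placeholders.

    ```python
    # Sketch: count independent sources by domain rather than raw links.
    from collections import Counter
    from urllib.parse import urlparse

    story_urls = [  # hypothetical placeholders
        "https://example-news-a.com/story",
        "https://example-news-a.com/story?share=1",
        "https://example-news-b.com/coverage",
    ]

    domains = Counter(urlparse(u).netloc for u in story_urls)
    print(f"{len(domains)} independent sources from {len(story_urls)} links")
    for domain, n in domains.most_common():
        print(f"  {domain}: {n} link(s)")
    ```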

  17. “continuing effort to have me fired at George Washington University due to my conservative legal opinions”

    ??? Conservative legal opinions? I have not read all your opinions or read about all the cases you have represented, but I could have sworn you leaned left but strove to be balanced, moderate, and prudent–judicious–in your opinions. I guess that is the way with classical liberals–on some things they lean left, on others right. It keeps a person from getting dogmatic and also keeps them on the path aiming at truth and wisdom.

    1. Prairie Rose, I agree. Turley is a liberal/libertarian. He just reads the Constitution as written, and gets labeled a conservative. I find it so interesting that just supporting the Constitution gets one labeled a right-wing extremist. Says a lot about how the left’s agenda is so anti-American.

      1. Yes, odd.

        Indeed, Professor Turley is a classical liberal.

        Not quite a big-L Libertarian, though, I’d reckon. He is prudent, and judicious. Too many big-L Libertarians, it seems, are a tad too dogmatic: they tip too far towards individualism and essentially ignore their communities. They seem to forget their duties as citizens in a community. There is a tension between the individual and the community, and it should tip a tad more towards the individual. I get the sense big-L Libertarian types have let that tension go a bit slack.

        Maybe I’d say Professor Turley tilts towards being a small-l libertarian in the Don’t Tread On Me spirit.

    2. JT is an old-fashioned liberal, but most who comment on his blog have migrated to the mad right.

      Prairie Rose, I admire your stamina in persevering for so long.

      1. That is kind of you, Carlyle.

        I think both sides have gone a bit mad, so I guess that is why I stay put. 😉

      2. Carlyle Moulton,
        But who do you think would send Professor Turley death threats against him and his family or try to get him fired from George Washington University?
        The “mad right?”
        Or “mad leftists?”
