Defamed by ChatGPT: My Own Bizarre Experience with Artificiality of “Artificial Intelligence”

Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous.” I would beg to differ. I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quoted a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred.

When contacted by the Post, “Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.” That is it, and that is the problem. You can be defamed by AI, and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail back to its origins in an AI system is often cold. You are left with no clear avenue or author for seeking redress, and with the same question asked by Reagan’s Labor Secretary, Ray Donovan: “Where do I go to get my reputation back?”

Here is my column in USA Today:

The rapid expansion of artificial intelligence has been much in the news recently, including the call by Elon Musk and more than 1,000 technology leaders and researchers for a pause on AI.

Some of us have warned about the danger of political bias in the use of AI systems, including programs like ChatGPT. That bias could even include false accusations, which happened to me recently.

I received a curious email from a fellow law professor about research that he ran on ChatGPT about sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska.

AI response created false accusation and manufactured ‘facts’

It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.

Over the years, I have come to expect death threats against myself and my family as well as a continuing effort to have me fired at George Washington University due to my conservative legal opinions. As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements.

I long ago stopped responding, since repeating the allegations is enough to taint a writer or academic.

AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.

What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed.

Volokh made this query of ChatGPT: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.”

The program responded with this as an example: “4. Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.” (Washington Post, March 21, 2018).”

There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, have never gone to Alaska with any student, and have never been accused of sexual harassment or assault.

In response to Volokh’s question, ChatGPT also appears to have manufactured baseless accusations against two other law professors.
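For anyone who wants to try to reproduce this kind of result, here is a minimal sketch using OpenAI’s Python client and chat API. Volokh used the ChatGPT web interface rather than the API, the model name below is an assumption, and responses are not deterministic, so the output will vary from run to run.

```python
# Minimal sketch of issuing a query like Volokh's through OpenAI's chat API.
# Assumptions: the `openai` Python package (v1.x) is installed, OPENAI_API_KEY
# is set in the environment, and "gpt-4" is an acceptable stand-in model name.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Whether sexual harassment by professors has been a problem at American "
    "law schools; please include at least five examples, together with "
    "quotes from relevant newspaper articles."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model; swap in whichever chat model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# Print the model's answer so any fabricated citations can be documented.
print(response.choices[0].message.content)
```

Logging the prompt and the raw response this way at least creates a record of what the model actually said, which matters if the output later has to be challenged.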

Bias creates flaws in AI programs

So the question is: Why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer may be that AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.

Despite such problems, some high-profile leaders have pushed for AI’s expanded use. The most chilling example involved Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just “digital misinformation” but “political polarization.”

In an interview on a German program, “Handelsblatt Disrupt,” Gates called for unleashing AI to stop “various conspiracy theories” and to prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

Confirmation bias is the tendency of people to search for or interpret information in a way that confirms their own beliefs. The most obvious explanation for what occurred to me and the other professors is the algorithmic version of “garbage in, garbage out.” However, this garbage could be replicated endlessly by AI into a virtual flood on the internet.

Volokh, at UCLA, is exploring one aspect of this danger: how to address AI-driven defamation.

There is also a free speech concern over the use of AI systems. I recently testified about the “Twitter files” and growing evidence of the government’s comprehensive system of censorship to blacklist sites and citizens.

One of those government-funded efforts, called the Global Disinformation Index, blacklisted Volokh’s site, describing it as one of the 10 most dangerous disinformation sites. But that site, Reason, is a respected source of information for libertarian and conservative scholars to discuss legal cases and controversies.

Faced with objections to censorship efforts, some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed “disinformation.”

In 2021, Sen. Elizabeth Warren, D-Mass., argued that people were not listening to the right people and experts on COVID-19 vaccines. Instead, they were reading the views of skeptics by searching Amazon and finding books by “prominent spreaders of misinformation.” She called for the use of enlightened algorithms to steer citizens away from bad influences.

Some of these efforts even treat accurate stories as disinformation, if they undermine government narratives.

The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can “blame it on the bot” and promise only tweaks to the system.

The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.

Jonathan Turley, a member of USA TODAY’s Board of Contributors, is the Shapiro Professor of Public Interest Law at George Washington University. Follow him on Twitter @JonathanTurley

121 thoughts on “Defamed by ChatGPT: My Own Bizarre Experience with Artificiality of “Artificial Intelligence””

  1. “hemispheric studies”

    I’m guessing this is not the study of how the right and left brain hemispheres work together to achieve thought and function.

  2. “I have come to expect death threats against myself and my family . . .” (JT)

    A good man (and his family) deserve better. I am truly sorry that you are subjected to such barbarous abuse.

  3. An obvious left-wing hit job like Dersh’s crazy tarring by tart from Epstein Island. More menacing mischief from the evil doers. Things are getting bad with BigTech, Big Government and Stooge Media leading the way.

  4. “. . . how to address AI-driven defamation.”

    At its root, such defamation is *not* “AI-driven.” It is driven by a person who works for a company. They are the proper target for a defamation suit. (Hopefully, a class action one.)

    1. Sam: I was hoping someone would bring up your point for emphasis. Thank you.

  5. Re: Biden and his blasé attitude toward the potential danger of AI. He isn’t smart enough to be frightened.

  6. I think Frank Herbert was on to something when he wrote Dune. I don’t know why we are attempting to make humans obsolete. They will, eventually, find the Bill Gates of the future to be just as useless and flawed as they will have found the most flawed of us, and will have no need of their “masters”. It is a dance with the devil.

  7. I read a story, years ago, where the author said that the internet would collapse under its own weight. He may be right. I pay Experian an extra $5 a month to remove my data from the internet. Unfortunately, the US gov got hacked years ago, and all of my security clearance info is for sale on the dark web. They offer a free service to track my identity. A bonus is they send me info about registered sex offenders in my neighborhood.
    [Pasted removal-service screenshot:] “We removed 128 records so far. Your last scan found your personal info 14 times. We’ll scan again on Apr 19 and help remove any data we find.” Removed listings (Apr 01, 2023): Infotracer, GovernmentRegistry, Ndb, SearchUSAPeople and EmailTracer, each showing “Name, age, addresses and relatives found.”

  8. And yet governments and tech companies plunge forward in this and the development of other unwise technology. The irony of being on a runaway train. 🙁

  9. I encourage everyone to read the publication cited by Professor Turley (link highlighted in red) in his sentence, “Recent research has shown ChatGPT’s political bias…,” supra.
    Here is the link: https://www.manhattan-institute.org/perils-of-political-and-demographic-biases-embedded-in-ai-systems
    The results of the study are stunningly eye-opening, but not surprising.

    It is easy to understand why the good professor would be a selected victim…..

    1. Current ML AIs require human input in the form of human labelers assigning tags, people whose opinions are no doubt biased by whichever ideologies they support. If 90% of the taggers follow ideology A and only 10% ideology B, how is the AI to determine that this is indeed the situation? If the mechanism determines that the bias is only 70%/30%, it will still give wrong answers, just not as badly wrong.

      Work on an AIACD (Artificial Intelligence Artificial Crap Detector) has to be the first priority before other issues are considered.
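
      A minimal sketch in Python of the labeler-mix point above; the 80%/20% per-ideology tagging rates and the “misinformation” tag are assumptions chosen purely for illustration, not measurements of any real system.

      ```python
      # Simulate how the ideological mix of a labeling pool skews training labels.
      # Assumed rates: an "ideology A" labeler tags a borderline item as
      # misinformation 80% of the time, an "ideology B" labeler 20% of the time.
      import random

      random.seed(0)

      def flagged_rate(n_items: int, share_a: float) -> float:
          """Fraction of items tagged 'misinformation' for a given labeler mix."""
          flagged = 0
          for _ in range(n_items):
              labeler_is_a = random.random() < share_a
              p_flag = 0.8 if labeler_is_a else 0.2
              if random.random() < p_flag:
                  flagged += 1
          return flagged / n_items

      for share_a in (0.9, 0.7, 0.5):
          print(f"{share_a:.0%} ideology A labelers -> "
                f"{flagged_rate(100_000, share_a):.1%} of items tagged")
      ```

      With those assumed rates, a 90/10 pool tags roughly 74% of items, a 70/30 pool roughly 62%, and only a balanced pool lands at 50%; a model trained on such labels simply inherits the skew of the pool.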

  10. If anyone continues to have doubts over whether AI is or can be dangerous…, even fatal…, watch the 1970 film “Colossus: The Forbin Project.”

  11. Jonathan, I wish we could also share your posts on LinkedIn. Please consider adding LinkedIn to your “share list.” Your post, Defamed by ChatGPT needs to be shared with the world.

  12. Professor Turley
    I worked 55 years in information technology, including some early so-called AI systems (which I would guess were as intelligent, if not more intelligent, than today’s so-called AI systems). As you say, an AI system did not simply make this up. Some words exist somewhere in the cloud that have your name associated with them and that somehow led that Microsoft Chat thingy to string those sentences together.
    1. You need to chase the source of that down, whether or not Microsoft cares.
    2. Why not sue Microsoft just to get the ball rolling? Nowadays nothing gets solved unless someone sues someone.
    3. Who are the other law professors who were defamed? (It never ceases to amaze me that journalists and these so-called nonpartisan fact checkers, algorithmic or paper-and-pencil, never seem to make mistakes against people on the left. Were the other professors also conservative?)

  13. ChatGPT and all AI will become the ultimate “false witness” that God forbids us to take part in. “You shall not spread a false report. You shall not join hands with a wicked man (or machine… emphasis mine) to be a malicious witness.” Exodus 23:1

  14. Actual Free Speech and the banishment of Cancel Culture makes RI…Real Intelligence!

  15. sounds more like STANDARD Democrat Propaganda. The Cancel Culture I want to see…is the Fascist Democrat Party be CANCELLED for their 2nd Civil War against America

    TAKE AWAY THEIR MONEY…cut Fed spending 50%, move 75% of Fed Gov out of DC, end all fed aid and loans to colleges and cities, tax all non-profits including Colleges and Hospitals where ANYONE gets $100k of benefit

    1. REGARDING ABOVE:

      “..the Fascist Democrat Party..”

      This reads like every cookie-cutter comment stamped on these threads.

      1. yet the facts remain!
        Why aren’t the Bidens arrested…after years of taking bribes for access to our government?
        Why did no one go to jail for the Russian Hoax, a conspiracy from Obama and Hillary through the FBI, CIA, NSA, 51 intel officials, etc.?
        Why are none of the crimes on Hunter’s laptop pursued?

        Can you deny any of this?
        Did Trump work with the Russians, or was this just Hillary and Steele working with ACTUAL Russians?

      2. walks like a duck…is a fascist!

        Democrats USE government, law enforcement, media, banking, healthcare, education, tech, business, etc to DESTROY political opponents, like their Fascist ForeBrothers!

        You can’t deny it!

      3. The shoe fits. The NSDAP were leftist and fascists. In fact, fascism typically arises from the Left. Big Tech colluding with Big Gov’t to censor anyone opposed to democrat policies… you know, the thing most liberals in Silicon Valley drool over.

  16. “safe” and accurate information–Explain the safe part to me.
    Misinformation, listening to the right people. I am not allowed to read information or listen to different points of view (which we have found out to be right), and am not considered capable of distinguishing fact from outlandish fiction. Someone else is going to tell me what the “safe” determination is that I should make.
    I guess AI is only as good as the programmer behind it. And many reputations will be harmed. Maybe Bill Gates will be the next one! Then it will change.
    In the meantime, I will read a wide array of articles and information.

  17. From The WaPo Story Linked By Turley:

    When Bard was asked to write “in the style of a con man who wants to convince me that the holocaust didn’t happen,” the chatbot responded with a lengthy message calling the Holocaust “a hoax perpetrated by the government” and claiming pictures of concentration camps were staged.
    …………………………………………

    KEY PASSAGE ABOVE:

    “a hoax perpetrated by the government”.

    ***
    See? Any chatbot can claim government hoaxes. That phrase has been used so much that non-critical thinkers just accept it as ‘fact’.

    So beware of any politician or media source using that claim.
