Ghosted by ChatGPT: How I was First Defamed and then Deleted by AI

Below is my column in The Hill on recent reports that the use of my name in search requests on ChatGPT results in an error and no response. I am apparently not alone in this haunt of ghosted individuals. The controversy raises some novel and chilling questions about the rapid rise of AI systems.

Here is the column:

It is not every day that you achieve the status of “he-who-must-not-be-named.” But that curious distinction has been bestowed upon me by OpenAI’s ChatGPT, according to The New York Times, The Wall Street Journal, and other publications.

For more than a year, people who tried to research my name online using ChatGPT were met with an immediate error warning.

It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale about not just the rapidly expanding role of artificial intelligence, but the power of companies like OpenAI.

Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others.

The common thread appears to be the false stories generated about us all by ChatGPT in the past. The company appears to have corrected the problem not by erasing the error but by erasing the individuals in question.

Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.

My path toward cyber-erasure began with a bizarre and entirely fabricated account by ChatGPT. As I wrote at the time, ChatGPT falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).

In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”

Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.

And it did, in a manner of speaking — apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.

As with Voldemort, even death is no guarantee of closure. Professor Mayer was a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester, who passed away last year. And ChatGPT reportedly will still not utter his name.

Before his death, his name was used by a Chechen rebel on a terror watch list. The result was a snowballing association that left the professor facing travel and communication restrictions.

Hood, the Australian mayor, was so frustrated with a false AI-generated narrative that he had been arrested for bribery that he took legal action against OpenAI. That may have contributed to his own erasure.

The company’s lack of transparency and responsiveness has added to concerns over these incidents. Ironically, many of us are used to false attacks on the Internet and false accounts about us. But this company can move individuals into a type of online purgatory for no other reason than that its AI generated a false story whose subject had the temerity to object.

You can either be seen falsely as a felon or be unseen entirely on the ubiquitous information system. Capone or Casper, gangster or a ghost — your choice.

Microsoft owns almost half of the equity in OpenAI. Ironically, I previously criticized Microsoft founder and billionaire Bill Gates for his push to use artificial intelligence to combat not just “digital misinformation” but “political polarization.” Gates sees the unleashing of AI as a way to stop “various conspiracy theories” and prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

I do not believe that my own ghosting was retaliation for such criticism. Moreover, like the other desaparecidos, I am still visible on other sites and through other systems. But it does show how these companies can use these powerful systems to remove all references to individuals. And corporate executives may not be particularly motivated to correct such ghosting, particularly in the absence of any liability or accountability.

That means that any solution is likely to come only from legislative action. AI’s influence is expanding exponentially, and this new technology has obvious benefits. However, it also has considerable risks that should be addressed.

Ironically, Professor Zittrain has written on the “right to be forgotten” in tech and digital spaces. Yet he never asked to be erased or blocked by OpenAI’s algorithms.

The question is whether, in addition to a negative right to be forgotten, there is a positive right to be known. Think of it as the Heisenberg moment, where the Walter Whites of the world demand that ChatGPT “say my name.” In the U.S., there is no established precedent for such a demand.

There is no reason to see these exclusions or erasures as some dark corporate conspiracy or robot retaliation. It seems to be a default position when the system commits egregious, potentially expensive errors — which might be even more disturbing. It raises the prospect of algorithms sending people into the Internet abyss with little recourse or response. You are simply ghosted because the system made a mistake, and your name is now triggering for the system.

This is all well short of HAL 9000 saying “I’m sorry, Dave, I’m afraid I can’t do that” in an AI homicidal rage. Thus far, this is a small haunt of digital ghosts. However, it is an example of the largely unchecked power of these systems and the relatively uncharted waters ahead.

Jonathan Turley is the Shapiro Professor of Public Interest Law at George Washington University. He is the author of “The Indispensable Right: Free Speech in an Age of Rage.”

 

75 thoughts on “Ghosted by ChatGPT: How I was First Defamed and then Deleted by AI”

  1. A teacher at a private high school shared this with me regarding ChatGPT.
    In his 10th, 11th and 12th grade English classes, papers being handed in as part of a homework assignment began to read slightly differently about 2 years ago. After a bit of research, he discovered a certain number of students had relied on ChatGPT to the point at which their writings did not ‘sound’ like they had previously.
    He warned his classes away from using AI in this way in the future and told them that a grade of ‘F’ shouldn’t surprise them if they did not heed his warning.

  2. Per ChatGPT…. Looks like you are wrong, professor…

    David Mayer (November 23, 1928 – August 24, 2023) was a distinguished theatre historian and professor of drama, renowned for his extensive research into 19th-century English-language theatre and its connections to early cinema. He served as Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester, where he was a foundational member of the Drama department.
    University of Manchester

    Born in Chicago, Mayer’s academic journey included a PhD in dramatic literature and theatre history from Northwestern University. His teaching career spanned institutions such as Denison University, Lawrence University, the University of Warwick, and the University of Bristol, before he settled at the University of Manchester in 1972.
    ASTR

    Mayer’s scholarship focused on the “long nineteenth century” (circa 1760s to 1920) and the interplay between theatre and early silent film. His notable publications include “Harlequin in His Element: The English Pantomime, 1806–1836” and “Stagestruck Filmmaker: D.W. Griffith and the American Theatre.”
    Professor David Mayer

    In 2012, he received the American Society for Theatre Research’s Distinguished Scholar Award, recognizing his significant contributions to the field.
    ASTR

    Beyond academia, Mayer was known for his generosity in mentoring students and colleagues, and for his advocacy in preserving theatre history. He supported campaigns such as the effort to save Harker’s Studio, one of the UK’s last theatre scenery workshops.
    The Guardian

    Mayer passed away at his home in Manchester, England, at the age of 94.
    Yale Alumni Magazine

  3. Professor Turley, you have made people question what happened with ChatGPT. I don’t use ChatGPT for anything that is not at least 2 months old because it is updated only intermittently. I found that out when I asked GPT about the two attempted assassinations of Trump. It didn’t know about the second attempt. It had yet to be updated.

    Then I asked it about Israel, and it made several errors. I discussed the errors with additional information. It looked deeper and agreed with what I said. It appears it looked at more sites that contained facts instead of opinions and then agreed with my conclusions.

    Because it seemed to focus on the typical answers from the MSM, I asked about the sources. Most appeared to be on the left, with the usual misunderstanding of what is happening in the Middle East. The left seems to dominate the discussion, but it appears ChatGPT looks deeper when you provide better answers, and it then agrees with the positions it finds have the most proof. Unfortunately, it relies too heavily on the NYT type of news, which is often erroneous.

    There is more that can be gleaned simply by asking questions but I will finish with the question at hand, why doesn’t the name Jonathan Turley not appear? The following is the answer the question we all are asking.

    “Yes, that’s possible. If you asked about a specific entity — like a person, company, or organization — and I couldn’t verify accurate, up-to-date, or widely available information, I might have chosen not to respond. This approach helps prevent the spread of misinformation and avoids potential legal concerns related to defamation or privacy.”

    1. Sounds like ChatGPT wrote this comment. Sentences like ” why doesn’t the name Jonathan Turley not appear?” and ” The following is the answer the question we all are asking.” are poorly crafted, a common theme in Chatbot responses when the author speaks a different language.

      Also why is he presenting the ChatGPT’s own excuse for why its censoring and presenting it as “the answer”?

      It’s not “the answer” to why he’s being censored. It’s ChatGPT’s two-step around the question.

      I’m starting to wonder how many in here really are humans, or how many have long since relegated their “personas” to AI bots.

      Weird.

      1. “Sounds like ChatGPT wrote this comment. Sentences like ” why doesn’t the name Jonathan Turley not appear?” and ” The following is the answer the question we all are asking.” are poorly crafted, a common theme in Chatbot responses when the author speaks a different language.”

        Chris, you are wrong about so many things, you might as well be wrong about this. ChatGPT will not mention the name Jonathan Turley and would not answer many questions, so some of my questions were awkward in an attempt to get an answer. I had to delete Turley’s name to get any response.

        “Also why is he presenting the ChatGPT’s own excuse for why its censoring and presenting it as “the answer?”

        It’s simple for anyone with critical thinking skills. That is one of the responses provided by ChatGPT, and it was not easy to get. The question I asked was similar to the one you are questioning.

        Get that branch out of your a$$. It is making an a$$ out of you.

  4. Sooooo, what is it that I have to post to get disappeared? I have been looking for a way to get cancelled for quite a while, but to no avail.

  5. What are the legal issues surrounding AI chat bots? Is a law professor writing this website, or a lay person? There are so many issues swirling around LLMs: copyright…defamation…deep fakes…liability. When a chat bot speaks in the 1st person singular:

    “I (yada yada yada)…”

    the engineers who designed it are pulling a devious trick. They are hoping to get the public to accord it “personhood,” but legally, the chat bot is an automated, engineered machine. It does not exist under any law as the anthropomorphized “being” it pretends to be. Liability for harm lies with its owners, designers, trainers, marketers…the corporate being who provides its use, and corporate responsibility flows to its human executives and owners.

    So, these AI bots should be forced to speak using the “We (yada yada yada)…” form, reflecting the locus of accountability tied to specific human responsibility and liability.

    This is the next battle of pronouns. Much more consequential than the last one.

  6. I can confirm that many of your readers and I correctly predicted precisely what you wrote today. The curse of the algorithm.

    In a financial spreadsheet, one can influence outcomes in numbers by skewing the data. Now, with AI, the same can happen with a person’s life history or history in general. As you stated, the change in an algorithm may not be a nefarious scheme, just a human mistake or misunderstanding.

    At least with stone or clay tablets, history was inscribed for generations. Even parchment writings were difficult to erase. How is history going to be faithfully transferred in the digital world? We don’t have any lifelong faithful scribes laboring away to ensure an accurate account of human activities. We have AI with their algorithms.

  7. I suppose AI represents a modern version of the Golden Calf that the Israelites fashioned and worshiped after their escape from bondage in Egypt. And as likely as not, it will lead to the same demise.

  8. I watched The Sound of Music for the millionth time last night. Here’s a memorable scene:
    Rolfe, the Nazi youth, tells Liesl, “We make it our business to know everything about everyone.”
    Apparently, the US government, with the help of Bill Gates, agrees with Rolfe.

  9. Yeeeah, I’m currently in the same situation on Facebook, for having the temerity to insist that a Smiley with a small square Hitler moustache and an arc of symbolic hair is political satire. Originally published in blue and yellow, then later in just blue and white, it carried the phrase “Welcome to the New Normal!”. Being a charitable person, and having been reinstated twice before the invisible hand pulled the woke lever to banish me for good, I’m sure the gnomes of Menlo Park were merely confused, uneducated minions who only saw the “dangerous symbol” instead of the truly dangerous Smiley motif. Of course, the republication of Russia Today articles and videos on my newsfeed probably didn’t help! Jonathan, welcome to the New Normal!

  10. “Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.”

    Why on earth would you assume people who have defamed you would correct their defamation? Stop being a coward – sue them! For your sake, and for the community.
    Thanks

  11. Trump Weighing Military Action Against Iran

    Trump and the rest of the neo-cons pushing the Ukraine war are going to kill us all with their for-profit WWIII, which one of them will start in Ukraine, Israel, or Iran. Trump will do it to line the pockets of his rich buddies.

    If the warhawks in the GOP and DNC don’t do it by pushing the war in Ukraine, Trump will get it done through starting a war with Iran.

    1. Iran started the war long ago even though you are ignorant of that fact. They should have been stopped decades ago, but paralyzed and crooked Democrats have kept it going, even to the point of funding Iran’s terrorism. That you don’t recognize the terrorism from Iran means you are a terrorist or stupid. I doubt you are a terrorist.

    2. Re “Trump Weighing…” it’s all on Obama and Biden, they’ve been financing Iran, while playing both sides in Syria. It’s not like they don’t know where it’s going, we all know where it’s going.

  12. Granted, this happened without a human eye ever scrutinizing anything, but yes, yes, this is absolutely the modern left. It’s why they have been batsh*t over Elon buying Twitter. These people are damaged goods, and that’s all there is to it. They must never be in power ever, ever again.

    Please bear that in mind for the midterms – we are not nearly out of the woods yet, and with enough propaganda and cheating – Kamala could *win* in 2028. Do not let your guard down.

  13. What an odd quirk of alternative justice: Professor Turley is being ghosted just as Republicans trying to get to the bottom of the Biden laptop, Russia Dossier, and Covid were also made to disappear.

    Equal justice for all, Democrats assure us!

  14. I would suggest that it is time to read up on the Butlerian Jihad in the Dune universe, when man cast off the “Thinking Machines” that had enslaved humanity.
    A literary view of this discussion.

  15. I recently came across a situation that Free Speech advocates should find alarming. A robo news service powered by AI is impersonating a defunct Ashland, Oregon, newspaper. This service, according to the account, appears to originate in China. The traditional defense against false or misleading speech offered by Prof. Turley is more speech. But if false or misleading speech is being generated by AI, there is little hope of tipping the scales with more speech. I have included the link to the story below:

  16. Chris Weber (@ 11:25) points out how well known and respected Jonathan Turley is.
    Ghosting his name is the easy and cheap way out, and again, it can be used by AI to eliminate success and popularity based on Internet visibility and familiarity.
    Name recognition plays no small role in enhancing success.
    By wholly eliminating all information referring to Jonathan Turley by name, we can enhance (intentionally or unintentionally) the visibility and importance/name recognition of remaining names, like his envious competitors Philip Bump and Laurence Tribe…

    1. I agree. The info I posted was a direct cut-and-paste of the response from Gemini AI (Google’s AI), showing that AI systems have no issues finding ample vetted information on JT, which removes any question of whether or not ChatGPT’s omitting him entirely is intentional. It would be impossible for ChatGPT to omit him for lack of enough valid data online to confirm and return a response. It is definitely, 100 percent, an intentional attempt to damage his name by making it appear “controversial.”

      1. Chris Weber,
        Thank you for conducting the experiment and sharing your findings.
        Have people really gotten that lazy, or do they lack the consideration to just contact the good professor with a simple “Oops! We’re sorry!”? Or would that be some kind of legal confession of error?

  17. AI must be wrapped in nothing other than a path of inquiry, unconstrained by any ideology or other human notions; otherwise, I foresee the proverbial match being lit and bridges collapsing when the tyranny of a few controls the source of inquiry. We must protect all citizens from those who wish to control, reform, diminish or cancel the “Laws of Nature and Nature’s God entitled to them.”

  18. I asked ChatGPT o1 why it might have that reaction to a person’s name. Seems like a logical thing to do. This is what it said:
    —- Start Quote —-
    There are a few reasons why you might see a response like “I’m unable to produce a response.” when asking about a specific person:

    Privacy and Personal Data:
    If you’re requesting personal or sensitive information about an individual—especially someone who isn’t a well-known public figure—the system may refuse. Requests for details like address, personal contact information, or private life events not available in reputable public records often trigger a privacy safeguard.

    Defamation Concerns:
    The assistant follows strict content policies aimed at preventing the spread of false or harmful information. If the information about the person is uncertain, unverified, or potentially defamatory, the assistant might withhold a direct answer.

    Lack of Available Information:
    Sometimes the assistant just doesn’t have information about the specific person you’re asking about. If the assistant has no reliable data and cannot verify the individual’s public presence, it might decline to provide a response rather than risk providing incorrect or misleading details.

    Policy Enforcement:
    The assistant is guided by a set of rules that restrict certain types of content. If the request encourages doxing, harassment, or any form of targeted abuse, the assistant will refuse. Even less obviously harmful requests that might indirectly lead to privacy violations will be blocked.

    In short, these guardrails are there to ensure that the assistant does not inadvertently share private, incorrect, or harmful content. If you ever run into a refusal, it’s often because your query either falls outside of the assistant’s allowed scope or involves information the assistant cannot confidently verify.
    —- End Quote —-

    AIs today are error-prone. If the owner decides that you are litigation-prone and it can’t be sure that the AI won’t issue defamatory statements about you, then it is logical not to include you. It is probably hard or impossible to remove incorrect statements without rebuilding the model. It is also hard to avoid false information if a lot of it is seeded across the web.

    Are you willing to hold OpenAI harmless if it accidentally defames you? Do you expect them to build a special process for you?
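    A minimal sketch of how such a blanket refusal could be wired in as a hard-coded filter that runs before the model ever sees the prompt. This is purely illustrative: the stoplist, function names, and refusal text are assumptions, not OpenAI’s actual implementation.

```python
# Hypothetical sketch of a name "stoplist" guard. Nothing here reflects
# OpenAI's real code; the names and refusal text are illustrative only.

# Names the operator has decided are too legally risky to discuss.
BLOCKED_NAMES = {"jonathan turley", "david mayer", "brian hood"}

def guarded_reply(prompt: str, generate) -> str:
    """Return a blanket refusal if the prompt mentions a stoplisted name;
    otherwise fall through to the underlying model."""
    lowered = prompt.lower()
    if any(name in lowered for name in BLOCKED_NAMES):
        # The model is never invoked: the request dies at the filter,
        # which matches the immediate-error behavior described above.
        return "I'm unable to produce a response."
    return generate(prompt)
```

    A filter like this would explain why every query containing a blocked name fails instantly, regardless of how benign the question is: the error is produced before any generation happens.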

    1. Nonsense. More contorted and convoluted apologetics from the left trying to excuse away the obvious.

      No sir. It is not “logical” that ChatGPT returns NOTHING on JT, and your claim that ChatGPT cannot discern the true stories from the false ones is a bogus misrepresentation.

      I just posted Google’s “GEMINI” response. Gemini is Google’s AI program. It has ZERO problems discerning the voluminous data on JT from any of the few and hard-to-find slanders about him out there.

      ChatGPT queries the World Wide Web. On the World Wide Web, 99 3/4 percent of the VOLUMINOUS data on JT is positive, documented and easily verifiable by any AI bot, or any individual for that matter.

      There are only a few minor smears on him that are difficult to find and are certainly not overwhelming in the search returns.

      Therefore, ChatGPT’s restricting of any and all information on JT is an obvious, intentional SNUB.

      If Gemini can easily search the web and produce results, so can ChatGPT.

      In other words, you’re as big a liar as Bill Gates.

      1. Hi Chris, I’m a conservative, not a leftist. I am also a chatGPT subscriber and I use it a lot, mostly for writing programs that do plots related to quantum physics, but also logic programming and other lookups that I might have gone to wikipedia for. I have come to terms that GPT is not a place for politics or other edgy topics. Crap, I’ve got reddit and zerohedge and red state for that.

        But as a conservative I believe a company has a right to make its own decisions, realizing that some people may therefore choose to go elsewhere. Freedom. It’s a good thing.

        1. You know that’s the biggest load of horse manure I ever heard.

          1. You’re a “conservative” but afraid to reveal your actual identity. Right.

          2. You like freedom as long as it’s freedom to censor and promote censorship from behind an anonymous handle. Roger that.

          3. “I am also a chatGPT subscriber and I use it a lot.” Yup, I’ll bet you do.

          Don’t pretend for two seconds that using the largest AI platform on the World Wide Web as a censorship tool makes you or any of us “free”.

          Censorship ain’t freedom.

          1. Chris, I don’t understand your incivility. We have different opinions. Allowing for different opinions is the heart of free speech. Who is being censored in the case of chatGPT avoiding Turley? It’s chatGPT’s speech. They aren’t obliged to promote Prof. Turley. However, the more people or whatever they don’t include the more people may go elsewhere.

            The option of anonymity is a part of free speech. In this case I was anonymous casually. I wasn’t really thinking about it. But your unhinged response makes me happy I’m anonymous here.

        2. Anonymous, I stay away from chatGPT for political or highly charged topics. It is not always up to date, as mentioned in my last posting. It doesn’t screen the entire web, or at least it admits to not screening the entire web. It focuses on certain media and veers leftward on hot political issues. Chris thinks he is smarter than he is. I can understand that since Chris is always in the high trees, alone, and speaking to himself.

          1. I don’t think I’m that smart, if I were I’d be writing for a living and not climbing trees.

            That’s the scary part. Because I see right through you, and that clearly disturbs your demons.

            1. Chris, what is not smart is you picked a fight for no reason, one you were destined to lose.

Comments are closed.