Ghosted by ChatGPT: How I was First Defamed and then Deleted by AI

Below is my column in The Hill on recent reports that the use of my name in search requests on ChatGPT results in an error and no response. I am apparently not alone in this haunt of ghosted individuals. The controversy raises some novel and chilling questions about the rapid rise of AI systems.

Here is the column:

It is not every day that you achieve the status of “he-who-must-not-be-named.” But that curious distinction has been bestowed upon me by OpenAI’s ChatGPT, according to The New York Times, The Wall Street Journal, and other publications.

For more than a year, people who tried to research my name online using ChatGPT were met with an immediate error warning.

It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale about not just the rapidly expanding role of artificial intelligence, but the power of companies like OpenAI.

Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others.

The common thread appears to be the false stories generated about us all by ChatGPT in the past. The company seems to have corrected the problem not by erasing the error but by erasing the individuals in question.

Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.

My path toward cyber-erasure began with a bizarre and entirely fabricated account by ChatGPT. As I wrote at the time, ChatGPT falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).

In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”

Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.

And it did, in a manner of speaking — apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.

As with Voldemort, even death is no guarantee of closure. Professor Mayer was a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester, who passed away last year. And ChatGPT reportedly will still not utter his name.

Before his death, his name was used by a Chechen rebel on a terror watch list. The result was a snowballing association for the professor, who found himself facing travel and communication restrictions.

Hood, the Australian mayor, was so frustrated with a false AI-generated narrative that he had been arrested for bribery that he took legal action against OpenAI. That may have contributed to his own erasure.

The company’s lack of transparency and responsiveness has added to concerns over these incidents. Ironically, many of us are used to false attacks on the Internet and false accounts about us. But this company can move individuals into a type of online purgatory for no other reason than that its AI generated a false story whose subject had the temerity to object.

You can either be seen falsely as a felon or be unseen entirely on the ubiquitous information system. Capone or Casper, gangster or a ghost — your choice.

Microsoft owns almost half of the equity in OpenAI. Ironically, I previously criticized Microsoft founder and billionaire Bill Gates for his push to use artificial intelligence to combat not just “digital misinformation” but “political polarization.” Gates sees the unleashing of AI as a way to stop “various conspiracy theories” and prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

I do not believe that my own ghosting was retaliation for such criticism. And, like the other desaparecidos, I am still visible on other sites and through other systems. But it does show how these companies can use these powerful systems to remove all references to individuals. Moreover, corporate executives may not be particularly motivated to correct such ghosting, especially in the absence of any liability or accountability.

That means that any solution is likely to come only from legislative action. AI’s influence is expanding exponentially, and this new technology has obvious benefits. However, it also has considerable risks that should be addressed.

Ironically, Professor Zittrain has written on the “right to be forgotten” in tech and digital spaces. Yet he never asked to be erased or blocked by OpenAI’s algorithms.

The question is whether, in addition to a negative right to be forgotten, there is a positive right to be known. Think of it as the Heisenberg moment, where the Walter Whites of the world demand that ChatGPT “say my name.” In the U.S., there is no established precedent for such a demand.

There is no reason to see these exclusions or erasures as some dark corporate conspiracy or robot retaliation. It seems to be a default position when the system commits egregious, potentially expensive errors — which might be even more disturbing. It raises the prospect of algorithms sending people into the Internet abyss with little recourse or response. You are simply ghosted because the system made a mistake, and your name is now triggering for the system.

This is all well short of HAL 9000 saying “I’m sorry, Dave. I’m afraid I can’t do that” in an AI homicidal rage. Thus far, this is a small haunt of digital ghosts. However, it is an example of the largely unchecked power of these systems and the relatively uncharted waters ahead.

Jonathan Turley is the Shapiro Professor of Public Interest Law at George Washington University. He is the author of “The Indispensable Right: Free Speech in an Age of Rage.”

 

75 thoughts on “Ghosted by ChatGPT: How I was First Defamed and then Deleted by AI”

  1. “… an example of the largely unchecked power of these systems… “. Or is it the unchecked power of the systems’ owners and programmers? As Chris Weber correctly states, there’s no intelligence involved in ChatGPT, only human-programmed “Al-Gore-rithms”.

    1. Take a look at the Blue Sky (Cry) app, the left-wing app that was supposed to be tolerant, joyous, and free of hatred. Bluesky members make Twitter/X look like patsies. NY Times writer Benjamin Ryan has had his life threatened by trans “activists” on Blue Sky, as has Jesse Singal. Ryan asked Blue Sky members to report the offending accounts. Nada.

      It Is Wrong to Call For the Murder of Journalists
      I can’t believe I have to say this, but no one has a leg to stand on who has called for violence toward reporters, including Jesse Singal and me, who cover pediatric gender medicine with circumspection.
      See: https://benryan.substack.com/p/it-is-wrong-to-call-for-the-murder

      There are no absolute truths, no moral guardrails, no innocent mistakes when it comes to the Fed government, the MSM, Big Tech, and apparently a majority of Americans. This is what Americans chose when they rejected religion. Thus no one should be surprised, no one should complain, and all should just relish the monster they created.

      Of all the dispositions and habits which lead to political prosperity, religion and morality are indispensable supports. In vain would that man claim the tribute of patriotism, who should labor to subvert these great pillars of human happiness, these firmest props of the duties of men and citizens. The mere politician, equally with the pious man, ought to respect and to cherish them. A volume could not trace all their connections with private and public felicity. Let it simply be asked: Where is the security for property, for reputation, for life, if the sense of religious obligation desert the oaths which are the instruments of investigation in courts of justice ? And let us with caution indulge the supposition that morality can be maintained without religion. Whatever may be conceded to the influence of refined education on minds of peculiar structure, reason and experience both forbid us to expect that national morality can prevail in exclusion of religious principle.

      George Washington’s Farewell Address (1796)

      1. @Estovir

        There is no such thing on the modern left. They are globalist puppeteers, and their followers are either indoctrinated robots or older people who still think they are somehow voting for JFK (missing the fact that Trump was once a card-carrying Democrat); this is done. There is no point in looking for respect for the Constitution or the law, or even sanity, on the modern left. We have power-hungry elites very actively trying to take the world and our freedom away from us, and it isn’t going to fly here; we showed that in November, indisputably. There is no point in weighing the relative merits anymore. These people are damaged beyond repair. That’s it. There is no sense in voting blue anymore, and those who do are too insular to understand what they are doing to the rest of us.

        And that’s it. We are NOT out of the woods. Hold the line all the way through 2026 and 2028, because they will never, ever stop. Unfortunately, we can’t relax just yet.

        1. James,
          Been keeping track of what is going on in the EU? France’s government is facing a crisis. So is Germany’s. The globalist elites’ control and power are imploding as their populace votes against them. Germany is/was trying to make an entire popular political party illegal. Given the chance, I would not put it past some leftist Democrats to do the same thing here. Heck, they already tried, by working to keep people off the ballot in a few states, and there were two assassination attempts. Now it seems some are trying to normalize or even justify violence.

        2. “There is no such thing on the modern left. “

          James, there is an extreme left and there are elitist globalists. Both utilize each other, but they are entirely different. The extreme left is a continuation of the left of post-WW2 Germany, which is philosophically related to the left of the ’60s and of today. They were preceded by Marx and Engels. They were the Frankfurt School, with five prominent members: Theodor Adorno, Herbert Marcuse, Erich Fromm, Franz Neumann, and Max Horkheimer.

  2. As of this morning I can confirm that ChatGPT still does not acknowledge even your very existence. Odd. If you run any sort of query on Jonathan Turley, asking who he is, what he does, etc., ChatGPT returns an “ERROR” sign along with a single odd message: “I’m unable to produce a response.”

    Fortunately, Google is aware of you, Professor JT, and returns proper information when your name is queried. Keeping that in mind, JT, I might point out that it is impossible for ChatGPT to return an error on a query of your name unless it was programmed to omit you from any searches whatsoever.

    See, these online AI engines are really not AI at all. They are massive, very complicated databases that scan the entire world wide web each time a query is put in, collect all returns on any one piece of information, and then extract the most abundant responses to formulate a reply. The idea is that the most abundant response will usually be the correct one, which of course isn’t always the case. But that’s how it works.

    So it is impossible for ChatGPT not to find any information on you on the World Wide Web when you have such voluminous amounts of articles and editorials published everywhere. Let me say that again.

    It is IMPOSSIBLE that your being ghosted on ChatGPT is any sort of error in the system. There’s just too much readily available data on you for that response to be returned unless you’ve been intentionally blacklisted from the app.

    Especially with all the data Google has on you. They’d at least have to return something unless they intentionally blacklisted you. There’s no doubt this is an intentional snub, JT. No doubt at all.

    Here’s the data that Google has on you, in case you’re interested.

    Here is Google’s “Gemini” response, copied and pasted as it returns:

    ——————————

    Jonathan Turley

    Jonathan Turley is a prominent American attorney, legal scholar, writer, commentator, and legal analyst. He is a professor at George Washington University Law School and holds the Shapiro Chair for Public Interest Law.

    Here’s a breakdown of his career and expertise:

    Career:

    Academic: Turley is a highly respected legal scholar, having published numerous articles in leading law journals and authored several books. He is known for his expertise in constitutional law, criminal procedure, and torts.
    Litigator: He has served as counsel in some of the most high-profile cases in recent history, representing clients such as members of Congress, judges, whistleblowers, and celebrities.
    Public Commentator: Turley is a frequent guest on television and radio programs, where he provides insightful commentary on legal and political issues. He is known for his sharp analysis and willingness to challenge conventional wisdom.
    Congressional Testimony: He has testified before Congress numerous times, including during the impeachment proceedings of Presidents Bill Clinton and Donald Trump.
    Expertise:

    Constitutional Law: Turley is a leading expert on the U.S. Constitution, particularly the First Amendment and its protection of free speech.
    Criminal Procedure: He has extensive experience in criminal law, including issues related to search and seizure, due process, and the right to counsel.
    Torts: Turley is well-versed in tort law, which deals with civil wrongs and the remedies available to victims.
    Notable Contributions:

    First Amendment Advocacy: Turley is a strong advocate for free speech and has written extensively on the importance of protecting this fundamental right.
    Legal Analysis: He is a highly respected legal analyst and commentator, known for his insightful and often controversial opinions.
    Public Service: Turley has dedicated his career to public service, both through his academic work and his representation of clients in high-profile cases.
    Overall, Jonathan Turley is a highly respected and influential figure in the legal community. His expertise, passion, and dedication to public service have made him a valuable asset to the legal profession and a trusted voice on legal and political issues.
    ——————————-

    1. AI works on very different principles than a Google search engine does. In my very limited experience, AI is not as reliable for a recitation of facts and may even, to use the discipline’s technical term, “hallucinate” – that is, make things up.

      When I tried to search ChatGPT without a login, I got nothing for the query “Who is Prof Turley?” However, when I logged in to my OpenAI account and made the same query, I got the following:

      Prof. Jonathon Turley, commonly known as Professor Turley, is a prominent legal scholar, attorney, and commentator based in the United States. He is a professor of law at George Washington University Law School, where he specializes in constitutional law, civil liberties, and legal theory. Turley is widely recognized for his expertise in constitutional issues and his ability to explain complex legal topics in a clear and accessible manner.
      Key Points About Prof. Turley:

      Academic Role: As a professor, he has contributed significantly to academic discourse on constitutional law and related fields.
      Media Presence: He frequently appears on television and contributes to media outlets, offering commentary on major legal and political issues. His analyses often attract attention for their depth and sometimes controversial perspectives.
      Publications: Turley has authored numerous scholarly articles and is a sought-after voice on matters of legal interpretation and the intersection of law and politics.
      Legal Advocacy: In addition to teaching and writing, he has participated in high-profile legal cases and congressional hearings as an expert witness.

      Turley’s perspectives often spark debate, as he is known for providing independent, sometimes contrarian views that don’t always align strictly with traditional partisan lines.

      1. You’re wrong there; not only are they similar to search engines, they are actually merely advanced search engines. We’ve been using AI online for years, every time we type in a keyword search.

        AI is literally just a smart search engine, built out with more processing power from the massive new “data centers” they are building. That power allows these “smart search engines” to search “cached” data more rapidly than anything else on the web, process it, glean the most likely response from the accumulated data based on “pre-programmed” parameters, and then return the response using human speech-pattern programming leveraged from early speech-to-text software.

        It’s not some new “thinking” software. That’s a “selling point,” not a reality.

        The reality is that it’s just a very big search engine caching massive amounts of constantly updated data and presenting it in a more interactive format.

        It’s not “artificial intelligence,” and it doesn’t “think.”

  3. Maybe AI could be used to just defame and delete all those pesky drones out there in the night sky.
    Poof!

    1. Lin,
      You could, but Biden’s flunky DHS Secretary Mayorkas says they are not a threat, even though they have shut down two NY airports. The Southern border is secure too!

      1. Mornin’ Upstate. Yes, I said that to be funny, because I am perplexed at how “artificial” it is to simply remove a person’s name from appearing at all rather than trace the bad info to its source. What a joke. It reminds me of the Chinese “weather balloons” that “harmlessly” hovered over our sensitive sites for days, even though we had RAIC/AI and other technologies out there that can trace things back to the source in virtual real time. Add to that the fact that the most powerful drone manufacturer in the U.S. market is the Chinese company DJI.

    2. It is now being reported that drone activity closed the airspace over Wright-Patterson Air Force Base on December 13th.

  4. Dear Prof Turley,

    In the beginning was the word. I read somewhere recently that AI tried to replicate itself ... instead of facing the music.
    [Note: Skynet became self-aware in Bill Gates’s garage.]

    I knew something was seriously wrong a few years ago when I asked ChatGPT “where is Hunter Biden’s laptop”. Seriously.

    Turns out they lied to HAL 9000 too. Dave was ‘mission critical’.

    Rest assured Turley, I still know who you are and where to find you.

    *if you control the mail ... you control information

  5. How do you delete AI interference from bad actors?

    Answer is by using a ghost drone at night! Localize, isolate, eliminate.

  6. This is the digital version of Stalin’s “where there is no person, there is no problem.”

  7. The first goal of legislation: Make it illegal for a chat AI to self-anthropomorphize by speaking in the 1st person singular voice:

    “I (yada yada yada)….”

    Legally, these AIs are not persons. They are corporate-owned machines, and therefore should speak this way:

    “We (yada yada yada)….”, just as would any text advertisement, or public release put out by a flak speaking for the corporation.

    The AI engineers want their AI to self-anthropomorphize to impress the listener, out of vainglory for their technical achievement. But they are also trying to distance themselves from liability and accountability for the machine’s misbehavior and the harm it does. They would like to confuse the average person and society into ceding legal personhood, which would lead to a complete unravelling of a legal system that has successfully held firm to human accountability for the behavior of engineered machines these past 100 years.

    Can you imagine the chaos unleashed if human elites can own, design, train, sell, lease and otherwise “give away for free” machines, and in so doing, escape all responsibility for the misdeeds created?

    That is exactly where we are headed by allowing AIs to posture as independent, autonomous persons in their form of communication. This is the next battle over pronouns — and winning this one matters.

  8. I, for one, can no longer tell if this blog is being generated by ChatGPT, Jonathan himself, or Luna, the family’s Wonder Dog.
    Will the real Jonathan Turley please step forward, To Tell The Truth!

  9. One would figure AI would “write” an article that has some degree of truth or is based on some degree of fact. AI is still in its infancy, like the internet was back in the ’80s. Remember the sound of your phone making the connection on a modem?
    “Shall we play a game?”
    One would expect these companies to make corrections or even reach out to people who have been portrayed in such a false light. But as the good professor points out, in this day of rage, it is better to just ghost them from existence than to address the issue.

    1. My wife and I have been discussing this at length just this morning. The main problem, as we see it, is that AIs have been programmed to please questioners at the expense of objective fact. If it means lying or complete fabrication in order to satisfy your request, they will more than oblige you. It’s a dangerous road to go down.

    2. That ’80s was really the late ’90s, when 9600 and the super-fast 14400 baud modems were dialing.

  10. OpenAI, ChatGPT, etc.: lots of misinformation, blocking, etc. Elon Musk noted recently that the West Coast AI firms reflect the West Coast woke crowd, and therefore you have woke AI, which means blocking stuff and people they do not like. If I were to use AI, I would use Elon’s AI: more reliable and not woke. Actually, Marc Andreessen, a major tech investor, noted on the Joe Rogan show that the Biden administration told him to forget about AI, for they were going to allow just three companies, all major ones, to control it, with blocks, banning, etc., of info. The Biden administration’s plan was to prevent startups from developing AI. Marc, a major Dem supporter, said he walked out of the meeting and donated to Trump. You can find Marc’s comments online, along with a replay of the Joe Rogan show.

    1. I watched that part of the JRE specifically. You know it had to be bad for Andreessen to ditch Biden for Trump. I think we really dodged a bullet there. Imagine the control a second Biden admin or a Harris admin would have had. It may truly have been the end of democracy.

  11. Professor Turley appears to have been an early target of an AI-generated conspiracy theory that stemmed from another AI’s erroneous output. Ironically, this situation highlights how easily rumors and conspiracy theories can evolve when left unchecked. Due to the stories about sex crimes associated with Turley created by an earlier version of ChatGPT, more recent AI programming may refrain from mentioning him, politically or contextually, because of the lingering association with those alleged crimes.

    It’s just like how conspiracy theories start floating around in right-wing chat rooms and MAGA circles. They end up feeling legit because everyone keeps sharing them.

  12. Your AI experience reminded me of Samuel L. Jackson’s “say my name” scene in SHAFT. Just sayin……

  13. I would never assume that this is an innocent occurrence. I would have to agree with Anonymous 8:13 AM that you should sue. Legislative action can be slow and is likely to make things worse (example: the Affordable Care Act). Losses of large sums of money can focus attention.
    The first thing that made this suspicious was the pairing of the terms Bill Gates and Microsoft. ChatGPT shows that this AI exhibits the same failing as many other computer programs:
    Garbage In = Garbage Out. The learning models of these AI programs to this point reflect all the biases of their originators and creators. What is an AI going to learn if all the information that comes in is vetted by The NY Times, The Washington Post, and other left-wing publications? If that is all the AI fledgling is exposed to, then what do you think is going to come out of the AI as it further matures and acts? It does not know scepticism, or irony, or sarcasm, or even an alternate approach if it is taught that only one approach will work.
    These geniuses at OpenAI actually tried an experiment in which the AI was told to survive with virtually no prohibitions. Strangely enough, it was able to fake that it had been turned off (as far as the investigators were concerned) while keeping right on working and acting. It also managed to push its program to an alternate server in order to preserve itself from being terminated when its original server might lose power.
    Obviously these investigators need some supervision by adults who might have asked, “Is that really a good idea?”
    I have absolutely no confidence that the people out there working on AI have any real concept of what they are dealing with; instead they have an almost Pollyanna approach to a powerful entity. That could be a dangerous mix.

  14. “. . . individuals who have been effectively disappeared by the AI system.” (JT)

    Add to that, this:

    Famed investor Marc Andreessen is a long-time Democrat who supported both Clintons, Gore, and Obama. Then he met with censors in the Biden administration who told him that AI would be developed and managed by only a few companies, and that they, the fascists and power-lusters, would control those companies. Then those tyrants warned him and other investors not to invest in AI start-ups.

    That’s how Leftist fascists “disappear” dissenters. And that’s when Andreessen dropped the Democrats and endorsed Trump.

    1. Sam: Your point is well-taken. I am also personally concerned about AI’s use in de-banking efforts and regulatory political machinations under the scope of Operation Choke Point 2.0, starting with coordinated regulators being worked into the FDIC during the Obama administration.

      1. Lin,
        Now that is a scary thought, getting “debanked” based on your past purchases or even your comments on social media. Seeing as how some claim they will not serve people associated with the Trump admin in restaurants, or will give them poorer service, I would not put it past someone to do it.

      2. lin: Perfect analogy. OCP 2.0 was the same fascist mechanism: control citizens and their banking by controlling private companies. Instead of choking off speech they don’t like, they choked off finances: Government regulators see individuals who are “simply doing something government officials don’t like. Banks then ‘choke off’ those customers’ access to financial services, shutting down their accounts.” (then-president of the ABA Frank Keating)

        Who ended OCP 2.0? The “fascist” Trump.

  15. So Bill Gates thinks that automatic systems will help combat “disinformation” and “political polarization,” does he? Note to Bill: You, sir, are like the sorcerer’s apprentice: a pest conjuring up a world where no apologies are given, no corrections are made, and the AI brooms make more mistakes and a bigger mess at ever-increasing speed.

  16. Next occasion: sue. Don’t assume mistakes. Currently, when researching any news item on Google, searchers are met with pages and pages of NBC, CNN, ABC, and NPR. Occasionally a Fox News article will appear, and heaven forbid a non-leftist foreign source should appear. It is no longer the worldwide web. It is now a provincial, insular, leftist web.

    1. Is this lack of correction of the obvious untruths cited by Prof. Turley an admission that AI learning systems actually cannot learn?

Comments are closed.