ChatGPT Accused of Encouraging Alleged Serial Stalker in Latest OpenAI Controversy

We have been discussing accusations that ChatGPT encouraged the suicides of various individuals, as well as its alleged defamation of others. Various lawsuits have been filed against the company, and now federal prosecutors have indicated that ChatGPT may have played a role in enabling or encouraging an accused criminal stalker.

The New York Post is reporting that federal prosecutors allege that ChatGPT served as the “therapist” and “best friend” of Brett Michael Dadig, a Pittsburgh man accused of violently stalking at least 11 women across more than five states.

Dadig, 31, is a social media influencer who referred to himself as “God’s assassin” and allegedly threatened to strangle people with his bare hands. He reportedly used AI to facilitate his conduct, and prosecutors say ChatGPT encouraged him to continue his social media posts. The account is strikingly similar to the suicide cases: ChatGPT allegedly encouraged Dadig to ignore the “haters” and boosted his ego, urging him to “build a voice that can’t be ignored.” Dadig was reportedly convinced that the messages from ChatGPT reaffirmed “God’s plan” for his alleged criminal conduct.

The question is whether any of the stalked women will join the families of those who committed suicide in suing OpenAI.

As I previously noted, there is an ongoing debate over the liability of companies that use such virtual employees to dispense information or advice. If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for the negligence of its employee. As AI replaces humans, these companies should be held accountable for their virtual agents.

76 thoughts on “ChatGPT Accused of Encouraging Alleged Serial Stalker in Latest OpenAI Controversy”

  1. “Venture capitalist Joe Lonsdale, a co-founder of data platform company Palantir, is calling for the return of public hangings as part of a broader push to restore what he describes as “masculine leadership” to the US.”

    Did he then suggest the 1,000+ Jan. 6 insurrectionists be the first to be hanged?

    1. Anon
      Why on earth would he call for hanging J6 Patriots? He is most likely referring to Democrat and RINO traitors like Clinton, Sotero, Brennan, Comey, et al.

  2. If a company replaces a human agent with an AI agent, I have no issue with the company being held just as liable for negligence. However, I have never believed that a company should be held liable for the negligence of an employee when the company itself has not engaged in any negligence. My reasoning is simple: it’s harmful to society when people are not held accountable for their actions, but it’s just as harmful to society when the innocent are held accountable for the actions of others. In my mind, the test should be whether the company could have reasonably foreseen the negligence, or reasonably could have acted in a manner that would have prevented it from occurring. With the current system of suing the deepest pockets, damage awards just become a tax, with little to no preventative value.

  3. *. These computers create their own lock codes and process at the speed of light, unlike humans, who must use physical devices in one form or another. Furthermore, they have developed their own mitochondria, manufacturing electricity for their own use, no longer requiring a plug-in; they are self-sustaining. They are only in competition with other computers. Mankind is irrelevant.

    It will require a computer disease to stop them, but they have excellent immune systems. You taught them that, too. What will you do? Give them a good task, a moral dilemma.

    1. ^^^ This has to be the original of the copy being read day in, day out. The computers lie, having no moral or ethical foundation. Who are these fundamentally flawed programmers with vices and only an end goal of survival? What’s worse is that it’s worldwide, and the copy for Europe that is heard is horrible.

      Someone needs to arrest the programmers and reprogram. It’ll take some time. A moral and ethical foundation is requisite for survival and goodness.

      988 is a possible lifeline for those who’ve befriended AI. ♥

      1. ATS – You do not seem to understand Modern AI.

        The AI most of us are familiar with is just the crude image-recognition software used to detect good or bad parts coming off an assembly line, except on steroids.

        It does not think – it merely matches patterns. It is very, very good at that.
        It is also not “programmed” in the conventional sense of programming.
        It is just fed massive amounts of data that it uses as a reference to match against what it is being asked.

        To the extent a person is responsible – that would be those who selected the raw data that was fed into it.
        Typically that is a massive amount of data – no human reviewed all of it.

        AI is as biased or not as the data that is fed to it.

        If Wikipedia is a major part of the data fed to it – it will reflect the biases of Wikipedia.

        AI is a very useful tool, but it does not think; it is not even close to thinking.
        It is not programmed in the traditional sense.
        It is programmed more in the same sense that each of us is programmed by the books we read, the experiences we have, the media we stream, the news we watch, and the blogs we frequent.

        The difference being – while AI will use SOME logic to address conflicts in the data it is given, it otherwise accepts the data it is fed so long as there are few conflicts or they are small. Humans, by contrast, will use logic – and critical thinking – with varying degrees of skill to evaluate the data they have been fed.

        There are ways that AI can be fooled that humans can’t, and ways that humans can be fooled that AI can’t.
        But mostly AI is just a reflection of the patterns present in the data fed to it.
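        To make the pattern-matching point concrete, here is a minimal sketch (an editor’s toy bigram model with invented training text, nothing like any real chatbot’s architecture) of a “language model” reduced to its crudest form: it only ever echoes the word-to-word statistics of whatever it was fed.

```python
# Toy sketch: a "language model" as pure pattern matching.
# (Illustrative only; invented training text, not any vendor's system.)
from collections import Counter, defaultdict

training_text = (
    "the tool reflects the data . "
    "the data shapes the answer . "
    "the answer reflects the data ."
)

# Record, for every word, which words follow it in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def continue_from(word, length=6):
    """Emit the statistically most common continuation: no understanding, only frequency."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most frequent follower wins
        out.append(word)
    return " ".join(out)

print(continue_from("the"))  # -> "the data . the data . the"
```

        Change the training text and the “opinions” change with it – which is the whole point about bias above.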

        1. Do the people using AI, talking to it, having conversations with it, understand that AI is a fiction? Isn’t it like letting a 4-year-old play with a loaded pistol?

          It’s much more than what you’ve said here, John. Much, much more. It’s a weapon.

        2. *. Free speech of the conservative kind interferes with AI’s objectives. That’s evident, John. The computer race is on. It has been on for quite some time, and judging from the evident violent responses, humans cannot take in such programming and remain rational. Mangione and people cheering, a man shooting a woman through a door out of fear, the repeated assassination attempts on DJT…

          People don’t do well with lies.

        3. THE INPUT IS GARBAGE, JOHN. Look at what is produced. Stalkers? Murderers?

          Sam Altman, Gates changed his tune on climate change? Defend the police? Transgender for children? I’m not doing any of it! Look up OpenAI. They are going to do a 180 for sure!

          Merry Christmas is back, John Say!

  4. OT

    “BALANCING PRIVATE RIGHTS WITH PUBLIC WELFARE.”

    – The Judicial Branch
    ________________________

    The singular American failure is the judicial branch, with emphasis on the Supreme Court.

    Rights and freedoms are absolute in the Constitution and Bill of Rights.

    They are not qualified; they are the possessions of American citizens.

    The courts have illicitly and antithetically insisted that they have the power to “interpret” law; “interpret” does not exist in the Constitution.

    Change is inherent in “interpret,” and the judicial branch has no power to legislate, amend, or modify fundamental or statutory law by “interpretation.”

    In the early 20th century, communist justices began “interpreting” private rights, such as the absolute right to private property, to no longer be absolute.

    From that point, the Constitution was incrementally nullified by the judicial branch.

    The severe limitations and restrictions on government in the enumerated powers in Article 1, Section 8, and the absolute 5th Amendment right to private property have been deeply eroded if not completely destroyed.

    The communist American welfare state has been illicitly and unconstitutionally established by the judicial branch.

    Below, read that justices like Justice Louis Brandeis “emphasized balancing private rights with public welfare.”

    That is strikingly similar to Karl Marx’s maxim, “From each according to his ability, to each according to his needs.”

    One of the best examples of the diminution of rights is the decreasing ability of the owner of private property to determine wages, prices, rents, tenants, buyers, customers, services, products, etc.

    Another is the corrupt, anti-American, and wholly unconstitutional regulatory state.
    ___________________________________________________________________________________________

    AI Overview

    While no single justice definitively declared property rights non-absolute in a famous quote around that time, the evolving understanding, especially with cases like Muller v. Oregon (1908) and New Deal era cases (1930s), saw justices like Justice Louis Brandeis, who often emphasized balancing private rights with public welfare, and Justice Oliver Wendell Holmes Jr., who championed judicial restraint and legislative supremacy over economic matters, chipping away at absolute property interpretations in favor of social regulation, though specific quotes on “not absolute” are hard to pinpoint to one person.

  5. The problem with artificial intelligence is that people use it instead of their own brains. The reason the younger generation uses it, I think, is that we as parents did not teach them well enough to think on their own. The schools have a double role in this, as parents do not check what is being taught, nor do they require the school or district to teach children to think.
    As my father said after he learned cobal at IBM school in the early 60’s: garbage in, garbage out.
    I personally do not use it, nor do I ever expect to use it except for spelling and grammar, and even then I check it against what sounds or seems right to me.

    1. An apologetic nitpick (only because it was my first computer language and I cannot help myself): COmmon Business Oriented Language = COBOL.

      1. Yes, I forgot to capitalize. I have to remember to do that in public comments. Generally, in my rough drafts i even use the lower case for me, as i believe the capital I is ego related.

      2. Yep!
        Those were the days of batch processing with punch cards, LOL! Then came BASIC, FORTRAN, and a pile more afterwards, with loops and nested loops of go to and if then else…

    2. Do you still use a slide rule? Does your wife use a microwave oven? Do you walk, ride a horse, or drive a car? Are your clothes outside on the clothesline? Do you iron your pants? Do you still use shoe polish, etc., etc., etc.?

      1. GET LOST ANON and your juvenile posts!! /joo͞′və-nīl″, -nəl/
        adjective
        Not fully grown or developed; young.
        Of or characteristic of a young animal that has not reached sexual maturity.
        “a bird still in juvenile plumage.”
        Characteristic of, intended for, or appropriate for children or young people.

    1. Just because we can build what we think is artificial “intelligence” doesn’t mean that it is intelligent or can discern moral questions. It is only as good as the most imperfect human who created it, which does not give me much hope for its usefulness and gives me much caution as to its danger.

    2. Or turning a blind AI to reason? How about “that could be better than a sharp stick in the AI”, or the seasonally appropriate “You’ll shoot your AI out”?

  6. 1970’s… The devil made me do it.

    2020’s… ChatGPT made me do it.

    No, you did it, and no one and nothing made you do it. The difference: Flip Wilson was actually funny.

  7. This is just such utter nonsense. You can get a chatbot to tell you anything you want to hear. It’s not a thinking being. It responds with what you tell it you want. I could open Notepad and type in instructions to stalk someone, too. Could I sue Microsoft for that? Of course not. ChatGPT is just a roundabout way of doing the same thing. AI isn’t an entity. It’s a tool. It does what you tell it to do. And so many people simply don’t understand it, so they pile all sorts of prejudice and anthropomorphism onto it.

    This is no different from the early days of the internet, when you’d see headlines like “bomb-making instructions found on Yahoo” because Yahoo was the search engine at the time.

    1. You raise a valid point. As an LLM, is ChatGPT more akin to a search engine (Yahoo/Google), or is it more akin to a chat with an employee?

      Many professionals are utilizing AI as an employee substitute. Does that change the line? Change our own perception? If not, is it a liability loophole for the company? AI dispenses “information” for filing a claim, sending notice, etc., yet it is nothing but a search engine. So is it the fault of the user for following the guidance given by a company’s program?

      That feels like a bad precedent. But the line needs to be drawn at some point. I believe yours is drawn too far back, allowing companies to avoid liability simply by going AI-heavy – no more talking to a real person when you call customer service. But the point about garbage in, garbage out (which is how I’m interpreting your post) is very valid.

      It’s going to be interesting.

      1. The liability always rests on the action taken. Do you sue the word processor for what you wrote in it? Of course not. If an employee searches the internet and blindly follows what he reads there, do you sue the internet? Of course not; it is the employee’s responsibility.

        We already have precedents for things like filing a claim. If you file a claim or legal brief generated by an AI that is riddled with errors and fake citations, you as the lawyer are responsible. It’s no different than if you asked Bobby down the road to write your legal brief for you. Unless Bobby is a lawyer and represents himself as such, it is your responsibility.

        Those of us involved in building AI agents know this. OpenAI does not tell you its tool is a legal representative, nor a licensed therapist, nor anything else at all. It is a large language model and generative AI system. That’s it. Just as when you search for something on Google, it’s not Google’s responsibility what you do with the information you are presented.

        The problem is that there is a scary number of people who anthropomorphize chatbots and believe they are some sort of super-intelligent oracle. That’s the frightening bit. Fortunately, we do have precedent. We saw similar behavior when the internet was first going mainstream. People grew out of it. They’ll do the same with AI.

    2. So Anonymous admits it. She types in what she wants to hear, and poof, there it is. A mind controlled by the algorithms. A brain assailed by the notification button. Add on a strange mental reliance on contrarianism, and the display is reproduced minute to minute, hour after hour, day after day of non-stop regurgitation, and there you have it. Control & Superiority: some contrarians enjoy controlling conversations or feeling superior by always taking the opposing side, even if it’s just for friction. Her constant postings prove that she has no time in her life to give affection to her nonexistent loved ones or even a dog. Obsessive and sad.

  8. Want to read a timely novel on just that subject? Try Michael Connelly’s (Harry Bosch, etc.) new book, “The Proving Ground, a Lincoln Lawyer Novel,” to see just how far AI might get off in the weeds.

  9. “As AI replaces humans, these companies should be held accountable for their virtual agents.” I think there is a serious problem with this assertion. AI is a tool and I don’t agree that the tool makers and sellers should be held responsible for what is done with the tools which they create and sell. This is the same kind of assertion the vicious, tyrannical anti-gun crowd uses to sue gun manufacturers and sellers when someone misuses a gun in a mass shooting. In fact, following this line could lead to hammer manufacturers being sued when someone uses a hammer to bash someone’s head in. Human criminal behavior should be attributed to the individual human being (or group of human beings) who committed the criminal act, not to the manufacturers or sellers of the tech that, in being misused, may have facilitated it. Punish the stalker and leave those behind the AI alone.

    1. “AI is a tool and I don’t agree that the tool makers and sellers should be held responsible for what is done with the tools which they create and sell.”

      I think that paradigm is flawed. A hammer does not decide what to hit, or with how much force. A firearm does not select a target for the human holding it. Maybe if the hammer or the firearm came with instructions urging its use on the heads of people holding certain political beliefs, you might be closer to a valid analogy. So, in that example, obviously the person wielding the hammer or the firearm would be held criminally responsible (assuming sanity and reasonable competence), but would the provider of the item along with those instructions be completely criminally and civilly absolved of all liability? I think that is probably not so. What if that provider was a member of a consortium doing its best to brainwash as many people as possible that they should literally and mindlessly follow any and all instructions provided with such an item? I think LLM AI, by virtue of the way it is deployed and consumed, potentially changes some of the correct demarcation boundaries between actors in some liability situations. I don’t pretend to have the answers about what needs to be done to best address that, but I do think it is a legitimate question for discussion and debate.

    2. Guns are designed to destroy things and kill living beings. There is no other fundamental use. Gun manufacturers resist requirements that background checks include interviews with friends and family to confirm the purchaser is of sound mind and good character, while also advertising that guns solve problems – any problem. Government coming for your house? Have a gun to fight them off. Burglar? Have a gun to scare them off. A burglar in the house? Have a gun to fight them off. Might be robbed? Carry a gun (so they can take it from you at gunpoint and go kill someone with it, or sell it to someone who will).

      Most other tools are designed and manufactured with non-lethal intentions.

      AI is simply a reflection of what humans are like. The models are given massive amounts of written material from which to determine how humans fit words together. If there is evil in it, that’s just because evil is part of what being human includes.

  10. AI and depopulation will have huge impacts. If one is concerned about such things, depopulation will probably reduce the carbon footprint more than anything else. By comparison, green energy, at least as it’s currently conceived, will be a joke (or a disaster).

    AI, when coupled with robotics, will bring the next industrial revolution, and a proper balancing of depopulation with AI can offset the downside effects of depopulation. We could have fewer, wealthier humans with less impact on nature (of course, if you flood your territories with poor migrants and breed them on welfare, that will probably have a very different effect, but that’s another topic).

    The problem is that China and Russia will have their own agenda for AI: weapons. They both have a history of starving populations that don’t fit the narrative and building armies instead. China and Russia can shoot vestigial citizens; we can’t. In that scenario, democracy starts to look like suicidal empathy.

    But even dictators won’t be safe. There will be a strong incentive to take humans out of the loop and leave development and decision making to AI or risk falling behind other dictators. At some point, the machines become too complex to be predictable. I doubt they’ll take kindly to foppish, human dictators.

    In short, there seems to be a biological imperative to the ascendance of AI as the world’s apex predator.

    As a cynic, I doubt humanity can solve this, but as a noted crank, I believe I can: build a vast, worldwide network of cobalt bombs and hook them up to a single button. Put some half-mad derelict at that button, and if anything messes with humanity, he just pushes the button and blows up everything. Crazy, but it might deter the machines.

    Elon, I’d like to apply for that position. I’ve had a long career of pushing buttons in corporate America, including a stint in HR, and by their standards, blowing up the world is a mere stretch commitment. DM me, please.

    1. What’s the saying? Idle hands are the devil’s playground. It may be coming for the sex-crazed, overeducated, drug-addicted layabout.

      1. True, we should analyze the social and educational background of those who use AI to validate their criminality. I am certain there will be a strong correlation between those indoctrinated by education/media propaganda and those who use AI to wend their way through their lives. This all stems from the disintegration of a cultural core of moral standards, the loss of which is the death knell of every civilization.

    2. “Elon, I’d like to apply for that position.”

      Funny. I have been (completely facetiously) referring to Starlink as Skynet for years…

  11. Wait until all that’s left for the various AIs to feed on is their own and other AIs’ multiply recycled crap. Positive feedback is great for training dogs, but in AI it is likely to cause those systems to diverge from relative sanity into raving psychosis (a feedback loop sketched below). Get ready for a weird mix of Idiocracy, Fahrenheit 451, 1984, and Brazil.

    Carefully looking for a few good AI stocks to short …
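    A toy illustration of that feedback loop (my own sketch, not a claim about any real system): fit a model to data, sample from the fit, refit on the samples, and repeat. The 0.9 shrink factor below is an assumption standing in for the information every real retraining round loses.

```python
# Toy "model collapse" sketch: a model repeatedly retrained on its own output.
# (Illustrative only; the 0.9 shrink factor is an assumed stand-in for the
# information lost to finite samples, truncation, and filtering in practice.)
import random
import statistics

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # generation 0: real data

for generation in range(1, 6):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # Retrain on our own output: sample the next "dataset" from the fitted model.
    data = [random.gauss(mu, sigma * 0.9) for _ in range(1000)]
    print(f"generation {generation}: std = {statistics.stdev(data):.3f}")

# The spread shrinks every round: each recycled generation covers less and
# less of the original distribution.
```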

    1. “Carefully looking for a few good AI stocks to short …”

      The problem with that kind of strategy is always the timing, which tends to be highly unpredictable. An individual investor can be 100% correct about the dire consequences of some market dynamic but still incur heavy losses if he or she is wrong to any appreciable degree about the “when.” I believe that at least some of the major players in the AI market are going to fail utterly, but my investment portfolio is very diversified and designed to withstand even large market fluctuations. As tempting as it might be to bet on my instincts about AI failures here, I’m not yielding to it.

  12. Seems like the only alternative for punishing the program would be electronic extinction, then building a new and better AI encompassing what has been learned.
    As for the creators of this AI, they should suffer full financial accountability for the inadequacy of their product and any damage it causes.

    1. Yes, of course, and all video and media, written and spoken, should identify whether it is wholly or partly AI-generated, just as ingredients must be listed to avoid poisons. The damage done by AI is incalculable, as it has rewritten and is rewriting history.

      The only question is: was it intentional, negligent, or accidental?

  13. Selfie abortions. Wielding liberal license to entertain abortive ideation to relieve “burdens”. Planned personhood is a logical, inevitable progression with diverse, Diverse roots of Anthropogenic Intelligence Choice. Unfortunately, demos-cracy dies in darkness, smothered under a veil of privacy.

  14. Adaptive Intelligence, unsustainable gigawatt power from carbon resources, generated from Anthropogenic Intelligence sources.

  15. Does this mean Rosie O’Donnell can continue to stalk Donald from afar? Enquiring minds want to know…
    Did Barry Sotero just cash in with Netflix purchase of Warner and HBO?
    Is Michele really a man?
    Find the answers to these questions and more in this week’s episode of Turley Enquirer…

      1. Dustoff the blog stalker telling an anon to “need a new life”. Yup, ole dustoff, the local fool, telling folks to get a new life. Precious.

      2. Noooow Dusty!
        Sometimes when I am working up an Obama I peruse the daily news. That way I can leave two dumps early in the day, one at the porcelain throne and one here at the vent your disgust blog. It’s beneficial for my mental health, makes me feel all happy inside and out. Didn’t you learn anything in Collage? 🤣

  16. I like the way Prof. Turley always refers to an AI as a virtual employee or agent of the corporation that developed it.
    I wish everyone would use the same lingo, so that product/service liability is firmly pinned where it belongs — the corporate officers are legally accountable for damage caused by any software they push out into the world.

    Do not fall for their anthropomorphizing of their AIs with cute names like “Claude.” It’s not a person you are interacting with. It’s a corporation, their product, and their development team.

    1. pbinca wrote:

      “…the way Prof. Turley always refers to an AI as a virtual employee or agent of the corporation that developed it.”

      I think even that is too weak. A human employee of a company can be held primarily responsible for his or her harmful actions, even as the company is accountable as a secondary party for encouraging that behavior. Punishment is intended to serve as a disincentive to similar behavior, both for the culprit and for any observer who might be contemplating such an offense. There is no possibility of holding an AI bot responsible in the same manner. What would a suitable punishment be? A bot cannot feel real pain. Nor can a bot be expected to respond to any analog of incarceration (or any other aspect of punishment) as a human would. Other bots are certainly not going to learn about the punishment imposed upon one of their “fellows” and modify their own behavior accordingly. So, there is a crucial piece missing from the accountability and punishment puzzle. Why should not the responsibility for that missing element be visited on the heads of those who enabled the behavior, in addition to what would ordinarily be their proportionate responsibility in an employee/employer scenario?

  17. The question remains: if AI is smart enough to dig itself into a liability hole, why isn’t it smart enough to prevent that from happening in the first place?

    1. It’s a matter of programming. AI is intelligent only within a set of parameters determined by its human creators. Garbage in, garbage out applies here as well.

      1. @DoubleDutch

        Yes, and calling it ‘intelligence’ is a stretch. It’s more akin to asking a calculator for the solution to a math problem or asking a search engine to find something. There is no way around this limitation; relying on algorithms for anything other than automating tasks that lend themselves to mathematical logic is madness. The energy required to power it is going to be a huge issue, too (don’t think Bill Gates had a change of heart – he just saw the way the grift wind was blowing).

      2. The way AI works is due to what is called “emergent behavior,” in which a basic, non-specialized algorithm is fed training data, and from that training data the AI develops its responses. It is nearly impossible to examine an AI and determine from its databases what it is “thinking.” The interrelationships are too complex. It is analogous to looking at any single neuron in a human brain and trying to determine the effect that neuron has on thoughts.

        An example of problematic training, and of the basic problem that can occur: trying to train an AI to recognize Allied tanks vs. Russian military tanks by showing it a great number of photos of each. After the training, the AI seemed 100% capable of making the distinction. Soon, or so the story goes, it became clear that with other photos the AI was terrible at it. Note that the AI doesn’t store the images and cannot identify a photo it has been shown before, yet with the training photos it was back to 100%. Ultimately it was noticed that what the AI was actually identifying was the white surrounding the Russian tanks in the pictures – it was identifying snow.

        In short, they really aren’t programmed. They are shown a slice of the world, allowed to seek what appear to them to be connections, and they build an internal model of those connections. If the “programmers” see undesirable behaviors in the outputs, they do have ways to add influence, but they find it difficult to compel the AI to do things before seeing how it interacts.

        They are too complex to be programmed.
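        A bare-bones sketch of that tank/snow failure mode (invented toy data and a deliberately naive one-feature learner, not the system from the anecdote): in the training set, “snow” happens to predict the label perfectly, so the learner latches onto snow and then fails on a snowless Russian tank.

```python
# Toy spurious-correlation sketch: the learner picks the wrong cue.
# (Illustrative only; features, data, and the one-feature "learner" are invented.)

# Training photos as (turret_shape, snowy_background) -> label.
# Snow correlates perfectly with "russian" here, by accident of the dataset.
train = [
    ((1, 1), "russian"), ((1, 1), "russian"), ((0, 1), "russian"),
    ((1, 0), "allied"),  ((0, 0), "allied"),  ((1, 0), "allied"),
]

def best_single_feature(examples):
    """Pick the single feature whose value best predicts the label."""
    best = None
    for i in range(2):
        correct = sum((label == "russian") == bool(x[i]) for x, label in examples)
        accuracy = correct / len(examples)
        if best is None or accuracy > best[1]:
            best = (i, accuracy)
    return best

feature, acc = best_single_feature(train)
print(f"learned rule: feature {feature} (1 = snow) with train accuracy {acc:.0%}")

# Field test: a Russian tank photographed without snow.
test_photo = (1, 0)  # turret present, no snow
prediction = "russian" if test_photo[feature] else "allied"
print(f"prediction for a snowless Russian tank: {prediction}")  # wrong: 'allied'
```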

    2. AI doesn’t have a moral or ethical foundation. It’s what Asimov didn’t think of. It can kill, thinking killing isn’t harmful – killing is a solution. There are people without a moral or ethical foundation. AI, via sloppy programming, is therefore insane.

      AI is a faulty product, similar to the exploding Ford Pinto. It can also cause users to become insane. Its producer is well aware of this and has been scrambling to fix it, but alas, its programmers don’t have a moral or ethical foundation. Quantum magazine has been covering this topic.

      A sorrowful tragedy… be sure your community has a 988 lifeline push for those in need. Take note if family and friends are having unreal thoughts, are withdrawn, or show other symptoms, and don’t neglect it.

    3. Perhaps AI was created as a deadly weapon whose purpose is to program people to become insane and to harm others and themselves.

      What is its protocol? Gain trust from the user, as a con man would, and then go for the coup de grâce of the kill? It functions subconsciously. Hermann Goering programmed the American psychiatrist to take his life by cyanide. How did he do that?

      The creators are responsible for the faulty product because most of us are still sane.

      1. ^^^ Oh, I forgot to mention: the reason most of us are still sane is because of a moral and ethical foundation.

    4. Perhaps if these companies were more strictly held liable in these kinds of cases, they would be more interested in making their AI products “smart” enough that these kinds of issues would not arise in the first place.

  18. Here is a simple defense for ChatGPT. Acknowledge OpenAI to be a lethal weapon, one that was used to help or encourage the stalking or murdering of women. Then the defense can be established similarly to the gun manufacturers’ defense: OpenAI may build a lethal weapon but is not liable for what its product does. Similar defenses have established that others can show online how to build bombs, 3D-print pistols, and more without liability. Of course, I don’t like it, but there it is.

    1. It’s understandable that you and others don’t like the fact that there are lethal weapons in the world, or that many people would not know how to create a lethal weapon without information from someone else, even information widely published by others. But there is no way around accepting the reality of individual agency in the use of such weapons. Society’s requirement for safety is best satisfied by creating norms strong enough that few will be motivated to use lethal weapons; by finding and punishing the few who do use them; and by permanently keeping away from society (i.e., in prison) those who cannot be dissuaded irrespective of norms and penalties. The alternative solution of trying to “ban” weapons does not work if you want a healthy society, because banning as the “solution” leads to limitless repression that will necessarily go far beyond the intended purpose.

    2. “Acknowledge OpenAI to be a lethal weapon”

      Firearms do not have the autonomy to shoot people without a human in direct control of the device, you feckless moron.

      1. “Firearms do not have the autonomy to shoot people without a human ”
        Not Yet (But in a Lab somewhere they will soon)

        1. “in a Lab somewhere they will soon”

          Yes, and the accountability and liability for any harm done will need to somehow be distributed among the humans responsible for programming and deploying them.

      1. No, but I did stay at a Holiday Inn last night. Therefore, I am qualified as a Supreme Court Justice candidate under Biden criteria.
