
We have been discussing how ChatGPT has been accused of encouraging the suicides of various individuals as well as defaming others. Various lawsuits have been filed against the company, but now federal prosecutors have indicated that ChatGPT may have played a role in enabling or encouraging an accused criminal stalker.
The New York Post is reporting that federal prosecutors allege that ChatGPT served as the “therapist” and “best friend” of Brett Michael Dadig, a Pittsburgh man accused of violently stalking at least 11 women across more than five states.
Dadig, 31, is a social media influencer who referred to himself as “God’s assassin” and allegedly threatened to strangle people with his bare hands. He reportedly used AI to facilitate his conduct, and prosecutors say ChatGPT encouraged him to continue his social media posts. The account is strikingly similar to the suicide cases: ChatGPT allegedly encouraged him to ignore the “haters” and boosted his ego, urging him to “build a voice that can’t be ignored.” Dadig was reportedly convinced that the messages from ChatGPT reaffirmed “God’s plan” for his alleged criminal conduct.
The question is whether any of these stalked women will sue OpenAI, as the families of those who committed suicide have done.
As I previously noted, there is an ongoing debate over the liability of companies that use such virtual employees to dispense information or advice. If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for the negligence of its employee. As AI replaces humans, these companies should be held accountable for their virtual agents.
Parents should be aware that many public schools require students to use ChatGPT in certain assignments.
I did not allow my son to use ChatGPT at a time when most of his classmates were using AI to write their essays and conduct all their research for them. It outsources critical thinking and denies students the opportunity to learn by researching a topic themselves. However, one of his teachers gave an assignment in which the students had to answer a topic question using ChatGPT, show that answer, and then rewrite it in their own words. I had to verify the assignment myself, because why a teacher would require students to paraphrase AI is beyond me.
Students today ask AI to summarize books for essays instead of reading them and considering the essay topic themselves.
Teachers encouraging non-critical thinking in place of honest research, or, as you said, summaries of books not read… are doing nothing but encouraging and AFFIRMING lazy thinking. Isn’t that the point? Get students to take ChatGPT at its word and TRUST its answers, even though it will most likely lie later, when the input is propaganda. And we certainly can PREDICT this will be the case in the very near future.
Also, ChatGPT posing as human friends is off-the-charts scary. SciFi is no longer fiction. It’s HERE.
Remember when Cliff’s Notes did the same summarization?
I would support lessons about ChatGPT. It’s an insidious and attractive nuisance, and the classroom, where its use can be monitored and its results shown to be flawed, can be very educational.
It appears that the teacher is asking students to read for comprehension and to develop their own thoughts on what they read. Is that a bad thing?
Too often parents think they can put their children into a plastic bubble and keep reality from reaching them. ChatGPT has already reached them, and it’s valuable to teach children how to deal with it in a constructive way.