And so it begins. Scientists at the Georgia Institute of Technology have taught robots for the first time how to deceive humans. Didn’t we learn anything from Stanley Kubrick’s 2001: A Space Odyssey?
The team implanted computer algorithms that let a robot ‘decide’ whether it should deceive a human or another robot, and gave it the ability to strategize about the best means of concealment.
Worry not. As HAL said, “I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.”
Next they will implant the programs for Lust, Gluttony, Greed, Sloth, Wrath, Envy, and Pride.
Source: Daily Mail
Jonathan Turley
Maaarrghk!,
Very subtle, lol. Nice Douglas Rain impersonation, by the way.
This is highly irregular…
Okay, is anyone else thinking of Terminator? This is what will happen if we carry on. Why would you want robots to deceive humans? There is no logic in that.
I would think that if an AI ever became sentient, it would recognize deception as a valuable tool for self-preservation; we wouldn’t have to do any programming on that front. As for lust, gluttony, etc., it might just recognize them as a fun way to pass the time between its other tasks 🙂
Apparently the Japanese are about 5 months away from unleashing robotic nurses. Seriously. The story is, they’ll work in “pods” of three, and communicate with each other via email, doing menial tasks for the first few years. And then . . . ?
Just wait till the military gets their hands on a few Nurse-bots. They’ll be adding skills that have nothing to do with bed pans.
Look out.
What a waste of tie and money.
I mean time and money.
Vince,
Good Asimov story. I’d never heard that one.
I’m beginning to think scientists are kind of dumb in a way. They haven’t been able to figure out how to convince people that we’re changing the planet in ways that will eventually kill the human species. If we can’t fix that problem, then what else really matters?
“Next they will implant the programs for Lust, Gluttony, Greed, Sloth, Wrath, Envy, and Pride.”
Why not? If we are willing to imbue our Gods and, by example, our children with these things, why not our mechanical slaves?
I’m not going to comment till I check with Slarti…
I think we have already seen one in action, two maybe… BuBya… and…
I think what this means is that the government will be buying some of them to replace “Public Relations” people.
Does this violate the Three Laws of Robotics?
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These folks may have had this in mind.
The algorithms “let a robot ‘decide’ whether it should deceive a human or another robot and gave it strategies to give it the best chance of not being found out.” For example, a “search and rescue robot may need to deceive a human in order to calm or receive cooperation from a panicking victim.”
So a robot could deceive a human if necessary to protect the human from harm in compliance with the First Law. It could not deceive the human to harm him.
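For the programmers in the audience, here is a toy sketch of that First Law test. It is purely illustrative; the function and flag names are my own invention, not anything from the Georgia Tech algorithms:

```python
# Toy sketch of the First Law test described above.
# Purely illustrative: these names are invented, not taken
# from the Georgia Tech code.

def may_deceive(prevents_harm_to_human: bool,
                causes_harm_to_human: bool) -> bool:
    """Deception is permitted only when it protects a human from
    harm (First Law), and never when it would harm a human."""
    return prevents_harm_to_human and not causes_harm_to_human

# The article's search-and-rescue example: deceiving a panicking
# victim in order to calm him is First Law-compliant...
print(may_deceive(prevents_harm_to_human=True,
                  causes_harm_to_human=False))   # True
# ...but deceiving a human in a way that harms him is not.
print(may_deceive(prevents_harm_to_human=False,
                  causes_harm_to_human=True))    # False
```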
Isaac Asimov himself went with friends to see the movie “2001” and its HAL. At the intermission, he was outraged that the robot had terminated the lives of the hibernating astronauts.
He was indignant.
They were violating the First Law!
Finally, a friend asked him:
“Well, Isaac, why don’t you just strike them dead?”