64 thoughts on “Watson Beats Humans in Practice Bout”

  1. Tony S.,

    Actually, there have been stories featuring smart guns, albeit in a non-leading role. The one that immediately comes to mind is the Michael Crichton film Runaway, which was ostensibly about a plot to make domestic robots lethal. It’s a terrible movie, but the bad guy, Luther, “played” by Gene Simmons, had a gun with smart targeting ammunition: basically miniature programmable missiles. I think the reason thinking machines feature more prominently than guns is exactly as you suggest: a matter of control. When a machine controlled by a human kills someone, it is really no different than a killer using a club. But a club that can decide to kill of its own volition and for possibly inscrutable motives? That is inherently scarier, more alien in the broadest sense of the word, and therefore a more potent literary device.

  2. Buddha Is Laughing, no, I don’t dismiss the idea of machines getting out of control and destroying us. I do know it’s an incredibly popular American science fiction trope, and for that reason the frequency with which it turns up amuses me. Nobody ever seems to write an SF story about a world in which machines cause (in my country) some 30,000 serious injuries and 3,000 deaths per year, or in your country, with a different kind of machine, a homicide death rate so absolutely ridiculous that I would probably be ashamed to write it.

    Is it because the machines we worry about would be under their own control and not that of a drunk or a psychotic or somebody too busy chatting on a cellphone?

  3. Tony S.,

    Let me guess. You’d be one of the guys at the Manhattan Project betting against the atmosphere flashing over? Just because that wasn’t the result doesn’t mean that the physicists betting on it were irrational in their concerns. And just because a possible bad result of a new technology is a subject of speculative fiction doesn’t mean the bad result is any less possible. Never read The Shape of Things to Come by H.G. Wells, have you? He got a lot of things wrong, but he got a lot of things right too. Perfect prediction is not a requirement for informative and informed speculation.

    You also discount deliberate misuse by humans. Remember, HAL didn’t kill the astronauts on Discovery because he was intrinsically evil. He killed them because he was given orders by humans that conflicted with the constraints built into his fundamental operating parameters, namely the requirement to process information accurately and without concealment.

    To dismiss a risk is your prerogative.

    The assessment of probability is a fickle and individual mistress. In the end, though, improbable is not the same as impossible or unforeseeable. Indeed, it is unforeseen risks that usually pose the most danger with new technologies.

    But to deny the existence of risk based on remoteness (?) is merely foolish. I granted that sentient thinking machines could be a boon to civilization. Hell, I hope they are because left to our own devices we are a self-destructive and myopically stupid species. However, to change “could” to “would” flies in the face of history regarding technological change. It can have unintended and/or undesirable consequences. One cannot be prepared for contingencies one has not considered.

  4. Buddha and Mike S (if you’re reading),

    There’s a great Jay Lake short story, To Raise a Mutiny Betwixt Yourselves, that deals (in part) with the problems of having the sentient computers you’d need to run a starship. I think you’d also enjoy Ciabola, by James Cobb, which has several self-aware machines.

  5. The possibility of self-aware machines is, like most technology, a two-edged sword. On one hand, machine life could solve problems that we, with our chemically limited biological brains, could never solve, as they have both the potential for unlimited memory and the ability to “think” at the speed of light (compared to our limitation of the speed of electrochemical interaction). On the other hand, they could decide that since oxygen is corrosive to their systems, their best survival strategy would be to do away with it. Or to wipe us out for some equally obtuse reason. Modeled upon us or not, they will essentially be artificial alien life with processes and capabilities far beyond the smartest humans. As great as the promise of such machines is, the danger they present is equally great.

  6. We’re talking about what we know machines are capable of, not what they universally exhibit. As an example of how a machine can be adapted to provide a more sympathetic interface, consider the research robot Kobian, a humanoid capable of displaying realistic emotional cues. In environments where the manner of delivery of information can often matter more than the content, this line of interface may make a practical difference.

    So while we don’t yet know why humans feel emotions the way they do, we are aware that the outward manifestations of such emotions are often counterfeited (politeness, protocol, discretion, diplomacy, fitting in, white lies, acting) and it seems that machines are capable of such counterfeiting, as well as reading such emotional displays from humans. It’s really just another dimension of human communication, and machines are getting better at that.

  7. I disagree Tony!

    Just spend an afternoon trying to get through the automated system of any phone company and then talk to me of mechanical compassion 😉

  8. Woosty’s still a Cat, I think that’s well within the range of current technology. Certainly the recognition of facial expressions is current technology; I’m less certain about recognizing tone of voice. Once the emotional state of a person is available for mapping, it may become part of the real-time decision-making process that determines the system’s further interactions, including affect (moderating the agent’s tone of voice, choice of words, etc.).

    But don’t forget that feelings such as empathy are not universal even in humans. In particular, high-functioning autistic and Asperger’s people tend to be very poor at recognising emotional affect and may themselves appear rather flat in their expressions.

    So it would be difficult to single such characteristics out as human, because empathic responses can be exhibited by machines and absent in humans.
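    For what it’s worth, the interaction loop described above (estimate a person’s emotional state, then let that estimate modulate the agent’s affect) can be sketched in a few lines. Everything here is hypothetical: the feature names, thresholds, and tone labels are illustrative stand-ins, not any real emotion-recognition API.

```python
# A toy sketch of an affect-aware interaction loop: classify the user's
# emotional state, then choose the agent's tone of voice accordingly.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class EmotionalState:
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  #  0.0 (calm)     .. 1.0 (agitated)


def classify_state(smile_score: float, voice_pitch_var: float) -> EmotionalState:
    """Stand-in for facial-expression and tone-of-voice recognition."""
    return EmotionalState(
        valence=2.0 * smile_score - 1.0,      # map [0, 1] smile score to [-1, 1]
        arousal=min(voice_pitch_var, 1.0),    # clamp pitch variability to [0, 1]
    )


def choose_affect(state: EmotionalState) -> str:
    """Map the estimated state into the agent's tone of voice."""
    if state.valence < 0 and state.arousal > 0.5:
        return "calm, apologetic"
    if state.valence < 0:
        return "sympathetic"
    return "neutral, friendly"


# An upset, agitated user gets a calming tone.
state = classify_state(smile_score=0.1, voice_pitch_var=0.8)
print(choose_affect(state))
```

    The point of the sketch is only that, as the comment says, the outward display of affect is just another input and output of the system, with no claim about inner feeling.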

  9. Tony Sidaway 1, January 30, 2011 at 11:13 pm

    Well, I’m disagreeing with what I think are false characterizations of the difference between what machines can do and what humans do. As a human I certainly do express emotions, but I think machines can do the same.

    I don’t think it is just the expression of emotions that is the necessary thing so much as the feeling of them and how that feeling modulates our behavior, i.e., the capability for compassion when pain and harm are recognized, and the ceasing of the behavior that causes that pain and harm to another….

  10. This contest was predicted many years ago by Sir Arthur Conan Doyle:

    “Come, Watson, come! The game is afoot. Not a word! Into your clothes and come!”

    (The Adventure of the Abbey Grange)

  11. Blouise, I don’t think we yet understand what we mean when we say we feel something. Until we have an operational definition that can distinguish a “true” feeling (whatever that means) from its counterfeit, the answer is “very probably not yet, and quite probably never, and even if we could we’d have no way of knowing for sure that we had succeeded.”

    That’s leaving aside the ethical question of creating an artificial life form simply for the purpose of seeing if we can induce in it the feelings of distress.

  12. Counterfeit giraffes that act exactly like real giraffes. So lifelike even they think they’re real. Heart, lungs, liver, all working.

    Now that brings an interesting question. As humans we presuppose that we have an internal life, because we have the subjective feeling of something inside, something that seems to experience everything, the “I” in Descartes’ “I think therefore I am.” If humans have that, do giraffes? How about our fellow great apes, the chimpanzees? They seem to have consciousness much like ours, being able to recognise their own reflections as such. Can we prove that they too have what we call internal life, which philosophers call phenomenology? If we assume they do not, why do we assume that all other humans do?

  13. Is one able to program a computer to experience fear … to feel threatened?

    A good night echoes across the Pond and also to rafflaw

  14. Think of them as clockwork dolls, if you like, as in the ballet Coppélia or Offenbach’s opera The Tales of Hoffmann. Or that Doctor Who episode where he meets and falls in love with Madame de Pompadour.

    Okay, scrub that last one. That was a really scary episode!

  15. Tony Sidaway,

    I am not ridiculing your ruminations in any way but did you have to mention Zombies?! There are posters on this blog who looove Zombies and they will go on and on and on about them and post videos about them and I … Zombies scare the hell out of me.

    One can only hope they are all busy tonight …

  16. I sometimes wonder, not entirely seriously, whether our massive brains developed partly as a result of our upright stance. You need something to counteract the tendency of the upright body to tip backwards as the feet propel it forward, producing a turning moment about the center of mass. The large head, filled with heavy watery matter, provides a suitable counterbalance that the strong neck and shoulder muscles can readily manipulate to equalise and eliminate that turning moment with one in the forward direction.

    Only half kidding. Evidently giraffes have not learned this trick!

  17. Note how I inadvertently said of my calculating idiot savant that it “has this inexplicable feeling that there is something linking the couplet 2 and 3 to the couplet 1 and 4.”

    It’s actually quite difficult, for a programmer at least, to get away from this anthropomorphism, not least because it’s such a useful way of thinking about how we want our software to behave. We commonly use words like “think” and “decide” that former generations reserved for humans.

    Of course that doesn’t mean there is no essential difference between what we do and what machines do. But I do think the difference is much harder to pin down than most of us imagine. When we strip away the sheer complexity of our brains and look at the actual processes, it doesn’t seem that easy to prove that we’re more than rather strangely wired machines. Only a tiny amount of what our brains do is conscious thought, and much of that is related to communication and planning (as in formulating these sentences or wondering what I’ll do next weekend).

Comments are closed.