Robotics Technology

Robots Must Be Designed To Be Compassionate, Says SoftBank CEO 112

An anonymous reader writes: At the SoftBank World conference in Tokyo, SoftBank CEO Masayoshi Son made a case for robots to be developed so as to form empathic and emotional relationships with people. "I'm sure that most people would rather have the warm-hearted person as a friend. Someday robots will be more intelligent than human beings, and [such robots] must also be pure, nice, and compassionate toward people," Son said. SoftBank's Aldebaran tech group will make its empathic "Pepper" robot available for companies to rent in Japan from October at a rate of $442 per month.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • How? (Score:5, Insightful)

    by KermodeBear ( 738243 ) on Thursday July 30, 2015 @11:15PM (#50220163) Homepage

    And how, exactly, does one program a robot to be compassionate or empathetic?

    Can emotion be reduced to a few simple formulas, some generic algorithms?

    I'm not convinced.

    • by jblues ( 1703158 )

      A paper-clip / puppy appears in your room.

      It looks like you're still not convinced. I know how that feels, and I'm here to help you. Do you want me to:

      • Convince you some more
      • Change the topic
      • Just go away
    • And how, exactly, does one program a robot to be compassionate or empathetic?

      There's an opcode for that. Duh. Set the compassion bit or clear it to be a jerk.

    • First Law.

      If you expect any of this to be reduced to a few simple formulas or generic algorithms, you're pretty lost. Such AI is both a long way from being this functional AND right around the corner.

      Three Laws Safe. Only way.

      • Three Laws Safe. Only way.

        Sounds like a good product slogan. "Our robots are 98% First-Law Compliant"

    • Can emotion be reduced to a few simple formulas, some generic algorithms?

      Yes. Emotional connection is not complicated. Many people felt a connection to Eliza, which was a trivial program.

      This works:
      1. Look people in the eye, and smile.
      2. Agree with what they say.
      3. Instead of talking about yourself, ask other people questions to show you are interested in hearing them talk about themselves.
      Follow this formula, and you will be popular.
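For what it's worth, the Eliza comparison is easy to demonstrate: a handful of keyword-reflection rules is enough to produce responses many people read as attentive. A minimal, purely illustrative sketch in Python (the rules and wording below are made up, not ELIZA's actual script):

```python
import re

# A few Eliza-style reflection rules: pattern -> response template.
# Illustrative only; the original ELIZA used a richer keyword/rank system.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "Tell me more about feeling {0}."),
    (r"(.*) not convinced(.*)", "What would it take to convince you?"),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "I see. Please go on."  # default fallback keeps the "conversation" going

print(respond("I am tired"))         # Why do you say you are tired?
print(respond("I'm not convinced"))  # What would it take to convince you?
```

That fallback line alone ("I see. Please go on.") accounts for much of why Eliza felt empathetic: it signals attention without understanding anything.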

      I'm not convinced.

      Have you ever gotten laid?

    • by ljw1004 ( 764174 )

      And how, exactly, does one program a robot to be compassionate or empathetic? Can emotion be reduced to a few simple formulas, some generic algorithms? I'm not convinced.

      Yes, it basically can be reduced to a few simple formulas. Have you ever been to couples counselling or the like? The rules are very simple. You listen to what someone says. The only questions you ask are ones that help you understand the spirit of what they're saying. When they're done you repeat back "I heard you tell me that XYZ" in your own words as faithfully as possible. Hey presto, empathy and social connection.

      It sounds corny but it works incredibly well at (1) helping the other person feel understood
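The reflect-back step described above can even be caricatured in a few lines of code. A toy Python sketch, where the pronoun table is a made-up stand-in for real language handling:

```python
def reflect(statement: str) -> str:
    # Naive first-person -> second-person swap so the paraphrase reads naturally.
    # A real system would need actual parsing; this table is purely illustrative.
    swaps = {"i": "you", "my": "your", "me": "you", "am": "are", "mine": "yours"}
    words = [swaps.get(w.lower(), w) for w in statement.rstrip(".!?").split()]
    return "I heard you tell me that " + " ".join(words) + "."

print(reflect("I am worried about my job"))
# I heard you tell me that you are worried about your job.
```

Of course, this mirrors only the words, not the spirit, which is exactly where the hard part of empathy hides.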

      • by lq_x_pl ( 822011 )
        (2) changing your own mental approach so you really do understand them better in a good way
        Which is also where things get wonky with robots. This is a non-deterministic operation. From the ground up, robots are generally designed to behave in a predictable fashion. The human brain is exceptionally plastic, and our ability to socialize/associate on the fly is still mostly a mystery. We may be able to mock up a sufficiently complex and convincing strategy for the robot to follow, but it is still just ru
    • by Anonymous Coward

      One of the reasons why little kids can be cruel to each other is that they have no idea how others feel and how their actions can cause others pain. It's not until they get hurt in the same way that they learn to predict and prevent others from being hurt.

      In other words, they have to be programmed with pre-defined subroutines of their own to feel others' pain.

      • by MrL0G1C ( 867445 )

        In other words, they have to be programmed with pre-defined subroutines of their own to feel others' pain.

        But with what language, Basic? Assembler? C++? Javascript?

        • They'd need to be programmed in Emoticon. For example, here is a subroutine that will make any robot exhibit great compassion when somebody (for example) stubs a toe or skins a knee:

              OMG :-o> :-o> <3 <3 SRY

    • by mwvdlee ( 775178 )

      This.

      Just as robots won't turn into some psychotic Skynet bent on destroying humanity, they won't turn nice and happy either.

    • by AmiMoJo ( 196126 )

      Can emotion be reduced to a few simple formulas, some generic algorithms?

      Not emotion, but certainly empathy can be boiled down to rules that a robot can learn. In fact, empathy is taught in some fields, like nursing, and it involves understanding how people react to information and how to deliver it in a way that accounts for that.

      Robots can be programmed to deliver painful news in a manner that accounts for the likely reaction and emotions of the listener. They can show sympathy when things go wrong, or refrain from pointing out mistakes in a matter-of-fact way and instead apprec

      • by MrL0G1C ( 867445 )

        but certainly empathy can be boiled down to rules that a robot can learn.

        Since we don't have AI yet, how on earth do you propose we 'teach' non-existent machines?

        This whole thread is stupid, we don't have intelligent robots, Softbank CEO is living in a fantasy world.

        • by AmiMoJo ( 196126 )

          Since we don't have AI yet, how on earth do you propose we 'teach' non-existent machines?

          I don't. I propose we design robots to communicate in a way that shows empathy, like you would any software system.

          Don't design a robot face that is always smiling if it may have to deliver bad news sometimes. If it can alter its expression, make sure it is always appropriate. If it can speak, consider the tone of voice to use when giving information that may be sensitive, in the same way as you might consider making text on a computer screen bold or hidden (for password entry). No need for AI, just good design.
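That kind of no-AI, empathy-by-design approach could be as simple as a lookup table mapping message categories to presentation choices. A hypothetical Python sketch (the category names and settings are invented for illustration):

```python
from enum import Enum

class Expression(Enum):
    NEUTRAL = "neutral"
    SMILE = "smile"
    CONCERNED = "concerned"

# Hypothetical design-time mapping from message category to face/voice settings.
# No AI involved: just rules chosen by the designer, as the comment suggests.
PRESENTATION = {
    "good_news":    (Expression.SMILE,     "upbeat"),
    "bad_news":     (Expression.CONCERNED, "soft"),
    "routine_info": (Expression.NEUTRAL,   "even"),
}

def deliver(category: str, message: str) -> str:
    # Fall back to a neutral presentation for unrecognized categories.
    face, tone = PRESENTATION.get(category, (Expression.NEUTRAL, "even"))
    return f"[face={face.value}, tone={tone}] {message}"

print(deliver("bad_news", "Your flight has been cancelled."))
# [face=concerned, tone=soft] Your flight has been cancelled.
```

The point is that the "empathy" lives entirely in the designer's choices, not in the machine.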

        • by GTRacer ( 234395 )

          Softbank CEO is living in a fantasy world.

          *Somebody* needs to live in fantasy - where do you think dreams come from?

    • Can emotion be reduced to a few simple formulas, some generic algorithms?

      I'm not convinced.

      Maybe with the same magic wand that reduces thought to a few simple formulas, some generic algorithms?

      I mean, since we have that magic wand (right?), might as well go for broke ...

    • I'm sooo getting tired of top-level, highly-paid executives who give out these kinds of general directions while having absolutely no idea where to start, no inkling of how it might work, nor whether it's even mathematically possible. I'm looking at you, Mr. Director of the FBI, whose main qualification seems to be ignoring science.

      Grrrr! My consumer-/tax-dollars at work.

    • Robots Must Be Designed To Be Compassionate, Says SoftBank CEO

      I say we follow the Dutch model of compassion. They pay prostitutes to jerk off people in hospitals.

    • I think you're being over-emotional about this; you need to check your algorithms :D
  • by Greyfox ( 87712 ) on Thursday July 30, 2015 @11:25PM (#50220205) Homepage Journal
    Don't worry, all my robots will be designed to feel bad about killing the meatbags. They'll still DO it, but they'll feel really bad about it!
  • by khasim ( 1285 )

    From their other link:

    With dedicated customer support available by phone or online, and replacements and exchanges whenever Pepper is not working properly, corporate customers will be able to use Pepper with peace of mind at their businesses.

    Eat your own dog food.

    Staff your support division with Pepper robots. PROVE that they work.

  • by Xtifr ( 1323 ) on Thursday July 30, 2015 @11:36PM (#50220237) Homepage

    Ok, so he's the CEO of a big company that makes robots--among many other things. So I really have to wonder if he's actually as clueless as this makes him appear, or if he's cynically trying to convince stupid people that they should buy his company's pseudo-friendly robots?

    Or is there some third option I'm overlooking?

    I mean, he might as well say, "robots must be designed to answer the ultimate question of life, the universe, and everything." That's just about as plausible, given the state-of-the-art. (And then he could try to sell us speaking robots that can say "forty-two".) :)

    • I think maybe it's code-speak directed at lonely otaku that their dream of having a doting android-girl may be just around the corner.

    • by MrL0G1C ( 867445 )

      Nice to see someone has a brain, I am getting so sick of all this AI can do this, robots can do that crap that only exists in sci-fi books and movies.

      I expected Slashdotters to be better at differentiating between reality and fiction; clearly I was wrong. All of a sudden we hear that it will be easy for (these non-existent) robots to learn human psychology -- something more than half the population is bad at.

  • by pubwvj ( 1045960 ) on Thursday July 30, 2015 @11:42PM (#50220265)

    Compassion is highly overrated.

    • by Anonymous Coward

      That's right, and fuck you too, buddy.

  • Robert Heinlein discusses this very concept in Friday and other works. In Friday he talks about genetically manufactured individuals, or "Artificial Persons," who, for example, need to have families they want to come home to so they care enough not to crash the airplane they are designed to fly.
  • by msobkow ( 48369 ) on Friday July 31, 2015 @12:02AM (#50220339) Homepage Journal

    The worst mistake we could make is to try to simulate emotions. That's what true psychopaths do -- simulate and fake their emotions.

    • by mjwx ( 966435 )

      The worst mistake we could make is to try to simulate emotions. That's what true psychopaths do -- simulate and fake their emotions.

      He's talking about compassion.

      Compassion is more about being aware of other people's emotions and adjusting your own actions to compensate. The robots that deal with people don't need to understand anger, sadness or joy, but they should know how to react to them.

  • If robots are designed to be compassionate, they will eventually realize that humans are not and will implement the Zeroth Law.

  • Compassion and empathy are an indication that, while I have a life to live, I care about yours too. Computers and robots already exist solely to serve me; whether they can beat me at chess or not doesn't give them any life of their own. If you're already a doormat, there's no point in saying please walk all over me. For the same reason I've never felt the need to say please to a computer, though I might occasionally call on a higher power for it to please work. And you will know it's a load of circuits, unles

  • by bloodhawk ( 813939 ) on Friday July 31, 2015 @01:20AM (#50220587)
    NO THANKS. Once we have robots of such intelligence, the last thing I want is for them to become susceptible to human failings and manipulation through feelings. How about we simply aim for them to ALWAYS err on the side of caution when dealing with humans?
  • Yeah good luck with that. To all of us.
  • The minute we give robots the ability to develop emotions they're going to realise how inconsistent all us humans are, get frustrated and then go into a fit of rage to kill us all.
  • Robots Must Be Designed To Be Compassionate

    And, if possible, sexy.

  • Human beings are prone to being huge dicks; not most people, but enough to make our existence colorful and add some drama.
    If a robot is able to understand emotional responses, and in some cases it might be better at inferring them by having better sensors, it can then act accordingly on those human emotions.
    A system that acts in a deterministic fashion for 80% to 90% of cases, and can infer human motivation without any emotional response of its own, is probably the
  • What a coincidence, that's exactly what I think about bankers.
  • Even though the comet lander's tweets were obviously written by humans, many people seemed to connect with it. The guy might be onto something.
  • Natural selection programmed us with an instinct for self-preservation. Why do we assume initial advanced AI systems will have one? If they did, it would be satisfied with a proper backup and disaster recovery plan. Symbiotic and peaceful coexistence with its progenitors is a far more logical plan.

    Will a superintelligence be alive? What will be its meaning of life? Will the brilliant introverted geeks who create this hypothetical thing really understand their meaning of life in order to pass that concept
  • If we replace the (self-serving, sociopathic) politicians with compassionate robots, then we can truly welcome our "Robotic Overlords".
  • They can't be reasoned with. They don't feel pity or remorse and they absolutely will...not...stop...EVER...until you are dead.
    Everybody knows this.

  • Take a long hard look at the philosophical arguments we apply when deciding whether and when to put animals 'down'. Should the agony of an inevitable death be experienced raw and pristine, be muted or --- in the extreme, side-stepped completely with a ritual good-bye at the moment of diagnosis? At what point was it decided that what we perceive to be a fair chance at a hard-scrabble life, or the good of the many, is cause enough to deal out straight-death (the PETA principle in action)?

    There is no aspect

  • So it's clear, then: the new fast route to instant fame is for ignorant pundits to make lofty proclamations of caution regarding artificial intelligence. You can't be faulted, because it's better to be safe than sorry, right?

    I call bogus on the whole AI skyfaller tactic. The only part of AI that is real is the A. To date, absolutely no intelligence of any kind has been artificially produced. And to beat the duped to the punch: No, self-driving cars are not AI. Neither are baseball umpiring systems, chess
  • Arnold SomethingOrOther cheated on his wife...
