Robots Must Be Designed To Be Compassionate, Says SoftBank CEO
An anonymous reader writes: At the SoftBank World conference in Tokyo, SoftBank CEO Masayoshi Son made a case for robots to be developed so as to form empathic and emotional relationships with people. "I'm sure that most people would rather have the warm-hearted person as a friend. Someday robots will be more intelligent than human beings, and [such robots] must also be pure, nice, and compassionate toward people," he said. SoftBank's Aldebaran tech group will make its empathic "Pepper" robot available for companies to rent in Japan from October at a rate of $442 per month.
Re: (Score:1)
And the manufacturer will say "We did. That rubber bumper at the end of the gun isn't just there for show."
Re: (Score:2)
There won't be a robot wars because they will listen.
Instead we will just manufacture a robotic version of Kyubey from Madoka Magica that will take over the world by manipulating our primitive emotion driven brains like puppets.
He will not need to fire a single bullet, but just say the right words.
Re: (Score:2)
I wonder what it is that makes people think that robots can be given emotions, when we have no idea how brains generate emotions? And even when we do figure out how brains do that, what makes people think that it doesn't require living cells to have feelings, sensations and emotions? In which case you would have to grow robots from single cells, which would mean they were living, rather than robots.
Re: (Score:2)
Re: (Score:1)
"I wonder what it is that makes people think that robots can be given emotions, when we have no idea how brains generate emotions?"
Ah, how refreshing. An intelligent thought on this subject.
"what makes people think that it doesn't require living cells to have feelings, sensations and emotions?"
Damn... I jumped the gun.
Cells are really vessels containing incredibly complicated chemical reactions. They communicate through various electrical and chemical means. I don't know if there is any non-material, spi
Re: (Score:3)
I would like it very much if you could provide some information that suggests that all you need is complexity to make something with feelings and emotions. Cells respond to chemicals, like hormones and neurotransmitters because that is how signaling pathways in the target cells are activated. But the key is that they are alive, which allows for sensations and in more advanced organisms with a complex central nervous system, emotions. Just because emotions involve "chemical reactions" (like the signaling cas
Re: (Score:2)
"I would like it very much if you could provide some information that suggests that all you need is complexity to make something with feelings and emotions."
I don't know why you would expect evidence of that from me. It isn't quite what I said! Maybe there is some aspect to our existence as people that goes beyond the materials and energy interacting within us. I don't know. Good luck proving it either way. But.. even if something like that does exist.. is it generated by the formation of our bodies? Is it
Re: (Score:2)
OK, I agree completely that we don't know what "alive" means in mechanistic terms. I was going to bring that up, so I am glad you beat me to it. If we cannot define life, then we have a long ways to go before we can imitate life's abilities, wouldn't you agree?
There is an enormous gap between defining life, and making a machine that can do what a human brain does. Considering that humans are still working on what "life" is, there is little doubt that it will be a long time before we can create something with
Re: (Score:2)
If you search PubMed for articles on artificial life you get articles like this, where they make it quite explicit that they are just trying to imitate life-like behaviors, not create them de novo. Any "emotions" would be simple imitations of behavior.
Artif Life. 2015 Spring;21(2):141-65. doi: 10.1162/ARTL_a_00164. Epub 2015 May 7.
On the Evolution of Behaviors through Embodied Imitation.
Erbas MD1, Bull L, Winfield AF2.
Abstract
This article describes research in which embodied imitation and behavioral adaptat
Re: (Score:2)
Splat! I just flew into the paywall.
Re: (Score:2)
I just tried to get it through the University library, but they do not subscribe to that journal. I wish they would just switch to the open access model for all science publications. This old print model of publishing is just dumb in the Internet age.
Re: (Score:2)
Ok... I wrote a big huge TLDR response to this but then I decided to open a new-tab and re-read the comments that began our discussion and deleted it. Now I will write another :-)
Life can be described in 'mechanistic' terms. A definition that I have heard along those lines is that it must have metabolism, reproduction, self-repair and evolution to be life. Usually this results in discussions about what happens if you find something that only fulfills some of the 4. Also commonly mentioned are viruses, which in
Re: (Score:2)
That is the question plaguing Neuroscience, Biology, Psychology and Cognitive Neuroscience (including AI work). We don't know how brain states generate the affective sensations that we experience (and I assume all animals with a sufficiently complex CNS experience, which probably includes fish, who have a very nicely complicated CNS). Right now "artificial life" researchers are struggling to come up with even a proto-bacterium-like entity from scratch. And simply putting the enzymes and transporters into a
Re: (Score:2)
I see several questions in here and I'm not entirely convinced that it can be proved that the answers are all related.
When I think of generating sensations I think of nerves sending messages to one another to get from some point in ones skin to some point in the brain. It seems trivial to grasp that concept while at the same time the complexity at the cellular level and below could be beyond comprehension. Still, this could be very easily seen as being similar to signals on a wire between a sensor and a CPU
Re: (Score:2)
Clearly, the brain works like other tissues in the body where the individual cellular interactions at a smaller scale exhibit emergent properties at the organ scale. The human brain has a complexity that no one person could ever hope to understand, and large teams of scientists struggle to understand small bits of the puzzle. Take a look at these videos of a new technique for looking at the micro-structure of the mouse brain. This is a fascinating technique that required the work of many neuroscientists and
Re: (Score:2)
You won't need a Turing test, because it will act alive
Too bad you will have no way of recognising whether it is "acting alive" or just faking.
It will have to say "ouch!" and really mean it when you stick it with a pin
Again, you will have no way of knowing whether it "really means it" or is just faking.
The whole issue of machine sentience is moot - and a foolish waste of time.
Re: (Score:2)
We will see how easy it is to tell if something is alive or sentient when someone actually makes an artificial being. My guess is that it will be very easy to tell if it is a robot as long as you are allowed to fully interact with it, and touch it and move it. I wonder how much effort people are going to be willing to put into making robots act alive when it is just imitation. The Japanese are doing it quite a lot, and the end results are extremely easy to discern from living organisms. I have heard so much
Re: (Score:2)
"Now.. whether something like this could actually be built is an entirely different question. The complexity would be unimaginable. But then.. I don't see anybody building humans out of raw chemicals either yet nobody is going to argue that a being made of cells can't have feelings, sensations and emotions.
Morgauxo,
The important point is that nobody has made even the most rudimentary artificial intelligence, let alone one with feelings and emotions. The SoftBank blatherer is just using the latest trick to get noticed: spew dire warnings about the risks of AI. AI, which doesn't exist and isn't even remotely on the horizon, even after lots of naturally intelligent people have devoted their lives to achieving it.
Personally, I think it's demons from the pit of hell we'd better watch out for. We have more evide
How? (Score:5, Insightful)
And how, exactly, does one program a robot to be compassionate or empathetic?
Can emotion be reduced to a few simple formulas, some generic algorithms?
I'm not convinced.
Re: (Score:1)
A paper-clip / puppy appears in your room.
It looks like you're still not convinced. I know how that feels, and I'm here to help you. Do you want me to:
Re: (Score:2)
And how, exactly, does one program a robot to be compassionate or empathetic?
There's an opcode for that. Duh. Set the compassion bit or clear it to be a jerk.
Re: (Score:2)
First Law.
If you think any of this is expected to be reduced to a few simple formulas, some generic algorithms, you're pretty lost on this. Such AI is both a long way from being this functional AND right around the corner.
Three Laws Safe. Only way.
Re: (Score:2)
> Three Laws Safe. Only way.
Sounds like a good product slogan. "Our robots are 98% First-Law Compliant"
Re: (Score:2)
Can emotion be reduced to a few simple formulas, some generic algorithms?
Yes. Emotional connection is not complicated. Many people felt a connection to Eliza, which was a trivial program.
This works:
1. Look people in the eye, and smile.
2. Agree with what they say.
3. Instead of talking about yourself, ask other people questions to show you are interested in hearing them talk about themselves.
Follow this formula, and you will be popular.
I'm not convinced.
Have you ever gotten laid?
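For what it's worth, the Eliza effect GP mentions really is easy to reproduce. Here's a toy Python sketch of Eliza-style keyword reflection; every pattern and canned reply is made up for illustration, not taken from the original Eliza script:

```python
import re

# Toy Eliza-style responder, sketching the "trivial program" mentioned
# above. Patterns and canned replies are invented for illustration.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    # Swap first-person words for second-person ones.
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    match = re.match(r"(?i)i (?:feel|am) (.+)", statement)
    if match:
        # Rule 3 from the post above: turn their statement into a
        # question about them.
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"You said '{reflect(statement)}'. Tell me more about that."

print(respond("I feel ignored by my robot"))
# -> Why do you feel ignored by your robot?
```

A dozen lines of string substitution, and people in the 1960s poured their hearts out to it. That's how low the bar for "emotional connection" actually is.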
Re: How? (Score:1)
Re: (Score:2)
I think what he is saying is that every human is a deceitful liar, and I agree. If you know of any human who has never lied, I will change my mind.
Re: (Score:2)
I think it is pretty safe to say that at any given time, 1 to 2% of the human population has never lied.
So I guess you should change your mind.
Re: (Score:2)
And how, exactly, does one program a robot to be compassionate or empathetic? Can emotion be reduced to a few simple formulas, some generic algorithms? I'm not convinced.
Yes it basically can be reduced to a few simple formulas. Have you ever been to couple's counselling or the like? The rules are very simple. You listen to what someone says. The only questions you ask are ones that help you understand the spirit of what they're saying. When they're done you repeat back "I heard you tell me that XYZ" in your own words as faithfully as possible. Hey presto, empathy and social connection.
It sounds corny but it works incredibly well at (1) helping the other person feel understo
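The counselling loop described above is mechanical enough to write down directly. A minimal Python sketch (the function names are mine, not from any real counselling material or software):

```python
# Reflective-listening steps from couples counselling, as described
# above. Function names and wording are invented for illustration.

def clarifying_question(topic):
    # Only ask questions that help you understand the speaker's point.
    return f"When you say '{topic}', what does that look like for you?"

def active_listen(paraphrase):
    # Repeat the speaker's point back in your own words.
    return f"I heard you tell me that {paraphrase}."

print(clarifying_question("you never listen"))
print(active_listen("you feel unheard when we talk"))
```

The point isn't that this is deep, it's that it isn't: a scripted loop of clarify-then-paraphrase produces most of what people experience as empathy.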
Re: (Score:2)
Which is also where things get wonky with robots. This is a non-deterministic operation. From the ground up, robots are generally designed to behave in a predictable fashion. The human brain is exceptionally plastic, and our ability to socialize/associate on the fly is still mostly a mystery. We may be able to mock up a sufficiently complex and convincing strategy for the robot to follow, but it is still just ru
Re: (Score:1)
One of the reasons why little kids are able to be cruel to each other is that they have no idea how others feel and how their actions can lead to others feeling pain. It's not until they get hurt in the same way that they realize how to predict and prevent others from being hurt.
In other words, they have to be programmed with pre-defined subroutines of their own to feel others' pain.
Re: (Score:2)
But with what language, Basic? Assembler? C++? Javascript?
LOL ;-) (Score:2)
They'd need to be programmed in Emoticon. For example, here is a subroutine that will make any robot exhibit great compassion when somebody (for example) stubs a toe or skins a knee:
OMG :-o> :-o> <3 <3 SRY
Re: (Score:2)
This.
Just as robots won't turn into some psychotic Skynet bent on destroying humanity, they won't turn nice and happy either.
Re: (Score:2)
Can emotion be reduced to a few simple formulas, some generic algorithms?
Not emotion, but certainly empathy can be boiled down to rules that a robot can learn. In fact empathy is taught in some fields, like nursing, and it involves understanding how people react to information and how to deliver it in a way that accounts for that.
Robots can be programmed to deliver painful news in a manner that accounts for the likely reaction and emotions of the listener. They can show sympathy when things go wrong, or refrain from pointing out mistakes in a matter-of-fact way and instead apprec
Re: (Score:2)
Since we don't have AI yet, how on earth do you propose we 'teach' non-existent machines?
This whole thread is stupid, we don't have intelligent robots, Softbank CEO is living in a fantasy world.
Re: (Score:2)
Since we don't have AI yet, how on earth do you propose we 'teach' non-existent machines?
I don't. I propose we design robots to communicate in a way that shows empathy, like you would any software system.
Don't design a robot face that is always smiling if it may have to deliver bad news sometimes. If it can alter its expression, make sure it is always appropriate. If it can speak, consider the tone of voice to use when giving information that may be sensitive, in the same way as you might consider making text on a computer screen bold or hidden (for password entry). No need for AI, just good de
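That design rule needs no AI at all, just a lookup from message type to face and tone. A minimal Python sketch (the sentiment labels and expression names are invented for illustration):

```python
# Map message sentiment to an appropriate face and voice, per the
# design rule above. Labels and settings are invented examples.
PRESENTATION = {
    "good_news": {"face": "smile", "voice": "upbeat"},
    "bad_news":  {"face": "neutral", "voice": "soft"},
    "sensitive": {"face": "concerned", "voice": "quiet"},
}

def deliver(message, sentiment):
    # Default to the cautious bad-news style for unknown sentiments.
    style = PRESENTATION.get(sentiment, PRESENTATION["bad_news"])
    # A real robot would drive actuators/TTS here; we just describe it.
    return f"[face={style['face']}, voice={style['voice']}] {message}"

print(deliver("Your flight is delayed.", "bad_news"))
# -> [face=neutral, voice=soft] Your flight is delayed.
```

Same idea as choosing bold text for a warning or masking a password field: presentation matched to content, no understanding required.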
Re: (Score:2)
Re: (Score:2)
*Somebody* needs to live in fantasy - where do you think dreams come from?
Re: (Score:2)
There's a big difference between having vision and living in a fantasy.
Re: (Score:2)
Can emotion be reduced to a few simple formulas, some generic algorithms?
I'm not convinced.
Maybe with the same magic wand that reduces thought to a few simple formulas, some generic algorithms?
I mean, since we have that magic wand (right?), might as well go for broke ...
Re: (Score:2)
Grrrr! My consumer-/tax-dollars at work.
Re: (Score:1)
Robots Must Be Designed To Be Compassionate, Says SoftBank CEO
I say we follow the Dutch model of compassion. They pay prostitutes to jerk off people in hospitals.
Re: (Score:2)
Re: (Score:1)
Re: Compassionate? (Score:1)
Oh They WILL Be (Score:3)
Ha! (Score:2)
From their other link:
Eat your own dog food.
Staff your support division with Pepper robots. PROVE that they work.
The real question... (Score:3)
Ok, so he's the CEO of a big company that makes robots--among many other things. So I really have to wonder if he's actually as clueless as this makes him appear, or if he's cynically trying to convince stupid people that they should buy his company's pseudo-friendly robots?
Or is there some third option I'm overlooking?
I mean, he might as well say, "robots must be designed to answer the ultimate question of life, the universe, and everything." That's just about as plausible, given the state-of-the-art. (And then he could try to sell us speaking robots that can say "forty-two".) :)
Re: (Score:3)
I think maybe it's code-speak directed at lonely otaku that their dream of having a doting android-girl may be just around the corner.
Re: (Score:2)
> lonely otaku dream of having a doting android-girl may be just around the corner.
Who would want an android-girl, considering that OS has big security problems and can be easily hacked or infected on-line? Everybody will want an iOS-girl, who gives them an Apple. (An apple a day keeps mankind away from the gates of Eden.)
I think iOS-girl would be a bit high-maintenance and expensive. She'd want the latest updates and newest hardware accessories, and likes to do things her own way.
On the other hand, I suspect Android-girl might have some serious abandonment issues, although at least she's fairly open about most things.
Windows 10-girl is actually pretty cute, and a lot nicer than she used to be, but only if you can put up with her family.
Re: (Score:2)
Nice to see someone has a brain, I am getting so sick of all this AI can do this, robots can do that crap that only exists in sci-fi books and movies.
I expected slash-dotters to be better at differentiating between reality and fiction, clearly I was wrong. All of a sudden we hear that it will be easy for (these non-existent) robots to learn human psychology - something more than half the population is bad at.
Compassion is highly overrated (Score:3)
Compassion is highly overrated.
Re: (Score:1)
That's right, and fuck you too, buddy.
heinlein theme (Score:1)
Simulated emotions? Big mistake (Score:4, Interesting)
The worst mistake we could make is to try to simulate emotions. That's what true psychopaths do -- simulate and fake their emotions.
Re: (Score:2)
The worst mistake we could make is to try to simulate emotions. That's what true psychopaths do -- simulate and fake their emotions.
He's talking about compassion.
Compassion is more about being aware of other people's emotions and changing/compensating with your own actions. The robots that deal with people don't need to understand anger, sadness or joy, but they should know how to react to it.
Re: (Score:1)
Without sentience, they would have no reason to screw it up like we do, and a robot with compassion can just do things where compassion is an asset, and a robot without compassion can do the things where compassion is a detriment. Humans fail at this because we have sentience and free will, so we often choose the things we're not as well-suited to because we WANT to rather than because we can make the best contribution there. Robots wouldn't need to make that mistake.
Great way to get the zeroeth law (Score:2)
If robots are designed to be compassionate, they will eventually realize that humans are not and will implement the zeroeth law.
What a load of bullcrap (Score:2)
Compassion and empathy is an indication that while I have a life to live, I care about yours too. Computers and robots already exist solely to serve me, whether they can beat me at chess or not doesn't give them any life of their own. If you're already a doormat, there's no point in saying please walk all over me. For the same reason I've never felt the need to say please to a computer, though I might occasionally call on a higher power for it to please work. And you will know it's a load of circuits, unles
no thanks (Score:3)
Re: (Score:1)
I'd posit that there's lots of them. They don't do anything. You have to look for the people who don't do anything. ;)
Re: (Score:1)
Psychopaths aren't really the best at most things though. They should be guided by reason and logic, but they can still implement compassion and empathy in interfacing with people. Many things are much more palatable if they're presented nicely compared to the same things being presented contemptuously with an utter lack of tact or decorum. People are nearly always much more likely to listen or defer to someone who at least makes them feel heard and understood than to someone who makes them feel worthless a
5... 4....3... (Score:1)
Ain't gonna work (Score:2)
And... (Score:2)
Robots Must Be Designed To Be Compassionate
And, if possible, sexy.
Better not give them any emotion at all (Score:1)
If a robot is able to understand emotional responses, and in some cases it might be better at inferring them by having better sensors, it can then act accordingly to those human emotions.
A system that acts in a deterministic fashion for 80% to 90% of the cases, and can infer human motivation but has no emotional response associated with it, is probably the
What a coincidence! (Score:2)
Philae Lander (Score:2)
I think we've been watching too many movies (Score:2)
Will a superintelligence be alive? What will be its meaning of life? Will the brilliant introverted geeks who create this hypothetical thing really understand their meaning of life in order to pass that concept
And put them in power too! (Score:2)
They can't be bargained with (Score:2)
They can't be reasoned with. They don't feel pity or remorse and they absolutely will...not...stop...EVER...until you are dead.
Everybody knows this.
We Have Always Been At War With Euthanasia (Score:2)
Take a long hard look at the philosophical arguments we apply when deciding whether and when to put animals 'down'. Should the agony of an inevitable death be experienced raw and pristine, be muted or --- in the extreme, side-stepped completely with a ritual good-bye at the moment of diagnosis? At what point was it decided that what we perceive to be a fair chance at a hard-scrabble life, or the good of the many, is cause enough to deal out straight-death (the PETA principle in action)?
There is no aspect
To get famous, just scream baseless AI cautions (Score:2)
I call bogus on the whole AI skyfaller tactic. The only part of AI that is real is the A. To date, absolutely no intelligence of any kind has been artificially produced. And to beat the duped to the punch: No, self-driving cars are not AI. Neither are baseball umpiring systems, chess
It won't work. (Score:1)