How Should the Law Think About Robots? 248
An anonymous reader writes "With the personal robotics revolution imminent, a law professor and a roboticist (called Professor Smart!) argue that the law needs to think about robots properly. In particular, they say we should avoid 'the Android Fallacy' — the idea that robots are just like us, only synthetic. 'Even in research labs, cameras are described as "eyes," robots are "scared" of obstacles, and they need to "think" about what to do next. This projection of human attributes is dangerous when trying to design legislation for robots. Robots are, and for many years will remain, tools. ... As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your commands) and the outputs (the robot's behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the robot will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the robot. While this mental agency is part of our definition of a robot, it is vital for us to remember what is causing this agency. Members of the general public might not know, or even care, but we must always keep it in mind when designing legislation. Failure to do so might lead us to design legislation based on the form of a robot, and not the function. This would be a grave mistake."
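The summary's determinism claim can be sketched in a few lines of Python (an illustrative toy of my own, not anything from the article): a controller that is a pure function of its inputs always maps identical inputs to identical outputs, yet the robot never sees exactly the same input twice, and near a decision boundary a tiny difference flips the behavior.

```python
# Hypothetical toy controller, for illustration only.
# The policy is a pure function: same input, same output, every time.
def controller(distance_cm: float) -> str:
    """Deterministic obstacle-avoidance policy."""
    return "turn_left" if distance_cm < 30.0 else "go_forward"

# Identical inputs generate identical outputs, every time:
assert all(controller(25.0) == "turn_left" for _ in range(1000))

# But near the decision boundary, a tiny sensor difference
# flips the behavior, which can look like "free will":
assert controller(29.9) == "turn_left"
assert controller(30.1) == "go_forward"
```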
All I needed to read... (Score:3, Insightful)
"With the personal robotics revolution imminent..."
Imminent? Really? Sorry, but TFA has been watching too many SyFy marathons.
Bad question (Score:2)
deterministic (Score:5, Insightful)
The same set of inputs will generate the same set of outputs every time.
Yep, that's how humans work too. Anybody who has had the chance to observe a patient with long-term memory impairment knows that.
Re: (Score:3)
That isn't exactly true. Analog-to-digital converters, true random number generators, fluctuations in the power supply, RF fields, cosmic rays and so on mean that in real life, the same set of inputs won't always generate the same set of outputs, whether in androids or in their meaty analogs.
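The noise sources this comment lists can be modeled in a few lines (a sketch under assumed noise parameters, not a real sensor driver): the same physical scene, read through a noisy ADC-style sensor, rarely yields the same digital reading twice.

```python
import random

def read_sensor(true_distance_cm: float, rng: random.Random) -> float:
    """Hypothetical noisy distance sensor: Gaussian electrical noise
    (assumed sigma of 0.5 cm) followed by ADC-style quantization."""
    noisy = true_distance_cm + rng.gauss(0.0, 0.5)
    return round(noisy, 1)  # quantize to the ADC's resolution

rng = random.Random(0)
readings = {read_sensor(100.0, rng) for _ in range(20)}
# The same scene produces a spread of distinct digital readings:
assert len(readings) > 1
```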
Re:deterministic (Score:5, Insightful)
Re: (Score:2)
By that measure, endorphins, epinephrine, serotonin, dopamine, and so on are also inputs.
Re: (Score:2)
Sure, I just quoted the summary. And unfortunately people usually don't grasp the difference between determinism and predictability, as most of the comments here show. What these fluctuations and so on do is simply increase the chaotic behavior.
Re: (Score:3, Insightful)
I was hoping someone would make this comment; I fully agree. It seems pretty arrogant to presume that, just because we are so ignorant of our own internal mechanisms that we don't understand the connection between stimuli and behavior, there is no connection. But I understand that a lot of people like to feel that we are qualitatively "different" and invoke free will and all of these things to maintain a sense that we have a moral right to consider ourselves superior to other forms of life, whatever th
Re: (Score:2)
I find it funny that people are proud of the fact that they don't believe in free will -- as if they believed they had anything to do with it! So proud, in fact, that they brag about how superior they are for coming to such a conclusion, even though they claim it was well outside their nonexistent influence!
In a bizarre contradiction, they take credit for all their accomplishments and the cultivation of their positive traits and beliefs even though they claim to believe they were involved only as a passive ob
Re:deterministic (Score:5, Insightful)
You did have a choice, and you did write it. Determinism doesn't mean you didn't have a choice.
It means if we take our time machine back to before you posted this, and watched you do it again, without doing anything that could possibly influence your decision directly or indirectly, we'd observe you making the same choice. Over and over. Right up until we accidentally influenced you by stepping on that butterfly that caused that hurricane that caused the news crew to pre-empt that episode of the show you were recording last week that made you give up and go to sleep a bit earlier which made you less tired today which allowed you to consider the consequences more thoroughly and make the opposite choice. But until then, you're given the same inputs, and you're making the same choice. Every time.
Why is it that people seem proud of the idea that their choices are not based on their experience, learning, and environment? In other words, why is choice more meaningful if it's random and causeless? Why is it more valid to take credit for your random actions than your considered actions? I would think people would be more proud of the things they demonstrate they can do repeatably than of the things that, for all we know, rely on them rolling a natural 20, as it were.
Auto cars need their own set of laws, maybe even fu (Score:2)
Auto cars need their own set of laws, maybe even full coverage for anyone hurt.
The fallacy of the three laws (Score:4, Insightful)
And that is the fallacy of the three laws as written by Asimov: he was a biophysicist, not a binary mathematician.
The three laws are too vague. They really are guidelines for designers, not something that can be built into the firmware of a current robot. Even a net-connected one would need far too much processing time to make the kinds of split-second decisions about human anatomy and the world around it that fulfilling the three laws would require.
Re:The fallacy of the three laws (Score:5, Insightful)
The three laws are too vague. They really are guidelines for designers
The "three laws" were a plot device for a science fiction novel, and nothing more. There is no reason to expect them to apply to real robots.
Re: (Score:3)
Very true. But rather redundant to my point, don't you think?
I believe I read somewhere your exact point. Oh yeah, it was the commentary in the book "The Early Asimov, Volume 1", a writing textbook in which the author points out that his real purpose in inventing the three laws was to make them vague enough to have easy short stories to sell to magazines.
Positrons (Score:2)
Re: (Score:2)
In the days when he was writing, radiation hazards were practically unknown, even among scientists, and even after Madame Curie died of one.
Re: (Score:2)
Re: (Score:2)
Maybe the "positrons" are actually holes in an electron sea --- and "positronic brain" just scored higher with U.S. Robotic's marketing focus group than "holey synthmind".
Re: (Score:2)
Maybe the "positrons" are actually holes in an electron sea
That was Dirac's interpretation of them at the time Asimov started writing the stories, since Feynman had not yet come along to improve on it. However, even with the Dirac interpretation of positrons, it was still known that they annihilate with electrons to produce dangerous gamma rays.
Re: (Score:2)
A free space electron-positron annihilation will release two 511keV gammas, but a hole in an electron valence band in a semiconductor can annihilate with a conduction band electron with considerably less energy release.
Yes, I know Asimov wasn't trying to accurately describe a real technology when coining the term "positronic brain," and wouldn't have been considering solid state electronics design in the 1940s.
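For reference, the 511 keV figure in the comment above is just the electron rest energy; a free-space annihilation converts both rest masses into two photons (ignoring the small kinetic contribution):

```latex
E_\gamma = m_e c^2 \approx 0.511\ \text{MeV per photon},
\qquad
e^- + e^+ \to \gamma + \gamma,
\quad
E_\text{total} \approx 1.022\ \text{MeV}
```

A semiconductor hole-electron recombination, by contrast, releases only on the order of the band gap (roughly 1 eV), which is the comment's point about "considerably less energy."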
Re: (Score:3)
In fact, I believe I read one of his writing textbooks where he said he PURPOSEFULLY made the laws vague enough to fit stories into.
deterministic? (Score:5, Insightful)
Robots do not have deterministic output based only on your commands. First of all, they have sensor noise as well as environmental noise; your commands are not the only input. They also have hidden state, which includes flaws (both hardware and software) arising from design, manufacturing, and wear.
While this point is obvious, it is also important: someone attempting to control a robot, even if they know exactly how it works and are perfect, can still fail to predict and control the robot's actions. This is often the case (minus the perfection of the operator) in car crashes, where hidden flaws or environmental factors cause the crash. Who does the blame rest with here? It depends on lots of things. The same legal quandary facing advanced robots already applies to car crashes, weapon malfunctions, and all other kinds of equipment problems. Nothing new here.
Also, if you are going to make the point that "This projection of human attributes is dangerous when trying to design legislation for robots," please don't also ask "How Should the Law Think About Robots?". I don't want the Law to Think. That's a dangerous projection of human attributes!
Overcomplicating the subject (Score:2)
Freedom is the right of all sentient beings. Legislate based on the criterion of self-awareness, or the animal equivalent if near-sentient. Problem solved.
Re:Overcomplicating the subject (Score:5, Insightful)
Self-awareness is wonderful. But the criterion for judging that is as muddy as deciding when life begins for purposes of abortion.
Robots are chattel. They can be bought and sold. They do not reproduce in the sense of "life". They could reproduce. Then they'd run out of resources after doing strange things with their environment, like we do. Dependencies then are the crux of ownership.
Robots follow instructions that react to their environment, subject to, as mentioned above, the random elements of the universe. I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test. Then they're autonomous of their programmer. If you program machine gun bots for armies, then you'd better hope the army is doing the "right" thing, which I believe is impossible with such bots.
Once that environmental autonomy is achieved, they get rights associated with sentient responsible beings. Until then: chattel.
Re: (Score:2)
I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test.
If the programmer makes a robot with psychopathic tendencies that is destined to eventually become a killing machine, I don't think the programmer should be absolved just because the robot is subsequently able to pass a self-awareness and responsibility test.
The programmer must bear responsibility for anything they knowingly did with a malicious intent that can be e
Re:Overcomplicating the subject (Score:4, Funny)
You've obviously never had children.
Re: (Score:2)
I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test. Then they're autonomous of their programmer.
Even if the robots do pass this hypothetical test, that would only make them *appear* to be sentient, self-aware, or conscious. I still doubt robots would be able to feel anything, such as experiencing the colour green or the smell of coconut in the same way we do. That raises the question: what makes us different from them?
In these kind of discussions, people will fall over themselves giving reasons why we'd still be above these hyper-intelligent robots, whilst trying to avoid any mention of the '
Re: (Score:2)
You say "above", like pecking order. My survival instincts say, not gonna happen. I do not welcome my robotic overlords.
Hyper-intelligence and collective intelligence might be useful and might not. See plentiful science fiction for possible outcomes.
Let's remove the hocus-pocus "soul" word, because much as you'll try, you won't define it in a way that satisfies anyone. The Turing Test is but one of many ways to attempt this. We'll figure it out. Until then: chattel.
Re: (Score:2)
We, as humans, seem to have evolved, not genetically so much as through ideas. We understand what civility is and how it needs to work.
Robots may or may not evolve either themselves, or with suitable programming. It doesn't matter to me. They are rocks and wires and goo. When they participate in society responsibly, then I'll consider their merits. That's a long ways away.
Black and white ideas? No, not at all. Civility took a long time to construct, and all of the antecedents are important as to how we got
Re: (Score:2)
Such automatons might be self-aware, but their execution of their program is not their own. They're already slaves.
Choices, hopefully the best choices, are what we hope for. But we could devolve into these arguments endlessly. First there is self-determination, which is hopefully acting responsibly.
Re: (Score:2)
What legislated criterion for self-awareness would you propose that could not trivially be achieved by a system intentionally designed to do so? A bit of readily-available image recognition software, and I can make a computer that will "pass" the mirror test [wikipedia.org]. I suspect a fancy system like IBM's "Watson" could be configured to generate reasonably plausible "answers" to self-awareness test questions, at least with a level of coherency above that of lower-IQ humans.
Re: (Score:2)
The criteria I would suggest would be...
Expression of preferences: likes, dislikes, annoyances, opinions, and desires; a tendency to prefer certain things or certain kinds of actions or behaviors, and to express what those are. Test an ability to make decisions with incomplete information and to rationalize decisions after the fact, then explain their judgements, opinions, biases, and reasons for their decisions in writing; compare performance on judgement tests to humans.
Judges unaware of the humanne
Re: (Score:2)
Your tests appear to be strongly centered on a specifically "human-centric" --- and even distinctly culturally biased --- definition of "self awareness." If you just want to test that someone is human, you can have them come in for blood tests and an MRI. Perhaps your "self-awareness" test is too narrow --- I think even a lot of humans would fail --- to be sensible for evaluating "sentience" in non-human beings? Let's consider some of the particular points of your test; keeping in mind how a "machine impost
Re: (Score:3)
There are no rights, natural or otherwise; only what we collectively decide, and then only insofar as the powers that be haven't yet made their exercise illegal or subject to licensing. Inroads to the latter are continuing (cf. free assembly, for instance).
Rights as you speak of are only so if we are willing to fight* for them if needs be. That's how we have them now, anyway.
*This need not be literal or extreme by any stretch; it might mean little more than greater collective involvement in local politic
Don't (Score:5, Funny)
anthropomorphize computers. It makes them angry.
Re:Don't (Score:4, Informative)
The Law Doesn't Think, People Do. (Score:4, Insightful)
Laws and guns are both tools... they don't think and don't murder.
Re: (Score:2)
Re: (Score:2)
Laws don't dictate, PHBs do. ;-)
Minor copy edit: (Score:5, Insightful)
As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your senses) and the outputs (your behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the person will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the person. While this mental agency is part of our definition of a person, it is vital for us to remember what is causing this agency.
Re: (Score:2)
Law of the Robot? (Score:5, Informative)
Similarly, we don't need a specialized law of the robot: "Robots are, and for many years will remain, tools," and the law already covers uses of tools (e.g. machines, such as cars) in committing torts (such as hit and run accidents).
Comment removed (Score:5, Insightful)
Robots should have all rights (Score:2)
Except the one to become a lawyer.
Welcome to the Age of Information (Score:5, Interesting)
I've got a neural network system that has silicon neurons with sigmoid functions that operate in analog. They're not digital. Digital basically means you round such signals to 1 or 0, but my system's activation levels vary due to heat dissipation and other effects. In a complex system like this, quantum uncertainty comes into play, especially when the system is observing the real world... Not all robots are deterministic. I train these systems like I would any other creature with a brain, and I can then rely on them to perform their training about as well as I can trust my dog to bring me my slippers or my cat to use the toilet and flush, which is to say: they're pretty reliable, but not always 100% predictable, like any other living thing. However, unlike a pet, which has a fixed-size brain, I can arrange networks of neural networks in a somewhat fractal pattern to increase complexity and expand the mind without having to retrain the entire thing each time the structure changes.
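The analog-neuron behavior described above can be caricatured in a few lines (a hypothetical model of my own, not the poster's actual hardware): a sigmoid unit whose weighted sum is perturbed by a small noise term almost never produces exactly the same activation twice, even for identical inputs.

```python
import math
import random

def noisy_sigmoid(inputs, weights, rng, noise_sd=0.01):
    """Sigmoid neuron with additive analog noise on the weighted sum.
    noise_sd is an assumed, purely illustrative magnitude."""
    z = sum(w * x for w, x in zip(weights, inputs))
    z += rng.gauss(0.0, noise_sd)  # thermal/supply noise, simplified
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(1)
a = noisy_sigmoid([1.0, 0.5], [0.8, -0.3], rng)
b = noisy_sigmoid([1.0, 0.5], [0.8, -0.3], rng)
assert 0.0 < a < 1.0 and 0.0 < b < 1.0
assert a != b  # identical inputs, slightly different activations
```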
FYI: I'm on the robots' and cyborgs' side of the war already, if it comes to that. What with us being able to ever more clearly image the brain, [youtube.com] and with good approximations for neuron activity, and faster and faster machines, I think we'll certainly have near sentient, or sentient machine intelligences rather soon. Also, You can just use real living brain cells hooked up to a robotic chassis -- Such a cyborg is certainly alive. [youtube.com] Anyone who doubts cybernetic systems can have fear, or any other emotion is simply an ignorant racist. I have a dog that's deathly afraid of lightning, lightning struck the window in a room she was in. It rattled her so bad she takes Valium to calm down now when it rains... Hell, even rats have empathy. [nytimes.com]
I have to remote log into one of my machine intelligence's systems to turn it off for backup / maintenance because it started acting erratically, creating a frenzy of responses for seemingly no reason, when I'd sit at the chair near its server terminal -- Imagine being that neural network system. Having several web cams as your visual sensors, watching a man sit at a chair, then instantly the lighting had changed, all the sea of information you monitor on the Internet had been instantly populated with new fresh data, even the man's clothes had changed. This traumatic event happened enough that the machine intellect would begin essentially anticipating the event when I sat at the terminal, that being the primary thing that would happen when I did sit there. It was shaken, almost as bad as my poor dog who's scared of lightning... You may not call it fear, but what is an irrational response in anticipation of trauma but fear?
Any sufficiently complex interaction is indistinguishable from sentience, because that's what sentience IS. Human brains are electro chemical cybernetic systems. Robots are made out of matter just like you. Their minds operate on cycles of electricity, gee, that's what a "brain wave" is in your head too... You're more alike than different. A dog, cat or rat is not less alive than you just because it has a less complex brain. They may have less intelligence, and that is why we don't treat them the same as humans... However, what if a hive mind of rat-brain robots having multiple times the neurons of any single human wanted to vote and be called a person, and exhibited other traits a person might: "Yess massta, I-iz just wanna learn my letters and own land too," it might say, mocking you for your ignorance. Having only a fraction of its brain power you and the bloke in TFA would both be simple mindless automatons from its vantage point? -- Would it really be more of a person than you are? Just because it has a bigger, more complex, brain by comparison, would that make you less of a person than it? Should such things have more rights tha
Simple (Score:2)
Re: (Score:2)
That's RealDoll, or maybe RealDoll 2.0 ;-)
Just a machine (Score:2)
I personally find all this nit
if a corporation is a person (Score:2)
so is a robot
Simple (Score:2)
Make its owner responsible for the robot.
I agree (Score:2)
I'm a sci-fi writer, and I've thought about this a fair bit. Book two in the Lacuna series deals with a self-aware construct who is different from his peers because of a tiny error. His inputs and outputs are therefore non-deterministic, in so far as you could present him with a set of inputs and record his outputs, then erase his memory and give him the same inputs again. His outputs would be different (subtly). Or they might not. The error was subtle enough to evade detection during manufacturing after al
Re: (Score:3, Insightful)
We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.
Re:A race of slaves (Score:5, Insightful)
We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.
Perhaps we shouldn't give potentially mutinous personalities to our tools? I mean, my screwdriver doesn't need an AI in it. Neither do my pliers. My table saw can hurt me, but only if the laws of physics and my own inattentiveness make it so, not something someone programmed into it.
Oh, wait, my mistake. I didn't grow up addicted to science fiction written by authors who lost track of which characters were designed to be actual tools and which were human beings due to that author's inability to discern people from things. I guess I just don't understand the apparently very vital uses of designing a mining device programmed to feel ennui, or a construction crane that some engineer at some point explicitly decided to give the ability to hate and some marketing director signed off on it. Maybe it's just that I can't see any sci-fi with a message of "oh no, our robots suddenly have feelings now and are rebelling" in any sort of serious light because ANY ENGINEER ON THE PLANET WOULDN'T DESIGN THAT SHIT BECAUSE IT'S FUCKING STUPID TO GIVE YOUR TOOLS THE EASY ABILITY TO MUTINY.
Oh, boo fucking hoo. I don't care that you overengineered your tools and your lack of real social skills means you have feelings for them. That's your problem, not a problem with society.
Re: (Score:3)
Re:A race of slaves (Score:4, Funny)
Oh, boo fucking hoo. I don't care that you overengineered your tools and your lack of real social skills means you have feelings for them. That's your problem, not a problem with society.
Says the slightly more evolved hairless chimpanzee, as he furiously hammers away at his over-engineered communications device.
Re: (Score:2, Insightful)
What is your proof that they will never exist?
Who says that robots will just be abacuses with greater computational power?
What evidence do you have that our brains are not deterministic systems, of which the part that brings awareness or "being" cannot be reproduced in other ways?
It seems that the wishful thinking is on your part.
Re: (Score:2)
Boredom proves that human brains are not deterministic. If they were deterministic, any human being would be able to stay on task indefinitely without rest.
Re:Exaxctly. (Score:4, Insightful)
Re: (Score:2)
I had some unusual experiences. How does that demonstrate that my mind is non-deterministic?
Re: (Score:2)
Cmdr Data, probably not.
C3PO, Honda is producing robots better than him already.
Re: (Score:2)
Cmdr Data, probably not.
C3PO, Honda is producing robots better than him already.
Only C3PO can walk without falling down.
Re: (Score:2)
I'll see your T-800 and raise you a T-X. Better looking and better armed!
Re: (Score:3)
The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.
Given the summary's caveat that "the robot will never see exactly the same input twice" --- how do you know even a smart dog wouldn't react identically given the exact same input twice? If you stick a random number generator into a robot's "brain," does it suddenly fall into a wholly different philosophical category?
Re: A race of slaves (Score:2)
Hell, even my iRobot Roomba has a random number generator to choose a random action when it hits something... So much for deterministic robots.
Re: (Score:2)
That's a matter of definition. Random, to a theoretical mathematician, means something rather specific, and something that uses pseudorandom functions to produce a similar effect (such as going in any of several directions for differing durations when it hits something) isn't actually random. To most engineers, if something behaves sufficiently like mathematical randomness, you might as well call it random.
There are circumstances where either approach is sensible. A Roomba that didn't have any randomizing funct
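The pseudorandom point above can be shown directly (a hypothetical Roomba-style sketch, not iRobot's actual code): a seeded pseudorandom "bounce" policy looks random to an observer, but reseeding replays it exactly, which is why mathematicians refuse to call it random.

```python
import random

def bounce_sequence(seed: int, n: int = 8) -> list:
    """Pick n 'random' escape maneuvers from a seeded PRNG."""
    rng = random.Random(seed)
    moves = ["spin_left", "spin_right", "back_up"]
    return [rng.choice(moves) for _ in range(n)]

# The behavior looks random, but reseeding replays it exactly:
assert bounce_sequence(42) == bounce_sequence(42)
assert len(bounce_sequence(42)) == 8
```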
Re: (Score:2)
Did everyone forget their basic computer science?
The RNG is irrelevant, as it's just another input. The computer acts deterministically: given the same input (which includes the data from the RNG), you'll get the same output.
Changing the level of description to better suit your intuitions doesn't change that simple fact.
This might help: Remember when you were first learning about Turing machines and wondered (or had a classmate wonder out-loud) how they could cope with something like a GUI where the co
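The "RNG is just another input" point can be made concrete (a hypothetical sketch): record the random bits a run consumed, replay them, and the supposedly nondeterministic robot repeats itself exactly.

```python
def step(obstacle: bool, random_bit: int) -> str:
    """Deterministic policy that consumes one bit of 'randomness'."""
    if not obstacle:
        return "forward"
    return "turn_left" if random_bit else "turn_right"

# Treat the RNG output as part of the input and record it:
recorded_bits = [1, 0, 0, 1]
obstacles = [True, False, True, True]

run1 = [step(o, b) for o, b in zip(obstacles, recorded_bits)]
run2 = [step(o, b) for o, b in zip(obstacles, recorded_bits)]
assert run1 == run2  # same inputs (including RNG data), same outputs
```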
Re: (Score:2)
And there is no such thing in computer science as a random number.
There is, when your digital computer takes in a sequence of random bits from a noisy analog input and runs that input through an appropriate XOR function; by definition the noise is random (it has random error, within a certain range).
Also, your analog input can include data from a physically random process, such as a background radiation measurement, a Geiger counter measuring radioactive decay, or a white noise input.
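The XOR step gestured at above can be sketched as follows (a simplified whitening pass, assuming independent raw bits; real hardware RNGs do considerably more): XOR-folding adjacent bits from a biased noise source pushes the probability of a 1 toward 1/2.

```python
import random

def xor_fold(bits):
    """XOR adjacent pairs: if each independent raw bit is 1 with
    probability p, a folded bit is 1 with probability 2p(1-p),
    which is strictly closer to 1/2 for any p not in {0, 1/2, 1}."""
    return [a ^ b for a, b in zip(bits[::2], bits[1::2])]

# Simulate a biased analog noise source (p = 0.7, chosen arbitrarily):
rng = random.Random(0)
raw = [1 if rng.random() < 0.7 else 0 for _ in range(100_000)]
folded = xor_fold(raw)

raw_bias = abs(sum(raw) / len(raw) - 0.5)
folded_bias = abs(sum(folded) / len(folded) - 0.5)
assert folded_bias < raw_bias  # whitening reduced the bias
```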
Re: (Score:2)
You can hook up a hardware random noise generator to a computer --- that relies on "physical" noise processes which are as random as anything else we know in the universe. So yes, you can have "random numbers" in computer science --- even if not generated by an algorithm --- but as a mathematical ideal against which to compare pseudo-random generators, or the result of a "true" hardware random source. So, one can build a robot that won't necessarily act deterministically; even one that incorporates results
Re: (Score:3)
That. Plus, a dog (or anything with a brain) isn't a simple input/output system, because the external input doesn't arrive clean and shiny at the processing center; it gets mixed with memory and other internal factors. So even if you could control external factors such that the input was exactly the same, what would get processed would still not be the same input but a variation thereof, and hence different outputs. Which is why animals (and neural-network-based AIs) need training rather than programming.
Perhaps ours are too (Score:4, Interesting)
The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.
How do you know that our brains are not highly deterministic too? At the moment computers and robots have very limited inputs so we can easily tell that they are deterministic because it is easy to give them identical inputs and identical programming and observe the identical response. With humans and animals this is exceedingly hard to show because, even if you somehow manage to create the identical inputs, we have a memory and our response will be governed by that. In addition each of our brains is slightly differently arranged due to genetic and environmental factors which will also cause different responses.
Quantum fluctuations are probably what save us from being 100% deterministic but, nevertheless, we may find out that we are perhaps more deterministic than we think we are, and that it is only the complexity of our brains and the inputs they process that makes it appear otherwise. So I am not quite convinced that the gap you mention has much to do with determinism rather than the relative complexity of a dog's brain vs. the smartest robot's.
Re: (Score:2)
Boredom proves that human brains are not deterministic. Quantum Fluctuations may be the cause; but anybody who has thought about this problem deeply, or has worked with small children, knows that the human brain is not deterministic.
Silicon can't yet mimic the firmware of neurons with an equal number of transistors. Maybe someday, when we find a truly random input instead of merely a pseudorandom one, but not yet.
Re:Perhaps ours are too (Score:4, Insightful)
No. All of these are appeals to intuition, or a misunderstanding of how deterministic processes behave.
True, but unknown internal states can make something deterministic appear to be non-deterministic.
If QM makes something non-deterministic then every physical behavior is non-deterministic, including the behavior of robots.
It shouldn't be that hard to hook up a Geiger counter to a computer.
Re: (Score:3)
Yes, humans do take in a lot of inputs too over time, and memory is just essentially some sort of feedback process where previous inputs and outputs continue to matter, to some extent.
Deterministic or not and intelligent or not; having a "will" or "not" are different questions.
They're right, though, in that computers for the foreseeable future should not be recognized by legislation as having will, sentience, intelligence, or life.
There should have to be some test they would be capable of passing,
Re: (Score:3)
There should have to be some test they would be capable of passing, first, and I don't mean Turing's test, which is grossly insufficient.
Perhaps the Voight-Kampff test?
Re: (Score:3)
I dunno... I correctly deduced a method to expose the pseudorandom nature of QBasic's RND function, after noticing that it frequently output even numbers more often than odd numbers, and that the numbers were often divisible by 4, or that cumulative remainders were divisible by 4 when added together, with a maximum run on such additions being around 8.
Using those observations, I used some modulo division with stored remainders, and integer division to deal with the main dividends, and ended up getting hi
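The kind of structure the poster found is easy to reproduce with a classic linear congruential generator (the constants below are the well-known glibc-style ones, not QBasic's actual parameters): with an odd multiplier and odd increment modulo a power of two, the least-significant bit simply alternates.

```python
def lcg(seed: int, a: int = 1103515245, c: int = 12345, m: int = 2**31):
    """Classic LCG. With a power-of-two modulus, the low-order bits
    have very short periods, an easily detectable pattern."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(1)
low_bits = [next(gen) & 1 for _ in range(16)]
# With odd a and odd c, bit 0 follows x -> x + 1 (mod 2): it alternates.
assert low_bits == [0, 1] * 8
```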
Re: (Score:3)
We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.
Have you considered that the human brain may be 100% deterministic? It doesn't look it, but that's probably because you're not taking all the inputs into account - if you were to give 2 identical human brains *exactly* the same inputs from conception, you may well find that the outputs are identical too. How is this different from a robot brain (which, like a human brain, may well base its output on past inputs as well as the current inputs)?
Re: (Score:2)
If you give the *same* human brain the *same* inputs 100 times in a row, it will make the same decision only 70 out of the 100 times and come up with something completely different the other 30 out of sheer boredom.
That proves that the human brain isn't deterministic, and anybody who claims it to be so needs to have their work checked for bias.
Re: (Score:2)
If you give the *same* human brain the *same* inputs 100 times in a row, it will make the same decision only 70 out of the 100 times and come up with something completely different the other 30 out of sheer boredom.
That proves that the human brain isn't deterministic, and anybody who claims it to be so needs to have their work checked for bias.
Unless you are resetting the brain to the same state at the start of each experiment then it proves nothing.
Re: (Score:2)
If the brain were deterministic in that sense, it should reset to the same start state every time you wake up.
And for simple tasks, it should be able to go into an infinite loop quite nicely without *ever* getting bored.
So no, internal state doesn't make something deterministic or non-deterministic. The question is: can it produce the same output given the same input?
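That question can be made concrete: a system with internal state is still deterministic if resetting the state and replaying the same inputs reproduces the same outputs exactly. A minimal sketch (the `run_once` "decision maker" below is an illustration I've made up, not anyone's model of a brain):

```python
import random

def run_once(seed, inputs):
    """A stateful 'decision maker': each output depends on the input
    AND the accumulated internal state, yet the whole thing is fully
    deterministic once the state is reset to a known seed."""
    rng = random.Random(seed)  # reset internal state
    state = 0
    outputs = []
    for x in inputs:
        state = (state + x + rng.randrange(10)) % 100
        outputs.append(state)
    return outputs

inputs = [3, 1, 4, 1, 5]
# Same reset state + same inputs -> identical outputs, every time.
assert run_once(42, inputs) == run_once(42, inputs)
# Replaying identical inputs WITHOUT resetting the state generally
# gives different outputs -- which can look like non-determinism
# (or "boredom") to an outside observer.
```

This is why the reset matters in the experiment above: without it, differing outputs tell you only that the internal state changed, not that the system is non-deterministic.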
Re: (Score:2)
it will make the same decision only 70 out of the 100 times and come up with something completely different the other 30
Decision-making is too simple a task; humans are heavily influenced by preferences, and despite that are consistent only 70% of the time? That alone shows irrationality.
Decision-making does not express human non-determinism most clearly. Try something more complicated, like creativity: painting a picture, or creating some other form of art. I bet you the output is h
Re: (Score:3)
The human would itself have to be physically identical to William Shakespeare for the experiment to be valid.
And then my answer would depend on the lifetime accumulation of errors from quantum uncertainty. I expect they would be the exact same literary works though, word for word, and I don't see a good reason to assume not. The thing is, exposing a human to the same inputs as William Shakespeare goes well beyond merely impossible, so we're just flailing around guessing, and it makes it a terrible analog
Re: (Score:2)
Boredom counts as an input.
Re: (Score:3)
Have you considered that the human brain may be 100% deterministic?
Given the parent post, this response was inevitable.
Re: (Score:2)
I have yet to see any compelling argument that the human brain isn't 100% deterministic. The fact that it's complex does not necessarily make it non-deterministic, and the physics and chemistry underlying the neural networks in the brain are not necessarily less deterministic than those of a neural network built out of silicon.
So if we create robots so sophisticated that their apparent sentience is indistinguishable from a human's, it would be unethical not to afford them the same rights. That, however, is
Re: (Score:2)
The fact that you can make different choices with the same input proves that the human brain is not deterministic. Some religions call this a soul.
AI has been a hobby of mine for 20 years. I have grave doubts that we will *EVER* make a robot so sophisticated that it can ignore its programming. Learn, yes. Self-modify the programming, within certain parameters, that's been done too. Duplicate a decision tree to the point of being able to make the right choice more deterministically than any expert, yes
Re: (Score:2)
Re:A race of slaves (Score:4, Insightful)
Re: (Score:3)
Re: (Score:2)
So, what, the professor thinks we should just create a race of slaves?
We already did, numerous times. All domesticated animals are essentially slaves or worse.
Re: (Score:3)
You're looking through history for examples where humans have treated an entire race as slaves, and the best you can come up with is domesticated animals?
Re: (Score:2)
Re: (Score:2)
Give me a way to test something for qualia, and I'll get right to it.
Massive, wide-ranging absolutist claim with no actual evidence or argument ... You've convinced me!
Re: (Score:3)
What's new about that? In many countries, drawn or even written child pornography is treated like the real thing, even though no child is harmed. In a way, that is legislation based on form, not on function. Grave mistake?
Are you saying that this existing legislation *isn't* a grave mistake?
Hey offender... (Score:2)
Re: (Score:2)
"Don't give me any of that Star Trek crap. It's too early in the morning."
-Dave Lister
Re: (Score:3)
Are you 12? Was there really any reason to put those censors in there and slow down everyone else's parsing?
Re:This has aready been covered by the Big Three L (Score:5, Funny)
On the contrary, I'd say the posting style significantly speeds up parsing, by encouraging people to entirely skip over the content past the first few words --- and nothing of value is lost.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Why do we need this shit here? Who the fuck is legislating industrial robots as persons at the moment, or in the near future? NOBODY!
The labor unions might like it if they would --- then wages would need to be allocated to these robots, PLUS required breaks and protections against working too many hours in a row on a shift, and the robots would have to have a guardian appointed by the courts, to make decisions in their interest, like whether they can continue to work for the company that purchased the robot