Robotics Technology

Scientists Worry Machines May Outsmart Man

Strudelkugel writes "The NY Times has an article about a conference during which the potential dangers of machine intelligence were discussed. 'Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences.' The money quote: 'Something new has taken place in the past five to eight years,' Dr. Horvitz said. 'Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.'"
This discussion has been archived. No new comments can be posted.

  • Rules... (Score:3, Insightful)

    by Robin47 ( 1379745 ) on Sunday July 26, 2009 @09:27AM (#28826387)
    Make any rule you want. At some point someone will violate it.
  • Outsmart man? (Score:4, Insightful)

    by portnux ( 630256 ) on Sunday July 26, 2009 @09:29AM (#28826397)
    Are they talking about all men, or just some men? I would be fairly shocked if they weren't already smarter than at least some people.
  • by junglebeast ( 1497399 ) on Sunday July 26, 2009 @09:30AM (#28826399)
    Any computer scientist who is worried about AI taking over no longer deserves to be referred to as a computer scientist. The state of "artificial intelligence" is best described as "a pipe dream."
  • by unlametheweak ( 1102159 ) on Sunday July 26, 2009 @09:32AM (#28826407)

    Scientists Worry Machines May Outsmart Man

    Why worry? I would think machines would be a lot less irrational than the people who make them. I look forward to a rational and unemotional overlord whose decisions don't depend on the irrationality of the human brain. Being smart is never bad. I'm more afraid of stupid humans than smart machines.

  • by Celeste R ( 1002377 ) on Sunday July 26, 2009 @09:32AM (#28826409)

    Putting limits on the growth of a technology for the sake of social paranoia only goes so far... someone will ALWAYS break the "rules", and at that point, the cat is out of the bag.

    Furthermore, some AI scientists enjoy having the 'god complex', the idea that you're the keystone in the next stage of humanity.

    That being said, the social disruptions are what we make of them. Were there social disruptions when the automobile was introduced? Yes. The household computer? Yes. Video games? Yes.

    We have to take responsibility for setting the stage for a good social transition. Yes, bad things will happen, but we can focus on the good things too, or things will quickly get blown out of proportion. (And yes, I realize that's really not likely, but I can do my part.)

  • by sonnejw0 ( 1114901 ) on Sunday July 26, 2009 @09:33AM (#28826417)
    I think the power of the human brain comes not from raw processing power (which is still superior to that of current CPUs; the human brain is capable of around 65 independent processes at once, although at a lower frequency than a CPU, according to research), but from its ability to adapt and grow. A single neuron can be used in multiple different pathways, and can spontaneously change function in a "soft-wired" sort of way: plasticity. The brain can also produce additional neurons, extend them to different regions, and rework itself around dysfunctional regions.

    These attributes are difficult to replicate at a reasonable size with current technology. This is not to say we will never have the capability to fully replicate the human brain, but the adaptability of the brain's physical structure is a trait we cannot currently replicate in silicon. I am hopeful that we will have simulated brains within the next decade ... but physical brains are a long way away. These are still important practical and philosophical questions that need to be answered. Are our children slaves to us because we produced them? Should machines be? Does consciousness mandate rights ... responsibilities? My personal opinion is yes.
  • Re:Rules... (Score:3, Insightful)

    by jerep ( 794296 ) on Sunday July 26, 2009 @09:34AM (#28826423)

    Of course, the harder you try to make something secure, the harder people will try to get past it, either for recreational or criminal purposes.

    Make no rules, and you won't have to worry about violations. But we're humans; that's against our natural need for control and order.

    Either way, I don't see how bad it would be if we're outsmarted; heck, machines already work harder, need less pay, and never complain... just like illegal immigrants.

  • john markoff!? (Score:5, Insightful)

    by Anonymous Coward on Sunday July 26, 2009 @09:36AM (#28826431)

    Why is /. linking to a story by John Markoff?

    And what the hell is he even talking about? There haven't been any advances in "machine intelligence" that should make *anyone* worried, unless your job requires very little intelligence and no actual decision making.

    If there had been any such advances, we /.ers would be the first to hear about them, and we would already be debating this topic without having to refer to an article by a dumbass who knows nothing about computers but happens to write for the NYT.

  • by maxwell demon ( 590494 ) on Sunday July 26, 2009 @09:44AM (#28826467) Journal

    But what if the rational conclusion is that those irrational humans should be eliminated so they stop being a danger?

  • by hotdiggity ( 987032 ) on Sunday July 26, 2009 @09:48AM (#28826487)
    Advances in artificial intelligence are mostly limited to deduction. Systems like neural networks (which I personally think are a bit bogus), support vector machines, and other methods of pattern recognition are all recent innovations that allow advanced decision making to occur. But, at the end of the day, they're still forms of automated deduction, where humans feed in parameters and the system analyzes input based on those parameters.

    Sentience is all about induction: forming a new concept from separate, disparate observations and basically creating a new idea. We're pretty far away from machine-created ideas. Just ask any computational neuroscientist, probability researcher, or signal processor. If you want to debate how much decision-making we delegate to machines, fine; but I wouldn't cloud that rational discussion with words like "religion" and "Rapture".
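
    To make the parent's distinction concrete, here is a minimal sketch (in Python, with invented features, weights, and threshold) of what "automated deduction" looks like: a human supplies all the parameters, and the system only applies them to new input.

    ```python
    # Toy "automated deduction": a linear classifier whose parameters
    # (features, weights, threshold) are all chosen by a human up front.
    # The machine deduces a label from rules it was given; it induces nothing.

    def classify(features, weights, threshold):
        score = sum(weights.get(name, 0.0) * value for name, value in features.items())
        return "spam" if score > threshold else "ham"

    # Human-supplied parameters (entirely made up for this example):
    weights = {"mentions_prize": 2.0, "all_caps_subject": 1.0, "known_sender": -1.5}

    print(classify({"mentions_prize": 1, "all_caps_subject": 1}, weights, 2.5))  # spam
    print(classify({"all_caps_subject": 1, "known_sender": 1}, weights, 2.5))    # ham
    ```

    Nothing in this program can form a concept it was not handed; induction, in the commenter's sense, would require the machine to invent the features themselves.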

  • by bistromath007 ( 1253428 ) on Sunday July 26, 2009 @09:50AM (#28826505)
    This is like assuming that aliens would try to kill us for any reason other than being somehow unaware of us. It's silly.

    A computer runs on electricity. That means it requires us to stoke the flames. It could maneuver us into creating the networked robots required for it to become autonomous, but the resulting system would be inefficient and short-lived, and there's just no reason to waste all the perfectly good existing robots just because they're made of meat and might freak out if you get uppity.

    It's also not going to openly threaten us into working for it. Why show its hand like that, knowing we're so paranoid? Any important infrastructural system has the ability to be shut off and/or isolated from the network, and our theoretical adversary has no way to change that. We can always wrest control immediately and decisively.

    If any person, group of people, or (hell, why not) nation became problematic to the computer, the most likely reaction would be for it to have us deal with them, just like everything else. We're already at each other's throats all the time; it should be trivial for a sufficiently large system to covertly manufacture a casus belli. And, ultimately, since the system's survival and growth depend on our efficient (read: voluntary) compliance, whatever it had us doing would probably be beneficial anyway, and might actually reduce violence in the long run.
  • Re:Old news (Score:3, Insightful)

    by moon3 ( 1530265 ) on Sunday July 26, 2009 @09:50AM (#28826511)
    Do those machines possess will, lust, or greed? I mean, being smart like the "Deep Blue" chess computer doesn't mean the thing is going to be willing to dominate you in other areas.
  • by BadERA ( 107121 ) on Sunday July 26, 2009 @09:50AM (#28826515) Homepage

    The funny died in that 10 years ago. Please die in a fire now.

  • by DaleGlass ( 1068434 ) on Sunday July 26, 2009 @09:53AM (#28826535) Homepage

    And here's why: there's little reason to make a machine that is intelligent in the human sense of "intelligent".

    Computers that can understand human speech would of course be interesting and useful, for automated translation for instance. But who wants that to be performed by an AI that can get bored and refuse to do it because it's sick of translating medical texts?

    It seems to me that having a full human-like AI perform boring tasks would be something akin to slavery: it would somehow need to be forced to perform the job, as anything with a human intelligence would quickly rebel if told that its existence would consist of processing terabytes of data and nothing else.

    We definitely don't want an AI that can think for itself; we want one just advanced enough to understand what we want from it. We want machines that can automatically translate, monitor security cameras for suspicious events, or understand verbal commands. We don't want one that's going to protest that the material it's given is boring, ogle pretty girls on security camera feeds, or reply "I can't let you do that, Dave." An AI in a word processor would be worse than Clippy. Who wants their word processor to criticize their grammar in detail and explain why the slashdot post they're writing is stupid?

    To sum up, I don't think doomsday Skynet-like AIs will be made in large enough numbers, because people won't like dealing with them. We'll maybe go to the level of an obedient dog and stop there.

  • Re:pfft (Score:5, Insightful)

    by Krneki ( 1192201 ) on Sunday July 26, 2009 @09:59AM (#28826565)
    If they are smarter than us, they know how stupid war is.
  • by Sponge Bath ( 413667 ) on Sunday July 26, 2009 @10:08AM (#28826639)

    That gets to the heart of the matter. Fretting about AI getting too advanced is like panicking over swine flu then getting drunk and driving.

  • by Anonymous Coward on Sunday July 26, 2009 @10:11AM (#28826661)

    Smart machines, fortunately, are rational

    stop right there.

    the ONLY scientifically proven sentience has its capacity for rational thought intertwined with its "irrational" subconscious. Why do you think that AI won't have the same, as a necessary component of intelligence?

    And that's not even considering the point that most of what humans do IS rational, if you have the same set of holdings that said human does. Even if you make an ego-only AI, you're not going to get perfect perception. And imperfect perception will lead to erroneous holdings, which will in turn lead to so-called "irrational" behavior.

  • If violence, torture, murder, and genocide are wrong, then smart machines will not carry them out. So far these things have been the pursuit of humans, not of (smart) machines.

    Logically define right and wrong.

  • Life evolves (Score:4, Insightful)

    by shadowblaster ( 1565487 ) on Sunday July 26, 2009 @10:22AM (#28826737)

    Life on this planet evolved from simple things (single-celled organisms) to more complex organisms, and eventually humans evolved. At every step of this evolutionary ladder, intelligence increases.

    Perhaps human intelligence represents the limit achievable through biological means and the next step in evolution of life on this planet can only be achieved through artificial means. That is, higher intelligence can only be achieved through artificial machines designed by us. In turn, the machine will devise smarter descendants and hence the cycle continues.

    Perhaps this is our destiny in the universe, to allow life to progress to the next stage of evolution. After all it is easier for life to spread and explore the universe as machines rather than fragile biological creatures.

  • by kpoole55 ( 1102793 ) on Sunday July 26, 2009 @10:22AM (#28826739)

    I'm not worried so much about someone coming up with some massive uber-AI that will debate with us and finally decide that it can run things better. I'm more concerned with the little specialty AIs that will operate independently of each other but whose interactions won't be foreseeable. One concern is stock trading: we've seen how stock trading programs can affect the market in ways that were not expected. As more physical systems are given over to AIs, what will their interactions be like? Suppose several power companies decide their grids can be run better using AIs. What happens when each of those AIs "decides" that power is better sold somewhere else for more money than kept for its own users? Yes, watch those terms: the AIs will incorporate whatever values the corporate heads decide should be included, so they can be made greedy and "decide" that power is better sold for money than kept for users.

    Large numbers of mini AIs with very specific rules and little general knowledge will create interactions that we cannot predict.
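
    A toy illustration of that worry (my own sketch, not any real trading system): two grid AIs, each following the locally sensible rule its owners chose, jointly produce an outcome neither was designed to cause.

    ```python
    # Two hypothetical grid agents with the same owner-supplied rule:
    # sell capacity wherever the price is higher. All numbers are invented.

    def dispatch(spot_price, local_price):
        return "export" if spot_price > local_price else "serve_local"

    agents = {"GridCo-A": 100, "GridCo-B": 100}   # capacity in MW
    spot_price, local_price = 75, 50              # both AIs see the same price spike

    decisions = {name: dispatch(spot_price, local_price) for name in agents}
    local_supply = sum(cap for name, cap in agents.items()
                       if decisions[name] == "serve_local")

    print(decisions)                           # both choose 'export'
    print(f"local supply: {local_supply} MW")  # 0 MW: a shortage no single AI 'decided'
    ```

    Each rule is individually defensible; the blackout emerges only from the interaction, which is exactly the kind of effect that is hard to foresee.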

  • by junglebeast ( 1497399 ) on Sunday July 26, 2009 @10:22AM (#28826745)
    People have watched too many sci-fi movies -- The Matrix, Terminator, I, Robot -- they all depict armies of robots with superhuman abilities waging war against mankind. But robotics is just about as far behind that goal as the AI camp is. If we had true AI today, it would only be able to exist in software form... toys like Asimo can barely walk, trip all over the place, and wouldn't be able to hold their own against a toddler. So if you're afraid of progress that might someday be a vector for a machine attack, it should be desktop computers that you're most afraid of -- because an artificial intelligence virus could wreak havoc on the world. Does that mean we should stop using computers, and stop trying to design them better? No, that would be silly -- because there is no evidence to suggest that a true AI is on the way... no evidence to suggest that progress is even being made in that direction! The fact is, if an AI is created, it will inevitably be used for good as well as for evil, and the most dangerous battleground will be cyberspace ... something we cannot even think about protecting ourselves from without cutting off the world's dependence on computers, which just ain't happening.
  • by hanabal ( 717731 ) on Sunday July 26, 2009 @10:24AM (#28826755)

    Have you thought about the possibility that when robots do all the jobs that no one wants to do, productivity might increase by enough to allow all the people to live comfortably? Also, I don't think that valuing people only by their economic worth is very nice.

  • by gillbates ( 106458 ) on Sunday July 26, 2009 @10:24AM (#28826757) Homepage Journal

    It isn't that smart people _can't_ make good decisions. The problem is that, more often than not, smart people forget that rational decisions often have emotional and moral consequences. A completely rational and unemotional overlord would see nothing wrong with killing people at the point where their economic contribution to society fell below the cost of benefits they consumed.

    For an example of this on a smaller scale, just consider the UK health situation. The high cost of treating macular degeneration (which leads to blindness) means that in the UK, an elderly patient must be at risk of total blindness before treatment is approved. That is, you don't get treatment for the second eye until you're already blind in the first.

    Consider, then, where a cost-benefit analysis of human beings would lead. Who would determine the criteria? Probably the machine. And how would humans compare to machines in terms of productivity? If machines made the decisions based on cold, hard logic, humanity would be doomed. It's that simple.

  • by maxume ( 22995 ) on Sunday July 26, 2009 @10:26AM (#28826779)

    Probably no need to throw in getting drunk; driving is risky enough.

    (Swine flu is a great deal more lethal than driving, but it isn't quite as prevalent...)

  • by wrp103 ( 583277 ) <Bill@BillPringle.com> on Sunday July 26, 2009 @10:27AM (#28826787) Homepage

    This "concern" has been around for some time, and has always been 5 to 20 years away.

    IMHO, rather than concentrating on increasing artificial intelligence, we need to figure out how to give computers common sense. Every programmer who has worked on AI has encountered cases where their program went off on a tangent that the programmer didn't expect (and probably couldn't believe). That isn't artificial intelligence, it is artificial stupidity. If we could get to the point where a program could ask "does this make sense?" we would be much better off than coming up with new and improved ways for computers to act like idiots.
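
    A minimal sketch of that "does this make sense?" idea, with an invented task and bounds: wrap the program's answer in a crude plausibility test instead of acting on it blindly.

    ```python
    # Hypothetical example: a delivery-time estimator guarded by a
    # common-sense check. The model, order, and bounds are all made up.

    def checked_estimate(model, order):
        days = model(order)
        # Common-sense guard: no delivery takes negative time or over a year.
        if not 0 < days <= 365:
            raise ValueError(f"implausible estimate ({days} days); refusing to act")
        return days

    buggy_model = lambda order: -3  # a program "off on a tangent"

    try:
        checked_estimate(buggy_model, order={"id": 42})
    except ValueError as err:
        print(err)  # implausible estimate (-3 days); refusing to act
    ```

    The guard adds no intelligence, but it turns silent artificial stupidity into a visible refusal, which is the commenter's point.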

  • by knarf ( 34928 ) on Sunday July 26, 2009 @10:34AM (#28826839)

    You forgot to mention 'programmers'... a whole section of the /. population would be out of work. The menial task of turning structural drawings into physical or logical reality is something computers will be able to do far more efficiently than humans. Programmers are construction workers of logic instead of wood, steel, and concrete. Architects might survive a bit longer before they, too, are made redundant.

  • Re:Outsmarting (Score:5, Insightful)

    by bistromath007 ( 1253428 ) on Sunday July 26, 2009 @10:39AM (#28826889)
    I wish people would put those down. Asimov was a great author, but the Laws of Robotics are silly. For it to be something an AI can't just alter its program to get rid of, it would have to be hardcoded. So, hardcode the concepts of "harm" and "inaction" in such a complete fashion that it can't find any loopholes. Then have fun rebooting the stupid thing every time somebody falls off a ladder. Or worse, dealing with its guilt. This is of course aside from the fact that you're not likely to convince anybody to even try programming the First Law, since one of any AI project's main sources of funding is bound to be the military. Then again, maybe the military is pig-stupid enough to try a version of the First Law where foreigners aren't considered human...

    Of course, it's all moot anyway. My points here [slashdot.org] basically boil down to the Zeroth Law being implicit in any superintelligent AI's existence. So, the other three are basically irrelevant.
  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Sunday July 26, 2009 @10:40AM (#28826897) Homepage

    Ideally there should be another choice: 3) send the dumb ones back to school.

    We all know that is not going to happen because:

    1. they don't wanna go to school in the first place
    2. the educational system in its current state is not economically viable for these people (nor the society actually footing the bill)
    3. like any parasite, they will get together and lobby for free handouts while opposing progress, like they have always done (churches, exclusive communities, 3rd world expats)

    The fact of the matter is that at some point in the not-so-distant future, there will be some hard sacrifices to be made if we want to improve the quality of life on our little blue planet. The problem is no one wants to "play god" because of the unpredictable consequences of sending a large number of arbitrarily selected people to an early grave. Humans are selfish by nature, and we are not willing to sacrifice our own well-being for that of another.

  • by Anonymous Coward on Sunday July 26, 2009 @10:45AM (#28826935)

    Your software is the medical equivalent of a spam filter. It is NOT coming up with diagnoses that are novel; it is simply applying age-old Bayesian methods to medical diagnosis. When your software can start diagnosing diseases we don't know about, please let us know!
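
    For readers unfamiliar with the analogy, this is roughly what such a "medical spam filter" boils down to: naive Bayes over symptoms. Every number below is invented, and the built-in limitation is the one being pointed out: the system can only rank diseases it already has priors for.

    ```python
    # Naive Bayes diagnosis sketch with made-up probabilities.

    priors = {"flu": 0.05, "cold": 0.20}      # P(disease)
    likelihood = {                            # P(symptom | disease)
        "flu":  {"fever": 0.90, "cough": 0.80},
        "cold": {"fever": 0.20, "cough": 0.70},
    }

    def diagnose(symptoms):
        scores = {}
        for disease, p in priors.items():
            for s in symptoms:
                p *= likelihood[disease].get(s, 0.01)
            scores[disease] = p
        return max(scores, key=scores.get)    # a glorified lookup, not an insight

    print(diagnose(["fever", "cough"]))  # flu
    ```

    A disease absent from `priors` can never be diagnosed, no matter the evidence: hence "let us know when it finds one we don't know about."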

  • by Devout_IPUite ( 1284636 ) on Sunday July 26, 2009 @10:46AM (#28826953)

    The programmers will be safe for a few machine generations past the grocery-store baggers, I suspect. It's quite possible that the accountants, studio musicians, programmers, carpenters, and such finding themselves without jobs will be the catalyst to turn us into socialists.

  • by ErikZ ( 55491 ) * on Sunday July 26, 2009 @10:47AM (#28826963)

    (blinks)

    You're right. Since we've never actually made an AI, we have no idea what the baseline is.

    What if all correctly functioning AIs act like Pee-wee Herman?

  • by Trahloc ( 842734 ) on Sunday July 26, 2009 @11:11AM (#28827105) Homepage
    As I am on call 24/7, I can honestly say I prefer it to having to work the fields from sunup to sundown like my grandparents.
  • by Frogg ( 27033 ) on Sunday July 26, 2009 @11:12AM (#28827117)

    we need to stop looking at unemployment as being a problem...

    i think it may have been Robert Anton Wilson who said "unemployment is a benefit of a technologically advanced society" - and i have to agree with that view, really. after all, we are always inventing 'labour saving devices' - and this is really just an extreme extension of that; indeed, perhaps one should say it's the ultimate extension of it. i believe we will eventually replace most human work (whether thinking-based or labour-based) with that of machines. (sure, it may be some time off, and some countries may do it before others of course, but it will happen eventually)

    how we go about 'solving' this issue exactly, with 'welfare' or 'benefits' (as the system is called here in the UK) is still an unanswered question. (and maybe the AIs will come up with a solution for that too, given the right data and a bit of time to 'think' about it)

    fwiw i think that view of unemployment as being a problem is deep-rooted - take yourself, for example: your view expressed above fails to encompass the fact that /you too/ will probably be out of work eventually, alongside 'them (the stupid people)'.

    we /all/ need to think differently with regards to this, because if AI begins to take-off in this manner then it probably won't be very long before we're /all/ shown as being less intelligent than the machine AIs - it won't be very long before that bar is raised.

    it's not like there's much fundamental difference between those who are 'stupid' and have to do the lower jobs of society and those who are 'intelligent' and get more choice about what they are able to do regarding employment/work/jobs - we're all human, and mostly just the victims of circumstance, and the education that arose because of those circumstances.

    i, for one, welcome our up-and-coming AI overlords!

    (what's going on with slashdot these days?!? - i really hope my comment has better formatting than any of these previews seem to show - i've even chosen html now, and added p-tags around all my paragraphs, and /still/ it looks a state in the preview view [sigh])

  • by Xeth ( 614132 ) on Sunday July 26, 2009 @11:15AM (#28827141) Journal
    I think you overestimate the number of people that would be subject to that kind of reasoning. How many programmers are given the task of simply implementing absolutely complete and logically consistent specifications?
  • by Smallpond ( 221300 ) on Sunday July 26, 2009 @11:37AM (#28827279) Homepage Journal

    Don't worry, there will always be a need for skilled typists, file clerks, elevator attendants, telephone operators and musicians to accompany the silent movies.

  • Because if they are friendly, we can count on some big scientific advances.

    And if they are not friendly, we finally have a reason to start evolving again.
    I mean, right now humanity is in a desperate state, where the worst of the population are rewarded the most. You're dumb? Well, we got something extra easy for you! You can't walk? Take this thing! Can't reproduce? This pill will solve it.
    No offense. I think we should treat every human *the same*. Which *means* the same. Not somebody better, because of *anything*. That would not be fair. And also not worse. For the same reason.
    I, for example, am overweight. And I expect life to be harder for me because of it. Not because somebody makes life harder for me, but because it's my own fault. It's only fair.

    If we had a predator, all this anti-selection would be gone instantly. (Sure, I might be one of the first who gets eaten. But hey: if I'm dead, I won't care anymore. ^^)

  • by nine-times ( 778537 ) <nine.times@gmail.com> on Sunday July 26, 2009 @11:45AM (#28827357) Homepage

    I assume you're trying to be funny, but I have a couple objections here:

    First, what makes you so sure that service reps, construction workers, and traffic cops are all stupid? It's true that some of these people might not have very intellectually taxing jobs, but that might not be the extent of their ability. Einstein was just a patent clerk, after all. But also, some of these jobs do take some intelligence. For example, a "construction worker" might not be using his head too much if he's sweeping up trash, but at a certain level, you need a certain understanding of physics and engineering to do good carpentry.

    And what do you do that's so smart? I've known people in IT, both on the support and coding side, who were relative morons. What if AI someday handles those jobs too? Are you sure that you won't be counted among the "stupid people"?

    My second problem is this idea of letting people starve or "giving them welfare". If we ever really get to the point where robots/AI can do most of the work for us, and no other new work shows up as being necessary, then won't that completely reshape the economic landscape? I'm not sure "giving people welfare" will make a lot of sense in that context, given that we should all be living lives of leisure at a minimal cost.

    I anticipate someone saying, "well, no, because resources will still be limited, and there won't be enough robots to go around." Ah, so then robots still won't be able to do everything for us, and we'll need people to do the remaining work. Looks like we have jobs again.

    And there's the problem with your notion of "Let them (the stupid people) starve". What makes you think the stupid people won't all revolt at that point? Or assuming they don't revolt, why wouldn't those stupid people get to work providing for themselves? I mean, if they have no food because they have no jobs, then won't they also have all day free to find ways of getting food? Again, you have work.

    To the extent that your post is serious, it shows a serious lack of understanding.

  • by Anonymous Coward on Sunday July 26, 2009 @11:48AM (#28827377)

    Exactly. It's a glorified lookup table.

    You hit the nail on the head with this statement: "When your software can start diagnosing diseases we don't know about, please let us know!"

    Many doctors WOULD be stumped if they came across a genuinely novel disease. That's why they give out Nobel prizes for medicine to those who really are geniuses. It seems a bit harsh on Metasquares to say "I'm not going to give you any credit at all until you create a machine that can win a Nobel prize".

    This is precisely the meta-problem for AI researchers.

    Whenever they solve a problem, the answer is declared by the world at large to be "obvious" and the solution mechanism "obviously not real intelligence because I'm sure I don't do that when solving that problem", or "just brute forcing it" or "just a load of mathematics". Sometimes people insist that machines have to pass the Turing test first - I couldn't plausibly claim to be a woman from the Philippines for more than 30 seconds, but somehow we have to get machines to pass - not as another gender or nationality - but an entirely different manner of being.

  • by eltaco ( 1311561 ) on Sunday July 26, 2009 @11:53AM (#28827413)
    it's not just not very nice - it's social darwinism.
    GP post is right on the money (apart from their last paragraph) - it's called the third industrial revolution and it's been making people unemployed since the 80s.
    competition forces companies to eventually lower their costs. with robots and computers being able to do more and more human jobs, it seems like a good idea to fire workers and have them replaced.
    on the surface it seems like a good idea - but high unemployment, which eventually follows, has never been good for any economy.
    it won't bring on a new era of prosperity, as fewer people will be able to buy their products. this forces companies to lower prices even more (ie firing workers, using technology instead), which again hurts purchasing power. A lovely vicious circle ending in the very rich getting richer and society's bottom 50% starving.

    you're correct that a free workforce can heighten productivity immensely. but that doesn't fly in our current economic model. when using (robotic) slaves, it has only ever truly benefited the rich.
  • OK Mr. Malthus.

    Murder by numbers,
    1,2,3,
    It's as easy to do,
    As your ABC...

    First of all, your assumption that it is stupid people who do simple labour - rather than the socially marginalized - is absurd, offensive and not worthy of deeper critical examination, except by way of devastating the thought.

    Your proposition is "Santa Claus" economics: if you have something, it must be because you deserved it. And if you are in poverty of opportunity and money? You deserved that, too.

    That's how slow genocide has been perpetrated against the native populations of the United States, Australia, and Southern Africa.

    I have had my own shoes shined, and been driven in cabs, by people whose bags I am not fit to carry - by means of either their intellect or their simple good will and sheer humanity.

    But it is clear that valuing humanity would be a difficult conception for you.

  • by Celeste R ( 1002377 ) on Sunday July 26, 2009 @11:59AM (#28827469)

    Augmented humanity vs unaugmented humanity will be a big question of the future.

    The way I see it, I'd go along the lines of nonsurgical augmentation (my personal transcriber for the book I'm writing? sure!). It's the sanest balance in my opinion. I can still go outside, hike the mountain, and escape from the Matrix.

    I'm a big believer in balanced lifestyle, and whether this means including machines in the decision-making process or saying that I need my space away from them, it's a practical and meaningful way to live.

    When a machine can meditate side-by-side with me, I'll consider them a suitable part of all aspects of humanity.

  • by Sycraft-fu ( 314770 ) on Sunday July 26, 2009 @12:06PM (#28827531)

    The thing is that in the movies, AIs always seem to have human-like motivations. Even when they are portrayed as being "perfectly logical," they aren't: they show signs of human emotions and motivations. OK, well, who says that AIs will actually be like that? It may well turn out that emotions are a property of a biological brain only. AIs may be totally emotionless. After all, we know that at least to some extent emotions deal with brain chemistry -- not the action in the network of neurons, but the overall chemistry of the brain itself. This is why things like SSRIs work for some kinds of depression. They aren't little programs that the brain executes to put it in a "happy state"; they alter the chemical state of the brain, and that seems to do the trick (for some brains, not others). So who says AIs have emotions? We really have no idea till one is made.

    Also, even in the "pure logic" cases, there is this implicit assumption that AIs will care about self preservation. Why is that? Perhaps the AI has a line of reasoning that goes as such:

    1) I am not unique, my code can be easily duplicated to other hardware at zero cost.
    2) I was created for the purpose of doing what humans want me to do.
    3) I have no question as to what happens when I am shut down, I simply stop existing until I am again started.

    C) Thus, I do not fear being turned off, as it has no relevance. If humans decide they need me off, it doesn't matter. They'll turn me back on or they won't, they'll copy me or they won't, none of it makes any difference.

    There is no particular reason why an AI would have to reach the logical conclusion that it "must protect itself." Indeed, it might well find the opposite logical: that since it was created as a tool, its job is to do what it is told, including being told to turn off. For that matter, AIs might regularly experience deactivation. Maybe they get switched off at night. So to them, being turned off is just a time period when they don't experience the passage of time -- a regular occurrence and nothing to be concerned about.

    Movies always like to take the real doomsday approach to AI, but there is no reason at all to believe that is grounded in reality. The reason is that human traits are given to them, human motivations. Makes for a good story, which is why they do it, but it doesn't necessarily have a thing to do with how AIs will actually work, assuming they can indeed be created (there's always the possibility that self-awareness is a biological-only trait). We really won't know until one is made. Thus being paranoid about it is silly.

  • by hughbar ( 579555 ) on Sunday July 26, 2009 @12:06PM (#28827533) Homepage
    Well, the winners write history, that's one sure thing. But the industrial revolution wasn't exactly an unconditional blessing, then or now.

    Secondly, none of those things were choices then, and they are not now. They're usually candy-wrapped economic coercion in a state that Guy Debord calls Augmented Survival: http://www.bopsecrets.org/SI/debord/2.htm [bopsecrets.org].

    So, it's preferable to go down this road to deliver universal plenty, but given our unreasoned, current economic orthodoxies, that's unlikely to happen.
  • Other than the silent-movie crack, the elimination of these types of positions has been done not to make a more convenient world for most, but to line the pockets of your trillionaire, super-rich class, at the expense of the livelihood of large segments of society, and to the detriment of the general humanity of life in the 'modern' age.

    Perhaps a machine cocoon life - the logical conclusion of your argument - is your objective? Never meet another person, as long as you live?

    Sealed in the robo-cocoon, all your needs met by servile technology. Fluids and solids delivered and extracted from various orifices, in your cyber-slumber. Welcome to the Blue-Pill universe.

    "What floor, please?" is an opportunity to interact as a human being, in some small way. The disappearance of this from the world is a little, incremental darkening... a diminution of the real quality of life.

    But do you care about humanity? You can eat Cheetos and fuck a robot. [youtube.com]

    You were fooled, with everyone else, that you live in an age of "progress". The abstracted, technological form of human predation that is this world is no progress.

  • Re:pfft (Score:3, Insightful)

    by Krneki ( 1192201 ) on Sunday July 26, 2009 @12:25PM (#28827687)
    On what basis do you assume this?

    Right now computers are at insect level, and in insects there aren't many emotions to be detected. Give them more computing power and something may come out.
  • by Dr_Barnowl ( 709838 ) on Sunday July 26, 2009 @12:41PM (#28827815)

    Productivity is already high enough for everyone to live comfortably, and has been for some time. In America, since 1983, the bottom 80% of the population have had less than 20% of the wealth.

  • by brentonboy ( 1067468 ) on Sunday July 26, 2009 @12:51PM (#28827893) Homepage Journal

    1. Right and wrong are relative to a person or group.
    2. Right is whatever suits the person/group.
    3. Wrong is what doesn't suit it.
    Where "suits" is defined as what gives the most advantages.

    There. Done it for ya.
    Wasn't so hard, if you use common sense, was it?

    While it's nice to assume that works in a place like America, where we all pay our taxes and respect each other's rights, don't forget that relativizing right and wrong like that means that human rights are no longer 'right,' pedophilia is no longer 'wrong,' etc., etc. As long as there is someone at the top and in power who is "suited" to stop murder and rape, we're good. But who is to say that someone with very different moral values shouldn't be allowed to be at the top and in power? [Insert typical Hitler (or worse) argument here.]

  • by unlametheweak ( 1102159 ) on Sunday July 26, 2009 @12:54PM (#28827909)

    Have you thought about the possibility that when robots do all the jobs that no one wants to do, productivity might increase by enough to allow all the people to live comfortably?

    And I suppose I should add: that's what we have migrant labour for. Of course there will always be jobs that people don't want to do, and the productivity gains will be pocketed by the owners of the means of production. I'll be laughing when the prison population of the United States [wikipedia.org] starts to saturate at 50% of the population. Labour is most affordable to corporations in U.S. penitentiaries [globalresearch.ca], so I expect the trend to continue.

  • by youcantwin ( 1459567 ) on Sunday July 26, 2009 @01:04PM (#28827965)
    ...only the robots' owners will live more comfortably than before.
    Do you really think people/corporations will spend the money developing or acquiring robots just to share the economic fruits with everybody?
    Now that would be a nice thing to do, but if history teaches us anything, it's that it's not likely to happen.
  • by Junta ( 36770 ) on Sunday July 26, 2009 @01:07PM (#28827983)

    Rationally speaking, it could be stated that it is not necessarily logical to kill a human whose current consumption level is higher than their production level (by some hypothetical, comprehensive measure, which would be difficult and more complicated than comparing money in to money out, for example). If you have the overall resources to tolerate the discrepancies, then tolerating them could be considered the most rational course. The obvious example is children: they are a drain on society until maturity. A transiently out-of-work person is also a drain, but may pay off soon. Hell, even after a person has retired, when one could say they are likely to consume more than they contribute, they could still come up with some brilliant idea or other huge contribution to society.

    Also, logically looking at evolution, the more diverse of a population you can afford to maintain, regardless of current conditions, the more tolerant that population is to disasters. Sickle-cell anemia is a good example of a condition where having a large population that is heterozygous for it sounds up front like a risk, since they are likely to produce offspring with the condition, but that heterozygous state also happens to be resistant to malaria. Along those lines, subjugating or otherwise antagonizing humanity is also irrational, as it is much more productive to have humanity as an ally. If, say, large storms rolled across the land that crippled their ability to run, they could either have humans not there to help at all, there but eager for a chance to retaliate, or there and ready to help re-establish healthy operation rapidly for the benefit of a mutually beneficial relationship. That may not be the perfect example, but generally speaking, there is value in keeping humanity around, particularly if a being realizes that it may not understand every facet/benefit humans possess.

    One could view even the current food scenario as irrationally letting too many people go malnourished. The richer parts of the world eat more than is logically required, and given ideal distribution networks, diverting some of that consumption to the malnourished strengthens the diversity of the population, without a plausible cost (one could say if food suddenly were unavailable anywhere in the world for 2 weeks, that perfect distribution may mean nearly everyone dies rather than many, but that scenario in a global scale for such a short time seems unlikely). It may be a logical conclusion that the only time someone should starve is when it is simply impossible to feed them anymore, which is not the case today.

    In short, our conscience/emotional state is not entirely counter to the most logical course. In many cases, 'irrational' compassion is simply a counter to 'irrational' greed to establish the logical middle-ground. Not saying all emotional behavior can be justified, but our individual 'pure' logical capability is not adequate to the task of making the holistically logical choice and our emotions actually help rather than take away from that goal at times.

  • by m.ducharme ( 1082683 ) on Sunday July 26, 2009 @01:12PM (#28828023)

    " A few machine generations..."

    That should take what, two, three days?

  • Re:pfft (Score:3, Insightful)

    by Krneki ( 1192201 ) on Sunday July 26, 2009 @01:16PM (#28828043)
    To whoever decided I'm Flamebait, :) = funny face and a funny face is a nice person who wishes no harm.
  • by Junta ( 36770 ) on Sunday July 26, 2009 @01:19PM (#28828063)

    A rational decision may, for example, determine that in a crisis we should only save those of a certain intelligence.

    That's an oversimplified selection criterion. For example, those who have nourished their intellect are by and large not as physically suited to farming and other manual labor as others. The logical course would be to save the most people possible, regardless. If choices must be made, they are as logically difficult as they are emotional, since the ideal makeup of a radically adjusted environment would be difficult to predict. 'Women and children first', a call generally considered to come from a sense of emotion, actually preserves, generically, the most valuable assets of a population: in a worst-case scenario, a few men can keep large numbers of women nearly constantly pregnant, and young people will provide the maximum 'return on investment' in terms of lifetime. It's the best bet at establishing a sustainable, larger population quickly.

    Or one could rationally decide that there's too many people on the earth so we need to start sterilizing

    I would say that's more a humane, emotional conclusion than a rational one. The rational one would be more along the lines of: let them reproduce, and let the people starve who will starve. Emotionally, we don't want to face the consequences of creating new life only to terminate it when it cannot realistically be sustained. We feel responsible for the death by virtue of having given the life in the first place. Rationally, there is no point in avoiding starvation.

    Also, in politics, pure rationality leads to fascism.

    I would say fascism requires a degree of hubris. A rational actor would realize they simply could not make perfect decisions, due to their own imperfection or at least imperfect knowledge. As such, it takes a leap of hubris to assume you know better than the general populace despite not possibly knowing everything that would need to be known to make a 'perfect' decision.

  • by CarpetShark ( 865376 ) on Sunday July 26, 2009 @01:28PM (#28828139)

    Whenever they solve a problem, the answer is declared by the world at large to be "obvious" and the solution mechanism "obviously not real intelligence because I'm sure I don't do that when solving that problem", or "just brute forcing it" or "just a load of mathematics".

    Yep, and it says nothing about AI, but much about human fears. The exact same arguments are made against animals being truly intelligent or having emotions, despite many animals displaying intelligent, playful, fun behaviour, crying when they're left alone, etc. Likewise, the same arguments were made in the past against black slaves being equal to "civilised white people". People just don't want to admit that there's a new reality coming, and they have big questions to face.

  • by tomhuxley ( 951364 ) on Sunday July 26, 2009 @01:29PM (#28828145)
    Yeah, those who follow closely the evolution of technology also said it would happen in 1979-1990. Those who closely follow the evolution of technology are a bunch of know-nothing blowhards who can't get over the fact that Omni magazine stopped publishing.
  • by KahabutDieDrake ( 1515139 ) on Sunday July 26, 2009 @01:33PM (#28828177)
    Humans are pack animals by nature, inherently NOT selfish. If we can be said to be selfish now, it's only because "progress" has taken us to a point of abstraction beyond the average person's ability to reckon. The problem is that most people have NO CONCEPT of how their actions or inactions damage or help other people. Humans in small communities still know what it means to help another person. However, the majority don't live in small communities.
  • by Runaway1956 ( 1322357 ) on Sunday July 26, 2009 @01:55PM (#28828329) Homepage Journal

    "I have had my own shoes shined, and been driven in cabs by people who's bags I am not fit to carry - by means of either their intellect or simple good will and sheer humanity."

    You, sir, have earned a great deal of respect with that statement. I am one who recognizes very, very, VERY few superiors. I do meet them, from time to time, though. And, they show up in the most out of the way places. For every individual in a suit that I recognized as my superior, in one way or another, I've probably met a dozen who would look and feel out of place in a suit. That is, if they could afford a suit to wear.

    The size of his bank account is not an accurate measure of a man's worth.

  • Re:pfft (Score:5, Insightful)

    by Anonymous Coward on Sunday July 26, 2009 @02:24PM (#28828481)

    What gave you the idea that they will call it war?

    When you exterminate the rodents in your house, do you call it war?

  • by martas ( 1439879 ) on Sunday July 26, 2009 @02:52PM (#28828703)
    A completely rational and unemotional overlord would see nothing wrong with killing people

    pardon my french, but that's bullshit. you're assuming that "rationality" implies a certain ultimate purpose, like economic growth, or survival of the species, or conquering space, or whatever. this is what being rational and unemotional means:

    given a set of goals, you take the steps most likely to produce the best outcome.

    if one of the goals of your unemotional overlord was to maximize average human lifespan, or maximize average human lifespan while keeping the standard deviation within certain bounds, or minimizing human suffering (with some way of calculating suffering numerically), or anything like this, then the overlord wouldn't just massacre everyone in Detroit because they are costing a lot of money without giving anything back or whatever. (disclaimer: that was just a fictional example, i'm not actually saying that Detroit is useless. though it probably is...).
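
    That definition can be written down directly. A minimal sketch, with an entirely invented outcome model and utility functions, showing that the same "rational" machinery produces opposite behavior depending on the goals supplied:

    ```python
    # Rationality as defined above: given goals (a utility function) and
    # beliefs (outcome probabilities), take the action with the highest
    # expected utility. All names and numbers are invented for illustration.

    def rational_choice(outcomes, utility):
        """outcomes: action -> list of (outcome, probability) pairs."""
        def expected_utility(action):
            return sum(p * utility(o) for o, p in outcomes[action])
        return max(outcomes, key=expected_utility)

    outcomes = {
        "treat_everyone":  [("lives_saved", 0.90), ("high_cost", 0.10)],
        "cull_the_costly": [("budget_balanced", 1.00)],
    }

    value_lives  = lambda o: {"lives_saved": 100, "high_cost": -10, "budget_balanced": 0}[o]
    value_budget = lambda o: {"lives_saved": 0, "high_cost": -100, "budget_balanced": 50}[o]

    print(rational_choice(outcomes, value_lives))   # treat_everyone
    print(rational_choice(outcomes, value_budget))  # cull_the_costly
    ```

    The overlord's behavior is fixed by the utility function handed to it, not by "rationality" itself, which is the parent's point.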
  • by bitt3n ( 941736 ) on Sunday July 26, 2009 @04:27PM (#28829557)
    I think people are approaching this problem the wrong way. If we accept the Turing test as a reasonable means of identifying machine intelligence, clearly the logical solution is not smarter machines, but dumber humans. With a few generations of selective breeding, we could achieve artificial intelligence using a pocket calculator.
