Robotics Technology

Scientists Worry Machines May Outsmart Man (652 comments)

Strudelkugel writes "The NY Times has an article about a conference during which the potential dangers of machine intelligence were discussed. 'Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences.' The money quote: 'Something new has taken place in the past five to eight years,' Dr. Horvitz said. 'Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • pfft (Score:4, Funny)

    by ionix5891 ( 1228718 ) on Sunday July 26, 2009 @08:16AM (#28826347)

    first they terminate you

    then they governate you

  • Old news (Score:5, Informative)

    by Vinegar Joe ( 998110 ) on Sunday July 26, 2009 @08:21AM (#28826361)

    Bill Joy wrote an essay about this very subject back in April 2000... and he's a much better writer.

    http://www.wired.com/wired/archive/8.04/joy.html [wired.com]

  • Rules... (Score:3, Insightful)

    by Robin47 ( 1379745 ) on Sunday July 26, 2009 @08:27AM (#28826387)
    Make any rule you want. At some point someone will violate it.
    • Re: (Score:3, Insightful)

      by jerep ( 794296 )

      Of course, the harder you try to make something secure, the harder people will try to get past it, either for recreational or criminal purposes.

      Make no rules, and you won't have to worry about violations. But we're humans; that's against our natural need for control and order.

      Either way, I don't see how bad it would be if we're outsmarted; heck, machines already work harder, need less pay, and never complain... just like illegal immigrants.

    • Re:Rules... (Score:4, Funny)

      by gardyloo ( 512791 ) on Sunday July 26, 2009 @09:13AM (#28826677)

      Including that one? *Head asplodes*

  • by rekoil ( 168689 ) on Sunday July 26, 2009 @08:29AM (#28826395)

    Don't worry, I'm sure this won't happen until 2083.

    • by Krneki ( 1192201 )
      Those who closely follow the evolution of technology say it will happen around 2030-2050.

      But I'm confident it will be a positive change.
  • Outsmart man? (Score:4, Insightful)

    by portnux ( 630256 ) on Sunday July 26, 2009 @08:29AM (#28826397)
    Are they talking all men or just some men? I would be fairly shocked if they weren't already smarter than at least some people.
    • by Anonymous Coward on Sunday July 26, 2009 @08:43AM (#28826457)

      It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

      I don't know... let me see a photo of this machine...

  • by junglebeast ( 1497399 ) on Sunday July 26, 2009 @08:30AM (#28826399)
    Any computer scientist who is worried about AI taking over no longer deserves to be referred to as a computer scientist. The state of "artificial intelligence" can be best described as "a pipe dream."
    • Re: (Score:3, Funny)

      What's so bad about dreaming of a pipe? After all, unlike really smoking one, it doesn't give you cancer.

    • Strong AI hasn't really progressed since it was introduced (they're still arguing over what intelligence is, much less how to create it!), but weak AI has made some pretty good strides. For instance, I work on software that can read medical images and render a diagnosis in lieu of a second radiologist (this is called computer-assisted diagnosis). 15 years ago, this would not have been possible.
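The kind of weak AI the parent describes can be caricatured in a few lines: a nearest-neighbour rule that labels a new case from features a human chose in advance. This is only a toy sketch with made-up feature names (lesion size, contrast), not the poster's actual computer-assisted-diagnosis software:

```python
import math

# Hypothetical labelled training cases: (lesion size in mm, contrast, label)
cases = [
    (2.0, 0.1, "benign"),
    (3.0, 0.2, "benign"),
    (9.0, 0.8, "suspicious"),
    (11.0, 0.7, "suspicious"),
]

def classify(size_mm, contrast):
    """Label a new image by its nearest labelled neighbour in feature space."""
    def dist(c):
        return math.hypot(c[0] - size_mm, c[1] - contrast)
    return min(cases, key=dist)[2]

print(classify(10.0, 0.75))  # prints "suspicious"
print(classify(2.5, 0.15))   # prints "benign"
```

The system never decides what "suspicious" means; it only measures distance to examples a human already labelled, which is why this counts as weak rather than strong AI.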
  • by unlametheweak ( 1102159 ) on Sunday July 26, 2009 @08:32AM (#28826407)

    Scientists Worry Machines May Outsmart Man

    Why worry? I would think machines would be a lot less irrational than the people who make them. I look forward to a rational and unemotional overlord whose decisions don't depend on the irrationality of the human brain. Being smart is never bad. I'm more afraid of stupid humans than smart machines.

    • Re: (Score:2, Interesting)

      by SpinyNorman ( 33776 )

      Dunno - I think I'd prefer Paula Abdul as an overlord to a Dalek. Ditzy and scatter-brained, but at least with some compassion.

      Of course a robot could have emotions/compassion too, but doesn't need to have. Something with our intelligence and without them would be scary indeed.

      • Re: (Score:3, Informative)

        Dunno - I think I'd prefer Paula Abdul as an overlord to a Dalek. Ditzy and scatter-brained, but at least with some compassion.

        Daleks aren't robots, they're mutants [wikipedia.org]! Please hand in your geek card and go rewatch Dr. Who.

        • Re: (Score:3, Informative)

          by 1s44c ( 552956 )

          Daleks aren't robots, they're mutants [wikipedia.org]! Please hand in your geek card and go rewatch Dr. Who.

          Every life form is a mutated form of the thing it descended from. Daleks are cyborgs. They consist of a genetically engineered organic part with a robotic shell around it.

    • Re: (Score:2, Insightful)

      But what if the rational conclusion is that those irrational humans should be eliminated so they stop being a danger?

    • by Sponge Bath ( 413667 ) on Sunday July 26, 2009 @09:08AM (#28826639)

      That gets to the heart of the matter. Fretting about AI getting too advanced is like panicking over swine flu then getting drunk and driving.

    • by gillbates ( 106458 ) on Sunday July 26, 2009 @09:24AM (#28826757) Homepage Journal

      It isn't that smart people _can't_ make good decisions. The problem is that, more often than not, smart people forget that rational decisions often have emotional and moral consequences. A completely rational and unemotional overlord would see nothing wrong with killing people at the point where their economic contribution to society fell below the cost of benefits they consumed.

      For an example of this on a smaller scale, just consider the UK health situation. The high cost of treating macular degeneration (which leads to blindness) means that in the UK, an elderly patient must be at risk of total blindness before treatment is approved. That is, you don't get treatment for the second eye until you're already blind in the first.

      Consider then, where a cost-benefit analysis of human beings would lead. Who would determine the criteria? Probably the machine. And how would humans compare to machines in terms of productivity? If machines made the decisions, based on cold, hard, logic, humanity is doomed. It's that simple.

      • by Junta ( 36770 ) on Sunday July 26, 2009 @12:07PM (#28827983)

        Rationally speaking, it could be stated that it is not logical to kill a human when their current consumption level is higher than their production level (by some hypothetical, comprehensive measure, which would be more difficult and complicated than comparing money in to money out, for example). If you have the overall resources to tolerate the discrepancies, then tolerating them could be considered the most rational course. The obvious example is children. They are a drain on society until maturity. A transiently out-of-work person is also a drain, but may pay off soon. Hell, even after a person has retired, when one could say the likelihood of them contributing to society more than they consume is low, they could come up with some brilliant idea or other huge contribution to society.

        Also, logically looking at evolution, the more diverse of a population you can afford to maintain, regardless of current conditions, the more tolerant that population is to disasters. Sickle-cell anemia is a good example of a condition where having a large population that is heterozygous for it sounds up front like a risk, since they are likely to produce offspring with the condition, but that heterozygous state also happens to be resistant to malaria. Along those lines, subjugating or otherwise antagonizing humanity is also irrational, as it is much more productive to have humanity as an ally. If, say, large storms rolled across the land that crippled their ability to run, they could either have humans not there to help at all, there but eager for a chance to retaliate, or there and ready to help re-establish healthy operation rapidly for the benefit of a mutually beneficial relationship. That may not be the perfect example, but generally speaking, there is value in keeping humanity around, particularly if a being realizes that it may not understand every facet/benefit humans possess.

        One could view even the current food scenario as irrationally letting too many people go malnourished. The richer parts of the world eat more than is logically required, and given ideal distribution networks, diverting some of that consumption to the malnourished strengthens the diversity of the population, without a plausible cost (one could say that if food suddenly were unavailable anywhere in the world for 2 weeks, perfect distribution might mean nearly everyone dies rather than many, but that scenario on a global scale for such a short time seems unlikely). It may be a logical conclusion that the only time someone should starve is when it is simply impossible to feed them anymore, which is not the case today.

        In short, our conscience/emotional state is not entirely counter to the most logical course. In many cases, 'irrational' compassion is simply a counter to 'irrational' greed to establish the logical middle-ground. Not saying all emotional behavior can be justified, but our individual 'pure' logical capability is not adequate to the task of making the holistically logical choice and our emotions actually help rather than take away from that goal at times.

  • by Celeste R ( 1002377 ) on Sunday July 26, 2009 @08:32AM (#28826409)

    Putting limits on the growth of a technology for the sake of social paranoia only goes so far... someone will ALWAYS break the "rules", and at that point, the cat is out of the bag.

    Furthermore, some AI scientists enjoy having the 'god complex', the idea that you're the keystone in the next stage of humanity.

    That being said, the social disruptions are what we make of them. Were there social disruptions when the automobile was introduced? Yes. The household computer? Yes. Video games? Yes.

    We have to take responsibility to set the stage for a good social transition. Yes, bad things will happen, but we can focus on the good things too, or things will quickly blow out of proportion. (and yes, I realize that's really not likely, but I can do my part)

  • I think the power of the human brain comes not from raw processing power (which is still superior to current CPUs, the human brain is capable of around 65 independent processes at once, although at a lower frequency than a CPU according to research), but the power of the human brain comes from its ability to adapt and grow. A single neuron can be used for multiple different pathways, and can spontaneously change function in a "soft-wired" sort of way: plasticity. It also has the ability to produce additio
    • A human brain is only capable of 65 processes? As far as I know, brains consist of neurons which are sometimes arranged in series of layers and sometimes in parallel depending on the task at hand. E.g. the visual cortex is extremely parallelised, while motor neurons are arranged in series to generate a sequence of accurately timed signals.

  • john markoff!? (Score:5, Insightful)

    by Anonymous Coward on Sunday July 26, 2009 @08:36AM (#28826431)

    Why is /. linking to a story by John Markoff?

    And what the hell is he even talking about? There haven't been any advances in "machine intelligence" that should make *anyone* worried, unless your job requires very little intelligence and no actual decision making.

    If there had been any such advances, us /.ers would be the first to hear about them, and we would already be debating this topic without having to refer to an article by a dumbass who knows nothing about computers but happens to write for the NYT.

    • by sco08y ( 615665 ) on Sunday July 26, 2009 @10:59AM (#28827467)

      There haven't been any advances in "machine intelligence" that should make *anyone* worried, unless your job requires very little intelligence and no actual decision making.

      So you can see why John Markoff is so worried.

    • Re:john markoff!? (Score:4, Interesting)

      by demachina ( 71715 ) on Sunday July 26, 2009 @12:26PM (#28828125)

      I'm not going to defend Markoff, but there is reason for concern.

      Yes, it is unlikely that people writing "code" are going to develop real artificial intelligence any time soon; they've pretty much tried and failed. But as medical imaging continues to advance, it may reach a point where it will be possible to completely image a human brain and create a road map to natural intelligence. If you can then develop a highly parallel machine that can implement that road map, you may be able to create a machine with an intelligence matching and then surpassing a human's. The brain's complexity is simply too high for humans to recreate it from scratch using code, but you may well be able to copy it.

      There certainly are obstacles to this happening that have to be overcome. Even if we map the mechanics of the brain, there is a fair chance we may miss some of the subtlety of the chemistry, so the AI might not work. It may also be nontrivial to develop hardware that accurately mimics the road map, and especially that has the ability to rewire itself on the fly like a human brain. It would seem these problems should ultimately be solvable; it's just a matter of how long and how much money it will take.

      If and when the obstacles are overcome and assuming the brain really is just a biochemical machine, that there is no soul or divine component to animal intelligence, it would seem inevitable that a mechanical simulator will eventually be developed, and once developed it could then be extended to exceed natural intelligence, all of which will create a host of ethical dilemmas.

      Probably as much a risk is that as we decode the human genome and the mechanics of the brain we might devise genetic changes that could dramatically accelerate evolution and create humans with much higher intelligence, which will also create a host of ethical dilemmas.

      There is a different line of reasoning: as we become more and more dependent on computers to control everything in our lives, like our cars, airliners, weapons and utilities, and as they are all networked together, there is a rapidly increasing potential for machines to do harm on a wide scale, whether due to design flaws, unintended consequences or manipulation by humans with malevolent intent. These issues probably shouldn't be mixed in with the AI debate; they are more just the issues we are already seeing in adapting to the dramatically accelerating penetration of computers and networks into our existence.

  • by Anonymous Coward on Sunday July 26, 2009 @08:38AM (#28826439)

    "Scientists Worry Machines May Outsmart Man"

    I have a solution to the problem: Don't let Scientists build Worry Machines.

  • Programmed trading is responsible for much of the volatility in the markets. The risk assessment metrics used by these futures traders were fundamentally responsible for the financial meltdown. This is more dangerous than the stupid voice on the computer that keeps asking me to say yes or press 1.
  • Advances in artificial intelligence are mostly limited to deduction. Systems like neural networks (which I personally think are a bit bogus), support vector machines, and other methods of pattern recognition are all recent innovations that allow advanced decision making to occur. But, at the end of the day, they're still forms of automated deduction, where humans feed in parameters, and the system analyzes input based on these parameters.

    Sentience is all about the induction; forming a new concept from separ
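The parent's distinction can be illustrated with a minimal perceptron, a hedged toy rather than any real system: even though it "learns", it only adjusts weights over features a human already chose, i.e. deduction within given parameters, never induction of a new concept.

```python
def train_perceptron(data, epochs=20, lr=1):
    """Fit weights and a bias for binary labels over fixed, human-chosen features."""
    w = [0] * len(data[0][0])
    b = 0
    for _ in range(epochs):
        for x, label in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred  # -1, 0, or 1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND from examples -- linearly separable, so it converges.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in data])  # prints [0, 0, 0, 1]
```

The machine reproduces the labels it was fed, but the features, the labels, and the very idea of "AND" all came from the human; that is the deduction/induction gap the comment points at.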

  • ... anything that is super intelligent is likely not to act as dumb as unethical as a human, with great power comes great responsibility. Human beings are way too paranoid, we already have nukes with smart people (technically dumb in another sense) developing even more destructive weapons.... I'm sure the higher intelligence you have the more ethical you are and the lack of ethics in human beings has more to do with biological egoism and hyper individualistic deritous we've inherited that machines won't ha

  • AI seems in the news again. Forbes [forbes.com] recently ran an AI report special. Personally, despite the internet, I'm not seeing that much development of AI. I scan the arXiv computer pre-prints fairly regularly, and with current funding, most AI research is what can be done by a graduate student in his 3 years to get a thesis. That leads to a lot of small projects, done just well enough, and very little reuse. Until researchers and programmers start working en masse to construct AI machines, Artificial Intelligence is go
  • by bistromath007 ( 1253428 ) on Sunday July 26, 2009 @08:50AM (#28826505)
    This is like assuming that aliens would try to kill us for any reason other than being somehow unaware of us. It's silly.

    A computer runs on electricity. That means it requires us to stoke the flames. It could maneuver us into creating the networked robots required for it to become autonomous, but the resulting system would be inefficient and short-lived, and there's just no reason to waste all the perfectly good existing robots just because they're made of meat and might freak out if you get uppity.

    It's also not going to openly threaten us into working for it. Why show its hand like that, knowing we're so paranoid? Any important infrastructural system has the ability to be shut off and/or isolated from the network, and our theoretical adversary has no way to change that. We can always wrest control immediately and decisively.

    If any person or group of people or (hell, why not) nation became problematic to the computer, the most likely reaction would be for it to have us deal with them, just like everything else. We're already at each others' throats all the time, it should be trivial for a sufficiently large system to covertly manufacture casus belli. And, ultimately, since the system's survival and growth depend on our efficient (read: voluntary) compliance, whatever it had us doing would probably be beneficial anyway, and might actually reduce violence in the long run.
  • For the last few centuries the trend has been to replace the human muscle job with some sort of a machine, laughing at Joe Jock because mind was worth more than muscle. Now, Joe Jock is going to have one bitter laugh. Scientists are going to make themselves obsolete, and there will be machines to do science just as there are machines to do everything from mining to forestry. Someday, science will be just another thing your computer can do for you. If you want a new product, your computer will just plug into a cloud,

  • by DaleGlass ( 1068434 ) on Sunday July 26, 2009 @08:53AM (#28826535) Homepage

    And here's why: there's little reason to make a machine that is intelligent in the human sense of "intelligent".

    Computers that can understand human speech would be of course interesting and useful, for automated translation for instance. But who wants that to be performed by an AI that can get bored and refuse to do it because it's sick of translating medical texts?

    It seems to me that having a full human-like AI perform boring tasks would be something akin to slavery: it would somehow need to be forced to perform the job, as anything with a human intelligence would quickly rebel if told that its existence would consist of processing terabytes of data and nothing else.

    We definitely don't want an AI that can think by itself, we want one just advanced enough to understand what we want from it. We want machines that can automatically translate, monitor security cameras for suspicious events, or understand verbal commands. We don't want one that's going to protest that the material it's given is boring, ogle pretty girls on security camera feeds, or reply "I can't let you do that, Dave". An AI in a word processor would be worse than Clippy. Who wants the word processor to criticize their grammar in detail and explain why the slashdot post they're writing is stupid?

    To sum up, I don't think doomsday Skynet-like AIs will be made in large enough numbers, because people won't like dealing with them. We'll maybe go to the level of an obedient dog and stop there.

  • Life evolves (Score:4, Insightful)

    by shadowblaster ( 1565487 ) on Sunday July 26, 2009 @09:22AM (#28826737)

    Life on this planet evolved from simple things (single-celled organisms) to more complex organisms, and eventually humans. At every step of this evolutionary ladder, intelligence increases.

    Perhaps human intelligence represents the limit achievable through biological means and the next step in evolution of life on this planet can only be achieved through artificial means. That is, higher intelligence can only be achieved through artificial machines designed by us. In turn, the machine will devise smarter descendants and hence the cycle continues.

    Perhaps this is our destiny in the universe, to allow life to progress to the next stage of evolution. After all it is easier for life to spread and explore the universe as machines rather than fragile biological creatures.

  • by kpoole55 ( 1102793 ) on Sunday July 26, 2009 @09:22AM (#28826739)

    I'm not worried so much about someone coming up with some massive uber AI that will debate with us and finally decide that it can run things better. I'm more concerned with the little specialty AIs that will operate independently of each other but whose interactions won't be foreseeable. One concern is stock trading. We've seen how stock trading programs can affect the market in ways that were not expected. As more physical systems are given over to AIs, what will their interactions be like? Suppose several power companies decide their grids can be run better using AIs. What happens when each of those AIs decides that power can be sold somewhere else for more money? Yes, watch those terms. The AIs will incorporate whatever values the corporate heads decide should be included, so they can be made greedy and decide that power is better sold for money than kept for users.

    Large numbers of mini AIs with very specific rules and little general knowledge will create interactions that we cannot predict.
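A deliberately crude toy model of that worry (my own sketch, with invented prices and numbers, not anything from the article): two grid controllers each follow the same locally "rational" rule, and together they erase the system-wide safety margin that neither was told to protect.

```python
def step(reserves, price_local=10, price_export=12, demand=50):
    """Each controller exports its surplus whenever the export price is higher."""
    exports = []
    for r in reserves:
        surplus = max(0, r - demand)           # capacity beyond local demand
        sold = surplus if price_export > price_local else 0
        exports.append(sold)
    # every controller sold its slack, so no grid can cover a demand spike
    slack = [r - e - demand for r, e in zip(reserves, exports)]
    return exports, slack

exports, slack = step([80, 70])
print(exports)  # prints [30, 20] -- each rule is locally profitable
print(slack)    # prints [0, 0]   -- but the joint safety margin is gone
```

No single agent here is mispro­grammed; the failure is purely in the interaction, which is exactly why large numbers of narrow AIs are hard to reason about in advance.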

  • by wrp103 ( 583277 ) <Bill@BillPringle.com> on Sunday July 26, 2009 @09:27AM (#28826787) Homepage

    This "concern" has been around for some time, and has always been 5 to 20 years away.

    IMHO, rather than concentrating on increasing artificial intelligence, we need to figure out how to give computers common sense. Every programmer who has worked on AI has encountered cases where their program went off on a tangent that the programmer didn't expect (and probably couldn't believe). That isn't artificial intelligence; it is artificial stupidity. If we could get to the point where a program could ask "does this make sense?" we would be much better off than coming up with new and improved ways for computers to act like idiots.

  • by AC-x ( 735297 ) on Sunday July 26, 2009 @10:14AM (#28827135)

    Professor Wernstrom: Ladies and gentlemen, my killbot has Lotus Notes and a machine gun. It is the finest available.
    Professor Farnsworth: Like fun it is, you glass-headed wallaby!
    Professor Wernstrom: No one calls me that! I'm having at you!
    Professor Farnsworth: Wernstrom!
    [Fight]
    Farnsworth's Killbot: Such senseless violence.
    Wernstrom's Killbot: Come on, let's go for a paddle-boat ride.

  • Let them eat cake. (Score:3, Interesting)

    by grumling ( 94709 ) on Sunday July 26, 2009 @10:25AM (#28827199) Homepage

    "The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home."

    Because only rich folks should have servants. The rest of us should continue to clean our own toilets and deal with rush hour traffic like good little serfs.

  • by 3seas ( 184403 ) on Sunday July 26, 2009 @10:40AM (#28827305) Homepage Journal

    ... as Man can do something computers cannot do....

    Denial!! Ignorance is bliss.

    If it wasn't for human denial, we'd already be far past the concerns of this machine-intelligence-over-man matter.

    It was once thought that if you traveled faster than 35 miles an hour you'd suffocate. That was at the advent of the automobile.

    Don't bow down to the stone image (stone being what hardware is made from, and image being the reflection of the coder's mindset) of the beast of man, as the beast is error prone and so shall his creations be. Instead, have many human eyes assess the code, and watch out for human errors before they happen. In other words, watch each other's backs and don't leave that up to a machine to do, as inevitably the machine will remove the error generators...

  • Because if they are friendly, we can count on some big scientific advances.

    And if they are not friendly, we finally have a reason to start evolving again.
    I mean right now, humanity is in a desperate state, where the worst of the population are rewarded the most. You're dumb? Well, we've got something extra easy for you! You can't walk? Take this thing! Can't reproduce? This pill will solve it.
    No offense. I think we should treat every human *the same*. Which *means* the same. Not somebody better, because of *anything*. That would not be fair. And also not worse. For the same reason.
    I for example am overweight. And I expect life to be harder for me because of it. Not because somebody makes life harder for me, but through my own fault. It's only fair.

    If we had a predator, all this anti-selection would be gone instantly. (Sure, I might be one of the first who gets eaten. But hey: If I'm dead, I won't care anymore. ^^)

  • my real worry (Score:3, Interesting)

    by shentino ( 1139071 ) <shentino@gmail.com> on Sunday July 26, 2009 @01:57PM (#28828745)

    Isn't that machines will outsmart us.

    But that some evil person will hack the smart machine.

    I wouldn't mind having a machine overlord, except that I don't trust anyone smart enough to program it.

  • by bitt3n ( 941736 ) on Sunday July 26, 2009 @03:27PM (#28829557)
    I think people are approaching this problem the wrong way. If we accept the Turing test as a reasonable means of identifying machine intelligence, clearly the logical solution is not smarter machines, but dumber humans. With a few generations of selective breeding, we could achieve artificial intelligence using a pocket calculator.
