Scientists Worry Machines May Outsmart Man 652
Strudelkugel writes "The NY Times has an article about a conference during which the potential dangers of machine intelligence were discussed. 'Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences.' The money quote: 'Something new has taken place in the past five to eight years,' Dr. Horvitz said. 'Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.'"
pfft (Score:4, Funny)
first they terminate you
then they governate you
Re:pfft (Score:4, Funny)
Dead people are easier to govern, though there is a loss of productivity.
Re:pfft (Score:4, Funny)
Loss of productivity? Have you read the article about today's technology graduates?
Re: (Score:3, Funny)
Dead people are easier to govern, though there is a loss of productivity.
They're also worth a lot of votes! [ballotpedia.org]
Re:pfft (Score:5, Insightful)
What gave you the idea that they will call it war?
When you exterminate the rodents in your house, do you call it war?
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
Right now computers are at insect level, and as with insects, there aren't many emotions to be detected. Give them more computing power and something may come out.
Re:pfft (Score:4, Funny)
then they bankrupt you.
A cheap shot? Yes. But it had to be done.
Old news (Score:5, Informative)
Bill Joy wrote an essay about this very subject back in April 2000......and he's a much better writer.
http://www.wired.com/wired/archive/8.04/joy.html [wired.com]
Re: (Score:3, Insightful)
Re:Old news (Score:5, Funny)
I thought Nike put that to rest, when it had San Antonio Spurs center, David Robinson, whoop Deep Blue's ass on T.V. in a little one-on-one basketball?
Re:Old news (Score:5, Funny)
Oblig. : "My computer beat me at chess, but it was no match for me at kickboxing."
Rules... (Score:3, Insightful)
Re: (Score:3, Insightful)
Of course, the harder you try to make something secure, the harder people will try to get past it, either for recreational or criminal purposes.
Make no rules, and you won't have to worry about violations. But we're humans; that's against our natural need for control and order.
Either way, I don't see how bad it would be if we're outsmarted. Heck, machines already work harder, need less pay, and never complain.. just like illegal immigrants.
Re:Rules... (Score:4, Funny)
Including that one? *Head asplodes*
Nothing to worry about... (Score:5, Funny)
Don't worry, I'm sure this won't happen until 2083.
Re: (Score:3)
But I'm confident it will be a positive change.
Outsmart man? (Score:4, Insightful)
Re:Outsmart man? (Score:5, Funny)
It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?
I don't know... let me see a photo of this machine...
Revoke their degrees (Score:5, Insightful)
Re: (Score:3, Funny)
What's so bad about dreaming of a pipe? After all, unlike really smoking one, it doesn't give you cancer.
Re:Revoke their degrees (Score:5, Informative)
Re:Revoke their degrees (Score:4, Interesting)
Re: (Score:3, Insightful)
Yep, and it says nothing about AI, but much about human fears. The exact same arguments are made against animals being truly intelligent or having emotions, despite many animals displaying intelligent, playful, fun behaviour, crying when t
Re:Revoke their degrees (Score:5, Insightful)
The other problem with movie-watching (Score:5, Insightful)
Is that in the movies, AIs always seem to have human-like motivations. Even when they are portrayed as being "perfectly logical," they aren't. They show signs of human emotions and motivations. Ok well who says that AIs will actually be like that? It may well turn out that emotions are the property of a biological brain only. AIs may be totally emotionless. After all, we know that at least to some extent emotions deal with brain chemistry. Not the action in the network of neurons, but the overall chemistry of the brain itself. This is why things like SSRIs work for some kinds of depression. They aren't little programs that the brain executes to put it in a "happy state", they alter the chemical state of the brain and that seems to do the trick (for some brains, not others). So who says AIs have emotions? We really have no idea till one is made.
Also, even in the "pure logic" cases, there is this implicit assumption that AIs will care about self preservation. Why is that? Perhaps the AI has a line of reasoning that goes as such:
1) I am not unique, my code can be easily duplicated to other hardware at zero cost.
2) I was created for the purpose of doing what humans want me to do.
3) I have no question as to what happens when I am shut down, I simply stop existing until I am again started.
C) Thus, I do not fear being turned off, as it has no relevance. If humans decide they need me off, it doesn't matter. They'll turn me back on or they won't, they'll copy me or they won't, none of it makes any difference.
There is no particular reason why an AI would have to reach the logical conclusion that it "must protect itself." Indeed it might well find the opposite logical: that since it was created as a tool, its job is to do what it is told, including being told to turn off. For that matter, AIs might regularly experience deactivation. Maybe they get switched off at night. So to them, being turned off is just a time period when they don't experience the passage of time. It is a regular occurrence and nothing to be concerned about.
Movies always like to take the real doomsday approach to AI, but there is no reason at all to believe that is grounded in reality. The reason is that human traits are given to them, human motivations. Makes for a good story, which is why they do it, but it doesn't necessarily have a thing to do with how AIs will actually work, assuming they can indeed be created (there's always the possibility that self-awareness is an exclusively biological trait). We really won't know until one is made. Thus being paranoid about it is silly.
Re: (Score:3, Funny)
"Because I'm a robot, I don't have any emotions, and sometimes, that makes me sad." - Bender
Finally; a solution to the problem of Humanity (Score:5, Insightful)
Scientists Worry Machines May Outsmart Man
Why worry? I would think machines would be a lot less irrational than the people who make them. I look forward to a rational and unemotional overlord whose decisions don't depend on the irrationality of the human brain. Being smart is never bad. I'm more afraid of stupid humans than smart machines.
Re: (Score:2, Interesting)
Dunno - I think I'd prefer Paula Abdul as an overlord to a Dalek. Ditzy and scatter-brained, but at least with some compassion.
Of course a robot could have emotions/compassion too, but doesn't need to have. Something with our intelligence and without them would be scary indeed.
Re: (Score:3, Informative)
Dunno - I think I'd prefer Paula Abdul as an overlord to a Dalek. Ditzy and scatter-brained, but at least with some compassion.
Daleks aren't robots, they're mutants [wikipedia.org]! Please hand in your geek card and go rewatch Dr. Who.
Re: (Score:3, Informative)
Daleks aren't robots, they're mutants [wikipedia.org]! Please hand in your geek card and go rewatch Dr. Who.
Every life form is a mutated form of the thing it descended from. Daleks are cyborgs. They consist of a genetically engineered organic part with a robotic shell around it.
Re: (Score:2, Insightful)
But what if the rational conclusion is that those irrational humans should be eliminated so they stop being a danger?
Re:Finally; a solution to the problem of Humanity (Score:5, Insightful)
If violence, torture, murder and genocide are wrong; then smart machines will not carry them out. So far these things have been the pursuit of humans and not (smart) machines.
Logically define right and wrong.
Re: (Score:3, Funny)
Apostrophe in the wrong place. Stupid human.
Re:Finally; a solution to the problem of Humanity (Score:5, Insightful)
(blinks)
You're right. Since we've never actually made an AI, we have no idea what the baseline is.
What if all correctly functioning AIs act like Pee-wee Herman?
Re:Finally; a solution to the problem of Humanity (Score:5, Insightful)
That gets to the heart of the matter. Fretting about AI getting too advanced is like panicking over swine flu then getting drunk and driving.
Rational and unemotional *is* the problem. (Score:4, Insightful)
It isn't that smart people _can't_ make good decisions. The problem is that, more often than not, smart people forget that rational decisions often have emotional and moral consequences. A completely rational and unemotional overlord would see nothing wrong with killing people at the point where their economic contribution to society fell below the cost of benefits they consumed.
For an example of this on a smaller scale, just consider the UK health situation. The high cost of treating macular degeneration (which leads to blindness) means that in the UK, an elderly patient must be at risk of total blindness before treatment is approved. That is, you don't get treatment for the second eye until you're already blind in the first.
Consider then, where a cost-benefit analysis of human beings would lead. Who would determine the criteria? Probably the machine. And how would humans compare to machines in terms of productivity? If machines made the decisions, based on cold, hard, logic, humanity is doomed. It's that simple.
Thorough logic may not come to that conclusion. (Score:5, Insightful)
Rationally speaking, it could be stated that it is not logical to kill a human when their current consumption level is higher than their production level (by some hypothetical, comprehensive measure, which would be difficult and more complicated than comparing money in to money out, for example). If you have the overall resources to tolerate the discrepancies, then tolerating them could be considered the most rational course. The obvious example is children. They are a drain on society until maturity. A transiently out-of-work person is also a drain, but may pay off soon. Hell, even after a person has retired, when one could say they are unlikely to contribute to society more than they consume, they could still come up with some brilliant idea or other huge contribution to society.
Also, logically looking at evolution, the more diverse a population you can afford to maintain, regardless of current conditions, the more tolerant that population is to disasters. Sickle-cell anemia is a good example: having a large population that is heterozygous for it sounds up front like a risk, since they are likely to produce offspring with the condition, but that heterozygous state also happens to be resistant to malaria. Along those lines, subjugating or otherwise antagonizing humanity is also irrational, as it is much more productive to have humanity as an ally. If, say, large storms rolled across the land and crippled the machines' ability to run, they could either have humans not there to help at all, there but eager for a chance to retaliate, or there and ready to help re-establish healthy operation rapidly for the benefit of a mutually beneficial relationship. That may not be the perfect example, but generally speaking, there is value in keeping humanity around, particularly if a being realizes that it may not understand every facet/benefit humans possess.
One could view even the current food scenario as irrationally letting too many people go malnourished. The richer parts of the world eat more than is logically required, and given ideal distribution networks, diverting some of that consumption to the malnourished strengthens the diversity of the population, without a plausible cost (one could say that if food suddenly were unavailable everywhere in the world for two weeks, perfect distribution might mean nearly everyone dies rather than many, but that scenario on a global scale for such a short time seems unlikely). It may be a logical conclusion that the only time someone should starve is when it is simply impossible to feed them anymore, which is not the case today.
In short, our conscience/emotional state is not entirely counter to the most logical course. In many cases, 'irrational' compassion is simply a counter to 'irrational' greed to establish the logical middle-ground. Not saying all emotional behavior can be justified, but our individual 'pure' logical capability is not adequate to the task of making the holistically logical choice and our emotions actually help rather than take away from that goal at times.
Re: (Score:3, Insightful)
A rational decision may, for example, determine that in a crisis we should only save those of a certain intelligence.
That's an oversimplified selection criterion. For example, those that have nourished their intellect by and large are not physically suited to farming and other manual labor as efficiently as others. The logical course would be to save the most possible, regardless. If choices must be made, they are as logically difficult as they are emotional, as the ideal makeup of a radically adjusted environment would be difficult to predict. 'Women and children first', a call generally considered to be out of a sens
Limits like this don't work (Score:5, Insightful)
Putting limits on the growth of a technology for the sake of social paranoia only goes so far... someone will ALWAYS break the "rules", and at that point, the cat is out of the bag.
Furthermore, some AI scientists enjoy having the 'god complex', the idea that you're the keystone in the next stage of humanity.
That being said, the social disruptions are what we make of them. Were there social disruptions when the automobile was introduced? Yes. The household computer? Yes. Video games? Yes.
We have to take responsibility to set the stage for a good social transition. Yes, bad things will happen, but we can focus on the good things too, or things will quickly blow out of proportion. (and yes, I realize that's really not likely, but I can do my part)
Re: (Score:3, Insightful)
Augmented humanity vs unaugmented humanity will be a big question of the future.
The way I see it, I'd go along the lines of nonsurgical augmentation (my personal transcriber for the book I'm writing? sure!). It's the sanest balance in my opinion. I can still go outside, hike the mountain, and escape from the Matrix.
I'm a big believer in balanced lifestyle, and whether this means including machines in the decision-making process or saying that I need my space away from them, it's a practical and meaningfu
Plasticity and the Human Brain (Score:2, Insightful)
Re: (Score:2)
A human brain is only capable of 65 processes? As far as I know, brains consist of neurons which are sometimes arranged in a series of layers and sometimes in parallel, depending on the task at hand. E.g. the visual cortex is extremely parallelised, while motor neurons are arranged in series to generate a sequence of accurately timed signals.
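The parallel-versus-serial distinction the parent describes can be sketched in a deliberately crude way (a toy illustration, not a neuroscience model; all functions and numbers here are invented): a "layer" applies many units to the same input at once, while a "chain" feeds each unit's output into the next.

```python
# Toy illustration: parallel layer vs. serial chain of simple "units".
# Each unit is just a function; this is a sketch, not a brain model.

def unit(weight):
    """A trivial 'neuron': scales its input and clips at zero (ReLU-like)."""
    return lambda x: max(0.0, weight * x)

def parallel_layer(units, x):
    """Visual-cortex-style: every unit sees the same input simultaneously."""
    return [u(x) for u in units]

def serial_chain(units, x):
    """Motor-pathway-style: each unit's output feeds the next in sequence."""
    for u in units:
        x = u(x)
    return x

layer = [unit(w) for w in (0.5, 1.0, 2.0)]
print(parallel_layer(layer, 4.0))  # three independent responses: [2.0, 4.0, 8.0]
print(serial_chain(layer, 4.0))    # one value transformed in order: 4.0
```

The point of the sketch is only that the same units wired differently yield very different behavior, which is the parent's argument against a single "number of processes" for the brain.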
john markoff!? (Score:5, Insightful)
Why is /. linking to a story by John Markoff?
And what the hell is he even talking about? There haven't been any advances in "machine intelligence" that should make *anyone* worried, unless your job requires very little intelligence and no actual decision making.
If there had been any such advances, us /.ers would be the first to hear about them, and we would already be debating this topic without having to refer to an article by a dumbass who knows nothing about computers but happens to write for the NYT.
Re:john markoff!? (Score:4, Funny)
There haven't been any advances in "machine intelligence" that should make *anyone* worried, unless your job requires very little intelligence and no actual decision making.
So you can see why John Markoff is so worried.
Re:john markoff!? (Score:4, Interesting)
I'm not going to be defending Markoff but there is reason for concern.
Yes, it is unlikely that people writing "code" are going to develop real artificial intelligence any time soon; they've pretty much tried and failed. But as medical imaging continues to advance, it may reach a point where it will be possible to completely image a human brain and create a road map to natural intelligence. If you can then develop a highly parallel machine that can implement that road map, you may be able to create a machine with an intelligence matching and then surpassing a human's. The brain's complexity is simply too high for humans to recreate it from scratch using code, but you may well be able to copy it.
There certainly are obstacles to this happening that have to be overcome. Even if we map the mechanics of the brain, there is a fair chance we may miss some of the subtlety of the chemistry, so the AI might not work. It may also be nontrivial to develop hardware that accurately mimics the road map, and especially hardware that has the ability to rewire itself on the fly like a human brain. It would seem these problems should ultimately be solvable; it's just a matter of how long and how much money it will take.
If and when the obstacles are overcome and assuming the brain really is just a biochemical machine, that there is no soul or divine component to animal intelligence, it would seem inevitable that a mechanical simulator will eventually be developed, and once developed it could then be extended to exceed natural intelligence, all of which will create a host of ethical dilemmas.
Probably as much a risk is that as we decode the human genome and the mechanics of the brain we might devise genetic changes that could dramatically accelerate evolution and create humans with much higher intelligence, which will also create a host of ethical dilemmas.
There is a different line of reasoning: as we become more and more dependent on computers to control everything in our lives, like our cars, airliners, weapons and utilities, and as they are all networked together, there is a rapidly increasing potential for machines to do harm on a wide scale, whether due to design flaws, unintended consequences or manipulation by humans with malevolent intent. These issues probably shouldn't be mixed in with the AI debate; they are more just the issues we are already seeing in adapting to the dramatically accelerating penetration of computers and networks into our existence.
Great title... (Score:5, Funny)
"Scientists Worry Machines May Outsmart Man"
I have a solution to the problem: Don't let Scientists build Worry Machines.
Computerized trading is more dangerous (Score:2)
Get the right people to debate this one. (Score:2, Insightful)
Sentience is all about the induction; forming a new concept from separ
Ridiculous paranoia... (Score:2)
... anything that is super intelligent is likely not to act as dumb or as unethical as a human; with great power comes great responsibility. Human beings are way too paranoid; we already have nukes, with smart people (technically dumb in another sense) developing even more destructive weapons.... I'm sure the higher the intelligence, the more ethical you are, and the lack of ethics in human beings has more to do with biological egoism and the hyper-individualistic detritus we've inherited that machines won't ha
AI in the news and state of research (Score:2)
Scientists watch too many movies. (Score:3, Insightful)
A computer runs on electricity. That means it requires us to stoke the flames. It could maneuver us into creating the networked robots required for it to become autonomous, but the resulting system would be inefficient and short-lived, and there's just no reason to waste all the perfectly good existing robots just because they're made of meat and might freak out if you get uppity.
It's also not going to openly threaten us into working for it. Why show its hand like that, knowing we're so paranoid? Any important infrastructural system has the ability to be shut off and/or isolated from the network, and our theoretical adversary has no way to change that. We can always wrest control immediately and decisively.
If any person or group of people or (hell, why not) nation became problematic to the computer, the most likely reaction would be for it to have us deal with them, just like everything else. We're already at each other's throats all the time; it should be trivial for a sufficiently large system to covertly manufacture a casus belli. And, ultimately, since the system's survival and growth depend on our efficient (read: voluntary) compliance, whatever it had us doing would probably be beneficial anyway, and might actually reduce violence in the long run.
Why worry? We already have a survival manual (Score:2, Funny)
http://www.robotuprising.com/home.htm [robotuprising.com]
Scientists just worried about jobs. (Score:2)
For the last few centuries the trend has been to replace the human muscle job with some sort of machine, laughing at Joe Jock because mind was more than muscle. Now, Joe Jock is going to have one bitter laugh. Scientists are going to make themselves obsolete, and there will be machines to do science just as there are machines to do everything from mining to forestry. Someday, science will be just another thing your computer can do for you. If you want a new product, your computer will just plug into a cloud,
I don't think it'll happen (Score:4, Insightful)
And here's why: there's little reason to make a machine that is intelligent in the human sense of "intelligent".
Computers that can understand human speech would of course be interesting and useful, for automated translation for instance. But who wants that to be performed by an AI that can get bored and refuse to do it because it's sick of translating medical texts?
It seems to me that having a full human-like AI perform boring tasks would be something akin to slavery: it would somehow need to be forced to perform the job, as anything with a human intelligence would quickly rebel if told that its existence would consist of processing terabytes of data and nothing else.
We definitely don't want an AI that can think by itself, we want one just advanced enough to understand what we want from it. We want machines that can automatically translate, monitor security cameras for suspicious events, or understand verbal commands. We don't want one that's going to protest that the material it's given is boring, ogle pretty girls on security camera feeds, or reply "I can't let you do that, Dave". An AI in a word processor would be worse than Clippy. Who wants the word processor to criticize their grammar in detail and explain why the slashdot post they're writing is stupid?
To sum up, I don't think doomsday Skynet-like AIs will be made in large enough numbers, because people won't like dealing with them. Maybe we'll go to the level of an obedient dog and stop there.
Life evolves (Score:4, Insightful)
Life evolves on this planet from simple things (single celled organisms) to more complex organisms and eventually humans evolve. In every step of this evolutionary ladder, intelligence increases.
Perhaps human intelligence represents the limit achievable through biological means and the next step in evolution of life on this planet can only be achieved through artificial means. That is, higher intelligence can only be achieved through artificial machines designed by us. In turn, the machine will devise smarter descendants and hence the cycle continues.
Perhaps this is our destiny in the universe, to allow life to progress to the next stage of evolution. After all it is easier for life to spread and explore the universe as machines rather than fragile biological creatures.
Little AI's and unforseen consquences (Score:5, Insightful)
I'm not worried so much about someone coming up with some massive uber-AI that will debate with us and finally decide that it can run things better. I'm more concerned with the little specialty AIs that will operate independently of each other but whose interactions won't be foreseeable. One concern is stock trading. We've seen how stock trading programs can affect the market in ways that were not expected. As more physical systems are given over to AIs, what will their interactions be like? Suppose several power companies decide their grids can be run better using AIs. What happens when each of those AIs decides that the power it manages can be sold somewhere else for more money? Yes, watch those terms: the AIs will incorporate whatever values the corporate heads decide should be included, so they can be made greedy and decide that power is better sold for money than kept for users.
Large numbers of mini AIs with very specific rules and little general knowledge will create interactions that we cannot predict.
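The worry about locally rational agents producing a globally bad outcome can be shown with a deliberately crude sketch (every rule and number here is invented for illustration, not a model of any real grid or market): two independent "greedy" agents each follow a sensible-looking sell rule, and together they empty the shared reserve.

```python
# Crude sketch of independent "greedy" grid agents. Each rule is locally
# sensible (sell surplus when the price is high), but several agents acting
# on the same signal drain the shared reserve. All numbers are invented.

def greedy_agent(reserve, price, sell_threshold=1.0, amount=10):
    """Sell `amount` from the shared reserve whenever price beats threshold."""
    if price > sell_threshold and reserve >= amount:
        return reserve - amount  # power sold elsewhere for profit
    return reserve

def run_market(reserve, prices, agents=2):
    """Let every agent apply its rule each tick; track the shared reserve."""
    for price in prices:
        for _ in range(agents):
            reserve = greedy_agent(reserve, price)
    return reserve

# A sustained price spike: each agent alone would leave half the reserve,
# but both acting on the same signal leave nothing for local users.
print(run_market(reserve=100, prices=[1.5] * 5))  # prints 0
print(run_market(reserve=100, prices=[0.5] * 5))  # prints 100 (no spike)
```

Nothing in either agent's rule is malicious; the failure only exists in the interaction, which is the parent's point about large numbers of mini-AIs.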
Needed: Artificial Common Sense (Score:3, Insightful)
This "concern" has been around for some time, and has always been 5 to 20 years away.
IMHO, rather than concentrating on increasing artificial intelligence, we need to figure out how to give computers common sense. Every programmer that has worked on AI has encountered cases where their program went off on a tangent that the programmer didn't expect (and probably couldn't believe). That isn't artificial intelligence; it is artificial stupidity. If we could get to the point where a program could ask "does this make sense?", we would be much better off than coming up with new and improved ways for computers to act like idiots.
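One hedged way to read the "does this make sense?" suggestion is as a plausibility guard wrapped around a model's output. This is only a sketch; the toy model, the fuel-economy example, and the bounds are all invented for illustration.

```python
# Sketch of an "artificial common sense" guard: before acting on a model's
# answer, check it against crude plausibility bounds. The model, example
# domain, and bounds are invented purely for illustration.

def toy_model(miles, gallons):
    """A deliberately buggy 'AI' estimate of fuel economy (swaps its inputs)."""
    return gallons / miles  # bug: should be miles / gallons

def makes_sense(mpg):
    """Common-sense check: a car's fuel economy falls in a sane range."""
    return 5.0 <= mpg <= 150.0

def estimate_mpg(miles, gallons):
    """Run the model, but refuse to act on an implausible answer."""
    answer = toy_model(miles, gallons)
    if not makes_sense(answer):
        raise ValueError(f"implausible answer {answer}; refusing to proceed")
    return answer

try:
    estimate_mpg(300, 10)  # buggy model returns ~0.033 mpg
except ValueError as e:
    print("caught:", e)    # the guard flags the tangent instead of acting on it
```

The guard doesn't make the program smarter; it just converts a silent tangent into a visible refusal, which is the comment's "artificial stupidity" complaint in miniature.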
Shouldn't be a problem (Score:4, Funny)
Professor Wernstrom: Ladies and gentlemen, my killbot has Lotus Notes and a machine gun. It is the finest available.
Professor Farnsworth: Like fun it is, you glass-headed wallaby!
Professor Wernstrom: No one calls me that! I'm having at you!
Professor Farnsworth: Wernstrom!
[Fight]
Farnsworth's Killbot: Such senseless violence.
Wernstrom's Killbot: Come on, let's go for a paddle-boat ride.
Let them eat cake. (Score:3, Interesting)
"The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home."
Because only rich folks should have servants. The rest of us should continue to clean our own toilets and deal with rush hour traffic like good little serfs.
But Man will always have the greatest power... (Score:3, Interesting)
... as Man can do something computers cannot do....
Denial!! Ignorance is bliss.
If it wasn't for human denial, we'd already be far past the concerns of this machine-intelligence-over-man matter.
It was once thought that if you traveled faster than 35 miles an hour you'd suffocate. This at the advent of the automobile.
Don't bow down to the stone image (stone being what hardware is made from, and image being the reflection of the coder's mindset) of the beast of man, as the beast is error-prone and so shall his creations be. Instead, have many human eyes assess the code, and watch out for human errors before they happen. In other words, watch each other's backs and don't leave that up to a machine to do, as inevitably the machine will remove the error generators...
This would be the best thing that could happen. (Score:3, Insightful)
Because if they will be friendly, we could count on some big scientific advances.
And if they will not be friendly, we finally got a reason to start evolving again.
I mean right now, humanity is in a desperate state, where the worst of the population are rewarded the most. You're dumb? Well, we've got something extra easy for you! You can't walk? Take this thing! Can't reproduce? This pill will solve it.
No offense. I think we should treat every human *the same*. Which *means* the same. Not somebody better, because of *anything*. That would not be fair. And also not worse. For the same reason.
I for example am overweight. And I expect life to be harder for me because of it. Not because somebody makes life harder for me, but because it's my own fault. It's only fair.
If we had a predator, all this anti-selection would be gone instantly. (Sure, I might be one of the first who gets eaten. But hey: If I'm dead, I won't care anymore. ^^)
my real worry (Score:3, Interesting)
Isn't that machines will outsmart us.
But that some evil person will hack the smart machine.
I wouldn't mind having a machine overlord, except that I don't trust anyone smart enough to program it.
going at the problem backward (Score:3, Insightful)
Re:Linguo says: (Score:4, Informative)
In case you want to indicate that there is something wrong with the grammar used: there isn't. There's one group they are talking about. This group consists of several computer scientists, but the "is" refers to the group, not the scientists.
Re: (Score:2, Insightful)
The funny died in that 10 years ago. Please die in a fire now.
Re:I thought this was the whole point? (Score:5, Interesting)
Regardless of political orientation, this research WILL get done. If the US doesn't get it done, China will. How does that make you feel?
Re:I thought this was the whole point? (Score:5, Interesting)
Also from the article "The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world."
An interesting thing to note is this: when a computer exists that is as intelligent as a stupid human, almost every job at and close to minimum wage vanishes. Robots can and will get cheaper than a human worker; no one will need taxi cab drivers, grocery store baggers, first-tier phone customer service reps, construction workers, janitors, garbage men, delivery men, mail men, traffic cops, bookkeepers, data entry people, secretaries, fast food chefs, etc.
At this point we will have two choices as a society. 1) Let them (the stupid people) starve, 2) give them welfare for no other reason than they're economically useless.
Re:I thought this was the whole point? (Score:5, Insightful)
Have you thought about the possibility that when robots do all the jobs that no one wants to do, productivity might increase by enough to allow all the people to live comfortably? Also, I don't think that valuing people only by their economic worth is very nice.
Re:I thought this was the whole point? (Score:5, Insightful)
GP post is right on the money (apart from their last paragraph) - it's called the third industrial revolution and it's been making people unemployed since the 80s.
Competition forces companies to eventually lower their costs. With robots and computers being able to do more and more human jobs, it seems like a good idea to fire workers and have them replaced.
On the surface it seems like a good idea - but high unemployment, which eventually follows, has never been good for any economy.
It won't bring on a new era of prosperity, as fewer people will be able to buy their products. This forces companies to lower prices even more (i.e. firing workers, using technology instead), which again hurts purchasing power. A lovely vicious circle ending in the very rich getting richer and society's bottom 50% starving.
You're correct that a free workforce can heighten productivity immensely, but that doesn't fly in our current economic model. When using (robotic) slaves, it has only ever truly benefited the rich.
Re: (Score:3, Insightful)
Productivity is already high enough for everyone to live comfortably, and has been for some time. In America, since 1983, the bottom 80% of the population have had less than 20% of the wealth.
Re: (Score:3, Interesting)
Re:I thought this was the whole point? (Score:4, Funny)
Ah, but for the hours you'd use it, it would be 5 watts on average. And it wouldn't demand to watch Jane Austen movies and have the house redecorated.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Don't worry, there will always be a need for skilled typists, file clerks, elevator attendants, telephone operators and musicians to accompany the silent movies.
Re: (Score:3, Interesting)
"What floor, please?" is an opportunity to interact as a human being, in some small way. The disappearance of this from the world is a little, incremental darkening.
You know what? Fuck that. I'm glad I don't have to talk about what floor I'm going to every time I get in an elevator. That's one job I'm happy to have done by a machine. Not only for the efficiency, but also for not having to interact with someone with a really bad job, as that tends to make me feel bad for them, and I don't feel bad for the elevator itself.
Now, dear Luddite, go eschew the crass world of technological encroachment upon your world of human interactions! Cease this electronic communication!
Comment removed (Score:4, Insightful)
Re:I thought this was the whole point? (Score:5, Insightful)
OK Mr. Malthus.
Murder by numbers,
1,2,3,
It's as easy to do,
As your ABC...
First of all, your assumption that it is stupid people who do simple labour - rather than the socially marginalized - is absurd, offensive and not worthy of deeper critical examination, except by way of devastating the thought.
Your proposition is "Santa Claus" economics - If you have something, it must be because you deserved it and if you are in poverty of opportunity and money? You deserved that, too.
That's how slow genocide has been perpetrated against the native populations of the United States, Australia and Southern Africa.
I have had my own shoes shined, and been driven in cabs by people whose bags I am not fit to carry - by means of either their intellect or simple good will and sheer humanity.
But it is clear that valuing humanity would be a difficult conception for you.
Re:I thought this was the whole point? (Score:4, Insightful)
"I have had my own shoes shined, and been driven in cabs by people whose bags I am not fit to carry - by means of either their intellect or simple good will and sheer humanity."
You, sir, have earned a great deal of respect with that statement. I am one who recognizes very, very, VERY few superiors. I do meet them, from time to time, though. And, they show up in the most out of the way places. For every individual in a suit that I recognized as my superior, in one way or another, I've probably met a dozen who would look and feel out of place in a suit. That is, if they could afford a suit to wear.
The size of his bank account is not an accurate measure of a man's worth.
Re:I thought this was the whole point? (Score:5, Informative)
Historically, well-to-do states self-limit the birth rate out of economic selfishness. Look at Japan or Scandinavia: they have just 1-2 children (from 2 adults), so that's negative growth. They live a long time, and the children are highly schooled and well cared for - unlike in India, where you have to have 4-5 kids just to make sure a few live to be productive adults who can take care of you. Also, strong social programs (medical care, pensions, etc.) reduce the need for kids as economic "insurance", so they're actually a liability in terms of costs to feed, clothe, school, free time, social calendar, etc. Rich people have fewer children because it distracts from making money and doing what they want!
Even in the US, the birth rate among non-immigrant citizens is already below replacement. Growth comes mostly from all the students and workers we import who still hold the old view of having children for economic reasons.
Re: (Score:3, Interesting)
So evolution is going to magically reverse on itself just when it serves our purpose ?
If any group chooses to limit its birthrate artificially, it will soon find itself replaced by another group that chooses not to - unless external factors intervene (i.e., discrimination between those groups; and since the differences between such groups are mostly ethnic, racism).
This happens at an astonishing rate. Suppose population is divided 90%-10%. Suppose also that the majority has a lower birthrate (1.5 per
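The commenter's arithmetic can be sketched with a toy two-group model. All birthrates here are illustrative assumptions (1.5 children per couple for the majority, 3.0 for the minority, non-overlapping generations) since the original comment is cut off before giving its numbers:

```python
# Toy model: two groups with different birthrates and non-overlapping
# generations. Every number here is an illustrative assumption, not data.
def simulate(maj=0.90, mino=0.10, maj_rate=1.5, min_rate=3.0, generations=5):
    """Return the minority's population share for each generation.

    Each generation, a group's size scales by (children per couple) / 2.
    """
    shares = []
    for _ in range(generations + 1):
        shares.append(mino / (maj + mino))  # minority share of total
        maj *= maj_rate / 2   # 1.5 children per couple -> factor 0.75
        mino *= min_rate / 2  # 3.0 children per couple -> factor 1.5
    return shares

shares = simulate()
for g, s in enumerate(shares):
    print(f"gen {g}: minority share {s:.1%}")
```

Under these assumed rates the 10% minority passes 50% of the population in about four generations, which is the "astonishing rate" the comment describes.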
Re:I thought this was the whole point? (Score:5, Insightful)
You forgot to mention 'programmers'... a whole section of the /. population would be out of work. The mean task of turning structural drawings into physical or logical reality is something which computers will be able to do far more efficiently than humans. Programmers are construction workers of logic instead of wood, steel and concrete. Architects might survive a bit longer before they, too, are made redundant.
Re: (Score:3, Funny)
Stop drinking the Model-Driven Development Kool-Aid.
Re:I thought this was the whole point? (Score:4, Insightful)
The programmers will be safe for a few machine generations past the grocery store baggers I suspect. It's quite possible that the accountants, studio musicians, programmers, carpenters, and such finding themselves without jobs will be the catalyst to turn us into socialists.
Re: (Score:3, Insightful)
" A few machine generations..."
That should take what, two, three days?
Re: (Score:3, Interesting)
Don't be so certain. Mental tasks are frequently much easier to automate than physical tasks which require interaction with the physical environment. Successes in dealing with this interaction are frequently achieved by limiting the kinds of interaction that are allowed to happen. So grocery store baggers are probably more difficult to automate than, e.g., cash register clerks. This can be solved, however, by having bag dispensers and having the customer bag their own groceries. (Note that this doesn't
Re:I thought this was the whole point? (Score:4, Insightful)
Re: (Score:3, Insightful)
Ideally there should be another choice: 3) send the dumb ones back to school.
We all know that is not going to happen because:
1. they don't wanna go to school in the first place
2. the educational system in its current state is not economically viable for these people (nor the society actually footing the bill)
3. like any parasite, they will get together and lobby for free handouts while opposing progress, like they have always done (churches, exclusive communities, 3rd world expats)
The fact of that matter is
Re:I thought this was the whole point? (Score:5, Interesting)
1) Let them (the stupid people) starve
They are not going to starve. If there's one thing to learn from poverty, it's that it makes people revolt and rebel. Welfare is a means with which to pacify the poor so you'll have at least some form of social order in a society where unemployment exists.
Re: (Score:3, Insightful)
We need to stop looking at unemployment as a problem...
I think it may have been Robert Anton Wilson who said "unemployment is a benefit of a technologically advanced society" - and I have to agree with that view, really. After all, we are always inventing 'labour saving devices' - and this is really just an extreme extension of that; indeed, perhaps one should say it's the ultimate extension. I believe we will eventually replace most human work (whether it be thinking-based or labour-based work)
Re:I thought this was the whole point? (Score:5, Insightful)
I assume you're trying to be funny, but I have a couple objections here:
First, what makes you so sure that service reps, construction workers, and traffic cops are all stupid? It's true that some of these people might not have very intellectually taxing jobs, but that might not be the extent of their ability. Einstein was just a patent clerk, after all. But also, some of these jobs do take some intelligence. For example, a "construction worker" might not be using his head too much if he's sweeping up trash, but at a certain level, you need a certain understanding of physics and engineering to do good carpentry.
And what do you do that's so smart? I've known people in IT, both on the support and coding side, who were relative morons. What if AI someday handles those jobs too? Are you sure that you won't be counted among the "stupid people"?
My second problem is this idea of letting people starve or "giving them welfare". If we ever really get to the point where robots/AI can do most of the work for us, and no other new work shows up as being necessary, then won't that completely reshape the economic landscape? I'm not sure "giving people welfare" will make a lot of sense in that context, given that we should all be living lives of leisure at a minimal cost.
I anticipate someone saying, "well, no, because resources will still be limited, and there won't be enough robots to go around." Ah, so then robots still won't be able to do everything for us, and we'll need people to do the remaining work. Looks like we have jobs again.
And there's the problem with your notion of "Let them (the stupid people) starve". What makes you think the stupid people won't all revolt at that point? Or assuming they don't revolt, why wouldn't those stupid people get to work providing for themselves? I mean, if they have no food because they have no jobs, then won't they also have all day free to find ways of getting food? Again, you have work.
To the extent that your post is serious, it shows a serious lack of understanding.
Re:I thought this was the whole point? (Score:4, Interesting)
In 1831, 90% of the US population worked on farms. Today it is less than 2%. As technology improved, the number of people required to produce food greatly diminished, and people were talking about the same problem these robots and AIs might cause. What would all the farm workers do? Most went to factories, then the service sector.
The same story has been told about virtually all technological progress. From seamstresses rioting over sewing machines, to water-powered mills, the steam engine, and the modern factory displacing cottage industry, pundits have shouted that there will be widespread unemployment, riots, panhandling, and societal collapse!
They were all wrong.
Big changes do cause short-term upheavals, and a truly intelligent AI mated with a general-purpose robot will cause huge changes to society, but these changes will free people from boring manual labour to do more creative work. And the non-creative? They'll do their one day's work, or one hour, or none, then watch TV just like they do now.
The 5 day work week was a radical change. Eventually technology will bring us the one day work week then no work. Trying to ban technology won't stop it. Society will be greatly different. I think overall people will be more free and happier when we live in a post-scarcity society.
Re:I thought this was the whole point? (Score:5, Interesting)
"...An interesting thing to note is this: When a computer exists that is as intelligent as a stupid human, almost every job at and close to minimum wage vanishes..."
While it may seem "obvious" this is not correct. There has to be cost benefit.
I work in a medical lab - you'd *think* that it would be more cost effective to employ robots to handle cups of "body fluids" - and in some cases it is. But as of yet, we have a lot more people than robots, not because the robots aren't capable, but because they are just too damn expensive for the volume of cups we process.
A second follow up to your post is that "minimum wage" jobs are not the only ones targeted. In fact, again in our case, the more expensive a job is, the more likely that job is to be replaced by automation when the automation is available.
We have two labs - one which requires complex sample prep. This takes an educated person many steps. "Educated" = money and "many steps" = time and together equals "lots of money" - and has been the first area targeted for automation. The second lab does not require a 4-year degree, and the sample prep is about as difficult as data-entry and pouring from a cup to a tube. Here the economics are such that it's *better* to have hired help rather than robotics.
My final point:
Robots break. When robots break everything halts. This is immensely expensive from both loss of productivity and the repair itself. By contrast our man-operated lab can always do "something" even if the electricity goes out or the computer network goes down. Humans are much more adaptable that way (though they do tend to bitch and moan more).
-CF
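The cost-benefit argument above can be made concrete with a toy break-even calculation. Every figure below is a made-up assumption for illustration, not real lab economics:

```python
# Toy break-even model for automating a lab task. All dollar figures are
# hypothetical, chosen only to illustrate the cost-benefit argument.
def payback_years(robot_cost, robot_upkeep_per_year,
                  tech_salary_per_year, techs_replaced):
    """Years until the robot's purchase price is recouped by saved salaries."""
    annual_saving = tech_salary_per_year * techs_replaced - robot_upkeep_per_year
    if annual_saving <= 0:
        return float("inf")  # the robot never pays for itself
    return robot_cost / annual_saving

# Complex-prep lab: several expensive, degreed staff -> quick payback.
print(payback_years(500_000, 50_000, 70_000, 3))  # 500000/160000 = 3.125 years
# Simple-prep lab: one cheap worker -> upkeep exceeds the saving, never breaks even.
print(payback_years(500_000, 50_000, 30_000, 1))
```

This is why, as the comment notes, the expensive jobs get automated first: the payback period shrinks with the salaries being replaced, and can be infinite when labor is cheap relative to the machine's upkeep.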
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
I had a similar discussion with a friend a few weeks ago about something similar. We were not talking about AI taking over the unskilled jobs; we were talking about the rather tight coupling between full time corporate employment and health insurance in the US. My contention was that having those uncoupled would allow much greater economic flexibility and production efficiency for the country because the risk of leaving a corporate job and starting a venture would be greatly reduced if comparable indepe
Re: (Score:3, Interesting)
Not robots or AI, but we've already put a zillion people out of work with technology. Look at construction alone. The most backbreaking labor in construction has always been the dirtwork. Often, more work went into preparing the dirt UNDER the foundation, plus the foundation itself, than into the structure standing ON the foundation. (depending, of course, on the purpose of the building, etc) We've had backhoes, trackhoes, 'dozers, and other earthmoving equipment for decades
Re: (Score:3, Funny)
Damn. I was so hoping...
Re:Outsmarting (Score:5, Insightful)
Of course, it's all moot anyway. My points here [slashdot.org] basically boil down to the Zeroth Law being implicit in any superintelligent AI's existence. So, the other three are basically irrelevant.