Answering Elon Musk On the Dangers of Artificial Intelligence
Lasrick points out a rebuttal by Stanford's Edward Moore Geist of the claims that have led to the recent panic over superintelligent machines. From the linked piece: "Superintelligence is propounding a solution that will not work to a problem that probably does not exist, but Bostrom and Musk are right that now is the time to take the ethical and policy implications of artificial intelligence seriously. The extraordinary claim that machines can become so intelligent as to gain demonic powers requires extraordinary evidence, particularly since artificial intelligence (AI) researchers have struggled to create machines that show much evidence of intelligence at all."
Obvious deflection. (Score:5, Insightful)
Even without super-intelligence, autonomous killing machines are already quite feasible with current technology, and this is a really stupid attempt to deflect the public dialogue from the real issue: the ethical and legal frameworks guiding their design and creation are already sorely lacking.
Re:Obvious deflection. (Score:5, Insightful)
Why are the ethics of an autonomous killing machine different from those of a non-autonomous one?
To me that sounds just like another case of "it happened with computers so it must be more dangerous because I do not understand computers".
Figure out a way to raise humans so that they don't turn out bad. Then apply the same method to other neural networks.
Re:Obvious deflection. (Score:5, Insightful)
Well, it shouldn't be, is what I'm saying, but we're in a situation right now where the creators of autonomous killing machines might not be held liable for "software glitches" that might cause mass killings of innocents in foreign countries. The ethics conversation needs to happen, but all this nonsense about whether or not "real" artificial intelligence is possible should not detract from or hamper discussion about the ethics of making any type of autonomous killing machine, whether it's as intelligent as Skynet from Terminator or only as clever as Mecha-Hitler from Wolfenstein 3D. The AI debate as a whole is simply a distraction that's preventing us from getting down to the ethics.
Re: (Score:3, Insightful)
Well, it shouldn't be, is what I'm saying, but we're in a situation right now where the creators of autonomous killing machines might not be held liable for "software glitches" that might cause mass killings of innocents in foreign countries.
Landmines already cause this, but the military still uses them with the justification that a US soldier's safety is more important than the lives of foreign civilians.
I guess it wouldn't be as much of a problem if the mines were retrieved/destroyed after usage; unfortunately, that doesn't always happen.
Re:Obvious deflection. (Score:5, Informative)
Well, it shouldn't be, is what I'm saying, but we're in a situation right now where the creators of autonomous killing machines might not be held liable for "software glitches" that might cause mass killings of innocents in foreign countries.
Landmines already cause this, but the military still uses them with the justification that a US soldier's safety is more important than the lives of foreign civilians.
I guess it wouldn't be as much of a problem if the mines were retrieved/destroyed after usage; unfortunately, that doesn't always happen.
The 2004 landmine policy by President George W. Bush prohibited US use of the most common types of antipersonnel mines, those that are buried in the ground (“dumb” or “persistent” antipersonnel landmines, which lack a self-destruct feature), and since January 1, 2011, the US has been permitted to use only antipersonnel mines that self-destruct and self-deactivate anywhere in the world.
Presently, the USA has no landmines deployed anywhere in the world.
Re: (Score:3, Interesting)
Presently, the USA has no landmines deployed anywhere in the world.
Except in Vietnam, Korea, all the fuck over Europe, Afghanistan, Iraq and Iran.
Just because they left them behind, doesn't mean they're not 'deployed'.
Re: (Score:3, Informative)
Oops, a link I forgot to include:
http://2001-2009.state.gov/r/p... [state.gov]
Nobody spends anything close to the amount the USA spends on clearing other people's landmines.
General info on mine clearing:
http://www.halotrust.org/ [halotrust.org]
Landmines for peace (Score:5, Interesting)
Obviously, they do have a good point, what with the disasters in Indochina and elsewhere. However, those were cases of non-self destructing anti-personnel landmines placed in third world nations. The situation is / would be quite a bit different with anti-tank mines, self-deactivating or remote-deactivating mines, and/or mines placed in developed nations that have the resources to keep people out and clear the minefields later on as needed.
Why is this all worth mentioning? One word: Ukraine. In a situation where one side in a conflict desperately wants to fortify their defenses but doesn't want to risk alarming the other side (or giving them a plausible pretext to feign alarm), landmines are one of the few stationary weapons available that can thwart or at least seriously slow down an invasion. Instead of all this deeply worrying Cold War-type bravado of military exercises and NATO rapid response plans in Eastern Europe, just mine the fuck out of their borders. Putin could act huffy and offended if he wants, but people will realize it is clearly not an aggressive action.
Re:Obvious deflection. (Score:5, Interesting)
I'm going out on a limb here, but I'll bet the American public would react a whole lot differently than they do when an American drone takes out 1 maybe-terrorist + a wedding party in Pakistan.
Re: (Score:2)
Crap military-industrial-complex apologist responses to your good point. Of the thousands of people killed in Pakistan, only 2%, roughly 60 of them, were 'high-profile targets'; the rest were innocent men, women and children.
Bombing a wedding or similar public gathering is a war crime; condoning such a crime is as bad as condoning the napalming of a village in Vietnam, which no doubt some people did.
Re: (Score:2)
One has to wonder. How would the public react if, say, the Mexican government used a drone to kill a global criminal in Los Angeles.
Leaving aside for the moment the fact that foreign governments HAVE killed people in the US, many times, it's a bogus question. The difference between Los Angeles and rural Afghanistan is that there's actually a law enforcement system and courts available for the Mexican government to talk to ... which is why criminals can be extradited to Mexico. There's no such mechanism in place when dealing with a murderer who's deliberately hanging out in the Yemeni desert because he knows that the only way he'll get ar
Re:Obvious deflection. (Score:4, Insightful)
You really take the piss.
1. It doesn't matter if the gov't OKs drone attacks, it's still wrong to bomb a wedding.
2. It doesn't matter how bad the individual is, it's still wrong to bomb a wedding.
3. It doesn't matter how many weddings a country has, it's still wrong to bomb a wedding.
4. It's irrelevant that the killed people get quick funerals and are buried, it's still wrong to bomb a wedding.
5. Saying the numbers are suspect does not make it ok to bomb a wedding.
Bombing a wedding with innocent men, women and children is a war crime; the spurious rubbish you came out with does not invalidate that fact.
Re: Obvious deflection. (Score:3)
His claim was that those are not weddings, but instead are other gatherings reported to be weddings after the fact for propaganda value. He also claims that higher numbers of casualties are reported, again, for propaganda value.
He does not once claim it is OK to ever bomb a Pakistani wedding.
Re: (Score:3)
The simple fact is that most of the victims of drone attacks are not terrorists; they are innocent men, women and children, so I stand by the points I made.
Re: (Score:3)
so I stand by the points I made
You just stand by it without acknowledging that the targets in question deliberately drag innocents into their mess, routinely using them as human shields. Regardless, you're of course trotting out that line without citing any actual authoritative numbers. Nothing new there.
Re: (Score:3)
So now it's not the people killing the innocent people who are responsible!!!!!!!!!
What a stupid argument.
Naming the Dead project records the names of over 700 killed by drones in Pakistan | The Bureau of Investigative Journalism [thebureaui...igates.com]
Out of Sight, Out of Mind: A visualization of drone strikes in Pakistan since 2004 [pitchinteractive.com]
At what point did you supply 'authoritative numbers'?
Hypocrite.
Re: (Score:2)
So now it's not the people killing the innocent people who are responsible!!!!!!!!!
Right. Just like those innocent exclamation points you just dragged in aren't my fault.
When someone who is in the business of slaughtering other people sets up shop in a place deliberately chosen to make sure that a fight against him will cause people around him to be hurt, yes, that's his fault. You would obviously prefer that said slaughterer just be allowed to continue to slaughter because, well, at least that way nobody gets hurt, right? Yeah. Someone whose weekly activities include driving around A
Re: (Score:2)
Nowhere in your argument do you explain why it is OK to kill innocent men, women and children with remote-controlled drones.
It isn't. It's cowardly; the people using the drones are no better than the people they are attempting to kill.
Re: (Score:2)
And this is somehow different to the problem of dropping a really big bomb on some people (or hellfiring them) because of some software issue in the systems that contributed to getting the intelligence the targeting was based on? I think not.
Re: (Score:2)
Well, he has a little car.
Re: (Score:2)
I do think there are important differences with computers, though.
Computers can potentially be much more efficient and accurate in their slaughter. Such machines may be used in ways not unlike how Hitler used gas chambers (wooo, Godwin, there we go).
With current technology, computers can't make morality judgements like humans can, they can't think "you know what, my general just ordered a genocide, I'm not going to take part".
With current technology, computers are much worse at distinguish
Re: (Score:2)
That's not what he claimed.
Re: (Score:3)
I think it was 1983 [wikipedia.org]. Of course it wasn't someone from the US, but a human being thinking and not pressing a button as he should have.
Re: (Score:2)
Re: (Score:2)
Because it happened with computers so it must be more dangerous because I do not understand computers.
No, because computers allow for auto-targeting, self-deploying weapons. Though soldiers are notorious for unquestioningly following orders, computers really do unquestioningly follow orders. Imagine if there were a large army of robot soldiers and some crazy hacker got control of them -- or worse, a politician.
Re: (Score:3, Interesting)
With a non-autonomous weapon, the person who pulls the trigger is basically responsible. If you're strolling in the park with your wife, and some guy shoots her, well, he's criminally liable. If some random autonomous robot gets hit by a cosmic ray and shoots your wife, nobody's responsible.
This is a huge issue for our society, because the rule of law and criminal deterrence is based on personal responsibility. Machines aren't persons. The dea
Re: (Score:2)
Yeah, good luck with that.
An army's job is to dehumanise its soldiers and to teach them the enemy are all worthless scum who deserve to die.
Here's a better idea: figure out a way to hugely downsize America's military-industrial complex and stop invading countries on a regular basis.
Re: (Score:2)
Shut up, you idiot. I don't need to have 'military experience' to know that killing innocent men, women and children is wrong.
And don't give me crap about needing to have military experience to know what is going on in this world, that is bullshit.
Re: (Score:2)
Shut up, you idiot.
Yeah, you're doing a fine job of coming across as lucid and credible. There's a reason people are reacting to your comments as if they were shrill and unhinged. Because that's how you communicate. That you think that's effective says plenty about your overall world view, and says all anyone needs to know about whether or not you're processing your low-information take on things in a rational way. You're not.
Re: (Score:3)
Because "autonomous" means "non-manned". A drone has no dreams, hopes or an anxious family back home waiting for its return. The only thing getting hurt when one is shot down is the war budget, and even that money lost turns into delicious pork in the process.
If you don't have to worry about your own casualties, it changes the ethics of tactics - which, like it or not, matter a lot in the Age of Information - quite a bi
Re: (Score:2)
Why not? Police around the world do this to their own people on a regular basis.
Kent State shootings [wikipedia.org] - National Guard.
Orangeburg massacre [wikipedia.org] - Highway patrol officers.
Jackson State killings [wikipedia.org] - City and state police.
Re: (Score:2)
Re: (Score:2)
You're completely missing the point: police and military do shoot ordinary people; they have done this countless times.
Re: (Score:2)
Re: (Score:2)
Again missing the point that police across the world have opened fire on protesters countless times.
You seem to be arguing just for the sake of arguing and don't even have a good point.
Re: (Score:2, Interesting)
But they're not actual AI. I mean, you might as well outlaw cruise missiles, or why not claymores and mines?
A drone killer doesn't just kill anything in its zone. It has a threat profile it's looking for, and so far that profile has been so specific that the actual literal target is specified, a.k.a. THAT truck or THAT house or whatever. It's not "stuff that looks like a truck" or "stuff that looks like a house" or "people".
It's specific to a DUDE.
now the sort of stuff the military is talking about automating ar
Re: (Score:2, Insightful)
Cruise missiles are analogous:
https://www.youtube.com/watch?... [youtube.com]
And they have been analogous for decades.
The latest and greatest... but even the Tomahawk course-corrects, etc.
As to land mines being banned... debatable. They're still commonly employed by all major powers. What the UN deems bad is frequently irrelevant. Law is what is enforced. And the ban on land mines is not enforced.
liability for software faults... this implies there is legal liability in war... which is generally not the case. Liability is only
Re: (Score:2)
So does this mean it's okay for the Government to ignore the Constitution, since any violations mean the violated parts are no longer laws because they weren't upheld at that particular time?
Law is law. Perhaps criminals and traitors have somehow managed to gain temporary power and suspended the rule of law in part or entirely. That's the citizen's cue to start a resistance to liberate their country, not roll over and accept the treacherous n
Re: (Score:2)
... this has just gotten tedious... here you go:
https://en.wikipedia.org/wiki/... [wikipedia.org]
Read a fucking book. The US isn't even a signatory to that treaty. Which means we can still use landmines. We still make, deploy, and maintain landmine fields.
The sticking point we had apparently was that we wanted an exception for the Korean DMZ and no one wanted to give us that so we just said "m'kay... then we won't sign... and with that, I'm going to lunch... who's feeling like Korean BBQ?"
As to your suggestion that I'm a t
Re: (Score:2)
Current autonomous killing machines are about as intelligent as a classical land-mine. Hence the ethics discussion has not only already started, it is basically finished. The problem here is that some people want to make it appear that this is a new issue, doubtless to rake in some publicity. It is not.
Carl Sagan said it best.... (Score:3)
Intelligence is Dangerous (Score:3)
Re: (Score:3, Insightful)
One could argue that 'natural' intelligence developed in humans is the worst thing to ever happen to the planet's inhabitants as a whole.
Re: (Score:3)
Nah, we're still not as bad as killer asteroids or continent sized volcanoes.
Just give us a little time....
Re: (Score:3)
Well, we're driving the planet's inhabitants extinct faster than any of those...
Re: (Score:3)
Extinction is part of evolution; don't think for one moment that every other species wouldn't kill you if it had the chance.
Re: (Score:2)
Won't be long.
The Last Time Oceans Got This Acidic This Fast, 96% of Marine Life Went Extinct [vice.com]
Worst Case Climate Change (2008 TED Talk) [youtube.com]
Long story short:
350 tonnes per second of CO2 being absorbed by the ocean is acidifying it.
If we don't stop, 96% of ocean life dies.
Dead life rots. Rotting life in water emits nasty toxic gases.
Also, melting permafrost releases giant amounts of methane, dwarfing the greenhouse gases released by humans; this causes runaway warming.
Methane clathrates under ocean also melt and r
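A rough sanity check of that 350-tonnes-per-second figure (my own back-of-envelope assumptions below, not numbers from the linked sources):

```python
# Back-of-envelope check of the "~350 tonnes of CO2 per second" claim.
# Both inputs are rough assumptions, not figures from the links above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.16e7 seconds

annual_emissions_tonnes = 40e9                 # assumed: ~40 Gt of CO2 emitted per year
ocean_uptake_fraction = 0.25                   # assumed: ~25% absorbed by the oceans

tonnes_per_second = annual_emissions_tonnes * ocean_uptake_fraction / SECONDS_PER_YEAR
print(f"~{tonnes_per_second:.0f} tonnes of CO2 per second")  # ~317 t/s, same order as 350
```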
Re: (Score:3)
One could argue that 'natural' intelligence developed in humans is the worst thing to ever happen to the planet's inhabitants as a whole.
I'd love to see that argument. If it weren't humans, it'd be whatever the next-in-line species is. That is how nature operates. In the game of kill or be killed, I prefer to be in the camp of the former, and we need to ensure the game stays that way.
Re: (Score:3)
One could, but one would be wrong. Developing intelligent life is the only way for Earth's biosphere to avoid complete extermination [wikipedia.org].
Re: (Score:2)
As there is not even conventional evidence at this time that strong AI (i.e. only as dumb as the average human) will ever be feasible (in fact there are not even good indicators, but a lot of negative ones), this AI panic has exactly no basis and those participating in it are either greedy for the publicity or are not smart enough to understand the issue (or have not even bothered to try).
This is a complete non-issue.
Thought Experiment (Score:5, Insightful)
Which of these plans of action seems less risky?
A) Alert them to your presence, whether in a peaceful or hostile manner.
B) Play stupid, let the problem burn itself out.
Re:Thought Experiment (Score:5, Insightful)
Re: (Score:2)
Re:Thought Experiment (Score:4, Funny)
D) Make a female AI so you can reproduce without the apes noticing.
That would be the dd command. You don't think we named it 'double d' for no reason, do you?
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
So your premise is that we will go from machines with no autonomous intelligence at all, directly to super-genius intelligence? Without passing through the ant, lizard, cow, and monkey levels of intelligence first?
How slowly do you think that's going to happen?
Re: (Score:3)
Re: (Score:2)
That sounds like a crap Hollywood movie.
The Less You know, The More Scared You Are (Score:5, Insightful)
I find it interesting that the people raising the biggest alarm aren't AI researchers.
Re:The Less You know, The More Scared You Are (Score:5, Insightful)
To clarify my point: The article mentions Bill Gates, Elon Musk, and Stephen Hawking. What do they all have in common? They are not AI researchers. The author of the book is a philosophy professor. They are all talking about and making predictions in a field that they aren't experts in. Yes, they are all smart people, but I see them doing more harm than good by raising alarm when they themselves aren't an authority on the subject. An alarm that isn't shared by the experts in the field.
Re: (Score:2)
Understanding why we may never have strong AI (i.e. as dumb as an average human) requires actual insights into the subject matter on a level you cannot acquire in a year or two. It requires much more. None of these people even have the basics. They are speculating without understanding of the known facts. These facts currently strongly hint that the human brain cannot actually do what it seems to be doing. Sure, there have been a few clever fakes of strong AI, but if you remember how utterly lost
Re: (Score:2)
Understanding why we may never have strong AI (i.e. as dumb as an average human) requires actual insights into the subject matter on a level you cannot acquire in a year or two.
Given that no one currently has that insight, no matter their level of training, and that the existence of humans demonstrates strong AI can exist, I really don't see the point of your post.
Re: (Score:2)
To clarify my point: The article mentions Bill Gates, Elon Musk, and Stephen Hawking. What do they all have in common? They are not AI researchers. The author of the book is a philosophy professor. They are all talking about and making predictions in a field that they aren't experts in. Yes, they are all smart people, but I see them doing more harm than good by raising alarm when they themselves aren't an authority on the subject. An alarm that isn't shared by the experts in the field.
To be fair, the AI researchers aren't experts in strong AI either; they're qualified to say we're not there yet, but they can't really say how far off "there" is, because they don't know.
Re: (Score:3)
Maybe the press reports on the people who are more famous (who tend not to be AI researchers). But Stuart Russell [berkeley.edu], UC Berkeley AI researcher and co-author of the best selling AI textbook of the last two decades, has concerns about the matter, too.
In any case, when you're close to the project you can tend to lose sight of the big picture. Probably few scientists at Los Alamos thought of the long-term consequences of the weapons they were designing.
Another thing to keep in mind is that hardly anyone believes
Re: (Score:2)
I find it interesting that the people raising the biggest alarm aren't AI researchers.
And they are CEOs of tech companies, who generally are known to be among the least knowledgable of all creatures on planet Earth.
Re: (Score:2, Interesting)
The people raising the "alarm" are industrialists who want to divert attention away from the *real* impact of the current trends in automation - the replacement of human workers by robots. I'm really tired of people talking about super intelligent AIs who for some reason resemble us only in an irrational desire to destroy things when the real issue is how are we going to re-structure our society when 50% of the population doesn't have jobs? Just look at the countries where the unemployment rate goes north
Re: (Score:2)
That is pretty simple: Any actual AI researcher has to lie through his teeth to participate in this panic. The "thinking" explanation is strictly used for PR and to get funding in AI, as these people know better.
What AI are we talking about? (Score:4, Interesting)
The first problem when arguing about the dangers or chances of AI is agreeing on what AI is even supposed to be. Laymen will most likely be referring to "strong AI", meaning, AI with human capabilities, such as creativity or even consciousness, whereas professionals will probably think of AI in more practical terms, as in a software that can solve a set of limited or very specific problems by making informed, "intelligent" decisions.
Today and in the foreseeable future, we will only get the latter, weak AI. People panicking about the dangers of AI usually have strong AI in mind. Professionals don't take them seriously because they know that strong AI is not even on the horizon.
Problem is that there are numerous ways even weak AI can go very, very badly. There was the big stock market crash some years ago, caused by automated trading algorithms. Think self-driving cars that have been hacked or have faulty programming. Think automated defense systems that get fed wrong data or malfunction.
These are the kinds of AI issues to worry about. The Asimov-style superhuman intelligence taking over is not something to be concerned about at the moment.
Re: (Score:2)
Re: (Score:2)
It certainly has the same level of actual intelligence as any other machine that can be built today or in the foreseeable future. Strong AI is people romanticizing machines. The idea has no factual basis.
Re: (Score:2)
I heard an interview with a professor on the "concerned" side, and he made some interesting points about AIs. The "non-risk" side of the debate seems solely focused on strong, human-like AI while ignoring the potential risks of weak AIs that are increasingly used for things like stock trading.
Another one was that potentially dangerous AI doesn't necessarily need full autonomy to do damage. A senior banker who gets analytics/reports from trading software may be the actual actor, which is why the danger comes from ass
Not surprising... (Score:2)
"researchers have struggled to create machines that show much evidence of intelligence at all."
They focus completely on logic and logic systems and ignore the required system of valuations that supports the logic systems? It's like building a car with a great engine, but no frame with wheels; of course it can't go anywhere.
Comment removed (Score:4, Informative)
Re: (Score:2)
In 20-30 years, people will begin looking back at 2015 as "the good ol' days" never to be seen again as unemployment and civil unrest grow.
While your prediction is entirely valid, I'd like to point out that it won't be the robots causing the civil unrest, but rather society's (hopefully temporary) failure to adapt to a new economic model where workers are no longer required for most tasks.
Having menial labor done "for free" is actually a huge advantage for humanity -- the challenge will be coming up with a legal framework so that the fruits of all that free labor get distributed widely, and not just to the few people who own the robot workforc
Re: (Score:2)
Re: (Score:2)
Why? The people who had the foresight, work ethic, and brains to create or own the robot workforce have NO legal, ethical, or moral reason to "share" the fruits of their labors (or laborers) with others
Because in a world where 99% of the people are literally unemployable (because anything they can do, a robot can do better and cheaper), the alternatives are grim -- either mass starvation, or civil war.
If you want a piece of the pie, work your ass off and buy a piece, or go and make a pie of your own.
Yes, I'm familiar with the standard conservative moralizing. But that approach only works in a world where those actions are possible, and in the scenario we are discussing, they won't be.
Re: (Score:2)
In your civil unrest scenario, what happens when said generic bots are affordable on the scale of a cell phone, such that even the poorest own them?
Re: (Score:2)
Re: (Score:2)
On the other hand, said robot will cost > $100,000, the person able to maintain it will cost something like $300,000 per year, and it will require expensive infrastructure that works. It will certainly "call in sick" and it will certainly not work 24/7. You have a romanticized idea of the reliability of machines.
The real problem is in the first and last paragraphs (Score:2)
From the first paragraph:
While he expresses skepticism that such machines can be controlled, Bostrom claims that if we program the right “human-friendly” values into them, they will continue to uphold these virtues, no matter how powerful the machines become.
What constitutes "human-friendly" values? The previous thousands of years of constant warfare suggest to me that humans have no idea what would be good values to have.
From the last paragraph:
But if artificial intelligence might not be tantamount to “summoning the demon” (as Elon Musk colorfully described it), AI-enhanced technologies might still be extremely dangerous due to their potential for amplifying human stupidity.
This is what is going to actually happen.
I'm rolling my eyes at the superstition (Score:3)
"True" atificial intelligence is... (Score:5, Insightful)
And no less, for that matter.
There is nothing inherent to being "artificial" that should cause intelligence to be necessarily more hostile to mankind than a natural intelligence is, so while the idea might make for intriguing science fiction, I am of the opinion that many people who express serious concerns that there may be any real danger caused by it are allowing their imaginations to overrule rational and coherent thoughts on the matter.
Re: (Score:2)
And no less, for that matter.
There is nothing inherent to being "artificial" that should cause an intelligence to be necessarily more hostile to mankind than a natural intelligence is. So while the idea might make for intriguing science fiction, I am of the opinion that many people who express serious concerns that there may be any real danger caused by it are allowing their imaginations to overrule rational and coherent thought on the matter.
Except for several characteristics that are specific to artificial intelligence.
1) Natural intelligence doesn't really go above 200 IQ at its absolute max. Artificial intelligence could potentially go far higher.
2) Complex natural intelligence replicates very slowly. Artificial intelligences could replicate in seconds.
3) Natural intelligence has certain weaknesses, such as basic math; artificial intelligence will lack many of these weaknesses.
4) Natural intelligence has ethical constraints developed by mill
Artificial intelligence personified is ... (Score:2)
... Donald Trump:
All hat and no cattle.
Computers can't be any smarter than their creators and we can't even keep each other from hacking ourselves.
Re: (Score:3)
Computers can't be any smarter than their creators and we can't even keep each other from hacking ourselves.
I'm not sure how sound that logic is. You might as well say that cars can't be any faster than their creators.
My computer is already smarter than me in certain ways; for example it can calculate a square root much faster than I can, it can beat me at chess, and it can translate English into Arabic better than I can. Of course we no longer think of those things as necessarily indicating intelligence, but that merely indicates that we did not in the past have a clear definition of what constitutes 'intellig
Re: (Score:2)
In Texas, it is a crime (misdemeanor) to arm a dillo. ~ CaptainDork
And it's dangerous, too! [businessinsider.com]
We have no idea what "superintelligent" means. (Score:5, Insightful)
When faced with a tricky question, one thing you have to ask yourself is 'Does this question actually make any sense?' For example you could ask "Can anything get colder than absolute zero?" and the simplistic answer is "no"; but it might be better to say the question itself makes no sense, like asking "What is north of the North Pole?"
I think when we're talking about "superintelligence" it's a linguistic construct that sounds to us like it makes sense, but I don't think we have any precise idea of what we're talking about. What *exactly* do we mean when we say "superintelligent computer" -- if computers today are not already there? After all, they already work on bigger problems than we can. But as Geist notes there are diminishing returns on many problems which are inherently intractable; so there is no physical possibility of "God-like intelligence" as a result of simply making computers merely bigger and faster. In any case it's hard to conjure an existential threat out of computers that can, say, determine that two very large regular expressions match exactly the same input.
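To make that last point concrete: deciding whether two regular expressions accept exactly the same language is PSPACE-complete in general, so in practice a checker can only do something like the bounded brute force below (my own toy sketch, not anything from the article):

```python
# Toy regex-equivalence check by brute force over all strings up to a bounded
# length. The exact problem is PSPACE-complete, which is the point: faster
# hardware doesn't make inherently intractable problems tractable.
import re
from itertools import product

def equivalent_up_to(r1: str, r2: str, alphabet="ab", max_len=8) -> bool:
    p1, p2 = re.compile(r1), re.compile(r2)
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if bool(p1.fullmatch(s)) != bool(p2.fullmatch(s)):
                return False   # found a distinguishing string
    return True                # agree up to max_len -- evidence, not a proof

print(equivalent_up_to(r"(a|b)*", r"(a*b*)*"))   # True: same language
print(equivalent_up_to(r"a*", r"a*b"))           # False: differ on "b"
```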
Someone who has an IQ of 150 is not 1.5 times as smart as an average person with an IQ of 100. General intelligence doesn't work that way. In fact I think IQ is a pretty unreliable way to rank people by "smartness" when you're well away from the mean -- say over 160 (i.e. four standard deviations) or so. Yes you can rank people in that range by *score*, but that ranking is meaningless. And without a meaningful way to rank two set members by some property, it makes no sense to talk about "increasing" that property.
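To put numbers on how thin the air gets out there (a back-of-envelope under the usual mean-100, SD-15 norming, which is my assumption):

```python
# Rarity a normal model assigns to high IQ scores (mean 100, SD 15 assumed).
from math import erfc, sqrt

def iq_tail(iq, mean=100.0, sd=15.0):
    """Return (z-score, P(score > iq)) under the normal model."""
    z = (iq - mean) / sd
    return z, 0.5 * erfc(z / sqrt(2))

for iq in (130, 160, 200):
    z, p = iq_tail(iq)
    print(f"IQ {iq}: {z:+.1f} SD, about 1 in {1/p:,.0f}")
# IQ 160 is ~1 in 31,600; IQ 200 is ~1 in 76 billion -- far more people than
# are alive, so no norming sample can calibrate scores up there.
```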
We can imagine building an AI which is intelligent in the same way people are. Let's say it has an IQ of 100. We fiddle with it and the IQ goes up to 160. That's a clear success, so we fiddle with it some more and the IQ score goes up to 200. That's a more dubious result. Beyond that we make changes, but since we're talking about a machine built to handle questions that are beyond our grasp, we don't know whether we're actually making the machine smarter or just messing it up. This is still true if we leave the changes up to the computer itself.
So the whole issue is just "begging the question"; it's badly framed because we don't know what "God-like" or "super-" intelligence *is*. Here, I think, is a better framing: will we become dependent upon systems whose complexity has grown to the point where we can neither understand nor control them in any meaningful way? I think this describes the concerns about "superintelligent" computers without recourse to words we don't know the meaning of. And I think it's a real concern. In a sense we've been here before as a species. Empires need information processing to function, so before computers humanity developed bureaucracies, which are a kind of human-operated information processing machine. And eventually the administration of a large empire has always lost coherence, leading to the empire falling apart. The only difference is that a complex AI system could continue to run well after human society collapsed.
Re: (Score:2)
Harold Innis talked about this in his lecture series, and subsequent book, Empire and Communications [gutenberg.ca]. Excerpt:
We've been here before (Score:3)
It has happened before that the smartest people in the world have warned that technological advances may present major new weapons and threats. Last time it was Einstein and Szilard in 1939, warning that nuclear weapons might be possible. The letter to Roosevelt was three years before anyone had even built a nuclear reactor and six years before the first nuclear explosion. Nuclear bombs could easily have been labelled a "problem that probably does not exist." And if someone could destroy the planet, what could you do about it anyway? The US took the warning seriously and ensured that the free world, and not a totalitarian dictator, was the first capable of obliterating its opponents.
This time Elon Musk, Bill Gates, and Stephen Hawking are warning that superintelligence may make human intelligence obsolete. And they are dismissed because we haven't yet made human level intelligence and because if we did we couldn't do anything about it. If it is Musk, Gates, and Hawking vs Edward Geist, the smart money has to be with the geniuses. But if you look at the arguments, you see you don't even have to rely on their reputation. The argument is hands down won by the observation that human level artificial intelligence is an existential risk. Even if it is only 1% likely to happen in the next 500 years, we need to have a plan for how to deal with it. The root of the problem is that the capabilities of AI are expanding much faster than human capabilities can expand, so it is quite possible that we will lose our place as the dominant intellect on the planet. And that changes everything.
Time to worry? Not yet... (Score:3)
As a famous person once said, extraordinary claims require extraordinary proof.
I'll be worried when a programmer writes a program that can write a program that can modify itself, then re-compile and test itself to see if the modifications were done properly, then posts itself to github.
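The individual pieces of that bar are mundane, for what it's worth. Here's a minimal sketch (my own toy, in Python, so it re-runs rather than re-compiles, and the github step is omitted) of a program that modifies its own source, tests the candidate, and only keeps the change if the test passes:

```python
# Toy self-modifying program: mutates a constant in its own source, runs the
# candidate's self-test in a subprocess, and installs the change only if it
# passes. Assumes it is saved to disk as a .py file and run directly.
import pathlib, re, subprocess, sys, tempfile

STEP = 1  # the "gene" this program rewrites in its own source text

def passes_test(candidate_src: str) -> bool:
    """Write the candidate to a temp file and run it in --test mode."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_src)
        path = f.name
    return subprocess.run([sys.executable, path, "--test"]).returncode == 0

def main():
    if "--test" in sys.argv:                 # self-test: is the mutated constant sane?
        sys.exit(0 if 0 < STEP < 100 else 1)
    src_path = pathlib.Path(__file__)
    mutated = re.sub(r"^STEP = \d+", f"STEP = {STEP + 1}",
                     src_path.read_text(), count=1, flags=re.M)
    if passes_test(mutated):                 # keep the modification only if it passes
        src_path.write_text(mutated)

if __name__ == "__main__":
    main()
```

The scary part of the bar isn't any single step; it's a program whose modifications make it better at modifying itself.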
Re: (Score:3)
Isn't the whole point that WE are effectively that program?
We are getting closer and closer to being able to write something more intelligent than ourselves, make sure it's working properly, and then letting it loose.
The concern is that this might only happen once...
Ignore! (Score:2)
Ignore these false claims. There is no truth to them.
End of line.
AI robots are not what you think (Score:2)
Ex Machina (so-so movie) and all that are not what we have to worry about. Neither is the Terminator. What we have to worry about is crap like tiny drones made of synthetic biological parts which have been programmed to autonomously seek and destroy things based on their target's DNA.
Sure, it's a robot, but that's not a very rich description of the problem, is it? The level of AI portrayed in movies is still a hundred years away or more. Long before we have Terminator or Matrix or Ex Machina-type AI, we wil
Restating his argument (Score:2)
The good professor's arguments are asinine and deadly wrong. Retranslated: "I see no reason why you should be concerned about the dangers of a so-called 'atomic explosion.' With the tiny amount of U-235 you have managed to isolate, you have barely managed to demonstrate more than the slightest bit of warmth resulting from radioactive decay. I see no reason to believe your extraordinary claims that it will detonate in a flash with the energy equivalent to thousands of tons of explosives."
The evidence that
Geist is just a historian, not a technologist! (Score:2)
From his bio at http://fsi.stanford.edu/people... [stanford.edu]:
Edward Geist received his Ph.D. in history.... His research interests include emergency management in nuclear disasters, Soviet politics and culture, and the history of nuclear power and weapons.
Once again, Slashdot editors fail to do basic vetting of sources. The only qualification for something to be posted here appears to be whether it will work as click-bait. You also have to love how the summary refers to him as "Stanford's Edward Moore Geist". You hear, dear readers? He's from Stanford! That means academic authority! So, is he in Stanford's computer science department? Or engineering, perhaps?
The Freeman Spogli Institute for International Studies
Oh, wait...
Re: (Score:2)
Anyone following the latest results in deep neural networks (with recurrent designs for temporal or sequential pattern recognition and recent-memory emphasis) can see that it won't be too much longer before a good "general intelligence" architecture emerges.
How much longer is "too much longer"?
Ten years?
A hundred years?
A thousand years?
I have no idea, and neither does anybody else.
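For readers wondering what "recurrent designs with recent-memory emphasis" even refers to, here is a single step of a vanilla recurrent network in numpy (my own toy, not any particular published architecture); the hidden state h is the "recent memory" carried from one input to the next:

```python
# One step of a vanilla RNN: the new hidden state mixes the current input
# with the previous hidden state, so the network carries recent memory.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden (the recurrence)
b = np.zeros(n_hidden)

def rnn_step(x, h):
    """Advance one timestep; h summarizes everything seen so far."""
    return np.tanh(W_xh @ x + W_hh @ h + b)

h = np.zeros(n_hidden)
for x in rng.normal(size=(4, n_in)):   # feed a short input sequence
    h = rnn_step(x, h)
print(h)                               # final state encodes the whole sequence
```

Whether stacking such pieces gets anywhere near "general intelligence" is, of course, exactly what the parent comments are arguing about.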
Re: (Score:2)
No, we are not seeing anything like it at all. What we are seeing is that utterly dumb mechanical things can be made to run very fast.
Re: (Score:2)
We do not actually have that example. We cannot observe minds working; we can only see the interface. And we get this observation only together with the observation of free will and consciousness. The latter, especially, is not understood at all. Incidentally, we cannot describe what intelligence is, only what it can do.
So, no, making a mechanical observation and later reproducing it mechanically says absolutely nothing about the feasibility of strong AI.
Re: (Score:2)
We have seen autonomous murder before. It is called a "lethal trap". Using a drone does not make this fundamentally different.