Why Robots Will Not Be Smarter Than Humans By 2029
Hallie Siegel writes "Robotics expert Alan Winfield offers a sobering counterpoint to Ray Kurzweil's recent claim that 2029 will be the year that robots will surpass humans. From the article: 'It’s not just that building robots as smart as humans is a very hard problem. We have only recently started to understand how hard it is well enough to know that whole new theories ... will be needed, as well as new engineering paradigms. Even if we had solved these problems and a present day Noonian Soong had already built a robot with the potential for human equivalent intelligence – it still might not have enough time to develop adult-equivalent intelligence by 2029'"
Kurzweil is an idiot with Super Powers (Score:4, Funny)
Kurzweil's predictive powers are so incredibly wrong that he could literally destroy the world by making a mundane prediction that then couldn't come true.
For example, if Kurzweil foolishly predicted that the sun would come up tomorrow, the earth would probably careen right out of its orbit.
Re:Kurzweil is an idiot with Super Powers (Score:5, Insightful)
There are two schools of thought on this:
There are those who think Kurzweil is a crazy dreamer and declare his ideas bunk.
There are those who think Kurzweil is a smart guy who's been right about a fair number of things, but take his predictions with a grain of salt.
There doesn't seem to be a lot in the middle.
[You can score me in the second camp, FWIW.]
Re:Kurzweil is an idiot with Super Powers (Score:5, Insightful)
Actually, your second point IS the middle. The logical third point would be, there are those who think Kurzweil is a genius and is spot on about the future.
Re:Kurzweil is an idiot with Super Powers (Score:4, Informative)
...while there are certainly some Kurzweil nuthugging fanbois out there, they don't seem to exist in any vast number.
While those who have opinions of Kurzweil probably span the spectrum, it seems that there's a bunch of level-headed folk who think Kurzweil is a smart guy with some interesting thoughts about the future, and on the other side, there's an angry mob throwing rotten fruit shouting "Your ideas are bad, and you should feel bad about it!"
AI and the prevalence of bombast (Score:5, Insightful)
o we don't know what "thinking" is -- at all -- not even vaguely. Or consciousness.
o so we don't know how "hard" these things are
o and we don't know if we'll need new theories
o and we don't know if we'll need new engineering paradigms
o so Alan Winfield is simply hand-waving
o all we actually know is that we've not yet figured it out, or, if someone has, they're not talking about it
o at this point, the truth is that all bets are off and any road may potentially, eventually, lead to AI.
Just as a cautionary tale, recall (or look up) Perceptrons, the book by Minsky and Papert on perceptrons (simple models of neurons which, in groups, form neural networks). Regarded as authoritative at the time, it put forth the idea that perceptrons had very specific limits and were pretty much a dead end. Minsky was completely, totally wrong in his conclusion, essentially because he failed to consider what they could do when layered, which is a lot more than he laid out. His work set NN research back quite a bit because it was taken as authoritative, when it was actually short-sighted and misleading.
What we actually know about something is only clear once the dust settles and we --- wait for it --- actually know about it. Right now, we hardly know a thing. So when someone starts pontificating about dates and limits and what "doesn't work" or "does work", just laugh and tell 'em to come back when they've got actual results. This is highly distinct from statements like "I've got an idea I think may have potential", which are interesting and wholly appropriate at this juncture.
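To make the layering point concrete, here's a minimal sketch (function names and hand-picked weights are mine, purely illustrative): a single-layer perceptron computes step(w·x + b), a linear separator, and XOR is famously not linearly separable, so no single perceptron can compute it. One hidden layer fixes that.

```python
# A single-layer perceptron is a linear separator: step(w.x + b).
# XOR is not linearly separable, so no single perceptron computes it,
# but two layers do -- the case the Perceptrons critique was widely
# read as dismissing. Weights below are hand-picked, not learned.

def step(z):
    return 1 if z > 0 else 0

def two_layer_xor(x1, x2):
    h1 = step(x1 + x2 - 0.5)   # hidden unit: fires on OR
    h2 = step(x1 + x2 - 1.5)   # hidden unit: fires on AND
    return step(h1 - h2 - 0.5) # output: OR and not AND == XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", two_layer_xor(a, b))  # prints the XOR truth table
```

Nothing deep, but it shows how a one-paragraph "proof of limits" can miss what one extra layer buys you.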
Kurzweil right on trends but wrong on policies (Score:2)
Contrast with James Hughes, Director of IEET: http://www.youtube.com/watch?v... [youtube.com]
And also: http://www.youtube.com/watch?v... [youtube.com]
Kurzweil was heavily rewarded for success as a CEO in a capitalist society. So his recommendations tend to support that and also be limited by it. So, things like a "basic income" or "Free software" may be beyond Kurzweil's general policy thinking.
See also the disagreeing comments here:
"Transhumanist Ray Kurzweil Destroys Zeitgeist Movement 'Technological Unemployment'"
http://www.youtube [youtube.com]
Re: (Score:2)
Actually, your second point IS the middle. The logical third point would be, there is one who thinks Kurzweil is a genius and is spot on about the future.
FTFY!
Re: (Score:2)
Re: (Score:2)
Too lazy to RTFA, but the bit about "won't have time to develop adult intelligence by 2029" seems to be missing the difference between the speed of chemical synapses and electrical or photonic switching circuits.
Re: Kurzweil, you missed my perspective, I think he's a crazy dreamer who has been right about a fair number of things. I take his predictions with a great deal of skepticism, but I wouldn't bet my retirement accounts on him being wrong....
Re: (Score:2)
Kurzweil is Lex Luthor.
Re:Kurzweil is an idiot with Super Powers (Score:4, Insightful)
I propose, on the other (third) hand, that reliably educating humans to be smart should be the first step. We will only do the artificial intelligence bit when we actually get the human intelligence angle.... and that will not, for sure, happen any time soon.
Re:Kurzweil is an idiot with Super Powers (Score:4)
You are missing an important detail here. Humans are very, very smart; the problem is that we all think we are smart. I have heard about this Kurzweil thing for quite a while and have to say he is dead wrong, and let me explain why.
Humans are smart because they can optimise. There are two ways to digest information: bitmap style, or vector graphics style. Most humans do learning vector graphics style. It allows us to process huge amounts of information at the cost of inaccuracy. This does not mean we cannot process information bitmap style, and indeed there are humans who do, namely autistic savants. And I don't mean pseudo-autistic, I mean Rain Man autistic. There is this artist who can look at any sight and create a photographic copy of it on a piece of paper. The cost of bitmap is that other functions are put out of order.
Kurzweil from what I am guessing is thinking this is a hardware issue. I say no it is not a hardware issue for our human brains are optimised to process huge amounts of information. It is a conflict of information issue that causes us to be both smart and stupid at the same time. For if we all reached the same conclusion we as a human race would have died out many eons ago.
When two people see the same information they will more often than not come to different conclusions. This is called stochastics, and it is what causes strife among humans. Some humans think God came in the form of a fat man, others think he came crucified, and yet another in a beard and head piece. I am not mocking religion; what I am trying to point out is that we all see the same information, yet we all wage wars over who saw the right image.
Thus when Robots or AI gets as intelligent as humans, the machines are going to be as fucken stupid as human beings. They are going to wage the same wars and think they all have reached the proper conclusion, even though they are all right and wrong at the same time. The real truth to AI has already been distinctly illustrated in a movie that gets rarely quoted... The Matrix! Think hard about the battles and the wars and the thoughts. They all represent the absolute truths that each has seen and deemed to be correct. YET they are slightly different.
I will grant Kurzweil one thing: the machines will have more storage capacity. But then I ask, what is stopping us from becoming part machine, part human? I say nothing...
"Robots" will never be as smart as a human. (Score:2, Interesting)
The difference between a robot and a computer is that the robot is self-mobile at the very minimum. If it can't get up and move away (no matter how awkwardly), it's not a robot.
Mobility is hard, not easy. Worse, the larger a computer is, the harder mobility becomes.
There are lots of reasons to build a computer smarter than a human being, but practically none to add
Re:"Robots" will never be as smart as a human. (Score:5, Insightful)
By the same argument you could say that any good library from 1950 was also smarter than a human. You'd be just as wrong.
Re: (Score:3)
In a large number of ways, a 1950's library is smarter than any human.
If the measure of "smart" is how closely it behaves like a human - sure, we're probably a ways off.
If the measure of "smart" is what we know (in bulk), we're already there.
If the measure of "smart" is the ability to synthesize what we know in useful relevant ways... we're making progress, but have a way to go.
Re: (Score:2)
Does a book or a web page really know the information it contains?
Is a concept held in human working memory equivalent to the same concept written down?
Re: (Score:2)
A firm yes to the second, unless you have some very particular religious beliefs.
The first though is less obvious: the best current working definition for "knowledge" is "justified, true belief". Wikipedia holds many things that are both true and justified, but Wikipedia doesn't "believe" anything, if we're just speaking about the web site, not the editors.
"Belief" certainly requires sentience (feeling), and maybe sapience (thinking). Personally, I think human sapience isn't all that special or unique, th
Re: (Score:2)
A human mind can manipulate a concept, apply it to new situations and concepts.
A concept written down is just static information, waiting for an intelligence to load it into working memory and do something with it.
Re: (Score:2)
Human memory is just storage, no different from paper. It's the intelligence that's relevant, not the storage.
Re: (Score:2)
You don't know what 'working memory' means in the computer or neurological sense? Hint: how is it stored?
You should just shut-up. You're embarrassing yourself.
Re: (Score:2)
Wow, where does the hate come from?
If you mean "working memory" as a loose analogy for the computer sense, sure, I agree with you, because that requires active contemplation. If by "working memory" you mean the stuff we're currently contemplating, it's the contemplating part that matters, yes? That's how you're distinguishing "working memory" from "memory"? So the difference is "intelligence", not the storage medium?
Re: (Score:3)
Working memory is the space that you actively think in. It's not clear how it's stored, but it's clear that most memory is not just words. An AI will start with an in-memory way of storing connected concepts: actors, linguistic, mathematical, logical, not-understood-but-remembered cause/effect, image. Parsing the information into working memory involves putting it into a form that the intelligence can use.
This is a pretty well understood concept. The details are the tricky part.
Re:Wow, where does the hate come from? (Score:3)
Terrifyingly, "The Hate" might be one of the easier first things to simulate in AI!
The reason is that it's often demonstrated with a far lower level "skillset" than the smart comments.
See for example the (thinning?) pure troll posts here. Despite the rise in lots of other things, I'm noticing fewer pure troll posts of the worst vicious kind. I wondered idly why they got here so regularly. Anyone remember the ones that went:
"so you sukerz ya haterz loosers you take it and shove it?"
Any 1000 of you could writ
Re: (Score:2)
I think that perhaps it's not as firmly equivalent as you imply; a concept in a book cannot be used in the same ways as a concept in human memory without being copied to human memory. At which point it's not the concept in the book getting used any more.
Re: (Score:2)
You can't "use" a concept stored in "human memory" directly either. Thinking about stuff copies* it out of memory and into consciousness. (Or did you mean "memory" in a very loose sense, in which case I agree with you.)
*Human memory is normally quite lossy - we reconstruct most of what we remember - heck, we construct most of what we see - so "copy" isn't the best word, really.
Re:How aware does a system have to be? (Score:2)
This is one of the approaches I've been poking at off and on for a while as noted in my remarks over the years in these stories.
To me an instructive experiment is to go all the way to the top and give the program some initial values not unlike Asimovian ones, and then it builds a "like/dislike" matrix of people and things.
It's not that far off from college dorm discussions! : )
So then going back to basics, you feed it info about people doing things, it runs those against its "like/dislike" systems, and upda
Re: (Score:3)
I certainly wouldn't argue that libraries are self-aware.
It all goes back to what the definition of smart is. Libraries certainly contain more information -- at least, in a classical sense. [Maybe one good memory of a sunset contains more information - wtfk] Watson, for example, is just a library with a natural language interface at the door. By at least one measure -- Jeopardy :) -- a library (Watson) is smarter than a lot of people.
Re: (Score:2)
Does a book or a web page really know the information it contains?
Doubtful. If a book contains the equations, 1 + 1 = 2, and 1 + 1 = 3, how does it know the first and not the second equation?
Define intelligence? (Score:3)
Re: (Score:2)
A computer not only has software (i.e. the instructions), but also hardware to actually execute the instructions in a reliable way. For the 1950's library to be considered "a computer" you would have to include the librarian (or regular person) who actually follows the instructions of the lookup system to retrieve the information, and even then whether this is a "reliable" method of execution is debatable.
In fact you could in theory make any computer that is only instructions written on paper (e.g. copy da
Re:"Robots" will never be as smart as a human. (Score:5, Insightful)
Computers on the other hand can already be argued to be smarter than a human - if you consider the entire internet as a single computer.
Depends on how you define "smarter."
The internet holds more knowledge than a single human ever could, but machines cannot do anything without direct, explicit directions - told to it by a human. That's the definition of stupid to me: unable to do a thing without having it all spelled out for you.
There's a reason D&D considers Wisdom and Intelligence to be separate attributes.
Re: (Score:2)
machines cannot do anything without direct, explicit directions - told to it by a human.
Everything a computer does is a result of its programming and input. The same could be said of a human. The only difference is that the programming in a human is a result of natural selection, and the programming in a computer is a result of intelligent design (by a human, who was indirectly a result of natural selection).
In the same way that a computer can not do anything that its programming does not allow, a human can not do anything that his/her brain does not allow. It's true that human brains al
Re: (Score:2)
It's all just matter and energy.
Indeed - very few (sane) people dispute the fact that consciousness can be generated with non-biological hardware (using silicon). We know that consciousness is the result of matter and energy - a more interesting question IMO, is: matter down to what level ? does the brain only use "classical" physics principles to generate consciousness, or does it somehow exploit quantum principles (we certainly know that natural selection has made use of those in some cases - see photosynthesis).
Maybe the brain requi
Re: (Score:3)
The internet holds more knowledge than a single human ever could, but machines cannot do anything without direct, explicit directions - told to it by a human.
I'm sure not doing anything would still be way better than someone only checking facebook for a whole day. Which increases the score on the Robot side.
Re: (Score:2)
The internet holds more knowledge than a single human ever could, but machines cannot do anything without direct, explicit directions - told to it by a human. That's the definition of stupid to me: unable to do a thing without having to all spelled out to you.
Once. And then it can be rather damn good at it, like how chess computers beat their programmers. I also think you're underestimating how generic algorithms can be, even if you ask Watson a question it's never heard before it probably will find the answer anyway. As for military use, the biggest problem is that humans don't have identification, friend-or-foe systems. If you told a bunch of armed drones to kill any human heat signature in an area I imagine they could be very efficient. Just look at some of t
Re: (Score:2)
We just have a more useful and more convincing Mechanical Turk instead of something that can think for itself.
Re: (Score:2)
So when is water not wet? (Score:2)
The important thing first is to answer the question "what is thought?"
If we can't do that how do we know if it's really thinking or just something complex enough that it looks like it - eye spots on moth wings instead of real big eyes.
Re: (Score:2)
Whatever thought is, I'm sure it's not going to be dependent on some property of carbon atoms that silicon atoms don't have.
If we can't do that how do we know if it's really thinking or just something complex enough that it looks like it
What you are referring to is the idea of a philosophical zombie (or p-zombie). It is true that we would not be able to tell if a computer was conscious or just a p-zombie. I think descartes "I think therefore I am" is a pretty convincing argument to convince yourself that you are conscious. But it doesn't work on other humans. They might just be p-zombies too. How do you decide th
Re: (Score:3)
No.
First we need to define consciousness.
Then we get to decide if something fits the definition or not.
I really do not understand why you are acting as if you are unable to grasp that point. Is this some sort of debating trick?
Re: (Score:2)
Arguably, making the computer mobile, giving it responsibility for care of its own "body," is one way to make it more human. It could be simulated, the big deep blue processor could be kept in a closet and operate the body by remote, or the whole body and world thing could play out in VR, but those elements of seeing the world through two eyes, hearing from two ears, smelling, tasting, feeling, having to balance while walking, getting hurt if you are careless, those are all part and parcel of being human -
They don't need to be smart. (Score:2)
All they need know how to do is stick soft humans with a sharp stick. We are nowhere near as tough as we think we are. We couldn't stop Chucky dolls much less Terminators.
Re:They don't need to be smart. (Score:4, Interesting)
Very Sober (Score:5, Insightful)
Robotics expert Alan Winfield offers a sobering counterpoint to Ray Kurzweil ...
I like how the naysayers are depicted as sober, rational minded individuals while those who see things progressing more rapidly are shown as crazy lunatics. They are both making predictions about the future. Why is one claim more valid than the other? We're talking fifteen years into the future here. Do you think that the people predicting that "heavier than air flying machines are impossible" only eight years before the fact were also the sober ones?
Lord Kelvin was a sober, rational minded individual. He was also wrong.
Re:Very Sober (Score:5, Insightful)
| I like how the naysayers are depicted as sober, rational minded individuals while those who see things progressing more rapidly are shown as crazy lunatics. They are both making predictions about the future. Why is one claim more valid than the other?
It's because the naysayers are the ones more actively working in the field and closest to the experimental and theoretical results and are trying to actually accomplish these kinds of tasks.
Obviously in 1895 heavier than air flying machines were possible because birds existed. And in 1895 there was a significant science & engineering community actually trying to do it which believed it was possible soon. Internal combustion engines of sufficient power/weight were rapidly improving, fluid mechanics was reasonably understood, and it just took the Wrights to re-do some of the experiments correctly and have an insight & technology about controls & stability.
So in 1895, Lord Kelvin was the Kurzweil of his day.
Re: (Score:3)
Obviously in 2014 thinking machines were possible because humans existed. And in 2014 there was a significant science & engineering community actually trying to do it which believed it was possible soon. Microprocessors of sufficient power/weight were rapidly improving, neuromorphic engineering was reasonably understood, and it just took Markram et al. to re-do some of the experiments correctly and have an insight & technology about controls & stability.
Hmm. I agree.
Re: (Score:2)
Re: (Score:2, Insightful)
It's because the naysayers are the ones more actively working in the field and closest to the experimental and theoretical results and are trying to actually accomplish these kinds of tasks.
More actively than Ray Kurzweil, Director of Engineering at Google in charge of machine intelligence? Very few people in the world are more active in AI-related fields than he is.
Re: (Score:3)
Compare, as another poster said, to Peter Norvig, who has his own Scholar page [google.ca] and the difference is rather striking.
Re: (Score:2)
It's because the naysayers are the ones more actively working in the field and closest to the experimental and theoretical results and are trying to actually accomplish these kinds of tasks.
Most of the time, you're right, the experts are the experts, they know their fields and they can predict the future.
I find, however, that the experts that have the time to seek publicity, pontificate for the press, serve as expert witnesses, etc. are often a bit low on skill and behind the curve on what is really possible, or even true in their field. Meanwhile, some of the most cutting edge innovators can be disinclined to share their latest progress.
This is patently not the case in massively collaborativ
Re: (Score:2)
I like how the naysayers are depicted as sober, rational minded individuals while those who see things progressing more rapidly are shown as crazy lunatics.
I don't see the word "sobering" used that way. For me it just means that while one might get excited hearing Kurzweil, hearing from Winfield is a sobering experience. There is no implication that either of the two is less crazy or more right.
Re: (Score:3)
while those who see things progressing more rapidly are shown as crazy lunatics.
Easy, we have 60 years of AI people saying "the machines are coming, the machines are coming, TOMORROW, or in 10 years by the latest" and they have yet to show up.
In the long run they will be right; in the short term there is no evidence that the singularity is around the corner. Heck, Google Translate has been stuck at a 95% correctness rate for about the last five years. If we cannot solve that one, what basis is there for Kurzweil's alarmist scenario?
Alternative View (Score:2, Funny)
Analysis: By 2029 people will be so dumb that current robots will be smarter than humans.
Re: (Score:2)
Doctor: [laughs] Right, kick ass. Well, don't want to sound like a dick or nothin', but, ah... it says on your chart that you're fucked up. Ah, you talk like a fag, and your shit's all retarded. What I'd do, is just like... like... you know, like, you know what I mean, like...
I've only got ONE thing to say.... (Score:2)
Number five, IS Alive.
I've seen it myself. Spontaneous emotional response.
They don't need to be smart (Score:2)
They only need to be cute [smartdoll.jp].
That is hard to predict (Score:2)
If smart is the capability of intellectually adapting to accomplish tasks, then computers are in trouble for now. If academia overall stops chasing its own tail, worried about publishing papers in great volume of questionable relevance, and resumes publishing meaningful developments, then maybe we can get a good breakthrough in ten years. And that is a big maybe.
I am not particularly thrilled to create an AI good enough to be like us. /. is nice enough but humans overall are dicks. Anything we create wi
Robot, please (Score:2)
Anyone who thinks that robots will be smarter than humans by 2029 has not really thought things through. I can step out on my back patio, take one look at the pergola, and tell you that it's going to need to be replaced in the next couple of years. I can look at the grass and tell whether I need to cut it this weekend or let it go for another week. I can sniff the air and tell you that the guy in the next cubicle has farted. Of course a robot might come to the same conclusions, but it would have to take
Re: (Score:3)
To be fair, your ability to tell if the grass needs cut is also based on sampling grass growing patterns over your entire life...
Re: (Score:2)
Re: (Score:2)
Wait just a god damn second. Are you claiming you understand a woman?
Much less bold than claiming to understand women, but I'm still calling BS on you.
Most people go their whole lives and don't even begin to understand themselves, much less another adult.
I don't know. (Score:5, Funny)
If the contents of my Facebook feed can be taken into consideration, one could reasonably make the argument that robots are smarter than humans right now.
the "data" milestone (Score:2)
Commander Data is a fictional character. The character occurs in a ****context**** where humanity has made technological jumps that enable ***storytelling****
I absolutely hate that really, really intelligent people are reduced to this horrible of an analogy to comprehend what's happening in AI....and I *love* Star Trek! I'm a trekkie!
Re: (Score:2)
rock != Data (Score:2)
exactly the problem. The "Turing test" is a facile demonstration...not a scientific "test" at all.
Do yourself a favor and ignore Turing completely when thinking about computing.
I didn't say it would make it "intelligent"...it would do just as I said, give it legal rights. Just as giving Commander Data legal rights doesn't make it any more or less "human"...confering rights doesn't change the molecules of the
Re: (Score:2)
exactly the problem. The "Turing test" is a facile demonstration...not a scientific "test" at all.
The question of how to measure consciousness is not *only* a scientific one. It is more a philosophical question. It has a scientific component to it, which is why it is important that humans are prevented from seeing the subjects or hearing their "voice". It is a thought experiment detailing a scientific experiment that could conclusively prove a machine was as intelligent as a human. Since human intelligence is best measured by human perception, the test uses human perception to make the evaluation
Don't worry (Score:2)
Don't worry. The Year 2038 problem [wikipedia.org] will take them out a decade later.
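For reference, the rollover moment is easy to check with Python's standard library: a signed 32-bit time_t overflows 2^31 - 1 seconds after the Unix epoch.

```python
# Signed 32-bit Unix time maxes out at 2**31 - 1 seconds past
# 1970-01-01 00:00:00 UTC, then wraps negative -- the Year 2038 problem.
import datetime

overflow = datetime.datetime.fromtimestamp(2**31 - 1, tz=datetime.timezone.utc)
print(overflow)  # 2038-01-19 03:14:07+00:00
```

So the robots get barely nine years of rule before their 32-bit clocks betray them.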
Re: (Score:2)
No, the 2030 welfare costs will kill us, as you'll have a mass of people out of work.
Look at autopilots they still don't do all and the (Score:2)
Look at autopilots: they still don't do everything, and they can't handle stuff like sensors going bad too well.
Re: (Score:2)
And a pilot who loses an eye does so well without that sensor, right?
Re: (Score:2)
but a human can better work around a bad reading / work out a reading mismatch
Dark Matter... (Score:2)
will be what causes the singularity!
15 years is kind of soon (Score:5, Interesting)
We're probably more than 15 years from strong AI. Having been in the field, I've been hearing "strong AI Real Soon Now" for 30 years. Robotic common sense reasoning still sucks, unstructured manipulation still sucks, and even Boston Dynamics' robots are klutzier than they should be for what's been spent on them.
On the other hand, robots and computers being able to do 50% of the remaining jobs in 15 years looks within reach. Being able to do it cost-effectively may be a problem, but useful robots are coming down to the price range of cars, at which point they easily compete with humans on price.
Once we start to have a lot of semi-dumb semi-autonomous robots in wide use, we may see "common sense" fractured into a lot of small, solvable problems. I used to say in the 1990s that a big part of life is simply moving around without falling down and not bumping into stuff, so solve that first. Robots have almost achieved that. Next, we need to solve basic unstructured manipulation. Special cases like towel-folding are still PhD-level problems. Most of the manipulation tasks in the DARPA Robotics Challenge were done by teleoperation.
Re: (Score:2)
We're not going to be able to build real AI until we actually understand HOW biological organisms think. What we have in modern digital computing is nothing at all like a biological brain. I suspect that we may never achieve AI while using digital computers. The reason I suspect this is that the human (and every other animal) brain is analog, and I believe analog computing is required for true AI. Because we've never really invested in analog computing, I believe real AI will continue to be 30 years out unt
Computers can't beat us at chess, oh, wait... (Score:2)
Everybody knew computers could never beat humans at chess. Now they do. In much the same way, computers will beat us at every single intellectual task, at some point in time. Technology revolutions go faster every time one occurs. From 10k years for the agricultural revolution to two years for the internet and mobile phones. I see no reason why computers can't outsmart us in 2025.
In 2029 ... (Score:2)
You'll be lucky just to get it to move out of your basement by 2049.
Who needs adult level intelligence in a robot? (Score:2)
If you invent a robot as smart as a 9 year old with basic concrete reasoning power that can do simple household chores and yardwork you will become a billionaire.
That assumes computers learn as slowly as humans (Score:5, Interesting)
That presumption seems to be predicated on the theory that a computer intelligence won't "grow" or "learn" any faster than a human. Once the essential algorithms are developed and the AI is turned loose to teach itself from internet resources, I expect its actual growth rate will be near exponential until it's absorbed everything it can from our current body of knowledge and has to start theorizing and inferring new facts from what it's learned.
Not that I expect such a level of AI anytime in the near future. But when it does happen, I'm pretty sure it's going to grow at a rate that goes far beyond anything a mere human could do. For one thing, such a system would be highly parallel and likely to "read" multiple streams of web data at the same time, where a human can only consume one thread of information at a time (and not all that well, to boot.) Where we might bookmark a link to read later, an AI would be able to spin another thread to read that link immediately, provided it has the compute capacity available.
The key, I think, is going to be in the development of the parallel processing languages that will evolve to serve our need to program systems that have ever more cores available. Our current single-threaded paradigms and manual threading approaches are far too limiting for the systems of the future.
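The "spin another thread to read that link immediately" idea above doesn't need an exotic language today; here is a minimal sketch using Python's standard thread pool (fetch() and the URLs are placeholders for a real I/O-bound reader):

```python
# Sketch of consuming multiple web "streams" concurrently.
# fetch() is a stand-in: a real crawler would issue HTTP requests here.
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for downloading and digesting one page.
    return f"summary of {url}"

urls = [f"http://example.com/{i}" for i in range(8)]

# Each URL is handled by a worker thread; threads overlap nicely
# when the work is I/O-bound, which web reading is.
with ThreadPoolExecutor(max_workers=4) as pool:
    summaries = list(pool.map(fetch, urls))

print(len(summaries))  # 8
```

The hard part the post is pointing at isn't spawning the threads, it's expressing that kind of parallelism naturally once core counts keep climbing.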
Yeah but wait till he becomes a teenager... (Score:2)
From the summary:
it still might not have enough time to develop adult-equivalent intelligence by 2029
2029: Skynet is born. Nothing bad happens
2042: Skynet turns 13...
15 years? Try 200. (Score:2)
We have no idea how the human brain works. We throw random chemicals at people's brains after incorrectly assessing an illness and hope people function better afterwards. We apply electric shocks to the brain as medicine. Brain medicine is in the stone ages, technologically speaking.
Humans depend upon millions of non-human species inside and on the surface of our bodies, and we can't culture most of them, and we don't have a clear understanding of how they work together but we have a vague idea that they a
Complete nonsense (Score:2)
It has been known for decades that completely new theories will be needed. Anybody who has missed that has not bothered to find out what the state of the art is.
Less time and it depends. (Score:2)
1) Why do we need a machine as foolish as an adult human? Duplicating the downsides to that level of "intelligence" might take centuries. Self aware? Why is that intelligent or even desirable? 99% might happen soon but the pointless last 1% could take forever.
2) Once computers can do jobs on par with an 8 year old, the whole economy will collapse, since nearly every job can be learned and performed by a child if you remove the immaturity factor. Robotics already outperforms humans; it just needs the brain power.
Siegel is of course right (Score:2)
What exactly does surpassing human intelligence (Score:2)
mean? And what is the reference for human intelligence?
Does it mean the robots wouldn't vote to ban teaching of evolution in public schools? Would they vote for teaching the controversy even when none exists?
Will robots be smarter than that?
Seems reasonable (Score:2)
Ex-perts to de right of me, Ex-perts to the left . (Score:2)
.... of me and bla bla bla.... Lik'en what de hel I know...?
See thread to know.... https://www.facebook.com/char.... [facebook.com]
Yep, eben dis dumb hick can see threw dat wall of ex pert tease! T.Rue
Did you know dat too experts who is'a pos'in each utter goes show what da's exprt at?
Go ahead, mod me down..... ain't gonna change de inedible!!!
Abstractionize dat will ya.... http://abstractionphysics.net/ [abstractionphysics.net] to go
Here's the thing (Score:3)
Kurzweil's smart machine predictions are, last I checked anyway, based on a rather brute force approach to machine intelligence. We completely understand the basic structure of the brain, as a very slow, massively parallel analog computer. We understand less about the mind, which is this great program that runs on the brain's hardware, and manages to simulate a reasonably fast linear computing engine. There is work being done on this that's fairly interesting but not yet applied to machine mind building.
So, one way to just get there anyway is basically what Kurzweil's suggesting. Since we understand the basic structure of the brain itself, at some point we'll have our man made computers, extremely fast, somewhat parallel digital computers, able to run a full speed simulation of the actual engine of the brain. The mind, the brain's own software, would be able to run on that engine. Maybe we don't figure that part out for awhile, or maybe it's an emergent property of the right brain simulation.
Naturally, the first machines that get big enough to do this won't fit on a robot... that's why something like Skynet makes sense in the doomsday scenario. Google already built Skynet, now they're building that robot army, kind of interesting. The actual thinking part is ultimately "just a simple matter of software". Maybe we never figure out that mind part, maybe we do.

The cool thing is that, once the machine brain gets to human level, it'll be a matter of a really short time before it gets much, much better. After all, while the human brain simulation is the tricky part, all the regular computer bits still work. So that neural net simulation will be able to interface to the perfect memory of the underlying computing platform, and all that this kind of computation does well. It will be able to replace some of the brute force brain computing functions with much faster heuristics that do the same job. It'll be able to improve its own means of thinking pretty quickly, to the point that the revised artificial mind will run on lesser hardware. And it may well be that there are years or decades between matching the neural compute capacity of the human mind and successfully building the code for such a mind. So that first sentient program could conceivably improve itself to run everywhere.
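There's a small, everyday analogue of "replacing brute force computation with faster shortcuts backed by perfect memory" in ordinary software: memoization. Nothing here comes from the article; it's just a toy illustration of the general idea, using a recursive function that either redoes its work endlessly or remembers every subproblem it has already solved.

```python
from functools import lru_cache

def fib_brute(n):
    # Brute-force recursion: recomputes the same subproblems over and
    # over, like a simulation redoing work it has already done.
    return n if n < 2 else fib_brute(n - 1) + fib_brute(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # Same computation, but backed by the machine's "perfect memory":
    # each subproblem is solved exactly once and then simply recalled.
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

# Identical answers, wildly different cost: the brute-force version makes
# tens of thousands of calls for n=20; the cached one makes 21.
print(fib_brute(20), fib_cached(20))
```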
Possibly frightening, which I think is one reason people like to say it'll never happen, even knowing that just about every other prediction about computing growth didn't just happen, but was usually so conservative it missed reality by light-years. And hopefully, unlike all the doomsday scenarios that make fun summer blockbusters, we'll at least not forget the one critical thing: these machines still need an off switch/plug to manually pull. It always seems in the fiction, we decide just before the machines go sentient and decide we're a virus or whatever, that the off switch wasn't needed anymore.
Re: (Score:2)
Neuroscientists know that the human brain is far more complex than any foreseeable microprocessor-based computer system ...
Henry Markram [theguardian.com] would like a word with you.
Re: (Score:2)
Re: (Score:2)
However, the thing about models is that a simple one is sometimes a good way to simulate specific things accurately. A model for dealing with autism may do that well, but don't expect it to be able to simulate speech or a migraine.
Re: (Score:2)
The material (silicon) doesn't matter. Only the architecture matters. The difference between a human brain and a typical laptop is not the material it's made of. It is that the laptop is designed from the top down, with most of the computation happening in a central location (or a few locations). A human brain is a massively parallel computer with computation happening in every neuron.
If we just add more silicon chips we can have more parallel computing. They don't even need to be near each other. Comp
Re: (Score:2)
Re: (Score:3)
Brains in general, human brains included, do not process information. They generate consciousness.
Brains *do* process information. They *also* generate consciousness. I would argue that they generate consciousness *by* processing information.
They do this in ways that neuroscientists still don't understand. As a neuroscientist I can say this without hesitation.
We don't understand how consciousness is generated. That doesn't mean we can't make it happen. The Wright brothers made a working airplane before everything was figured out about aerodynamics. Many aerodynamic principles were at play in the Wright Flyer that the Wright brothers didn't understand, but they knew enough to make it happen.
Maybe humans don't know h
Re: (Score:2)
Smart computer scientists do not think that. In fact, decades ago they already thought it would take a very long time and might well be infeasible. There are just a lot of stupid CS types around.
Re: (Score:2)
Outside of the whole "going insane because of conflicting programming" thing, HAL didn't do a lot more than Google Now can do. HAL 9000 mostly provided a text-to-speech interface for a governance and caretaker system for hibernating astronauts and the ship that housed them. It mostly just kept antennas pointed and turned on the lights when it was time to wake up.
There are two things HAL could do, that Google Now doesn't do. HAL could make decisions -- but they were pretty simple logical pre-programmed de
Soul of a new... blah blah blah (Score:2)
c'mon. Every indication says your brain is you. Chemical reactions, electrical impulses, stored states, massive, active and dynamic connectivity. That's what "you" arise from. When your brain stops, you stop. Your head contains a most effective EM shield consisting of wet, conductive layers that are sufficient to prevent huge RF and EM fields from getting into your brain tissue. The tiny, minuscule events going on inside your head can't get out under any circumstance for the same reason, unless you (a) punc
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
A good deal of philosophy is mythology, trendy mythology, which is why I tend to ignore the signals coming from that direction. It's not even a soft science: it's not science at all. So yes, you're quite right, and thank you for noticing I'm not taking part in that mostly-bewildered sideshow.
There is nothing -- repeat, absolutely nothing -- to indicate, in any way, that there is anything going on in brains that isn't mundane physics. Further, not anywhere in the
Re: (Score:2)