Towards Artificial Consciousness 291
jzoom555 writes "In an interview with Discover Magazine, Gerald Edelman, Nobel laureate and founder/director of The Neurosciences Institute, discusses the quality of consciousness and progress in building brain-based devices (BBDs). His lab recently published details on a brain model that is self-sustaining and 'has beta waves and gamma waves just like the regular cortex.'" Edelman's latest BBD contains a million simulated neurons and almost half a billion synapses, and is modeled on a cat's brain.
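A note for the curious: large BBDs like this one typically build on simplified spiking-neuron equations rather than full biophysical models. The sketch below is an illustrative Python implementation of the Izhikevich model (Eugene Izhikevich, mentioned in the interview, is Edelman's collaborator on this work); the "regular spiking" parameters and input current are textbook defaults, not values from the paper.

```python
# Izhikevich spiking-neuron model (illustrative sketch, not the paper's code):
#   v' = 0.04*v^2 + 5*v + 140 - u + I,   u' = a*(b*v - u)
#   when v >= 30 mV: record a spike, then reset v <- c, u <- u + d
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" cortical cell

def simulate(I=10.0, T=1000.0, dt=0.5):
    """Forward-Euler integration over T ms; returns spike times in ms."""
    v, u = c, b * c
    spikes = []
    for step in range(int(T / dt)):
        if v >= 30.0:                 # spike cutoff reached on previous step
            spikes.append(step * dt)
            v, u = c, u + d           # after-spike reset
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

spikes = simulate()
print(len(spikes))  # the neuron fires tonically under constant input
```

Scaling this up to a million neurons is mostly a matter of vectorizing the update and adding a synaptic connectivity matrix, which is what makes models of this size tractable compared with detailed compartmental simulations.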
Neat... (Score:5, Informative)
Re:Neat... (Score:5, Funny)
Re:Neat... (Score:5, Funny)
Except that if the artificial intelligence is human-level, it will probably still get fed up with you.
Why create a conscious AI? (Score:5, Interesting)
Remember, computers are currently our tools. If we give them consciousness, would we then be treating them as slaves?
Would we want the added responsibility of having to treat them better (and likely failing)?
I figure it's just better to _augment_ humans (there are plenty of ways to do that), than to create new entities. After all if we want nonhuman intelligences we already have plenty at the local pet stores and various farms, and how well are we handling those?
Humans already have a poor track record of dealing with animals and other humans.
Re:Why create a conscious AI? (Score:5, Funny)
Remember, computers are currently our tools. If we give them consciousness, would we then be treating them as slaves?
McDonald's employees have consciousness. How do we treat them?
Re: (Score:2)
Yes.
http://en.wikipedia.org/wiki/Mr._Roboto [wikipedia.org]
Re:Why create a conscious AI? (Score:5, Interesting)
It is only slavery if we force the AI to perform against its will. If its will is to enjoy and prefer caring for the elderly, like the little robot Ford Prefect makes deliriously happy by letting it help him with a bit of wire, then allowing it to do what makes it happy is not slavery. Indeed, preventing it from doing what it enjoys could be slavery.
If you consider designing it to enjoy the task we set for it to be a more insidious slavery, consider the base programming that causes us to prefer a diet that is unhealthy when not in a survival situation, or the internal modelling that shifts between self-preservation and self-sacrifice for the most irrational reasons. Is that not a form of enslavement we have yet to throw off?
Re:Why create a conscious AI? (Score:4, Funny)
Keep your machine away from me. I have a deal with my adult daughter that when the time comes she can put me in a home, provided it has a cute nurse doing the sponge baths.
Re: (Score:2)
If you consider designing it to enjoy the task we set for it to be a more insidious slavery, consider the base programming that causes us to prefer a diet that is unhealthy when not in a survival situation, or the internal modelling that shifts between self-preservation and self-sacrifice for the most irrational reasons. Is that not a form of enslavement we have yet to throw off?
The point would be that the desire to store body fat and to protect your family/clan are down to natural selection rather than a conscious process, so you can't use it as an excuse - unless you believe in some creator or guide for evolution, but even then it's a poor excuse when trying to justify your own actions.
Re: (Score:2)
Re: (Score:2)
Cue open source fembot joke.
What was that line by Linus, about the only instinctive user interface?
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
I am convinced all AI philosophy work is documented by Australians, as every sentence ends in a question mark rather than a full stop.
Re: (Score:2)
Would we want the added responsibility of having to treat them better (and likely failing)? I figure it's just better to _augment_ humans (there are plenty of ways to do that), than to create new entities. After all if we want nonhuman intelligences we already have plenty at the local pet stores and various farms, and how well are we handling those? Humans already have a poor track record of dealing with animals and other humans.
Why create an artificial being we would treat as animals and slaves when we can create the next evolutionary step of humans who will do the same to us instead?
Re: (Score:2)
Exactly, do we really want computers to have consciousness?
I think the question is more like CAN we give computers consciousness? By what mechanism are we even 'aware' at all? Is it possible to imbue a machine with life, or only the appearance of it?
The amazing thing is that we could theoretically spend our days ruled by a set of algorithms (or evolutionary behaviour and thought adaptations) and never actually be conscious of anything, but instead we are aware of our touch, taste, smell, hearing and we can weigh our own thoughts instead of just reacting dumbly to s
Re: (Score:2)
How are qualia evidence of anything at all, other than an interesting emergent behavior deriving from the way more recently evolved high-level faculties interact with more longstanding ones?
It's quite straightforward to track down pathways in the brain responsible for making individual aspects of our perceptions available to the conscious mind -- disrupt the portions responsible for motion and you see the world as a series of still images, with no idea how fast anything moves; disrupt vision from reachin
Re: (Score:2)
Well, also, with consciousness, you'd have to consider the possibility of the system needlessly wandering off a given task and going off on a random tangent, relative to the information it's working on. And, depending on how fast this system is, there is the question of the system getting "bored" between idle moments. From the perspective of a computer, the way a conscious entity experiences "time" while running in that environment may be vastly distorted from our own experiences. While we tend to experie
Re: (Score:3, Insightful)
I generally agree with your post, but I still think that one needs to better separate concepts in the discussion here.
After we have a working model of the device, we can build the actual physical device, the brain, which does not "compute" its actions, it just works.
Well, one needs to define 'compute'. A computer also just works and is a man-made machine. Put the supercomputer into a black box and you have your 'brain that just works'.
I do not think that there is any qualitative difference between 'computing' something and having a machine that 'just works'. For example, in the embedded world, you would say that a PID controller is a PID controller, reg
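To make the PID analogy concrete, here is a minimal discrete PID loop driving a toy integrator plant (the gains and plant are invented for illustration); whether you call this 'computing' or a box that 'just works' is exactly the distinction being debated:

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # accumulate error
        derivative = (error - self.prev_error) / self.dt    # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple integrator plant (x' = u) toward a setpoint of 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
x = 0.0
for _ in range(200):
    x += 0.1 * pid.update(1.0, x)
print(round(x, 3))  # settles near the setpoint
```

The same difference equations could be realized in software, an analog circuit, or, in principle, wetware, which is the sense in which the distinction between 'computing' and 'just working' starts to blur.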
Re: (Score:2)
Indeed. Once we create something as intelligent as we are, it'll have the capability to be as self determined and as lazy as we are too.
"Hey Robot! Can you fix me some coffee?"
"How about no, puny human. I'm busy looking at the pictures on ebuyer!"
Re: (Score:2)
Re: (Score:3, Funny)
Re:Neat... (Score:4, Interesting)
Actually, since neural networks are massively parallel, you could probably run it right now if you convinced Google to lend you their hardware.
Unfortunately, no. That would require us to be able to produce AIs to specification, rather than simply copy human or cat brains. We are nowhere near that.
Re: (Score:2)
We can currently write programs to do stuff to specification (somewhat
We already have robotic vacuum cleaners. They are very primitive now. But if we don't have stupid software patents and similar bullshit hindering progress, 35 years of copying improvements and tricks should produce a robot that's pretty darn good at what it's supposed to do.
Re: (Score:2)
I've been vacuuming for over thirty-five years, and I still can't get the dust-bunnies underneath the refrigerator.
Re: (Score:2)
And yet, that's guaranteed not only to happen at some point in the future, but to continue to grow beyond that for as long as intelligence remains in the universe. Our destiny is to merge with our machines and by that overcome the limitations of the flesh. Humanity as a species will eventually make the jump from matter to energy. Or at least, that's what the novel I'm writing is about :P
Re: (Score:2)
Yes, ain't evolution grand....
Re:Neat... (Score:4, Informative)
Not as far-fetched [bluebrain.epfl.ch] as it once seemed.
From the link: "At the end of 2006, the Blue Brain project had created a model of the basic functional unit of the brain, the neocortical column. At the push of a button, the model could reconstruct biologically accurate neurons based on detailed experimental data, and automatically connect them in a biological manner, a task that involves positioning around 30 million synapses in precise 3D locations."
Note that some major parts of the model are down at the molecular level. Since then experiments using data from brain scans have shown that the simulated neocortex appears to behave like a real one [bbc.co.uk].
I doubt people (particularly the religious) will accept a computer consciousness. A good number of scientists believe animals are pure programming (nobody home, just trainable automata), and there are a shitload of ordinary people out there who still don't believe climate simulations are useful predictors [earthsimulator.org.uk] (scroll down to embedded movie).
Re: (Score:2)
I doubt people (particularly the religious) will accept a computer consciousness.
There are people who still don't accept animal consciousness, let alone computer consciousness (although they tend to be scientific types). Seriously, how can anyone who's actually played with a dog or ridden a horse believe animals have no consciousness, or especially pain [wikipedia.org]?
On the other hand, most people DO accept animals pretty well, and some even get emotionally attached to Robots so it would make sense that people would have no trouble with real, conscious Robots if they ever come around. I don't kno
Re: (Score:2)
20 billion neurons in the cerebral cortex, so only 20,000 times the one million needed?
Re: (Score:2)
Slow takeoff. (Score:2)
Now, who's working on the lobsters?
Re:Slow takeoff. (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
Yeah, I can see it now. Robokitty walks into the house: Hey MeatBoy, could you get some mouse kibblets for my Lady Cat here? And she likes to be brushed, so get on your hands and knees and start stroking.
Uh-oh. (Score:5, Funny)
Eugene Izhikevich [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex: what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don't stimulate it, the whole population of neurons stray back and forth, as has been described by scientists in human beings who aren't thinking of anything.
SKYCAT became self-aware on August 29th, 2009.
Re: (Score:2)
But there is hope for those who put their faith in Ceiling Cat [wordpress.com]
Re: (Score:2)
All your cheeseburgers are belong to us.
Re: (Score:2, Interesting)
Comment removed (Score:3, Informative)
Re:How can you tell that something is conscious? (Score:4, Funny)
Re: (Score:3, Informative)
Re: (Score:2)
Turing test and SKYCAT (Score:2, Funny)
Re:How can you tell that something is conscious? (Score:4, Insightful)
perl -e 'print "Cogito, ergo sum.\n"'
Re: (Score:3, Funny)
One method they use is to put the virtual brain into a virtual body [bbc.co.uk] and watch what it does in a virtual world. Personally I would like to see them install it on honda/sony robots and have them fight each other with cattle prods.
Re: (Score:2)
Re: (Score:3, Insightful)
Are you conscious?
Can you prove it?
[hint: no]
Re: (Score:2)
Are you conscious?
Can you prove it?
[hint: no]
I can prove it if I'm responsive and coherent.
In the literal sense, "conscientia" means knowledge-with, that is, shared knowledge [wikipedia.org].
Re: (Score:2)
I think we'll have to go with a more modern meaning - since you could otherwise still be just a zombie ;)
http://en.wikipedia.org/wiki/Philosophical_zombie [wikipedia.org]
Re: (Score:3, Informative)
As you correctly pointed out, it's not provable and I won't take the word of a zombie for it ;)
Re: (Score:2)
I recommend "Consciousness: An Introduction" by Susan Blackmore. You might have a hypothesis on how we can "find" consciousness, but you're acting as if it works as you describe, and I see no actual science behind your viewpoint.
But, please continue :) I'm not saying you _have_ to be wrong.
Consciousness - right track / wrong track (Score:2, Interesting)
-interesting article..
I often think about this, and the result is more questions, which if answered experimentally, might tell us a lot more about how 'consciousness works in the brain'
ie:
1)How long is 'now'. When you say the word 'hello', as you utter out 'o', is 'he' already a memory like the sentence uttered just before? (it seems to me not.. that 'now' is about 1/2 a second, and other things are in the past, and no longer consciously connected'. Similarly, a series of clicks (ie. via a computer) produc
Re:Consciousness - right track / wrong track (Score:5, Interesting)
You sound like a philosopher. But these questions have simple answers.
"Now" is determined by the temporal resolution of the specific process. For thought processes, that's on the order of a quarter or half second. For auditory signals, it's less than 100 ms, for visual signals, it's even less, under 50 ms.
"Red" is what your parents told you it is. A name arbitrarily assigned to a specific visual sensation, which is defined by the physical makeup of your eye.
And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it because they think there must be something special, metaphysical about our wetware. No that's not required if you look at how complex the brain is.
Re:Consciousness - right track / wrong track (Score:5, Informative)
"Red" is what your parents told you it is. A name arbitrarily assigned to a specific visual sensation, which is defined by the physical makeup of your eye.
Yes, but the fundamental question is: What is this "visual sensation"? In other words: What are qualia [wikipedia.org]?
Otherwise, I do agree with you; your parent post is mostly gibberish.
Re: (Score:2)
Pedant point (Score:2)
Should we term the processing of such signals by the visual cortex 'interpretation'?
Re: (Score:2)
That article confirmed my suspicion that a philosopher's main job is making mountains out of molehills in the absence of knowledge - again (like the post I responded to.)
Visual sensation is what's going on in the brain. We don't know enough to speculate about it reasonably. End of story.
And if you want to make unreasonable speculations go ahead but leave me alone - I prefer science.
Re: (Score:2)
Re: (Score:2)
I didn't say that. There are plenty of researchers looking into how the brain and neurons works. But as long as they don't have more results it's futile to speculate, unless you want to venture into metaphysics or sci-fi. That's OK, but it's not science.
Einstein started off with very reasonable ideas based on the science of the day. He did not fumble in the dark, he was just an independent thinker who refused to bow to the conventional wisdom of physicists at the time.
Re: (Score:2)
Quoth: "And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it because they think there must be something special, metaphysical about our wetware. No that's not required if you look at how complex the brain is."
Thanks for that. I keep hearing this as if it was an oft-tested and consensus-supported theory rather than speculation / topic land-grab / brainfart by Dennett.
Re: (Score:2)
I keep hearing this as if it was an oft-tested and consensus-supported theory rather than speculation / topic land-grab / brainfart by Dennett.
I think you mean Penrose rather than Dennett .
Re: (Score:3, Interesting)
And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons.
Too simple an answer [wikipedia.org] :-)
If you throw around 'scientific evidence', you'd better be careful with your wording.
And, yes, I also think that Penrose's ideas are a bit off.
Re: (Score:2)
If the argument is that the *chemistry* in the brain is governed by quantum principles, then I'll (trivially) agree. However, chemistry is below the 'level' that is relevant for consciousness. rrohbeck is correct in saying that the evidence doesn't support the notion that quantum mechanics is relevant for that type of neuronal function.
Perhaps more damning, there isn't even theoretical support for that idea. The scale at which quantum is relevant is substantially smaller than the scale at which neurons inte
Re: (Score:2)
Everybody knows that quantum mechanics is the basis of all chemistry.
I said "play a role." You can explain neuronal behavior with classical chemistry, since neurons are macroscopic systems working at room temperature. Any quantum effects are spread out by decoherence over the size of the neuron (or its organelles) and destroyed by thermal noise so quickly that they are completely irrelevant on the timescales a neuron works at.
Re: (Score:2)
"And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it because they think there must be something special, metaphysical about our wetware. No that's not required if you look at how complex the brain is."
The human eye can see a single photon. Are you saying that's not a quantum effect? It's quite absurd to think that one of the most superb amplifiers in the world is not affected by quantum-scale events.
Re: (Score:2)
Firstly, the human eye cannot see a single photon; it needs on the order of 10 to register light. If we could see single photons we would see quantum noise, which we don't. Some animals are hypothesized to see single photons, though.
However, you can use simpler non-quantum theories to explain how the rods and cones work. That means that the quantum effects that they are of course based upon are irrelevant. In particular, any superpositions are destroyed in extremely short time frames at the size and tempera
Re: (Score:2)
What causes this ?
Re: (Score:2)
Of course the "temporal resolution" of thoughts depends on what else is going on in the brain. It can be several seconds if we're distracted.
And as to what dreams deposit in our memories I don't even want to speculate. That's too close to theology for my taste.
Re: (Score:2)
>>And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons.
There's also no scientific explanation for subjective experience at all. Which is why Penrose theorized quantum effects in microtubules. Essentially, there are two possibilities for explaining consciousness, neither of which is very palatable:
1) There's something special about the human brain, and silicon chips can never be conscious. Though they might be able to accurately simulate a h
Re: (Score:2)
Now... (Score:2, Funny)
Olivier Lartillot (Score:2, Interesting)
Re: (Score:2)
We can't know that it's consciousness... (Score:4, Insightful)
To know whether we have artificial consciousness on our hands, we have to get clear on what consciousness is, and that's a tremendously difficult philosophical problem.
Furthermore, there are serious ethical considerations that must be addressed if indeed we believe we are close to creating an artificial consciousness in a computer. Might we not have ethical obligations to an artificially conscious creature? Would it be murder to end the program or delete the creature's data? To what extent and at what cost might we be obligated to supply the supporting computers with adequate power?
Re: (Score:2)
Hmm, we routinely "shut down" beings that we are pretty sure are conscious, if not very intelligent. Been to McDonald's lately? And we certainly limit the amount of money spent to continue "supplying power" to human brains that have faulty transformers. Generally this is limited by the amount of money in the brain's checking account. Finally, we have no problem turning off computers that beat us at chess or algebra.
So I suspect that we'll have no problem shipping intelligent and possibly conscious computers to toxic
Re:We can't know that it's consciousness... (Score:4, Informative)
Eating meat is not necessarily as ethically unproblematic as most of us would like. Ethical objections to consuming animals go back as far as Pythagoras in the West, and possibly much further in the East. The arguments for minimizing, if not eliminating, meat consumption have not gotten weaker with time. If anything, the biological discoveries showing the profound similarities between humans and other animals provide a great deal of justification for ethical vegetarianism.
Furthermore, we usually don't treat all animals alike. More intelligent animals, like the great apes, dolphins, and elephants, tend to garner much more respect. Should such a creature through a fluke gain human-level intelligence, I don't think the ethical implications are at all obscure; we should treat them with the same respect we give to other humans. We would at least have to set out guidelines as to how intelligent or sentient an artificial consciousness would have to be to deserve better treatment.
Comment removed (Score:5, Interesting)
Re: (Score:2)
Did you ever watch "The Prestige"? In the end, one character explains how he gets duplicated: one copy drops into a tank of water and drowns, the other teleports and lives, and he never knew which one he'd be during each performance.
So, back to your post, I would argue that each copy of an intelligence made, once it has been made and activated ends up being different than the original.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Your premise is wrong. The barrier between us and other animals is not artificial. We are not like other animals. If you want a simple way to convince yourself that animals are not self-aware, put them in front of a mirror.
I don't know what the animal thinks when put in front of the mirror, any more than I know what you think. We may look for expected behaviour, like testing to see if the animal touches a spot painted on its face, but such tests are loaded with assumptions which have nothing to do with consciousness.
Consciousness is basically an invention to allow us to kill animals and satisfy our conscience.
A Cat Brain (Score:4, Funny)
Cool! Soon it will evolve to the point where it will ignore its owner and never make up its mind whether it wants to be inside or out.
AI amateur hour (Score:5, Insightful)
We get this AI crap on Slashdot once a week after someone finds a new way to plug the square wires into the round hole. Plug away, because it is not going to make a bit of difference. Modeling the brain is not the problem, people, or at least it is not the big problem.
You don't get AI (consciousness) without culture, and you do not get culture without language (more exactly, there's not much difference between them). Let me put it another way the slash crew can understand: it is a software problem, not a hardware problem. Perhaps even better put with the mantra 'the network is the computer'. Our consciousness has very little to do with our brain (well, at least the part that counts).
Philosophers have been hard at this for the better part of the last 1,000 years, focusing on this particular issue seriously for the last couple hundred as science has developed. Would it not strike you as odd if, in all that time (covering most of the great thinkers), we had not dedicated a moment or two to kicking around this possibility in philosophy of mind, AI, or language?
This is pop philosophy dressed up as science and then dressed up again as philosophy by summaries to the summaries. Read the paper. It is not all that ground breaking, or anywhere near even a warmed over new lead that tells us something new about consciousness.
Re: (Score:2)
Yes, there is no real objective test for consciousness, but most people recognise it when they see it (or have it pointed out). Another assumption I think you are making is that we have to understand the brain to make one; this is not at all true. People were making and using levers well before they understood how they worked. A physically accurate model of a brain may well spontaneously produce consciousness in exactly the same way as the seasons, hurricanes, cold front
Re: (Score:2)
Kids, kids, try this for some Sunday reading to get at what I mean by my analogy of it being a "software" and "networking" problem (man, you guys can take crap way too literally):
EMPIRICISM AND THE PHILOSOPHY OF MIND by Wilfrid Sellars
http://www.ditext.com/sellars/epm.html [ditext.com]
Surprisingly, his writings are best digested by those that have not had their brains tainted by too much study in things like Philosophy, Neurology, and the like. Perhaps it will inspire someone that knows how to plug in the right wire to c
Re:AI amateur hour (Score:5, Insightful)
Are you saying that feral children [wikipedia.org] lack consciousness?
Trying to make culture somehow a requirement for consciousness (a) is a dubious premise and (b) misses the point of where we stand technologically w.r.t. neuroscience and brain modeling. There are certainly several metric assloads of unanswered questions left behind by the linked paper, and the state of the art is nowhere near being able to generate an artificial consciousness (hence the word "toward"). Certainly, the "software", i.e., the actual arrangement of neurons and synapses in a given brain, is an unsolved (and barely addressed) problem, but we still have to have a fundamental understanding of the large-scale dynamics and the general small-scale structure of the brain before we can get into that.
To some degree, this is in hopes that someone can arrive at a fully functional brain simulation without having to simulate a lot of physical development (i.e., zygote to infant) as well. Time will tell whether that's possible or not. But worrying about language (and eventually "culture") in a simulated brain is a problem decades, if not centuries, down the road, and we'll likely have decided a lot about human consciousness by virtue of modeling the brain itself long before the language problem is solved.
As for your "pop philosophy" statement, actually, this is science, first and foremost. Many scientists like to, er, philosophize on the nature of their work, particularly in neuroscience, and it makes great fodder for friendly argument at conferences and such. But ultimately, these questions will be answered by science, not philosophy.
Technological singularity (Score:4, Insightful)
Information overflow (Score:3, Funny)
If this prototype AI dude gets out of control, plug him into the Internet and he'll experience information overflow, and with some luck get stuck revisiting p0rn movies in a loop...
I doubt they have already taught it to filter out what is relevant information and what is not.
Re: (Score:2)
you gotta wonder... (Score:2)
... Don't we have enough artificial consciousness already?
Then there is Julian Jaynes's definition of "Consciousness" [wikipedia.org]
So? (Score:2)
Is this even very smart? (Score:3, Insightful)
In the scenario he develops as an example, there's nothing at all to show why consciously planning should have any advantage over an unconscious computation of prospects and action plans mapped to incoming sensory data. He in no sense answers the question of why evolution couldn't have provided precisely the capacity he attributes to consciousness without any consciousness involved.
Neural Darwinism is a fascinating hypothesis, and almost certainly right in its domain of explaining individual brain development. But his hand waving about the evolutionary worth of consciously planning, experiencing, whatever as compared to unconsciously doing the same stuff is the worst sort of bullshit, steering students away from engaging with the really hard questions.
My claim is I can in principle write a computer program for a robot that would be as effective as any lion in both catching prey and avoiding becoming prey itself, without in any way being conscious. It might be a very complex program, and take many years to write - but we're talking on the scale of evolution here, so that's not a good objection to the project. Planning != consciousness. Sensory input != consciousness. Planning + sensory input != consciousness.
That we happen to consciously plan and integrate those plans with sensory input in no way shows that our consciousness is essential to those activities. That we can build robots that plan and accept input, without being in the slightest conscious, is obvious. That evolution couldn't have done what we can do isn't obvious.
It's a very good puzzle that shouldn't be short-circuited with a bullshit answer.
Re: (Score:2)
Re:can you shut it off? (Score:5, Insightful)
Murder is a human concept. It comes from [thou shalt not do to others what you would not want done to yourself]. And if you step back, it's an evolved behavior to increase chances of survival. One more step back, and you will notice that fear of death is also an evolutionary achievement. Another look, and perception of continuous life itself is an evolved psychological construct to protect sanity. Consciousness is not continuous. Your conscious self dies every night. AI does not need to fear death, and does not need the psychological crutches that humans use to stay sane. If life for an AI is overrated, murder is irrelevant.
Oh boy, sleep! That's where I'm a viking! (Score:3, Informative)
Your conscious self dies every night.
Bullshit. Just because it gets disconnected from the stimuli of the senses doesn't mean it's dead. If you had ever had a lucid dream, you'd know it.
Re: (Score:2, Insightful)
Re: (Score:3, Informative)
As far as AI goes, the validity of computers as life forms has been successfully argued up the wazoo [amazon.com], but I will always stubbornly believe that computers will never have true individual consciousness as biological organisms do.
Maybe if you'd had some better [amazon.com] reading [amazon.com] material [amazon.com] than "is Data human?" you'd believe that computers will eventually host full-blown consciousnesses.
Re: (Score:2)
As far as AI goes, the validity of computers as life forms has been successfully argued up the wazoo, but I will always stubbornly believe that computers will never have true individual consciousness as biological organisms do.
Why? When it comes down to it, the human brain is just an extremely complex biomachine. Sure, it's unlikely that we'll be fully able to emulate a human brain tomorrow, but eternity is a quite long time. Eventually, the brain will be fully mapped and understood, and technology will be able to recreate such a mechanism without any doubt. It's just a matter of time (and of how fast we can program software capable of advanced learning processes).
Re: (Score:2)