Robotics Science

Towards Artificial Consciousness 291

jzoom555 writes "In an interview with Discover Magazine, Gerald Edelman, Nobel laureate and founder/director of The Neurosciences Institute, discusses the quality of consciousness and progress in building brain-based devices. His lab recently published details on a brain model that is self-sustaining and 'has beta waves and gamma waves just like the regular cortex.'" Edelman's latest BBD contains a million simulated neurons and almost half a billion synapses, and is modeled on a cat's brain.
  • Neat... (Score:5, Informative)

    by viyh ( 620825 ) on Sunday May 24, 2009 @01:19AM (#28072515)
    And they only need to increase that by 100,000 times to get to about the same number of neurons as a human brain, let alone the synaptic connections (which would be somewhere on the order of 2,000,000 times what they've done). Nonetheless, progress!
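
    [Editor's aside: a quick check of the arithmetic above, assuming the usual round-number estimates of ~10^11 neurons and ~10^15 synapses for a human brain; the two human-brain figures are assumptions, not numbers from the article.]

        # Back-of-envelope scaling check. Human-brain figures are rough
        # order-of-magnitude assumptions, not numbers from the article.
        model_neurons = 1_000_000                # "a million simulated neurons"
        model_synapses = 500_000_000             # "almost half a billion synapses"
        human_neurons = 100_000_000_000          # ~1e11, assumed
        human_synapses = 1_000_000_000_000_000   # ~1e15, assumed

        print(human_neurons // model_neurons)    # 100000  -> the "100,000 times"
        print(human_synapses // model_synapses)  # 2000000 -> the "2,000,000 times"
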
    • Re:Neat... (Score:5, Funny)

      by FishTankX ( 1539069 ) on Sunday May 24, 2009 @01:43AM (#28072643)
      So, if processing power doubles every 2 years, this should realistically take about 35 years to accomplish. Which means we may have artificial human-level intelligences before I retire. Perfect: now I can have a caretaker that doesn't get fed up with me when I can't pour his coffee because I have Parkinson's.
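
      [Editor's aside: the 35-year figure checks out under its own assumptions; a minimal verification, taking the parent comment's 100,000x neuron gap as the target and one doubling every 2 years.]

          import math

          gap = 100_000              # factor left to close (parent comment)
          years_per_doubling = 2     # the stated doubling assumption

          doublings = math.log2(gap)             # ~16.6 doublings needed
          print(doublings * years_per_doubling)  # ~33.2 years, i.e. roughly 35
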
      • Re:Neat... (Score:5, Funny)

        by gfody ( 514448 ) on Sunday May 24, 2009 @02:06AM (#28072707)

        Except if the artificial intelligence is human-level, it will probably still get fed up with you.

        • by TheLink ( 130905 ) on Sunday May 24, 2009 @02:14AM (#28072747) Journal
          Exactly, do we really want computers to have consciousness? Is it necessary or even helpful for what we want them to do _for_us_?

          Remember, computers are currently our tools. If we give them consciousness, would we then be treating them as slaves?

          Would we want the added responsibility of having to treat them better (and likely failing)?

          I figure it's just better to _augment_ humans (there are plenty of ways to do that) than to create new entities. After all, if we want nonhuman intelligences we already have plenty at the local pet stores and various farms, and how well are we handling those?

          Humans already have a poor track record of dealing with animals and other humans.
          • by benjamindees ( 441808 ) on Sunday May 24, 2009 @02:57AM (#28072889) Homepage

            Remember, computers are currently our tools. If we give them consciousness, would we then be treating them as slaves?

            McDonald's employees have consciousness. How do we treat them?

          • by Zerth ( 26112 ) on Sunday May 24, 2009 @03:49AM (#28073093)

            It is only slavery if we force the AI to perform against its will. If its will is to enjoy and prefer to care for the elderly, like the little robot Ford Prefect makes deliriously happy to help him with a bit of wire, then allowing it to do what makes it happy is not slavery. Indeed, preventing it from doing what it enjoys could be slavery.

            If you consider designing it to enjoy the task we set for it to be a more insidious slavery, consider the base programming that causes us to prefer a diet that is unhealthy when not in a survival situation, or the internal modelling that shifts between self-preservation and self-sacrifice for the most irrational reasons. Is that not a form of enslavement we have yet to throw off?

            • by TapeCutter ( 624760 ) * on Sunday May 24, 2009 @04:18AM (#28073189) Journal
              "If its will is to enjoy and prefer to care for the elderly"

              Keep your machine away from me. I have a deal with my adult daughter that when the time comes she can put me in a home, provided it has a cute nurse doing the sponge baths.
            • If you consider designing it to enjoy the task we set for it to be a more insidious slavery, consider the base programming that causes us to prefer a diet that is unhealthy when not in a survival situation, or the internal modelling that shifts between self-preservation and self-sacrifice for the most irrational reasons. Is that not a form of enslavement we have yet to throw off?

              The point would be that the desire to store body fat and to protect your family/clan are down to natural selection rather than a conscious process, so you can't use it as an excuse - unless you believe in some creator or guide for evolution, but even then it's a poor excuse when trying to justify your own actions.

          • Of course we - Slashdotters - do. We could finally get girlfriends (that is, when they program these artificial consciousnesses to like tech-savvy, Star Wars/Trek/whatever-loving, DnD-playing computer geeks, which might be a quite difficult task).
            • by wisty ( 1335733 )

              Cue open source fembot joke.

              What was that line by Linus, about the only instinctive user interface?

              Speaking of DnD playing, AI characters with a consciousness in an always-on MMORPG would add a huge level of depth to the game. Heck, within an MMORPG, you could experiment with all types of consciousness - from completely good to completely evil. It would be fascinating to watch them develop. Imagine, say, going on one of the highest-level WoW quests with a conscious AI that was completely good against another AI - the arch nemesis. Wouldn't it be an amazing quest if the character's personality actually mad
              • Re: (Score:3, Informative)

                by fractoid ( 1076465 )
                Yeah, until Yogg-saron escapes via some poorly executed hacking attempt and takes up residence in the Internet at large. Ai, ai, f'thangan!
          I am convinced all AI philosophy work is documented by Australians, as every sentence ends in a question mark rather than a full-stop.

          Would we want the added responsibility of having to treat them better (and likely failing)? I figure it's just better to _augment_ humans (there are plenty of ways to do that) than to create new entities. After all, if we want nonhuman intelligences we already have plenty at the local pet stores and various farms, and how well are we handling those? Humans already have a poor track record of dealing with animals and other humans.

            Why create an artificial being we would treat as animals and slaves when we can create the next evolutionary step of humans who will do the same to us instead?

          • by dov_0 ( 1438253 )

            Exactly, do we really want computers to have consciousness?

            I think the question is more like CAN we give computers consciousness? By what mechanism are we even 'aware' at all? Is it possible to imbue a machine with life, or only the appearance of it?

            The amazing thing is that we could theoretically spend our days ruled by a set of algorithms (or evolutionary behaviour and thought adaptations) and never actually be conscious of anything, but instead we are aware of our touch, taste, smell, hearing and we can weigh our own thoughts instead of just reacting dumbly to s

            • by cduffy ( 652 )

              How is qualia evidence of anything at all, other than an interesting emergent behavior deriving from the way more recently-evolved high-level facilities interact with more longstanding ones?

              It's quite straightforward to track down pathways in the brain responsible for making individual aspects of our perceptions available to the conscious mind -- disrupt the portions responsible for motion and you see the world as a series of stop-motion images, with no idea how fast anything moves; disrupt vision from reachin

          Well, also, with consciousness, you'd have to consider the possibility of the system needlessly wandering off a given task and going off into a random tangent, relative to the information it's working on. And, depending on how fast this system is, there is the question of the system getting "bored" between idle moments. From the perspective of a computer, the way a conscious entity experiences "time" while running in that environment may be vastly distorted from our own experiences. While we tend to experie

        • Indeed. Once we create something as intelligent as we are, it'll have the capability to be as self determined and as lazy as we are too.

          "Hey Robot! Can you fix me some coffee"

          "How about no, puny human. I'm busy looking at the pictures on ebuyer!"

      • Re:Neat... (Score:4, Interesting)

        by ultranova ( 717540 ) on Sunday May 24, 2009 @05:24AM (#28073399)

        So, if processing power doubles every 2 years, this should realistically take about 35 years to accomplish.

        Actually, since neural networks are massively parallel, you could probably run it right now if you convinced Google to lend you their hardware.

        Which means we may have artificial human-level intelligences before I retire. Perfect: now I can have a caretaker that doesn't get fed up with me when I can't pour his coffee because I have Parkinson's.

        Unfortunately, no. That would require us to be able to produce AIs to specification, rather than simply copy human or cat brains. We are nowhere near that.
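
        [Editor's aside on the "massively parallel" point above: one time step of a neural-network model is essentially a large matrix-vector product, and every output element is independent of the others, so the work spreads naturally across many cores or machines. A toy dense sketch in Python/NumPy; the network size, weights, and tanh unit are illustrative assumptions, not the article's model.]

            import numpy as np

            n = 1_000                                  # toy network size (illustrative)
            rng = np.random.default_rng(0)
            weights = rng.normal(size=(n, n)) * 0.01   # synaptic weight matrix
            activity = rng.random(n)                   # current activity vector

            # One update step: each neuron sums its weighted inputs at once.
            # Every row of the product is independent, so rows can be computed
            # on separate cores or machines with no coordination inside the step.
            activity = np.tanh(weights @ activity)
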

        • by TheLink ( 130905 )
          You could have the same number of neurons as a normal human being but still be permanently unconscious.

          We can currently write programs to do stuff to specification (somewhat ;) ).

          We already have robotic vacuum cleaners. They are very primitive now. But if we don't have stupid software patents and similar bullshit hindering progress, 35 years of copying improvements and tricks should produce a robot that's pretty darn good at what it's supposed to do.
          • I've been vacuuming for over thirty-five years, and I still can't get the dust-bunnies underneath the refrigerator.

    • And yet, that's guaranteed not only to happen at some point in the future, but to continue to grow beyond that for as long as intelligence remains in the universe. Our destiny is to merge with our machines and by that overcome the limitations of the flesh. Humanity as a species will eventually make the jump from matter to energy. Or at least, that's what the novel I'm writing is about :P

    • Re:Neat... (Score:4, Informative)

      by TapeCutter ( 624760 ) * on Sunday May 24, 2009 @03:32AM (#28073027) Journal
      "And they only need to increase that by 100,000 times to get to about the same number of neurons as a human brain, let alone the synaptic connections (which would be somewhere on the order of 2,000,000 times what they've done)."

      Not as far-fetched [bluebrain.epfl.ch] as it once seemed.

      From the link: "At the end of 2006, the Blue Brain project had created a model of the basic functional unit of the brain, the neocortical column. At the push of a button, the model could reconstruct biologically accurate neurons based on detailed experimental data, and automatically connect them in a biological manner, a task that involves positioning around 30 million synapses in precise 3D locations."

      Note that some major parts of the model are down at the molecular level. Since then experiments using data from brain scans have shown that the simulated neocortex appears to behave like a real one [bbc.co.uk].

      I doubt people (particularly the religious) will accept a computer consciousness. A good number of scientists believe animals are pure programming (nobody home, just trainable automata) and there are a shitload of ordinary people out there who still don't believe climate simulations are useful predictors [earthsimulator.org.uk] (scroll down to embedded movie).
      • I doubt people (particularly the religious) will accept a computer consciousness.

        There are people who still don't accept animal consciousness, let alone computer consciousness (although they tend to be scientific types. Seriously, how can anyone who's actually played with a dog or ridden a horse believe animals have no consciousness, or especially that they feel no pain [wikipedia.org]?)

        On the other hand, most people DO accept animals pretty well, and some even get emotionally attached to robots, so it would make sense that people would have no trouble with real, conscious robots if they ever come around. I don't kno

    • 20 billion neurons in the cerebral cortex, so only 20,000 times the one million needed?

  • Modeling a cat's brain, huh? One is reminded of Accelerando [wikipedia.org].

    Now, who's working on the lobsters?
  • Uh-oh. (Score:5, Funny)

    by Lendrick ( 314723 ) on Sunday May 24, 2009 @01:28AM (#28072561) Homepage Journal

    Eugene Izhikevich [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex - what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don't stimulate it, the whole population of neurons strays back and forth, as has been described by scientists in human beings who aren't thinking of anything.

    SKYCAT became self-aware on August 29th, 2009.
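
    [Editor's aside: the Eugene Izhikevich named above is also the author of a widely used simple spiking-neuron model (Izhikevich, 2003), the kind of building block typically used for simulations at this scale. Below is a minimal single-neuron sketch of that model in Python; it is not the article's million-neuron network, and the input current, step size, and parameter set are illustrative choices.]

        # Izhikevich spiking neuron with "regular spiking" parameters.
        a, b, c, d = 0.02, 0.2, -65.0, 8.0
        v, u = -65.0, 0.2 * -65.0    # membrane potential (mV) and recovery variable
        dt, I = 0.5, 10.0            # time step (ms) and constant input current

        spikes = []
        for step in range(2000):     # 2000 steps * 0.5 ms = one second
            v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:            # spike threshold: reset v, bump recovery u
                spikes.append(step * dt)
                v, u = c, u + d

        print(len(spikes), "spikes in 1 s")  # fires continually under constant drive
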

    • But there is hope for those who put their faith in Ceiling Cat [wordpress.com]

    • All your cheeseburgers are belong to us.

  • Comment removed (Score:3, Informative)

    by account_deleted ( 4530225 ) on Sunday May 24, 2009 @01:32AM (#28072585)
    Comment removed based on user account deletion
  • Interesting article...

    I often think about this, and the result is more questions which, if answered experimentally, might tell us a lot more about how consciousness works in the brain.

    i.e.:

    1) How long is 'now'? When you say the word 'hello', as you utter the 'o', is the 'he' already a memory like the sentence uttered just before? (It seems to me not; that 'now' is about 1/2 a second, and other things are in the past and no longer consciously connected.) Similarly, a series of clicks (i.e. via a computer) produc

    • by rrohbeck ( 944847 ) on Sunday May 24, 2009 @02:23AM (#28072789)

      You sound like a philosopher. But these questions have simple answers.

      "Now" is determined by the temporal resolution of the specific process. For thought processes, that's on the order of a quarter or half second. For auditory signals, it's less than 100 ms, for visual signals, it's even less, under 50 ms.

      "Red" is what your parents told you it is. A name arbitrarily assigned to a specific visual sensation, which is defined by the physical makeup of your eye.

      And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it because they think there must be something special, metaphysical about our wetware. No that's not required if you look at how complex the brain is.

      • by daeglin ( 570136 ) on Sunday May 24, 2009 @03:15AM (#28072963)

        "Red" is what your parents told you it is. A name arbitrarily assigned to a specific visual sensation, which is defined by the physical makeup of your eye.

        Yes, but the fundamental question is: What is this "visual sensation"? In other words: What is qualia [wikipedia.org]?

        Otherwise I do agree with you; your parent post is mostly gibberish.

        • by mark-t ( 151149 )
          The visual sensation is simply how our visual cortex interprets the signals that it is supplied. It is normally supplied these signals via the optic nerve, but can obtain them from other parts of the brain as well, as in what happens while dreaming for example.
        • That article confirmed my suspicion that a philosopher's main job is making mountains out of molehills in the absence of knowledge - again (like the post I responded to.)

          Visual sensation is what's going on in the brain. We don't know enough to speculate about it reasonably. End of story.
          And if you want to make unreasonable speculations go ahead but leave me alone - I prefer science.

          • by smoker2 ( 750216 )
            You realise that science starts from having ideas, reasonable or not? Only by testing those ideas do you discover the truth. Your attitude seems to be: if you don't know something, then give up. So we should give up on speculation and therefore science? Was Einstein able to test all his hypotheses? Some of the stuff he "dreamed up" wasn't verified until well after his death. He should have stuck to the patent office, I guess.
              • I didn't say that. There are plenty of researchers looking into how the brain and neurons work. But as long as they don't have more results it's futile to speculate, unless you want to venture into metaphysics or sci-fi. That's OK, but it's not science.
                Einstein started off with very reasonable ideas based on the science of the day. He did not fumble in the dark; he was just an independent thinker who refused to bow to the conventional wisdom of physicists at the time.

      • Quoth: "And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it because they think there must be something special, metaphysical about our wetware. No that's not required if you look at how complex the brain is."

        Thanks for that. I keep hearing this as if it was an oft-tested and consensus-supported theory rather than speculation / topic land-grab / brainfart by Dennett.

        • I keep hearing this as if it was an oft-tested and consensus-supported theory rather than speculation / topic land-grab / brainfart by Dennett.

          I think you mean Penrose rather than Dennett.

      • Re: (Score:3, Interesting)

        by sploxx ( 622853 )

        And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons.

        Too simple an answer [wikipedia.org].
        If you throw around 'scientific evidence', better be careful with your wording :-)

        And, yes, I also think that Penrose's ideas are a bit off.

        • by SUB7IME ( 604466 )

          If the argument is that the *chemistry* in the brain is governed by quantum principles, then I'll (trivially) agree. However, chemistry is below the 'level' that is relevant for consciousness. rrohbeck is correct in saying that the evidence doesn't support the notion that quantum mechanics is relevant for that type of neuronal function.

          Perhaps more damning, there isn't even theoretical support for that idea. The scale at which quantum is relevant is substantially smaller than the scale at which neurons inte

        • Everybody knows that quantum mechanics is the basis of all chemistry.
          I said "play a role." You can explain neuronal behavior with classical chemistry, since neurons are macroscopic systems working at room temperature. Any quantum effects are spread out by decoherence over the size of the neuron (or its organelles) and destroyed by thermal noise so quickly that they are completely irrelevant on the timescales a neuron works at.

      • by Boronx ( 228853 )

        "And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it because they think there must be something special, metaphysical about our wetware. No that's not required if you look at how complex the brain is."

        The human eye can see a single photon. Are you saying that's not a quantum effect? It's quite absurd to think that one of the most superb amplifiers in the world is not affected by quantum-scale events.

        • Firstly, the human eye cannot see a single photon; it needs on the order of 10 to register light. If we could see single photons we would see quantum noise, which we don't. Some animals are hypothesized to see single photons, though.
          However, you can use simpler non-quantum theories to explain how the rods and cones work. That means that the quantum effects that they are of course based upon are irrelevant. In particular, any superpositions are destroyed in extremely short time frames at the size and tempera

      • by smoker2 ( 750216 )
        Have you ever looked at a clock and it takes seemingly forever for the first second to pass, then suddenly it runs at normal speed? "Now" is a concept that can be stretched, apparently. A similar thing can occur in dreams. If you wake up but are still half asleep, and drift back into a dream, you can "experience" hours of action in the dream, but when you open your eyes again, barely 2 minutes have passed.
        What causes this?
        • Of course the "temporal resolution" of thoughts depends on what else is going on in the brain. It can be several seconds if we're distracted.
          And as to what dreams deposit in our memories I don't even want to speculate. That's too close to theology for my taste.

      • >>And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons.

        There's also no scientific explanation of subjective experience at all. Which is why Penrose theorized the idea of quantum effects in microtubules. Essentially, there are two possibilities for explaining consciousness, neither of which is very palatable:
        1) There's something special about the human brain, and silicon chips can never be conscious. Though they might be able to accurately simulate a h

    • by mark-t ( 151149 )
      If your brain were somehow rewired so that you see red as blue, then your brain would adapt to the change over time and you would start identifying red colors correctly again after your visual cortex had learned to compensate. I once heard about a psychological experiment which involved a person wearing special goggles all the time that inverted his vision. Within the space of two years, he was claiming to see upright, showing that his visual cortex had been reprogrammed to deal with the new style of in
  • Now... (Score:2, Funny)

    ... if they use Pentiums, Schrödinger might finally know if the cat is alive or dead.
  • According to this venerable researcher, "An artificial intelligence program is algorithmic: You write a series of instructions that are based on conditionals, and you anticipate what the problems might be. " Has he ever heard of sub-symbolic AI? http://en.wikipedia.org/wiki/Artificial_intelligence#Sub-symbolic_AI [wikipedia.org]
    • by EdZ ( 755139 )
      More worryingly, he lambasts AI research, then proceeds to describe what is simply a self-learning neural network as if it were something new and revolutionary.
  • by HadouKen24 ( 989446 ) on Sunday May 24, 2009 @02:09AM (#28072723)
    ...until we figure out the hard problem [wikipedia.org].

    To know whether we have artificial consciousness on our hands, we have to get clear on what consciousness is, and that's a tremendously difficult philosophical problem.

    Furthermore, there are serious ethical considerations that must be addressed if indeed we believe we are close to creating an artificial consciousness in a computer. Might we not have ethical obligations to an artificially conscious creature? Would it be murder to end the program or delete the creature's data? To what extent, and at what cost, might we be obligated to supply the supporting computers with adequate power?
    • by iamacat ( 583406 )

      Hmm, we routinely "shut down" beings that we are pretty sure are conscious, if not very intelligent. Been to McDonald's lately? And we certainly limit the amount of money to continue "supplying power" to human brains that have faulty transformers. Generally this is limited by the amount of money in the brain's checking account. Finally, we have no problem turning off computers that beat us at chess or algebra.

      So I suspect that we'll have no problem shipping intelligent and possibly conscious computers to toxic

      • by HadouKen24 ( 989446 ) on Sunday May 24, 2009 @02:49AM (#28072863)
        Hmm, we routinely "shut down" beings that we are pretty sure are conscious, if not very intelligent. Been to McDonald's lately?

        Eating meat is not necessarily as ethically unproblematic as most of us would like. Ethical objections to consuming animals go back as far as Pythagoras in the West, and possibly much further in the East. The arguments for minimizing, if not eliminating, meat consumption have not gotten weaker with time. If anything, the biological discoveries showing the profound similarities between humans and other animals provide a great deal of justification for ethical vegetarianism.

        Furthermore, we usually don't treat all animals alike. More intelligent animals, like the great apes, dolphins, and elephants, tend to garner much more respect. Should such a creature, through a fluke, gain human-level intelligence, I don't think the ethical implications are at all obscure; we should treat them with the same respect we give to other humans. We would at least have to set out guidelines as to how intelligent or sentient an artificial consciousness would have to be to deserve better treatment.
        • Comment removed (Score:5, Interesting)

          by account_deleted ( 4530225 ) on Sunday May 24, 2009 @03:17AM (#28072969)
          Comment removed based on user account deletion
          • Did you ever watch "The Prestige"? In the end, the one guy explains how he gets duplicated: one copy dropping into a tank of water and drowning, the other teleporting and living, and he didn't know which one he'd be during each performance.

            So, back to your post: I would argue that each copy of an intelligence, once it has been made and activated, ends up being different from the original.

    • Consciousness? I don't believe it exists. It's just an excuse to put an artificial barrier between us and other animals.
      • Your premise is wrong. The barrier between us and other animals is not artificial. We are not like other animals. If you want a simple way to convince yourself that animals are not self-aware, put them in front of a mirror.
        • Your premise is wrong. The barrier between us and other animals is not artificial. We are not like other animals. If you want a simple way to convince yourself that animals are not self-aware, put them in front of a mirror.

          I don't know what the animal thinks when put in front of the mirror, any more than I know what you think. We may look for expected behaviour, like testing to see if the animal touches a spot painted on its face, but such tests are loaded with assumptions which have nothing to do with consciousness.

          Consciousness is basically an invention to allow us to kill animals and satisfy our conscience.

  • A Cat Brain (Score:4, Funny)

    by strannik ( 81830 ) on Sunday May 24, 2009 @02:55AM (#28072885) Homepage

    Cool! Soon it will evolve to the point where it will ignore its owner and never make up its mind whether it wants to be inside or out.

  • AI amateur hour (Score:5, Insightful)

    by cenc ( 1310167 ) on Sunday May 24, 2009 @03:53AM (#28073107) Homepage

    We get this AI crap on Slashdot once a week, after someone found a new way to plug the square wires into the round hole. Plug away, because it is not going to make a bit of difference. Modeling the brain is not the problem, people, or at least it is not the big problem.

    You don't get AI (consciousness) without culture, and you do not get culture without language (more exactly, there's not much difference between them). Let me put it another way the Slashdot crew can understand: it is a software problem, not a hardware problem. Perhaps even better put with the mantra 'the network is the computer'. Our consciousness has very little to do with our brain (well, at least the part that counts).

    Philosophers have been hard at this for the better part of the last 1,000 years, focusing on this particular issue seriously for the last couple hundred as science has developed. Would it not strike you as odd that in all that time (covering most of the great thinkers) we would not have dedicated a moment or two to kicking around this possibility in philosophy of mind, AI, or language?

    This is pop philosophy dressed up as science and then dressed up again as philosophy by summaries of the summaries. Read the paper. It is not all that groundbreaking, or anywhere near even a warmed-over new lead that tells us something new about consciousness.

    • Why do you assume HUMAN consciousness?

      Yes, there is no real subjective test for consciousness, but most people recognise it when they see it (or have it pointed out). Another assumption I think you are making is that we have to understand the brain to make one. This is not at all true; people were making and using levers well before they understood how they worked. A physically accurate model of a brain may well spontaneously produce consciousness in exactly the same way as the seasons, hurricanes, cold front
      • by cenc ( 1310167 )

        Kids, kids, try this for some Sunday reading to get at what I mean by my analogy of it being a "software" and "networking" problem (man, you guys can take crap way too literally):
        EMPIRICISM AND THE PHILOSOPHY OF MIND by Wilfrid Sellars
        http://www.ditext.com/sellars/epm.html [ditext.com]

        Surprisingly, his writings are best digested by those that have not had their brains tainted by too much study in things like Philosophy, Neurology, and the like. Perhaps it will inspire someone that knows how to plug in the right wire to c

    • Re:AI amateur hour (Score:5, Insightful)

      by Dachannien ( 617929 ) on Sunday May 24, 2009 @08:55AM (#28074383)

      Are you saying that feral children [wikipedia.org] lack consciousness?

      Trying to make culture somehow a requirement for consciousness (a) is a dubious premise and (b) misses the point of where we stand technologically w.r.t. neuroscience and brain modeling. There are certainly several metric assloads of unanswered questions left behind by the linked paper, and the state of the art is nowhere near being able to generate an artificial consciousness (hence the word "toward"). Certainly, the "software", i.e., the actual arrangement of neurons and synapses in a given brain, is an unsolved (and barely addressed) problem, but we still have to have a fundamental understanding of the large-scale dynamics and the general small-scale structure of the brain before we can get into that.

      To some degree, this is in hopes that someone can arrive at a fully functional brain simulation without having to simulate a lot of physical development (i.e., zygote to infant) as well. Time will tell whether that's possible or not. But worrying about language (and eventually "culture") in a simulated brain is a problem decades, if not centuries, down the road, and we'll likely have decided a lot about human consciousness by virtue of modeling the brain itself long before the language problem is solved.

      As for your "pop philosophy" statement, actually, this is science, first and foremost. Many scientists like to, er, philosophize on the nature of their work, particularly in neuroscience, and it makes great fodder for friendly argument at conferences and such. But ultimately, these questions will be answered by science, not philosophy.

  • by Lord Lode ( 1290856 ) on Sunday May 24, 2009 @03:56AM (#28073117)
    The technological singularity [wikipedia.org] is near... Let's all welcome the next step of evolution.
  • by G3ckoG33k ( 647276 ) on Sunday May 24, 2009 @05:45AM (#28073471)

    If this prototype AI dude gets out of control, plug him into the Internet and he'll be experiencing information overflow, and with some luck get stuck revisiting pr0n movies in a loop...

    I doubt they have already taught it to filter out what information is relevant and what is not.

    • If we plug Skycat into the Internet, it will probably find icanhascheezburger.com. Then, and only then, will the human race be doomed.
  • ... Don't we have enough artificial consciousness already?

    Then there is Julian Jaynes' definition of "Consciousness" [wikipedia.org]

  • I don't see what the big deal is. I can generate alpha and beta waves with a WM_TIMER message.
  • by wytcld ( 179112 ) on Sunday May 24, 2009 @11:53AM (#28075621) Homepage

    What is the evolutionary advantage of consciousness?

    The evolutionary advantage is quite clear. Consciousness allows you the capacity to plan.

    In the scenario he develops as an example, there's nothing at all to show why consciously planning should have any advantage over an unconscious computation of prospects and action plans mapped to incoming sensory data. He in no sense answers the question of why evolution couldn't have provided precisely the capacity he attributes to consciousness without any consciousness involved.

    Neural Darwinism is a fascinating hypothesis, and almost certainly right in its domain of explaining individual brain development. But his hand-waving about the evolutionary worth of consciously planning, experiencing, whatever, as compared to unconsciously doing the same stuff, is the worst sort of bullshit, steering students away from engaging with the really hard questions.

    My claim is that I can, in principle, write a computer program for a robot that would be as effective as any lion in both catching prey and avoiding becoming prey itself, without in any way being conscious. It might be a very complex program, and take many years to write - but we're talking on the scale of evolution here, so that's not a good objection to the project. Planning != consciousness. Sensory input != consciousness. Planning + sensory input != consciousness.

    That we happen to consciously plan and integrate those plans with sensory input in no way shows that our consciousness is essential to those activities. That we can build robots that plan and accept input, without being in the slightest conscious, is obvious. That evolution couldn't have done what we can do isn't obvious.

    It's a very good puzzle that shouldn't be short-circuited with a bullshit answer.

    • I agree with you; I was anxiously waiting for his explanation but was pretty disappointed. One of the things I've been thinking is that maybe consciousness is what allows our emotions to be effective. If we are conscious of ourselves as an entity in this world, it seems to give our emotions a focus ("me") to operate on.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...