Robotics Science

Towards Artificial Consciousness

jzoom555 writes "In an interview with Discover Magazine, Gerald Edelman, Nobel laureate and founder/director of The Neurosciences Institute, discusses the quality of consciousness and progress in building brain-based devices. His lab recently published details on a brain model that is self-sustaining and 'has beta waves and gamma waves just like the regular cortex.'" Edelman's latest BBD contains a million simulated neurons and almost half a billion synapses, and is modeled on a cat's brain.
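For a sense of what "a million simulated neurons and almost half a billion synapses" means computationally, here is a minimal sketch of a spiking network at toy scale, in Python. It is not Edelman's model: the leaky integrate-and-fire dynamics, the network size, and every parameter below are illustrative assumptions only.

# A toy spiking network (NOT the BBD): leaky integrate-and-fire neurons with
# dense random synapses. All sizes and constants are illustrative assumptions.
import numpy as np

N = 1000                     # neurons (the BBD described above uses ~1,000,000)
dt, tau = 1.0, 20.0          # 1 ms timestep, 20 ms membrane time constant
v_thresh, v_reset = 1.0, 0.0

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.05, (N, N))      # random synaptic weights (N*N synapses)
v = np.zeros(N)                        # membrane potentials
spikes = np.zeros(N, dtype=bool)

for t in range(500):                               # simulate 500 ms
    I = W @ spikes + rng.normal(0.05, 0.02, N)     # synaptic + background input
    v += (dt / tau) * (-v) + I                     # leaky integration
    spikes = v >= v_thresh                         # threshold crossings
    v[spikes] = v_reset                            # reset neurons that fired
    if t % 100 == 0:
        print(t, "ms:", int(spikes.sum()), "neurons spiking")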


  • by takochan ( 470955 ) on Sunday May 24, 2009 @02:35AM (#28072605)

    Interesting article.

    I often think about this, and the result is more questions which, if answered experimentally, might tell us a lot more about how consciousness works in the brain:

    1) How long is 'now'? When you say the word 'hello', as you utter the 'o', is the 'he' already a memory, like the sentence uttered just before? It seems to me it is not: 'now' is about half a second long, and anything older is in the past, no longer consciously connected. Similarly, a series of clicks produced on a speaker (e.g. by a computer) starts to sound like a 'tone' as the clicks become more rapid, somewhere around a quarter to half a second apart, as they enter 'now' (a small sketch for generating such a click train and listening for where it fuses appears after this comment). It is as if consciousness has a fourth-dimensional (time) aspect to it: to have consciousness, you need to span a bit of time, in addition to the three physical dimensions of your brain.

    The same goes for seeing a 'running man' on the road. It looks like movement because what you saw a moment before still seems like now, so a leg has a direction (forwards or backwards) as you watch it move, remembering just the frame before.

    2) What is red? What would need to be changed in your brain for anything in your field of view seen as red to appear as blue? Researching this would again tell us how the physical connects to the conscious. Then, what needs to be altered in brain memory (i.e. physically) for a red box to be recalled as a blue box? Once we knew how to do this, we would be a long way toward understanding the connection to consciousness.

    3) Quantum mechanics (principles under which many believe our brains operate) involves spooky action at a distance and other interesting effects. Is it possible that quantum effects could also allow our brains to span processing across time, even if it is just a second? That is, again, when you hear the word 'hello', as you are hearing the 'o' you are still aware of the 'h', not by recalling it from memory; rather, the brain hearing the 'o' is still connected to the brain that heard the 'h' a moment before, so processing is in 4D, not 3D. If brains could do this, it would be immensely powerful processing-wise, and 'consciousness' may be just a side effect of that 4D processing.

    My feeling is that consciousness is somehow related to being able to span time. We know brains are 3D, but maybe they are 'wide' in the fourth dimension as well, which is why 'now' seems to take a large discrete amount of time.

    Just my thoughts, but I think trying to answer the above questions experimentally would lead us a lot closer to what 'consciousness' is and how it connects to the physical brain.
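    As a concrete way to try the click-train observation in point 1 above, the following sketch writes a WAV file of clicks whose repetition rate ramps up, so you can listen for where the discrete clicks fuse into a tone. The 2 Hz to 100 Hz ramp, the 1 ms click length, and the file name are arbitrary choices, not values from the comment or the article.

    # Sketch only: write clicks.wav, a click train ramping from 2 Hz to 100 Hz.
    import struct, wave

    rate = 44100                          # samples per second
    duration = 10.0                       # total length in seconds
    n_samples = int(rate * duration)
    audio = [0] * n_samples

    phase = 0.0
    for i in range(n_samples):
        t = i / rate
        click_hz = 2.0 + (100.0 - 2.0) * (t / duration)   # instantaneous click rate
        phase += click_hz / rate                          # integrate the rate
        if phase >= 1.0:                                  # time for the next click
            phase -= 1.0
            for j in range(i, min(i + rate // 1000, n_samples)):
                audio[j] = 20000                          # 1 ms rectangular click

    with wave.open("clicks.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                 # 16-bit mono
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % n_samples, *audio))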

  • Olivier Lartillot (Score:2, Interesting)

    by ollilartinen ( 1561191 ) on Sunday May 24, 2009 @03:00AM (#28072689)
    According to this venerable researcher, "An artificial intelligence program is algorithmic: You write a series of instructions that are based on conditionals, and you anticipate what the problems might be." Has he ever heard of sub-symbolic AI? http://en.wikipedia.org/wiki/Artificial_intelligence#Sub-symbolic_AI [wikipedia.org]
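    For contrast with the quoted description of algorithmic AI, here is a minimal sub-symbolic example: a perceptron that learns the OR function from examples rather than from hand-written conditionals. It is purely illustrative, at nothing like real scale, and is taken from neither the interview nor the linked article.

    # Sub-symbolic contrast to "write a series of instructions": a perceptron
    # that learns OR from data; nobody writes the decision rule explicitly.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    for epoch in range(20):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1      # the "rule" is adjusted from the error
            w[1] += lr * err * x2      # signal rather than coded as conditionals
            b += lr * err

    print("learned weights:", w, "bias:", b)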
  • by TheLink ( 130905 ) on Sunday May 24, 2009 @03:14AM (#28072747) Journal
    Exactly, do we really want computers to have consciousness? Is it necessary or even helpful for what we want them to do _for_us_?

    Remember, computers are currently our tools. If we give them consciousness, would we then be treating them as slaves?

    Would we want the added responsibility of having to treat them better (and likely failing)?

    I figure it's just better to _augment_ humans (there are plenty of ways to do that) than to create new entities. After all, if we want nonhuman intelligences, we already have plenty at the local pet stores and on various farms, and how well are we handling those?

    Humans already have a poor track record of dealing with animals and other humans.
  • by rrohbeck ( 944847 ) on Sunday May 24, 2009 @03:23AM (#28072789)

    You sound like a philosopher. But these questions have simple answers.

    "Now" is determined by the temporal resolution of the specific process. For thought processes, that's on the order of a quarter or half second. For auditory signals, it's less than 100 ms, for visual signals, it's even less, under 50 ms.

    "Red" is what your parents told you it is. A name arbitrarily assigned to a specific visual sensation, which is defined by the physical makeup of your eye.

    And finally, there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it, because they think there must be something special, metaphysical, about our wetware. No, that's not required once you look at how complex the brain is.

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Sunday May 24, 2009 @04:17AM (#28072969)
    Comment removed based on user account deletion
  • by Zerth ( 26112 ) on Sunday May 24, 2009 @04:49AM (#28073093)

    It is only slavery if we force the AI to perform against its will. If its will is to enjoy and prefer caring for the elderly, like the little robot that Ford Prefect makes deliriously happy to help him with a bit of wire, then allowing it to do what makes it happy is not slavery. Indeed, preventing it from doing what it enjoys could be slavery.

    If you consider designing it to enjoy the task we set for it to be a more insidious slavery, consider the base programming that causes us to prefer a diet that is unhealthy when not in a survival situation, or the internal modelling that shifts between self-preservation and self-sacrifice for the most irrational reasons. Is that not a form of enslavement we have yet to throw off?

  • Re:Neat... (Score:4, Interesting)

    by ultranova ( 717540 ) on Sunday May 24, 2009 @06:24AM (#28073399)

    "So, if processing power doubles every 2 years, this should realistically take about 35 years to accomplish."

    Actually, since neural networks are massively parallel, you could probably run it right now if you convinced Google to lend you their hardware.

    "Which means we may have artificial human-level intelligences before I retire. Perfect, now I can have a caretaker that doesn't get fed up with me when I can't pour his coffee because I have Parkinson's."

    Unfortunately, no. That would require us to be able to produce AIs to specification, rather than simply copying human or cat brains. We are nowhere near that.
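    As a rough sanity check on the quoted "about 35 years" figure, here is a back-of-envelope calculation. The doubling time comes from the quote; the human synapse count is a commonly cited order-of-magnitude estimate, not a figure from the article.

    # Back-of-envelope check on "about 35 years"; both inputs are assumptions.
    import math

    sim_synapses = 0.5e9      # the BBD in the summary: ~half a billion synapses
    human_synapses = 1e14     # rough textbook order of magnitude for a human brain

    gap = human_synapses / sim_synapses      # ~200,000x more synapses to simulate
    doublings = math.log2(gap)               # ~17.6 doublings of capacity needed
    years = 2 * doublings                    # at one doubling every 2 years
    print(round(doublings, 1), "doublings ->", round(years), "years")   # ~35 years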

  • by sploxx ( 622853 ) on Sunday May 24, 2009 @07:16AM (#28073593)

    "And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons."

    Too simple an answer [wikipedia.org]
    If you throw around 'scientific evidence', you'd better be careful with your wording :-)

    And, yes, I also think that Penrose's ideas are a bit off.

  • by anegg ( 1390659 ) on Sunday May 24, 2009 @08:34AM (#28073927)
    Alternatively, we discover that there is nothing particularly special about "consciousness," and we stop placing any extraordinary value on it. At that point we will really have to work hard to explain to the borderline sociopaths that we call our young why it's important not to kill. I'm not sure I like where that may end up, because the distinction that gets drawn will be between "created" consciousness and "natural" consciousness. Another division to fight over.
  • by Anonymous Coward on Sunday May 24, 2009 @08:38AM (#28073959)

    McDonald's employees don't have a chance to become omnipotent and return all the "favors" that humanity did them. With AI, I wouldn't be so sure. /mumbles something about welcoming overlords, etc.

    Seriously though, it would IMHO be a good thing if humanity were shown a mirror this way, so humans could see just how monstrous they really are. And then perhaps do something about it. Somehow, I don't see this happening, though. /pessimism.

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...