Robotics Science

Towards Artificial Consciousness 291

jzoom555 writes "In an interview with Discover Magazine, Gerald Edelman, Nobel laureate and founder/director of The Neurosciences Institute, discusses the quality of consciousness and progress in building brain-based devices. His lab recently published details on a brain model that is self-sustaining and 'has beta waves and gamma waves just like the regular cortex.'" Edelman's latest BBD contains a million simulated neurons and almost half a billion synapses, and is modeled on a cat's brain.
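
Neither the summary nor the interview reproduces the model itself, but the flavor of such simulations is easy to sketch. Below is a minimal toy network of Izhikevich spiking neurons in Python/numpy (a standard textbook model, not Edelman's BBD; all parameters are illustrative) whose summed population activity shows cortex-like rhythms:

    import numpy as np

    # Toy network of 1000 Izhikevich neurons (illustrative only; the BBD
    # in TFA uses a million neurons and a different neuronal model).
    np.random.seed(0)
    Ne, Ni = 800, 200                         # excitatory / inhibitory
    re, ri = np.random.rand(Ne), np.random.rand(Ni)
    a = np.concatenate([0.02 * np.ones(Ne), 0.02 + 0.08 * ri])
    b = np.concatenate([0.20 * np.ones(Ne), 0.25 - 0.05 * ri])
    c = np.concatenate([-65 + 15 * re**2, -65 * np.ones(Ni)])
    d = np.concatenate([8 - 6 * re**2, 2 * np.ones(Ni)])
    S = np.hstack([0.5 * np.random.rand(Ne + Ni, Ne),
                   -np.random.rand(Ne + Ni, Ni)])   # synaptic weights

    v = -65.0 * np.ones(Ne + Ni)              # membrane potentials (mV)
    u = b * v                                 # recovery variables
    rates = []                                # spikes per millisecond
    for t in range(1000):                     # 1 s of simulated time
        I = np.concatenate([5 * np.random.randn(Ne),
                            2 * np.random.randn(Ni)])  # thalamic noise
        fired = v >= 30                       # spike threshold
        rates.append(int(fired.sum()))
        v[fired] = c[fired]                   # reset fired neurons
        u[fired] += d[fired]
        I += S[:, fired].sum(axis=1)          # deliver spikes to targets
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)  # two 0.5 ms
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)  # Euler steps
        u += a * (b * v - u)

    # An FFT of `rates` shows rhythmic population activity in roughly
    # the alpha-to-gamma range, the kind of waves the summary mentions.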
  • by Anonymous Coward on Sunday May 24, 2009 @02:41AM (#28072627)

    Not that it has any significance in this bogus experiment where nothing will happen, but consciousness must be testable by physical methods, since our brains know that they are conscious. Once we identify the phenomenon, it will be easy to tell whether ants, robots, or rocks share this characteristic with humans.
    Consciousness is causally unrelated to intelligence and can only be identified for sure in clinical trials.

  • by timmarhy ( 659436 ) on Sunday May 24, 2009 @03:06AM (#28072705)
    Yep, I'd believe the emotional maturity of a 3-year-old; it's one reason I can't stand fuckheads who are cruel to animals.
  • by HadouKen24 ( 989446 ) on Sunday May 24, 2009 @03:09AM (#28072723)
    ...until we figure out the hard problem [wikipedia.org].

    To know whether we have artificial consciousness on our hands, we have to get clear on what consciousness is, and that's a tremendously difficult philosophical problem.

    Furthermore, there are serious ethical considerations that must be addressed if indeed we believe we are close to creating an artificial consciousness in a computer. Might we not have ethical obligations to an artificially conscious creature? Would it be murder to shut down the program or delete the creature's data? To what extent, and at what cost, might we be obligated to supply the supporting computers with adequate power?
  • by rrohbeck ( 944847 ) on Sunday May 24, 2009 @03:09AM (#28072727)

    perl -e 'print "Cogito, ergo sum.\n"'

  • AI amateur hour (Score:5, Insightful)

    by cenc ( 1310167 ) on Sunday May 24, 2009 @04:53AM (#28073107) Homepage

    We get this AI crap on Slashdot once a week, every time someone finds a new way to plug the square wires into the round hole. Plug away, because it is not going to make a bit of difference. Modeling the brain is not the problem, people, or at least it is not the big problem.

    You don't get AI (consciousness) without culture, and you don't get culture without language (more exactly, there is not much difference between the two). Let me put it another way the Slashdot crew can understand: it is a software problem, not a hardware problem. Perhaps it is even better put with the mantra 'the network is the computer.' Our consciousness has very little to do with our brain (well, at least the part that counts).

    Philosophers have been hard at work on this for the better part of the last 1,000 years, and have focused seriously on this particular issue for the last couple hundred as science has developed. Would it not strike you as odd if, in all that time (covering most of the great thinkers), we had not dedicated a moment or two to kicking this possibility around in philosophy of mind, AI, or philosophy of language?

    This is pop philosophy dressed up as science and then dressed up again as philosophy by summaries of the summaries. Read the paper. It is not all that groundbreaking, nor anywhere near even a warmed-over new lead that tells us something new about consciousness.

  • by Lord Lode ( 1290856 ) on Sunday May 24, 2009 @04:56AM (#28073117)
    The technological singularity [wikipedia.org] is near... Let's all welcome the next step of evolution.
  • by Troed ( 102527 ) on Sunday May 24, 2009 @05:54AM (#28073307) Homepage Journal

    Are you conscious?

    Can you prove it?

    [hint: no]

  • by sploxx ( 622853 ) on Sunday May 24, 2009 @07:09AM (#28073561)

    I generally agree with your post, but I still think that one needs to better separate concepts in the discussion here.

    "After we have a working model of the device, we can build the actual physical device, the brain, which does not 'compute' its actions, it just works."

    Well, one needs to define 'compute'. A computer also just works, and it is a man-made machine. Put the supercomputer into a black box and you have your 'brain that just works'.

    I do not think there is any qualitative difference between 'computing' something and having a machine that 'just works'. For example, in the embedded world, you would say a PID controller is a PID controller regardless of whether it is implemented in analog form (doing real integration in a capacitor) or digitally (approximating the integration with discrete counts, i.e., a 'simulation' of a real capacitor).
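
    To make the analogy concrete, here is a minimal sketch of the digital version, where a running sum stands in for the capacitor (gains, time step, and names are illustrative, not from any particular system):

        # Minimal digital PID controller: the integral term is a discrete
        # approximation of what an analog capacitor integrates continuously.
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0      # stands in for the capacitor
                self.prev_error = 0.0

            def step(self, setpoint, measured):
                error = setpoint - measured
                self.integral += error * self.dt      # discrete integration
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return (self.kp * error
                        + self.ki * self.integral
                        + self.kd * derivative)

        # e.g.: pid = PID(kp=1.0, ki=0.1, kd=0.05, dt=0.01)
        #       u = pid.step(setpoint=100.0, measured=97.3)

    Whether the integration happens as charge on a capacitor or in floating point, the controller does the same job; that is the sense in which 'computing' and 'just working' are the same thing.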

    That said, I think the point of such simulations can only be the validation of functional models of the brain. We already have a way of 'producing' conscious beings, which is effective enough (given the overpopulation concerns). It is also a highly energy efficient way of implementing the 'conscious machine'.

    Given that artificial consciousness is possible at all:

    Implementing something like consciousness on a large supercomputer would give a lot of insights into ourselves.

    Implementing consciousness in a box that consumes less power and takes less space than the human brain would be more of a serious technological breakthrough than a scientific advance.

    Of course, in any case, ethical issues remain ("may you switch 'it' off?", etc.), which I feel are much too complicated to warrant cramming any of my armchair philosophy in here... :-)

  • by jdoeii ( 468503 ) on Sunday May 24, 2009 @08:51AM (#28074029)

    Murder is a human concept. It comes from 'thou shalt not do unto others what you would not have done unto yourself.' And if you step back, it's an evolved behavior that increases the chances of survival. One more step back, and you notice that fear of death is also an evolutionary achievement. Another look, and the perception of continuous life is itself an evolved psychological construct that protects sanity. Consciousness is not continuous; your conscious self dies every night. An AI does not need to fear death and does not need the psychological crutches that humans use to stay sane. If life is overrated for an AI, murder is irrelevant.

  • Re:AI amateur hour (Score:5, Insightful)

    by Dachannien ( 617929 ) on Sunday May 24, 2009 @09:55AM (#28074383)

    Are you saying that feral children [wikipedia.org] lack consciousness?

    Trying to make culture a requirement for consciousness (a) rests on a dubious premise and (b) misses the point of where we stand technologically w.r.t. neuroscience and brain modeling. There are certainly several metric assloads of unanswered questions left behind by the linked paper, and the state of the art is nowhere near being able to generate an artificial consciousness (hence the word "towards"). Certainly the "software", i.e., the actual arrangement of neurons and synapses in a given brain, is an unsolved (and barely addressed) problem, but we first have to have a fundamental understanding of the large-scale dynamics and the general small-scale structure of the brain before we can get into that.

    To some degree, the hope is that someone can arrive at a fully functional brain simulation without also having to simulate a lot of physical development (i.e., zygote to infant). Time will tell whether that's possible or not. But worrying about language (and eventually "culture") in a simulated brain is a problem decades, if not centuries, down the road, and we'll likely have settled a lot about human consciousness by virtue of modeling the brain itself long before the language problem is solved.

    As for your "pop philosophy" statement, actually, this is science, first and foremost. Many scientists like to, er, philosophize on the nature of their work, particularly in neuroscience, and it makes great fodder for friendly argument at conferences and such. But ultimately, these questions will be answered by science, not philosophy.

  • by wytcld ( 179112 ) on Sunday May 24, 2009 @12:53PM (#28075621) Homepage

    "What is the evolutionary advantage of consciousness?"

    "The evolutionary advantage is quite clear. Consciousness allows you the capacity to plan."

    In the scenario he develops as an example, there's nothing at all to show why consciously planning should have any advantage over an unconscious computation of prospects and action plans mapped to incoming sensory data. He in no sense answers the question of why evolution couldn't have provided precisely the capacity he attributes to consciousness without any consciousness involved.

    Neural Darwinism is a fascinating hypothesis, and almost certainly right in its domain of explaining individual brain development. But his hand-waving about the evolutionary worth of consciously planning, experiencing, whatever, as compared to unconsciously doing the same stuff, is the worst sort of bullshit, steering students away from engaging with the really hard questions.

    My claim is that I can, in principle, write a computer program for a robot that would be as effective as any lion at both catching prey and avoiding becoming prey itself, without in any way being conscious (a toy sketch follows). It might be a very complex program and take many years to write - but we're talking on the scale of evolution here, so that's not a good objection to the project. Planning != consciousness. Sensory input != consciousness. Planning + sensory input != consciousness.
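
    A trivial illustration of that claim: the policy below senses positions, 'plans' a pursuit or an escape, and acts, with no inner experience anywhere. Everything here is hypothetical; a lion-grade program would just be a much larger version of the same lookup-and-arithmetic.

        # Hypothetical toy "lion" policy on a grid: a pure mapping from
        # sensory input to a planned move, with nothing conscious in it.
        def lion_policy(lion, prey, threat):
            """Arguments are (x, y) grid positions or None; returns a move."""
            def step_toward(src, dst):
                dx, dy = dst[0] - src[0], dst[1] - src[1]
                return ((dx > 0) - (dx < 0), (dy > 0) - (dy < 0))

            def dist(a, b):
                return abs(a[0] - b[0]) + abs(a[1] - b[1])

            if threat is not None and dist(lion, threat) < 3:
                dx, dy = step_toward(lion, threat)
                return (-dx, -dy)               # "plan": avoid becoming prey
            if prey is not None:
                return step_toward(lion, prey)  # "plan": intercept the prey
            return (0, 0)                       # nothing sensed; stay put

        # lion_policy((0, 0), prey=(4, 2), threat=None) -> (1, 1)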

    That we happen to consciously plan and integrate those plans with sensory input in no way shows that our consciousness is essential to those activities. That we can build robots that plan and accept input, without being in the slightest conscious, is obvious. That evolution couldn't have done what we can do isn't obvious.

    It's a very good puzzle that shouldn't be short-circuited with a bullshit answer.
