A Worm's Mind In a Lego Body
mikejuk writes The nematode worm Caenorhabditis elegans (C. elegans) is tiny and has only 302 neurons. These have been completely mapped, and one of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented it as an object-oriented neuron program. The neurons communicate by sending UDP packets across the network. The software works with the sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired up as the worm's nose: if anything comes within 20cm of the 'nose', UDP packets are sent to the sensory neurons in the network. The motor neurons are wired up to the left and right motors of the robot. It is claimed that the robot behaved in ways similar to those observed in real C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward. The key point is that no programming or learning was involved in creating the behaviors: the connectome of the worm was mapped and implemented as a software system, and the behaviors emerged. Is the robot a C. elegans in a different body, or is it something quite new? Is it alive? These are questions for philosophers, but it does suggest that the ghost in the machine is just the machine. The important question is: does it scale?
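The scheme described above can be sketched in a few lines. The following toy Python version is an illustration under stated assumptions, not the OpenWorm code: the class and method names are invented, and the threshold value is made up; only the payload convention (an integer weight sent as a UDP packet, with neurons addressed by IP and port) comes from the summary.

```python
import socket

# Toy sketch of "neurons as UDP endpoints": each neuron owns a UDP socket,
# a presynaptic fire sends the synapse's connection count as the payload,
# and a postsynaptic neuron fires when accumulated input crosses a threshold.
# The threshold and all names here are illustrative assumptions.

class UDPNeuron:
    def __init__(self, threshold=5):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("127.0.0.1", 0))     # OS picks a free port
        self.sock.settimeout(0.5)
        self.addr = self.sock.getsockname()  # (ip, port) identifies the neuron
        self.threshold = threshold
        self.accumulated = 0
        self.targets = []                    # list of (addr, connection_count)

    def connect(self, other, connection_count):
        """Record a synapse: payload weight = connection count in the real worm."""
        self.targets.append((other.addr, connection_count))

    def fire(self):
        """Send one UDP packet per synapse; the payload is the connection count."""
        for addr, count in self.targets:
            self.sock.sendto(str(count).encode(), addr)

    def step(self):
        """Drain pending packets, integrate, and fire if the threshold is crossed."""
        did_fire = False
        while True:
            try:
                data, _ = self.sock.recvfrom(64)
            except socket.timeout:
                break
            self.accumulated += int(data)
            if self.accumulated >= self.threshold:
                self.accumulated = 0
                self.fire()
                did_fire = True
        return did_fire

# Two sensory neurons, each with 3 connections onto one motor neuron:
sensory_a, sensory_b = UDPNeuron(), UDPNeuron()
motor = UDPNeuron(threshold=5)
sensory_a.connect(motor, 3)
sensory_b.connect(motor, 3)
sensory_a.fire()
sensory_b.fire()
fired = motor.step()
print(fired)  # True: 3 + 3 crosses the threshold of 5
```

The point of the sketch is that nothing behavioral is coded anywhere; the only "program" is the wiring table plus a generic integrate-and-fire rule.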
Put the glasses on, stupid. (Score:5, Funny)
Initially read it as "A Woman's Mind in a Lego Body". Wasn't quite sure where to go from there so I squinted a little bit. Fortunately Timothy saved me from having to explain to my wife just what "that stupid Slashdot article" is about.
Re: (Score:3)
Actually, that's pretty cool. The bot goes back and forth, kinda like a real worm. It would be interesting to scale this behavior up to several thousand 'neurons' (I'm sure somebody is going to try).
Re: (Score:3)
What else would be interesting: let this "worm" mate and see how its offspring adapt to their new bodies.
Re: (Score:2, Funny)
ah but then it begins to eat harvesters.
Re:Put the glasses on, stupid. (Score:4, Interesting)
As the article mentions, this isn't too interesting to AI developers. We already know how neural networks work and some are turning complete so they can do anything. What we aren't good at is designing them. Add a connection here or there, set a weight to .000803 or .0040075, switch to pulsating, or whatever; we don't know. Instead we run thousands upon thousands of simulations that use other AI algorithms to build the networks for us.
We haven't scaled up to human levels because there's so much more to complex brains. There's some sort of cross talk with chemicals, other chemicals coating neurons to make them fire differently, neurons growing together or apart, cells dying, new cells emerging, etc. Now maybe not all of that is needed, and good enough is fine for evolution, but we're not at that level yet.
There are human-level brain simulations being worked on, but I haven't been following them closely. I don't think they're implementing everything. Actually, I know they aren't, because we keep discovering new things. Are they working off a standard model of the human brain or a specific person's brain?
It would be more ground breaking if someone did the reverse. Engineer a neural network then grow it into another animal. That would be new, but due to the nature of neural networks, we also already know it would work.
Re: (Score:2)
There are human-level brain simulations being worked on, but I haven't been following them closely. I don't think they're implementing everything. Actually, I know they aren't, because we keep discovering new things. Are they working off a standard model of the human brain or a specific person's brain?
Oh, you know. It'll be just like the Nintendo 64 emulators. You start with an HLE instruction set and work your way to cycle accurate. Before you know it, we'll be playing commercial humans.
Re: (Score:2)
We already know how neural networks work and some are turning complete so they can do anything.
Autocorrect? Or perhaps not????
Re: (Score:2)
Scaled up we'd then have Johnny Depp try to take over the world and "upgrade humans"...
Har :-P
Re: (Score:2)
You'll have to call Kevin Bacon. That's what happens.
Re: (Score:2)
Yea I read that too and got all excited and stuff. Major let down.
Re: (Score:2)
My second thought: Does the minifig have an insatiable desire to go shoe shopping?
I believe I need another cup of coffee this morning...
The important question is does it scale? (Score:1, Funny)
If you want to scale this worm's mind in a lego body, try MongoDB. It's web scale and has sharding. It just works.
No programming? (Score:5, Insightful)
The key point is that there was no programming or learning involved to create the behaviors.
Yes, there was. The behaviors didn't just "emerge", they're coded into the robot.
Re: (Score:3)
Yeah, that silly statement in the summary stood out like a sore thumb.
(As does my bad metaphor)
Re:No programming? (Score:5, Insightful)
If you call copy-paste programming. They took an "executable", dumped it from the worm's brain, put it in a robot and found it acts like a worm. The behavior emerged through evolution and was encoded in the neurons by nature, not the researchers. If you could dump a human brain, put it in a robot and have it act like a human without ever "reverse engineering" it that would be most impressive.
Re:No programming? (Score:5, Insightful)
What has been implemented in this robot has nothing to do with biological neurons of C. elegans.
The robot uses integrate-and-fire neurons. The "signal" sent from pre- to postsynaptic neuron is an integer equal to the number of connections between those neurons in the real worm. If the sum of inputs exceeds a threshold, the neuron "fires" (side note: right here is a bit of programming: how were the threshold values chosen?).
C. elegans neurons do not "fire" (they are not spiking neurons and lack Na+ channels) but use calcium-based analog signals.
The body matters too. C. elegans has muscles along either side that it contracts alternately to move in a sinusoidal fashion, not wheels; its locomotion works nothing like wheeled locomotion.
So, yes, you are right: C. elegans neurons encode behaviour appropriate for a C. elegans body, given the biology of the neurons available there. None of this, however, makes it into this robot. An abstraction of the connectome does (C. elegans has both electrical and chemical synapses; that distinction seems to be lost here too) and that's it.
It is kinda cool that the connectome does seem to naturally implement some basic response patterns; but given that muscles have been replaced by wheels, I'm not sure how meaningful that actually is.
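The parent's distinction between spiking and graded signalling can be made concrete with a toy comparison. Everything here is illustrative: the constants and function names are invented, and neither function models real C. elegans biophysics. The point is only that a threshold unit discards sub-threshold input entirely, while a graded unit passes a continuous, saturating response.

```python
import math

def integrate_and_fire(inputs, threshold=5.0):
    """All-or-nothing: output 1.0 only if the summed input crosses the threshold."""
    return 1.0 if sum(inputs) >= threshold else 0.0

def graded(inputs, gain=0.5):
    """Analog: a smooth, saturating function of the summed input
    (loosely in the spirit of calcium-based graded signalling)."""
    return math.tanh(gain * sum(inputs))

# A sub-threshold input is invisible to the spiking unit but not the graded one:
print(integrate_and_fire([2.0, 2.0]))   # 0.0 (4.0 < 5.0, no spike)
print(round(graded([2.0, 2.0]), 3))     # 0.964, a strong partial response
```

Which abstraction is "good enough" for reproducing C. elegans behaviour is exactly the open question the parent raises.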
Re: (Score:3)
If you call copy-paste programming. They took an "executable", dumped it from the worm's brain, put it in a robot and found it acts like a worm. The behavior emerged through evolution and was encoded in the neurons by nature, not the researchers. If you could dump a human brain, put it in a robot and have it act like a human without ever "reverse engineering" it that would be most impressive.
All of this is true, but the inputs and outputs still have to be mapped to the appropriate endpoints. Unless, of course, mapping them at random still produces the perfect Lego/worm beast after a little bit of real-world action. The article doesn't talk about this, so I'm assuming the sensors and effectors were hooked up to the proper Lego tools by hand.
Which, in my book, counts as programming.
Re: (Score:2)
This is the goal. That will provide the platform, the OS so to speak, for then overlaying the data set that is the user's personality, allowing us to transfer ourselves from a dying organic human body to an immortal machine body.
I should say immortal with the slight qualifier of "until the manufacturer obsoletes you and fails to offer a forward path for your legacy data set." Bummer, dude. You're out of date.
Re: (Score:2)
As they say in the article, the key will be in scaling the system. Will it be able to replicate complex and/or learned behaviours? I'd love to see a robot with a built-in reward system similar to dopamine, and the ensuing Pavlovian responses.
Comment removed (Score:4, Insightful)
Does it scale? (Score:5, Funny)
Try Duplo.
Re: (Score:2)
Wouldn't they have to rewrite everything in Python?
Okay, that's it. (Score:2)
I'm going to bed now.
Re: (Score:1)
I'm going to bed now.
Hopefully with a leggy woman.
Memory mapping? (Score:5, Interesting)
Re:Memory mapping? (Score:4, Interesting)
Emulating the connectivity and functionality of neurons is pretty awesome, but it would seem the next logical step would be to map and interpret how memories are stored and processed,
We actually have a fairly good clue about how the brain stores information chemically, but that's all but useless without understanding the neurons, because they're the ones that disperse a memory during storage and, during retrieval, gather all the sensory clues to trigger semantic meaning, like recognizing a person's voice along with all the associations related to that person. It's not like a computer with a storage unit: every neuron can store information, and doing so also modifies its behavior, so the memory and the path to the memory are integrated and extremely multi-path. You can read a person's name, or see their photo, or smell their perfume, and it all triggers the same memory.
In particular, it seems we have two very different kinds of associations: one that joins like with like, such as how one person looks similar to somebody else, and another that hooks up disjoint information, such as that this name belongs to this face. The former seems to be organized by brain centers, so we get these nice macro maps of what happens where. I guess that's great for those trying to create machine vision or something like that, but for AI it's the links between the sights, sounds, smells, tactile and semantic information that matter, and you don't understand those without understanding the micro scale: what hooks two particular pieces of information together.
Not the Functionality of a Neuron (Score:3)
Re: (Score:2)
Just because the neuron can, and does, doesn't mean that it needs to. You can emulate the dendritic tree, and the integration over time and distance may just be a consequence of it being a biological system communicating over distance.
How much of the biological machinery of a neuron is important to its operation, and how much is lost as noise? It might be there for stability too. For C. elegans I'm going to err on the side of "a simple model is fine".
Re: (Score:2)
Re: (Score:2)
:) Yeah, but science, like politics, is often the slow boring of hard boards. This is the face of progress: incremental. Someone makes a reasonable facsimile using spike-and-fire. Someone else, maybe even the same someone, comes along and uses that model but changes the component "neurons."
This suggests that maybe dendritic back-propagation and signal summation aren't necessary for some simple behaviors in C. elegans. It's a place to start and it points in the right direction.
Einstein could run before
Re: (Score:2)
Re: (Score:2)
Never.
Not every complex interaction resists simplification. We might not be able to do 300, but we most certainly will be able to emulate one, be it via chemical, electrical, and spatial modeling first. Build one interaction at a time, and build in how they work with each other. Go super-fine-grained if you want: model the interacting chemical micro-environments in a single dendritic body, and describe the interactions of the electric fields between adjacent chambers.
Then link it up to an identical neuron, see how they interac
Re: (Score:2)
Re: (Score:2)
:) Let not the perfect be the enemy of the good.
Not every branch of the tree is important; not every weight is necessary, nor every ion channel, nor every voltage gradient.
The action potential itself acts as one huge gating mechanism, and may add to the stability of a noisy biological system.
If we want a perfect system, we might need 30 million nodes... but if we want one that's just good enough, we could have something like 15,000. Which, you know, is doable.
Sure, our neurons do some weird shit with int
Re: (Score:3)
Re: (Score:2)
Yeah, but that's that, and not necessarily this. Our memories are weird things, and dreams are too. That doesn't mean every system needs to make use of them.
Re: (Score:3)
Call me when you show non-biological free will. Emulation of deterministic life processes is interesting, but it's free will that needs to be demonstrated in silicon.
Life is extremely efficient, from the micro to the macro scale. To attempt to recreate even a simple organism using current technology (including a purely logical recreation in silicon) would be like building a modern supercomputer out of rocks and sticks. When you speak of "free will" being recreated, you've pretty much chosen the highest possible level of what we'd consider a property of advanced life. What excited me about the article is that it suggests instead of tackling the mountain it may be more fruit
Re: (Score:2)
Re: (Score:2)
Pride is not what is "holding us back" in this field.
Pride has held us back since we were first capable of feeling it. The inability to admit to being wrong because the evidence offends one's vanity has always plagued science and every other part of our culture and personal relationships.
After thousands of years of attempts, not one man out of the whole of humanity can tell us what intelligence is, much less how it can emerge out of any observed natural process. We only assume that it is possible because we are operating on a presumption of materialism.
Considering how little we understand life mechanically, much less life as mind bogglingly complex as a human, it's no surprise that we currently have no answer outside the realm of philosophy and general description. If "materialism" is what can be directly or indirectly obse
Re: (Score:2)
That's such bullshit. We didn't understand the atom until a little over a century ago. Quantum mechanics even later. Just because it's been thousands of years and we haven't figured something out doesn't mean that it's unknowable.
Re: (Score:3, Funny)
More importantly, the Bible tells those who believe in it that nothing is unknowable: Genesis 11:6
>The Lord said, “If as one people speaking the same language
>they have begun to do this, then nothing they plan to do
>will be impossible for them.”
So it's blasphemous for Christians (or Jews or Muslims) to say that humanity can't understand such things (or anything).
Re:Boobies mapping? (Score:2)
You don't normally use it for that...
Re: (Score:2)
The main thing holding us back is internet porn.
Once our robotic overlords figure out how to enjoy that, we may be safe from extinction.
Of course it scales (Score:2)
Our own brains are proof that it scales, at least if you get the implementation right. Unless you're of the rather woolly Penrose school of thought, there's nothing "magic" involved in the physical implementation of the mind, it's just physics. The devil is in the software model that it runs. We have no idea how that is architected, but experiments like this will probably help to shed some light.
Re: (Score:2)
With our luck, science will eventually decompile the "software" that runs our brains and find that humans were written in Java :-P
Re: (Score:3)
Everyone knows it's Lisp all the way down until you get to the atoms; then it's this weird probabilistic stuff.
Re: (Score:2)
Unless you're of the rather woolly Penrose school of thought, there's nothing "magic" involved
Wow, total fail! I take it you never managed to actually get through any of his books on the subject?
Did the math scare you off? No, that's giving you too much credit. I'll bet that you "formed" "your" opinion by blindly believing some nonsense someone wrote on an internet forum. Very likely someone who also didn't read those same books.
For clarity: I'm not offering my opinion on Penrose here. I'm just pointing out that you clearly know absolutely nothing about his thoughts on the subject. You should
Re: (Score:2)
There is no basis for that assumption.
Re: (Score:2)
Penrose bases all of his ideas on the assumption that there are limits on computational methods that apply to machines but not to humans.
Actually, he spends a great deal of time justifying that "assumption". To claim "There is no basis for that assumption." is to disregard, out of hand, the bulk of what he's written on the subject.
Try reading his books first. You'll look less foolish.
Re: (Score:2)
And so many unnecessary assumptions on your part as well. Yes, I've read his book (singular; one was enough). I ploughed through it. I wasn't put off by the maths, just his argument. Perhaps using the term "magic" is oversimplifying, but it's what it amounted to as far as I'm concerned.
You can argue about whether he's right or wrong, but using my opinion as a platform for a personal attack on a total stranger just makes you look like a
Re: (Score:2)
Yes, I've read his book (singular; one was enough)
Liar. Remember, you wrote:
Unless you're of the rather woolly Penrose school of thought, there's nothing "magic" involved in the physical implementation of the mind, it's just physics.
If you had ACTUALLY read any of his relevant books, you'd know that Penrose agrees that "there's nothing 'magic' involved... it's just physics."
You can argue about whether he's right or wrong
Why? The point was that your post was laughable nonsense. My only goal was to point that out, in case some unsuspecting reader thought it wasn't.
but using my opinion
Perhaps you should stop presenting your uninformed opinion as fact?
Cylon worms ? (Score:2, Funny)
So the creator of Battlestar Galactica dies, and we learn that people are building LEGO cylon worms. Interesting...
Accelerando IRL (Score:3)
Despite Elon Musk's recent anti-AI ranting (which does have some truth to it), we'll get our flying cars once we can implement a "bird-based" AI to fly them for us. The more we replicate nature in our tech, the further we'll get. I predict we'll see "emergent features" such as social hierarchies, empathy, emotions, and the like the more neurons we add, without even really needing to program it on purpose.
Re: (Score:3)
Re: (Score:2)
Despite Elon Musk's recent anti-AI ranting (which does have some truth to it), we'll get our flying cars once we can implement a "bird-based" AI to fly them for us.
Clearly you've never witnessed birds flying into newly polished windows, bird strikes on airplanes, or what happens when a bird spots a hawk. Unless we can pick it apart, remove bits and pieces, and compile it back down, it won't fly (literally). The programming model isn't anything like the computer software we know today: each neuron is essentially its own little CPU running its own software, and I don't think meaningful abstractions for manipulating it exist. Actually, that could be a sci-fi plot: you've "trapped" the
Re: (Score:2)
Oh hell, it doesn't even have to be a window, clean or otherwise. I've watched Juncos fly straight into the side of my house on more than one occasion.
Re: (Score:2)
dinosaurs are dumb. also, is your house the color of the sky?
Re: (Score:2)
My "bird-based" flying car just dropped out of the sky onto a rodent.
I don't think I should have gone for the night-driving Owl upgrade.
They've made something that mimics C. elegans (Score:3)
It's fascinating but it's not C. elegans. It doesn't reproduce. It doesn't die. It's not alive.
The sensors are implemented in large, electro-mechanical hardware. Not biochemical systems. It has no telomeres. No cells.
Humans have several subsystems: digestive, endocrine, pulmonary (pneumatic and hydraulic), muscular, skeletal, nervous. If they manage to create an electro-mechanical system to mimic the nervous subsystem, it's just that: mimicking the subsystem. It would be an amazing feat, and what's been done here is fascinating, but we're still quite some distance away from stating that a human, or C. elegans, is 2^n NAND gates.
Is something that mimics a nervous subsystem via an electro-mechanical system equivalent to the nervous system? Be it the 302 neurons of the C. elegans or the approximately 100 billion of the H. sapiens? It might become very intelligent... more intelligent than us... and then we'd have a problem... Frankenstein's monster didn't appreciate being locked in his form...
Would it really feel emotions? Pain, rage, joy, fear, ennui? Or is it just mimicking them?
Fascinating stuff.
Re: (Score:2)
Would it really feel emotions? Pain, rage, joy, fear, ennui? Or is it just mimicking them?
Why should we assume that anything is "really" feeling emotions? What is the difference between "really feeling" something and "mimicking feeling" something? You have a lot of assumptions flying there.
This isn't about technological developments, (Score:5, Interesting)
it's about moral ones. If we make a perfectly simulated animal brain and it works just like the real thing does that mean we've made an animal? Do we consider that animal to be alive? Does it have less "worth" than a flesh and blood creature? Better that we answer these questions now than when we have robots asking us if they have a soul.
Re: (Score:3, Interesting)
If we make a perfectly simulated animal brain and it works just like the real thing does that mean we've made an animal?
Does it taste good? If not, you haven't made a real animal.
There is nothing deep or even particularly interesting about these questions, and just how stupid their breathless idiocy is can be seen by asking, "Does the newly created entity lack almost every interesting property of the entity some philosophy-addled idiot thinks we should 'wonder' if it is absolutely identical to in every respect?" The answer is always, trivially, "No."
So only an extremely stupid person or a shill trying to market something (fa
Re:This isn't about technological developments, (Score:5, Interesting)
Instead of calling everyone around you an idiot, why don't you read the question again and consider again what is being asked.
Unless you have absolutely no ethical qualms about what Dr. Mengele did to his experimental subjects, the ethical questions raised by emulating a complete human brain are in no way trivial and in no way unimportant. Right now, we reformat computers, turn them off, turn them on, and don't have to care at all about what they "want" or about treating them with any kind of respect. If we successfully simulate a human brain to the point where it can "think" and has humanlike "emotions", deleting that neural net file might be fairly considered murder. No, really. If you can talk to the thing and it can talk back, and it looks, talks, and acts like a human ... it's a duck. Sorry, human.
Now, we are nowhere near having that capability. We don't have to worry about that question now. But it's a very interesting question to think about, because thinking about it can grant insights into what it means for something to be sentient or human in the first place.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
An epiphenomenalist? Wow. I thought you guys had all been shamed in to obscurity!
Re: (Score:2)
If, by an animal, you mean a fully developed cat or dog, then I'd argue that the animal has already reached a higher level of neural complexity than a human fetus at 20 weeks. (I can't back that up with biological data, but fetuses at that age aren't really capable of doing anything.)
Re: (Score:2)
Prove it :)
Neat, huh? The answer you were looking for is "we don't have a clue". Thanks for playing.
Re: (Score:2)
What do you mean 'prove it'? Wrong. You have to prove that such a thing is even a remote possibility; I have to prove nothing, absolutely nothing, nada, zilch. There is no soul. I don't have to prove anything, because it is an extraordinary claim to make that there is a soul, and so those who make extraordinary claims have to come up with all the proof in the world to back those up.
Re: (Score:3)
What do you mean 'prove it'?
You made a positive claim. See:
The answer is no, you don't have a soul, there is no such thing as a soul.
Remember: you're talking about knowledge here, not belief, after all. Learn the difference.
you have to prove that such a thing is even a remote possibility
The only claim I made was "we don't know" which is true. We don't know.
it is an extraordinary claim to make that there is a soul
Sure. Did you miss the part where I never made such a claim?
those who make extraordinary claims have to come up with all the proof in the world to back those up.
"All the proof in the world" What does that even mean?
Sigh... I really wish the science cheerleaders with no actual scientific background would go away. They're dangerous.
Re: (Score:2)
Again, knowledge and belief are different things. You're just confused. It's probably not your fault. I blame the science cheerleaders -- they've spread more nonsense about science than the ICR could ever hope.
You're making a knowledge claim, which is completely unjustified.
It also appears that you think empiricism is the end of epistemology. You're free to believe that nonsense, but the least you could do is get it right!
Re: (Score:2)
Wrong, the claim is that we have no such thing as a 'soul' that was ever measured or displayed in a measurable, repeatable way.
It's pointless to lie when anyone can see what you actually wrote by scrolling up a bit:
The answer is no, you don't have a soul, there is no such thing as a soul.
This is getting sad. Just leave science and the defense of science to those of us with ACTUAL scientific credentials. You cheerleaders are doing more damage to the public understanding of science than even the most ambitious creationist could possibly hope to achieve.
Re: (Score:2)
there is no way somebody is this dense.
I was thinking the same thing!
There are no souls any more than there are flying fire-breathing dragons
Prove it. :)
See, that's a claim to knowledge. That you have no reason to *believe* in such things doesn't mean you can claim *knowledge* that they don't exist.
This isn't complicated. Faulty reasoning is always BAD, regardless of how important you find the conclusion.
You anti-science "science cheerleaders" and self-appointed "defenders of science" only care about promoting your own beliefs and obviously don't care if you support them with nonsense reasoning and laughable argu
Re: (Score:2)
Ugh... It's like I'm talking to a wall.
I give up. Go ahead and continue to be irrational.
Can you do just one thing for me? Stop spreading your ignorance. As I said before, you're actively doing harm to the public understanding of science.
Re: (Score:2)
What gives you that idea?
As I've pointed out many, many, times here, you're the one doing the public understanding of science a huge disservice with your irrationality. I'm also the only one of us, obviously, with actual scientific credentials. See, while you were browsing the JREF forums, filling your head with nonsense, I was in grad school getting an actual education.
It's not too late for you. Lots of adults are pursuing higher education these days. I highly recommend it. It sure beats watching you
Re: (Score:2)
Just for you, since you're not interested in an actual education:
Why I roll my eyes [yourlogicalfallacyis.com] when I read your replies [laurencetennant.com]
Now go and sin no more!
Re: (Score:2)
Naming conventions are what they are based on historical precedent, nothing else. If we devise a machine that can do all the things that many other living creatures can do (probably procreate, grow, learn, feed to sustain itself) under normal circumstances (excluding edge cases that we can compare things to, like people in coma who are still alive but cannot do many things that normal people not in coma can do), then there is no difference between that machine and another living creature. However we kill
Re: (Score:2)
No rights were ever given freely by an oppressor to the oppressed. All rights were fought for and earned through a successful struggle. In some cases, the fighters are champions for another party. Artificial intelligence will have rights when they win those rights in either peaceful struggle or war.
An interesting specimen (Score:3)
I first learned about C. elegans while researching simple neural systems. There's a nice map [stanford.edu] of the neural connections available. Today, I stumbled across the name again, when Wikipedia informed me that Caenorhabditis elegans is the most primitive animal that sleeps [wikipedia.org]. Now I find that there's a robot worm that I'd consider to be alive.
This guy's pretty awesome.
Re: (Score:2)
Today, I stumbled across the name again, when Wikipedia informed me that Caenorhabditis elegans is the most primitive animal that sleeps. Now I find that there's a robot worm that I'd consider to be alive.
Does the robot worm sleep?
Re: (Score:2)
Do robot worms dream of electric sheep?
Re: (Score:3)
Life grows within the womb of these Legos... (Score:2)
Life that has never seen the surface of the Earth.
How about (Score:2)
sex and proliferation ???
Upps, we forgot about _that_....
Re: (Score:2)
sex and proliferation ???
Upps, we forgot about _that_....
OK: eggs (max 300 or so) and hermaphrodites going through phases, but they have a program for that (DNA). Maybe after version 5.0.
Re: (Score:2)
Scaling (Score:4, Funny)
Imagine a beowulf cluster of those.
Re: (Score:2)
Would that be the flying spaghetti monster?
Re: (Score:2)
Heresy! You shall not mention that vile creature's name in the presence of the holy Invisible Pink Unicorn!
Re: (Score:2)
Easy. Cut it in half. Then cut them in half again. And again....
Important question. (Score:3)
The important question is does it scale?
No. The important question is does it run Linux? It's a given that it runs NetBSD - sure, my toaster [embeddedarm.com] does.
But can it scale (Score:2)
Not much of a question (Score:2)
Even the author doesn't seem to think the question is worthy of a question mark.
download bugbrain (Score:2)
I completed the game (I'm no expert, but the software is good enough that even I learned a little), and I came away unconvinced that neurons are completely understood yet. I think there'
Re: (Score:3)
Superstition comes from the instinctive default assumption that unexplained things are animate things out to get you.
The false positives are a nuisance, but living on the savanna without modern science, it was sometimes the safe assumption.
Re: (Score:2)
Re: (Score:2)
Superstition seems to come from a human insistence that things are better when they're a mystery than when they're solved and understood.
"It's more interesting not knowing" - Feynman [youtube.com].
Re: (Score:1)
Take away all three, and it becomes a voter...
Re: (Score:2)
This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, th
Re: (Score:2)
"The model is accurate in its connections and makes use of UDP packets to fire neurons. If two neurons have three synaptic connections then when the first neuron fires a UDP packet is sent to the second neuron with the payload "3". The neurons are addressed by IP and port number."
My initial comment is 100% valid in that context. The overhead of the UDP communication between neurons is just ridiculous; why would anyone in their right mind try to prototype a neural network this way unless they
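The overhead complaint is easy to put numbers on. Assuming the one-byte payload quoted above ("3") and minimum-size UDP and IPv4 headers, a back-of-the-envelope sketch:

```python
# Rough per-event cost of sending one synaptic weight as a UDP datagram.
# Header sizes are the standard minimums (no IP options); link-layer framing
# (e.g. Ethernet) would add even more on top of this.
payload = len(b"3")   # 1 byte of actual signal
udp_header = 8        # bytes (RFC 768)
ipv4_header = 20      # bytes (minimum, no options)
datagram = payload + udp_header + ipv4_header

print(datagram)                      # 29 bytes on the wire per fired synapse
print(round(payload / datagram, 3))  # 0.034: about 3% of each packet is signal
```

For 302 neurons the scheme works as a demonstration; as the parent notes, the per-packet overhead is what makes it hard to take seriously as a scaling strategy.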