Evolutionary Computing Via FPGAs 218
fm6 writes "There's this computer scientist named Adrian Thompson who's into what he calls "soft computing". He takes FPGAs and programs them to "evolve", Darwin-style. The chip modifies its own logic randomly. Changes that improve the chip's ability to do some task are kept; others are discarded. He's actually succeeded in producing a chip that recognized a tone. The scary part: Thompson cannot explain exactly how the chip works! Article here."
Hal, open the pod bay doors, please... (Score:2)
Scary, him not being able to explain exactly how the thing works. Still, any good creation is ultimately the creation of madness.
Aged... (Score:3, Interesting)
Still, the technology's fascinating. Though I'm a little shocked that the latest articles still offer no detailed examples beyond the two-tone recognition (that bit about HAL doesn't count).
More detail (if memory serves): the FPGA outputs a logic LOW on a 100-Hz wave and a logic HIGH on a 1000-Hz wave. It is programmed by an evolved bit-sequence fed from a host PC. IIRC they started with random noise to wire the gates, so that's cool.
--Knots
Re:Aged... (Score:2, Informative)
Re:Aged... (Score:1)
Re:Aged... (Score:4, Informative)
[1] Hugo de Garis. Evolvable Hardware: Principles and Practice. http://www.hip.atr.co.jp/~degaris/CACM-EHard.html (link is not available today)
[2] Adrian Thompson. Evolving Electronic Robot Controllers that Exploit Hardware Resources. CSRP 368. In: Advances in Artificial Life, Proceedings of the 3rd European Conference on Artificial Life (ECAL95), pp. 640-656, Springer-Verlag Lecture Notes in Artificial Intelligence 929, 1995.
[3] Adrian Thompson. Evolving Fault Tolerant Systems. CSRP 385. In: Proceedings of the First IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA'95), pp. 524-529, IEE Conference Publication No. 414, 1995.
[4] Adrian Thompson. Silicon Evolution. In: Proceedings of Genetic Programming 1996 (GP96), J.R. Koza et al. (Eds.), pp. 444-452, MIT Press, 1996.
[5] Inman Harvey and Adrian Thompson. Through the Labyrinth Evolution Finds a Way: A Silicon Ridge. In: Proceedings of the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), Higuchi, T. and Iwata, M. (Eds.), pp. 406-422, Springer-Verlag LNCS 1259, 1997.
[6] Adrian Thompson. An Evolved Circuit, Intrinsic in Silicon, Entwined with Physics. In: Proceedings of the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), Higuchi, T. and Iwata, M. (Eds.), pp. 390-405, Springer-Verlag LNCS 1259, 1997.
[7] Adrian Thompson. Artificial Evolution in the Physical World. In: Evolutionary Robotics: From Intelligent Robots to Artificial Life (ER'97), T. Gomi (Ed.), pp. 101-125, AAI Books, 1997.
[8] Adrian Thompson. On the Automatic Design of Robust Electronics Through Artificial Evolution. In: Proceedings of the 2nd International Conference on Evolvable Systems: From Biology to Hardware (ICES98), M. Sipper, D. Mange & A. Perez-Uribe (Eds.), pp. 13-24, Springer-Verlag, 1998.
Re:Aged... (Score:1)
Straight out of a movie (Score:2, Interesting)
This raises the question: "Can evolving machines be controlled?"
It's possible that any machine capable of changing its logic could change the logic that says "DON'T do this..." if it decides the change is an improvement to itself.
-Bryan
How the future will be (Score:2, Insightful)
Very simple... (Score:5, Funny)
This sounds suspiciously like my lovely wife.
The scary part: Thompson cannot explain exactly how the chip works!
I knew it. Male engineer, female chips. Easy explanation.
Soko
(Posting from the basement so said lovely wife doesn't tear off my baaa-aa-allsssss.... YOWWWUUCH!!!!)
Re:Very simple... (Score:2)
She probably beat you down after catching you preview that comment, and then added the "lovely" before every mention of "wife", right?
Score 9, Hilarious/Insightful (Score:2)
Weeelll... not quite right. The program which generates the chip intelligently selects and randomly modifies portions of the existing design (initially generated randomly) based on the performance (if any) of previous iterations. For example, the winner of the first iteration got the door prize for actually having an output. Any output.
For this to match real life, BTW, you need to postulate the pre-existence of FPGA-equivalents - chemicals at least as complex as RNA, although RNA itself would not turn the trick - and some kind of teleology to permit selection to operate well in advance of where it would normally kick in, else the critter is quickly crushed by its own genetic burden.
And mine! Perzactly! It's part of The Rules [keyster.com], don'cha know? (-:
Women use non-deterministic logic (Score:2)
Good point, but your example has a flaw (Score:2)
But there is a consistent pair of answers to these questions that is also "correct" according to female logic. Specifically, these answers are "No, Honey, of course not," and "An amount that looks damn sexy, whatever it is." Alternatively, question 2 can be answered with any value w, where w is the average of your actual estimate of the woman's weight and the average weight for a woman of her height, minus 10-15 pounds. However, that method can lead to answers inconsistent with the answer to the first question, and both methods can produce answers that will not satisfy the woman in question, for some reason incomprehensible to male logic.
Incidentally, that's why I don't understand why you said "no no no". I provided a rough model of women's emotional responses in general, and you provided an outline of feminine logic - the two do not contradict each other, they complement each other. My own post provides a rough explanation of observed phenomena and a crude predictive model - yours provided a mathematical model.
Water under the bridge, though. Excellent piece of work - thank you for posting it. And God save us both if our girlfriends ever read this thread.
Don't blame Goedel (Score:2)
Don't. I suspect women's use of the complete system is intuitive in the majority of cases, rather than based on advanced mathematical study. Goedel described the Incompleteness Theorem, but women used it long before he did, I'll bet, and would have kept on using it even if he hadn't.
Bah! Mental Typos! (Score:2)
Genetic Algorithms are not new (Score:5, Informative)
The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of. It is tempting to blame this on lack of computing power, but I am not sure that is the real reason. Either way, the possibility of automated design is very exciting indeed and I hope more people find ways to apply it in the real world.
Re:Genetic Algorithms are not new (Score:2)
> The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of. It is tempting to blame this on lack of computing power, but I am not sure that is the real reason. Either way, the possibility of automated design is very exciting indeed and I hope more people find ways to apply it in the real world.
I don't remember the details, but wasn't one of the
I agree that there doesn't seem to be much by way of practical applications for GA, but the technology has come a long way, and the CPU time that can be thrown at a run is growing according to Moore's Law, so I would not be surprised to start seeing some noteworthy results coming out of the field within the next decade or so. I do know of cases where people have tried to use it for industrial optimization problems, but I don't know whether it has been adopted as a mainstream technology for that sort of thing.
Re:Genetic Algorithms are not new (Score:4, Informative)
The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of.
They are good for optimizing functions of very many variables. Like, for instance, the weights for a spam-scoring system, to maximize the score over a sample of junk mails, and minimize it on a sample of not spam mails.
I.e., you have a rule that matches the word "viagra" and a rule that matches the word "money" in a subject; obviously the first one should count more (unless you talk about viagra a lot in your emails), but how much? Imagine you have hundreds of rules you came up with; a GA can optimize the weight of each rule, if you have a good selection of emails to let it evolve over.
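A minimal sketch of that idea (the rule columns, messages, and numbers are all made up for illustration; a real system would also cap or penalize runaway weights):

    import random

    # Toy data: each message is a list of 0/1 rule hits plus a spam label.
    # Columns: hits for the "viagra" rule, the "money" rule, and a third rule.
    MESSAGES = [
        ([1, 0, 1], True),
        ([0, 1, 0], False),
        ([1, 1, 0], True),
        ([0, 0, 1], False),
    ]

    def fitness(weights):
        # Maximize the total score over spam, minimize it over legitimate mail.
        total = 0.0
        for hits, is_spam in MESSAGES:
            score = sum(w * h for w, h in zip(weights, hits))
            total += score if is_spam else -score
        return total

    def mutate(weights):
        return [w + random.gauss(0, 0.5) for w in weights]

    population = [[random.uniform(0, 5) for _ in range(3)] for _ in range(20)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        elite = population[:5]                 # keep the best weight vectors
        population = elite + [mutate(random.choice(elite)) for _ in range(15)]

    print("evolved weights:", [round(w, 2) for w in population[0]])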
Using GAs to filter spam (Score:2)
Re:Genetic Algorithms are not new (Score:3, Informative)
Re: (Score:2)
Re:Is that enough? (Score:2)
Re:Genetic Algorithms are not new (Score:2)
Cool study using genetic algorithms (Score:4, Informative)
The problem is, in order to measure the "fitness" of a sorting network, you should give it all possible sequences of numbers and see how many it sorts correctly (you also give a fitness bonus to smaller networks). It turns out you just need to give it all possible sequences of 0's and 1's to check whether it will sort any sequence of numbers correctly (the zero-one principle), so Hillis would have to test each 16-input network on 2^16 = 65,536 inputs to see how well it did.
That would take too long, so he wanted to only test the networks on a subset of possible inputs. The clever thing was he made the particular subset used also evolve, as a kind of "parasite" on the sorting networks. The parasites were "rewarded" (had higher fitness) when they broke sorting networks. That way, the system would keep around precisely those test cases which could break the current population of sorting networks, so it was always focusing the testing exactly on the trouble cases, and ignoring the ones "known" to work, and thus saving a ton of time/effort.
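Here's a rough illustration of that fitness test using the zero-one principle (my own sketch, not Hillis's code; the size bonus is left out):

    from itertools import product

    def apply_network(network, values):
        # A network is a list of (i, j) comparator pairs: swap if out of order.
        v = list(values)
        for i, j in network:
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        return v

    def fitness(network, n_inputs):
        # Zero-one principle: a network sorts all sequences iff it sorts
        # every 0/1 vector -- 2^n cases instead of unboundedly many.
        return sum(
            apply_network(network, bits) == sorted(bits)
            for bits in product((0, 1), repeat=n_inputs)
        )

    # A correct 3-input network passes all 8 zero-one cases.
    print(fitness([(0, 1), (1, 2), (0, 1)], 3))  # -> 8

The co-evolving parasites would then replace the full enumeration with an evolving population of hard test vectors.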
Hillis evolved a sorting network which used 61 comparison-swaps, just 1 away from the best man-made one known. I was at Thinking Machines (Hillis' company) for a while, and fiddled around with this myself a bit, thinking that a bit more simulation must beat the record, but I never did beat it.
Hillis had a paper called "Co-Evolving Parasites Improve Simulated Evolution as an Optimization Procedure", published in Artificial Life II (Langton et al., editors), Addison-Wesley, 1991, pp. 313-324. A note in my database indicates it may also have been published in the journal Physica D, vol. 42, pp. 228-234.
A search also just turned up Hugues Juille [brandeis.edu], who has apparently done some more work in this area. [brandeis.edu] He evolved a 60 comparison sorting network for 16 inputs, tying the record. And he broke a (25-year-old) record for 13-input sorting networks, doing it in 45 comparison/swaps.
Re:Genetic Algorithms are not new (Score:2)
Imagine that there was a super-fast and highly intelligent structure in this chip that was thrown out because its pathways took too much energy and caused too much heat, while another, less spectacular construction happened to survive because it did half the work at half the efficiency yet cost less energy and so produced less heat. So you might come up with a chip that is an evolutionary dead end and way less efficient; sure, it can hear a tone, but more than that may not be possible.
Re:Genetic Algorithms are not new (Score:1)
That's why you build virtual snipers into your virtual ecology who take joy in murdering the Timmys of your simulation.
Or set up a Doom type interface to it and do the dirty work yourself!
Re:Genetic Algorithms are not new (Score:2)
So really the "organisms" compete for the attention of the observer.
tone differentiation is only one allele
What surprises me is that they got much progress at all, and with only 4000 generations. In biology and most computer science, mutations are generally bad.
Oh, btw: Darwin didn't use the word "evolve" to describe his hypothesis, but the last word in "The Origin of Species" is "evolved".
Re:Genetic Algorithms are not new (Score:2, Informative)
Re:Genetic Algorithms are not new (Score:2)
Re:Genetic Algorithms are not new (Score:3, Interesting)
Also, a lot of what is being discussed sounds like neural networks as well: gates interlinking and "learning". I found it interesting during my MSc, and the field shows some promise if it can get past the issue discussed here: how do you trust something you can't explain?
:-) (Score:2)
Interestingly, one of the reasons more people don't is that there are often criteria that need to be taken into account which people would rather not state explicitly (which they would need to do for a GA), such as the fact that more senior lecturers don't like supervising exams early in the morning.
More of a social than a technical problem I suppose.
Exciting times ahead for 'AI' (Score:4, Interesting)
Isn't this how a regular brain works? Or, at least close. I recall being taught something called the 80/20 rule, that applies to almost anything and everything. Doesn't 20% of the brain do 80% of the work?
This article is pretty interesting though. I'm not sure how much is true (newsobserver is hardly the New Scientist) but these devices look like they could be the way of the future.
Some people will argue that it's merely a computer program running in these chips and that 'real' creatures are actually 'conscious'. How do we know that? How do we know that the mere task of processing is not 'consciousness'?
On the other side, how do we know that animals are self-aware? When I watch ants, I could just as easily be watching SimAnt, for all the intelligence they seem to have. A computer could do pretty much everything as spontaneously and as accurately as an ant could.
I think as the years pass by, we'll see chips pushing the envelope. Soon we'll have chips that can act in *exactly* the same way as a cat or dog brain. Then what will be the difference between the 'consciousness' of that chip and the consciousness of an average dog? I say, none.
I don't like to call this Artificial Intelligence. It's real intelligence. Who knows that some sort of 'god' didn't just program us using their own form of electronics based on carbon rather than silicon?
One day we'll reach human level. I can't wait.
Re:Exciting times ahead for 'AI' (Score:1)
I'm just curious, am I conscious?
It can never be *exactly* the same way as a cat or dog brain works... we don't know how it works, in fact we're FAR from knowing how it works.
:) good argument
Re:Exciting times ahead for 'AI' (Score:2, Interesting)
Pah, that's one of those all-unifying sentences I shudder at whenever I see one, normally used by fanatics. I forget which scientist said "It seems every new theory is first far overstated, before it finds its right place in science" - especially true back when the theory of evolution was new and was applied to practically everything, including plenty of places where it did not fit at all.
For an AI, our calculation capability is still far from being able to "simulate" a human brain. The human brain has 20 giga-neurons, with 2000-5000 synapses per neuron (the basic calculation unit), resulting in a capacity of about 10 tera-"bytes". It is frightening that today, in 2001, this is not so far off: theoretically we would already have enough storage capacity to "store" a human brain on hard disk. But in terms of calculation capability we are, luckily, still years away, since all the neurons in our brain can work in parallel. We have outrageous serial calculation capability, but the brain's capability for parallel computing is still enormous.
To get near to human brains, the von Neumann machines we use today, with a central CPU, are the wrong way: although in some key respects they can already match the human brain, they will not match its capability of doing a lot of calculations at the same time. The way to match it lies not in the CPU but in FPGAs, and here we're still light-years away. How many cells ("neurons") does a typical high-performance LCA have today? 10,000 maybe? Well, that is still far, far away from the 20,000,000,000 I have in my head.
Re:Exciting times ahead for 'AI' (Score:2)
But the human brain does lots of things that an AI wouldn't need to do, like maintaining blood pressure and muscle tension.
Also, electronic switching speeds are considerably higher than biological switching speeds, so to some considerable extent speed can be traded for quantity.
Additionally, the robot wouldn't need to be a general-purpose intelligence. Certainly not to start with. Something as smart as 10 bees, and with electronic switching speed, would probably be smart enough to drive a car, read a map, and accept a destination. (Taxi, anyone?) There are probably lots of other jobs. The garbage man problem (how does one create an automated garbage man) is more mechanical than AI once the driving problem is solved (though you would need to use the official trash bins, or your garbage wouldn't be collected).
Re:Exciting times ahead for 'AI' (Score:2)
To bring in another clarifying example: the brain works in some ways like a genome. There are thousands of genes that we have no idea what they do. One gene may produce a protein that inhibits a second gene, whose product in turn inhibits the first gene's production. Throw a thousand genes into the mix, and you get a mass of confusion. Understanding what a gene does specifically in the large picture is a very difficult prospect. In this respect I'm not surprised that he does not know exactly how it works.
Re:Exciting times ahead for 'AI' (Score:1)
i'm willing to stake a prediction point on fpga (or *physically based*) GAs being a superb analogue to genetic structure, physical structure, etc.
language, by the way, is a form of pattern matching, as is every abstraction.
Re:Exciting times ahead for 'AI' (Score:2, Informative)
Here's a series of links to read up on this:
http://www.urbanlegends.com/science/10_percent_
http://pub3.ezboard.com/fxprojectforumfrm7.show
And finally, from the site for urban-legend debunking, Snopes:
http://www.snopes2.com/spoons/fracture/10percnt
No AI cats in our future, sorry (Score:2)
My cat routinely behaves in ways that suggest a capacity for comparing past events to present and future ones, an ability to plan, emotional states ranging from "fear" to "anger" and "sense of fun", and other cognitive abilities that are well beyond those of an ant.
Another thing my cat can do that would be very hard to program is form extremely complex associations. For example, she has learned that when I walk towards the food-closet door at breakfast-time or dinnertime, she is about to be fed. She acts on this knowledge by walking over to her food dish and meowing for food - a fairly unambiguous action.
Thing is, she also knows that if I start walking towards the closet door during the middle of the day and saying "Kibble!", this is a ruse to get her into the kitty carrier, and from there to the vet's office. Is that amazing or what! From just two or so incidents every year, my cat has learned to tell when I'm lying to her.
Yes, I'm a very proud cat owner. My point is, these behaviours would all be much harder to model than those of an ant.
Re:No AI cats in our future, sorry (Score:2)
My cat routinely behaves in ways that suggest a capacity for comparing past events to present and future ones, an ability to plan, emotional states ranging from "fear" to "anger" and "sense of fun", and other cognitive abilities that are well beyond those of an ant.
There's no doubt that a cat's cognitive capacities are much greater than an ant's, but isn't emotion itself an instinctual reaction to stimulus? And one's control over emotions (something humans exhibit) could be stated as a reaction to internally generated stimuli, from paths in the brain that were previously stimulated in association with outside stimuli (e.g. being spanked for throwing a tantrum causes that emotional reaction to be associated with the next potential tantrum, triggering an avoidant response, and the tantrum is quelled). Are you suggesting that such recursive chains of stimuli somehow transcend the physical matter they reside in, that there is an external source of "free will" that cannot be modelled?
Penrose indeed has such a theory, but from what I've read, it seems to boil down to "quantum mechanics is hard, so we're special". I'm not suggesting you're in the "ineffable quality of human intelligence" camp, I just felt like seeing whether you meant for "not in our future" to mean "not in our lifetime" or "not ever".
You're right, I was unclear (Score:2)
playing god (Score:4, Interesting)
Imagine if you advance this technology to the point where you can dump a bunch of this stuff on a planet, wait a few million years, and come back to see what happens....
Re:playing god (Score:1)
We could simply be a bunch of 'technology' developed by another race (superior to us or not) and dumped on this planet.
If we did the same, we'd become Gods ourselves.
Perhaps that's how the universe lives? Race creates other race, dumps it off somewhere. That new race creates another race, dumps it off somewhere... ad nauseam.
After all, if we knew that the Earth was going to blow up, perhaps we'd send 'robotic life' to a planet that we couldn't inhabit.. but would carry on our legacy. Who knows that we're not the result of a race that died many eons ago.
All crazy speculation of course, but these possibilities now seem more realistic than ever before.
It Was New Scientist (Score:1, Informative)
From their story, I got the idea that it would be hard to use the identical method to design circuits for mass production, because the designs that evolve may be dependent on any slight imperfections and/or departures from published specs of the components that are wired together in the model as it evolves. They built a second copy with parts out of the same stockroom, and it didn't work.
Genetic algorithms aren't new. (Score:2, Interesting)
For example, if the transputer this guy was using generated FPGA configurations which were then automatically translated into some Forth dialect, then his new processors could be refactored into other, more von Neumann-like equipment more easily.
A few months ago, when I was first designing my stockbot, I faced similar problems trying to work with neural networks and other correlation engines. The process time was slow, and the strategies they used were not easily portable. In the end I went with a stack-based language and randomly generated code that examines historical prices. It has worked out a LOT better in the long run.
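A toy version of that approach, i.e. a tiny stack machine interpreting randomly generated price-examining programs (my own sketch with an invented instruction set, not the parent's actual stockbot):

    import random

    OPS = ["price", "lag5", "sub", "dup", "neg"]  # tiny made-up instruction set

    def run(program, prices):
        # Interpret a linear program against a price history; the value left
        # on top of the stack is the "signal" (say, positive = buy).
        stack = []
        for op in program:
            try:
                if op == "price":
                    stack.append(prices[-1])    # latest price
                elif op == "lag5":
                    stack.append(prices[-5])    # price 5 ticks ago
                elif op == "sub":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a - b)
                elif op == "dup":
                    stack.append(stack[-1])
                elif op == "neg":
                    stack.append(-stack.pop())
            except IndexError:
                return 0.0                      # malformed programs score neutral
        return stack[-1] if stack else 0.0

    # Random programs like these would then be bred on backtest profit.
    program = [random.choice(OPS) for _ in range(6)]
    print(program, run(program, [10.0, 11.0, 10.5, 12.0, 12.5]))

One nice property, as the parent notes, is portability: the evolved strategy is just a list of opcodes.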
Could the machines hide their intelligence? (Score:1)
Re: Could the machines hide their intelligence? (Score:1)
Time to pick up the remote and turn off the B-Movie "Maximum Overdrive".
Re: Could the machines hide their intelligence? (Score:1)
Could the machines hide their intelligence? Sure, why not? My programs hide their basic correctness all the time!
FPGAs and Starbridge Systems, Inc (Score:1, Insightful)
Now, with the mention in this article (even though it's dated 4/01), maybe it's time for an (in)famous
Re:FPGAs and Starbridge Systems, Inc (Score:2)
Reconfigurable FPGAs would be better because they get around the problem where the message was encrypted using something other than DES.
Curveball way out to left field (Score:1, Interesting)
Re:Curveball way out to left field (Score:1)
> Could this be the first step into determining or simulating where the source of life came from, or could this lead to the destruction of it? (insert your favorite Sci-Fi scenario here)
My favorite Sci-Fi scenario involves me and a bunch of robo-babes from Sexworld, but I don't see what that has to do with your musings.
Re:Curveball way out to left field (Score:4, Funny)
Stability (Score:5, Insightful)
Re:Stability (Score:1)
Not new... Even featured. (Score:2, Interesting)
hype! hype! (Score:2)
1. This will not lead to intelligent machines that will try to make you into toast. This is not even close to the sort of complexity of evolving bacteria.
2. The reason he doesn't understand how it's working is that the design is using the interference generated in one part of the chip someplace else. Conventional designs try to eliminate this because it's so complex to predict. This is not a matter of "some bizarre magic is happening that we don't understand and it will probably turn us all into pools of gravy."
tripe! tripe! (Score:4, Interesting)
It is, in fact, "some bizarre magic," so to speak - not because we do not understand it, but because it requires considerable algorithmic search to find such an efficient (quick, small, and effective) state through which the machine can produce its effect. It's magic in the same sense that a chess-playing program is magic.
The insight that you fail to grasp is that with this technique, we can take advantage of those variables that you say we should eliminate, making designs better. This allows for the possibility of a much wider range of functionality for chips than we currently have for them.
As far as complexity goes, what kind of bacteria are you thinking of that it's so far from? The techniques used in neural networks are almost all taken straight from biology. The major simplification is the lack of frequency encoding. That's pretty much it; everything else works pretty much the same. Perhaps you're under the impression that the "evolution" of bacteria changes their basic behavior. That is extremely seldom the case; usually changes in bacteria are no more drastic than the cosmetic changes that occur in a "mutating" FPGA design.
So... at least we can harness the complexity of bacteria to do the work of genius hardware designers, using search techniques to produce better designs.
One thing further, though: if nature is any indication, it is extremely difficult to increase the level of complexity of an organism (or in this case, of a network). I would agree that "intelligent" machines that make you into toast are a long way off, because we can't make evolving machines - only learning ones, even if they do use genetic algorithms to do it (which is essentially what viruses and bacteria do regularly, I might add).
SkyNet. (Score:2, Insightful)
'Nuff said.
Wow (Score:2, Informative)
PRAISE ALMIGHTY CELERON 600 WORKSTATION UNDER MY DESK, I AM NOT WORTHY.
Similiar work a while back... (Score:1)
Why not software simulation? (Score:1)
This kind of experiment would be relatively easy to implement on a Beowulf cluster by simulating one or more chips on every node.
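In sketch form that's just parallel fitness evaluation; below, a process pool stands in for the cluster nodes, and simulate_chip is a made-up placeholder rather than a real FPGA simulator:

    import random
    from multiprocessing import Pool

    def simulate_chip(genome):
        # Placeholder for a software simulation of one evolved chip; on a
        # Beowulf cluster each node would run a batch of these instead.
        return -sum((g - 0.5) ** 2 for g in genome)  # dummy fitness score

    if __name__ == "__main__":
        population = [[random.random() for _ in range(64)] for _ in range(32)]
        with Pool() as pool:
            # One candidate chip per worker, all evaluated in parallel.
            scores = pool.map(simulate_chip, population)
        best_score, best_genome = max(zip(scores, population))
        print("best fitness this generation:", best_score)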
Re:Why not software simulation? (Score:2)
older than old (Score:1)
small: http://safemode.homeip.net/small_fpga.jpg
large: http://safemode.homeip.net/large_fpga.jpg
Old and misinterpreted (Score:5, Informative)
Additionally, the submitter severely misinterpreted what Thompson's system does. He has the FPGA programmer connected via serial or parallel (I'm not sure), and he runs a genetic algorithm on his computer. The fitness function (the component of a GA which evaluates offspring) loads each offspring's genome (each genome in this case codes for different gate settings on the FPGA) into the FPGA; separate data-acquisition equipment supplies input to the FPGA and checks the output, and based on that supplies a fitness value, which the GA uses to breed and kill off children for subsequent generations.
He has *NOT* implemented a GA inside a 1998-era FPGA (120,000 gates max or so at the time on a Xilinx, which is what he was using) when he had a perfectly good freaking general-purpose computer sitting right next to it.
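The loop described above looks roughly like this (load_bitstream and measure_response are hypothetical stand-ins for the chip programmer and the data-acquisition gear, not a real Xilinx API; sizes and rates are illustrative):

    import random

    GENOME_BITS = 256  # illustrative; each genome codes gate settings

    def load_bitstream(genome):
        # Stand-in: a real setup pushes the genome to the FPGA over serial/parallel.
        pass

    def measure_response(genome):
        # Stand-in: real fitness came from feeding the test tones to the chip
        # and scoring its output pin. Dummy value here so the sketch runs.
        return sum(genome) / len(genome)

    def fitness(genome):
        load_bitstream(genome)           # the FPGA only ever *runs* candidates;
        return measure_response(genome)  # the GA itself stays on the PC

    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        # Children: copy a parent, flipping each bit with low probability.
        population = parents + [
            [bit ^ (random.random() < 0.01) for bit in random.choice(parents)]
            for _ in range(40)
        ]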
Don't get too scared... but they are damn cool. (Score:5, Interesting)
Here's the fundamental decoder-based GA (a runnable sketch follows the steps below):
* Take an array of N identically long bit strings.
* Write a function, called the fitness function, that considers a single element in the array as a solution to your problem, and rates how good that solution is as a floating point number. Rate every bit string in the population of N.
* Take the M strings with the highest ratings. Create N-M new strings by randomly picking two or more parent strings, randomly picking a spot or two in them, and combining the parts.
* Rinse and repeat until the entire population is identical.
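And the promised sketch of those steps (OneMax, counting 1-bits, stands in for a real decoding fitness function; the generation cap is just a safety net):

    import random

    def ga(fitness, genome_bits, pop_size=50, elite=10, max_gens=1000):
        # Decoder-based GA: plain bit strings, "decoded" by the fitness function.
        population = [[random.randint(0, 1) for _ in range(genome_bits)]
                      for _ in range(pop_size)]
        for _ in range(max_gens):
            if len(set(map(tuple, population))) == 1:   # population identical: done
                break
            population.sort(key=fitness, reverse=True)
            survivors = population[:elite]              # the M highest-rated strings
            children = []
            for _ in range(pop_size - elite):           # create N-M new strings
                a, b = random.sample(survivors, 2)      # pick two parents
                cut = random.randrange(1, genome_bits)  # pick a spot in them
                children.append(a[:cut] + b[cut:])      # combine the two parts
            population = survivors + children
        return max(population, key=fitness)

    print(ga(sum, 32))  # evolves toward the all-ones string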
Their main limitation is that they take a lot of memory: the number of bits in a genome times the population size. Processing time also grows quickly with population size and genome length. The other problem is that they require the problem to have a quantifiable form of measurement - how do you rate an AI as a single number?
The other problem is commonly called the "superman" problem: what happens if, by chance, you get a genome very early in your run that rates very, very high but isn't perfect? Imagine a human walking out of the apes, albeit with only one arm. It'll dominate the population. GAs do not guarantee an optimal solution. For some problems this isn't an issue, or it can be avoided or reduced to a very small probability. For others, it is unacceptable.
That said, you can do some neat shit with them. This screenshot is from a project I did during undergraduate studies at UP [up.edu], geared towards an RTS style of game, automatically generating waypoints between a start and end position. I'll probably clean it up sometime, add a little guy actually walking around the landscape, and stick it in my portfolio. Yay, OpenGL eye candy. [pointofnoreturn.org]
Hoping for slashback (Score:1)
Re:Hoping for slashback (Score:2)
How we know Life, to Change forever (Score:1)
this sort of development poses serious philosophical questions that I don't think our society *can* answer.
What is life? We really don't know. Some say it's some form of intelligence... so are these chips intelligent? Yes... but are they alive?
We really don't know, and quite frankly, we'll never know.
Every explanation leads into a cycle of questions.
This technology is great; however, we don't know how it will be implemented, nor do we know IF it will be implemented. If it ever got advanced enough, we would see INTENSE legislation being thrown back and forth. Chances are, the democratic world will destroy the technology if it is dangerous.
The problem could be others. The others. The other people from some unknown country, pissed off at the world, with their hands on this technology, ready to start another war.
Interesting.
So the future begins here (Score:1)
On the other hand, we have the ability to put chips into humans for tracking medical info and possibly controlling the populace.
I am not worried about the future. I am worried about today.
professional journalism... (Score:1)
"Thompson's chip was doing its work preternaturally well. But how? Out of 100 logic cells he had assigned to the task, only a third seemed to be critical to the circuit's work. In other words, the circuit was more efficient by a huge order of magnitude than a similar circuit designed by humans using known principles."
Last I checked, orders of magnitude were powers of 10, and
I'm excited about this technology, I hope it gets faster, but this kind of coverage isn't what it needs. And I thought that Linux had bad advocates...
Not exactly practical... (Score:5, Interesting)
Another thing that bothers me: how the heck does he know which cells are being used? Last time I checked, the bitstream (programming) files for these chips were extremely proprietary, and nobody (except Xilinx) has the formats for these files. I really want to know how they know how this thing is wired.
Now I should mention, this is pretty cool from an academic standpoint, and it would be interesting if they could produce something that is both stable and useful using these techniques. It's also pretty cool that they could get this to work at all.
Re:Not exactly practical... (Score:2)
Busting the underlying operational model (Score:4, Interesting)
Some of the assumptions of this model are:
For example, we say a certain voltage range is interpreted as a logical 0, and a certain higher voltage range is interpreted as a logical 1.
But the evolutionary algorithm was not constrained in any fashion to make use of this ideal digital model only. It can and will make use of the full available degrees of freedom that the physical system (the FPGA device) offers.
With the result that analog circuits might evolve (which use more values than just 0 and 1), or that we might get electromagnetic signal transport (Thompson reported some spiral structures which might work as electromagnetic waveguides); it might even employ some quantum-mechanical effect that could be explained only by advanced semiconductor physics.
One might say that the approximation process which the evolution algorithm performs started in the domain of digital devices and converged out of that domain into the wider domain of physical devices.
This has a couple of drawbacks:
I wonder what would have happened if the algorithm had a control step after each evolution step which ensured that the next-generation design would operate strictly under the assumptions of a conventional digital device model; in that case the evolution process should converge towards a classical design. Would it still have been something that is hard to understand?
Perhaps in that case it would be easier to stick to software simulation of the design.
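That control step could be a simple filter in the breeding loop. A sketch, assuming we had both a hardware measurement and an idealized digital simulation of each candidate (both are dummy stand-ins here, so agreement is trivially true; a real version would compare measured waveforms against the simulator's prediction):

    import random

    def hardware_output(genome):
        # Stand-in for measuring the physical FPGA's behaviour.
        return genome[:8]

    def digital_sim_output(genome):
        # Stand-in for simulating the design under the ideal 0/1 gate model.
        return genome[:8]

    def conforms_to_digital_model(genome):
        # The proposed control step: keep a design only if the physical
        # device behaves exactly as the digital model predicts.
        return hardware_output(genome) == digital_sim_output(genome)

    def next_generation(parents, size=40, mutation=0.01):
        children = []
        while len(children) < size:
            child = [b ^ (random.random() < mutation)
                     for b in random.choice(parents)]
            if conforms_to_digital_model(child):  # discard "analog magic" designs
                children.append(child)
        return children

    parents = [[random.randint(0, 1) for _ in range(64)] for _ in range(10)]
    print(len(next_generation(parents)))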
Re:Busting the underlying operational model (Score:2)
If this is a study, rather than an attempt at a product, perhaps it is more important to examine the unusual features, and to try to understand them. They might be quite important.
Re:Not exactly practical... (Score:2, Informative)
Like this paper [susx.ac.uk] which details an experiment using an external clock and a wide variation in temperatures to evolve the same sort of circuit that Adrian evolved in his thesis paper.
And a complete list of his publications can be found here [susx.ac.uk].
If you've bothered to read any of his work, you'd quickly realize that Adrian is interested in how evolution can use certain properties of the physical substrate in these chips to its advantage. It's not about seeing whether evolutionary-type strategies can evolve something a human could build, but about how they can build things no human could imagine building.
DISCLAIMER: I am currently a Master's student at the University of Sussex, and had Adrian as a lecturer this past semester. However, I am in no way involved in his research, my interests lie in the software side of genetic algorithms.
Evolvable Hardware Not New (Score:2, Informative)
Where do you get them? (Score:2)
David
Re:Where do you get them? (Score:1)
GIGO (Score:1)
x inputs with values 0 or 1 (or neither) == 2^x (++) possibles.
Any AI with an FPGA ends up with a limited number of outputs:
y outputs with values 0 or 1 (or unknown) == 2^y (++) possibles
Inputs-to-Outputs are linked (joined, coupled, etc) by the logic between them (a 'black-box' so-to-speak). An 'evolve' can never happen without linking output to input (feedback).
So, all AI inputs/outputs are constrained by their outputs and their sampling periods.
So for some boxes, an input of (properly encoded)"what is the meaning of the universe?" will return "43". After tuning, these boxes may produce "4f*@#%(#@" or perhaps "forth", "For Linux", "forsooth, BG", "for you use..." or "for more useless answers, call your ISP, then ask BG; if in doubt, ask your mother".
This box apparently returned a tone.
Hmmm...
In the Christmas/New Year tradition...
this is true intelligence.
Nothing new (Score:1)
Magic? (Score:2)
I only have one thing to say:
Magic
For those unfamiliar with the story. [tuxedo.org]
Skrodes? (Score:1)
Ethical considerations as suggested by STNG (Score:1)
Personally, I can't wait for more and more of these systems to be designed, and to see how they act and react. If the statement is accurate that only a third of the circuits of a human-designed chip were used, then this is a potentially incredible resource. Drawing again from my sci-fi background: if you look at Isaac Asimov's robot books, you will find a short story about an AI Brain that was used to create the first hyperdrive ship. While only science fiction, a computer has the advantage of being able to look at all possible known rules, test its environment, and summarily report back on a problem that it is given. Seeing what humans may not be able to consider, because we just don't have the perspective, is what makes these systems really valuable. In no time, computers like these evolving ones will be giving scientists new puzzles to solve, and a challenged scientist is a happy one (most of the time
Old news.... (Score:2, Informative)
He was using FPGA-type chips, and started with a few thousand randomly designed circuits, then merged the most successful ones. The circuit was able to differentiate between a 1 Hz and a 1 kHz pulse on one of its inputs.
There was one case where there was a single AND gate tucked away in a little corner somewhere, with its inputs tied only to its output - effectively useless. But the circuit failed to work if it was removed.
I wish I could remember his name
Applications (Score:5, Informative)
Interestingly, if I remember right, it was all machine code, ultimately a series of conditionals about which stick movements to make in response to certain patterns of instrument readings. They started the evolution by "rewarding" the code which just kept the plane in the air the longest... which, at first, was like 5 seconds. Within a few days of cranking, the code could achieve level flight with ease, and a few weeks later, with more parameters added, it was dogfighting mutated versions of itself. Then they brought in real RAF pilots and the thing just kept learning.
If I remember right, the article ended by saying that by now the AI, which runs totally incomprehensible code, wins most of the dogfights against human pilots, and uses some very interesting maneuvers it was never taught (it wasn't taught anything). The RAF is impressed, and is thinking about a class of dogfighting planes that fly on AI. These things wouldn't mind doing turns at over 10 G's. My guess is that I read this three or four years ago. Maybe the subsequent developments of the program got classified, or maybe it just fizzled, but it sure seems like a promising avenue of research.
Being who I am, I don't get thrilled about the prospects of fancy new AI killing machines, but on the other hand, I want these designs to penetrate video game AI soon! For example, I now play Civ3, which has pretty good, but not great, AI. What would prevent developers from taking that AI, defining a "mutation function" by which certain parameters in it can change randomly, and then playing different mutations against each other millions of times on a supercomputer? Or, even better, outsourcing the whole number-crunching part to a project like seti@home, where our machines do the crunching. Can you imagine an AI war between the best routines from Team Slashdot and Team Anand? Sure it's frivolous, but waay more fun to watch than brute-force encryption cracking.
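The mutate-and-tournament loop is only a few lines, for what it's worth. A sketch with invented parameter names and a dummy match function standing in for the actual game engine:

    import random

    # Hypothetical AI tuning knobs for a strategy game; names are made up.
    DEFAULTS = {"aggression": 0.5, "expansion": 0.5, "tech_focus": 0.5}

    def mutate(params, rate=0.2):
        # The "mutation function": jiggle each parameter with some probability.
        return {k: min(1.0, max(0.0, v + random.gauss(0, 0.1)))
                   if random.random() < rate else v
                for k, v in params.items()}

    def play_match(a, b):
        # Stand-in for running the game with two AI configs; here the more
        # aggressive config simply wins, just to keep the sketch runnable.
        return a if a["aggression"] >= b["aggression"] else b

    population = [mutate(DEFAULTS, rate=1.0) for _ in range(16)]
    for _ in range(100):
        a, b = random.sample(population, 2)
        winner = play_match(a, b)
        population.remove(b if winner is a else a)  # loser dies...
        population.append(mutate(winner))           # ...winner breeds a mutant

    print(max(population, key=lambda p: p["aggression"]))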
Great! We'll let the machines do the fighting... (Score:2)
Re:Applications: Ugh! (Score:2)
In fact, quite a bit less than intelligent. Does anyone really expect that this thing wouldn't be adapted to other applications? And evolved for them, of course. But the original layer would persist. Inevitably. Otherwise one would start from scratch (a much better idea!).
If one wants to do this, then start with an AI pilot. Perhaps for crop dusters. Evolve from there. And let the fighter be a spur off of that bush.
An AI pilot is probably a good choice. The environment is relatively simple, and most of the information is already instrumented. (Well, not on crop dusters, but the techniques are there.) And for crop dusters one could even have a square of markers (say microwave frequency corner reflectors, or even transmitters) to mark the edges of the area to be dusted. I don't know that the crop duster would pay much, but it's a much safer application. It's simple. And it's a place to grow from.
Resources on Evolutionary Computing (Score:5, Insightful)
I've been evolving algorithms for a long time now, using finite state machines (FSMs), which can be easily moved across architectures and programming languages. Quite often, an FSM evolves to exhibit surprising behavior -- and given the complexity of the machines, it is impractical to understand why the FSM acts as it does.
Note that I said "impractical" -- given time, I could follow the FSM's logic and discern its "thinking" (and I have done so with simpler machines).
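For the curious, a minimal picture of what an evolvable FSM genome can look like (my illustration, not the poster's actual code). Because the genome is just a transition table, it ports trivially across languages and architectures:

    import random

    N_STATES, N_INPUTS = 4, 2

    def random_fsm():
        # Genome: for every (state, input) pair, a next state and an output bit.
        return {(s, i): (random.randrange(N_STATES), random.randint(0, 1))
                for s in range(N_STATES) for i in range(N_INPUTS)}

    def run_fsm(fsm, inputs):
        # Drive the machine from state 0 and collect its output symbols.
        state, outputs = 0, []
        for symbol in inputs:
            state, out = fsm[(state, symbol)]
            outputs.append(out)
        return outputs

    def mutate(fsm):
        # Rewire a single randomly chosen transition.
        child = dict(fsm)
        key = random.choice(list(child))
        child[key] = (random.randrange(N_STATES), random.randint(0, 1))
        return child

    machine = random_fsm()
    print(run_fsm(machine, [0, 1, 1, 0, 1]))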
If you want real, concrete information about genetic algorithms and artificial life, I suggest visiting ALife.org [alife.org] or the U.S. Navy's GA Archive [navy.mil].
Shameless plug: For five years, I've been developing a free (no ads) web site, Complexity Central [coyotegulch.com], devoted to evolutionary algorithms, artificial life, and emergent behavior. I've posted several Java applets that demonstrate genetic algorithms, cellular automata, flocking behavior, and related subjects.
This is part of my Coyote Gulch [coyotegulch.com] web site, which contains lots of articles, web links, bibliographies, and free code in C++, Java, and Fortran(!).
Re:Emergent behaviour (Score:2)
I normally don't reply to Anonymous Cowards, since they aren't very credible... ;} And if you'd followed the links I provided, you'd have found plenty of citations and web links to "credible" sources of information.
However, in this case, I'll make an exception.
Check out:
Complexity International [csu.edu.au] (a refereed journal)
Santa Fe Institute [santafe.edu] (assoc. with Los Alamos Nat. Labs)
CiteSeer ResearchIndex of Scientific Papers [nec.com]
Re:Emergent behaviour: nit pick (Score:2)
If you think that is scary..... (Score:2)
Isn't the whole point of computer science and mathematics learning things like how and why, so we can define and then use control?
Of all the possibilities for why Thompson cannot explain how the chip works, could marketing (investments), NDAs, lack of self-reflection (he doesn't know what he did), etc. have anything at all to do with it?
Gee, I plowed this field, put down a bunch of seeds, watered it and I don't know how, but this crop grew.
AI - nothing is naturally that stupid!
Now that's SCARY!!!! Is Thompson an AI?
The Really Big FPGA and Real Humans! (Score:2)
Isn't there like a +10 mode for self-awareness?
Trial and error design (Score:2)
The trouble with this sort of thing is that you get circuits that only work for a specific set of components. Copies require different tuning. This is partly why old TV sets had so many screwdriver adjustments in the back. Back when resistors were rated +-20%, capacitors were rated -40/+100%, and keeping the tube count down was crucial, it was hard to design for repeatable production. Today, we have tighter tolerances and big transistor budgets, so we can use much more conservative designs that work every time.
So this is a neat hack, but not a profound result.
Why is this not applied all the time ? (Score:2)
Winton
Re:help me (Score:1)
> This morning while I was eating breakfast and watching TV, I had a vision. I normally don't have visions and I'm not crazy, okay.
I suspect you sprinkled the wrong white powder on your cereal.
Re:help me (Score:1)
Re:help me (Score:1)
Re:help me (Score:1)
Re:Sheesh...another duplicate (Score:2)