Evolutionary Computing Via FPGAs
fm6 writes "There's this computer scientist named Adrian Thompson who's into what he calls 'soft computing'. He takes FPGAs and programs them to 'evolve', Darwin-style. The chip modifies its own logic randomly. Changes that improve the chip's ability to do some task are kept; others are discarded. He has actually succeeded in producing a chip that recognizes a tone. The scary part: Thompson cannot explain exactly how the chip works! Article here."
Genetic Algorithms are not new (Score:5, Informative)
The curious thing is that despite GAs having been widely researched for over 20 years, they seem to have found few practical applications that I am aware of. It is tempting to blame this on a lack of computing power, but I am not sure that is the real reason. Either way, the possibility of automated design is very exciting indeed, and I hope more people find ways to apply it in the real world.
Re:Aged... (Score:2, Informative)
It Was New Scientist (Score:1, Informative)
From their story, I got the idea that it would be hard to use the identical method to design circuits for mass production, because the designs that evolve may depend on slight imperfections in, or departures from the published specs of, the components that are wired together as the model evolves. They built a second copy with parts from the same stockroom, and it didn't work.
Wow (Score:2, Informative)
PRAISE ALMIGHTY CELERON 600 WORKSTATION UNDER MY DESK, I AM NOT WORTHY.
Old and misinterpreted (Score:5, Informative)
Additionally, the submitter severely misinterpreted what Thompson's system does. The FPGA programmer is connected to a computer via a serial or parallel link (I'm not sure which), and the genetic algorithm runs on that computer. The fitness function (the component of a GA which evaluates offspring) loads each offspring's genome into the FPGA (each genome in this case codes for different gate settings on the chip). Separate data-acquisition equipment supplies input to the FPGA, checks the output, and on that basis assigns a fitness value, which the GA uses to breed and kill off children for subsequent generations.
He has *NOT* implemented a GA inside a 1998-era FPGA (around 120,000 gates max at the time on a Xilinx, which is what he was using) when he had a perfectly good freaking general-purpose computer sitting right next to it.
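The division of labor described above (GA on the host machine, with the FPGA only evaluating candidates) can be sketched in a few lines of Python. This is a hypothetical toy, not Thompson's code: the hardware load/measure steps are stubbed out, and the genome length, population size, and toy fitness target are invented for illustration.

```python
import random

GENOME_LEN = 64          # stand-in for the FPGA's configuration bits
POP_SIZE = 20
GENERATIONS = 30

def load_into_fpga(genome):
    # In the real setup this would program the chip over the
    # serial/parallel link; here it is a stub.
    pass

def measure_fitness(genome):
    # Stand-in for the data-acquisition step: reward genomes whose
    # bits match an arbitrary target pattern (a toy objective).
    target = [i % 2 for i in range(GENOME_LEN)]
    load_into_fpga(genome)
    return sum(1 for g, t in zip(genome, target) if g == t)

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Rank by measured fitness, keep the fittest half, breed the rest.
    scored = sorted(pop, key=measure_fitness, reverse=True)
    parents = scored[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=measure_fitness)
print(measure_fitness(best), "of", GENOME_LEN)
```

The point is that only `load_into_fpga` and the measurement step ever touch hardware; everything else is ordinary software on the host.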
Re:Genetic Algorithms are not new (Score:4, Informative)
The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of.
They are good for optimizing functions of very many variables: for instance, the weights of a spam-scoring system, tuned to maximize the score over a sample of junk mail and minimize it over a sample of legitimate mail.
I.e., you have a rule that matches the word "viagra" and a rule that matches the word "money" in a subject line. Obviously the first one should count more (unless you talk about viagra a lot in your email), but how much more? Imagine you have hundreds of rules you came up with; a GA can optimize the weight of each rule, provided you have a good selection of emails to let it evolve over.
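As a concrete (and entirely made-up) illustration of that rule-weighting idea, here is a minimal GA in Python. The rules, the four-message corpus, and the score threshold are toy assumptions, not any real spam filter's scheme:

```python
import random

# Hypothetical keyword rules and a tiny labeled corpus (True = spam).
RULES = ["viagra", "money", "meeting"]
EMAILS = [("cheap viagra money", True),
          ("money for viagra", True),
          ("meeting about money", False),
          ("project meeting notes", False)]

def score(weights, text):
    # Total weight of all rules whose keyword appears in the message.
    return sum(w for rule, w in zip(RULES, weights) if rule in text)

def fitness(weights, threshold=1.0):
    # Count correctly classified emails: score >= threshold means "spam".
    return sum(1 for text, is_spam in EMAILS
               if (score(weights, text) >= threshold) == is_spam)

def mutate(weights, sigma=0.3):
    return [w + random.gauss(0, sigma) for w in weights]

random.seed(1)
pop = [[random.uniform(-1, 1) for _ in RULES] for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

best = max(pop, key=fitness)
print(fitness(best), [round(w, 2) for w in best])
```

On this corpus the GA should drive "viagra" to a high weight and "meeting" to a low or negative one, exactly the kind of per-rule tuning described above.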
Evolvable Hardware Not New (Score:2, Informative)
Re:Genetic Algorithms are not new (Score:3, Informative)
Re:Genetic Algorithms are not new (Score:2, Informative)
Re:Aged... (Score:4, Informative)
[1] Hugo de Garis. Evolvable Hardware: Principles and Practice. http://www.hip.atr.co.jp/~degaris/CACM-EHard.html (link is not available today)
[2] Adrian Thompson. Evolving Electronic Robot Controllers that Exploit Hardware Resources. CSRP 368 In: Advances in Artificial Life, Proceedings of the 3rd European Conference on Artificial Life (ECAL95) pp640-656. Springer-Verlag Lecture Notes in Artificial Intelligence number 929, 1995.
[3] Adrian Thompson. Evolving Fault Tolerant Systems. CSRP 385. In: Proceedings of The First IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA'95), pp524-529, IEE Conference Publication No. 414, 1995.
[4] Adrian Thompson. Silicon Evolution. In: Proceedings of Genetic Programming 1996 (GP96), J.R. Koza et al. (Eds), pages 444-452, MIT Press 1996.
[5] Inman Harvey and Adrian Thompson. Through the Labyrinth Evolution Finds a Way: A Silicon Ridge. In: Proceedings of The First International Conference on Evolvable Systems: From Biology to Hardware (ICES96). Higuchi, T. and Iwata, M. (eds.), pp406-422, Springer-Verlag LNCS 1259, 1997.
[6] Adrian Thompson. An evolved circuit, intrinsic in silicon, entwined with physics. In: Proceedings of The First International Conference on Evolvable Systems: from Biology to Hardware (ICES96). Higuchi, T. and Iwata, M. (eds.), 390-405, Springer Verlag LNCS 1259, 1997.
[7] Adrian Thompson. Artificial Evolution in the Physical World. In: Evolutionary Robotics: From Intelligent Robots to Artificial Life (ER'97), T. Gomi (Ed.), pages 101-125. AAI Books, 1997.
[8] Adrian Thompson. On the Automatic Design of Robust Electronics Through Artificial Evolution. In: Proc. 2nd Int. Conf. on Evolvable Systems: From Biology to Hardware (ICES98), M. Sipper, D. Mange & A. Pérez-Uribe (Eds.), pp13-24, Springer-Verlag, 1998.
Old news.... (Score:2, Informative)
He was using FPGA-type chips, and started with a few thousand randomly-designed circuits, then merged the most successful ones. The result was able to differentiate between a 1 Hz and a 1 kHz pulse on one of its inputs.
There was one case where there was a single AND gate tucked away in a little corner somewhere, with its inputs tied only to its output - effectively useless. But the circuit failed to work if it was removed.
I wish I could remember his name
Re:Not exactly practical... (Score:2, Informative)
Like this paper [susx.ac.uk] which details an experiment using an external clock and a wide variation in temperatures to evolve the same sort of circuit that Adrian evolved in his thesis paper.
And a complete list of his publications can be found here [susx.ac.uk].
If you've bothered to read any of his work, you'd quickly realize that Adrian is interested in how evolution can use certain properties of the physical substrate in these chips to its advantage. He's not looking to see whether evolutionary-type strategies can evolve something a human could build, but at how they can build things no human could imagine building.
DISCLAIMER: I am currently a Master's student at the University of Sussex, and had Adrian as a lecturer this past semester. However, I am in no way involved in his research; my interests lie on the software side of genetic algorithms.
Applications (Score:5, Informative)
Interestingly, if I remember right, it was all machine code: ultimately a series of conditionals about what stick movements to make in response to certain patterns of instrument readings. They started the evolution by "rewarding" the code that kept the plane in the air the longest, which at first was something like 5 seconds. Within a few days of cranking, the code could achieve level flight with ease, and a few weeks later, with more parameters added, it was dogfighting mutated versions of itself. Then they brought in real RAF pilots, and the thing just kept learning.
If I remember right, the article ended by saying that by now the AI, which runs totally incomprehensible code, wins most of its dogfights against human pilots, using some very interesting maneuvers it was never taught (it wasn't taught anything). The RAF is impressed and is thinking about a class of dogfighting planes flown entirely by AI; such planes wouldn't mind doing turns at over 10 G's. My guess is that I read this three or four years ago. Maybe the subsequent development of the program got classified, or maybe it just fizzled, but it sure seems like a promising avenue of research.
Being who I am, I don't get thrilled about the prospect of fancy new AI killing machines, but on the other hand, I want these designs to penetrate video game AI soon! For example, I now play Civ3, which has pretty good, but not great, AI. What would prevent developers from taking that AI, defining a "mutation function" by which certain parameters in it can change randomly, and then playing different mutations against each other millions of times on a supercomputer? Or, even better, outsourcing the whole number-crunching part to a project like seti@home, where our machines do the crunching. Can you imagine an AI war between the best routines from Team Slashdot and Team Anand? Sure it's frivolous, but way more fun to watch than brute-force encryption cracking.
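That mutate-and-self-play scheme can be sketched with a toy. Everything below is invented for illustration: a one-parameter "divide-the-dollar" game stands in for a real game AI's parameter set, and the population sizes and mutation rate are arbitrary.

```python
import random

def play(d1, d2):
    # Divide-the-dollar: each side demands a share; if the demands
    # fit together, both get what they asked for, otherwise nothing.
    return (d1, d2) if d1 + d2 <= 1.0 else (0.0, 0.0)

def tournament(pop):
    # Round-robin: each strategy's fitness is its total payoff
    # against every mutated rival, as the parent comment proposes.
    scores = [0.0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            pi, pj = play(pop[i], pop[j])
            scores[i] += pi
            scores[j] += pj
    return scores

def mutate(d, sigma=0.05):
    # The "mutation function": jitter the parameter, clipped to [0, 1].
    return min(1.0, max(0.0, d + random.gauss(0, sigma)))

random.seed(2)
pop = [random.random() for _ in range(24)]    # each "AI" is one parameter
for _ in range(40):
    scores = tournament(pop)
    ranked = [d for _, d in sorted(zip(scores, pop), reverse=True)]
    survivors = ranked[:8]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(16)]

print(round(sum(pop) / len(pop), 2))
```

With no hand-coded strategy at all, selection pushes the demands toward the "fair split" region around 0.5, the same way the dogfighting code above discovered maneuvers nobody taught it.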
Re:Exciting times ahead for 'AI' (Score:2, Informative)
Here's a series of links to read up on this:
http://www.urbanlegends.com/science/10_percent_
http://pub3.ezboard.com/fxprojectforumfrm7.show
And finally, from the urban-legend debunking site, Snopes:
http://www.snopes2.com/spoons/fracture/10percnt
Cool study using genetic algorithms (Score:4, Informative)
The problem is, in order to measure the "fitness" of a sorting network, you should give it all possible sets of numbers and see how many it sorts correctly (you also give a fitness bonus to smaller networks). It turns out you only need to give it all possible sequences of 0's and 1's to know whether it will sort any set of numbers correctly, so Hillis would have to test each network on 2^16 = 65,536 inputs to see how well it did.
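That zero-one trick is easy to demonstrate in Python. The 4-wire network below is a standard 5-comparator example, just small enough to check by hand (Hillis's networks had 16 inputs, hence the 65,536 test cases):

```python
from itertools import product

def apply_network(network, values):
    # A network is a fixed list of (i, j) comparators: each one
    # swaps positions i and j if they are out of order.
    vals = list(values)
    for i, j in network:
        if vals[i] > vals[j]:
            vals[i], vals[j] = vals[j], vals[i]
    return vals

def sorts_all_binary(network, n):
    # Zero-one principle: a comparator network sorts every input
    # iff it sorts all 2**n inputs consisting of 0s and 1s.
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product((0, 1), repeat=n))

# A standard optimal 5-comparator network for 4 inputs.
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(sorts_all_binary(net4, 4))   # prints True: all 16 binary inputs sorted
```

Dropping any comparator from `net4` makes `sorts_all_binary` fail, which is exactly the property a fitness function can score.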
That would take too long, so he wanted to only test the networks on a subset of possible inputs. The clever thing was he made the particular subset used also evolve, as a kind of "parasite" on the sorting networks. The parasites were "rewarded" (had higher fitness) when they broke sorting networks. That way, the system would keep around precisely those test cases which could break the current population of sorting networks, so it was always focusing the testing exactly on the trouble cases, and ignoring the ones "known" to work, and thus saving a ton of time/effort.
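Here is a toy sketch of that host-parasite scheme. It is nothing like the scale of Hillis's Connection Machine runs: the 6-wire setting, 14-comparator genomes, population sizes, and rates are all arbitrary choices for illustration.

```python
import random
from itertools import product

N = 6                    # wires; exhaustive testing would need 2**6 = 64 inputs

def apply_network(network, values):
    vals = list(values)
    for i, j in network:
        if vals[i] > vals[j]:
            vals[i], vals[j] = vals[j], vals[i]
    return vals

def sorts(network, case):
    return apply_network(network, case) == sorted(case)

def breed(pop, fit, mutate, elite_n=5):
    # Keep the fittest few and refill the population with their mutants.
    ranked = [x for _, x in sorted(zip(fit, pop), key=lambda t: t[0], reverse=True)]
    elite = ranked[:elite_n]
    return elite + [mutate(random.choice(elite)) for _ in range(len(pop) - elite_n)]

def mut_net(net):
    net = list(net)
    net[random.randrange(len(net))] = tuple(sorted(random.sample(range(N), 2)))
    return net

def mut_par(p):
    p = list(p)
    k = random.randrange(N)
    p[k] = 1 - p[k]
    return p

random.seed(3)
nets = [[tuple(sorted(random.sample(range(N), 2))) for _ in range(14)]
        for _ in range(20)]
parasites = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]

for _ in range(60):
    # Networks are scored only against the current parasite test cases...
    net_fit = [sum(sorts(net, p) for p in parasites) for net in nets]
    # ...while parasites score by *breaking* networks, so hard cases persist.
    par_fit = [sum(not sorts(net, p) for net in nets) for p in parasites]
    nets = breed(nets, net_fit, mut_net)
    parasites = breed(parasites, par_fit, mut_par)

# Full evaluation of the final networks over all 2**N binary cases.
def full_score(net):
    return sum(sorts(net, list(bits)) for bits in product((0, 1), repeat=N))

best = max(nets, key=full_score)
print(full_score(best), "of", 2 ** N, "binary inputs sorted")
```

Note that the evolving networks never see the full 2^N test set; the parasites keep steering them toward whatever cases currently break them, which is the time-saving trick described above.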
Hillis evolved a sorting network which used 61 comparison-swaps, just 1 away from the best man-made one known. I was at Thinking Machines (Hillis' company) for a while, and fiddled around with this myself a bit, thinking that a bit more simulation must beat the record, but I never did beat it.
Hillis had a paper called "Co-Evolving Parasites Improve Simulated Evolution as an Optimization Procedure", published in Artificial Life II (Langton et al., eds.), Addison-Wesley, 1991, pages 313-324. A note in my database indicates it may also have been published in the journal Physica D, vol. 42, pp. 228-234.
A search also just turned up Hugues Juille [brandeis.edu], who has apparently done some more work in this area. [brandeis.edu] He evolved a 60-comparison sorting network for 16 inputs, tying the record. And he broke a 25-year-old record for 13-input sorting networks, doing it in 45 comparison-swaps.
Re:Not exactly practical... (Score:1, Informative)