Hardware

Evolutionary Computing Via FPGAs

fm6 writes "There's this computer scientist named Adrian Thompson who's into what he calls "soft computing". He takes FPGAs and programs them to "evolve", Darwin-style. The chip modifies its own logic randomly. Changes that improve the chip's ability to do some task are kept; others are discarded. He's actually succeeded in producing a chip that recognizes a tone. The scary part: Thompson cannot explain exactly how the chip works! Article here."
  • by Sanity ( 1431 ) on Saturday December 29, 2001 @02:43AM (#2761639) Homepage Journal
    Genetic Algorithms, and the subset of the field called Genetic Programming, have been around for a while, and there is some really amazing stuff out there. For example, Tierra [talkorigins.org] is an artificial ecosystem in which computer programs evolve and compete with each other; it has been around for over 10 years.

    The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of. It is tempting to blame this on lack of computing power, but I am not sure that is the real reason. Either way, the possibility of automated design is very exciting indeed and I hope more people find ways to apply it in the real world.

  • Re:Aged... (Score:2, Informative)

    by gedanken ( 24390 ) on Saturday December 29, 2001 @02:45AM (#2761645)
    Yep, this is old news. I first read about this in the Aug/99 issue of SciAm.
  • It Was New Scientist (Score:1, Informative)

    by Anonymous Coward on Saturday December 29, 2001 @02:54AM (#2761656)
    That printed this story at least 5 years back, IIRC.

    From their story, I got the idea that it would be hard to use the identical method to design circuits for mass production, because the designs that evolve may be dependent on any slight imperfections and/or departures from published specs of the components that are wired together in the model as it evolves. They built a second copy with parts out of the same stockroom, and it didn't work.

  • Wow (Score:2, Informative)

    by AnimeFreak ( 223792 ) on Saturday December 29, 2001 @03:05AM (#2761675) Homepage
    Computers are getting smarter while Humans are getting dumber (or is it just me?).

    PRAISE ALMIGHTY CELERON 600 WORKSTATION UNDER MY DESK, I AM NOT WORTHY.
  • by RevRigel ( 90335 ) on Saturday December 29, 2001 @03:27AM (#2761718)
    I have the Discover magazine this guy was on the cover of. I believe it was July of 1998 or so. It was very cool then, it's still very cool, but it's old and I don't know why it was submitted.

    Additionally, the submitter severely misinterpreted what Thompson's system does. He has the FPGA programmer connected via serial or parallel (I'm not sure), and he runs a genetic algorithm on his computer. The fitness function (the component of a GA which evaluates offspring) loads each offspring's genome into the FPGA (each genome in this case codes for different gate settings on the chip); separate data acquisition equipment supplies input to the FPGA and checks the output, and based on that it assigns a fitness value, which the GA uses to breed and kill off children for subsequent generations.

    He has *NOT* implemented a GA inside a 1998 era FPGA (120000 gates max or so at the time on a Xilinx, which is what he was using) when he had a perfectly good freaking general purpose computer sitting right next to it.
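
    For the curious, the host-side loop being described is the completely standard GA shape. Here's a minimal Python sketch of it; the genome size, selection scheme, and the bit-counting stand-in fitness are illustrative assumptions, not Thompson's actual code (his fitness came from programming the real chip and measuring its output):

        import random

        GENOME_BITS = 1800   # assumed size: one configuration bit per evolvable FPGA cell
        POP_SIZE = 50
        GENERATIONS = 200

        def random_genome():
            return [random.randint(0, 1) for _ in range(GENOME_BITS)]

        def fitness(genome):
            # In the real rig this step would program the FPGA with the genome,
            # play the test tones into it, and score the measured output.  As a
            # stand-in so the sketch actually runs, we just count 1-bits; the
            # shape of the loop is the point here.
            return sum(genome)

        def crossover(a, b):
            cut = random.randrange(1, GENOME_BITS)
            return a[:cut] + b[cut:]

        def mutate(genome, rate=0.01):
            return [1 - bit if random.random() < rate else bit for bit in genome]

        population = [random_genome() for _ in range(POP_SIZE)]
        for generation in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            parents = population[:POP_SIZE // 5]          # keep the fittest fifth
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children

        print("best fitness:", max(map(fitness, population)))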
  • by Dr. Awktagon ( 233360 ) on Saturday December 29, 2001 @03:28AM (#2761724) Homepage

    The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of.

    They are good for optimizing functions of very many variables. Like, for instance, the weights for a spam-scoring system, to maximize the score over a sample of junk mails, and minimize it on a sample of not spam mails.

    I.e., you have a rule that matches the word "viagra" and a rule that matches the word "money" in a subject; obviously the first one should count more (unless you talk about viagra a lot in your emails), but how much? Imagine you have hundreds of rules you came up with: a GA can optimize the weights of each rule, if you have a good selection of emails to let it evolve over.
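
    As a toy illustration of that idea (made-up rules and corpus, not any real spam filter's code), a GA over rule weights might look like this in Python:

        import random

        # Toy corpus of (text, is_spam) pairs and keyword rules -- all made up.
        CORPUS = [("cheap viagra now", True), ("viagra and money back", True),
                  ("money for the rent", False), ("lunch on friday?", False)]
        RULES = ["viagra", "money", "cheap"]     # rule = "text contains this word"

        def score(text, weights):
            return sum(w for rule, w in zip(RULES, weights) if rule in text)

        def fitness(weights, threshold=1.0):
            # Reward weight vectors that push spam above the threshold and ham below it.
            return sum((score(text, weights) > threshold) == is_spam
                       for text, is_spam in CORPUS)

        def mutate(weights, sigma=0.2):
            return [w + random.gauss(0, sigma) for w in weights]

        population = [[random.uniform(0, 2) for _ in RULES] for _ in range(30)]
        for _ in range(200):
            population.sort(key=fitness, reverse=True)
            parents = population[:10]
            population = parents + [mutate(random.choice(parents)) for _ in range(20)]

        best = max(population, key=fitness)
        print({rule: round(w, 2) for rule, w in zip(RULES, best)})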

  • by piehole ( 512670 ) on Saturday December 29, 2001 @04:02AM (#2761785)
    Several people, including James Foster [uidaho.edu] at the University of Idaho, have been doing this kind of thing for a while. He got some really interesting results, including circuits evolved to take advantage of quantum effects and highly temperature-dependent circuits. Actually, the gist of his work is that there are some severe limitations to this approach. There are references for papers on his web page.
  • by Matts ( 1628 ) on Saturday December 29, 2001 @05:01AM (#2761843) Homepage
    This is what SpamAssassin [taint.org] is doing, and it's becoming incredibly accurate (it was already 99% accurate before they used GAs).
  • by venekamp ( 445124 ) on Saturday December 29, 2001 @06:04AM (#2761895) Homepage
    In the beginning there were genetic algorithms only; genetic programming was developed later. It was John Holland, with Adaptation in Natural and Artificial Systems in 1975, who first used the idea of evolution. It was Koza during the '90s who started genetic programming. The two are very different, though both use the evolutionary approach of creating new solutions and selecting the most promising ones. Genetic algorithms use bit strings at their heart to represent a solution, while genetic programming works on trees of instructions (like: turn left, walk).
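
    To make the distinction concrete, here is a rough Python sketch of the two representations (illustrative only; Holland's and Koza's systems were of course far more elaborate):

        import random

        # GA-style genome: a flat bit string; its meaning comes entirely from how
        # you decode it (here, 8 bits per integer parameter, purely as an example).
        ga_genome = [random.randint(0, 1) for _ in range(32)]
        params = [int("".join(map(str, ga_genome[i:i + 8])), 2) for i in range(0, 32, 8)]

        # GP-style genome: a tree of instructions, e.g. a tiny control program.
        # Nested tuples stand in for the tree here.
        gp_genome = ("if_wall_ahead",
                     ("turn_left",),                  # then-branch
                     ("seq", ("walk",), ("walk",)))   # else-branch

        print(params)      # four decoded integer parameters
        print(gp_genome)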
  • Re:Aged... (Score:4, Informative)

    by mvw ( 2916 ) on Saturday December 29, 2001 @07:21AM (#2761935) Journal
    Yes, this is old:

    [1] Hugo de Garis. Evolvable Hardware: Principles and Practice. http://www.hip.atr.co.jp/~degaris/CACM-EHard.html (link is not available today)

    [2] Adrian Thompson. Evolving Electronic Robot Controllers that Exploit Hardware Resources. CSRP 368 In: Advances in Artificial Life, Proceedings of the 3rd European Conference on Artificial Life (ECAL95) pp640-656. Springer-Verlag Lecture Notes in Artificial Intelligence number 929, 1995.

    [3] Adrian Thompson. Evolving Fault Tolerant Systems. CSRP 385. In: Proceedings of The First IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA'95), pp524-520, IEE Conference Publication No. 414, 1995.

    [4] Adrian Thompson. Silicon Evolution. In: Proceedings of Genetic Programming 1996 (GP96), J.R. Koza et al. (Eds), pages 444-452, MIT Press 1996.

    [5] Inman Harvey and Adrian Thompson. Through the Labyrinth Evolution Finds a Way: A Silicon Ridge. In: Proceedings of The First International Conference on Evolvable Systems: from Biology to Hardware (ICES96). Higuchi, T. and Iwata, M. (eds.), 406-422, Springer Verlag LNCS 1259, 1997.

    [6] Adrian Thompson. An evolved circuit, intrinsic in silicon, entwined with physics. In: Proceedings of The First International Conference on Evolvable Systems: from Biology to Hardware (ICES96). Higuchi, T. and Iwata, M. (eds.), 390-405, Springer Verlag LNCS 1259, 1997.

    [7] Adrian Thompson. Artificial Evolution in the Physical World. In: Evolutionary Robotics: From Intelligent Robots to Artificial Life (ER'97), T. Gomi (Ed.), pages 101-125. AAI Books, 1997.

    [8] Adrian Thompson. On the Automatic Design of Robust Electronics Through Artificial Evolution. In: Proc. 2nd Int. Conf. on Evolvable Systems: From Biology to Hardware (ICES98), M. Sipper, D. Mange & A. Pérez-Uribe (Eds.), pp13-24, Springer-Verlag, 1998.

  • Old news.... (Score:2, Informative)

    by Lardmonster ( 302990 ) on Saturday December 29, 2001 @07:33AM (#2761944)
    One of my colleagues did that at York University (UK) about 5 years ago as a final year project.

    He was using FPGA-type chips, and started with a few thousand randomly-designed circuits, then merged the most successful ones. The evolved circuit was able to differentiate between a 1 Hz and a 1 kHz pulse on one of its inputs.

    There was one case where there was a single AND gate tucked away in a little corner somewhere, with its inputs tied only to its output - effectively useless. But the circuit failed to work if it was removed.

    I wish I could remember his name :-(
  • by hypnotik ( 11190 ) on Saturday December 29, 2001 @08:21AM (#2761977) Homepage
    You found the paper, but you didn't look at any of the followup research.

    Like this paper [susx.ac.uk] which details an experiment using an external clock and a wide variation in temperatures to evolve the same sort of circuit that Adrian evolved in his thesis paper.

    And a complete list of his publications can be found here [susx.ac.uk].

    If you'd bothered to read any of his work, you'd quickly realize that Adrian is interested in how evolution can use certain properties of the physical substrate in these chips to its advantage. It's not about seeing whether evolutionary strategies can evolve something a human could build, but about how they can build things no human could imagine building.

    DISCLAIMER: I am currently a Master's student at the University of Sussex, and had Adrian as a lecturer this past semester. However, I am in no way involved in his research, my interests lie in the software side of genetic algorithms.
  • Applications (Score:5, Informative)

    by Dr. Spork ( 142693 ) on Saturday December 29, 2001 @08:25AM (#2761982)
    I read an article in Der Spiegel (paper version; I doubt it's archived) about a problem the Royal Air Force was having with their flight simulator: the AI that flew the enemy dogfighting planes was too predictable to challenge the best pilots. They hired the people who made the Norns game to evolve a more challenging AI flight script.

    Interestingly, if I remember right, it was all machine code, ultimately a series of conditionals about what stick movements to do as a response to certain patterns of instrument readings. They started the evolution by "rewarding" the code which just kept the plane in the air the longest... which, at first, was like 5 seconds. Within a few days of cranking, the code could achieve level flight with ease, and a few weeks later, with more added parameters, it was dogfighting mutated versions of itself. Then they brought in real RAF pilots and the thing just kept learning.

    If I remember right, the article ended by saying that by now the AI, which runs totally incomprehensible code, wins most of the dogfights against human pilots, and uses some very interesting maneuvers it wasn't taught (it wasn't taught anything). The RAF is impressed and is thinking about a class of dogfighting planes that fly on AI. These things wouldn't mind doing turns at over 10 G's. My guess is that I read this three or four years ago. Maybe the subsequent developments of the program got classified or maybe it just fizzled, but it sure seems like a promising avenue of research.

    Being who I am, I don't get thrilled about the prospect of fancy new AI killing machines, but on the other hand, I want these designs to penetrate video game AI soon! For example, I now play Civ3, which has pretty good, but not great, AI. What would prevent developers from taking that AI, defining a "mutation function" by which certain parameters in it can change randomly, and then playing different mutations against each other millions of times on a supercomputer? Or, even better, outsourcing the whole number-crunching part to a project like seti@home, where our machines do the crunching. Can you imagine an AI war between the best routines from Team Slashdot and Team Anand? Sure it's frivolous, but waay more fun to watch than brute-force encryption cracking.

  • by jd142 ( 129673 ) on Saturday December 29, 2001 @11:17AM (#2762167) Homepage
    The urban legend is actually that it is 10% of the brain, and the whole thing is simply false anyway. Hence the urban legend appellation.

    Here's a series of links to read up on this:

    http://www.urbanlegends.com/science/10_percent_of_brain.html

    http://pub3.ezboard.com/fxprojectforumfrm7.showMessage?topicID=94.topic

    And finally, from the site for urban legend de-bunking, Snopes:

    http://www.snopes2.com/spoons/fracture/10percnt.htm
  • by sunhou ( 238795 ) on Saturday December 29, 2001 @12:12PM (#2762278)
    There's a very cool application of genetic algorithms that I saw a few years back. Danny Hillis was trying to evolve sorting networks, a way of representing a sorting algorithm for a fixed number of inputs. (See volume 3 of Knuth, The Art of Computer Programming). He wanted to do it using genetic algorithms, on 16-input sorting networks. The best known one at the time used 60 comparison/swaps to sort 16 inputs.

    The problem is, in order to measure the "fitness" of a sorting network, you should give it all possible sets of numbers and see how many it sorts correctly (you also give a fitness bonus to smaller networks). It turns out you just need to give it all possible sets of 0's and 1's to see if it will sort any set of numbers correctly, so Hillis would have to test each network on 65,536 inputs to see how well it did.

    That would take too long, so he wanted to only test the networks on a subset of possible inputs. The clever thing was he made the particular subset used also evolve, as a kind of "parasite" on the sorting networks. The parasites were "rewarded" (had higher fitness) when they broke sorting networks. That way, the system would keep around precisely those test cases which could break the current population of sorting networks, so it was always focusing the testing exactly on the trouble cases, and ignoring the ones "known" to work, and thus saving a ton of time/effort.
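
    For concreteness, here is a small Python sketch of the pieces described above: a sorting network as a list of comparators, the 0/1-principle fitness test, and a co-evolving population of "parasite" test cases. The sizes and scoring are illustrative assumptions, not Hillis's actual setup.

        from itertools import product
        import random

        N = 16          # inputs to the sorting network

        def apply_network(network, values):
            # A network is a list of (i, j) comparator pairs with i < j: compare-and-swap.
            v = list(values)
            for i, j in network:
                if v[i] > v[j]:
                    v[i], v[j] = v[j], v[i]
            return v

        def sorts(network, case):
            return apply_network(network, case) == sorted(case)

        # 0-1 principle: a comparator network sorts every input iff it sorts every
        # 0/1 vector, so exhaustive testing needs only 2**16 = 65,536 cases.
        def exhaustive_fitness(network):
            return sum(sorts(network, case) for case in product((0, 1), repeat=N))

        # Co-evolving "parasites": instead of all 65,536 cases, keep a small evolving
        # population of test vectors, rewarding the ones that break current networks.
        def parasite_fitness(case, networks):
            return sum(not sorts(net, case) for net in networks)

        random_network = [tuple(sorted(random.sample(range(N), 2))) for _ in range(61)]
        parasites = [[random.randint(0, 1) for _ in range(N)] for _ in range(64)]
        print(exhaustive_fitness(random_network), "of 65,536 zero/one cases sorted")
        print(sum(parasite_fitness(p, [random_network]) for p in parasites),
              "of", len(parasites), "parasites break this network")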

    Hillis evolved a sorting network which used 61 comparison-swaps, just 1 away from the best man-made one known. I was at Thinking Machines (Hillis' company) for a while, and fiddled around with this myself a bit, thinking that a bit more simulation must beat the record, but I never did beat it.

    Hillis had a paper, called "Co-Evolving Parasites Improve Simulated Evolution as an Optimization Procedure", published in Artificial Life II (Langton et al, editors), Addison Wesley, 1991, pages 313-324. A note in my database indicates it may also have been published in the journal Physica D, vol. 42, p. 228-234.

    A search also just turned up Hugues Juille [brandeis.edu], who has apparently done some more work in this area. [brandeis.edu] He evolved a 60 comparison sorting network for 16 inputs, tying the record. And he broke a (25-year-old) record for 13-input sorting networks, doing it in 45 comparison/swaps.
  • by Anonymous Coward on Saturday December 29, 2001 @05:36PM (#2763207)
    Last time I checked, the bitstream (programming) files for these chips are extremely proprietary, and nobody (except XILINX) has the formats for these files. I really want to know how they know how this thing is wired.
    Xilinx published the bitstreams for the XC62xx parts in the datasheet. This page [bournemouth.ac.uk] has a link to the XC62xx datasheet.

"God is a comedian playing to an audience too afraid to laugh." - Voltaire

Working...