Hardware

Evolutionary Computing Via FPGAs 218

fm6 writes "There's this computer scientist named Adrian Thompson who's into what he calls "soft computing". He takes FPGAs and programs them to "evolve", Darwin-style. The chip modifies its own logic randomly. Changes that improve the chip's ability to do some task are kept, others are discarded. He's actually succeeded in producing a chip that recognizes a tone. The scary part: Thompson cannot explain exactly how the chip works! Article here."
This discussion has been archived. No new comments can be posted.

Evolutionary Computing Via FPGAs

  • "I'm sorry, Dave. I can't do that."

    Scary, him not being able to explain exactly how the thing works. Still, any good creation is ultimately the creation of madness.
  • Aged... (Score:3, Interesting)

    by _Knots ( 165356 ) on Saturday December 29, 2001 @01:38AM (#2761626)
    This has been around a long while. I recall (sorry, no reference, somebody help me out here!) reading about this a long while ago in Science/Nature/SciAm.

    Still, the technology's fascinating. Though I'm a little shocked that the latest articles still have no detailed examples other than the two-tone recognition (that bit about HAL doesn't count).

    More detail (if memory serves): the FPGA outputs a logic LOW on a 100-Hz wave and a logic HIGH on a 1000-Hz wave. It is programmed by an evolved bit-sequence fed from a host PC. IIRC they started with random noise to wire the gates, so that's cool.

    --Knots
    • Re:Aged... (Score:2, Informative)

      by gedanken ( 24390 )
      Yep, this is old news. I read about this first in the Aug/99 issue of SciAm.
      • Indeed. This was on slashdot before; however, with the nonsensical titles given to things, it's next to impossible to find it again.
      • Re:Aged... (Score:4, Informative)

        by mvw ( 2916 ) on Saturday December 29, 2001 @06:21AM (#2761935) Journal
        Yes, this is old:

        [1] Hugo de Garis. Evolvable Hardware: Principles and Practice. http://www.hip.atr.co.jp/~degaris/CACM-EHard.html (link is not available today)

        [2] Adrian Thompson. Evolving Electronic Robot Controllers that Exploit Hardware Resources. CSRP 368. In: Advances in Artificial Life, Proceedings of the 3rd European Conference on Artificial Life (ECAL95), pp. 640-656, Springer-Verlag Lecture Notes in Artificial Intelligence 929, 1995.

        [3] Adrian Thompson. Evolving Fault Tolerant Systems. CSRP 385. In: Proceedings of the First IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA'95), pp. 524-520, IEE Conference Publication No. 414, 1995.

        [4] Adrian Thompson. Silicon Evolution. In: Proceedings of Genetic Programming 1996 (GP96), J.R. Koza et al. (Eds.), pp. 444-452, MIT Press, 1996.

        [5] Inman Harvey and Adrian Thompson. Through the Labyrinth Evolution Finds a Way: A Silicon Ridge. In: Proceedings of the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), Higuchi, T. and Iwata, M. (Eds.), pp. 406-422, Springer-Verlag LNCS 1259, 1997.

        [6] Adrian Thompson. An Evolved Circuit, Intrinsic in Silicon, Entwined with Physics. In: Proceedings of the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), Higuchi, T. and Iwata, M. (Eds.), pp. 390-405, Springer-Verlag LNCS 1259, 1997.

        [7] Adrian Thompson. Artificial Evolution in the Physical World. In: Evolutionary Robotics: From Intelligent Robots to Artificial Life (ER'97), T. Gomi (Ed.), pp. 101-125, AAI Books, 1997.

        [8] Adrian Thompson. On the Automatic Design of Robust Electronics Through Artificial Evolution. In: Proc. 2nd Int. Conf. on Evolvable Systems: From Biology to Hardware (ICES98), M. Sipper, D. Mange & A. Pérez-Uribe (Eds.), pp. 13-24, Springer-Verlag, 1998.

    • I did work on this thing for my Master's thesis. This was at the beginning of 1998. I read a few interesting articles by Adrian Thompson. I don't know when he started, but it has to be well before 1998.
  • by cyngon ( 513625 )
    This has the sound of something straight out of a movie. Terminator, anyone?

    This raises the question: "Can evolving machines be controlled?"

    It's possible that any machine capable of changing its logic could change logic that says "DON'T do this..." if it thinks doing so is an improvement to itself.

    -Bryan
  • I think that in the future, we will have more and more things like this happening. Our machines will create themselves, and they will be so complex that we will have no idea how they work. And eventually, they will decide they don't need us and exterminate the whole human species. Wow. I sure hope that doesn't happen!
  • by Soko ( 17987 ) on Saturday December 29, 2001 @01:41AM (#2761635) Homepage
    The chip modifies its own logic randomly.

    This sounds suspiciously like my lovely wife.

    The scary part: Thompson cannot explain exactly how the chip works!

    I knew it. Male engineer, female chips. Easy explanation.

    Soko

    (Posting from the basement so said lovely wife doesn't tear off my baaa-aa-allsssss.... YOWWWUUCH!!!!)
    • hahaha.
      she probably beat you down after catching you preview that comment, and then added the "lovely" before every mention of wife, right?
    • The chip modifies its own logic randomly.

      Weeelll... not quite right. The program which generates the chip intelligently selects and randomly modifies portions of the existing design (initially generated randomly) based on the performance (if any) of previous iterations. For example, the winner of the first iteration got the door prize for actually having an output. Any output.

      For this to match real life, BTW, you need to postulate the pre-existence of FPGA-equivalents - chemicals at least as complex as RNA, although RNA itself would not turn the trick - and some kind of teleology to permit selection to operate well in advance of where it would normally kick in, else the critter is quickly crushed by its own genetic burden.

      This sounds suspiciously like my lovely wife.

      And mine! Perzactly! It's part of The Rules [keyster.com], don'cha know? (-:
    • Women, alone of all known organisms on the face of the Earth, are capable of sustaining an emotional state (such as anger, rage, jealousy, etc.) without the need for any external stimulus. Women can be angry at men for what they think they may say, what they said ten years ago, or what they would have said if everything had been completely different. And it is always the man's fault.
  • by Sanity ( 1431 ) on Saturday December 29, 2001 @01:43AM (#2761639) Homepage Journal
    Genetic Algorithms, and the subset of the field called Genetic Programming, have been around for a while, and there is some really amazing stuff out there. For example, Tierra [talkorigins.org] is an artificial ecosystem in which computer programs evolve and compete with each other; it has been around for over 10 years.

    The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of. It is tempting to blame this on lack of computing power, but I am not sure that is the real reason. Either way, the possibility of automated design is very exciting indeed and I hope more people find ways to apply it in the real world.


    • > The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of. It is tempting to blame this on lack of computing power, but I am not sure that is the real reason. Either way, the possibility of automated design is very exciting indeed and I hope more people find ways to apply it in the real world.

      I don't remember the details, but wasn't one of the /. Beowulf articles from a year or so ago about someone who had set up a B-cluster to run a GA to find "patentable algorithms"?

      I agree that there doesn't seem to be much by way of practical applications for GA, but the technology has come a long way and the CPU time that can be thrown at a run is growing according to Moore's Law, so I would not be surprised to start seeing some noteworthy results coming out of the field within the next decade or so. I do know of cases where people have tried to use it for industrial optimization problems, but I don't know whether it has been adopted as a mainstream technology for that sort of thing.

    • by Dr. Awktagon ( 233360 ) on Saturday December 29, 2001 @02:28AM (#2761724) Homepage

      The curious thing is that despite GAs being widely researched for over 20 years, they seem to have found few practical applications that I am aware of.

      They are good for optimizing functions of very many variables. Like, for instance, the weights for a spam-scoring system, to maximize the score over a sample of junk mails and minimize it on a sample of non-spam mails.

      I.e., you have a rule that matches the word "viagra" and a rule that matches the word "money" in a subject; obviously the first one should count more (unless you talk about viagra a lot in your emails), but how much? Imagine you have hundreds of rules you came up with; a GA can optimize the weight of each rule, if you have a good selection of emails to let it evolve over.
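      For illustration only (my own sketch, not from the post): the rule names, threshold, and weighting below are invented, but this is the shape of the fitness function such a GA would maximize, with the weight vector as the genome. Plug it into any standard GA selection/crossover loop.

          import random

          RULES = ["contains_viagra", "contains_money", "all_caps_subject"]   # made-up rule names

          def score(weights, hits):
              # hits maps each rule name to 0/1 for one message
              return sum(w * hits[r] for r, w in zip(RULES, weights))

          def fitness(weights, spam, ham, threshold=5.0):
              # Reward catching spam, and penalize flagging legitimate mail much harder.
              caught = sum(score(weights, m) >= threshold for m in spam)
              false_pos = sum(score(weights, m) >= threshold for m in ham)
              return caught - 10 * false_pos

          # A genome is just a list of weights, e.g. one random individual:
          genome = [random.uniform(0, 10) for _ in RULES]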

      • It is funny you should mention this, because a few years ago I wrote a simple piece of software which attempted to evolve a regular expression (actually, it was a subset of the standard R.E language) that could filter spam. It never really got far beyond being a toy, although I did give the code to the Jazilla project, not sure if they did anything with it though...
      • This is what SpamAssassin [taint.org] is doing, and it's becoming incredibly accurate (it was already 99% accurate before they used GAs).
      • by sunhou ( 238795 ) on Saturday December 29, 2001 @11:12AM (#2762278)
        There's a very cool application of genetic algorithms that I saw a few years back. Danny Hillis was trying to evolve sorting networks, a way of representing a sorting algorithm for a fixed number of inputs. (See volume 3 of Knuth, The Art of Computer Programming). He wanted to do it using genetic algorithms, on 16-input sorting networks. The best known one at the time used 60 comparison/swaps to sort 16 inputs.

        The problem is, in order to measure the "fitness" of a sorting network, you should give it all possible sets of numbers and see how many it sorts correctly (you also give a fitness bonus to smaller networks). It turns out you just need to give it all possible sets of 0's and 1's to see if it will sort any set of numbers correctly, so Hillis would have to test each network on 65,536 inputs to see how well it did.

        That would take too long, so he wanted to only test the networks on a subset of possible inputs. The clever thing was he made the particular subset used also evolve, as a kind of "parasite" on the sorting networks. The parasites were "rewarded" (had higher fitness) when they broke sorting networks. That way, the system would keep around precisely those test cases which could break the current population of sorting networks, so it was always focusing the testing exactly on the trouble cases, and ignoring the ones "known" to work, and thus saving a ton of time/effort.

        Hillis evolved a sorting network which used 61 comparison-swaps, just 1 away from the best man-made one known. I was at Thinking Machines (Hillis' company) for a while, and fiddled around with this myself a bit, thinking that a bit more simulation must beat the record, but I never did beat it.

        Hillis had a paper, called "Co-Evolving Parasites Improve Simulated Evolution as an Optimization Procedure", published in Artificial Life II (Langton et al, editors), Addison Wesley, 1991, pages 313-324. A note in my database indicates it may also have been published in the journal Physica D, vol. 42, p. 228-234.

        A search also just turned up Hugues Juille [brandeis.edu], who has apparently done some more work in this area. [brandeis.edu] He evolved a 60 comparison sorting network for 16 inputs, tying the record. And he broke a (25-year-old) record for 13-input sorting networks, doing it in 45 comparison/swaps.
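        As a rough illustration of the testing trick described above (my own sketch; the comparator encoding and the size bonus are invented), the zero-one principle makes the exhaustive fitness check look like this:

            from itertools import product

            def apply_network(network, values):
                # network is a list of (i, j) comparators applied in order
                v = list(values)
                for i, j in network:
                    if v[i] > v[j]:
                        v[i], v[j] = v[j], v[i]
                return v

            def fitness(network, n=16):
                # Zero-one principle: a comparator network sorts all inputs
                # iff it sorts every 0/1 input, so 2^16 = 65,536 tests suffice.
                correct = sum(apply_network(network, bits) == sorted(bits)
                              for bits in product((0, 1), repeat=n))
                return correct - 0.01 * len(network)   # small bonus for fewer comparators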
    • The reason why you can't get any practical application out of it is simple biology 101. When organisms evolve, survival of the fittest only means that the organism passes genetic material to a reproduced organism derived from itself. This *Does_Not* mean that the organism is the best at anything. Fittest may mean that the guy who should have drowned hopped on the back of the guy trying to save him, and the *rescuer* is unfit because he/she was not able to pass on genetic material, because he/she died in the process.

      Imagine that there was a super fast and highly intelligent structure in this chip that was thrown out because its pathways took too much energy and caused too much heat, while another less spectacular construction happened to survive because it did half the work at half the efficiency yet cost less energy and so produced less heat. So you might come up with a chip that is an evolutionary dead end and way less efficient; sure, it can hear a tone, but more than that may not be possible.
      • The reason why you can't get any practical application out of it is simple biology 101. When organisms evolve, survival of the fittest only means that the organism passes genetic material to a reproduced organism derived from itself. This *Does_Not* mean that the organism is the best at anything.

        That's why you build virtual snipers into your virtual ecology that take joy in murdering the Timmys of your simulation.

        Or set up a Doom type interface to it and do the dirty work yourself! ;-)
      • The evolutionary pressure in this case is the researcher's observations about which iterations survive.

        So really the "organisms" compete for the attention of the observer.

        Tone differentiation is only one allele.

        What surprises me is that they got much progress at all, and with only 4000 generations. In biology and most computer science, mutations are generally bad.

        Oh, btw: Darwin didn't use the word "evolve" to describe his hypothesis, but the last word in "The Origin of Species" is "evolved".
    • In the beginning there were genetic algorithms only. Genetic programming was developed later. It was John Holland, with Adaptation in Natural and Artificial Systems in 1975, who used the idea of evolution first. It was Koza during the '90s who started genetic programming. The two are very different, though both use the evolutionary idea of creating new solutions and selecting the most promising ones. Genetic algorithms use at their heart bit strings that represent a solution, while genetic programming works on trees of instructions (like: turn left, walk).
      • Yours is quite a narrow definition of Genetic Algorithms, the bitstring representation is just one popular option. I would say that Genetic Programming is a subset of the field of Genetic Algorithms.
    • I believe one use that has been found for them is in creating exam timetables; you have a clear set of guidelines (i.e. you want these exams spaced out, these cannot clash etc) and you leave a computer to work them out. IIRC, Edinburgh University [ed.ac.uk] uses a program using GAs for this very purpose.

      Also, a lot of what is being discussed sounds like Neural Networks as well; gates interlinking and 'learning'. I found it interesting during my MSc, and the field shows some promise if it can get over the issue discussed above: "how do you trust something you can't explain?"

      • by Sanity ( 1431 )
        I did my degree in artificial intelligence at Edinburgh University, and yes, the AI department does use a GA for this.

        Interestingly, one of the reasons more people don't is that there are often criteria that need to be taken into account which people would rather not state explicitly (which they would need to do for a GA), such as the fact that more senior lecturers don't like supervising exams early in the morning.

        More of a social than a technical problem I suppose.

  • by wackybrit ( 321117 ) on Saturday December 29, 2001 @01:44AM (#2761642) Homepage Journal
    Thompson's chip was doing its work preternaturally well. But how? Out of 100 logic cells he had assigned to the task, only a third seemed to be critical to the circuit's work.

    Isn't this how a regular brain works? Or, at least close. I recall being taught something called the 80/20 rule, that applies to almost anything and everything. Doesn't 20% of the brain do 80% of the work?

    This article is pretty interesting though. I'm not sure how much is true (newsobserver is hardly the New Scientist) but these devices look like they could be the way of the future.

    Some people will argue that it's merely a computer program running in these chips and that 'real' creatures are actually 'conscious'. How do we know that? How do we know that the mere task of processing is not 'consciousness'?

    On the other side, how do we know that animals are self-aware? When I watch ants, I could just as easily be watching SimAnt, for all the intelligence they seem to have. A computer could do pretty much everything as spontaneously and as accurately as an ant could.

    I think as the years pass by, we'll see chips pushing the envelope. Soon we'll have chips that can act in *exactly* the same way as a cat or dog brain. Then what will be the difference between the 'consciousness' of that chip and the consciousness of an average dog? I say, none.

    I don't like to call this Artificial Intelligence. It's real intelligence. Who knows that some sort of 'god' didn't just program us using their own form of electronics based on carbon rather than silicon?

    One day we'll reach human level. I can't wait.
    • How do we know you're conscious?

      I'm just curious, am I conscious?

      It can never be *exactly* the same way as a cat or dog brain works... we don't know how it works, in fact we're FAR from knowing how it works.

      :) good argument
    • I recall being taught something called the 80/20 rule, that applies to almost anything and everything.

      Pah, that's one of those all-unifying sentences I shudder at whenever I see it, normally used by fanatics. I forget which scientist said "It seems every new theory is first far overstated, before it finds its right place in science" - especially back when the theory of evolution was new and was applied to really everything, even a lot of places where it by far did not fit.

      As for AI, our calculation capability is still far from being able to "simulate" a human brain. The human brain has about 20 giga-neurons, with 2000-5000 synapses per neuron (the basic calculation unit), resulting in a capacity of roughly 10 tera-"bytes". It is frightening that, for today in 2001, this is not so far off. Theoretically we would already have enough storage capacity to "store" a human brain on hard disk. But in terms of calculation capability we are, luckily, still years away, since all the neurons in our brain can work in parallel. Our machines have outrageous serial calculation capability, but the human brain's capacity for parallel computing is still enormous.

      To get near to human brains, the von Neumann machines with a central CPU that we use today are the wrong way; although in some key respects they can already match the human brain, they will not do it through the human capability of doing a lot of calculations at the same time. The way to match that lies not in the CPU but in FPGAs, and here we are still light years away. How many cells ("neurons") does a typical high-performance LCA have today? 10,000 maybe? Well, that is still far, far away from the 20,000,000,000 I have in my head :o) I can still sleep in peace, not worrying about seeing AI in my lifetime, but if the doubling law of computing power holds, my children might have to face it.
      • Umnh...

        But the human brain does lots of things that an AI wouldn't need to do. Like maintain blood pressure, and muscle tension.

        Also, electronic switching speeds are considerably higher than biological switching speeds. So to some considerable extent, speed can be traded for quantity.

        Additionally, the robot wouldn't need to be a general purpose intelligence. Certainly not to start with. Something as smart as 10 bees, and with electronic switching speed, would probably be smart enough to drive a car, read a map, and accept a destination. (Taxi, anyone?) There are probably lots of other jobs. The garbage man problem (how does one create an automated garbage man) is mechanical more than AI once the driving problem is solved (though you would need to use the official trash bins, or your garbage wouldn't be collected).

    • Sorry, as somewhat alluded to in the article, 100% of the brain does 100% of the work, not 80/20, even if we don't completely understand how every part of it works. If you cut connections in the brain, or simply remove parts of it, then it will not work in the same way. The beautiful complexity of the brain makes it possible for us to consolidate disparate information into a coherent whole. Pattern recognition and language are two of the many things that computer science has yet to replicate.

      To bring in another clarifying example, the brain works in some ways like a genome. There are thousands of genes that we have no idea what they do. One gene may produce a protein that is inhibited by another gene, which in turn inhibits the second gene's production. Throw a thousand genes into the mix, and you get a mass of confusion. Understanding what a gene does specifically in the big picture is a very difficult prospect. In this respect I'm not surprised that he does not know exactly how it works.
      • you don't think that this sort of computing model might have some relevance to this *other* computing model, do you? :-)

        i'm willing to stake a prediction point on fpga (or *physically based*) GAs as being a superb analogue to genetic structure, physical structure, etc.

        language, by the way, is a form of pattern matching, as is every abstraction.
    • The urban legend is actually that it is 10% of the brain, and the whole thing is simply false anyway. Hence the urban legend appellation.

      Here's a series of links to read up on this:

      http://www.urbanlegends.com/science/10_percent_of_brain.html

      http://pub3.ezboard.com/fxprojectforumfrm7.showMessage?topicID=94.topic

      And finally, from the site for urban legend de-bunking, Snopes:

      http://www.snopes2.com/spoons/fracture/10percnt.htm
    • There's a real difference between an ant and my cat: the ant simply responds to stimulus by instinct, with little or no capacity for learning or thought. While cats certainly are not capable of thought on the same level as humans, they are infinitely more capable than ants.

      My cat routinely behaves in ways that suggest a capacity for comparing past events to present and future ones, an ability to plan, emotional states ranging from "fear" to "anger" and "sense of fun", and other cognitive abilities that are well beyond those of an ant.

      Another thing my cat can do that would be very hard to program is form extremely complex associations. For example, she has learned that when I walk towards the food-closet door at breakfast-time or dinnertime, she is about to be fed. She acts on this knowledge by walking over to her food dish and meowing for food - a fairly unambiguous action.

      Thing is, she also knows that if I start walking towards the closet door during the middle of the day and saying "Kibble!", this is a ruse to get her into the kitty carrier, and from there to the vet's office. Is that amazing or what! From just two or so incidents every year, my cat has learned to tell when I'm lying to her.

      Yes, I'm a very proud cat owner. My point is, these behaviours would all be much harder to model than those of an ant.
        There's a real difference between an ant and my cat: the ant simply responds to stimulus by instinct, with little or no capacity for learning or thought. While cats certainly are not capable of thought on the same level as humans, they are infinitely more capable than ants.

        My cat routinely behaves in ways that suggest a capacity for comparing past events to present and future ones, an ability to plan, emotional states ranging from "fear" to "anger" and "sense of fun", and other cognitive abilities that are well beyond those of an ant.


        There's no doubt that a cat's cognitive capacities are much greater than an ant's, but isn't emotion itself an instinctual reaction to stimulus? And one's control over emotions (something humans exhibit) could be stated as a reaction to internally generated stimuli from paths in the brain that were previously stimulated in association with stimuli (e.g. being spanked for throwing a tantrum, causing the emotional reaction to be associated with the next potential tantrum, triggering an avoidant response, and the tantrum is quelled). Are you suggesting that such recursive chains of stimuli are somehow transcendent of the physical matter they reside in, that there is an external source of "free will" that cannot be modelled?

        Penrose indeed has such a theory, but from what I've read, it seems to boil down to "quantum mechanics is hard, so we're special". I'm not suggesting you're in the "ineffable quality of human intelligence" camp, I just felt like seeing whether you meant for "not in our future" to mean "not in our lifetime" or "not ever".
        • I meant "not in our lifetime." I don't believe there's anything magical about cat or human brains, I just think modeling them will be very very hard, and so unlikely to take place in our lifetimes. Of course, I freely admit I may be speaking out of my ass here. :-)
  • playing god (Score:4, Interesting)

    by Jonavin ( 71006 ) on Saturday December 29, 2001 @01:45AM (#2761644) Homepage
    Although this is far from creating life, it makes you wonder if our existence is also "unexplainable" even by _the_creator_ (if you believe in such a thing).

    Imagine if you advance this technology to the point where you can dump a bunch of this stuff on a planet, wait a few million years, and come back to see what happens....
    • My point entirely. 2001: A Space Odyssey could be right.

      We could simply be a bunch of 'technology' developed by another race (superior to us or not) and dumped on this planet.

      If we did the same, we'd become Gods ourselves.

      Perhaps that's how the universe lives? Race creates other race, dumps it off somewhere. That new race creates another race, dumps it off somewhere... ad nauseam.

      After all, if we knew that the Earth was going to blow up, perhaps we'd send 'robotic life' to a planet that we couldn't inhabit.. but would carry on our legacy. Who knows that we're not the result of a race that died many eons ago.

      All crazy speculation of course, but these possibilities now seem more realistic than ever before.
  • It Was New Scientist (Score:1, Informative)

    by Anonymous Coward
    That printed this story at least 5 years back, IIRC.

    From their story, I got the idea that it would be hard to use the identical method to design circuits for mass production, because the designs that evolve may be dependent on any slight imperfections and/or departures from published specs of the components that are wired together in the model as it evolves. They built a second copy with parts out of the same stockroom, and it didn't work.

  • Nor are FPGAs. Transputers and other self-modifying pieces of computing equipment are pretty nifty boxen, but until these stories end with descriptions of tools that indicate to scientists exactly *how* their toys are doing these amazing feats, they will not be useful for general consumption.

    For example, if the transputer this guy was using generated FPGAs, which were then automatically translated into some Forth dialect, then his new processors could be refactored into other, more von Neumann-like equipment more easily.

    A few months ago when I was first designing my stockbot, I faced similar problems trying to work with neural networks and other correlation engines. The process time was slow, and the strategies they used were not easily portable. In the end I went with a stack-based language and randomly generated code that examines historical prices. It has worked out a LOT better in the long run.
  • While reading this article, I continually asked myself the question: if we eventually use these genetic algorithms to create software and possibly an AI, could this AI be the best at doing its job if it simply appears to do exactly as we want it to do, but then turns on us because it simply hides its true intelligence? Think about the Matrix. If we have computers evolve themselves, what better way to be the "fittest" than to appear to do as the humans want you to do, until you become smart enough, by running an internal genetic algorithm, to take over and become the dominant species? When creating these genetic algorithms, we must be very careful to be sure that there is not a background task running, for it is quite possible that one exists in a more complex genetic-algorithm-created program than those created thus far, and having no clue how the program works is not a step in the right direction.
  • by Anonymous Coward
    Starbridge Systems [starbridgesystems.com] popped up a few years ago (they might have been mentioned on /. even). At the time the things they claimed to do and their client list made them seem like yet another hoax (a la Linux on the N64). The prices they had on their web site at that time didn't help. I mean, who would buy a 94 million dollar (if I remember right) computer... even if you had a "black" budget?! But they didn't go away, and as I bounced around to jobs with big budgets, I heard rumblings and grumblings about this group or that department and Starbridge.

    Now, with the mention in this article (even though it's dated 4/01), maybe it's time for an (in)famous /. interview?
  • I could be off my rocker, but a SWAG [everything2.com] that occurred to me is that he may have stumbled upon a Natural Law (i.e. 'gravity' or 'no two forms of matter can occupy the same space at any given time') that has always been in existence and has manifested itself in this. Evolution could very well be the correct term, at a light-speed rate of course. Could this be the first step into determining or simulating where the source of life came from, or could this lead to the destruction of it? (insert your favorite Sci-Fi scenario here)

    • > Could this be the first step into determining or simulating where the source of life came from, or could this lead to the destruction of it? (insert your favorite Sci-Fi scenario here)

      My favorite Sci-Fi scenario involves me and a bunch of robo-babes from Sexworld, but I don't see what that has to do with your musings.

    • by Sanity ( 1431 ) on Saturday December 29, 2001 @02:44AM (#2761761) Homepage Journal
      Evolution could very well be the correct term, at a light speed rate of course. Could this be the first step into determining or simulating where the source of life came from, or could this lead to the destruction of it? (insert your favorite Sci-Fi scenario here)
      Is it just me, or does this pseudo-scientific babble actually make any sense to anyone?
  • Stability (Score:5, Insightful)

    by Detritus ( 11846 ) on Saturday December 29, 2001 @02:00AM (#2761667) Homepage
    Does the circuit still work properly if the temperature increases by 10 C? What if the FPGA data file is loaded into an FPGA from a different vendor or an FPGA fabbed on a newer process?
    • I read the article in Sci.Am. 3 years ago. The thing didn't even work if it was plugged into another computer. Future work was to evolve more robust behavior.
  • by Anonymous Coward
    This experiment happened a hell of a long time ago - it was even mentioned in The Science of Discworld, which IIRC came out in 1999.
  • Old news... IIRC:

    1. This will not lead to intelligent machines that will try to make you into toast. This is not even close to the sort of complexity of evolving bacteria.

    2. The reason he doesn't understand how it's working is because the design is using the interference generated in one part of the chip someplace else. Conventional designs try to eliminate this because it's so complex to predict. This is not a matter of "some bizarre magic is happening that we don't understand and it will probably turn us all into pools of gravy."
    • tripe! tripe! (Score:4, Interesting)

      by fireboy1919 ( 257783 ) <(rustyp) (at) (freeshell.org)> on Saturday December 29, 2001 @04:07AM (#2761847) Homepage Journal
      It is quite arguable that current hardware implementations aren't the fastest way to solve most problems (we currently eliminate complex behaviours and use only predictable gate structures), since routing alone is known to be an NP-complete problem, making the problem of routing and calculating other variables at least NP-complete. Eliminating variables makes it easy to pick a solution that is known to work, but it will not necessarily determine the optimum design.

      It is, in fact, "some bizarre magic," so to speak, not because we do not understand it, but because it requires considerable algorithmic search to find such an efficient (quick, small and effective) state through which the machine can produce its effect - it's magic in the same sense that a chess-playing program is magic.

      The insight that you fail to grasp is that with this technique, we can take advantage of those variables that you say we should eliminate, making designs better. This allows for the possibility of a much wider range of functionality for chips than we currently have for them.

      As far as complexity, what kind of bacteria are you thinking of that it's so far from? The techniques used in neural networks are almost all taken straight from biology. The major simplification is a lack of frequency encoding. That's pretty much it; everything else works pretty much the same. Perhaps you're under the impression that the "evolution" of bacteria changes their basic behavior. That happens extremely seldom - usually changes in bacteria are no more drastic than the cosmetic changes that occur in a "mutating" FPGA design.

      So...at least we can have the complexity of bacteria to do the work of genius hardware designers using search techniques to produce better designs.

      One thing further, though: if nature is any indication, it is extremely difficult to increase the level of complexity of an organism (or in this case, of a network). I would agree that "intelligent" machines that make you into toast are a long way off, because we can't make evolving machines - only learning ones, even if they do use genetic algorithms to do it (which is essentially what viruses and bacteria do regularly, I might add).
  • SkyNet. (Score:2, Insightful)

    by x136 ( 513282 )
    The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.

    'Nuff said.
  • Wow (Score:2, Informative)

    by AnimeFreak ( 223792 )
    Computers are getting smarter while Humans are getting dumber (or is it just me?).

    PRAISE ALMIGHTY CELERON 600 WORKSTATION UNDER MY DESK, I AM NOT WORTHY.
  • was done at Brandeis University. There they developed robots through an evolutionary process and a rapid prototyping machine. It was called the "GOLEM" project. The site seems to be broken, but this [google.com] is the google cache.
  • I wonder why they used real hardware instead of simulating it in software. In the latter case it would be easier to figure out how this thing works.

    This kind of experiment would be relatively easy to implement on a Beowulf cluster by simulating one or more chips on every node.

  • Discover magazine, June 1998 issue. "The Darwin Chip" - need I say more? http://208.245.156.153/archive/outputgo.cfm?ID=1455 for all the lazy people out there. I have the magazine and some pictures on my site.
    small :
    http://safemode.homeip.net/small_fpga.jpg
    large :
    http://safemode.homeip.net/large_fpga.jpg
  • by RevRigel ( 90335 ) on Saturday December 29, 2001 @02:27AM (#2761718)
    I have the Discover magazine this guy was on the cover of. I believe it was July of 1998 or so. It was very cool then, it's still very cool, but it's old and I don't know why it was submitted.

    Additionally, the submitter severely misinterpreted what Thompson's system does. He has the FPGA programmer connected via serial or parallel (I'm not sure), and he runs a genetic algorithm on his computer. The fitness function (the component of a GA which evaluates offspring) loads each offspring's genome (each genome in this case codes for different gate settings on the FPGA) into the FPGA; separate data acquisition equipment supplies input to the FPGA, checks the output, and based on that supplies a fitness value, which the GA uses to breed and kill off children for subsequent generations.

    He has *NOT* implemented a GA inside a 1998 era FPGA (120000 gates max or so at the time on a Xilinx, which is what he was using) when he had a perfectly good freaking general purpose computer sitting right next to it.
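    To make the division of labour concrete, here is a rough host-side sketch of the loop described above (my own illustration: the genome size, tone labels, and the program_fpga/measure_response stubs are invented placeholders, not Thompson's actual setup or any real programmer API).

        import random

        GENOME_BITS = 1800   # assumed length of the FPGA configuration bit string

        def program_fpga(genome):
            """Placeholder: push this individual's gate settings into the FPGA."""
            pass

        def measure_response(tone):
            """Placeholder: drive the input with a test tone, return the average output level."""
            return random.random()   # stand-in so the sketch runs without hardware

        def evaluate(genome):
            program_fpga(genome)
            low = measure_response("low tone")
            high = measure_response("high tone")
            return abs(high - low)   # fitness: how cleanly the two tones are separated

        def next_generation(population, keep=10):
            ranked = sorted(population, key=evaluate, reverse=True)
            parents, children = ranked[:keep], []
            while len(parents) + len(children) < len(population):
                a, b = random.sample(parents, 2)
                cut = random.randrange(GENOME_BITS)
                child = a[:cut] + b[cut:]          # crossover happens on the host PC
                if random.random() < 0.05:         # occasional bit-flip mutation
                    i = random.randrange(GENOME_BITS)
                    child[i] ^= 1
                children.append(child)
            return parents + children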
  • by Uller-RM ( 65231 ) on Saturday December 29, 2001 @02:29AM (#2761727) Homepage
    One thing people should consider is that while Genetic Algorithms are neat, they are limited.

    Here's the fundamental decoder-based GA:
    * Take an array of N identically long bit strings.
    * Write a function, called the fitness function, that treats a single element of the array as a solution to your problem and rates how good that solution is as a floating point number. Rate every bit string in the population of N.
    * Take the M strings with the highest ratings. Create N-M new strings by randomly picking two or more parent strings, randomly picking a spot or two in them, and combining the parts.
    * Rinse and repeat until the entire population is identical.
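    A minimal Python transcription of those steps (my own sketch; the one-max fitness, population sizes, and generation cap are arbitrary stand-ins):

        import random

        def decoder_ga(fitness, length=32, n=100, m=20, max_gens=10_000):
            pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(n)]
            for _ in range(max_gens):
                if len(set(map(tuple, pop))) == 1:        # stop once the population is identical
                    break
                pop.sort(key=fitness, reverse=True)
                survivors = pop[:m]                       # the M highest-rated strings
                children = []
                while len(survivors) + len(children) < n: # create N-M new strings
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, length)
                    children.append(a[:cut] + b[cut:])    # combine the two parts
                pop = survivors + children
            return max(pop, key=fitness)

        best = decoder_ga(fitness=sum)   # one-max stand-in: maximize the number of 1 bits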

    Their main limitation is that they take a lot of memory. Take the number of bits in a genome, multiply by population size, and your processing time grows exponentially with both population size and parent genome grouping. The other problem is that they require that the problem have a quantifiable form of measurement - how do you rate an AI as a single number?

    The other problem is commonly called the "superman" problem - what happens if you get a gene by chance very early in your generations that rates very very high, but isn't perfect. Imagine a human walking out of apes, albeit with only one arm. It'll dominate the population. GAs do not guarantee an optimal solution. For some problems, this isn't a problem, or it can be avoided, or reduced to a very small probability. For others, this is unacceptable.

    That said, you can do some neat shit with them. This screenshot is from a project I did during undergraduate studies at UP [up.edu], geared towards an RTS style of game, automatically generating waypoints between a start and end position. I'll probably clean it up sometime, add a little guy actually walking around the landscape, stick it in my portfolio. Yay, OpenGL eye candy. [pointofnoreturn.org]
  • I thought this was really cool when the first story was presented on slashdot several years ago. Since then I've been waiting for a SlashBack with more info of recent developments. Oh Well... The article seems to describe the exact same system (although in less detail).
  • We don't know how our brain works, how ants brains work.. and most importantly how LIFE works.

    this sort of development poses serious philosophical questions that I don't think our society *can* answer.

    What is life? We really don't know. Some say it's some form of intelligence... so are these chips intelligent? Yes... but are they life?

    We really don't know, and quite frankly, we'll never know.

    Every explanation leads into a cycle of questions.

    This technology is great; however, we don't know how it will be implemented, nor do we know IF it will be implemented. If it ever got advanced enough, we would see INTENSE legislation being thrown back and forth. Chances are, the democratic world will destroy the technology if it is dangerous.

    The problem could be others. The others. The other people from some unknown country, pissed off at the world, with their hands on this technology, ready to start another war.

    Interesting.
  • we will have chips that will evolve and reprogram themselves so they will increase in efficiency and speed.

    On the other hand, we have the ability to put chips into humans for tracking, medical info, and possibly to control the populace.

    I am not worried about the future. I am worried about today.
  • OK, how seriously can I take this article if the author makes statements like this:

    "Thompson's chip was doing its work preternaturally well. But how? Out of 100 logic cells he had assigned to the task, only a third seemed to be critical to the circuit's work. In other words, the circuit was more efficient by a huge order of magnitude than a similar circuit designed by humans using known principles."

    Last I checked, orders of magnitude were powers of 10, and .3 was not 1/10th of 1. Maybe "huge" orders of magnitude work differently... And if NASA buys "a HAL hypercomputer from Star Bridge Systems", then their claim that it "is no larger than a regular desktop machine, yet it's roughly 1,000 times faster than traditional commercial systems" has to be true, too.

    I'm excited about this technology, I hope it gets faster, but this kind of coverage isn't what it needs. And I thought that Linux had bad advocates...
  • by smasch ( 77993 ) on Saturday December 29, 2001 @02:57AM (#2761779)
    I found the paper [susx.ac.uk] on this project, and I found a few things disturbing. First of all, there was no clock: the circuit was completely asynchronous. In other words, the only timing reference they had was the timing of the FPGA itself. Trying to do something like this in silicon is difficult, and doing it in an FPGA is just plain insane. Delays in a circuit vary with just about everything: power supply voltage (and noise), temperature, different chips, the current state of the circuit, and so on. While you might be able to deal with these problems in a custom chip, an FPGA was never designed to be stable in these respects. Also mentioned is that there are several cells in the circuit that appear to have no real use, but when removed, the circuit ceases to operate. As they mention, this could be because of electromagnetic coupling or coupling through the power supplies. Again, I would never want to see something like that in one of my chips.

    Another thing that bothers me: how the heck does he know which cells are being used? Last time I checked, the bitstream (programming) files for these chips are extremely proprietary, and nobody (except Xilinx) has the formats for these files. I really want to know how they know how this thing is wired.

    Now I should mention, this is pretty cool from an academic standpoint, and it would be interesting if they could produce something that is both stable and useful using these techniques. It's also pretty cool that they could get this to work at all.
    • Actually, AFAIK, Intel is working towards asynchronous chip design. There is a quote by an Intel spokesman saying that if another company had a completely asynchronous chip design which could function at somewhere near the rate of their latest chips, Intel would be toast. In fact, the P4 is a move towards asynchronous design - IIRC, some parts of it are, or it's a design which will be more usable in an asynchronous fashion.
    • by mvw ( 2916 ) on Saturday December 29, 2001 @07:06AM (#2761967) Journal
      The major point is that conventional digital circuit logic is based on a certain ideal model.

      Some of the assumptions of this model are:

      1. we have two states 0 and 1
      2. states evolve over time controlled by a regular clock signal
      3. signals propagate by conventional electric current (moving electrons)
      But guess what: a typical physical device implements only an approximation of this model.

      For example, we say a certain voltage range is interpreted as a logical 0, and a certain different, higher voltage range is interpreted as a logical 1.

      But the evolutionary algorithm was not constrained in any fashion to make use of this ideal digital model only. It can and will make use of the full degrees of freedom that the physical system - the FPGA device - offers.

      With the result that analog circuits might evolve (which use more than the 0 and 1 values), or that there might be electro-magnetic signal transport (Thompson reported some spiral structures which might work as electro-magnetic wave guides); it might even employ some quantum mechanical effect that could be explained only by advanced semiconductor physics.

      One might say that the approximation process that the evolutionary algorithm performs started in the domain of digital devices and converged out of that domain into the wider domain of physical devices.

      This has a couple of drawbacks:

      • the resulting design is harder to understand
      • individual FPGA chips vary slightly, which is no problem in a digital world, where ranges in the specification allow for slight variations among individual chips, but the resulting evolutionary design might work only with certain chips, because it has much narrower tolerances than the production spec takes into account

      I wonder what would have happened if the algorithm had a control step after each evolution step which ensured that the next-generation design would operate strictly under the assumptions of a conventional digital device model; in that case the evolution process should converge towards a classical design. Would it still have been something that is hard to understand?

      Perhaps in that case it is easier to stick to software simulation of the design.

    • You found the paper, but you didn't look at any of the followup research.

      Like this paper [susx.ac.uk] which details an experiment using an external clock and a wide variation in temperatures to evolve the same sort of circuit that Adrian evolved in his thesis paper.

      And a complete list of his publications can be found here [susx.ac.uk].

      If you've bothered to read any of his work, you'd quickly realize that Adrian is interested in how evolution can use certain properties of the physical substrate in these chips to its advantage. It's not about seeing whether evolutionary-type strategies can evolve something a human could build, but about how they can build things no human could imagine building.

      DISCLAIMER: I am currently a Master's student at the University of Sussex, and had Adrian as a lecturer this past semester. However, I am in no way involved in his research, my interests lie in the software side of genetic algorithms.
  • Several people, including James Foster [uidaho.edu] at the University of Idaho, have been doing this kind of thing for a while. He got some really interesting results, including circuits evolved to take advantage of quantum effects and highly temperature-dependent circuits. Actually, the gist of his work is that there are some severe limitations to this approach. There are references for papers on his web page.
  • These FPGAs sound pretty interesting; where do you get them? Could one build a useful, interesting homebrew computer with them? Thanks,

    David
  • by Bsobla ( 262180 )
    Any AI with a FPGA starts with a limited number of inputs:
    x inputs with values 0 or 1 (or neither) == 2^x (++) possibles.
    Any AI with a FPGA ends with a limited number of outputs:
    y outputs with values 0 or 1 (or unknown) == 2^y (++) possibles

    Inputs-to-Outputs are linked (joined, coupled, etc) by the logic between them (a 'black-box' so-to-speak). An 'evolve' can never happen without linking output to input (feedback).
    So, all AI inputs/outputs are constrained by their outputs and their sampling periods.

    So for some boxes, an input of (properly encoded)"what is the meaning of the universe?" will return "43". After tuning, these boxes may produce "4f*@#%(#@" or perhaps "forth", "For Linux", "forsooth, BG", "for you use..." or "for more useless answers, call your ISP, then ask BG; if in doubt, ask your mother".

    This box apparently returned a tone.
    Hmmm...
    In the christmas/new year tradition...
    this is true intelligence.
  • Evolving hardware is nothing new. Earlier it was done in software; in fact there's a whole book on VLSI design using genetic algorithms (sorry, don't remember the author). Work on reconfigurable hardware has been going on for a long time now. Here's one reference: http://www.work/research/nichol2full.html [caltech.edu]
  • And get this: Evolution had left five logic cells unconnected to the rest of the circuit, in a position where they should not have been able to influence its workings. Yet if Thompson disconnected them, the circuit failed.

    I only have one thing to say:

    Magic :: More Magic

    For those unfamiliar with the story. [tuxedo.org]
  • Sounds like the skroderider's skrodes from Vernor Vinge's "A Fire Upon the Deep". No one could explain how they worked, or what any individual piece of the machine did, but it all worked. Kinda cool.
  • There was an episode of ST:TNG in which a group of special "adaptive" robot-like drones evolved an awareness and Data tries to save them when they are put in danger. The problem encountered in the episode was an ethical one: it asked the crew to look at what was considered intelligent, aware life for a machine. It should be a while before we are faced with such a problem, but that doesn't mean we shouldn't be asking some questions.

    Personally I can't wait for more and more of these systems to be designed and to see how they act and react. If the statement is accurate that only a third of the circuits of a human-designed chip were used, then this is a potentially incredible resource. Drawing again from my Sci-Fi background, if you look at Isaac Asimov's robot books you will find a short story about an AI Brain that was used to create the first hyperdrive ship. While only science fiction, a computer has the advantage of being able to look at all possible known rules, test its environment, and summarily report back on a problem that it is given. Seeing what humans may not be able to consider, because we just don't have the perspective, is what makes these systems really valuable. In no time computers like these evolving ones will be giving scientists new puzzles to solve, and a challenged scientist is a happy one (most of the time :)
  • Old news.... (Score:2, Informative)

    by Lardmonster ( 302990 )
    One of my colleagues did that at York University (UK) about 5 years ago as a final-year project.

    He was using FPGA-type chips, and started with a few thousand randomly-designed circuits, then merged the most successful ones. The evolved circuit was able to differentiate between a 1 Hz and a 1 kHz pulse on one of its inputs.

    There was one case where there was a single AND gate tucked away in a little corner somewhere, with its inputs tied only to its output - effectively useless. But the circuit failed to work if it was removed.

    I wish I could remember his name :-(
  • Applications (Score:5, Informative)

    by Dr. Spork ( 142693 ) on Saturday December 29, 2001 @07:25AM (#2761982)
    I read an article in Der Spiegel (paper version; I doubt it's archived) about a problem the Royal Air Force was having with their flight simulator: the AI that flew the enemy dogfighting planes was too predictable to challenge the best pilots. They hired the people who made the Norns game to evolve a more challenging AI flight script.

    Interestingly, if I remember right, it was all machine code, ultimately a series of conditionals about what stick movements to do as a response to certain patterns of instrument readings. They started the evolution by "rewarding" the code which just kept the plane in the air the longest... which, at first, was like 5 seconds. Within a few days of cranking, the code could achieve level flight with ease, and a few weeks later, with more added parameters, it was dogfighting mutated versions of itself. Then they brought in real RAF pilots and the thing just kept learning.

    If I remember right, the article ended by saying that by now the AI, which runs totally incomprehensible code, wins most of the dogfights against human pilots, and uses some very interesting maneuvers which it wasn't taught (it wasn't taught anything). The RAF is impressed, and are thinking about a class of dogfighting planes that fly on AI. These things wouldn't mind doing turns at over 10 G's. My guess is that I've read this three or four years ago. Maybe the subsequent developments of the program got classified or maybe it just fizzled, but it sure seems like a promising avenue of research.

    Being who I am, I don't get thrilled about the prospects of fancy new AI killing machines, but on the other hand, I want these designs to penetrate video game AI soon! For example I now play Civ3, which has pretty good, but not great AI. What would prevent developers from taking that AI, defining a "mutation function" by which certain parameters in it can change randomly, and then play different mutations against each other millions of times on a supercomputer? Or, even better, outsource the whole number-crunching part to a project like seti@home, where our machines do the crunching. Can you imagine an AI war between the best routines from Team Slashdot and Team Anand? Sure it's frivolous, but waay more fun to watch than brute force encryption cracking.
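    As a toy picture of that mutate-and-self-play idea (entirely my own sketch: the parameter names are invented and play_game() is a stub standing in for an actual game engine):

        import random

        def mutate(params, scale=0.1):
            p = dict(params)
            key = random.choice(list(p))
            p[key] += random.gauss(0, scale)   # nudge one AI parameter at random
            return p

        def play_game(ai_a, ai_b):
            """Placeholder for a full game between two parameter sets; returns the winner."""
            return ai_a if random.random() < 0.5 else ai_b

        def evolve_ai(params, rounds=1000):
            champion = params
            for _ in range(rounds):
                challenger = mutate(champion)
                # Best of three to smooth out luck; keep whichever parameter set wins.
                wins = sum(play_game(challenger, champion) is challenger for _ in range(3))
                if wins >= 2:
                    champion = challenger
            return champion

        best = evolve_ai({"aggression": 0.5, "expansion": 0.5, "tech_priority": 0.5})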

    • ...and then we'll report to termination centres when the machines on our side lose!
    • It might work. And it might get more dangerous than can be imagined. Creating an adaptable robot that we don't understand, but which has been evolved as a killing machine, is, perhaps, a bit less than intelligent.

      In fact, quite a bit less than intelligent. Does anyone really expect that this thing wouldn't be adapted to other applications? And evolved for them, of course. But the original layer would persist. Inevitably. Otherwise one would start from scratch (a much better idea!).

      If one wants to do this, then start with an AI pilot. Perhaps for crop dusters. Evolve from there. And let the fighter be a spur off of that bush.

      An AI pilot is probably a good choice. The environment is relatively simple, and most of the information is already instrumented. (Well, not on crop dusters, but the techniques are there.) And for crop dusters one could even have a square of markers (say microwave frequency corner reflectors, or even transmitters) to mark the edges of the area to be dusted. I don't know that the crop duster would pay much, but it's a much safer application. It's simple. And it's a place to grow from.
  • by ChaoticCoyote ( 195677 ) on Saturday December 29, 2001 @08:25AM (#2762028) Homepage

    I've been evolving algorithms for a long time now, using finite state machines (FSM) which can be easily moved across architectures and programming languages. Quite often, an FSM evolves to exhibit surprising behavior -- and given the complexity of the machines, it is impractical to understand why the FSM acts as it does.

    Note that I said "impractical" -- given time, I could follow the FSM's logic and discern its "thinking" (and I have done so with simpler machines).
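    For readers wondering what "evolving an FSM" looks like in practice, here is one common representation and a mutation step (my own illustration, not the poster's code):

        import random

        class FSM:
            def __init__(self, n_states, n_inputs, n_outputs):
                self.n_states, self.n_inputs, self.n_outputs = n_states, n_inputs, n_outputs
                # table[state][input] -> (next_state, output)
                self.table = [[(random.randrange(n_states), random.randrange(n_outputs))
                               for _ in range(n_inputs)] for _ in range(n_states)]

            def run(self, inputs, start=0):
                state, outputs = start, []
                for symbol in inputs:
                    state, out = self.table[state][symbol]
                    outputs.append(out)
                return outputs

            def mutate(self):
                # Rewrite one transition: either where it goes or what it emits.
                s, i = random.randrange(self.n_states), random.randrange(self.n_inputs)
                nxt, out = self.table[s][i]
                if random.random() < 0.5:
                    nxt = random.randrange(self.n_states)
                else:
                    out = random.randrange(self.n_outputs)
                self.table[s][i] = (nxt, out)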

    If you want real, concrete information about genetic algorithms and artificial life, I suggest visiting ALife.org [alife.org] or the U.S. Navy's GA Archive [navy.mil].

    Shameless plug: For five years, I've been developing a free (no ads) web site, Complexity Central [coyotegulch.com], devoted to evolutionary algorithms, artificial life, and emergent behavior. I've posted several Java applets that demonstrate genetic algorithms, cellular automata, flocking behavior, and related subjects.

    This is part of my Coyote Gulch [coyotegulch.com] web site, which contains lots of articles, web links, bibliographies, and free code in C++, Java, and Fortran(!).

  • "The scary part: Thompson cannot explain exactly how the chip works! "

    Isn't the whole point of computer science and mathematics one of learning things like how and why, so we can define and then use control?

    Of all the possibilities for why Thompson cannot explain how the chip works, could marketing (investments), NDAs, lack of self-reflection (not knowing what he did), etc. have anything at all to do with it?

    Gee, I plowed this field, put down a bunch of seeds, watered it and I don't know how, but this crop grew.

    AI - nothing is naturally that stupid!
    Now that's SCARY!!!! Is Thompson an AI?
  • Oh I get it, Slashdot is the FPGAs [slashdot.org] and WE are the Genetic Algorithms [slashdot.org] that are being fed articles that we then generate feedback on, so as to improve the /. FPGA.

    Isn't there like a +10 mode for self awareness?
  • Read about this in Electronic Design years ago. It's a throwback to trial-and-error analog circuit design, the way people designed circuits in the 1930s to 1950s. Technicians used to wire up circuits more or less reasonably, then plug in resistor and capacitor substitution boxes and adjust the rotary switches until things worked.

    The trouble with this sort of thing is that you get circuits that only work for a specific set of components. Copies require different tuning. This is partly why old TV sets had so many screwdriver adjustments in the back. Back when resistors were rated +-20%, capacitors were rated -40/+100%, and keeping the tube count down was crucial, it was hard to design for repeatable production. Today, we have tighter tolerances and big transistor budgets, so we can use much more conservative designs that work every time.

    So this is a neat hack, but not a profound result.

  • One problem, as far as I see it, with GAs is that you need a decent ranking function to judge success... i.e. effectively you have to "know" the answer, AND know how to rank or grade non- or partial solutions with respect to it, before you learn the solution. Otherwise it's basically just the regular generate-and-test algorithm, which can never scale to large enough problems.

    Winton
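    To illustrate the point (my own example, not the poster's): for a task like the tone discrimination above, a graded ranking function rewards partial progress, whereas a pass/fail test gives the search nothing to climb. A candidate here is just any callable mapping an input sample to an output level.

        def binary_fitness(candidate, low_tone, high_tone):
            # Pass/fail: nearly every early candidate scores 0, so the GA gets no gradient.
            ok = min(candidate(s) for s in high_tone) > max(candidate(s) for s in low_tone)
            return 1.0 if ok else 0.0

        def graded_fitness(candidate, low_tone, high_tone):
            # Reward how far apart the average responses are, so slightly better
            # candidates outrank slightly worse ones even before anything "works".
            avg_low = sum(candidate(s) for s in low_tone) / len(low_tone)
            avg_high = sum(candidate(s) for s in high_tone) / len(high_tone)
            return avg_high - avg_low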
