Hardware

Evolutionary Computing Via FPGAs

fm6 writes "There's this computer scientist named Adrian Thompson who's into what he calls "soft computing". He takes FPGAs and programs them to "evolve", Darwin-style. The chip modifies its own logic randomly. Changes that improve the chip's ability to do some task are kept; others are discarded. He's actually succeeded in producing a chip that recognizes a tone. The scary part: Thompson cannot explain exactly how the chip works! Article here."
  • Aged... (Score:3, Interesting)

    by _Knots ( 165356 ) on Saturday December 29, 2001 @02:38AM (#2761626)
    This has been around a long while. I recall (sorry, no reference, somebody help me out here!) reading about this a long while ago in Science/Nature/SciAm.

    Still, the technology's fascinating. Though I'm a little shocked that the latest articles still have no detailed examples other than the two-tone recognition (that bit about HAL doesn't count).

    More detail (if memory serves): the FPGA outputs a logic LOW on a 100-Hz wave and a logic HIGH on a 1000-Hz wave. It is programmed by an evolved bit-sequence fed from a host PC. IIRC they started with random noise to wire the gates, so that's cool.

    --Knots
  • by cyngon ( 513625 ) on Saturday December 29, 2001 @02:38AM (#2761627)
    This sounds like something straight out of a movie. Terminator, anyone?

    This raises the question "Can evolving machines be controlled?"

    It's possible that any machine capable of changing its logic could change logic that says "DON'T do this..." if it thinks it is an improvement to itself.

    -Bryan
  • by wackybrit ( 321117 ) on Saturday December 29, 2001 @02:44AM (#2761642) Homepage Journal
    Thompson's chip was doing its work preternaturally well. But how? Out of 100 logic cells he had assigned to the task, only a third seemed to be critical to the circuit's work.

    Isn't this how a regular brain works? Or at least close. I recall being taught something called the 80/20 rule, which supposedly applies to almost anything and everything. Doesn't 20% of the brain do 80% of the work?

    This article is pretty interesting though. I'm not sure how much is true (newsobserver is hardly the New Scientist) but these devices look like they could be the way of the future.

    Some people will argue that it's merely a computer program running in these chips and that 'real' creatures are actually 'conscious'. How do we know that? How do we know that the mere task of processing is not 'consciousness'?

    On the other hand, how do we know that animals are self-aware? When I watch ants, I could just as easily be watching SimAnt, for all the intelligence they seem to have. A computer could do pretty much everything as spontaneously and as accurately as an ant could.

    I think as the years pass by, we'll see chips pushing the envelope. Soon we'll have chips that can act in *exactly* the same way as a cat or dog brain. Then what will be the difference between the 'consciousness' of that chip and the consciousness of an average dog? I say: none.

    I don't like to call this Artificial Intelligence. It's real intelligence. Who knows that some sort of 'god' didn't just program us using their own form of electronics based on carbon rather than silicon?

    One day we'll reach human level. I can't wait.
  • playing god (Score:4, Interesting)

    by Jonavin ( 71006 ) on Saturday December 29, 2001 @02:45AM (#2761644) Homepage
    Although this is far from creating life, it makes you wonder if our existence is also "unexplainable" even by _the_creator_ (if you believe in such a thing).

    Imagine if you advance this technology to the point where you can dump a bunch of this stuff on a planet, wait a few million years, and come back to see what happens....
  • by LazyDawg ( 519783 ) <`lazydawg' `at' `hotmail.com'> on Saturday December 29, 2001 @02:55AM (#2761657) Homepage
    Nor are FPGAs. Transputers and other self-modifying pieces of computing equipment are pretty nifty boxen, but until these stories end with descriptions of tools that indicate to scientists exactly *how* their toys are doing these amazing feats, they will not be useful for general consumption.

    For example, if the transputer this guy was using generated FPGA configurations, which were then automatically translated into some Forth dialect, then his new processors could be refactored into other, more von Neumann-like equipment more easily.

    A few months ago when I was first designing my stockbot, I faced similar problems trying to work with neural networks and other correlation engines. The process time was slow, and the strategies they used were not easily portable. In the end I went with a stack-based language and randomly generated code that examines historical prices. It has worked out a LOT better in the long run.
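
    As a toy illustration of that last approach (the opcode set, the scoring rule, and the price data below are all hypothetical, not details from the actual stockbot), a randomly generated program in a tiny stack-based language can be scored on how well it predicts the next price in a series:

        # Hypothetical sketch: random programs in a tiny stack-based language,
        # scored on how well they predict the next price in a made-up series.
        import random

        OPS = ["price[-1]", "price[-2]", "price[-3]", "add", "sub", "mean2"]

        def run(program, prices):
            stack = []
            for op in program:
                if op.startswith("price"):
                    stack.append(prices[int(op[6:-1])])   # push a recent price
                elif op == "add" and len(stack) >= 2:
                    stack.append(stack.pop() + stack.pop())
                elif op == "sub" and len(stack) >= 2:
                    b, a = stack.pop(), stack.pop()
                    stack.append(a - b)
                elif op == "mean2" and len(stack) >= 2:
                    stack.append((stack.pop() + stack.pop()) / 2)
            return stack[-1] if stack else 0.0

        def score(program, history):
            # Mean absolute error when predicting each price from the ones before it.
            errors = [abs(run(program, history[:i]) - history[i])
                      for i in range(3, len(history))]
            return sum(errors) / len(errors)

        history = [10, 11, 10.5, 11.2, 11.8, 12.1, 11.9, 12.4]   # made-up prices
        candidates = [[random.choice(OPS) for _ in range(5)] for _ in range(200)]
        best = min(candidates, key=lambda p: score(p, history))
        print("best random program:", best, "error:", round(score(best, history), 3))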
  • by BlueJay465 ( 216717 ) on Saturday December 29, 2001 @02:57AM (#2761665)
    I could be off my rocker, but a SWAG [everything2.com] that occurred to me could be that he may have stumbled upon a Natural Law (e.g. 'gravity' or 'no two forms of matter can occupy the same space at any given time') that has always been in existence and has manifested itself in this. Evolution could very well be the correct term, at a light-speed rate of course. Could this be the first step toward determining or simulating where the source of life came from, or could this lead to the destruction of it? (insert your favorite Sci-Fi scenario here)
  • by Anonymous Coward on Saturday December 29, 2001 @03:03AM (#2761671)
    This experiment happened a hell of a long time ago - it was even mentioned in The Science of Discworld, which IIRC came out in 1999.
  • by Uller-RM ( 65231 ) on Saturday December 29, 2001 @03:29AM (#2761727) Homepage
    One thing people should consider is that while Genetic Algorithms are neat, they are limited.

    Here's the fundamental decoder-based GA (a minimal sketch in Python follows below):
    * Take an array of N bit strings of identical length.
    * Write a function, called the fitness function, that treats a single element of the array as a solution to your problem and rates how good that solution is as a floating point number. Rate every bit string in the population of N.
    * Take the M strings with the highest ratings. Create N-M new strings by randomly picking two or more parent strings, randomly picking a spot or two in them, and combining the pieces.
    * Rinse and repeat until the entire population is identical.
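
    A minimal sketch of that loop in Python (the "count the ones" fitness function and all parameter values are placeholders, not anything from the post):

        # Toy decoder-based GA: keep the best M strings, refill the rest by crossover.
        import random

        GENOME_BITS, POP_SIZE, KEEP = 32, 40, 10

        def fitness(bits):
            # Placeholder "decoder": the score is simply the fraction of 1s in the string.
            return sum(bits) / len(bits)

        def crossover(a, b):
            # Single-point crossover: a prefix of one parent plus a suffix of the other.
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                      for _ in range(POP_SIZE)]

        for generation in range(500):
            if len(set(map(tuple, population))) == 1:   # entire population identical
                break
            ranked = sorted(population, key=fitness, reverse=True)
            survivors = ranked[:KEEP]                   # the M highest-rated strings
            children = [crossover(*random.sample(survivors, 2))
                        for _ in range(POP_SIZE - KEEP)]
            population = survivors + children

        best = max(population, key=fitness)
        print("best genome:", "".join(map(str, best)), "fitness:", fitness(best))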

    Their main limitation is that they take a lot of memory: take the number of bits in a genome and multiply by the population size. Processing time also grows exponentially with both population size and parent genome grouping. The other problem is that they require the problem to have a quantifiable form of measurement - how do you rate an AI as a single number?

    Another limitation is commonly called the "superman" problem: what happens if, very early in your generations, you get a genome by chance that rates very, very high but isn't perfect? Imagine a human walking out of the apes, albeit with only one arm. It'll dominate the population. GAs do not guarantee an optimal solution. For some problems this isn't an issue, or it can be avoided or reduced to a very small probability. For others, it is unacceptable.

    That said, you can do some neat shit with them. This screenshot is from a project I did during undergraduate studies at UP [up.edu], geared towards an RTS style of game, automatically generating waypoints between a start and end position. I'll probably clean it up sometime, add a little guy actually walking around the landscape, stick it in my portfolio. Yay, OpenGL eye candy. [pointofnoreturn.org]
  • by smasch ( 77993 ) on Saturday December 29, 2001 @03:57AM (#2761779)
    I found the paper [susx.ac.uk] on this project, and I found a few things disturbing. First of all, there was no clock: the circuit was completely asynchronous. In other words, the only timing reference they had was the timing of the FPGA itself. Trying to do something like this in silicon is difficult, and doing it in an FPGA is just plain insane. Delays in a circuit vary with just about everything: power supply voltage (and noise), temperature, different chips, the current state of the circuit, and so on. While you might be able to deal with these problems in a custom chip, an FPGA was never designed to be stable in these respects. Also mentioned is that there are several cells in the circuit that appear to have no real use, but when removed, the circuit ceases to operate. As they mention, this could be because of electromagnetic coupling or coupling through the power supplies. Again, I would never want to see something like that in one of my chips.

    Another thing that bothers me: how the heck does he know which cells are being used? Last time I checked, the bitstream (programming) files for these chips are extremely proprietary, and nobody (except XILINX) has the formats for these files. I really want to know how they know how this thing is wired.

    Now I should mention, this is pretty cool from an academic standpoint, and it would be interesting if they could produce something that is both stable and useful using these techniques. It's also pretty cool that they could get this to work at all.
  • by anshil ( 302405 ) on Saturday December 29, 2001 @04:11AM (#2761791) Homepage
    I recall being taught something called the 80/20 rule, that applies to almost anything and everything.

    Pah, that's one of those all-unifying statements I shudder at whenever I see it, normally used by fanatics. I forget which scientist said "It seems every new theory is first far overstated, before it finds its right place in science" - this was especially true when the theory of evolution was new and was applied to practically everything, including plenty of places where it did not fit at all.

    For AI, our calculation capability is still far from being able to "simulate" a human brain. The human brain has 20 giga neurons, with 2000-5000 synapses per neuron (the basic calculation unit), resulting in a storage requirement somewhere on the order of 10-100 terabytes depending on how much you store per synapse. It is frightening that, in 2001, this is not so far out of reach: theoretically we would already have enough storage capacity to "store" a human brain on hard disk. But in terms of calculation capability we are, luckily, still years away, since all the neurons in our brain can work in parallel. We have outrageous serial calculation capability, but the brain's capacity for parallel computing is still enormously beyond us.

    To get near the human brain, the von Neumann machines we use today with a central CPU are the wrong way: although in raw serial terms they can already match the human brain in key respects, they will not match its ability to do a huge number of calculations at the same time. The way to match it lies not in the CPU but in FPGAs, and here we're still light years away. How many cells ("neurons") does a typical high-performance LCA have today? 10,000 maybe? Well, that is still far, far away from the 20,000,000,000 I have in my head :o) I can still sleep in peace, not worrying about seeing AI in my lifetime, but if the doubling law of computing power holds, my children might have to face it.
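
    A rough back-of-envelope using the figures above (the one-byte-per-synapse assumption is mine, not the poster's):

        # Back-of-envelope storage estimate using the comment's figures.
        neurons = 20e9                                   # "20 giga neurons"
        synapses = [neurons * s for s in (2000, 5000)]   # ~4e13 .. 1e14 synapses
        terabytes = [s / 1e12 for s in synapses]         # 40 .. 100 TB at 1 byte/synapse
        print(synapses, terabytes)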
  • tripe! tripe! (Score:4, Interesting)

    by fireboy1919 ( 257783 ) <rustyp AT freeshell DOT org> on Saturday December 29, 2001 @05:07AM (#2761847) Homepage Journal
    It is quite arguable that current hardware implementations aren't the fastest way to solve most problems (we currently eliminate complex behaviours and use only predictable gate structures), since routing alone is known to be an NP-complete problem, making the combined problem of routing plus the other variables at least NP-complete. Eliminating variables makes it easy to pick a solution that is known to work, but it will not necessarily yield the optimum design.

    It is, in fact, "some bizarre magic," so to speak, not because we do not understand it, but because it requires considerable algorithmic search to find such an efficient (quick, small and effective) state through which the machine can produce its effect. It's magic in the same sense that a chess-playing program is magic.

    The insight that you fail to grasp is that with this technique, we can take advantage of those variables that you say we should eliminate, making designs better. This allows for the possibility of a much wider range of functionality for chips than we currently have for them.

    As far as complexity goes, what kind of bacteria are you thinking of that this is so far from? The techniques used in neural networks are almost all taken straight from biology. The major simplification is the lack of frequency encoding; that's pretty much it - everything else works pretty much the same. Perhaps you're under the impression that the "evolution" of bacteria changes their basic behavior. That happens extremely seldom - usually changes in bacteria are no more drastic than the cosmetic changes that occur in a "mutating" FPGA design.
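
    For what it's worth, a minimal sketch (mine, not the poster's) of the simplification being described: a biological neuron signals with a spike frequency, while the usual artificial neuron collapses that into a single activation value per step.

        # Rate-model artificial neuron: weighted sum squashed to one value in (0, 1).
        import math

        def artificial_neuron(inputs, weights, bias=0.0):
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-total))        # sigmoid activation

        # The three inputs stand in for average firing rates of upstream cells.
        print(artificial_neuron([0.2, 0.9, 0.1], [1.5, -0.7, 2.0], bias=0.1))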

    So...at least we can have the complexity of bacteria to do the work of genius hardware designers using search techniques to produce better designs.

    One thing further, though: if nature is any indication, it is extremely difficult to increase the level of complexity of an organism (or in this case, of a network). I would agree that "intelligent" machines that make you into toast are a long way off, because we can't make evolving machines - only learning ones, even if they do use genetic algorithms to do it (which is essentially what viruses and bacteria do regularly, I might add).
  • by larien ( 5608 ) on Saturday December 29, 2001 @06:53AM (#2761922) Homepage Journal
    I believe one use that has been found for them is in creating exam timetables; you have a clear set of guidelines (e.g. you want these exams spaced out, these cannot clash, etc.) and you leave a computer to work them out. IIRC, Edinburgh University [ed.ac.uk] uses a program based on GAs for this very purpose.
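
    To make that concrete (the constraints and penalty weights here are hypothetical, not from any real timetabling system), the GA's fitness function can simply count how badly a candidate timetable violates the guidelines:

        # Hypothetical sketch: score a candidate exam timetable for a GA (lower is better).
        def timetable_penalty(timetable, clash_pairs, min_gap=2):
            penalty = 0
            for exam_a, exam_b in clash_pairs:           # pairs of exams sharing students
                gap = abs(timetable[exam_a] - timetable[exam_b])
                if gap == 0:
                    penalty += 100                        # hard constraint: direct clash
                elif gap < min_gap:
                    penalty += 10                         # soft constraint: too close together
            return penalty

        # Example: Maths/Physics and Physics/Chemistry share students.
        candidate = {"Maths": 0, "Physics": 1, "Chemistry": 1}
        print(timetable_penalty(candidate, [("Maths", "Physics"), ("Physics", "Chemistry")]))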

    Also, a lot of what is being discussed sounds like Neural Networks as well; gates interlinking and 'learning'. I found it interesting during my MSc, and the field shows some promise if it can get past the issue discussed above: "how do you trust something you can't explain?"

  • by mvw ( 2916 ) on Saturday December 29, 2001 @08:06AM (#2761967) Journal
    The major point is that conventional digital circuit logic is based on a certain ideal model.

    Some of the assumptions of this model are:

    1. we have two states 0 and 1
    2. states evolve over time controlled by a regular clock signal
    3. signals propagate by conventional electric current (moving electrons)
    But guess what: a typical physical device implements only an approximation of this model.

    For example, we say a certain voltage range is interpreted as a logical 0, and a certain higher voltage range is interpreted as a logical 1.

    But the evolutionary algorithm was not constrained in any fashion to make use of this ideal digital model only. It can and will make use of all the degrees of freedom that the physical system - which is what the FPGA device actually is - offers.

    With the result that analog circuits might evolve (which use more than just 0 and 1 values), or that we might get electromagnetic signal transport (Thompson reported some spiral structures which might work as electromagnetic waveguides); it might even employ some quantum-mechanical effect that could be explained only by advanced semiconductor physics.

    One might say that the approximation process the evolutionary algorithm carries out started in the domain of digital devices and converged out of that domain into the wider domain of physical devices.

    This has a couple of drawbacks:

    • the resulting design is harder to understand
    • individual FPGA chips vary slightly, which is no problem in a digital world, where ranges in the specification allow for slight variations among individual chips; but the resulting evolved design might work only with certain chips, because it has much narrower tolerances than the production spec takes into account

    I wonder what would have happened if the algorithm had a control step after each evolution step which ensured that the next-generation design would operate strictly under the assumptions of a conventional digital device model; in that case the evolution process should converge towards a classical design. Would it still have been something that is hard to understand?

    Perhaps in that case it is easier to stick to software simulation of the design.

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...