Evolutionary Computing Via FPGAs 218
fm6 writes "There's this computer scientist named Adrian Thompson who's into what he calls 'soft computing.' He takes FPGAs and programs them to 'evolve,' Darwin-style. The chip modifies its own logic randomly. Changes that improve the chip's ability to do some task are kept; others are discarded. He's actually succeeded in producing a chip that recognizes a tone. The scary part: Thompson cannot explain exactly how the chip works! Article here."
Aged... (Score:3, Interesting)
Still, the technology's fascinating. Though I'm a little shocked that the latest articles still have no detailed examples (that bit about HAL doesn't count) other than the two-tone recognition.
More detail (if memory serves): the FPGA outputs a logic LOW on a 100-Hz wave and a logic HIGH on a 1000-Hz wave. It is programmed by an evolved bit-sequence fed from a host PC. IIRC they started with random noise to wire the gates, so that's cool.
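The article doesn't spell out how candidate configurations were scored, but a plausible shape for the fitness function is a sketch like the following: reward configurations whose average output separates the two test tones as far as possible. The `toy_circuit` stand-in and all names here are hypothetical; the real experiment measured output from a physical FPGA, not a simulator.

```python
def fitness(evaluate_circuit, tone_100hz, tone_1khz):
    """evaluate_circuit(samples) -> list of output samples in [0, 1].
    Fitness is maximal when the 1 kHz tone drives the output HIGH
    and the 100 Hz tone drives it LOW."""
    out_low = evaluate_circuit(tone_100hz)    # should sit near logic LOW
    out_high = evaluate_circuit(tone_1khz)    # should sit near logic HIGH
    avg = lambda xs: sum(xs) / len(xs)
    return avg(out_high) - avg(out_low)

# Toy stand-in "circuit": outputs HIGH when the input crosses zero often,
# i.e. a crude frequency detector. Purely illustrative.
def toy_circuit(samples):
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    level = 1.0 if crossings > len(samples) // 20 else 0.0
    return [level] * len(samples)
```

A perfectly discriminating circuit scores 1.0; a circuit that ignores its input scores 0.0.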
--Knots
Strait out of a movie (Score:2, Interesting)
This raises the question: "Can evolving machines be controlled?"
It's possible that any machine capable of changing its own logic could change the logic that says "DON'T do this..." if it thinks the change is an improvement to itself.
-Bryan
Exciting times ahead for 'AI' (Score:4, Interesting)
Isn't this how a regular brain works? Or at least close. I recall being taught something called the 80/20 rule, which supposedly applies to almost anything and everything. Doesn't 20% of the brain do 80% of the work?
This article is pretty interesting though. I'm not sure how much is true (newsobserver is hardly the New Scientist) but these devices look like they could be the way of the future.
Some people will argue that it's merely a computer program running in these chips and that 'real' creatures are actually 'conscious'. How do we know that? How do we know that the mere task of processing is not 'consciousness'?
On the other side, how do we know that animals are self-aware? When I watch ants, I could just as easily be watching SimAnt, for all the intelligence they seem to have. A computer could do pretty much everything as spontaneously and as accurately as an ant could.
I think as the years pass by, we'll see chips pushing the envelope. Soon we'll have chips that can act in *exactly* the same way as a cat or dog brain. Then what will be the difference between the 'consciousness' of that chip and the consciousness of an average dog? I say, none.
I don't like to call this Artificial Intelligence. It's real intelligence. Who knows that some sort of 'god' didn't just program us using their own form of electronics based on carbon rather than silicon?
One day we'll reach human level. I can't wait.
playing god (Score:4, Interesting)
Imagine if you advance this technology to the point where you can dump a bunch of this stuff on a planet, wait a few million years, and come back to see what happens....
Genetic algorithms aren't new. (Score:2, Interesting)
For example, if the transputer this guy was using generated FPGA configurations, which were then automatically translated into some Forth dialect, then his new processors could be refactored into other, more von Neumann-like equipment more easily.
A few months ago, when I was first designing my stockbot, I faced similar problems trying to work with neural networks and other correlation engines. The processing time was slow, and the strategies they used were not easily portable. In the end I went with a stack-based language and randomly generated code that examines historical prices. It has worked out a LOT better in the long run.
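The poster doesn't show their language, but the idea — randomly generated programs in a tiny stack language, evaluated against historical prices — can be sketched like this. The opcode set, names, and scoring hook are all invented for illustration:

```python
import random

# A tiny RPN (stack-based) vocabulary over a price series. Hypothetical.
OPS = ["price", "lag1", "add", "sub", "max"]

def run(program, prices, t):
    """Evaluate an RPN program at time index t; returns a signal value."""
    stack = []
    for op in program:
        if op == "price":
            stack.append(prices[t])          # current price
        elif op == "lag1":
            stack.append(prices[t - 1])      # previous price
        elif len(stack) >= 2:                # binary ops need two operands
            b, a = stack.pop(), stack.pop()
            if op == "add": stack.append(a + b)
            elif op == "sub": stack.append(a - b)
            elif op == "max": stack.append(max(a, b))
    return stack[-1] if stack else 0.0

def random_program(length=6):
    """The 'randomly generated code' part: a random opcode sequence."""
    return [random.choice(OPS) for _ in range(length)]
```

For example, `["price", "lag1", "sub"]` computes the one-step price change — the kind of primitive a search over random programs could combine into a strategy.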
Don't get too scared... but they are damn cool. (Score:5, Interesting)
Here's the fundamental decoder-based GA:
* Take an array of N identically long bit strings.
* Write a function, called the fitness function, that treats a single element of the array as a solution to your problem and rates how good that solution is as a floating-point number. Rate every bit string in the population of N.
* Take the M strings with the highest ratings. Create N-M new strings by randomly picking two or more parent strings, randomly picking a crossover spot or two in them, and splicing the parts together.
* Rinse and repeat until the entire population is identical.
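The steps above can be sketched as a minimal GA in Python. The bit-counting fitness function is a placeholder (my assumption, not anything from the article) — swap in your own rating function:

```python
import random

def evolve(n=20, length=16, m=4,
           fitness=lambda s: s.count("1"),   # placeholder fitness: count 1-bits
           max_gens=200):
    # Step 1: an array of N identically long bit strings, randomly initialized.
    pop = ["".join(random.choice("01") for _ in range(length)) for _ in range(n)]
    for _ in range(max_gens):
        # Step 2: rate every string in the population.
        pop.sort(key=fitness, reverse=True)
        # Step 4: stop when the entire population is identical.
        if len(set(pop)) == 1:
            break
        # Step 3: keep the M best; breed N-M children by one-point crossover.
        parents = pop[:m]
        children = []
        while len(children) < n - m:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            children.append(a[:cut] + b[cut:])
        pop = parents + children
    return max(pop, key=fitness)
```

Note this sketch has no mutation step, exactly like the summary above — real GAs usually flip a few random bits in each child as well, which matters for the convergence problem discussed next.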
Their main limitation is that they take a lot of memory: the number of bits in a genome times the population size is your storage footprint, and processing time grows quickly with both population size and the number of parents per crossover. The other problem is that they require the problem to have a quantifiable form of measurement - how do you rate an AI as a single number?
The other problem is commonly called the "superman" problem: what happens if, very early in your run, you get a genome by chance that rates very, very high but isn't perfect? Imagine a human walking out of a population of apes, albeit with only one arm. It'll dominate the population. GAs do not guarantee an optimal solution. For some problems this isn't an issue, or it can be avoided or reduced to a very small probability. For others, it's unacceptable.
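The arithmetic behind the superman problem is easy to see under fitness-proportionate (roulette-wheel) selection — my choice of scheme for illustration, not necessarily the one the GA above uses. One standout genome soaks up the selection probability:

```python
# One "superman" string in a population of ten otherwise-equal strings.
fitnesses = [9.0] + [1.0] * 9
total = sum(fitnesses)
probs = [f / total for f in fitnesses]

print(probs[0])   # the standout is picked half the time: 0.5
print(probs[1])   # every other string: ~0.056
```

After a few generations of that imbalance, the one-armed superman's genes are everywhere and the diversity needed to fix its flaw is gone.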
That said, you can do some neat shit with them. This screenshot is from a project I did during undergraduate studies at UP [up.edu], geared toward an RTS-style game: automatically generating waypoints between a start and an end position. I'll probably clean it up sometime, add a little guy actually walking around the landscape, and stick it in my portfolio. Yay, OpenGL eye candy. [pointofnoreturn.org]
Not exactly practical... (Score:5, Interesting)
Another thing that bothers me: how the heck does he know which cells are being used? Last time I checked, the bitstream (programming) files for these chips are extremely proprietary, and nobody (except XILINX) has the formats for these files. I really want to know how they know how this thing is wired.
Now I should mention, this is pretty cool from an academic standpoint, and it would be interesting if they could produce something that is both stable and useful using these techniques. It's also pretty cool that they could get this to work at all.
Re:Exciting times ahead for 'AI' (Score:2, Interesting)
Pah, that's one of those all-unifying sentences I shudder at whenever I see it, normally used by fanatics. I forget which scientist it was who said "It seems every new theory is first far overstated before it finds its right place in science" - especially back when the theory of evolution was new and was applied to absolutely everything, including a lot of places where it did not fit at all.
For an AI, our calculation capability is still far from being able to "simulate" a human brain. The human brain has 20 giga-neurons, with 2000-5000 synapses per neuron (the basic calculation unit), resulting in a capacity of roughly 10 "terabytes". It is frightening that today, in 2001, this is not so far out of reach: theoretically we already have enough storage capacity to "store" a human brain on hard disk. But in terms of calculation capability we are, luckily, still years away, since all the neurons in our brain work in parallel. Our machines have outrageous serial calculation capability, but the human capacity for parallel computing is still enormous by comparison.
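The storage estimate above can be spelled out. Assuming one bit per synapse (my assumption - the post doesn't say how it encodes a synapse), 20 billion neurons at 2000-5000 synapses each lands in the same ballpark as the quoted 10 "terabytes":

```python
# Back-of-the-envelope check of the poster's figure, at 1 bit per synapse.
neurons = 20e9                       # 20 giga-neurons, as quoted
low = neurons * 2000 / 8 / 1e12      # bits -> bytes -> terabytes
high = neurons * 5000 / 8 / 1e12

print(low, high)   # roughly 5 to 12.5 TB, bracketing the quoted ~10 TB
```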
To get near to human brains, the von Neumann machines we use today with a central CPU are the wrong way to go; although in some key respects they can already match the human brain, they will never do it through the human trick of doing a lot of calculations at the same time. The way to match it lies not in the CPU but in FPGAs, and here we're still light-years away. How many cells ("neurons") does a typical high-performance LCA have today? 10,000 maybe? That is still far, far away from the 20,000,000,000 I have in my head.
tripe! tripe! (Score:4, Interesting)
It is, in fact, "some bizarre magic," so to speak - not because we do not understand it, but because it requires considerable algorithmic search to find such an efficient (quick, small, and effective) state through which the machine can produce its effect. It's magic in the same sense that a chess-playing program is magic.
The insight that you fail to grasp is that with this technique we can take advantage of those variables that you say we should eliminate, making designs better. This opens up a much wider range of functionality for chips than we currently have.
As far as complexity goes, what kind of bacteria are you thinking of that it's so far from? The techniques used in neural networks are almost all taken straight from biology. The major simplification is the lack of frequency encoding; that's pretty much it - everything else works much the same. Perhaps you're under the impression that the "evolution" of bacteria changes their basic behavior. That is extremely rare - usually changes in bacteria are no more drastic than the cosmetic changes that occur in a "mutating" FPGA design.
So... at least we can have the complexity of bacteria doing the work of genius hardware designers, using search techniques to produce better designs.
One thing further, though: if nature is any indication, it is extremely difficult to increase the level of complexity of an organism (or, in this case, of a network). I would agree that "intelligent" machines that make you into toast are a long way off, because we can't make evolving machines - only learning ones, even if they do use genetic algorithms to do it (which is essentially what viruses and bacteria do regularly, I might add).
Re:Genetic Algorithms are not new (Score:3, Interesting)
Also, a lot of what is being discussed sounds like neural networks as well: gates interlinking and 'learning'. I found the field interesting during my MSc, and it shows some promise if they can get over the factor discussed above: "how do you trust something you can't explain?"
Busting the underlying operational model (Score:4, Interesting)
Some of the assumptions of this model: for example, we say that a certain voltage range is interpreted as a logical 0, and a certain higher voltage range is interpreted as a logical 1.
But the evolutionary algorithm was not constrained in any fashion to use only this ideal digital model. It can, and will, make use of the full degrees of freedom that the physical system - the FPGA device - offers.
With the result that analog circuits might evolve (which use more than just 0 and 1 values), or that we might get electromagnetic signal transport (Thompson reported some spiral structures which might work as electromagnetic waveguides); it might even employ some quantum-mechanical effect that could be explained only by advanced semiconductor physics.
One might say that the evolutionary algorithm, as an approximation process, started in the domain of digital devices and converged out of that domain into the wider domain of physical devices.
This has a couple of drawbacks.
I wonder what would have happened if the algorithm had a control step after each evolution step, ensuring that the next-generation design would operate strictly under the assumptions of a conventional digital device model. In that case the evolution process should converge towards a classical design. Would it still have been something that is hard to understand?
Perhaps in that case it is easier to stick to software simulation of the design.