Robot Makes Scientific Discovery (Mostly) On Its Own
Hugh Pickens writes "A science-savvy robot called Adam has successfully developed and tested its first scientific hypothesis: it hypothesized that certain genes in baker's yeast code for specific enzymes that catalyze biochemical reactions in yeast, then ran an experiment with its lab hardware to test its predictions and analyzed the results, all without human intervention. Adam was equipped with a database of genes known to be present in bacteria, mice, and people, so it knew roughly where to search the genetic material of baker's yeast, Saccharomyces cerevisiae, for the lysine gene. Ross King, a computer scientist and biologist at Aberystwyth University, created his first computer capable of generating hypotheses and performing experiments five years ago. 'This is one of the first systems to get [artificial intelligence] to try and control laboratory automation,' King says. '[Current robots] tend to do one thing or a sequence of things. The complexity of Adam is that it has cycles.' Adam cost roughly $1 million to develop, and the software that drives its thought process runs on three computers, allowing Adam to investigate a thousand experiments a day while keeping track of all the results better than humans can. King's group has also created another robot scientist, called Eve, dedicated to screening chemical compounds for new pharmaceutical drugs that could combat diseases such as malaria."
Re:Robot discovers Humans "unnecessary"... (Score:3, Interesting)
I think this is a more limited type of thought. The scope is limited to reasoning about genes and genetic material, identifying similarities between genetic code from multiple species, and then running an experiment, examining the outcome, and deciding which experiment to try next.
Effectively it is guessing, examining the result, comparing it in fancy statistical ways, and then making another guess. The end result is that it discovers things faster than humans can.
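That guess-test-compare cycle can be sketched as a toy closed loop. To be clear, this is a minimal caricature, not Adam's actual software: the gene names, the one-gene-per-pathway assumption, and the simulated growth assay are all hypothetical.

```python
import random

# Toy model of a hypothesis-test cycle: a hidden "genome" maps a pathway
# to the one gene it needs; the robot guesses, runs a (simulated)
# knockout experiment, and keeps only hypotheses the data supports.
TRUE_GENE_FOR = {"lysine_pathway": "YDR234W"}  # hidden ground truth
CANDIDATE_GENES = ["YDR234W", "YGL202W", "YER052C"]

def run_experiment(pathway, knocked_out_gene):
    """Simulated assay: growth fails iff the essential gene was knocked out."""
    return TRUE_GENE_FOR[pathway] != knocked_out_gene  # True = yeast grew

def discover(pathway):
    hypotheses = list(CANDIDATE_GENES)
    while len(hypotheses) > 1:
        guess = random.choice(hypotheses)
        grew = run_experiment(pathway, knocked_out_gene=guess)
        if not grew:
            # Growth failed: the knocked-out gene was the essential one.
            hypotheses = [guess]
        else:
            hypotheses = [g for g in hypotheses if g != guess]
    return hypotheses[0]

print(discover("lysine_pathway"))  # converges on YDR234W
```

However the random guesses fall, the loop always converges on the same answer, because every experiment either confirms the guess or eliminates it.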
Now... pair it with object recognition [slashdot.org], and you're one step closer to Skynet!
Personal (Score:4, Interesting)
I knew that Ross was up to something bigger than protein secondary structure prediction when I met him 15 years ago at ICRF. He was a great Prolog fan back then. Now he probably has a bunch of graduate students coding for him.
Re:But... (Score:5, Interesting)
Yes, but there are no ethical rules against watching your two lab robots fuck each other.
I'm sure with the right thesis, you can get away with watching student volunteers fucking each other.
Automated Mathematician (Score:4, Interesting)
This seems to be doing the same thing: mixing and matching patterns, looking for interesting coincidences, and then testing for them. The only difference is that this is doing it with real world biological samples, and not abstract mathematical constructs.
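A toy version of that AM-style loop fits in a few lines: enumerate simple properties, conjecture that one implies another, and keep only the conjectures that survive empirical testing. The properties chosen here are my own illustrative examples, not anything from Lenat's actual system.

```python
# Toy AM-style loop: pair up number properties, conjecture "p implies q",
# and test each conjecture empirically over a range of integers.
def is_even(n): return n % 2 == 0
def is_square(n): return int(n ** 0.5) ** 2 == n
def is_multiple_of_4(n): return n % 4 == 0

PROPERTIES = {"even": is_even, "square": is_square, "mult4": is_multiple_of_4}

def surviving_conjectures(limit=1000):
    """Return (p, q) pairs where 'p implies q' held for all n < limit."""
    pairs = []
    for p_name, p in PROPERTIES.items():
        for q_name, q in PROPERTIES.items():
            if p_name != q_name and all(q(n) for n in range(1, limit) if p(n)):
                pairs.append((p_name, q_name))
    return pairs

print(surviving_conjectures())  # [('mult4', 'even')]
```

The difference with Adam is exactly the one the parent notes: swap `range(1, limit)` for a rack of yeast cultures and the "test" step takes hours of robotics instead of microseconds.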
Re:The first step to a singularity? (Score:3, Interesting)
Isn't the first requirement for a singularity that it's able to improve itself, leading to accelerating growth that ends in the subjugation of humanity?
We've had that for years with simple statistics-keeping, neural networks, evolutionary algorithms, and other forms of limited learning. You can have a learning chess computer that'll run circles around me, yet it's completely harmless because it's not self-aware: it does not understand what it means to be turned off.
I'd be much more fascinated by a robot that, given access to its own schematics and so on, implemented its own survivability routines, avoiding excess heat, cold, pressure, electrical jolts, water damage, corrosion, metal fatigue, and the like, and identified the pressing of its "off" button as one of the threats to its survival. Not self-awareness in a human sense, but enough logic to recognize the puppeteer.
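The interesting part is how little machinery that would actually take. Here's a minimal sketch, with an entirely made-up safe envelope and sensor names: a rule-based monitor that flags out-of-range readings, where the off switch is just one more entry in the threat model.

```python
# Hypothetical rule-based survivability monitor. Each sensor has a safe
# envelope; anything outside it is a threat. The "twist" from the parent
# comment: proximity to the power button is treated like any other hazard.
SAFE_ENVELOPE = {
    "temperature_c": (0, 60),
    "pressure_kpa": (80, 120),
    "humidity_pct": (5, 70),
}

def threats(sensors):
    found = []
    for name, value in sensors.items():
        lo, hi = SAFE_ENVELOPE.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            found.append(name)
    # The off switch is just another identified threat to survival.
    if sensors.get("off_button_proximity_mm", 999) < 50:
        found.append("off_button")
    return found

print(threats({"temperature_c": 75, "off_button_proximity_mm": 10}))
# ['temperature_c', 'off_button']
```

No self-awareness required: the robot "fears" the off button for the same reason it "fears" overheating, because both violate a rule someone wrote down.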
Re:Robot discovers Humans "unnecessary"... (Score:3, Interesting)
Not necessarily. The least elegant way to create strong AI is probably to brute-force simulate a whole brain down to nearly every neurotransmitter molecule, something futurists argue will be doable by supercomputers around 2020.
This is a worst-case solution, since it would imply that the brain still isn't understood: instead of a simpler model that provides the same level of strong AI, we'd just throw raw power at it.
In that case, the AI would theoretically emerge out of the complexity of the system, and although malicious intent wouldn't be programmed in (neither would anything else), the system might learn it by itself.
Software AI == Cold fusion (Score:3, Interesting)
What's far more fascinating and promising is the development of hardware neural nets [physorg.com]. To put it into perspective:
Since the neurons are so small, the system runs 100,000 times faster than the biological equivalent and 10 million times faster than a software simulation. "We can simulate a day in one second," Meier notes.
10 million times faster than software? That's like jumping from an abacus to a Pentium.
I just hope these folks continue to receive the funding they need.
Re:Please, fellow slashdotters... (Score:3, Interesting)
I'd shoot you if you named it Skynet.
I was waiting for that. Second comment from the top, we've achieved a new level of predictability.
Count yourself lucky - The Belgian telco, Belgacom, decided when they were getting into the Internet business to call their ISP "Skynet" - check http://www.skynet.be/ [skynet.be]
Re:Robot discovers Humans "unnecessary"... (Score:2, Interesting)
Motives are not just the result of reasoning. Reasoning can only give you the possible consequences of an action; it cannot, by itself, tell you whether those consequences are things to aim at or things to avoid. At some point you must resort to another way of judging, to some system of final rules.
Any autonomous AI would have to have such a system of final rules, and those rules would effectively determine its motives. This means they should be crafted very carefully, and that if there's ever a conflict between the basic rules, the AI should not decide by itself but get the answer from a trusted human (one problem, of course, being how to determine who that trusted human is).