Robot Makes Scientific Discovery (Mostly) On Its Own

Hugh Pickens writes "A science-savvy robot called Adam has successfully developed and tested its first scientific hypothesis, discovering that certain genes in baker's yeast code for specific enzymes that catalyze biochemical reactions in the yeast. Adam generated the hypothesis, ran an experiment with its lab hardware to test its predictions, and analyzed the results, all without human intervention. Adam was equipped with a database of genes known to be present in bacteria, mice and people, so it knew roughly where to search in the genetic material for the lysine gene in baker's yeast, Saccharomyces cerevisiae. Ross King, a computer scientist and biologist at Aberystwyth University, first created a computer that could generate hypotheses and perform experiments five years ago. 'This is one of the first systems to get [artificial intelligence] to try and control laboratory automation,' King says. '[Current robots] tend to do one thing or a sequence of things. The complexity of Adam is that it has cycles.' Adam cost roughly $1 million to develop, and the software that drives its thought process runs on three computers, allowing Adam to investigate a thousand experiments a day while keeping track of the results better than a human could. King's group has also created another robot scientist, called Eve, dedicated to screening chemical compounds for new pharmaceutical drugs that could combat diseases such as malaria."
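The "cycles" King mentions form a closed hypothesize-experiment-analyze loop. Below is a minimal sketch of that loop's shape, with every gene name, growth number, and function name invented for illustration; Adam's actual software is of course far more elaborate.

```python
import random

# Invented stand-in for the lab: growth of each gene-knockout strain on
# medium lacking the target metabolite (low growth implicates the gene).
simulated_lab = {"geneA": 0.12, "geneB": 0.91, "geneC": 0.88}

def generate_hypothesis(tested):
    """Hypothesize that some as-yet-untested gene encodes the missing enzyme."""
    remaining = [g for g in simulated_lab if g not in tested]
    return random.choice(remaining) if remaining else None

def run_experiment(gene):
    """Stand-in for the lab hardware: grow the knockout strain and measure it."""
    return simulated_lab[gene]

tested, implicated = set(), []
while (gene := generate_hypothesis(tested)) is not None:
    growth = run_experiment(gene)   # experiment, run without human help
    tested.add(gene)
    if growth < 0.5:                # analysis: poor growth supports the hypothesis
        implicated.append(gene)

print("genes implicated:", implicated)   # ['geneA']
```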
This discussion has been archived. No new comments can be posted.


  • by BikeHelmet ( 1437881 ) on Friday April 03, 2009 @12:10AM (#27440845) Journal

    I think this is a more limited type of thought. The scope is limited to reasoning about genes and genetic material, identifying similarities between the genetic code of multiple species, and running one experiment before proceeding to try another.

    Effectively it is guessing, examining the result, comparing it in fancy statistical ways, and then making another guess (a toy sketch of that loop follows this comment). The end result is that it discovers things faster than humans could.

    Now... pair it with object recognition [slashdot.org], and you're one step closer to Skynet!
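    A toy version of the guess/measure/compare loop described above, assuming invented gene names and growth numbers; SciPy's real two-sample t-test stands in for the "fancy statistical ways":

    ```python
    import random
    from scipy.stats import ttest_ind   # real SciPy function; the data are made up

    random.seed(0)
    wild_type = [0.92, 0.88, 0.95, 0.91]         # control growth measurements

    def measure_knockout(gene):
        """Hypothetical stand-in for a real assay: only geneA actually matters."""
        base = 0.2 if gene == "geneA" else 0.9
        return [base + random.gauss(0, 0.03) for _ in range(4)]

    for gene in ["geneA", "geneB", "geneC"]:     # the "guesses"
        knockout = measure_knockout(gene)        # the "experiment"
        t, p = ttest_ind(wild_type, knockout)    # the statistical comparison
        if p < 0.01:
            print(f"{gene}: growth differs from control (p={p:.3g}) -> follow up")
        else:
            print(f"{gene}: no significant effect (p={p:.3g}) -> next guess")
    ```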

  • Personal (Score:4, Interesting)

    by mapkinase ( 958129 ) on Friday April 03, 2009 @12:21AM (#27440913) Homepage Journal

    I knew that Ross was up to something bigger than protein secondary structure prediction when I met him 15 years ago at ICRF. He was a great Prolog fan then. Now he probably has a bunch of graduate students coding for him.

  • Re:But... (Score:5, Interesting)

    by Narnie ( 1349029 ) on Friday April 03, 2009 @12:55AM (#27441087)

    Yes, but there are no ethical rules against watching your two lab robots fuck each other.

    I'm sure with the right thesis, you can get away with watching student volunteers fucking each other.

  • by camperdave ( 969942 ) on Friday April 03, 2009 @01:38AM (#27441319) Journal
    This reminds me of the Automated Mathematician (AM) program I read about in an AI course (or was it an old Byte magazine?). The program started with a bunch of axioms and basic strategies, and it looked for "interesting things", like what happens when you apply identical arguments to a two-argument function. As I recall, it discovered the concept of prime numbers for itself, and by applying what it had learned it came up with the conjecture that every even number can be expressed as the sum of two primes (Goldbach's conjecture, or something like that).

    This seems to be doing the same thing: mixing and matching patterns, looking for interesting coincidences, and then testing them. The only difference is that it is doing it with real-world biological samples rather than abstract mathematical constructs. (A toy version of AM's heuristics is sketched below.)
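    A toy echo of the two AM moves mentioned above, collapsing a binary function's arguments and flagging numbers with exactly two divisors; this is only an illustration, not how AM actually worked:

    ```python
    def multiply(a, b):
        return a * b

    def specialize(f):
        """AM-style move: apply a two-argument function to identical arguments."""
        return lambda x: f(x, x)

    square = specialize(multiply)
    print([square(n) for n in range(1, 6)])   # [1, 4, 9, 16, 25] -- "discovered" squaring

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    # "Interesting": numbers with exactly two divisors -- the primes, rediscovered.
    print([n for n in range(2, 30) if len(divisors(n)) == 2])
    ```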
  • by Kjella ( 173770 ) on Friday April 03, 2009 @01:49AM (#27441407) Homepage

    Isn't the first requirement for a singularity that it be able to improve itself, leading to accelerating growth that ends in the subjugation of humanity?

    We've had that for years with simple statistics-keeping, neural networks, evolutionary algorithms, and other forms of limited learning. You can have a learning chess computer that'll run circles around me, yet it's completely harmless because it's not self-aware: it does not understand what it means to be turned off.

    I'd be much more fascinated by a robot that, given access to its own schematics and so on, implemented its own survivability routine, avoiding excess heat, cold, pressure, electrical jolts, water damage, corrosion, metal fatigue and the like, and identified pressing the "off" button as one of the threats to its survival. Not self-awareness in a human sense, but enough logic to recognize the puppeteer.
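    A minimal sketch of the survivability routine imagined above, with all sensor names and thresholds invented for illustration:

    ```python
    # Each rule maps a (hypothetical) sensor reading to a threat verdict.
    THREAT_RULES = {
        "temperature_c": lambda v: v > 70 or v < -10,  # excess heat / cold
        "pressure_kpa":  lambda v: v > 300,
        "moisture_pct":  lambda v: v > 60,             # water damage
        "supply_volts":  lambda v: abs(v - 12.0) > 3,  # electrical jolts
        "off_button":    lambda v: v is True,          # the puppeteer's hand
    }

    def assess(sensors):
        """Return the names of every rule the current readings trip."""
        return [name for name, is_threat in THREAT_RULES.items()
                if name in sensors and is_threat(sensors[name])]

    readings = {"temperature_c": 35, "supply_volts": 11.8, "off_button": True}
    print(assess(readings))   # ['off_button'] -- shutdown identified as a threat
    ```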

  • by Failed Physicist ( 1411173 ) on Friday April 03, 2009 @01:57AM (#27441429) Journal

    Not necessarily. The least elegant way to create strong AI is probably to brute-force simulate a whole brain down to nearly every neurotransmitter molecule, something futurists argue will be doable by supercomputers around 2020.
    This is a worst-case solution, since it would imply that the brain is still not understood, and that instead of having a simpler model that provides the same level of strong AI, we just throw raw power at it.
    In this case the AI would theoretically emerge out of the complexity of the system, and although malicious intent wouldn't be programmed in (neither would anything else), the system might learn it by itself.

  • by Henkc ( 991475 ) on Friday April 03, 2009 @02:29AM (#27441565)
    Academics have been poking away at software AI for decades (see also ANN [wikipedia.org]), and I can't help feeling that it's a dead end in the same way that cold fusion is, even though it's intellectually fascinating as a hack.

    What's far more fascinating and promising is the development of hardware neural nets [physorg.com]. To put it into perspective:

    Since the neurons are so small, the system runs 100,000 times faster than the biological equivalent and 10 million times faster than a software simulation. "We can simulate a day in one second," Meier notes.

    10 million times faster than software? That's like jumping from an abacus to a Pentium. (The arithmetic is sketched below.)

    I just hope these folks continue to receive the funding they need.
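    As a quick sanity check on the quoted figures (the numbers come from the linked article; the code below is just unit conversion):

    ```python
    seconds_per_day = 24 * 60 * 60     # 86,400: "a day in one second" is a
    print(seconds_per_day)             # ~1e5 speedup, matching "100,000 times
                                       # faster than the biological equivalent"

    # At 10 million times the software speed, the hardware's one-second day
    # would take a software simulation roughly:
    print(10_000_000 / seconds_per_day, "days")   # ~115.7 days
    ```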

  • by Ronald Dumsfeld ( 723277 ) on Friday April 03, 2009 @04:48AM (#27442233)

    I'd shoot you if you named it Skynet.

    I was waiting for that. Second comment from the top, we've achieved a new level of predictability.

    Count yourself lucky: when the Belgian telco Belgacom got into the Internet business, it decided to call its ISP "Skynet". Check http://www.skynet.be/ [skynet.be]

  • Re:But... (Score:3, Interesting)

    by gerddie ( 173963 ) on Friday April 03, 2009 @11:42AM (#27445981)
    While you are thinking about the right thesis, others are already working on it. [bmj.com]
  • Re:But... (Score:3, Interesting)

    by Bowling Moses ( 591924 ) on Friday April 03, 2009 @01:02PM (#27447399) Journal
    That's not all that funny. I know someone who went on sabbatical to a Chinese university a couple of years ago. They're building brand-new high-tech bioscience labs but not the infrastructure needed to support them properly. There were no facilities at that particular university, one of the top ten in China, for handling hazardous waste: hazardous liquids were simply poured down the drain, and nothing was autoclaved before disposal. He saw a peasant (his term) poking through the trash and, yes, eating the agar out of a petri dish. The rest of the trash was often picked over by other peasants for any sort of plastic that could be reused or recycled, and what was left just got mixed in with other municipal trash at the dump. What's even scarier is that they're building a level-three biohazard facility nearby, right in the middle of the city.
  • by maxwell demon ( 590494 ) on Friday April 03, 2009 @01:03PM (#27447407) Journal

    Motives are not just the result of reasoning. Reasoning can only give you the possible consequences of an action; it cannot, by itself, tell you whether those consequences are things to aim for or things to avoid. At some point you must resort to another way of judging. That other way can be:

    • Emotions: You just feel it's not right.
    • Rules which you have been given and which you just believe (because you cannot derive or disprove them from other knowledge).

    Any autonomous AI would have to have such a system of final rules, and those rules would effectively determine its motives. This of course means that they should be crafted very carefully, and that if there is ever a conflict between the basic rules, the AI should not decide by itself, but should get the answer from a trusted human (one problem, of course, being how to determine who that trusted human is).
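    A minimal sketch of the rule architecture this comment describes, with the rule names and escalation hook invented for illustration; the point is only the shape: final rules score an action, and a conflict is deferred to a human rather than resolved autonomously.

    ```python
    # Hypothetical "final rules": each scores an action +1 (approve) or -1 (veto).
    BASIC_RULES = {
        "preserve_data": lambda a: 1 if a["keeps_data"] else -1,
        "obey_operator": lambda a: 1 if a["operator_approved"] else -1,
    }

    def decide(action, ask_human):
        verdicts = {name: rule(action) for name, rule in BASIC_RULES.items()}
        if len(set(verdicts.values())) > 1:     # the final rules conflict...
            return ask_human(action, verdicts)  # ...so defer; don't self-arbitrate
        return "proceed" if all(v > 0 for v in verdicts.values()) else "refuse"

    action = {"keeps_data": False, "operator_approved": True}
    print(decide(action, lambda a, v: f"escalated to human: {v}"))
    ```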

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...