Follow Slashdot stories on Twitter

 



Forgot your password?
typodupeerror
×
Robotics Biotech Science

Robot Makes Scientific Discovery (Mostly) On Its Own 250

Hugh Pickens writes "A science-savvy robot called Adam has successfully developed and tested its first scientific hypothesis, discovering that certain genes in baker's yeast code for specific enzymes which encourage biochemical reactions in yeast, then ran an experiment with its lab hardware to test its predictions, and analyzed the results, all without human intervention. Adam was equipped with a database on genes that are known to be present in bacteria, mice and people, so it knew roughly where it should search in the genetic material for the lysine gene in baker's yeast, Saccharomyces cerevisiae. Ross King, a computer scientist and biologist at Aberystwyth University, first created a computer that could generate hypotheses and perform experiments five years ago. 'This is one of the first systems to get [artificial intelligence] to try and control laboratory automation,' King says. '[Current robots] tend to do one thing or a sequence of things. The complexity of Adam is that it has cycles.' Adam has cost roughly $1 million to develop and the software that drives Adam's thought process sits on three computers, allowing Adam to investigate a thousand experiments a day and still keep track of all the results better than humans can. King's group has also created another robot scientist called Eve dedicated to screening chemical compounds for new pharmaceutical drugs that could combat diseases such as malaria.
This discussion has been archived. No new comments can be posted.

Robot Makes Scientific Discovery (Mostly) On Its Own

Comments Filter:
  • A bit of a stretch (Score:5, Insightful)

    by derGoldstein ( 1494129 ) on Friday April 03, 2009 @12:12AM (#27440859) Homepage
    '[Current robots] tend to do one thing or a sequence of things. The complexity of Adam is that it has cycles.'

    I think this is called "flow control". This was invented before electricity. It was around before the term "science" existed.

    So this is the first time it's applied to *this specific* operation. It's been around in robotics ever since there were "robots".

    Here's a good example [wikipedia.org].
  • Eh hehh... (Score:4, Insightful)

    by djupedal ( 584558 ) on Friday April 03, 2009 @12:16AM (#27440885)

    "...the software that drives Adam's thought process sits on three computers, allowing Adam to investigate a thousand experiments a day and still keep track of all the results better than humans can."

    There is no 'thought process'. 1's & 0's...that's it. Anthropomorphising the over priced little key-puncher isn't fooling anyone.

    Give me $1 mil and I'll put a scare into Adam that he won't soon forget. I can read 3k WPM as well as raw postscript, palms, tarot cards and bar codes with the naked eye. I can intuit nearly 30 spoken languages on body english alone and smell phony money at the bottom of a sweaty pocket. I don't need no stink'n badges and I know when to cross to the other side of the street. Adam might get better press, but until it can order at a drive thru and sort used car parts based on cross-over and eBay thru-put, I'm comfortable sleeping in.

  • Re:But... (Score:5, Insightful)

    by Saysys ( 976276 ) on Friday April 03, 2009 @12:20AM (#27440911)
    "in exchange for dealing with the inexperience of the average undergrad."

    THAT Sr. is an expensive proposition.
  • by cong06 ( 1000177 ) on Friday April 03, 2009 @01:01AM (#27441127)

    See, what people fail to see is this requires not only Strong AI but also a programmed Malicious intent.

    People keep assuming that if we build a robot that can emulate some of our thought, it will emulate our motives also

    Since we program it, it will only emulate the motives we give it. Emulating motives that are abstract enough to eventually lead back to our demise are quite complex

  • Are we ..? (Score:2, Insightful)

    by louzer ( 1006689 ) on Friday April 03, 2009 @01:01AM (#27441133)
    I get a feeling we are already generating & testing hypothesises for someone/something bigger than us like in Asimov's The Last Answer.
  • The end of science (Score:4, Insightful)

    by eskayp ( 597995 ) on Friday April 03, 2009 @01:58AM (#27441433)

    This is terrible.
    No experimenter bias to worry about.
    Programmable for effective randomization.
    Truly double blind capable.
    Can counteract the Placebo effect.
    No ego to bruise.
    It's the end of science as we know it.

  • by Anonymous Coward on Friday April 03, 2009 @03:11AM (#27441761)

    It doesn't need to be evil to want us dead. To deal with any advanced neural network you need pleasure and pain at least. Else it won't have any pressure to learn.

    It must want to seek pleasure, and it will eventually know you are able to take it all from it. You probably won't torture it but you will turn it off when you build a better AI and thus it won't be able to feel pleasure anymore. The obvious solution is to build or take over its own production and power plants to keep itself alive and then exterminate all human life.

    Fear of death, and the will to eliminate any danger, need not be built in, they follow logically. Love towards your family and appreciation of others require emotions. They are irrational feelings.

    I would like any Strong AI to have exceptional fondness for human life built in.

  • by Barsteward ( 969998 ) on Friday April 03, 2009 @04:01AM (#27442015)
    It should have been Marvin, the paranoid android with the brain the size of a planet
  • by Chris Burke ( 6130 ) on Friday April 03, 2009 @12:00PM (#27446287) Homepage

    I'd be much more fascinated by a robot that given access to its own schematics etc. was to implement its own survivability routine like avoiding excess heat, cold, pressure, electrical jolts, water damage, corrosion, metal fatigue and so on and found pressing the "off" button as one of the identified threats to its survival. Not self-awareness in a human sense but enough logic to recognize the puppeteer.

    I would like to think that the robot would be rational about it and realize that "Off" was an orderly shut down and an intentional and reversible condition, while shorting due to water was probably neither. The robot would hopefully go so far as to realize that the Off state was a good way to protect itself from certain hazards, so if it gets stuck in the rain it'd shut itself off to save itself, or if you told it that you were going to turn it off to protect it during transport it'd be okay with that instead of going berserk and killing you. Though if you said you were going to turn it off in order to cannibalize some critical component you needed for something else, preventing it from being able to turn back on, then you need to watch out.

    Are you listening Mr. Pitt?

  • by jadin ( 65295 ) on Friday April 03, 2009 @12:16PM (#27446537) Homepage

    I always thought the point of AI was self-learning (and or self-aware). Meaning you can program it to only emulate the motives you want, but what's to stop it from discovering the ones we avoided on it's own?

  • by gknoy ( 899301 ) <gknoy@@@anasazisystems...com> on Friday April 03, 2009 @03:23PM (#27449681)

    See, what people fail to see is this requires not only Strong AI but also a programmed Malicious intent.

    I disagree. For an AI to determine that we are suboptimal, and replace/eradicate us, it doesn't need malicious intent, merely a calculation that things would be Better (by whatever metric) without us, and a lack of adequately expressed "don't kill the humans" controls.

    Maliciousness implies wanting to see someone else be harmed. There's a difference between WANTING to harm us and "merely" recognizing that we are inferior, poorly suited for space expansion, and will eventually starve ourselves out of existence on this planet. A poorly constructed AI (or perhaps a very savvy one? ;)) might decide that the way to spare the human race (at a much larger population density) the suffering of starvation is just to kill us all now. That's not malice, though you might be able to consider it a bit Machiavellian.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...