Robot Makes Scientific Discovery (Mostly) On Its Own
Hugh Pickens writes "A science-savvy robot called Adam has successfully developed and tested its first scientific hypothesis: it predicted that certain genes in baker's yeast code for specific enzymes that catalyze biochemical reactions in yeast, then ran an experiment with its lab hardware to test its predictions and analyzed the results, all without human intervention. Adam was equipped with a database of genes known to be present in bacteria, mice, and people, so it knew roughly where in the genetic material of baker's yeast, Saccharomyces cerevisiae, to search for the lysine gene. Ross King, a computer scientist and biologist at Aberystwyth University, first created a computer that could generate hypotheses and perform experiments five years ago. 'This is one of the first systems to get [artificial intelligence] to try and control laboratory automation,' King says. '[Current robots] tend to do one thing or a sequence of things. The complexity of Adam is that it has cycles.' Adam cost roughly $1 million to develop, and the software that drives Adam's thought process sits on three computers, allowing Adam to investigate a thousand experiments a day and still keep track of all the results better than humans can. King's group has also created another robot scientist, called Eve, dedicated to screening chemical compounds for new pharmaceutical drugs that could combat diseases such as malaria."
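The article does not link Adam's code, so here is a minimal, self-contained Python sketch of the kind of hypothesize-experiment-analyze cycle described above. The gene names, the simulated growth assay, and every function name are hypothetical stand-ins for illustration, not Adam's actual software.

```python
import random

# Hypothetical sketch only: gene names, the simulated assay, and all function
# names are illustrative stand-ins, not Adam's actual software.

CANDIDATE_GENES = ["geneA", "geneB", "geneC"]    # placeholder ORF names
TARGET = "orphan lysine-pathway enzyme"

def run_assay(gene):
    """Stand-in for the wet-lab step: does the deletion strain fail to grow
    without supplementation? A real system would drive lab hardware here."""
    return random.random() < 0.5                 # simulated growth measurement

def run_cycles(genes, max_rounds=3):
    knowledge = {}                               # hypothesis -> list of observations
    for _ in range(max_rounds):
        for gene in genes:
            hypothesis = (gene, TARGET)          # "this gene encodes the enzyme"
            no_growth = run_assay(gene)          # design and execute the experiment
            knowledge.setdefault(hypothesis, []).append(no_growth)
        # Analyze: keep only hypotheses consistent with every observation so far.
        genes = [g for g in genes if all(knowledge[(g, TARGET)])]
        if len(genes) <= 1:                      # converged, or everything refuted
            break
    return genes

if __name__ == "__main__":
    print("surviving hypotheses:", run_cycles(CANDIDATE_GENES))
```

The point of the sketch is the cycle itself: each round's results feed the next round's hypotheses, which is what separates this from lab automation that just runs a fixed sequence.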
Please, fellow slashdotters... (Score:5, Funny)
Re:Please, fellow slashdotters... (Score:5, Funny)
I'd shoot you if you named it Skynet.
Re:Please, fellow slashdotters... (Score:5, Funny)
I'd shoot you if you named it Skynet.
I was waiting for that. Second comment from the top, we've achieved a new level of predictability.
Re:Please, fellow slashdotters... (Score:5, Funny)
I'd shoot you if you named it Skynet.
I was waiting for that. Second comment from the top, we've achieved a new level of predictability.
Okay, good. That means my /. AI is nearing perfection. I think I'll call it KDawson.
Re: (Score:2, Funny)
Ah, so you're applying for a grant to research Artificial Stupidity?
Re: (Score:3, Interesting)
I'd shoot you if you named it Skynet.
I was waiting for that. Second comment from the top, we've achieved a new level of predictability.
Count yourself lucky - The Belgian telco, Belgacom, decided when they were getting into the Internet business to call their ISP "Skynet" - check http://www.skynet.be/ [skynet.be]
Re: (Score:2)
I was expecting a "hail the new robot overlords" thing actually.
Re:Please, fellow slashdotters... (Score:5, Funny)
Re: (Score:2)
Hello "H" - missing your sunglasses today?
Now I'm just waiting for crime investigation groups to take on using robots for solving crimes.
Re:Please, fellow slashdotters... (Score:5, Funny)
Re: (Score:2)
Won't that make Alice jealous?
Re:Please, fellow slashdotters... (Score:5, Funny)
Re: (Score:2)
Somehow I knew the first "Intelligent" robots would be gay. I just didn't suspect it would be so soon. C-3PO must be right around the corner.
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:2)
Would have preferred Hal and Sal?
I know I would.
Re: (Score:2)
I'm still looking for a good book based on the premise of Creationism... it seems like an epic opportunity for a great work of Science Fiction. I would have fingered Zelazny for the project (too late) :( But it can't be a coincidence that we have all these names beginning with letters close to the beginning of the alphabet. Either the authors of the Torah were cryptologists (as has been widely speculated, heh) or there's something else interesting to play around with there. The crypto thing has been done to death, though.
Re: (Score:2)
T-1000, huh? Mine is T260G
Call me when (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Okay, then, she just HAPPENED always to land perfectly on her magical leg apparatus. Instead of e.g. landing on her head or her back, like most of us would. What was she, a cat-person?
Re:Call me when (Score:4, Funny)
Call me when it begins faking results to get published.
Re: (Score:2)
Results won't need to be faked. Just like human researchers, it will be programmed to find certain results ahead of time. No faking involved.
Re: (Score:2)
When it develops its own hypothesis and, after extensive testing, discovers it's wrong, will it feel bad?
Re: (Score:2)
We won't have to. The machines will call you with a calm message asking you to stay indoors.
Time for... (Score:3, Funny)
the union of scientists. You thought Teamsters were nasty? You ain't seen jack squat. WE SPLICE GENES!!! WE SPLIT ATOMS!!! WE (probably) MAKE BLACKHOLES!!!
Ross King, gutless traitor, you and your tin cans, your names will live in infamy.
But... (Score:5, Funny)
Re:But... (Score:5, Informative)
You joke but really undergrads are cheaper than graduate students... At least from my experience working in a biology lab in college. It was/is common practice to recruit undergrads to do free work for the labs. The undergrad gets some experience in the field and the lab gets free labor in exchange for dealing with the inexperience of the average undergrad.
Re:But... (Score:5, Insightful)
THAT, sir, is an expensive proposition.
Re:But... (Score:4, Funny)
That's nothing, you could probably get some hobos to do the work for free and save some money by having them eat the hazardous biological waste rather than disposing of it.
Re: (Score:3, Interesting)
Re: (Score:2)
....and said undergrad becomes extremely disillusioned with the current state of affairs in scientific research, and decides to go into a different field instead.
Tenured faculty do extremely little original research of their own, but are often paid five times what the graduate students make.
Where's the justice in THAT?
Re:But... (Score:5, Funny)
Yes, but there are no ethical rules against watching your two lab robots fuck each other.
Re:But... (Score:5, Interesting)
Yes, but there are no ethical rules against watching your two lab robots fuck each other.
I'm sure with the right thesis, you can get away with watching student volunteers fucking each other.
Re:But... (Score:5, Funny)
If you're a Windows system admin, you get paid to watch computers fuck each other.
Re: (Score:2, Funny)
You're also promoting unsafe sex, what with the viruses and all...
Re: (Score:3, Interesting)
Re:But... (Score:5, Informative)
The case of electronics assembly is arguably analogous. Humans are cheap, but (quite expensive) pick-and-place machines are ubiquitous. Why? Because they are faster, more precise, and more consistent than humans.
It is already starting. This piece [wired.com] describes a massive robot setup for processing brain samples (cue: whatcouldpossiblygowrong). In high-volume gene sequencing, automated equipment is common enough to essentially be a stock photo cliche by now.
Re: (Score:2)
Grad students often move on or will eventually die. Sure, they're replaced, but each replacement has to start fresh. With something like Adam, it can continually go back to previous results and not miss a single detail. Future upgrades could give it better analysis methods so it can do better hypotheses, but still retain all previous data.
Honestly, I'm not aware of the full scope, but for something like this $1MM seems like a bargain.
Re: (Score:2)
It cost $1M to develop. It would probably be a lot cheaper if produced in thousand-lot quantities. Plus, it performs 1000 trials a day, keeping track of the results flawlessly; you'd need a lot of grad students to keep up with a facility with a few dozen of these. Or a few hundred. Or a few thousand.
This sort of thing is going to be a very big deal over the coming decades. There is a very good chance that more than one person reading this post will have their life saved at some point by a cure that results from this kind of automated research.
Re: (Score:2, Funny)
Will we one day see a scientific institution operated solely on robots?
Depends on how heavy the institution is, and how strong the robots are, of course...
Re: (Score:2)
Or, it is to *become* the superior intelligences, aka. the transhumanism idea.
Re: (Score:2)
Truth be told, I find the Wall-E fantasy to be pretty grim and frightening too. AI that cares for us so well that we become lethargic, fat, and unambitious? Yes, SOME people are like that now, but what of humanity when we have no greater aspiration than to allow for AI to take care of us? Of course, if Kurzweil is right, we'll integrate with our non-biological intelligence and thinking machines will BE human.
A bit of a stretch (Score:5, Insightful)
I think this is called "flow control". This was invented before electricity. It was around before the term "science" existed.
So this is the first time it's applied to *this specific* operation. It's been around in robotics ever since there were "robots".
Here's a good example [wikipedia.org].
Eh hehh... (Score:4, Insightful)
"...the software that drives Adam's thought process sits on three computers, allowing Adam to investigate a thousand experiments a day and still keep track of all the results better than humans can."
There is no 'thought process'. 1's & 0's...that's it. Anthropomorphising the overpriced little key-puncher isn't fooling anyone.
Give me $1 mil and I'll put a scare into Adam that he won't soon forget. I can read 3k WPM as well as raw postscript, palms, tarot cards and bar codes with the naked eye. I can intuit nearly 30 spoken languages on body english alone and smell phony money at the bottom of a sweaty pocket. I don't need no stink'n badges and I know when to cross to the other side of the street. Adam might get better press, but until it can order at a drive thru and sort used car parts based on cross-over and eBay thru-put, I'm comfortable sleeping in.
Re: (Score:2)
Re: (Score:3, Informative)
Personal (Score:4, Interesting)
I knew that Ross was up to something bigger than protein secondary structure prediction when I met him 15 years ago at ICRF. He was a great Prolog fan then. Now he probably has a bunch of graduate students coding for him.
Gender bender (Score:5, Funny)
The complexity of Adam is that it has cycles.
No, no, no -- the complexity of *Eve* is that it has cycles.
Robot Bio-Research (Score:2)
Next thing you know, the robot will abduct a pretty female lab assistant to experiment on. [imdb.com]
Re: (Score:2)
Say what you want, that movie was excellent :)
Bender would say... (Score:3, Funny)
Well, we're boned.
Re: (Score:2)
No, he'd say "Hey, sexy mama! Wanna kill all humans?"
Are we ..? (Score:2, Insightful)
The first step to a singularity? (Score:4, Funny)
Re: (Score:3, Interesting)
Isn't the first requirement for a singularity that it's able to improve itself, thus leading to an accelerating growth that ends in the subjugation of humanity?
We've had that for years with simple statistics keeping, neural networks, evolutionary algorithms and other ways of limited learning. You can have a learning chess computer that'll run circles around me, yet it's completely harmless because it's not self-aware - it does not understand what it means to be turned off.
I'd be much more fascinated by a robot that given access to its own schematics etc. was to implement its own survivability routine like avoiding excess heat, cold, pressure, electrical jolts, water damage, corrosion, metal fatigue and so on and found pressing the "off" button as one of the identified threats to its survival. Not self-awareness in a human sense but enough logic to recognize the puppeteer.
Re: (Score:3, Insightful)
I'd be much more fascinated by a robot that given access to its own schematics etc. was to implement its own survivability routine like avoiding excess heat, cold, pressure, electrical jolts, water damage, corrosion, metal fatigue and so on and found pressing the "off" button as one of the identified threats to its survival. Not self-awareness in a human sense but enough logic to recognize the puppeteer.
I would like to think that the robot would be rational about it and realize that "Off" was just an orderly shutdown.
Re: (Score:2)
Re: (Score:2)
Just remember to build morality into the AI. Culture Minds FTW.
Re: (Score:2, Funny)
Why? So it can feel bad after it's killed us all?
Re: (Score:2)
Re: (Score:2)
Isn't the first requirement for a singularity that it's able to improve itself, thus leading to an accelerating growth that ends in the subjugation of humanity? If so, wouldn't it be prudent to withhold knowledge of the scientific method as long as possible?
No. If a machine is going to run the world, then we don't want it to be a creationist.
Robot or automated lab? (Score:3, Informative)
I used to work with Motoman K6's a few years back. Using these robots, we performed plasma cutting, arc welding, material handling, etc... Just looking at the K6, you knew it was a robot. Watching a robot work in a cell after you've trained it to do its job is a very rewarding experience. Of course we also had other machines that were very complex in their tasks, but we didn't consider them robots: CNC mills and lathes, pipe benders, other machines that ran autonomously and also had to be programmed and synchronized with the flow of production. Sometimes the line resembled a kind of demented Rube Goldberg contraption, but we were fairly strict in defining only the articulated manipulators themselves as robots.
So when I saw this pile of servos in a glass cleanroom set to the over-dramatic theme of "Bonanza Reloaded", I thought, "Yeah, that's nice, but... It just doesn't strike me as a 'robot' so much as it does an automated bio lab."
And yes, I realize there were clearly robots within the cell, but calling the unit as a whole a "robot" just irks me a little.
Of course, in the spirit of all the other bad jokes I've seen posted, do you think this "robot" will use its genetic findings with the yeast cells to perfect the most delicious and moist cake recipe ever?
Re: (Score:2)
No. The cake is a lie.
Automated Mathematician (Score:4, Interesting)
This seems to be doing the same thing: mixing and matching patterns, looking for interesting coincidences, and then testing for them. The only difference is that this is doing it with real world biological samples, and not abstract mathematical constructs.
Re: (Score:2)
Me too, and there still is an earlier ancestor, namely the Logic Theorist [wikipedia.org] (1955). Looking at how far AI has come since then, I meanwhile wonder whether basically logic-based pattern matching will cut it.
And yes, this thing is not a robot at all (which/who(?), IMHO, should have a considerable repository of 'senses' to get a grasp of what is going on in the real world (an interesting observation is that this repository gradually gets lost from humans)).
Press release crap..How? (Score:2)
Yeah, cute. I'd be more impressed if there was a link to the code that showed how it worked. The Scientific American article was particularly disappointing. I remember when SA gave you enough information to learn something.
The end of science (Score:4, Insightful)
This is terrible.
No experimenter bias to worry about.
Programmable for effective randomization.
Truly double blind capable.
Can counteract the Placebo effect.
No ego to bruise.
It's the end of science as we know it.
Re: (Score:2)
No built-in heuristics which were thought up by the developer, who was totally isolated from social influences (probably some kind of condition along the autistic spectrum).
Programmable for effective randomization.
Perfect 'well-definedness' of "effective"; amen.
Truly double blind capable.
Yes, one hand not knowing what the other is doing.
Can counteract the Placebo effect.
Is free to ignore hard to explain observations.
No ego to bruise.
A frozen, muted
Lysine? (Score:5, Funny)
So, our future AI overlords begin their research with the Lysine Contingency? Should we be worried?
Re: (Score:3, Funny)
So, our future AI overlords begin their research with the Lysine Contingency? Should we be worried?
Of course we should. Next thing you know, they'll be cloning dinosaur shock troops.
so after all this grad school... (Score:2, Funny)
Throwing darts (Score:2)
So the robot accomplished its one discovery how?
allowing Adam to investigate a thousand experiments a day and still keep track of all the results better than humans can.
Throwing darts... and eventually hitting something.
Woop woop!
Re: (Score:2, Funny)
I for one welcome our infinite Shakespearian typewriter-monkey overlords.
Software AI == Cold fusion (Score:3, Interesting)
What's far more fascinating and promising is the development of hardware neural nets [physorg.com]. To put it into perspective:
Since the neurons are so small, the system runs 100,000 times faster than the biological equivalent and 10 million times faster than a software simulation. "We can simulate a day in one second," Meier notes.
10 million times faster than software? That's like jumping from an abacus to a Pentium.
I just hope these folks continue to receive the funding they need.
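As a quick sanity check on those figures (my arithmetic, not the article's), a simulated "day in one second" is a factor of 86,400, which is in the same ballpark as the claimed 100,000x over biology:

```python
# Quick sanity check of the quoted speedup (my arithmetic, not the article's).
seconds_per_day = 24 * 60 * 60
print(seconds_per_day)   # 86400 -- roughly the claimed 100,000x over biology
```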
Re: (Score:2)
The question here (which has plagued me from time to time for decades) is whether, given these conditions, a day is still a day. IMHO, hard to decide.
CC.
Re: (Score:2)
Any neuron, be it natural or artificial, hardware or software or wetware, performs a very simple task: what mathematicians call an "inner product", or "dot product".
The difference between what you call "fast" and "slow" artificial neurons is: does it need to be differentiable for the training process? If the training process is some sort of gradient-climbing algorithm, then the function has to be differentiable.
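For anyone who wants the parent's point made concrete, here is a one-neuron Python sketch; the sigmoid activation, the squared-error loss, and the numbers are illustrative assumptions, not anything from the hardware project discussed above.

```python
import math

# One-neuron illustration: the core operation is a dot product, and
# gradient-based training works because the activation is differentiable.
# Sigmoid, loss, and numbers are assumptions for illustration only.

def neuron(weights, inputs, bias=0.0):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias   # the dot product
    return 1.0 / (1.0 + math.exp(-z))                        # sigmoid: smooth, differentiable

def sigmoid_grad(y):
    """Derivative of the sigmoid, written in terms of its output y."""
    return y * (1.0 - y)

# One gradient step on a single training example (squared-error loss).
w, x, target, lr = [0.2, -0.4], [1.0, 0.5], 1.0, 0.1
y = neuron(w, x)
delta = (y - target) * sigmoid_grad(y)                       # chain rule through the activation
w = [wi - lr * delta * xi for wi, xi in zip(w, x)]           # nudge the weights downhill
print(y, w)
```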
Re: (Score:2)
achieved... what?
Academically interesting formulas and algorithms with limited application and no ROI?
Which is something, while hardware AI has been around for a finite time and achieved nothing. When trying to compare the actual accomplishments of the respective AIs, my neural net gets a divide by zero error.
Also Interesting (Score:2)
http://blog.wired.com/wiredscience/2009/04/newtonai.html [wired.com]
Car analogies (Score:2, Funny)
Scientists might have to learn how to program?! (Score:2)
From the Science article: "In the future, he says, scientists, in order to carry out their work, might have to learn how to program computers and express knowledge about the world the way people in artificial intelligence have done."
Huh? Weird. Scientists might have to learn how to program computers? Who would have expected it?
But can it publish (Score:2)
If it can't publish, it will never get tenure, will be fired, and will then be hired by Rumsfeld, who is still looking for WMD.
Industrial usage? (Score:2)
Long term, if it does take off, I can see industry going for it big time. It would mean the possibility of having less-skilled staff (i.e., a smaller wage bill) to come up with new ideas. I feel it's no coincidence they throw the C. elegans genome at it to see what sticks, rather than problems from the world of physics.
Re: (Score:3, Interesting)
I think this is a more limited type of thought. The scope is limited to thinking about genes, genetic material, and identifying similarities between genetic code from multiple species, then running an experiment, checking the result, and proceeding to the next experiment.
Effectively it is guessing, examining the result, comparing it in fancy statistical ways, then making another guess. The end result is it discovers something faster than humans could.
Now... pair it with object recognition [slashdot.org], and you're one step closer to Skynet.
Re: (Score:2)
Effectively it is guessing, examining the result, comparing it in fancy statistical ways, then making another guess.
So according to you, this is thinking? Sounds more like computing to me, which would explain why a computer would be so good at it, but if one chooses to personify its behavior, so be it!
Re: (Score:2)
Then it would conclude that in whatever report it generates after finishing its experiments.
Re: (Score:2, Informative)
Awwwww. I've got mod points but I've already posted above. That really made me chuckle.
Your description above of guessing and stats is a really good non-technical description of how the system works. The first step is to actually analyse what is already known about the problem domain, then some guesswork is applied about how to improve that knowledge. The nice thing about King's work is those guesses translate directly into automated experiments, and then the system can close the loop - the results can be analysed and fed back into the next round of guesses.
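To put that "guessing and stats" loop in concrete terms, here is a toy Bayesian update over two competing hypotheses in Python. It illustrates the generic close-the-loop idea only; it is not King's actual algorithm, and all the numbers are invented.

```python
# Toy "guess, measure, compare statistically, guess again" loop: keep a
# probability for each competing hypothesis and update it with Bayes' rule
# after every experiment. Generic sketch, not King's algorithm.

def bayes_update(priors, likelihoods, observed):
    """priors: {hypothesis: P(h)}; likelihoods: {hypothesis: {result: P(result | h)}}."""
    unnorm = {h: priors[h] * likelihoods[h][observed] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two competing hypotheses about a deletion strain, and one growth experiment.
priors = {"gene encodes the enzyme": 0.5, "gene is unrelated": 0.5}
likelihoods = {
    "gene encodes the enzyme": {"no growth": 0.9, "growth": 0.1},
    "gene is unrelated":       {"no growth": 0.2, "growth": 0.8},
}
posterior = bayes_update(priors, likelihoods, "no growth")
print(posterior)   # the updated beliefs drive the next round of "guesses"
```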
Re:Robot discovers Humans "unnecessary"... (Score:5, Insightful)
See, what people fail to see is this requires not only Strong AI but also a programmed Malicious intent.
People keep assuming that if we build a robot that can emulate some of our thought, it will emulate our motives also.
Since we program it, it will only emulate the motives we give it. Emulating motives that are abstract enough to eventually lead back to our demise is quite complex.
Re:Robot discovers Humans "unnecessary"... (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Or the killbot "kill threshold".
Re: (Score:2)
We could always build them with OFF switches as well.
Yes, but they are located under the rocket arm, and near the gratuitously unnecessary buzz-saw attachment.
Re: (Score:2)
Data from Star Trek had an off switch and he turned out okay.
Re: (Score:3, Interesting)
Not necessarily. The least elegant way to create strong AI is probably to brute force simulate a whole brain down to nearly every neurotransmitter molecule, something which futurists argue will be doable by supercomputers around 2020.
This is a worst case solution since it would imply that the brain is not understood yet and instead of having a simpler model that can provide the same level of strong AI we just throw raw power at it.
In this case, the AI would theoretically emerge out of the complexity of the simulation.
Re: (Score:2)
"but also a programmed Malicious intent."
Crap! Look at the technologies/ecologies that have been replaced by humans. Little was done with malicious intent.
"Emulating motives that are abstract enough to eventually lead back to our demise is quite complex"
Other way around, isn't it? Emulating motives that allow for intellectual value is more complex.
Re: (Score:2)
Emulating motives that are abstract enough to eventually lead back to our demise is quite complex
No, it's not. Ignoring the engineering aspect (i.e., to build a robot in the first place), it seems pretty easy to ask the robot to "solve world hunger" or "global warming", and have it figure out that "killing all humans" is the quickest solution.
Re: (Score:3, Insightful)
I always thought the point of AI was self-learning (and/or self-awareness). Meaning you can program it to emulate only the motives you want, but what's to stop it from discovering the ones we avoided on its own?
Re: (Score:3, Insightful)
I disagree. For an AI to determine that we are suboptimal, and replace/eradicate us, it doesn't need malicious intent, merely a calculation that things would be Better (by whatever metric) without us, and a lack of adequately expressed "don't kill the humans" controls.
Maliciousness implies wanting to see someone else be harmed. There's a difference between WANTING to harm us and "merely" recognizing that things would be better without us.
Re: (Score:3, Funny)
What if it concludes that humans are genetically inefficient and decides to replace them with a specie designed by itself?
Humans replaced by coins? Now that is a dystopian future that even Philip K. Dick never considered.
May God have mercy on us.
Re:Robot discovers Humans "unnecessary"... (Score:4, Funny)