
Computers Shown To Be Better Than Docs At Diagnosing, Prescribing Treatment 198

Posted by Soulskill
from the boop-beep-you-have-cancer-boop-beep dept.
Lucas123 writes "Applying the same technology used for voice recognition and credit card fraud detection to medical treatments could cut healthcare costs and improve patient outcomes by almost 50%, according to new research. Scientists at Indiana University found that using patient data with machine-learning algorithms can drastically improve both the cost and quality of healthcare through simulation modeling. The artificial intelligence models used for diagnosing and treating patients obtained a 30% to 35% increase in positive patient outcomes, the research found. This is not the first time AI has been used to diagnose and suggest treatments. Last year, IBM announced that its Watson supercomputer would be used in evaluating evidence-based cancer treatment options for physicians, driving the decision-making process down to a matter of seconds."
  • Interesting (Score:5, Interesting)

    by Intrepid imaginaut (1970940) on Wednesday February 13, 2013 @06:08PM (#42889425)

    I find this interesting. I was wondering when we'd reach the point where the knowledge accumulated in any given field exceeds the ability of the human mind to grasp it completely in a useful manner. Sooner rather than later, the only unit capable of actually doing anything will be a group of experts on a given subject, each with a fair idea of the related subjects; apparently in medicine, at least, computers have come to the rescue.

    I suppose that with the many specialisations in every area we're already there; the question is whether we can repeat the improved returns in areas like physics and chemistry.

  • Re:Interesting (Score:5, Interesting)

    by ColdWetDog (752185) on Wednesday February 13, 2013 @06:46PM (#42889833) Homepage

    We could start by getting some real information instead of the pair of nearly identical fluff pieces in TFA. While it's nice that they used Markov decision processes, as best as I can tell they ran a bunch of simulations on pre-existing data and came out with 'better' decisions than the docs who, unfortunately, were dealing with the problems in real time.

    The lawyers have had this sort of thing for years. It's called a 'retrospectascope'. It tells you what you SHOULD have done after you know what the outcome is.

    Very, very helpful. To lawyers anyway, to doctors, not so much.

    I'd love to see some real computerized decision analysis that would be useful in real time medicine. I'd love to have "all" of the information about a patient in real time.

    I'd also like a pony and one million dollars. Before I get worried about job security, and before everyone goes all Star Trek, let's see if this works in a real clinical setting.
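    The approach debated above, solving a Markov decision process offline against historical data, can be sketched in a few lines. Everything below (states, actions, transition probabilities, rewards) is invented purely for illustration and is not taken from the actual study:

```python
# Toy Markov decision process for "treatment planning": pick, in each
# health state, the action that maximizes expected discounted reward.
# All numbers here are made up for illustration.

STATES = ["sick", "improving", "well"]
ACTIONS = ["treat_a", "treat_b"]

# P[state][action] -> list of (next_state, probability)
P = {
    "sick":      {"treat_a": [("improving", 0.6), ("sick", 0.4)],
                  "treat_b": [("improving", 0.3), ("sick", 0.7)]},
    "improving": {"treat_a": [("well", 0.5), ("improving", 0.5)],
                  "treat_b": [("well", 0.7), ("improving", 0.3)]},
    "well":      {"treat_a": [("well", 1.0)],
                  "treat_b": [("well", 1.0)]},
}
REWARD = {"sick": -1.0, "improving": 0.0, "well": 1.0}
GAMMA = 0.9  # discount factor

def value_iteration(tol=1e-6):
    """Standard value iteration: sweep until values stop changing."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {s: REWARD[s] + GAMMA * max(
                     sum(p * V[s2] for s2, p in P[s][a]) for a in ACTIONS)
                 for s in STATES}
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return V_new
        V = V_new

def best_action(V, s):
    """Greedy policy with respect to the converged values."""
    return max(ACTIONS, key=lambda a: sum(p * V[s2] for s2, p in P[s][a]))

V = value_iteration()
print(best_action(V, "sick"))       # treat_a: better odds of improving
print(best_action(V, "improving"))  # treat_b: better odds of recovery
```

    Note that this is exactly the "retrospectascope" worry in code form: the whole optimization is only as good as the transition probabilities estimated from historical data.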

  • by anthony_greer (2623521) on Wednesday February 13, 2013 @07:02PM (#42890015)

    This sort of thing is just what big pharma wants: no human interaction or careful consideration, just a pill dispenser...symptom A + symptom B == Pill 2...
    How much do you wanna bet this thing always prescribes expensive non-generic drugs and never tries the 50-to-70-year-old known treatments that are usually the first steps before new, expensive drugs are prescribed?

    Also, has anyone noticed the change in medical advertising and communications? They never say "ask your doctor" any more; it's "ask your prescriber" or "ask your provider"...like they want to disintermediate doctors and are getting the public ready.

  • by TheCrazyMonkey (1003596) on Wednesday February 13, 2013 @07:04PM (#42890035)

    And will the system consider the patients age/cost to treat/insurance level/likelihood of patient paying future insurance premiums to make up for expenses?

    It will if you program it to. Things like this are tools. As a relatively young doctor (a resident), I welcome things like this. Every doctor I know uses reference material; some of it is printed on dead trees and some is electronic. Today, there's not much difference. But the point is that there's too much medical knowledge for one person to keep it all in their head at one time. If something like this were to come to market, it wouldn't be replacing doctors, it would be augmenting them. Machines do what we tell them to; they always have and (hopefully) always will. False rivalries like this completely miss the point. I would love to have a computer algorithm that could correctly diagnose 99% of the time, even if it were flagrantly wrong the other 1%. That's why humans are in the loop.

  • Re:Mycin (Score:2, Interesting)

    by Tailhook (98486) on Wednesday February 13, 2013 @07:49PM (#42890513)

    Who could possibly be opposed to cheap, automated healthcare?

    Doctors [ama-assn.org]. Obviously.

    People that can do math see Obamacare as infeasible [latimes.com] given current practice and the number of practicing doctors. Doctors vociferously oppose [fiercehealthcare.com] delegating anything, however.

    We're going to have to break the doctor monopoly in the US. The cost has gotten too high to indulge this exclusivity any longer. Automation, nurse practitioners, whatever. It's got to end. If there is anything good about Obamacare it is that this issue will be forced.

    I don't wish to see Doctors punished, but the fact is that tens of millions of people are about to arrive in their offices with uncancel-able, no-lifetime-limit, fixed-rate Obamacare and a lifetime of accumulated, untreated damage. At the very least this is going to force a LOT of delegation.

    Physics. It's a bitch.

  • Re:Mycin (Score:5, Interesting)

    by AaronLS (1804210) on Wednesday February 13, 2013 @07:51PM (#42890527)

    I believe they are already in slightly diminished roles. The US military has triage lines where family members call in about medical problems; a registered nurse answers and then decides whether the patient should self-care, book a doctor's appointment, or go to the emergency room. I handled appointment booking, and sometimes the nurse would call when no appointments were available; they'd get annoyed at me and say, "Well, that's what the computer told me to do."

    I figured they had some sort of system that the nurse entered symptoms into, and that it used the patient history plus symptoms to suggest self-treatment or triage to an appointment or the emergency room. I had also read about these systems in the book AI: A Modern Approach. Even when the doctor doubted the diagnosis, the computer could explain its conclusion (which is pretty advanced for an expert system), and that would usually elicit an "oh, I didn't consider that factor" kind of realization from the doctor.

    I assume a registered nurse must still be involved to meet legal requirements, to properly elicit symptom information, and to serve as a sanity check for the system. The problem, demonstrated by that response and by the inability to troubleshoot problems, is that they become completely trustful of the system. I imagine the opposite problem is also common: not trusting the system at all.
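    A triage line like the one described can be sketched as a classic rule-based expert system, where the fired rules double as the explanation. The rules and thresholds below are invented for illustration only, not taken from any real clinical protocol:

```python
# Minimal rule-based triage sketch. Classic expert systems (e.g. MYCIN)
# worked this way: fire matching rules, and keep the names of the fired
# rules as the "explanation" for the recommendation.

RULES = [
    # (rule name, predicate over the symptom dict, triage level)
    ("chest pain -> ER",        lambda s: s.get("chest_pain"), "emergency"),
    ("high fever -> appt",      lambda s: s.get("temp_f", 98.6) >= 103, "appointment"),
    ("mild fever -> self-care", lambda s: 99 <= s.get("temp_f", 98.6) < 103, "self_care"),
]
LEVELS = ["self_care", "appointment", "emergency"]  # ascending urgency

def triage(symptoms):
    """Return (triage level, explanation) for a dict of symptoms."""
    fired = [(name, level) for name, pred, level in RULES if pred(symptoms)]
    if not fired:
        return "self_care", ["no rule fired; default to self-care"]
    # The most urgent fired rule wins; fired rule names explain why.
    level = max((lvl for _, lvl in fired), key=LEVELS.index)
    return level, [name for name, _ in fired]

level, why = triage({"temp_f": 103.5})
print(level, why)  # appointment ['high fever -> appt']
```

    The explanation list is what makes such systems auditable by the nurse in the loop, and it is the part the commenter above found "pretty advanced."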

  • Re:Mycin (Score:5, Interesting)

    by Tailhook (98486) on Wednesday February 13, 2013 @08:31PM (#42890893)

    No need to go overseas. The Veterans Administration uses so-called nurse triage lines with an expert system to direct patients to care over the phone. They're building a mobile, tablet-based [govhealthit.com] system now:

    The combined solution, called ER Mobile, will make it possible for nurses to perform timely, accurate triage on a mobile device anywhere in the ER, as well as create a comprehensive record that will be recorded in the VA EMR.

    Shazam. Tri-corder.

    The VA isn't nearly as slavishly obedient to the AMA as private practice, and they definitely don't have employer-provided health insurance systems to bilk, so things like this (delegation to nurses) get traction.

  • by quantumghost (1052586) on Wednesday February 13, 2013 @09:13PM (#42891267) Journal
    As an attending physician, I have several issues with this article.

    A) The Slashdot title is a little sensationalistic....never does TFA mention diagnosis without a physician in the loop.

    B) By what standard was the final diagnosis established (i.e., the gold standard)? Another physician? Another program? Was the trial blinded?

    C) This article mentions only one disease process, depression. I fail to accept blindly that their results can be extrapolated; that is the crux of medical versus scientific research....see D. Not all diagnoses are obtained just by talking with a patient; in fact, short of a psychiatric diagnosis, most require a physical exam....and a competent one. Suppose someone is obviously malingering and complaining about abdominal pain....this system would not pick up on the malingering and would likely recommend an operation....a totally wrong diagnosis.

    D) This is a retrospective study...in medicine, that is not adequate proof of effectiveness.....you need to perform a prospective trial, preferably with randomization and blinding, to adequately prove your hypothesis for treatment. Actually, upon re-reading TFA...it was _simulations_ that were performed. This is hardly world-class evidence.

    E) Cost savings were mentioned, but not long-term outcomes....who cares if I saved 75% on the cost of treatment if the patient didn't get better in the end? (Yes, short-term outcomes were noted, but anyone who's ever been on long-term therapy knows that the short term does not dictate the long-term outcome.)

    F) In life threatening situations - those that require the most expedient decisions, often with less than complete information, this system would be useless because the patient would die in the time it takes you to input the facts.

    G) Not all situations are cut and dried. I am often consulted to make decisions about patients that are not addressed in any book. In fact, there may be only one or two journal articles about the problem, and often there are none. Making a treatment decision in the absence of an established precedent is not going to be one of this system's strengths...."Oh, I'm sorry, I can't help you....I just got the blue screen of death from the program that was supposed to diagnose you!"

    H) Would this program tolerate patient autonomy? What happens when the patient refuses some or all of the initial treatment plan?

    So, while I point out flaws, that is not to say this is totally without merit....I am merely pointing out the obvious shortcomings of this article. In certain fields this could be very advantageous.

    I will tell you that in my field, this computer program borders on useless. There is very little doubt about what my diagnosis is, and when I am in doubt, my best evidence is collected by doing something. And computers are a long way from matching my skill set. A lot of my diagnosis is made by touching the patient during the physical exam, and that exam can completely revamp a decision that started from the history. And since I am the one performing procedures, I would not have a machine dictate the exact method that I use; I am the one performing the operation, and I do it the way that I know will result in a safe and effective outcome. In my case, I just don't know what this system would provide for my patients.

  • Re:Interesting (Score:4, Interesting)

    by loneDreamer (1502073) on Wednesday February 13, 2013 @11:49PM (#42892417)
    I know something about machine learning, so let me tell you how it works. The input is partitioned into two sets, a training set and a test set. The training set is used to teach the algorithm, and the test set to measure its performance. So while we know the outcomes for the second set, the computer does not; it is literally seeing those cases for the first time, as if the patient had just come in for a consultation. The decision accuracy is then computed by comparing the new output with the known outcomes we had reserved, to see if they match. And it does this in real time. It IS a real clinical setting!
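    That held-out evaluation can be sketched without any ML library. The "model" below is just a majority-class predictor and the data is synthetic, both invented for illustration; the point is the split, not the model:

```python
# Held-out evaluation sketch: fit on a training split, score on a test
# split the model has never seen. Labels here are synthetic.
import random

def train_test_split(data, test_frac=0.3, seed=0):
    """Shuffle once with a fixed seed, then carve off a test set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]  # train, test

def fit_majority(train):
    """The simplest possible 'model': predict the most common label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(predicted_label, test):
    return sum(1 for _, y in test if y == predicted_label) / len(test)

# 75 "improved" vs 25 "not_improved" synthetic patients.
data = [(i, "improved" if i % 4 else "not_improved") for i in range(100)]
train, test = train_test_split(data)
model = fit_majority(train)
print(model, round(accuracy(model, test), 2))
```

    Real studies do the same thing with stronger models and cross-validation, but the principle is identical: accuracy is only ever reported on cases held back from training.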

    So no, while I understand your fears, calling anything in ML a "retrospectascope" is completely wrong and ignorant. In fact, if you build such an algorithm, one that just memorizes past outcomes, it tends to behave very poorly, since it loses the power to generalize (the technical term is "overfitting").
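    Overfitting is easy to demonstrate with a few lines of purely synthetic data (everything below is invented for illustration): a model that memorizes the training set is perfect on data it has seen and falls apart on held-out data.

```python
# Overfitting in miniature: a lookup-table "model" vs a trivial baseline.
import random

rng = random.Random(42)

def make_data(n):
    # Feature x is pure noise; labels are 1 about 60% of the time,
    # independently of x, so x carries no real signal.
    return [(rng.random(), 1 if rng.random() > 0.4 else 0) for _ in range(n)]

train, test = make_data(200), make_data(200)

memorized = dict(train)  # "model" = lookup table of the training set

def memorizer(x):
    # Return the label of the nearest memorized training point.
    return memorized[min(memorized, key=lambda k: abs(k - x))]

def constant_model(x):
    return 1  # majority-class baseline

def acc(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(acc(memorizer, train))      # 1.0 -- perfect recall of seen data
print(acc(memorizer, test))       # near chance on unseen data
print(acc(constant_model, test))  # baseline, typically higher here
```

    The memorizer is the "retrospectascope" done badly; the whole discipline of held-out evaluation exists to catch exactly this failure.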

    Truth is, it's a good thing that you would love to get some of the things you mention, since the article is saying you'll get them (and I can attest to that). Very soon. Don't believe me? Look at Watson in action [youtube.com] and think hard about what the computer is doing. It might seem like a game, but really think about what is going on. It is not a movie script. It does not know the answers; it is UNDERSTANDING the questions and COMING UP WITH the right answers, faster than the best humanity has to offer. Are you smarter or more knowledgeable than them? The truth is indeed astonishing and might look like science fiction, but it is not.

    The pony though might take some time ;-)
  • by nbauman (624611) on Thursday February 14, 2013 @02:12AM (#42893049) Homepage Journal

    Those are all very good points. I just spent half an hour going through the articles, press release, and article itself (which is available here http://arxiv.org/abs/1301.2158 [arxiv.org] http://www.caseybennett.com/uploads/Bennett_AI_ClinicalDecisionMaking__Article_in_Press_.pdf [caseybennett.com] ) trying to figure out how they determined that the program diagnosed patients better than doctors. I couldn't do it. And it didn't look like it was worth another hour of trying to figure it out.

    I invite anybody to explain that to me. What do they mean by a "30-35% increase in patient outcomes"?

    For example, the press release says:

    "This was at the same time that the AI approach obtained a 30 to 35 percent increase in patient outcomes," Bennett said. "And we determined that tweaking certain model parameters could enhance the outcome advantage to about 50 percent more improvement at about half the cost."

    What does that mean -- "a 30 to 35 percent increase in patient outcomes"? Does that mean the program treated patients with diabetes and got 30% fewer foot ulcers? Or 30% lower blood sugar? Or 30% longer survival? Or did they reduce the weight of overweight patients by 30%? Did they get 30% more patients to stop smoking? Did they diagnose 30% more cases of colon cancer?

    They don't seem to have defined their outcomes or endpoints.

    This is one of those times when you wish they had to publish in a peer-reviewed journal where an editor would have made them answer some obvious questions.

    It looks like an entirely theoretical study. I don't see where they compared their predictions to real-world data. And if they did, how would they decide that they're right and the doctors are wrong?

    They're like economists who come up with clever theories that ignore the real world.

    This reminds me of the story about the efficiency expert who heard a symphony orchestra. There's nothing here to indicate that they understand anything about medicine.
