Initiative for Autonomic Computing Gains Strength

museumpeace writes "Tired of fixing your computer? What if your system broke down two billion miles from the nearest spare part or human? NASA has just held a colloquium where Ulster University computer science researcher Roy Sterritt was invited to present his ideas on Autonomic Computing. In the last few years, the leading system vendors have realized 'There is no less than a crisis today in three areas: cost, availability and user experience.' There has been a fair amount of academic research, since customers like NASA see in it the potential to make remotely operated complex systems sustainable. It all makes for some very cool systems design work and there are lots of further research opportunities. Just don't forget what it may do to your job."
This discussion has been archived. No new comments can be posted.

  • by EatenByAGrue ( 210447 ) on Friday December 03, 2004 @02:46PM (#10989384)
    Yah the leading system vendors have realized there's a crisis. How else are they going to sell more systems if the ones in place now aren't dangerously unstable? They could probably explode at any minute, are toxic, and will probably delete all my data at any second.

    I better go buy a new computer.
  • Two words: (Score:2, Funny)

    by bourne_id ( 812415 )

    Automated nanobots

    Now we need only worry about the whole thing going berserk, killing the crewmembers, and attempting to destroy the Earth.


  • by ralphart ( 70342 ) on Friday December 03, 2004 @02:47PM (#10989401)
    If self-fixing computers become the norm, that means half the phone calls I get from friends will stop.

    Hmmm....bug or feature?
    • Remember: Half the phone calls, half the money/donations/whatever you want to call it. (Could just be food, beer, etc.)

Just like job security, you're not going out ...... or up.
    • That's scary to me... I don't want to find out that the only reason people are friends with me is because I can fix their computer... I have a feeling I would find that out pretty quickly heh
      • Then start turning them down one by one. When one turns out to be a techno-sycophant, go meet someone new. Eventually you'll winnow away the users.

Me, I'm past that hurdle. I just figured "why the hell am I wasting my life helping Bill Gates get away with selling crap" and laid down an "I don't do Windows" policy. Since the majority of people that were mooching computer support from me were Windows users, that decreased my workload quite a bit... and yes it did reveal one or two to be less than true fr
I was thinking the same thing, except in reference to my in-laws.
Self-fixing computers mean some form of Artificial Intelligence, which again means smarter computers in the sense that they can do more logical operations rather than crunch numbers and data only. Maybe it won't have much of an impact on computers with personal and business uses, but in the science arena it might bring about a revolution. Just imagine producing complex models in bio-chemistry or designing a chip would be so very much easier with a machine to fix stuff whenever the need be.
      • Wow, just imagine!

        Super smart computers which can fix anything. Revolutionary!

        Just imagine producing complex models in bio-chemistry or designing a chip would be so very much easier with a machine to fix stuff whenever the need be.

        And computers that write and produce sitcoms! That *would* be awesome.

        I mean, like, the sky is the limit!
    • If self-fixing computers become the norm, that means half the phone calls I get from friends will stop.

      And IM's of the substance "d0od, you there?" and "You'll never guess who was on Oprah today" from my stupid friends and relatives will increase tenfold. Some people don't deserve to have a working computer.

It doesn't matter (Score:4, Insightful)

    by Kipsaysso ( 828105 ) on Friday December 03, 2004 @02:49PM (#10989433) Homepage Journal
    It won't matter until it can fix user errors anyway.
    • It won't matter until it can fix user errors anyway.

      Well, it will take care of the second most annoying part of any IT-related job...

      Of course, we'll have lots of fun with systems constantly rebooting in attempts to 'fix' themselves... that'll be fun.

The user experience that I think autonomic computing is trying to improve is the "I don't think it's working" kind, and all that can be done in that respect is that a system be able to fail over and recover from user errors. [I think there are social issues with trying to enhance the computer's "person experience";]. And users are not the only humans a good system has to tolerate; some pretty bad things can happen [] when the system administrators screw up. Clearly autonomic computing is not going to be founded on
    • It's how it "fixes" those "user" errors that has me worried.
      • I think this is how that Nomad [] robot from Star Trek got its start...

        Sterilize soil samples...fix user errors...sterilize...users? Sterilize users! Got it!
    • I'd like to tangentisize (I love neology) and qualify that most "user errors" are the result of poor user interface and interaction design, the third of the crises listed in the article.

Indeed, the term "self-fixing" implies to me recoverability from problems including erroneous input. Input validation with range checking for reasonable values and informative feedback can catch a good amount of bad input. Add reversibility and recoverability to the mix and you have a friendly software layer protecting agai
    • The irony here is that in an ideal world we would have perfect autonomic error handling...that never did anything. Point being we should be working on debugging aids to help us produce code that doesn't need a watchdog.
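The range-checking idea discussed above can be sketched in a few lines; the sensor, function name, and limits here are invented purely for illustration:

```python
def read_temperature(raw, lo=-80.0, hi=60.0):
    """Parse a sensor reading, rejecting values outside a plausible range."""
    value = float(raw)  # raises ValueError on non-numeric input
    if not (lo <= value <= hi):
        # Informative feedback instead of silently accepting bad input.
        raise ValueError(f"{value} is outside the plausible range [{lo}, {hi}]")
    return value
```

Reversibility and recoverability would then be layered on top, e.g. by keeping the last known-good value around so a rejected input never corrupts state.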
  • by Artifakt ( 700173 ) on Friday December 03, 2004 @02:54PM (#10989524)
It's just about impossible that a technique that makes robotic spacecraft that much more self-sufficient will be confined to just robotic space travel for long. If NASA is successful, we will see widespread robotization here on Earth as a consequence.
    30 years from now, this will be characterized as a 'mere spin off', and instead of bitching about Moonrocks, ignorant people will be saying "We spent billions to send robot probes to Pluto, and all we got was a bunch of contaminated Helium."
It's just about impossible that a technique that makes robotic spacecraft that much more self-sufficient will be confined to just robotic space travel for long.

      If you spend as much time and money on developing your systems as NASA does on theirs, you can get the same degree of autonomy and reliability.

      And that degree is somewhat limited: their spacecraft crash with some frequency, and they spend a lot of time patching and bug-fixing.
  • by Anonymous Coward
    grep -c icrosoft *
  • by mogrify ( 828588 ) on Friday December 03, 2004 @02:56PM (#10989547) Homepage
    In the ZDNet article [] on Google's inner workings that was posted earlier on /., Urs Hölzle mentions that in the larger Google clusters, 2 machines per day will fail. They compensate for this with triple redundancy, good software for failover control, and a staff of 800(!) computer scientists. Needless to say, not everyone could manage this... there's definitely an enterprise niche for system autonomy. This also brings IBM's eFuse technology [] for self-repairing chips to mind.
    • " and a staff of 800(!) computer scientists"

      I doubt they are all scientists. I'm sure most are just sysadmin/operations employees.
Yes, they must be counting everyone in that number. Getting TWO scientists to agree on what to do is hard. Just imagine getting 800 of them to work together. The management team at Google must be *really* good, or the perks must be awesome, or maybe both.
I could easily see a future where a wall of blade-style servers has bad units culled by a robotic arm, somewhat like a large tape-data silo, only with server nodes instead of tapes. Just keep technicians feeding fresh nodes in at one end of the pipeline and refurbishing the broken ones at the other, and the rote portions of the work (finding the broken machines and replacing them) are done for you.

      This also brings IBM's eFuse technology for self-repairing chips to mind.
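The redundancy-plus-failover approach described above can be sketched roughly like this; the replica functions are toy stand-ins, not Google's actual machinery:

```python
def dead(request):
    """A replica that has failed (e.g. one of the ~2 machines a day)."""
    raise ConnectionError("no route to host")

def live(request):
    """A healthy replica that answers the request."""
    return f"result for {request}"

def fetch_with_failover(replicas, request):
    """Return the first successful response; skip replicas that fail."""
    last_error = None
    for node in replicas:
        try:
            return node(request)
        except ConnectionError as exc:
            last_error = exc  # dead node: fail over to the next copy
    raise RuntimeError("all replicas failed") from last_error
```

With triple redundancy, all three copies must fail before a request is lost, which is why a couple of dead machines per day can be tolerated.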

  • by Iphtashu Fitz ( 263795 ) on Friday December 03, 2004 @02:56PM (#10989551)
For years SANs from EMC, fault-tolerant servers from Stratus, etc. have all had the ability to phone home when they detect a failure is imminent or has occurred. Usually the customer doesn't realize there's even a problem until a service tech shows up with replacement parts.

    Of course getting this down to the level of home users is still a long way away...
I think I'd rather have my equipment tell me when there's a problem, so that I can evaluate the risk (do I have redundant systems to handle the failure when it happens) vs. the alternatives (can I repair this myself) vs. the cost (how much does it cost to have a field tech show up unannounced, perform some voodoo on my server, then tell me that whatever is wrong was fixed and won't be a problem? Or is that covered in my support contract?)
IBM has had the same feature for many years with its mainframes, NCPs, and 317X controllers. They would run a POTS line to the equipment, and if it encountered a "Condition", it would phone IBM Service. Dispatch would then send out a CE to investigate.
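A rough sketch of the phone-home pattern discussed in this thread, assuming a hypothetical SMART-style drive metric and threshold (the real EMC/Stratus/IBM protocols are proprietary and surely differ):

```python
REALLOCATED_SECTOR_LIMIT = 50  # hypothetical threshold, not a vendor value

def check_and_phone_home(reallocated_sectors, notify):
    """File a service call when a drive looks likely to fail, before it does."""
    if reallocated_sectors > REALLOCATED_SECTOR_LIMIT:
        # In a real product this would dial out / open a support ticket.
        notify({"event": "predicted_failure",
                "metric": "reallocated_sectors",
                "value": reallocated_sectors})
        return True
    return False
```

The point of the pattern is the ordering: the vendor learns about the impending failure before the customer notices anything wrong.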
  • by vurg ( 639307 )
    Tired of fixing your computer?

    I don't think this applies to most of us.
  • by Anonymous Coward
    .. consists of a dog and a man (and a computer of course).

    the man is there to feed the dog, the dog is there to keep the man away from the computer
  • Computers telling you what to do, oh wait. Actually I think we are already there.

    "Hal, I think you should install the latest service pack, you have been acting funny lately"

"I can't do that, Dave"

  • by Darthmalt ( 775250 ) on Friday December 03, 2004 @03:00PM (#10989610)
There's a race: manufacturers building smarter computers, and AOL signing up dumber users.

    So far AOL is winning
  • lemme guess (Score:2, Funny)

    by ilmdba ( 84076 )
    code name for this project 'SkyNet' by any chance?
Fix moved/broken links: the SCIAM link in the story cites:

    "Not Found

    The requested URL /public/publications was not found on this server."
Sorry, it's hard to get 'em right. The SCIAM link was pasted from a googling session. I have my own digitalsciam account but I know those links can't be passed around. If I try this again, I'll look harder for links that work for all readers. Trouble is they always work for me; testing is tricky... I need to be someone else!
  • How is this idea any different from AI?

Software algorithms sufficiently complex as to appear heuristic. This seems to be a new application for AI.

    • Depends on how you quantify the term "intelligence".

But I don't see how a "self-healing" computer would be AI. AI would not only be able to heal itself, but upgrade itself as well with objects of its own design.

      I guess the perfect example would be the AI from any computer game, all its moves are rules based, even IF it is claimed to be dynamic. My reasoning would be that all intelligence is rules based, and the expansion of that intelligence usually comes from ignoring or expanding the set rules that you

  • by geg81 ( 816215 ) on Friday December 03, 2004 @03:10PM (#10989737)
    People have been trying to make systems easier to manage for years. Unfortunately, it's not enough to have the desire to make systems self-managing, you also need good ideas for how to do it, and those are still lacking as much as they always have been.

    Give the guy credit, though, for seeing a good opportunity. Industry will believe in this silver bullet like they have done in the ones before.

    Unfortunately, the real research will still take decades to complete, and then this area will have a bad name just like most of the other overhyped technologies before it.
  • by Lodragandraoidh ( 639696 ) on Friday December 03, 2004 @03:14PM (#10989801) Journal
    I have been talking about this for years... []

    If the autonomous systems NASA and the ESA have put into the void are any indication, I don't think we have much to worry about - the costs will be prohibitive for all save the largest organizations, and true autonomy (in the form of robotics) will have a whole range of other problems (imagine your main file server getting up and walking out of the data center because it mistakenly assumed there was a fire...)

The key, in the interim, is to make yourself indispensable. If you have the mindset that you are a code grinder/monkey and that is all you want to be, then your days are numbered. Your goal should instead be becoming the guy who can put together a complete solution (data, application, hardware, network) in short order that works, scales well, and is extensible by your users. You need to be a jack-of-all-trades. That is how to survive and gain esteem in the eyes of your clients and peers, as I see it.
  • by Anonymous Coward
The problem is that we still rely on hardware! Software is limitless without hardware, but the stupid hardware people insist on limiting our abilities! If we didn't have hardware, the software would be easy to fix: just patch it and upload the new version!
  • Just think of all of the service contract revenue that would be lost. Also, how much R&D money will go into systems like these? Then, what will the price of these systems (at least early ones) look like to make back that money? Most importantly, what about people like me that use the phrase 'Honey, the computer just died' as an excuse to upgrade???
I wouldn't consider this to be new... rather, it's the idea of this that is starting to propagate.

Cisco's new 92 terabit/sec router already has some of these features. The OS they used to build the system supports many of them (high availability, self-healing, etc).

    It's a self healing system. It uses the services and functionality of the OS to accomplish it.

    QNX's networking sys

I understand that autonomic computing is really neat. I understand that it is a difficult problem mathematically and programmatically. It doesn't help me much, or the average end user. I see the next great advance in computers being a technology that was discovered/invented 20 years ago. It won't be the technology, but how it is presented to the user. I understand that research for these two projects can occur concurrently. I would just rather see people get excited about using old technology that actu
I think it is funny that this talk automatically moved (no, I didn't read the article, you insensitive clod!) to nanobots and self-repairing systems! Why wouldn't it be cheaper/easier (it is today in computer hardware anyway; ever see someone 'fix' a computer? They take old modular bits and replace them with new modular bits) to have heavily redundant systems... So throw away that hydrospanner, just activate secondary, or tertiary systems, or 4th etc... use advanced miniaturization to stuff as many redundant syst
  • .. but what makes me sceptical about machines that can fix themselves - if they're smart enough to understand what's wrong, they shouldn't break in the first place..
  • All NASA did here was provide the meeting space. IBM and some universities are doing the work. But in the article, NASA gets mentioned twice.

    Actually, increasing system reliability and restartability isn't fundamentally all that hard. It's trying to do it in the presence of the vast amount of dreck on Microsoft systems that makes it difficult.

  • The 4 Rs, matey (Score:4, Insightful)

    by tootlemonde ( 579170 ) on Friday December 03, 2004 @04:09PM (#10990562)

The IBM link says, under "The Solution":

    Autonomic computing: a systemic view of computing modeled after a self-regulating biological system.

    In conventional system design, the Rs of reliable systems are: (1) Robust, (2) Repair, and (3) Redundant.

    • Robust means the system is less likely to fail.
    • Repair means a secondary system looks for signs of failure in the primary system and repairs the problem.
    • Redundant means a secondary system takes over when the primary system fails.

Biological systems use all three methods to varying degrees, but the problem is that biological systems do not survive as individuals; they survive as a species, by tolerating a high degree of failure and using a fourth R: Replication.

    For computer systems, this biological systems approach would mean replacing every component of the system on a regular basis the way all the cells in the human body are completely replaced every seven years. Periodically, you would throw out the entire system and replace it with two or three new ones that have undergone a period of testing and development.

    The replication approach, which is key to the survival of biological systems, runs counter to most business thinking, which is to replace multiple systems with fewer, more powerful systems. This limits reliability to the first three Rs.

    There is much that can be done to increase reliability with these 3 Rs but if biological systems are any indication (as well as some theoretical limits), they are inadequate.

    The problem of reliability could ultimately be a flaw in the way business works rather than a technical problem.

    • "The problem of reliability could ultimately be a flaw in the way business works rather than a technical problem."

Well of course it is; business is *all* about making as fast a profit for your shareholders as possible. And that's really all there is to it.

      Putting time into consolidating your existing systems is often seen as pointless; why make existing systems more reliable when you can use that time to build new systems for new clients.

      Oh and by the way, when the existing systems go wrong and have to be
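The Repair and Redundant Rs described earlier in this thread can be sketched as a supervisor that first tries to revive the primary and only then fails over to the standby; every name here is illustrative, not any vendor's API:

```python
class Service:
    """A component that can be healthy or failed."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

def try_restart(service):
    """Stub for the Repair R: a real system would reinitialize the component.
    Here we pessimistically assume the repair attempt fails."""
    return False

def supervise(primary, standby):
    """Repair first; if the primary cannot be revived, fall back on Redundancy."""
    if primary.healthy:
        return primary
    primary.healthy = try_restart(primary)  # the Repair R
    if primary.healthy:
        return primary
    return standby                          # the Redundant R
```

The fourth R, Replication, would sit outside this loop entirely: rather than nursing one primary/standby pair, you would periodically retire the whole pair and commission freshly tested replacements.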
  • Be comforted that in the face of all aridity & disillusionment and despite the changing fortunes of time, there will always be a big future in computer maintenance.

    "Deteriorata" - National Lampoon - 1972
We already have self-repairing computers. Haven't you ever used the Windows Troubleshooter?!
  • Autonomic computing means a computing system which is self-configuring, self-healing, self-optimizing, and self-protecting.

As we modelled the eye to build cameras, the brain to build computers, the ear to build speakers, we're modeling our autonomic nervous system to build the next evolutionary step in computing. Networks that independently and reflexively self-regulate, configure, repair, optimize, and protect in the same sense as an immune system or an automatic pilot.

    This would allow the network to
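The self-regulating loop described above is usually drawn as a monitor-analyze-plan-execute cycle in the autonomic computing literature; a toy single-metric pass, with an invented load threshold, might look like this:

```python
def autonomic_step(read_load, restart_service, threshold=0.9):
    """One pass of a monitor-analyze-plan-execute loop over a single metric.
    read_load and restart_service are stand-ins for real sensors/actuators."""
    load = read_load()             # Monitor the managed element
    overloaded = load > threshold  # Analyze the reading against policy
    if overloaded:                 # Plan and Execute the repair action
        restart_service()
    return overloaded
```

A real autonomic manager would run many such loops concurrently, one per self-* property (configuring, healing, optimizing, protecting), each closing the feedback cycle without human intervention.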

My addition to the 'maybe this is not such a great idea' meme would be the idea that longer life-spans for adults would lead to an increasingly greater de-valuation of children, who would increasingly be seen as competition rather than the hope of a new generation.

  • UPS (Score:3, Funny)

    by east coast ( 590680 ) on Friday December 03, 2004 @05:25PM (#10991484)
    What if your system broke down two billion miles from the nearest spare part or human?

I think they'll do a one-day delivery on this for a small surcharge.
  • Just don't forget what it may do to your job.

Where in history has scientific advancement *not* removed the need for some jobs? Since we're basically working towards efficiency, the end product of all the technological revolutions will be no one needing any jobs. Self-fixing machines lead to a whole mechanical metabolism for the world which humans will be able to leech off of ad infinitum.

Until, of course, our more luddite/conservative/squeamish types rise up and destroy the atmosphere trying to kill all t
As a matter of fact, it would be quite logical to notice that completeness in computing autonomy could be achieved only in one way; and that way is basically making computing all-sufficient. Which, in general, means for it not to require any human intervention after the system's in standard execution mode == up'n'running. That is, the solution for this problem is quite simple and extraordinarily complex & important for humanity @ the same time: AI, in the classical acceptance, creation of a ...

    - Zx-man
  • So if your computer system is smart enough to adapt to troubles around it and internal "breakdowns" by changing itself, isn't this simply machine evolution?

How long until these same computers consider meat puppet life forms to be a troubling virus infecting their planet, and standing in the way of fixing "breakdowns?"
I have a friend who worked on this a bit for NASA. It's purely hardware-based stuff, to do with fixing broken chips in flight and reducing the amount of redundant hardware needed. Admittedly it's a very specialised area, but an interesting one.
    The solution she was looking at was to use FPGAs to implement the hardware and when part of the silicon became damaged to use software to redesign the layout of the circuits to route around the damaged area.
It's basically a SAT problem, finding a suitable SAT solving algo
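The route-around-damage idea can be illustrated with a toy grid search. As the parent says, the real formulation is a SAT problem over circuit placement; this breadth-first sketch just shows the core constraint of never using a damaged cell (grid layout and names are invented for illustration):

```python
from collections import deque

def reroute(grid, start, goal):
    """Find a path of usable cells from start to goal.
    grid[r][c] is False where the silicon is damaged."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no healthy route exists around the damage
```

When a cell fails in flight, software would mark it False and recompute the routing, trading a little spare fabric for a lot of launched redundant hardware.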
While some good stuff will come out of this idea, I think that a lot of it is bullshit or rehashed existing ideas.

Now the one good thing I see is sharing computing cycles. But even to do this, you have to define the mentioned service contracts, so you'll end up with a lot of accounting ("micropayments") for who helped whom when. Of course, IBM would like to do that accounting.

    Now this "self-healing system" idea that IBM is hyping everytime it gets the chance, isn't that just a rehash of Suns/Oracles idea of
