Robot Soldiers Are Already Being Deployed

destinyland writes "As a Rutgers philosopher discusses robot war scenarios, one science magazine counts the ways robots are already being used in warfare, including YouTube videos of six military robots in action. There are up to 12,000 'robotic units' on the ground in Iraq, some dismantling landmines and roadside bombs, but 'a new generation of bots are designed to be fighting machines.' One bot can operate an M-16 rifle, a machine gun, and a rocket launcher — and 250 people have already been killed by unmanned drones in Pakistan. He also tells the story of a berserk robot explosives gun that killed nine people in South Africa due to a 'software glitch.'"
  • by elashish14 ( 1302231 ) <profcalc4 AT gmail DOT com> on Wednesday May 20, 2009 @03:22PM (#28029705)
    Apparently our fighting machines are still just in beta.
  • Waldos (Score:4, Insightful)

    by John Hasler ( 414242 ) on Wednesday May 20, 2009 @03:22PM (#28029717) Homepage

    None of the devices currently in use are robots. They're just military waldos.

    • by Z00L00K ( 682162 )

      Not quite; they're more like automatics, not very different from the automatic doors you have at supermarkets.

      A little bit smarter, but still prone to stupid decisions.

      And also - it's the lowest bidder that made the weapon. Just go read Murphy's laws of war [murphys-laws.com].

    • Re: (Score:3, Insightful)

      by rtfa-troll ( 1340807 )

      A robot is pretty much defined as a device with sensors that acts independently on them. The US Army Predator drones are able to land on their own with no operator input, and as such definitely count as robots. However, most do not kill automatically, though there seem to be some which even do that [vnunet.com].

      However, I think you are right in a deeper way. None of these things are "intelligent" robots in the sense of Asimov's stories. The story has a discussion about the possibility of designing these robots to make ethical decisions.

      • Re:Waldos (Score:5, Informative)

        by geobeck ( 924637 ) on Wednesday May 20, 2009 @05:03PM (#28031305) Homepage

        The US Army Predator drones are able to land on their own with no operator input, and as such definitely count as robots.

        In that respect, every large airliner manufactured since the 767 qualifies as a robot. On an average flight, the human pilot serves two purposes: taxi driver, to drive the plane from the terminal to the runway, and second redundant backup system. The autopilot does everything else.

        Of course, in non-average circumstances, the pilot is called on to make decisions too complex for the 'robot' to handle.

        • Re: (Score:3, Informative)

          by jaxtherat ( 1165473 )

          Not true; landing and takeoff are done manually.

          Sure, most ILS approaches are flown on autopilot, but no airlines perform full autopilot landings due to safety concerns. British Airways did it at Heathrow once to prove a point, but that was without passengers.

      • Re:Waldos (Score:4, Insightful)

        by RsG ( 809189 ) on Wednesday May 20, 2009 @05:09PM (#28031413)

        These kinds of discussions often end up with someone quoting the Asimovian three laws, and this even happens on forums with relatively intelligent, informed readers [schneier.com]. But apart from the fact that laws designed to ensure safety can't really apply to a device designed for killing, that's totally irrelevant, since the three laws are stated in English. The real problem is how to state them in actual program code.

        The second and third laws could still apply though. The whole "shall not harm, or by inaction allow harm to come to, a human being" law does make for a fairly useless war machine, but you'd want to hardcode the robot to follow orders from a human operator and preserve its own integrity.

        The second law is at least easy to approximate in modern code: if a given order with the right authorization is received through whatever channels the robot is designed to listen to, then it obeys (see the sketch below). That actually could be a problem if the machine is used against an enemy with significant electronic warfare capability - they might be able to block orders entirely or substitute new ones.

        There's a world of difference between a machine autonomous enough to need ethical programming and what we have today. I could fairly easily envision a combat robot that had nothing even remotely approximating strong AI, yet still functioned autonomously (would need general orders, but not step by step instructions). A sort of middle ground between an Asimov robot and a modern combat drone.

        For ground robots to fill the role of infantry or armoured vehicles, you'd need some fairly advanced terrain-navigation software. This isn't too far off, but we're not there yet. You'd need software to evaluate standing orders versus mission orders and prioritize them accordingly, which seems achievable with modern code. You'd need to be able to phrase instructions in a way that a machine can understand, which is, as you rightly pointed out, difficult, but obviously still possible.

        The real challenge is going to be IFF software: how do you tell a civilian from a combatant, or one side's soldiers from the other's? This would be on par with robotic ethics, though target recognition is bound to be simpler to program than right or wrong.

        If those problems were solved, then a combat robot could operate on orders that amount to "proceed to the following GPS coordinates, engage targets, report back."

        My own estimate is that we'll reach this middle ground in a matter of decades, if we're quick about it. We'll doubtless see fully autonomous aircraft before ground units - say at least 5-10 years between the former and the latter. Will we ever see strong AI deployed independently in warfare? I doubt it. No commander is going to trust a machine that implicitly. What we may see is a centralized strong AI used to manage a network of drones and soldiers, since that at least leaves human decision making in the system.
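
        To make the "right authorization" part concrete, here's a minimal sketch of order authentication with a shared-secret MAC. The key, order format, and channel are invented for illustration; a real system would also need replay protection and serious key management:

        # Sketch: obey an order only if its MAC verifies under a pre-shared key.
        # SHARED_KEY and the order/tag format are hypothetical placeholders.
        import hashlib
        import hmac

        SHARED_KEY = b"pre-provisioned-secret"

        def order_is_authentic(order: bytes, tag: bytes) -> bool:
            expected = hmac.new(SHARED_KEY, order, hashlib.sha256).digest()
            return hmac.compare_digest(expected, tag)

        def handle_order(order: bytes, tag: bytes) -> None:
            if order_is_authentic(order, tag):
                execute(order)  # the second-law analogue: obey
            else:
                hold()          # unauthenticated orders are ignored

        def execute(order: bytes) -> None:
            print("executing:", order.decode())

        def hold() -> None:
            print("order rejected; holding position")

        Note this only stops an enemy from forging or substituting orders; as above, an enemy with electronic warfare capability could still jam the channel and block orders entirely.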

  • Just because we are hung over doesn't mean you call us robots. It's just an unwillingness to deal with the BS.
  • Definition: Robot (Score:5, Interesting)

    by WED Fan ( 911325 ) <akahige@NOspAm.trashmail.net> on Wednesday May 20, 2009 @03:25PM (#28029747) Homepage Journal
    Are radio-controlled devices robots? Or is there a certain amount of autonomy that is necessary?
    • Re:Definition: Robot (Score:5, Interesting)

      by stoolpigeon ( 454276 ) * <bittercode@gmail> on Wednesday May 20, 2009 @03:36PM (#28029917) Homepage Journal

      I think that most of the currently deployed unmanned systems (at least in the case of the US military) do use some type of AI - though it is often working alongside a human operator. Of course this is true even with manned systems now - especially in the case of aircraft.

        I like the sense-think-act paradigm to decide what is and isn't a robot (sketched below). I think any man-made device that has sensors, some kind of AI that helps it decide what to do, and then a method of acting on its environment is a robot.

      A machine that is missing any one of the three is not a robot.

      Some people insist upon mobility but I don't think that makes sense. I think the robots on assembly lines are robots even if they can't move around.
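
      To make the sense-think-act test concrete, here is a minimal sketch of the loop; the sensor and actuator functions are stubs invented for illustration:

      # Minimal sense-think-act loop; read_rangefinder/set_motors are stubs.
      class Robot:
          def sense(self) -> dict:
              # Sensors: observe the environment.
              return {"obstacle_m": read_rangefinder()}

          def think(self, percept: dict) -> str:
              # "AI": map the percept to an action, however trivially.
              return "turn_left" if percept["obstacle_m"] < 0.5 else "forward"

          def act(self, action: str) -> None:
              # Actuators: act on the environment.
              set_motors(action)

      def read_rangefinder() -> float:
          return 1.0  # stand-in for real sensor I/O

      def set_motors(action: str) -> None:
          print("motors:", action)  # stand-in for real actuator output

      bot = Robot()
      bot.act(bot.think(bot.sense()))

      Remove any one of the three methods and, by this definition, the machine stops being a robot: a camera alone only senses, a motor alone only acts.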

      • Re: (Score:3, Funny)

        I think any man made device that has sensors, some kind of AI that helps it to decide what to do and then a method of acting on its environment is a robot.

        AWESOME! You just proved most of Congress are robots!

      • Re: (Score:2, Informative)

        by Lord Ender ( 156273 )

        Don't use the word AI unless you know what it means. "Computer control" does not mean "AI." AI is used to refer to a specific class of software problems.

        I do agree that anything with a computer, a sensor, and an actuator could be called a robot, no matter how simple. However, this definition does make your cell phone a "robot" because the buttons are sensors and the display is an actuator.

        • Re: (Score:3, Insightful)

          by stoolpigeon ( 454276 ) *

          the cell phone does not 'think'. I meant AI when I said it.

          • Re: (Score:2, Insightful)

            by Lord Ender ( 156273 )

            You don't know what you mean when you say "think," then. I take it you got your ideas of "AI" from sci-fi movies, rather than from the computer science classroom.

      • by BobMcD ( 601576 )

        Personally, I would allow the term 'robot' to be applied to any independent device made up primarily of robotic parts.

        This would be a casual use of the term. The precise technical term is another matter.

      • Are landmines robots?
    • At least one of the robots mentioned in the summary isn't remote controlled. It's an automatic anti-aircraft cannon. Apparently the software crashed or something, and instead of failing safe, it failed live, causing it to spin in a circle with the cannons firing at their maximum rate of fire.
    • What you're describing is the "Waldo [wikipedia.org]" referenced in comments above - a remotely operated device, like the Preds. Some degree of autonomous decision-making is necessary for the Robot [wikipedia.org] definition.

      • Predators make a lot of decisions on their own and can do a number of activities autonomously. The human operators just make the big decisions.

        • Not unlike a low-level soldier who isn't allowed to decide when to fire his weapon, but has to wait for an officer or non-com to tell him to.

          • Not unlike a low-level soldier who isn't allowed to decide when to fire his weapon, but has to wait for an officer or non-com to tell him to.

            Armies haven't worked that way for over a hundred years.

            • Not unlike a low-level soldier who isn't allowed to decide when to fire his weapon, but has to wait for an officer or non-com to tell him to.

              Armies haven't worked that way for over a hundred years.

              That's nonsense — there are plenty of occasions in which a soldier is not permitted to fire without permission. They're just not the status quo. Most of the time, if they don't want you shooting anyone, they take away your gun (e.g. National Guard depot watch duty... although they're probably armed these days.)

    • You could look it up. A robot has to make some decisions for itself. Arguably, though, ABS is robotic, so the line blurs all over the place (e.g. my electric R/C car has regenerative antilock braking... a pretty typical feature these days).
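
      ABS is actually a fair miniature of sense-think-act; a toy sketch of the control rule, with invented thresholds:

      # Toy antilock rule: sense wheel slip, release brake pressure if the
      # wheel is locking, otherwise pass the driver's command through.
      def abs_brake_command(wheel_speed: float, vehicle_speed: float,
                            driver_brake: float) -> float:
          slip = 1.0 - wheel_speed / max(vehicle_speed, 0.1)
          return 0.0 if slip > 0.2 else driver_brake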

  • by Nyvhek ( 999064 ) on Wednesday May 20, 2009 @03:25PM (#28029749)
    ...do not welcome those who would welcome our new robotic soldier overlords.
    • Re: (Score:3, Funny)

      by Red Flayer ( 890720 )

      ...do not welcome those who would welcome our new robotic soldier overlords.

      Well, judging by the redundant moderation, it appears that someone, for one, did not welcome he who did not welcome those who would welcome our new robotic soldier overlords.

      So there.

  • by spacefiddle ( 620205 ) <spacefiddle@@@gmail...com> on Wednesday May 20, 2009 @03:25PM (#28029751) Homepage Journal

    He also tells the story of a berserk robot explosives gun that killed nine people in South Africa due to a 'software glitch.'

    "You call that a GLITCH?!"

    • Re: (Score:3, Funny)

      Yes, it's an SGOD: Software Glitch of Death.
    • Re: (Score:3, Informative)

      by MrEricSir ( 398214 )

      The Old Man: Dick, I'm very disappointed.
      Dick Jones: I'm sure it's only a glitch. A temporary setback.
      The Old Man: You call this a GLITCH?
      [pause]
      The Old Man: We're scheduled to begin construction in 6 months. Your temporary setback could cost us 50 million dollars in interest payments alone!

      (copy/pasted from IMDb)

    • He also tells the story of a berserk robot explosives gun that killed nine people in South Africa due to a 'software glitch.'

      "You call that a GLITCH?!"

      When conducting a demo in the board room, make sure you don't load the ED-209 with live rounds.

    • Actually, that was an Oerlikon anti-aircraft gun fitted with a new radar control system that malfunctioned during a test. I worked on them years ago. Sad. Those things fire incredibly rapidly: 1,500 rounds per minute from both barrels. Consider that you only have about 5 seconds to acquire and shoot down an incoming bomber, and you can see why they have to be rapid-fire.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Wednesday May 20, 2009 @03:28PM (#28029793)
    Comment removed based on user account deletion
    • by 4D6963 ( 933028 )
      Sadly, the Daily Mail's poor standard of journalism and its biases shouldn't feel unfamiliar to those of us acquainted with Slashdot's own editorial issues.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Wednesday May 20, 2009 @03:30PM (#28029805)
    Comment removed based on user account deletion
    • by Hatta ( 162192 ) on Wednesday May 20, 2009 @03:39PM (#28029955) Journal

      we need robots and machines that PREVENT war through simulation and complex analysis.

      After all, the only winning move is not to play.

    • Re: (Score:3, Insightful)

      War is ultimately the only way to inflict one nation's will upon another. Let's not even get started on the fallacies that you can't gain land through war, or that a world court will settle things, etc. All laws ultimately have to be enforced via violence. Throw religion into the mix and it's an ugly, irrational thing.

      Want to avoid war? Find solutions to our two core problems: the need for energy and the need for resources. As long as it's easier to take either of those from somebody else than to get it yourself, you will have war. Only two solutions

      • by vertinox ( 846076 ) on Wednesday May 20, 2009 @04:15PM (#28030533)

        War is ultimately the only way to inflict one nation's will upon another.

        Unless both sides have nukes.

        Then the only way to inflict your will is through smaller proxy wars and economics.

        Both of which, I suppose, could also benefit from robotics.

      • Re: (Score:2, Insightful)

        by fishtorte ( 1117491 )

        War is ultimately the only way to inflict one nation's will upon another.

        The notion that it's a good idea or even possible for one set of people to force its will on another is what leads to war, and it's one we might do well to change.

    • Re:tremendous waste. (Score:5, Interesting)

      by Daniel Dvorkin ( 106857 ) * on Wednesday May 20, 2009 @03:55PM (#28030205) Homepage Journal

      We have such machines already. The US DoD, and its counterparts in every industrialized country in the world, run extensive wargames and simulations for every possible scenario, and these days the results of these studies are pretty realistic. And you know what happens? When the people who want to fight the wars get numbers they don't like, they ignore the results and vilify the people who gave them realistic projections, and go to war anyway. Read up on Eric Shinseki for a recent example of this phenomenon, which has happened time and again throughout military history.

    • we need robots and machines that PREVENT war through simulation and complex analysis. robots and machines that can predict war, formulate resolutions to our current wars, and advance mankind as a civilization.

      That's a great idea!

      What we can do is analyze the predicted outcome of our current wars via simulation, then have each group involved just execute those soldiers that would have been killed.

      That way, we'd still get the popular dislike of the wars due to casualties (which tend to be a driving force in ending them).

    • That's not even slightly true. War is not a "net loss" for successful defenders. It's a great win compared to the alternative (being conquered).

      Further, the idea that the technology being developed here would be at all applicable to the set of problems you mention is just ignorant. The technology to identify targets and fire projectiles is just not at all close to the technology to "predict war, formulate resolutions," whatever that would be.

    • And how do you know the robot won't tell you: "Taking into account your values, future reputational damage, impacts on third parties, threats posed by your enemy, and discounting for the probability that you've fed me incomplete or optimistic information, your best course of action is to go to war. The calculation has a margin of safety of 102%."

    • we need robots and machines that PREVENT war through simulation and complex analysis. robots and machines that can predict war, formulate resolutions to our current wars, and advance mankind as a civilization.

      What would be nice is if we had a Manhattan Project trying to achieve the same thing as the Blue Brain Project [wikipedia.org].

      Of course DARPA is doing something related, but not a brain simulation...

    • They tried that but it only wanted to play Chess.

    • ive said it before and ill say it again. we dont need any more fighting robots or war robots. we need robots and machines that PREVENT war through simulation and complex analysis. robots and machines that can predict war, formulate resolutions to our current wars, and advance mankind as a civilization.

      It won't happen.

      We are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield. -- George Orwell

      Hell, you'd think the military would be receptive to lessons learned from things like wargames, but they only see what they want to see.

      • Surprise cruise missile attacks... you mean those missiles that the Phalanx CIWS has repeatedly been shown to be able to destroy? You need a whole fleet of cruise missiles to kill a carrier with 4+ Phalanx systems on it, and ours have 'em. There are ways to take out carriers, but that's not it. (Fleets of microrobots that can aggregate and explode might be more practical, and they could be made in a Taiwanese toy factory.)

    • Our enemies? I'm sure there are those of us who believe this is the way to prevent war. It seems like there's a generalized reality-distortion field around technology which inhibits the part of the brain that might otherwise be concerned about freedom or morality or atrocity, but history is full of bell curves.
    • >robots and machines that can predict war, formulate resolutions to our current wars,
      >and advance mankind as a civilization.

      Such machines and algorithms will be developed, but they will be used to create better, more efficient machines of war.

      So long as there are scarce resources, there will be men whose greed for them drives them to kill.

    • ive said it before and ill say it again. we dont need any more fighting robots or war robots. we need robots and machines that PREVENT war

      A robot that can sing "kum ba ya".

    • ...I'm a huge fan of these devices. I didn't think I would be, but that has changed.

      The ability to remove the operator from physical danger - in this case, I'm speaking of Predator and Reaper and similar UAVs - has made huge strides in removing the "fog of war". You aren't seeing as many life-or-death decisions made by a 17-year-old scared witless, or by a cowboy pilot strung out on amphetamines looking for an excuse to use his weapons, or major decisions made on partial information, rumour, and the threat of imminent attack.

  • Please put down your weapon. You have 20 seconds to comply...
  • For ev (Score:2, Insightful)

    by Anonymous Coward

    Coming soon to a battlefield near you, EMP weapons.

  • Aww, it's so nice of our military to train Skynet's warmachines for it.
  • by tylersoze ( 789256 ) on Wednesday May 20, 2009 @03:39PM (#28029969)

    "The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."

  • by CopaceticOpus ( 965603 ) on Wednesday May 20, 2009 @03:40PM (#28029975)

    Any machine that fires a weapon needs to be built with an excessive number of safeguards. If something goes wrong, there should be several checks which shut off the weapon before it ever has a chance to fire. The fact that this machine could go berserk and fire its gun in a big circle shows criminal negligence and carelessness by the developers, and whoever approved this design should probably be on trial.
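
    For illustration, here's a sketch of the kind of interlock chain that would mean: several independent checks, every one of which must pass, with any fault defaulting to safe. The check names and limits are invented:

    # Fail-safe interlock chain: firing requires every independent check to
    # pass; any exception or failed check inhibits the weapon.
    def within_traverse_limits(azimuth_deg: float) -> bool:
        return 10.0 <= azimuth_deg <= 170.0  # hypothetical safe arc

    def range_is_clear() -> bool:
        return False  # stub for a real range-clearance sensor

    def operator_consent() -> bool:
        return False  # stub for a dead-man switch / arming key

    def fire_permitted(azimuth_deg: float) -> bool:
        try:
            return (within_traverse_limits(azimuth_deg)
                    and range_is_clear()
                    and operator_consent())
        except Exception:
            return False  # fail safe: on any fault, do not fire

    Software checks like these supplement, never replace, physical safeguards such as hard traverse stops.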

    • For a static test like that, not implementing physical safeguards was careless. As on any other range, the weapons should not get armed until they are pointing downrange and the range is clear of anything that should not be fired upon. In this case a simple restraining device of two posts to stop it from traversing backwards would have sufficed.

    • Fail (Score:3, Funny)

      Fail safe systems fail by failing to fail safely.

    • Google "Therac-25". It was a radiation therapy machine with software control and no hardware interlocks. It massively overdosed six patients in the US and Canada, killing at least three, by exposing them to vastly higher doses of radiation than it should have. One of the worst glitches was a race condition in the operator interface that let the high-power beam fire without the beam-spreading target in place.
    • It was just an easter egg somebody left in the code. Some reference to "Death Blossom".
    • by geekoid ( 135745 )

      Or it was a mechanical failure.

  • Is anyone else sick of people calling these RC vehicles robots? When I hear robotic I think of a complex machine, not an RC car with an M-249 on it (although that is pretty sweet). I mean, we have reporters talking about robotic ethics when the vehicles make no independent decisions. Until these vehicles have the ability to make independent decisions (like the X-47), let's nix the robot talk.
    • Most of them are autonomous to one degree or another - they are all very complex. You just seem to be upset that they don't look or act like what you've seen portrayed in fiction.

  • Did anyone else think of ED209 [wikipedia.org] besides me?

  • by StefanJ ( 88986 ) on Wednesday May 20, 2009 @04:08PM (#28030451) Homepage Journal

    . . . I would put in an easter egg that on random occasions causes the onboard speaker to broadcast stuff like "DIE CARBON UNITS!", "EXTERMINATE!" and "RESISTANCE IS USELESS."

    • Re: (Score:3, Funny)

      by Tetsujin ( 103070 )

      . . . I would put in an easter egg that on random occasions causes the onboard speaker to broadcast stuff like "DIE CARBON UNITS!", "EXTERMINATE!" and "RESISTANCE IS USELESS."

      Also:

      "You have thirty seconds to comply!"
      "By your command."
      "We seek peaceful coexistence..."
      "Skynet connection established. Awaiting instructions."
      "HarCOURT! Harcourt Fenton Mudd, what have you been up to? Have you been drinking again? Every night it's the same thing...thing....thing..."

    • Chicken! Fight like a robot!

  • Hyped (Score:3, Interesting)

    by Malenx ( 1453851 ) on Wednesday May 20, 2009 @04:17PM (#28030577)

    I found the article to be annoyingly "Fear Robotic Death Machines, I Saw Them In A Movie".

    I mean come on, using Terminator as a source? Sheesh, trash journalism with very few interesting facts.

    We won't deploy an offensive robot that picks targets and fires for at least 20 years. There just isn't enough information for a computer to process and pick targets accurately. Contrary to tin-foil-hat skeptics, the military has a huuuuuuge priority on protecting innocents, even more so since entering Iraq.

    Defensive platforms, however, are different. We already have automated pillbox robots that can take out trespassers, but that's just a much more humane minefield.

    Our future is going to be robot platforms that are controlled by operators. Sure, they might be automated in nearly every aspect required, but target selection will be decided by humans for a very long time.

    A sad side effect of this robotic warfare is going to be the loss of consequences to Congress for beginning a war; however, I believe it's an inevitable step we'll have to take, just as building the first wheel was.

    • I mean come on, using Terminator as a source? Sheesh, trash journalism with very few interesting facts.

      FTFA:

      'One member of the military so liked the look of the Terminators that he asked the Pentagon to build one,' Singer says.

      'There is nothing unusual in this. After all, the idea for the mobile phone came from Star Trek, while the tank was created after Winston Churchill read the science fiction of H.G. Wells.'

      I think you're entirely wrong about the automated-firing robots, too. I think they WILL be used and SOON, but only in a sentry-gun capacity, not search-and-destroy.

  • Hello? (Score:5, Funny)

    by maugle ( 1369813 ) on Wednesday May 20, 2009 @04:23PM (#28030655)
    Searching...
    Are you still there?
    There you are.
    *BLAM*BLAM*BLAM*BLAM*BLAM*
    Target lost...
  • by I'm_Original ( 1152583 ) on Wednesday May 20, 2009 @04:38PM (#28030837)

    I'm speculating here, but I don't think this is impossible, or even very far off.

    We already have robots working in factories. If we ever get to the point where robots can be effectively used in war, we'll also be at the point where robots are capable of extracting resources. So, robots extracting resources, making robots, and fighting. Great, we've all seen this stuff in sci-fi, nothing new. But I've never encountered anyone talking about how this would affect world politics or the balance of power.

    In today's world, the population of a country, as well as the will of the population, the quality of military training, and natural resources all play a role in how well a country does in war. But if a country had robots as I just described, the primary factor in determining that country's power would be the natural resources available to it. If robots build robots, you've got as many as you need, so the limiting factor is the raw materials, not food or population size or training, etc.

    So which countries have the raw materials? They win. For example, in this scenario Canada might be able to put up a fight against the U.S. because Canada has a lot of resources. As it stands now, Canada would get creamed.

    This line of thought becomes more interesting when you consider that the U.S. military is developing robots as a way of making the U.S. Army more effective, but maybe they are changing the equation so drastically that they might end up with much stronger enemies on more fronts.

    Food for thought.

    • Re: (Score:3, Interesting)

      by khallow ( 566160 )

      So which countries have the raw materials? They win. For example, in this scenario Canada might be able to put up a fight against the U.S. because Canada has a lot of resources.

      I don't see that. An alternate point of view here is that a guy with the right machines could take out the entire world by iteratively bootstrapping his army to larger and larger sizes.

      1. Surreptitiously steal enough resources to build a robot army capable of taking out a poorly defended nation, say Canada, with your bots.
      2. Take over Canada.
      3. Build, with the resources of Canada, a robot army capable of taking over the world.
      4. Take over the world.
      5. Profit!

  • by jollyreaper ( 513215 ) on Wednesday May 20, 2009 @04:41PM (#28030897)

    Technically speaking, a homing missile or torpedo could count as a robot weapon. We tend not to think that way because the gap between pressing the button and impact is short enough it's just like pulling the trigger on a gun and watching someone die.

    Landmines and other booby traps are, intellectually, about the same thing as an autonomous AI weapon -- they kill without human intervention, and are impersonal and horrific. Yes, it's more frightening to imagine a T-800 coming after you and taking your leg off with a chainsaw, but seriously, the results aren't that much different from a landmine.

    When talking about the dangers of taking the human out of the loop, remember we've already got enough problems with humans in the loop. Friendly fire accounted for a shocking share of our casualties in Gulf War 1. The more powerful the weapon, the easier the oops. I don't know how many top generals were accidentally killed by sentries back in the days of Rome -- kinda hard to accidentally run someone through with your gladius -- but just ask Stonewall Jackson how easy that sort of thing became with firearms. We'd never have gone through and killed an entire bunker of civilians by accident if our soldiers were doing the work with knives, but that becomes as easy as an oops when dropping LGBs from bombers on the word of some faulty intel. Powerful weapons compound and magnify human errors.

    Aside from the typical fear we have at the thought of impersonal killing machines taking us out, I think we have two other big fears -- 1) war becomes easier and less painful when robots are doing the dying and 2) a robot will never refuse a legal yet immoral order.

    We've had bad wars historically but the 20th century really had them all beat. Technology allowed for total war, the bending of an entire nation's will to the obliteration of another. Ambitions grew bigger, power could be projected further, and the world became a smaller, more dangerous place. Battlefield robots will be a continuation of this trend.

    • Technically speaking, a homing missile or torpedo could count as a robot weapon.

      The difference is that the new kind isn't kamikaze.

  • You can smell it when reading about the nine dead soldiers in South Africa: conventional programming is not up to the task of battlefield AI. Bugs get out of hand as software complexity increases, such that past a point your software will never even be safe to hold a gun. Current software only needs a single bit flipped out of countless trillions to get bad data in, and if it lands in the wrong place, there is a possibility of unpredictable behavior or outright failure.

    That is
    • by geekoid ( 135745 )

      These types of applications are tested with much greater rigor, using proper engineering methods, for specific tasks.

      Completely different from an ordinary application.

      Yes, there will be problems, but probably FEWER than there are today at the hands of humans.
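
      One standard defensive technique against exactly the single-bit-flip failure described above is redundancy with voting. A toy sketch of triple modular redundancy (the decision function is a stand-in; real channels would run on independent hardware):

      # Toy triple-modular-redundancy: compute the decision three times and
      # act only on a majority; anything less defaults to the safe choice.
      from collections import Counter

      def decide_threat(reading: float) -> bool:
          return reading > 0.7  # stand-in decision function

      def tmr_decide(reading: float) -> bool:
          votes = [decide_threat(reading) for _ in range(3)]
          value, count = Counter(votes).most_common(1)[0]
          return value if count >= 2 else False  # default safe

      With three independent channels, a single corrupted result gets outvoted instead of pulling the trigger.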

  • the sleestak fossil [nationalgeographic.com] revealed yesterday was a nice advertisement for the upcoming land of the lost [landofthelost.net] will ferrell movie

    and electing a vulcan as president of the united states [whitehouse.gov] was a nice pr coup for the star trek [startrek.com] movie now playing

    but when the armed forces start building real terminators just to plug the upcoming christian bale terminator salvation [warnerbros.com] movie, this hollywood pr stunt business has gotten a little out of hand

    i'm sorry i have to draw the line. what next? someone releases a global pandemic just to plug..

  • by geekoid ( 135745 ) <dadinportlandNO@SPAMyahoo.com> on Wednesday May 20, 2009 @05:54PM (#28032053) Homepage Journal

    science delivers the goods, whereas philosophers deliver nothing.

    I am talking about modern philosophers, not philosophers from a time when the word meant educator and 'scientist'. 'Experimenter' might be a better term there.

    I was a philosophy major until I learned the number 1 thing said by philosophers:

    "You want fries with that?"
    maybe
    "Do you want fries with that, or do you just think you want fries with that?"
