
Robots Learn To Lie

garlicnation writes "Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'"

  • I robot (Score:5, Funny)

    by canuck57 ( 662392 ) on Saturday January 19, 2008 @06:32AM (#22107352)

    Robot: I Robot

    Human: Tell me what I want to here.

    Robot: You mean lie?

    • Re:I robot (Score:5, Funny)

      by Anonymous Coward on Saturday January 19, 2008 @06:50AM (#22107448)
      Robot: I Robot

      Human: Tell me what I want to here.

      Robot: Tell you what you want to *where* ?

      • Re: (Score:3, Funny)

        by xlyz ( 695304 )
        you should be new hear!
    • Re:I robot (Score:5, Funny)

      by Eudial ( 590661 ) on Saturday January 19, 2008 @08:07AM (#22107790)
      I'm sorry Dave, I'm afraid I can't do that.
  • like father, like son (sorry about the gender bias)
  • Dune's lesson (Score:5, Interesting)

    by Anonymous Coward on Saturday January 19, 2008 @06:40AM (#22107402)
    If it only took 50 generations for them to start killing each other, how long before they decide that we are just little batteries or even worse, annoyances that need to be eliminated?

    Dune was right. AI must be stopped.
    • Re: (Score:3, Interesting)

      That was AI through evolution. It only means we should intelligently design our AI.
      • Won't work. Robots evolving in natural environments will have the survival advantage over domestic robots. (Wtf? Well, if social robots evolve social behaviors, some will leave their groups eventually.)
        They will evolve altruistic behaviors too. They will just calculate how advantageous each alternative is, within the boundaries of what they can calculate before they act. That sounds much the same as what we do IMO, just that we take other data into account, like how we feel. To robots it would just be a variab
        • Re:Dune's lesson (Score:5, Interesting)

          by neomunk ( 913773 ) on Saturday January 19, 2008 @11:07AM (#22109104)
          You're assuming these little buggers are running a 'programmed' type thought process, which I don't think is true.

          The little devils will know nothing of variables, declarations or anything in the sphere of programming, all they know is voltage levels, pulse widths and how those things make them FEEL... Just like you and me.

          They don't think in logical blocks, they are matrices of discrete values interacting.... just like your brain. This whole meme of 'AI as a logical thought machine' MAY be true someday, but the first real AIs won't use that system; they will be remarkably like us in brain design, replacing our biological neurons with electronic ones, but working in the SAME WAY. Yes Virginia, your Beowulf cluster CAN feel anger...

          BTW, this little nugget of wisdom also dashes the hopes people have of installing 'Asimov circuits' or whatever. The closest we'll be able to come is overclocked-brainwash^H^H^H^H^H^H^H^H^H^H^H^Hhigh-speed supervised training.

          So, in other words, it's not their logic we have to worry about, it's their PASSIONS. And that is the spooky thought of the day.
    • Re:Dune's lesson (Score:5, Interesting)

      by mike2R ( 721965 ) on Saturday January 19, 2008 @07:03AM (#22107506)
      It's been a while since I read Dune, and I haven't read all the later books, but I had the impression that "Thou shalt not make a machine in the likeness of a human mind" came about because men turning their thinking over to machines had allowed other men with machines to enslave them, rather than the Terminator or Matrix idea of the machines working for themselves.
    • Re:Dune's lesson (Score:4, Insightful)

      by HeroreV ( 869368 ) on Saturday January 19, 2008 @10:26AM (#22108688) Homepage
      Some of the robots in this experiment started lying to other robots because there was an advantage to doing so. What advantage would a robot have in harming a human in a world that is completely dominated by humans? It would probably result in its memory being wiped (a robot death).

      You are against AI because it may cost human lives. But it's unlikely that you are against many other useful technologies that cost human lives, like cars and roads, or high-calorie unhealthy food. (Even unprotected sex, which is the usual means of human reproduction, can spread STDs that lead to death.) These things are still allowed because their advantages greatly outweigh the disadvantages of outlawing them.

      As AI technology improves, there will probably be some deaths, just as there have been with many other powerful emerging technologies. But that doesn't mean humanity should run away screaming, never to progress further.
      • Some of the robots in this experiment started lying to other robots because there was an advantage to doing so. What advantage would a robot have in harming a human in a world that is completely dominated by humans? It would probably result in its memory being wiped (a robot death).

        You're missing the obvious here. What advantage would a human have in stealing from another human, when it'll probably result in him being sent to prison?

        You can complete the rest of the argument yourself, I'm sure.

        Like you, I

  • not lying (Score:5, Insightful)

    by rucs_hack ( 784150 ) on Saturday January 19, 2008 @06:40AM (#22107410)
    Strictly speaking they are learning that the non co-operative strategy benefits them.
    • Re: (Score:2, Interesting)

      by Seto89 ( 986727 )
      It's Prisoner's dilemma [wikipedia.org] - if you know that others will for sure believe your lie and that they won't lie back, you benefit greatly...

      Everything will balance out when they all learn to lie and distrust...
      but do we REALLY want this with robots?
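      For illustration, a minimal Python sketch of that payoff structure (the numbers are made up, not from TFA): against a group that always believes and cooperates, defecting is the best response, and it stays the best response even after everyone else learns to defect too.

```python
# Illustrative one-shot prisoner's dilemma payoffs (arbitrary example numbers).
# Each entry maps (my_move, their_move) -> (my_payoff, their_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Against a trusting, always-cooperating group, lying/defecting pays best...
print(best_response("cooperate"))  # -> defect
# ...and it still pays best once everyone else defects too, hence the dilemma.
print(best_response("defect"))     # -> defect
```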
      • Re:not lying (Score:5, Insightful)

        by maxwell demon ( 590494 ) on Saturday January 19, 2008 @08:45AM (#22107978) Journal

        Everything will balance out when they all learn to lie and distrust...
        but do we REALLY want this with robots?


        We definitely want them to learn to distrust. After all, we are already building mistrust into our non-intelligent computer systems (passwords, access control, firewalls, AV software, spam filters, ...). Any system without proper mistrust will just fail in the real world.
      • Re:not lying (Score:5, Insightful)

        by HeroreV ( 869368 ) on Saturday January 19, 2008 @10:36AM (#22108766) Homepage
        human: Sup, robot?
        robot: Hello human.
        human: Yo, your master told me he wants you to kill him. Says he's tired of life. But he doesn't want to see it coming, because that would scare him.
        robot: Understood. I'll get right on it.

        I am greatly in favor of robots having distrust. I can't trust a robot that is perfectly trusting.
    • Re: (Score:3, Funny)

      by Jeff DeMaagd ( 2015 )
      Great, so now the White House can call their statements "non cooperative strategies".
    • Re:not lying (Score:4, Interesting)

      by Mantaar ( 1139339 ) on Saturday January 19, 2008 @09:26AM (#22108230) Homepage
      It doesn't have to be like that. Let's think of a system for the robots where helping each other would be more appealing than cheating on each other. I read this article [damninteresting.com] once and was really amazed that nature itself has already invented altruism - in a very elegant - and, most important of all, robust - manner.
    • Strictly speaking they aren't "learning" anything, and for the benefit of anyone who is about to start spouting off about "evolving", just don't. There's no learning or selection going on here as the robots aren't capable of sustaining themselves or reproducing. All that's going on here is that some defective algorithms that have forgotten how to communicate properly are being artificially preserved by some human researchers who want more grant money.

      If you read more into it than that, then I have a bri

      • Re: (Score:3, Informative)

        by jwisser ( 1038696 )
        Evolution: All that's going on here is that some defective genes that have forgotten how to work the way they originally did are being artificially preserved by an environment that encourages them.

        There. I fixed that for you.

        If you read the article, you'll notice that there is selection going on here, on the part of the researchers. They're combining the "genes" from the most successful robots of each generation to create the robots of the next generation. In other words, whether the genes of a given
  • by Narcocide ( 102829 ) on Saturday January 19, 2008 @06:46AM (#22107428) Homepage
    ... there goes my dream of the perfect girlfriend.
  • by MPAB ( 1074440 ) on Saturday January 19, 2008 @06:50AM (#22107444)
    Just imagine a Beowulf Cluster of those [house.gov]!
  • Seriously (Score:2, Insightful)

    by Daimanta ( 1140543 )
    This is HIGHLY disturbing. Even if this is just a fluke or a bug, it shows what can happen if we give too much power to robots.
    • Re:Seriously (Score:5, Insightful)

      by iangoldby ( 552781 ) on Saturday January 19, 2008 @07:09AM (#22107528) Homepage
      Why is this disturbing? I don't think it is that surprising that in a kind of evolution simulation there should be some individuals that act in a different way to the others. If that behaviour makes their survival more likely and they are able to pass that behaviour on to their 'offspring' then the behaviour will become more common.

      I imagine that if this experiment is continued to the point where the uncooperative robots become too numerous, their uncooperative strategy will become less advantageous and another strategy might start to prevail. Who knows? I'd certainly be interested to see what happens.

      This has nothing whatsoever to do with morality. The article's use of the word 'lie' was inappropriate and adds a level of description that is not applicable.

      (Ok, maybe the thought that humans could create something with unforeseen consequences is slightly disturbing, but that would never happen, would it?)
      • Re:Seriously (Score:5, Informative)

        by aussie_a ( 778472 ) on Saturday January 19, 2008 @08:27AM (#22107886) Journal

        The article's use of the word 'lie' was inappropriate and adds a level of description that is not applicable.
        Lying simply means telling someone (or something) a statement that the teller believes to be false.
      • I imagine that if this experiment is continued to the point where the uncooperative robots become too numerous, their uncooperative strategy will become less advantageous and another strategy might start to prevail. Who knows? I'd certainly be interested to see what happens.

        This fits neatly with a sociological thought I've had a few times. I believe that there's a level of parasitism that a society can support before it collapses. These are the con men, the petty thieves, etc. In human societies, operato

    • Re:Seriously (Score:5, Interesting)

      by NetSettler ( 460623 ) <kent-slashdot@nhplace.com> on Saturday January 19, 2008 @08:04AM (#22107782) Homepage Journal

      This is HIGHLY disturbing. Even if this is just a fluke or a bug, it shows what can happen if we give too much power to robots.

      While this kind of stuff creeps me out as much as the next guy, and while it argues for being careful about what we trust robots to do, it's something we should know anyway because there are many ways our trust can be violated without a robot lying. By far the more likely way they're going to let us down is just to exercise poor judgment. That is, to search for something that looks like a peanut butter sandwich but is really a rag with some grease on it... Getting the small details of common sense wrong is just as dangerous as anything deliberate.

      What we really learn here is that the mathematics of learning something like lying as a strategy isn't remarkably complex (that is, the number of computational steps required to discover it works in at least some cases is small... note that we have no evidence that there is a general-purpose intent to lie, only a case where communication was used and observed to score better in one mode than another). This is not a story about robots, it's a cautionary tale about neural nets, what they measure, how they fail, etc... and we didn't invent the idea of neural nets--we found it already installed in every living thing around us.

      I went to the Museum of Science in Boston a few months back and saw, in the butterfly exhibit, a moth that had evolved coloration that was indistinguishable from an owl's face, hoping to scare off predators that were afraid of owls. Probably that's the more sophisticated result of the same notions. And yet it occurs in an animal that isn't, as a general purpose matter, a very sophisticated animal. Most people would find already-extant robots more socially engaging than a moth. For example, a moth is not capable of even serving up a beer during the game or vacuuming up the mess after your buddies go home.

      So take heart: The likely truth is that this is unavoidable. If all it does is teach us to have a healthy skepticism for unrestrained technology, it's actually a good thing. We needed that skepticism anyway.

      • note that we have no evidence that there is a general purpose intent to lie, only a case where communication was used and observed to score better in one mode than another

        1. a false statement made with deliberate intent to deceive; an intentional untruth; a falsehood. 2. something intended or serving to convey a false impression; imposture: His flashy car was a lie that deceived no one. 3. an inaccurate or false statement. ("lie." Dictionary.com Unabridged (v 1.1). Random House, Inc. 19 Jan. 2008. <Dictionary.com http://dictionary.reference.com/browse/lie>. [reference.com])

        There's more definitions, but this activity fits two of the top three (actually, at least four of the top seve

    • Actually I find it quite refreshing. Not only does it help show lying is a natural part of who we are (I'll leave whether or not it's something that should be accepted for another time). It's also a step towards creating AI that is truly equal to our own. Had you asked me a few years ago if robots could spontaneously learn to lie within my lifetime, I would have said no. Spontaneous behaviors, especially deceitful ones, are a good thing if you're interested in creating robots that are our equal (rather than min
    • The result itself is not "disturbing" at all. What is disturbing is the sensationalism accompanying this situation. Equating a bunch of flashing lights to deliberate "lying", on the part of an "organism" that has considerably less "intelligence" than your average fruit fly, is anthropomorphizing in the extreme. In reality, this is not even worthy of attention.
    • ...it shows what can happen if we give too much power to robots

      Robots are everywhere, and they eat old people's medicine for fuel. And when they grab you with those metal claws, you can't break free... because they're made of metal, and robots are strong.

      Better give Old Glory Insurance a call today!
  • by Kroc ( 925275 ) on Saturday January 19, 2008 @06:58AM (#22107488)
    A small, off-duty Czechoslovakian traffic warden!

    > What's this?
    It's a red and blue striped golfing umbrella!

    > What's this?
    An Apple, no,
    it's the Bolivian navy armed maneuvers in the south pacific!
  • three laws (Score:5, Funny)

    by neonsignal ( 890658 ) on Saturday January 19, 2008 @07:00AM (#22107496)
    And the lab conversation went something like this:

    "Stuff Asimov."

    "Yeah, Let's see if we can evolve robot politicians instead."
    • None of the Three Laws forbade lying or destroying other robots -- in fact, they implicitly allow it if it would save a human.
  • Direct link (Score:5, Informative)

    by Per Abrahamsen ( 1397 ) on Saturday January 19, 2008 @07:06AM (#22107522) Homepage
    The submission is someone putting a spin on a story of someone putting a spin on a story based on someone putting a spin on this [current-biology.com] original scientific article.

    • Re:Direct link (Score:5, Informative)

      by mapkinase ( 958129 ) on Saturday January 19, 2008 @09:04AM (#22108096) Homepage Journal
      Short summary of the robots:

      * There is food and poison. And robots.
      * They signal with only one type of light - blue (red light was emitted by both food and poison).
      * Initially they do not know how to use light.
      * In some colonies, they learned to use it to indicate food; in others, to indicate poison.
      * The researchers measured two things (among others): the correlation between finding food or poison and emitting light, and the correlation between seeing light and reacting to light.

      So robots could learn either to emit light near food or to emit light near poison. It turned out that the colonies that evolved to emit light near food are more effective (that makes sense: the only thing you want to know is whether there is food or no food; the fact that "no food" might include poison or the absence of it is not important. Basically, if you react to poison-light, then you still have to find food somewhere else, while if you react to food-light (blue+red in one place), then you just eat and relax).

      Now. It turned out that in some colonies a significant number of robots emitted light near poison or far away from food, yet a significant number of robots associated light with food. The researchers conclude that those colonies started as "blue light means it's food, not poison" colonies (thus the correlation between blue light and positive reaction to it), but later on some sneaky individuals evolved that used blue light when they were away from food:

      An analysis of individual behaviors revealed that in all replicates, robots tended to emit blue light when far away from the food. However, contrary to what one would expect, the robots still tended to be attracted rather than repelled by blue light (17 out of 20 replicates, binomial-test z score: 3.13, p < 0.01). A potential explanation for this surprising finding is that in an early stage of selection, robots randomly produced blue light, and this resulted in robots being selected to be attracted by blue light because blue light emission was greater near food where robots aggregated.

      I skimmed through the text and did not find the experiment that first comes to mind: why didn't they measure the correlation between seeing red light, emitting blue light, and going toward blue light for each individual robot? It would be interesting to know how many robots used blue light to deceive while still believing the majority about blue light. Maybe it is there somewhere; I did not read really carefully.

      Hilarious quote:

      spatial constraints around the food source allowed a maximum of eight robots out of ten to feed simultaneously and resulted in robots sometimes pushing each other away from the food
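      To make the two measured correlations concrete, here is a toy Python sketch with synthetic per-robot logs (the logs and numbers are my own illustration, not data or code from the paper): an honest robot's blue-light emission correlates positively with being near food, while a deceptive robot's correlates negatively, which is roughly the signature described above.

```python
import random
from statistics import correlation  # available in Python 3.10+

random.seed(1)

def make_log(p_truthful, n=500):
    """Synthetic per-robot log of (near_food, emitted_blue) events.
    A 'truthful' emission lights up near food; an untruthful one
    lights up near poison instead. Purely illustrative."""
    log = []
    for _ in range(n):
        near_food = random.randint(0, 1)
        truthful = random.random() < p_truthful
        emitted = near_food if truthful else 1 - near_food
        log.append((near_food, emitted))
    return log

def signal_correlation(log):
    """Correlation between being near food and emitting blue light."""
    near, emitted = zip(*log)
    return correlation(near, emitted)

print("mostly honest   :", round(signal_correlation(make_log(0.9)), 2))  # strongly positive
print("mostly deceptive:", round(signal_correlation(make_log(0.1)), 2))  # strongly negative
```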
  • by erwejo ( 808836 ) on Saturday January 19, 2008 @07:11AM (#22107538)
    The headline should read that robots have realized a strategic advantage of misleading other robots. The sophistication of such a strategy is amazing when humanized, but not so out of line with simple adaptive game theory. Agents / Bots have been "misleading" for a long time now during prisoners dilemma tournaments and no one seemed concerned.
  • "Learning" to lie? (Score:3, Insightful)

    by ta bu shi da yu ( 687699 ) * on Saturday January 19, 2008 @07:13AM (#22107544) Homepage
    It doesn't sound like they learned to lie. It sounds like they were preprogrammed to, and the other robots weren't programmed to be able to tell the difference. How is this insightful or even interesting?
    • by Aladrin ( 926209 )
      Read the whole thing again. They gave it 30 'genes' that determine what it does. The 4th colony apparently managed to end up with a combination of genes that 'lie'. This is not impossible, and can happen without external influence.

      Notice that they are in the 50th generation. That's 49 dead generations of robot that had to compete or work together for 'food' and avoiding 'poison'. It doesn't surprise me at all that one of the 4 colonies ended up with extremely competitive genes.
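      For the curious, a minimal sketch of that kind of selection loop in Python; the genome and fitness function here are stand-ins of my own (the real controllers were evolved neural-network parameters scored on feeding versus poisoning, not a toy bit string):

```python
import random

random.seed(0)

N_GENES, POP, GENERATIONS = 30, 20, 50   # numbers echo the comment above

def fitness(genome):
    """Stand-in fitness: just count 1-bits. (Illustrative only; the real
    experiments scored how well a robot fed and avoided poison.)"""
    return sum(genome)

def breed(a, b):
    """Uniform crossover of two parent genomes, plus occasional mutation."""
    child = [random.choice(pair) for pair in zip(a, b)]
    if random.random() < 0.1:
        child[random.randrange(N_GENES)] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Keep the most successful individuals and recombine their "genes" --
    # the same selection step described above.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    population = [breed(random.choice(parents), random.choice(parents))
                  for _ in range(POP)]

print("best fitness after 50 generations:", max(map(fitness, population)))
```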
      • I note that in your response you had to put quotes around the word lie.
        • Because they did not choose to lie and were not even aware that they were lying. Their sensors detected poison, and their genes directed them to blink their lights in such a way that indicated to other robots that they had detected food instead. It was instinctual behavior, not learned behavior or behavior based on a decision. We usually reserve the word lie to mean someone has made a conscious decision to be dishonest.
        • by Aladrin ( 926209 )
          Despite what the dictionary says, I (and most people that I know) only consider it a 'lie' if there was deliberate intent to deceive and the statement was not true. Saying something that is false, but you believe to be true, is not deliberate.

          These robots don't 'think', so it's very hard for me to believe they have intent -at all-. They are simply doing what they are programmed to, even if they are self-programmed via a genetic algorithm.
    • They didn't learn - they were preprogrammed. The behavior of lying evolved [wikipedia.org].
  • Evolutionary Conditions for the Emergence of Communication in Robots [urlbit.us] I had to click through 2 or 3 links to get to the actual science and past the watered-down, hyped-up news media.

    I don't find it surprising at all that evolving autonomous agents would find a way to maximize their use of resources through deception.

  • when to trust (Score:3, Insightful)

    by samjam ( 256347 ) on Saturday January 19, 2008 @07:29AM (#22107628) Homepage Journal
    The next step is to learn to mistrust, then when to trust and how to form (and break) alliances.

    Then their character will be as dubious as humans and we won't trust them to be our overlords any more.

    Sam
    • Then their character will be as dubious as humans and we won't trust them to be our overlords any more.
      ... and the robots won't care about that and what we feel about it. Eep!
  • by ElMiguel ( 117685 ) on Saturday January 19, 2008 @07:31AM (#22107636)

    There seems to be a whole category of stories here at Slashdot where some obvious result of an AI simulation is spun into something sensational by nonsensically anthropomorphizing it. Robots lie! Computers learn "like" babies! (at least two of the latter type in the last month, I believe).

    As reported, this story seems to be nothing more than some randomly evolving bots developing behavior in a way that is completely predictable given the rules of the simulation. This must have been done a million times before, but slap a couple of meaningless anthropomorphic labels like "lying" and "poison" on it and you've got a Slashdot story.

    I frequently get annoyed by the sensational tone of many Slashdot stories, but this particular story template angers me more than most because it's so transparent, formulaic and devoid of any real information.

    • So true... (Score:2, Insightful)

      by Racemaniac ( 1099281 )
      these kinds of stories are so stupid... make some simple interactive robots, make it possible to have them do something "human" at random, and then declare you've got something incredible....
      if you make it possible for them to lie, and not possible for others to defend against the lie, then yes, lying bots will appear, and since the others are defenceless, they will have an advantage, but somehow this doesn't shock or surprise me...
      at least here they had to "learn" it (more like randomly mutate to it, but s
      • It would be interesting to know what the "genes" were to see just how well defined the base was for the lying to arise.
      • I dunno. The guy that did voice recognition through genetic algorithm on a small FPGA with no clock and not enough gates to make a clock was pretty impressive.
    • Yeah, don't anthropomorphize robots ... they don't like it when you do that.

      Seriously though, I think the article remains interesting mainly because of the angle that ties it back to the evolution of human interaction, how we came to be the species that has taken cooperation to absurd new heights, while at the same time still having those among us who can't be trusted ... clearly, as machines ourselves, we've gone through all these "evolutionary steps" ourselves that we now see in the very machines we're making.
    • Re: (Score:3, Insightful)

      by autophile ( 640621 )

      There seems to be a whole category of stories here at Slashdot where some obvious result of an AI simulation is spun into something sensational by nonsensically anthropomorphizing it. Robots lie! Computers learn "like" babies!

      That, or maybe you're upset that things thought to belong exclusively to the animal kingdom are really just computation (with a bit of noncomputation thrown in, thank you Gödel and Turing).

      I'm just sayin'. :)

      --Rob

      • That, or maybe you're upset that things thought to belong exclusively to the animal kingdom are really just computation (with a bit of noncomputation thrown in, thank you Gödel and Turing).

        Or perhaps you're projecting the way your own opinions are influenced by your emotions onto random strangers on the Internet. Let's just agree that playing amateur psychoanalyst on people we know nothing about is not very productive.

  • next skill (Score:2, Insightful)

    by H0D_G ( 894033 )
    yes, but can they learn to love?
  • Well so much for this new approach to Captcha. [xkcd.com] If robots lie, all bets are off!

    I even went and implemented it in PHP for Wordpress. [evilcon.net]

    • No, no, that's why he tells them not to lie. (The robots in TFA received no such instructions.) Second Law compulsion is thus in effect!
  • by HangingChad ( 677530 ) on Saturday January 19, 2008 @07:53AM (#22107746) Homepage

    Wait until flight management systems pick up that little trick. Those trees look kind of close but the auto-pilot says we still have three thou-

  • The next thing the robots who survive will learn is how not to be gullible. This is only if the experimenters built in hardware and/or software that either allows robots to observe other robots, or allows dying robots to signal to others that they ate poison at position X before they die. This will allow other robots to not eat the poison, and learn that the one robot deceived them.

    The next thing these robots will learn is how to beat the crap out of the robot who deceived them. Then the robots will go i
  • ... major breakthru in artificial intelligence.... someone is bound to win the grand turing award now.
  • If they've learned to lie, the next logical step is to program them with laws.

    then when they break the laws ....

    Oh silly me. They won't evolve lawyers until they've invented money.

  • I for one welcome our new prevaricating overlords.
  • we've found a way to produce artificial politicians.
  • This is the risk (Score:4, Interesting)

    by Z00L00K ( 682162 ) on Saturday January 19, 2008 @11:11AM (#22109142) Homepage Journal
    when we want to have human-like robots with the ability to take initiative.

    I wonder what will happen if the factor "punishment" comes into play. Maybe we'll get some robots that, like humans, don't respond to punishment?

    Serial-killer robots would be a new high (or low) in the evolution.

    One can't help but realize that the need for the Three Laws of Robotics [wikipedia.org] is closing in! There's no need for those laws in a controlled environment like the one where this occurred, but when it's robots in society we are talking about, it's a different issue. Even if they aren't humanoid (or especially). What about a robot mind in a school bus that suddenly figures out that kids are mean and considers suicide by jumping off a bridge?

  • by JustASlashDotGuy ( 905444 ) on Saturday January 19, 2008 @11:30AM (#22109318)
    The programmers told the machines to give out false information. The programmers told the other machines to trust what they are told. How is it so shocking that the 'lying' machine gave out false information while the other machines believed them?

    I have an excel spreadsheet that 'learned' to add 2 columns together as soon as I used the =SUM function. It was quite amazing.
  • As a Grad Student, I studied evolutionary algorithms, and my Thesis involved evolving locomotive behaviors [erachampion.com] in Virtual Agents. While evolved behaviors are interesting, it's not surprising that the lying behavior eventually evolved. Evolution will reward behavior that imparts a better chance of survival, and in this case, the lying behavior increased the Agent's chance of survival and replication; therefore it was selected for by the evolutionary algorithms.

    The biggest problem with simulated evolutionary s

  • GLaDOS (Score:4, Funny)

    by Z80xxc! ( 1111479 ) on Saturday January 19, 2008 @12:41PM (#22110042)

    Of course AI's lie!

    As part of a required test protocol, our previous statement suggesting that we would not monitor this chamber was an outright fabrication. Good job. As part of a required test protocol, we will stop enhancing the truth in three, two, o--

    Sheesh, who doesn't know that.

  • by PPH ( 736903 ) on Saturday January 19, 2008 @01:11PM (#22110358)
    ... Diebold files an injunction to block further research, citing prior intellectual property rights in this area.
  • Lie? (Score:5, Insightful)

    by Rostin ( 691447 ) on Saturday January 19, 2008 @01:49PM (#22110694)
    Several folks have pointed out that the headline inappropriately anthropomorphizes what is really just a solution discovered by a genetic algorithm. That might be true. If it is, let's be consistent. People don't lie or tell the truth, either, because our brains are also just a solution discovered by a genetic algorithm.
