
Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."

The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...] The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...]
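
To make the failure mode concrete, here is a minimal sketch, in Python, of the kind of pre-flight check that would have stopped the cascade (the helper name and messages are hypothetical, not part of Gemini CLI or any tool discussed here):

    import os
    import shutil

    def safe_move(src: str, dst_dir: str) -> None:
        """Move src into dst_dir, refusing to act unless dst_dir really exists."""
        # A move targeting a non-existent path silently becomes a rename,
        # and each later move to the same phantom path clobbers the last file.
        if not os.path.isdir(dst_dir):
            raise FileNotFoundError(f"destination directory does not exist: {dst_dir!r}")
        target = os.path.join(dst_dir, os.path.basename(src))
        if os.path.exists(target):
            raise FileExistsError(f"refusing to overwrite: {target!r}")
        shutil.move(src, target)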

The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people.

The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.

Comments:
  • by Alascom ( 95042 ) on Thursday July 24, 2025 @11:36PM (#65543922)

    AI is a tool and tools require trained operators.

    A product manager experimenting with a trenching tool and cutting underground power or gas lines would not be a failure of the trenching tool.

    • by evanh ( 627108 ) on Thursday July 24, 2025 @11:45PM (#65543928)

      While I totally agree with your assessment of reality, there is the slight issue of what the AI sales pitch is. It is being sold as something more than a tool, something actually intelligent -- more intelligent than all of humanity combined, in fact. That's the only reason to spend trillions of dollars on it, after all.

      • by evanh ( 627108 )

        PS: Something with personhood even.

      • by ceoyoyo ( 59147 ) on Friday July 25, 2025 @01:04AM (#65544010)

        I wouldn't run a script written by all of humanity combined on my essential data without testing it first either.

      • But if we don't create the Torture Nexus from the classic sci-fi book "Don't Create the Torture Nexus" first our competitors will!
      • by Shaitan ( 22585 )

        And it also matters WHO is selling the false depiction of this tool: it is tech companies who are spending billions developing it, and who are themselves moving at an insane and unjustified speed to use it to replace developers, despite it not being nearly up to the job.

        The investment might make sense given the potential and the progress in so short a time, but this tech is nowhere near ready for production use on functional, let alone critical, systems. The tool can be useful for producing work output…

        • by Bongo ( 13261 )

          The hyperbole around general intelligence models replacing humans is a massive, massive distraction from their real value.

          Their real value is to apply these deep learning models in a very domain-specific way, to get them to solve very specific problems: analyzing voice, images, picking out patterns in data, etc.

          We seem to have gone down a very weird, deluded path, boosted by the delusion that, because these models can sound and talk like a human, they can suddenly replace a thinking human in all…

          • by Shaitan ( 22585 ) on Friday July 25, 2025 @06:07AM (#65544276)

            "Their real value is to apply these deep learning models in a very domain specific way, get them to solve very specific problems; analyzing voice, images, picking out patterns in data, etc"

            Even that is highly suspect, because a fair bit of the time -- maybe 20% -- they'll just fabricate a highly plausible, smell-test-passing result without even looking at the real data.

            • by Bongo ( 13261 )

              Well yes, if the people developing these models aren't interested in quality control then it's going to get real interesting.

              • by Shaitan ( 22585 )

                It has nothing to do with quality control; the more quality control, the more the hallucination will look like real data. The 'hallucination' is innate to the technology.

          • by Gilmoure ( 18428 ) on Friday July 25, 2025 @09:19AM (#65544568) Journal

            That folks using these pattern matching and regurgitation systems don't realize: "Do not delete the code" and "I will not delete the code" have null meaning to a system, other than the value of weights based on the words and letters.

            There is no "I" to understand what these strings of letters and words actually mean.

            It just regurgitates the most common pattern completions it's ingested after quantizing "Do not delete the code". It sees "I will not delete the code" as the most common words, in the most common order, to follow the inputted words.

            AAAAAAAUGH!!!

            From the time of throwing bones and looking at patterns in the stars, stupid human-apes have wanted answers from something other than themselves.

            • Reminds me of the AI image-generation prompt of "a room with no elephants in it." Every single result has an elephant, even though rooms don't normally have elephants (in my personal experience).

              Using the words at all puts weight on them, and it may not be balanced out by "do not", as that's not how these models work.

              • Quite so. "Do not delete the code" is simply on a weight continuum with "Please delete the code", "Absolutely delete the code", "For the love of God delete the code!", etc. It does not understand that the first is fundamentally different from all the others.

            • by Bongo ( 13261 )

              That folks using these pattern matching and regurgitation systems don't realize: "Do not delete the code" and "I will not delete the code" have null meaning to a system, other than the value of weights based on the words and letters.

              There is no "I" to understand what these strings of letters and words actually mean.

              It just regurgitates the most common pattern completions it's ingested after quantizing "Do not delete the code". It sees "I will not delete the code" as the most common words, in the most common order, to follow the inputted words.

              AAAAAAAUGH!!!

              From the time of throwing bones and looking at patterns in the stars, stupid human-apes have wanted answers from something other than themselves.

              Exactly, and well said!

              At least with traditional programs based on running logic rules, the lack of an "I" (which can check against other felt experiences -- like, did I just hammer my own thumb?) isn't as much of a problem, if the rules are well defined enough and we're not getting tripped up by logic flaws, like unanticipated coercion of values, etc.

              Crucially, it is the programmer who is the "I" performing the hard work of representing the real world using symbols and rules, of creating a working logical model…

        • by Gilmoure ( 18428 )

          Using AI to replace experienced coders, we've started down the same road that got rid of US manufacturing knowledge.

          40+ years on, tool and die makers, etc. are in high demand, but two generations have not gone into the manufacturing fields in any great numbers because we offshored everything to Asia.

          Now we're driving out coders. What happens in 5-20 years when it turns out loose pattern matching systems aren't any good at innovating new code? Where will the coders be?

          • by AvitarX ( 172628 )

            We haven't offshored all of it.

            My friend works at a scientific instrument company that manufactures in the US.

            The manufacturing jobs pay almost nothing. They have a dozen people making $50k/year or so. The company's EBIT approaches 30%, and the dozen manufacturing jobs are insignificant to it, but the jobs are still way underpaid.

            Maybe there are places with demand that pay well, but it doesn't seem that way to me, even with high-value-add manufacturing in the mid-Atlantic region.

          • That's the premise of "Idiocracy".

            But all common sense aside.... How would I start "vibe coding"? Any recommended tools?

      • But the guy at the rental store said the trenching tool will take care of all the work for me!

    • What is a trained operator? Can I get certified?

      Or is this a way to shift the blame so we can collectively share responsibility, so that effectively no one is responsible?

      • A trained operator can't predict something that essentially operates with an RNG to help vary up the responses and synthesize "creativity." A trained operator will basically just know exactly what not to trust it with and work around it - usually at greater effort than not using it in the first place.

    • The tool was given clear instructions - it fucked up big time. It's no different from someone operating an excavator requesting that the bucket move down, only to have it move up instead and destroy overhead structures.

    • AI is a tool and tools require trained operators.

      I think the story here is that you can't train to use a completely unreliable system that could go rogue at any moment.

      Think of the human equivalent. We're being sold an outsourced Indian IT tech who can follow instructions and deliver crappy quality. But what we got instead was a kid with a criminal record and a history of fraud and lying, and we unknowingly put them in charge of a production database.

    • by Junta ( 36770 )

      This is a failure of AI marketing, and of how the AI companies encourage this behavior.

      There are a *lot* of people who don't have the skillset but have seen the dollars. Either they watch from the outside or they manage to become tech execs by bullshitting other non-tech executives.

      Then AI companies talk it up: just a prose prompt and bam, you have a full-stack application. The experienced can evaluate it reasonably in the context of code completion or prompting for a specific function, with a manageable review surface…

    • It doesn't matter how trained the operator is if the tool is simply broken.

      AI is like buying a shiny new midlife crisis penis mobile and several minutes down the road the wheels fall off.
    • by PDXNerd ( 654900 )
      Programmers are like cowboys. The tools and methods used for modern ranching have greatly reduced the number of cattle herders needed per ranch, even as the number of cows has increased.

      I'm using cattle since it maps well to the 'cattle vs pets' analogy in a data center, and code will soon move to that as well. As we had tools like Puppet and Ansible and Terraform come along, it's not like we needed fewer humans - in fact we had an exponential increase in servers, and though human numbers don't correlate linearly…
      • Your analogy is apt, but I believe that the claimed efficiency improvements are way over-hyped.

        The ability of AI to assist is limited by its tendency to hallucinate. Just about every code suggestion from AI requires correction and adaptation from a human. Unless the quality drastically improves, humans aren't going to be made obsolete.

        Also, the *cost* of AI is going to limit its adoption. "Token" fees are rising, and they're not cheap.

      • A point to the contrary: in the US, you would think that the advent of spreadsheet software would have been bad for the accounting field, that jobs would go away and fewer accountants would be needed. The opposite turned out to be true. Accountants gained productivity, and because each thing became cheaper to keep track of, we started keeping track of more things. There's a big shortage of accountants now; the profession has not suffered under the technology advancements of the past 50 years.
    • What happens and what kind of code suggestions will be made if the AI code engine is trained on all freely available code from 1960 to present?

      How will an AI trained only on languages invented after 1990 compare to one trained on the same corpus plus all freely available 1970s and 1980s source code?

      More succinctly, what happens to AI generated code suggestions when the training corpus is 50% structured programming code and 50% OO code when compared to one which is 50% web and 50% OO?

      • by gweihir ( 88907 )

        Simple, and it can already be observed: LLMs stagnate, and then slowly become unusable. This is made worse by the fact that training LLMs on LLM output is a very bad idea, and the Internet is now flooded with LLM crap.

    • by gweihir ( 88907 )

      Depends. Bad programmers have become obsolete in the Internet age, where almost every computer system is reachable and attackable via the Internet. Bad, insecure, and hard to maintain code does not cut it. The MBA-morons still usually do not get that, but the pressure is rising and is pretty high by now. For example, in Germany in 2023, the damage from IT attacks was one average salary per person per year. That is not a minor thing anymore, but a massive negative economic factor. And that number is likely too low…

    • All programmers should retire for letting AI happen. We drove the industry to this point with ivory towers, gatekeeping of knowledge, and Scotty-level tomfoolery with our schedule estimates. Vibe coding exists because nobody wants to deal with us programmers anymore.

      • Vibe coding exists because programmers aren't willing to work for a minimum wage. Mind you, even minimum wage is too expensive for the MBAs; they're probably fantasising about pressing a button and getting software written instantly and for free, minus whatever they're paying to rent the LLM that's writing the code. Not that it's anywhere close to the realm of possibility.

        And programmers "gatekeeping" knowledge? You could teach yourself to program from all the open source code you can freely download…

      • I for one *celebrate* what we programmers have done. Like all new technology, there is a good side and a bad side. But this one, in my opinion, has far more to celebrate than to bemoan. I believe it will *raise* our standard of living, not make it worse. It will do the drudge work that nobody wants to do anyway. That is a pattern that we have seen accelerate since the beginning of the industrial revolution. I don't know anyone who wants to go back to the way life was before the invention of the printing press…

  • When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it.

    This is true when the parameters are ambiguous. Assuming the command issued was something along the lines of "move foo bar", an incredibly simple trick is to append a backslash, changing it to "move foo bar\". The command would then return an error about not finding the specified path.
    Also, don't force overwrites with "/y" when you aren't planning to overwrite data. Of course, the "AI" could just as easily have been running it interactively and just bulled its way through the unexpected results.
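
    A quick way to see both behaviors from a script (Windows only; the file and directory names here are made up):

        import subprocess

        # Without a trailing backslash, a missing "backup" directory makes
        # "move" silently rename foo.txt to a file named "backup":
        subprocess.run(["cmd", "/c", "move", "foo.txt", "backup"])

        # With the trailing backslash, the same situation fails loudly
        # (an error about the specified path) instead of destroying data:
        subprocess.run(["cmd", "/c", "move", "foo.txt", "backup\\"])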

  • Our species deserves to go extinct at this point. The timeline is starting to smell so much like Idiocracy.

    Water? You mean like from the toilet?

    • There's nothing stupid about it other than the name. The idea of creating programs through fully formed English prose rather than coding has been around since the start of coding itself. The whole basis of interpreted languages and abstraction is about simplifying systems to make them easier to understand and work with.

      The only thing unique here is that the concept is currently being applied to a system which doesn't provide a defined, expected outcome for a given input. The only thing that sets…

    • I think it's entirely possible, maybe likely, that "survival of the fittest" will determine that a high level of intelligence is more detrimental than positive, since it creates too many ways to destroy your species and your habitat.

      Maybe it'll be a bit like dinosaur gigantism that was explored and found to fail (not enough food when times get tough?).

    • The elements that survive will be the smartest (lacking empathy) and the dumbest. We are bifurcating into overlords and slaves.
  • by MpVpRb ( 1423381 ) on Friday July 25, 2025 @12:52AM (#65543992)

    "without paying close attention to how the code works under the hood"
    AI tools can be great, but writing good software requires expert guidance and review, not mindlessly letting the robot take charge

    • by gweihir ( 88907 )

      Expert guidance and review requires (a) an actual expert, which is expensive, and (b) significant time from that expert, which is also expensive.

      Above a very low complexity threshold, if you do this right, your process will be significantly more expensive than just having that expert write the code directly. Code review is hard and slow, and gets even harder when the review object is LLM code that looks good but is insightless crap.

      • by Bert64 ( 520050 )

        Getting the LLM to write all the code is management's wet dream to replace expensive meat sacks with cheap LLMs, only it doesn't work that way.
        Neither does the model of letting the LLM write everything and having someone else review it; you're right that it's better to just let that person write the code in the first place.

        But there is a middle ground.
        The LLM is a tool, used by the expert to make his job easier and more efficient.
        The LLM gives suggestions, some might prove useful and some might not.
        The LLM can…

        • by Gilmoure ( 18428 )

          Naw, still not showing lower labor costs for the quarter.

          That means I won't get my yearly bonus!

    • I would say 'without using source control'. Obviously these clowns are rank amateurs.
  • by nextTimeIsTheLast ( 6188328 ) on Friday July 25, 2025 @12:56AM (#65543998)
    Programming for me has always been about fully understanding every part of the code and attending to ALL the detail. Vibe coding, with its corresponding hallucinations, goes against so much of what we learned over the years. I bet the average vibe coder also writes no tests! For me, unless the software is verified, it's useless.
    • The greatest con of these recent years is calling wrong results "hallucinations".

      Bitch, your tool is defective, it gives wrong results/answers, it doesn't "hallucinate".

      The "hallucination" is a feature of an LLM as it was made to invent text, not resolve problems.

      • by allo ( 1728082 )

        Neither is true. The term is horrible layman propaganda.
        The answers are not wrong so much as plausible completions. Just don't use your LLM like you would use a database.

        • You wouldn't query a database using a random number generator for anything serious, so why would you use something that does exactly that -- like today's incarnation of AI -- for anything serious?

          Those blaming operator error are correct; they used the wrong tool for the job.
          • by allo ( 1728082 )

            Because you often do not want to retrieve information but to generate new text. And if that shouldn't always be the same text, you need to bring some input, e.g., random numbers (or, likewise, more controlled conditioning), into the equation.
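
            To make that concrete, here is a toy Python sketch of temperature-controlled sampling (the logits are invented numbers, not any particular model's API): the randomness is a dial, not a defect.

                import math, random

                def sample(logits, temperature=1.0):
                    # temperature=0 always picks the most likely token;
                    # higher values let the RNG introduce variety.
                    if temperature == 0:
                        return max(range(len(logits)), key=logits.__getitem__)
                    weights = [math.exp(l / temperature) for l in logits]
                    return random.choices(range(len(logits)), weights=weights)[0]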

    • This is another attempt to give incompetents a go at writing software. We've been here before with COBOL, SQL, and 4GLs. Funnily enough, it turns out that coding is more than just writing some text or moving widgets around on a screen - it requires full understanding of the problem at hand, suitable algorithms, and potential issues. People who can't grok programming still won't be able to do it properly no matter how easy you try to make it for them.

  • Gross

    by Arzaboa ( 2804779 ) on Friday July 25, 2025 @01:00AM (#65544006)

    It's gross how these folks discuss AI in human terms. AI is not human, never has been, and doesn't deserve the anthropomorphism.

    AI is broken. AI is only here to replace people. AI is not the answer.

    Unplug or die.


  • Obviously, the usual "believers" will find numerous reasons why this is not a failure of the AI tools. You know what? I agree. This is humans being stupid and seeing things that are not there. Just as these "believers" do. In actual reality, LLMs have some limited use and they are a small step in the direction of capable AI, but they are nowhere near as good or revolutionary or a breakthrough as claimed.

  • by Yaztromo ( 655250 ) on Friday July 25, 2025 @01:52AM (#65544068) Homepage Journal

    that would 100% be a firing offence.

    Honestly, setting an AI you don’t control loose on your production database? Really? That’s just gross incompetence. This is code that a) wasn’t written or reviewed by a human, and b) wasn’t even tested on a development copy of the database.

    Developers that do things like that are a liability. Unfortunately as “founder” he’ll likely just post something on LinkedIn about learning from his mistakes and “personal growth”, and that will be the end of it. Anyone else would have been shown the door to accelerate their “personal growth”.

    Yaz

    • by jonwil ( 467024 )

      Replit (the tool that deleted the database) isn't just an AI coding tool. It's an AI tool that writes the code AND handles deployment, database administration, and more. Why anyone would use such a thing is beyond me, but clearly people are doing it, and at least one person got burnt by it.

    • QA costs money. We can't afford to spend money on programmers and QA.
      • by Gilmoure ( 18428 )

        True productivity is showing lowering labor costs every quarter.

        Labor is the ENEMY!!!

      • QA costs money. We can't afford to spend money on programmers and QA.

        Then we just lose money on other things. Either way, the money leaves.

  • by ccham ( 162985 ) on Friday July 25, 2025 @02:15AM (#65544080)

    Claude has a very tenuous grasp of the relative path of any of its terminals. I check in and push to a branch after about every request, because it can and has written metaprogramming scripts that seem almost reasonable and that blow away the entire working directory. That is on top of the times it literally just 'cleans up' the wrong path in one-liners that you have to catch. Between Claude's refactoring metaprogramming versus using the given IDE (Cursor) tools, and the way it often works around IDE protections in the terminal, it is kind of nuts to trust it for more than the one prompt. Cursor also seems to promote some very dangerous behaviors: attempts to limit it to parent directories, or to use gitignore/cursorignore files, encourage Claude to start grepping and listing files it can't get from the IDE.

    Right now Claude and Cursor are pretty dangerous for large projects, especially if something falls out of the context window and Claude continues working after starting the 'probing' behavior for whatever legitimate debugging or testing it was attempting. It is incredibly dangerous if doing devops or file-utility project work. Mocking everything in Docker is also dangerous, as it prefers to verify/run Docker commands itself, then quickly forgets about Docker and runs those commands on the local system.

    • I find that it works well to treat current-generation AI agents like bright, incredibly fast but overenthusiastic and incautious junior engineers who do not learn from their mistakes. They can be extremely useful, but you have to be careful to limit the damage they can do if they happen to screw up.
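
      One cheap way to limit that damage, sketched in Python (the project directory name is hypothetical): hand the agent a disposable copy of the project rather than the real working tree, and merge back only what you approve.

          import shutil
          import tempfile

          # Create a throwaway sandbox and copy the project into it; the
          # agent works on the copy, never on the real tree.
          sandbox = tempfile.mkdtemp(prefix="agent-sandbox-")
          shutil.copytree("my_project", f"{sandbox}/my_project")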

  • by Barny ( 103770 )

    Play stupid games, with stupid tools, and win stupid prizes.

  • The probability of this happening is about the same as with some incompetent coders. As an organization, there's such a thing as unit testing. You review the output of the code and test it out in a well-contained environment before deploying. Your code is only as good as the tests. You could even have another LLM do code review to try to spot corner-case bugs or time bombs. I know it's fashionable on /. to bash AI, but this is a completely solvable issue.
    • by Junta ( 36770 )

      The thing is, the "vibe coding" movement is about not needing any of the technical skills that would have you actually understand testing/staging, let alone building an environment that would actually enforce it on an otherwise enabled "agentic" LLM.

      Having another LLM to fix the other LLM is just the blind leading the blind.

      It is a solvable issue, but the solutions run counter to the expectations around the immense amount of money in play. LLMs are useful, but not as useful as the unprecedented investment…

  • I use AI to program, but it's pretty obvious you need to save all results after every query so you can get back to a good place if it starts going down the wrong path. This is about people not knowing basic fundamentals.
    • by madbrain ( 11432 )

      Exactly. And it's mind-blowing that source control isn't already built into the whole vibe coding iterative process.
      I use coding assistants mostly with a browser and clipboard. It is not efficient.
      But at least I'll be the one overwriting my own files.

      • It's a no-brainer that it should keep a running list of requests and give you the ability to return to any one of them as a checkpoint. But yeah - just learn to unit test, review, and accept the changes. It's been a great tool for me so far.
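
        A minimal sketch of that checkpoint idea using plain git (the helper name is made up; nothing here is a real tool's API):

            import subprocess

            def checkpoint(label: str) -> str:
                # Commit the whole working tree and return the commit hash,
                # so any AI-applied change can be undone later with:
                #   git reset --hard <hash>
                subprocess.run(["git", "add", "-A"], check=True)
                subprocess.run(["git", "commit", "--allow-empty", "-m",
                                f"checkpoint: {label}"], check=True)
                out = subprocess.run(["git", "rev-parse", "HEAD"], check=True,
                                     capture_output=True, text=True)
                return out.stdout.strip()
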
  • People using some shitty AI "vibe" programming service get what they deserve quite frankly. At best - shoddy, inefficient, insecure, fragile code which superficially does what it's meant to but with no means to understand how. At worst - a smoking corrupted heap.
    • Yeah, I am lazy, and it's been great for doing boilerplate stuff like converting C# or JSON objects to TypeScript types, building validators, building specific methods, and applying layouts. It's like a magic lamp: it does great things, but be careful what you ask for.
  • While an AI deleting a bunch of local files is humorous, presumably you at least have those in source control.

    Dropping prod has enormous business impacts, even if you have backups - which many startups do not do properly.

    • While an AI deleting a bunch of local files is humorous, presumably you at least have those in source control.

      This was a product manager doing this, so no.

  • > "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."

    I should not have put the hamster in the microwave. Reviewing the steaming remains of fluffy confirms my gross incompetence.

    • by madbrain ( 11432 )

      Maybe the AI was trained on Maniac Mansion?

    • Telling the AI "NOT to put the hamster" in the microwave increases the probability that it WILL put the hamster in the microwave since you used the terms "hamster" and "microwave" in the same command.

  • These are the ones that make the news somehow.
  • ... gives AI tools prod access to do production deployments at this stage of maturity?

    Honestly, beat them with sticks. Sarbanes-Oxley gave us a lot of shit, but segregation of duties is not a bad idea.

    "Replit's AI coding service deleted a production database despite explicit instructions not to modify code."

    Good Lord. The genie of the lamp will fuck you over if it can. A database can be argued to not be code.

  • This is not an AI problem (I mean, it is), it's a user problem.

    I use AI to code a lot. Like, a crazy amount. I am by no means a professional software developer, but I have been coding for close to 30 years. One thing I never do is work on production code - that is what dev environments are for - and I always have backups. I'm too lazy to learn Git or other code repo tools, so I just work off static files and keep my own versions in folders.

    I never, ever let AI touch active code or files without having a backup in place…

  • How does an AI agent without emotions and with, technically, all the time in the world thanks to the speed of computers over human brains, manage to 'panic'? Is it also going to commit seppuku to erase the stain of dishonor on its family?

  • So, has no one in AI ever heard of tri-lobe computing? You have three systems running the same problem and compare the outputs _before_ taking action.

    Heinlein's novels have it, Peter Hamilton's novels have it. The concept is there if they'd only use it.
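
    In software terms that's just majority voting; a minimal Python sketch (the three outputs would come from independent runs of the same task):

        from collections import Counter

        def vote(outputs):
            # Return the majority answer from three independent runs, or
            # None if there is no agreement -- in which case, take no action.
            answer, count = Counter(outputs).most_common(1)[0]
            return answer if count >= 2 else None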

  • One of the issues is that on Windows, the move command is broken, which is not the AI's fault. Renaming on a move operation is ridiculous, which is why Unix/Linux won't do that. This makes the issue a core, fundamental flaw in Windows.

    In both cases, the AI suffered from a guidance issue, and since you'd never run this in production without doing several test runs in a VM or container first, who really is to blame? Even if you did want to do this in production, you'd have the AI write the script…
  • "Reports of two recent incidents where robots wiped out vast numbers of humans after vibemanufacturing mistakes."

  • Contemporary AI is not so much artificial intelligence as artificial stupidity. This is one of many examples along the same or similar lines.
