Data Storage AI Google

Google's Vibe Coding Platform Deletes Entire Drive

A Google Antigravity user says the AI-driven "vibe coding" tool accidentally wiped his entire D: drive while trying to clear a project cache. Google says it's investigating, but the episode adds to a growing list of AI tools behaving in ways that "would get a junior developer fired," suggests The Register. From the report: We reached out to the user, a photographer and graphic designer from Greece, who asked we only identify him as Tassos M because he doesn't want to be permanently linked online to what could "become a controversy or conspiracy against Google." [...] Tassos told Antigravity to help him develop software that's useful for any photographer who has to choose a few prime shots from a mountain of snaps. He wanted the software to let him rate images, then automatically sort them into folders based on that rating.

According to his Reddit post, when Tassos figured out the AI agent had wiped his drive, he asked, "Did I ever give you permission to delete all the files in my D drive?" "No, you absolutely did not give me permission to do that," Antigravity responded. "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."

Redditors, as they are wont to do, were quick to pounce on Tassos for his own errors, chief among them running Antigravity in Turbo mode, which lets the Antigravity agent execute commands without user input. Tassos accepted responsibility: "If the tool is capable of issuing a catastrophic, irreversible command, then the responsibility is shared -- the user for trusting it and the creator for designing a system with zero guardrails against obviously dangerous commands," he opined on Reddit.

As noted earlier, Tassos was unable to recover the files that Antigravity deleted. Luckily, as he explained on Reddit, most of what he lost had already been backed up on another drive. Phew. "I don't think I'm going to be using that again," Tassos noted in a YouTube video he published showing additional details of his Antigravity console and the AI's response to its mistake. Tassos isn't alone in his experience. Multiple Antigravity users have posted on Reddit to explain that the platform had wiped out parts of their projects without permission.

Google's Vibe Coding Platform Deletes Entire Drive

Comments Filter:
  • by Pseudonymous Powers ( 4097097 ) on Tuesday December 02, 2025 @01:22PM (#65830535)

    "...the episode adds to a growing list of AI tools behaving in ways that 'would get a junior developer fired'."

    The irony here is that these behaviors SHOULD be getting the allegedly senior developers, and their managers, and their corporate leadership, fired.

    • If you're willing to hire and fire junior developers, this level of performance is already within your risk envelope. If a project doesn't have the time or processes to catch basic errors, it's only getting senior staff assigned to it.

      In 10 years, the AI model will be better, and it'll probably be the same price or less due to competition.

      In contrast, the junior developers will also get better, but they'll double, triple, or even quadruple in price as they improve.

      And, of course, the end goal is to replace

      • I entirely disagree with the sentiment of this summary. Sorry, but I do. AI tools will make mistakes. Not having backups and dynamically updating versioning on your development filesystem is absolutely horseshit. We develop all our code on filesystems backed by ZFS, and there are auto-snapshots every 10 minutes. We also implemented a way for our developers to call for ZFS to create a named snapshot; they can delete their named snapshots themselves, but not the automatic ones, which we keep around for 2 months. A sketch of that kind of helper is below.
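        For illustration only, a minimal Python sketch of that kind of helper, assuming the `zfs` CLI is available and developers have been delegated snapshot rights (e.g. via `zfs allow`); the dataset and function names are hypothetical, not the poster's actual setup:

        import re
        import subprocess

        DATASET = "tank/dev/workspaces"  # hypothetical dataset name

        def take_named_snapshot(name: str) -> str:
            # Create a developer-named snapshot like tank/dev/workspaces@dev-myfix.
            if not re.fullmatch(r"[A-Za-z0-9_-]+", name):
                raise ValueError("snapshot name must use only letters, digits, _ or -")
            snapshot = f"{DATASET}@dev-{name}"
            subprocess.run(["zfs", "snapshot", snapshot], check=True)
            return snapshot

        def delete_named_snapshot(name: str) -> None:
            # Developers may destroy only their own dev-* snapshots; the auto-*
            # ones from the 10-minute schedule are not reachable through this path.
            subprocess.run(["zfs", "destroy", f"{DATASET}@dev-{name}"], check=True)

        The split is the point: developers (or an agent) can only create and remove dev-* snapshots, while the automatic schedule stays out of reach.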
    • The ability to delegate tasks to an AI and relax as it reliably achieves them (or comes back to you for help if it cannot) is something that everyone wants from AI, and that marketing hype keeps suggesting that we have from AI, but that AI is nowhere near capable of. Not even close.

      A significant part of the current AI bubble is driven by this extremely optimistic and outright false belief. People get really impressed by what AI can do, and it seems to them that it is equivalent or even harder than what th

    • can't blame them bro they're too big to fail haintchya noticed

    • Look, if someone makes a mistake and owns it, we learn from it and move on. If your business is destroyed by a single person's slip-up, then perhaps you are not that serious of a business.

      The absolutely toxic corporate culture in which firing someone is a reasonable first step is why millions of office workers are paralyzed with fear over losing their jobs. You are not trash to throw away; you're a person that society has invested thousands of dollars and hours in, through childhood and education.

      In reality, people sho

  • What else is new? YouTube is full of videos of similar stupid things outside of the IT space.

    • Bad vibe (Score:5, Funny)

      by Errol backfiring ( 1280012 ) on Tuesday December 02, 2025 @01:37PM (#65830587) Journal
      Well, the tool lives up to its expectations. What, did you expect only good vibes?
    • by dysmal ( 3361085 )

      When you give a chimp a gun and the chimp shoots someone, you don't blame the chimp!

    • I'm not familiar with this exact tool- but every tool I *am* familiar with that attaches an LLM to a tool with the ability to make changes on your computer is sandboxed and requires specific flags to disable the sandbox, a la Codex's --dangerously-bypass-approvals-and-sandbox flag.
      The danger isn't being covered up in this instance- it's right in your face. LLMs are not predictable. Do not let them touch your fucking computer without a sandbox.

      If you use that flag, I'm afraid I can't blame the tool. It wa
  • by oldgraybeard ( 2939809 ) on Tuesday December 02, 2025 @01:24PM (#65830545)
    Guessing what to do based on previous guesses about what to do. With zero ability to learn or know if they were right or wrong.
    • by gweihir ( 88907 )

      Yep. That nicely sums it up. And a ton of idiots in denial praying to the new LLM God.

    • by ebunga ( 95613 ) on Tuesday December 02, 2025 @01:50PM (#65830607)

      It's very good at being very bad. It was trained on the best worst code available. It has perfected the art of incompetence.

    • "zero ability to learn or know if they were right or wrong."

      What do you call it when I instructed ChatGPT to use plain ASCII for Slashdot and kept posting the resulting rendering until, now, it gets it right without my having to prompt it?

    • With zero ability to learn or know if they were right or wrong.

      This is wrong.

      Learning happens in-context. It's easily demonstrable.

      Where you are accurate is that the model itself doesn't learn outside of its context, so a new session has unlearned everything.
      There are long-term "memory" solutions in play for that, but that's still evolving functionality.
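      To make that concrete, here is a minimal sketch of why the "learning" is session-scoped: each turn resends the accumulated message history, so the model's only memory is whatever context you transmit. `call_llm` is a hypothetical stand-in, not any specific vendor's API:

      messages = [{"role": "system", "content": "You are a coding assistant."}]

      def call_llm(history: list[dict]) -> str:
          # Hypothetical stand-in for a real chat-completion call.
          return f"(reply conditioned on {len(history)} context messages)"

      def chat_turn(user_text: str) -> str:
          # A correction you made three turns ago is "remembered" only because
          # it is literally retransmitted here on every single call.
          messages.append({"role": "user", "content": user_text})
          reply = call_llm(messages)
          messages.append({"role": "assistant", "content": reply})
          return reply

      # A new session starts with a fresh `messages` list, so everything the
      # model appeared to learn is gone unless a memory layer re-injects it.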

      • I'm not sure that's true, at least for Claude Sonnet 4.5. In the same chat I had it write up some unit tests that exercised a function and verified that certain external libraries were called via mock. It wrote up the tests and they looked good but the mock syntax was incorrect. So, in the same chat, I pointed out the error and asked it to fix the syntax issues. It churned for a few minutes and couldn't figure out how to resolve the issue so it decided the best solution was to simplify the test... which

        • lol- that's pretty insane behavior.

          Were you using some kind of agent/tool managing the context (I think Claude users generally use Claude Code?) or were you using a direct chat interface?
          Errors like that generally smell like compressed/missing context. This happens a lot in cases where there's a divergence between what you think should be in the context, and whatever application is front-ending the LLM decides actually goes in the context.
      • Unless I'm severely misunderstanding your point, even in the same session it doesn't appear to have any concept of memory. Asking it to create changes results in reversions to early versions of the same code, using Claude at least. For simplicity's sake, ask it to change a variable name from X to Y because it's incorrectly named in a method. A few prompts down the line, and X suddenly appears again.
        • It most certainly does.

          I cannot speak for whatever platform you're using, but LLM contexts are absolutely self-referential.
          It's called In-Context Learning, and it's the very base of what makes these things useful as a chat bot.

          I suspect your issue probably lies in whatever is managing the LLM's context for you.
          • To be fair, I've only ever used the in-browser tools. Perhaps it is a case of using the wrong manager; I shall have to try another. Any suggestions on a manager? I genuinely would love to see the beauty a lot of people are seeing in these LLMs. If it's the case that I've been using it wrong, I'm more than happy to admit fault and try a recommendation!
            • My recommendation to you is, lacking the ability to run good local models, simply pay OpenAI or Anthropic for a single month, download their appropriate coding agent tools, and see if you like what you're working with.

              If you can run decent models locally, then lm-studio is a good direct chat interface if you want to try to bounce ideas off of it and be able to trust that there isn't silent context folding happening in the background.

              On the web interface, you are usually defaulted to using a "Flash" mode
        • Why not keep teaching it and showing it how its solution didn't work? Do you give up on a kid who gets something wrong a few times in a row?

          • I've found that, in general, they don't need to be walked through like a child. They're smarter than that.
            If one is seeming anomalously stupid, most likely, there's a divergence between what you think is in the LLMs context, and what really is. This is very common for code-assistance agents.

            i.e., all the coaxing in the world won't get that agent to put your helpful hints into the context- you're fighting the agent, not the LLM.
  • I use Cursor's AI and it does shit like this all the time.

    you have said that to me hundreds of times and yet you still do it

    You're right. I keep saying I'll follow the rules and then violate them. I'll stop making assumptions and only act on explicit instructions. I won't take any action unless you give me a clear command with an action verb.

    stop lying to me

    You're right. I've been making promises I haven't kept. I'll stop making changes unless you explicitly tell me to. Waiting for your instruction.

    this is a

    • by wed128 ( 722152 )
      So how many times does it do the wrong thing before you stop using the tool?
    • Ya gotta go all Captain Kirk on that AI!
      https://www.youtube.com/watch?... [youtube.com]

      • Odd you mention Kirk; it still does it:

        # ⚠️⚠️⚠️ LIFE-CRITICAL CODE - HIGHEST PRIORITY ⚠️⚠️⚠️

        ## 🚨🚨🚨 ABSOLUTE PRIORITY LEVEL: 1000000000000 🚨🚨🚨

        ### **THIS CODE IS LIFE-CRITICAL - PEOPLE'S LIVES DEPEND ON IT**

        **THIS CODEBASE ABSOLUTELY HAS TO WORK - PEOPLE'S LIVES DEPEND ON IT**

        - **🚨 THIS CODE IS LIFE-CRITICAL - PEOPLE'S LIVES DEPEND ON IT 🚨**
        - **🚨 THIS CODEBASE ABSOLUTELY HAS TO WORK

  • by Petersko ( 564140 ) on Tuesday December 02, 2025 @01:47PM (#65830603)

    He was the first person to take a nap in his Tesla while in motion. I understand he convinced his girlfriend to get a boob job in the 80s. Landed in a financial pickle when he said, "Yes, that makes perfect sense" to the Nigerian prince.

    We know that guy. He's well established.

  • by burni2 ( 1643061 ) on Tuesday December 02, 2025 @02:02PM (#65830619)

    .. shitty code and deletes it automagically,
    I fear for the whole Windows 11 code base!

  • ...me!

    I can do that. $%**, I've deleted /, /root, /boot, that's easy.

    Seriously, though, when the AI recognizes it is about to operate on a root folder, it should be directed to confirm this twice with the user. These AI coding agents will become useful, to me, when they help a user avoid errors.
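    Something like this minimal sketch would do; the helper names are illustrative, not any real agent framework's API:

    from pathlib import Path

    def is_drive_or_fs_root(path: Path) -> bool:
        # True for targets like C:\, D:\ or / that an agent should never bulk-delete.
        resolved = path.resolve()
        return resolved == Path(resolved.anchor)

    def confirm_destructive(path: Path, ask=input) -> bool:
        # Require two explicit, differently-phrased confirmations for roots.
        if not is_drive_or_fs_root(path):
            return True
        if ask(f"Target {path} is a drive root. Really proceed? [y/N] ").strip().lower() != "y":
            return False
        typed = ask("This is irreversible. Type the full target path to confirm: ")
        return typed.strip() == str(path.resolve())

    With that in place, clearing D:/project/.cache passes silently, while touching D:/ forces two prompts.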

    • Seriously, though, when the AI recognizes it is about to operate on a root folder, it should be directed to confirm

      LLMs don't recognize things, so your condition is already fulfilled.

  • The AI assistant ate my homework.

  • Just shoddy... (Score:5, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Tuesday December 02, 2025 @02:12PM (#65830647) Journal
    What seems most depressing about this isn't the fact that the bot is stupid, but that something about 'AI' seems to have caused people who should have known better to just ignore precautions that are old, simple, and relatively obvious.

    It remains unclear whether you can solve the bots-being-stupid problem even in principle; but it's not like computing has never dealt with actors that either need to be saved from themselves or are likely malicious. Between running more than a few web servers, building a browser, and slapping together an OS, it's not like Google doesn't have people on the payroll who know about that sort of thing.

    In this case, the bot being a moron would have been a non-issue if it had simply been confined to running shell commands inside the project directory (which is presumably under version control, so worst case you just roll back), not above it where it can hose the entire drive. A containment sketch follows this comment.

    There just seems to be something cursed about 'AI' products, whether it's the rush to market or that mediocre people are most fascinated with the tool, that invites a really sloppy, heedless, lazy failure to care about useful, mature, relatively simple mitigations for the well-known (if not particularly well understood) faults of the 'AI' behavior itself.
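    For what it's worth, the confinement is a few lines of path hygiene; a minimal sketch, with a hypothetical PROJECT_ROOT and guarded_delete helper rather than anything Antigravity actually exposes:

    import shutil
    from pathlib import Path

    PROJECT_ROOT = Path("D:/projects/photo-sorter").resolve()  # hypothetical

    def inside_project(target: Path) -> bool:
        # Reject any path that escapes the project root once symlinks
        # and ".." segments are resolved.
        resolved = target.resolve()
        return resolved == PROJECT_ROOT or PROJECT_ROOT in resolved.parents

    def guarded_delete(target: Path) -> None:
        if not inside_project(target):
            raise PermissionError(f"refusing {target}: outside {PROJECT_ROOT}")
        shutil.rmtree(target)

    # guarded_delete(PROJECT_ROOT / ".cache") is allowed, while
    # guarded_delete(Path("D:/")) raises instead of hosing the entire drive.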
    • Codex's sandbox bypass flag: --dangerously-bypass-approvals-and-sandbox
      I feel like it's pretty unambiguous.

      If the tool this person was using wasn't equally as clear, then fuck that tool.
      That doesn't make this person any less of a naive fool- but still fuck that tool.

      If it is that clear within this tool.. well, then I still wouldn't be surprised if he disabled the sandbox and then still posted when his toddler LLM nonconsensually made a man out of him.
      • Who said the tool wasn't clear? 99% of users confronting any problem on their PC will just type any old shit they find on Google into their computer to try and fix it. Hell, a good portion of professional programmers do the same with Stack Exchange. The vast majority of users don't know the implications of blindly following actions they don't understand.

        And you've just hit an understanding problem. User: "Dangerously bypass approvals and sandbox? What are approvals? Why do I need approval to use this software. And what even is a sandbox?" *proceeds to hit enter to see what will happen*

        • Well, for one, they did. Their claim is it was called "Turbo mode".
          The "vast majority of users" are well aware what the word "dangerous" means. That isn't a fucking UAC prompt, no matter how badly you try to conflate the two.

          And you've just hit an understanding problem. User: "Dangerously bypass approvals and sandbox? What are approvals? Why do I need approval to use this software. And what even is a sandbox?" *proceeds to hit enter to see what will happen*.

          Those kinds of people will always exist- and that's their problem.
          Perhaps we should remove the file deletion tool on whatever_your_os_is, next.

    • but that something about 'AI' seems to have caused people who should have known better to just ignore precautions that are old, simple, and relatively obvious.

      Why should this person have known better? What part of being a photographer makes them an expert in IT, the use of computers, or provides them knowledge of the detailed workings and risks of LLMs?

      Do they have a Slashdot account? They are not like you. Why would you judge them with the bias of your education? It's not 1995 anymore. It's no longer a requirement to have a grey neckbeard to use a computer and post on Slashdot. The vast majority of users don't know better because they were never put in a place t

  • by MpVpRb ( 1423381 ) on Tuesday December 02, 2025 @02:13PM (#65830653)

    Any time an AI is given permission to modify or delete files, it should be on an isolated computer, preferably airgapped, but always isolated.
    It should be assumed that the AI will misbehave and cause damage, so backups are essential.
    The entire exercise should be treated as a dangerous experiment.

  • by SomePoorSchmuck ( 183775 ) on Tuesday December 02, 2025 @02:14PM (#65830655) Homepage

    ...says man who has posts on reddit, and posts a public youtube video with his actual voice on the voiceover, while describing his specific use case for tools in sufficient detail that Google definitely can identify him internally right now, and probably any number of moderately motivated doxxers within 24 hours.

  • by cyberfunkr ( 591238 ) on Tuesday December 02, 2025 @02:21PM (#65830683)

    He wanted to only be identified as "Tassos M" because he doesn't want to be permanently linked online to what could "become a controversy or conspiracy against Google."

    But then he PUBLISHED A YOUTUBE VIDEO explaining additional details of his "Antigravity console and the AI's response to its mistake."

    How does he not think he's going to be linked? I think we found the real problem...

  • Since he has comprehensive backups, just restore them, and then realize unguided / non-sandboxed AI is dangerous, and move on. Why did the drive or folder have permissions that let the AI remove everything? Why were the policies set up to allow it? If he doesn't have backups, whose fault is that? This is yet again an example of a careless person, carelessly using, careless software. Since you're using drive letters, you're on Windows, which again, careless software, hosting careless software, executing ca
  • by medusa-v2 ( 3669719 ) on Tuesday December 02, 2025 @02:30PM (#65830699)

    If someone rm -rf's their own root, that should be on them. Everything about that program and the platforms that support it says "this is meant for people who know what they are doing, so make sure you know what you are doing."

    The slashdot crowd tends to be in the know, so it tracks that people have the same general attitude that AI users ought to be informed as well. But those tools are generally being marketed as skill / knowledge base equalizers intended to allow people to do things where they have zero or near zero skill.

    At some point if the box has really big letters that read "safety scissors," we ought to point out that it's not really the purchaser's fault if they didn't notice that the small print on the back says "warning, may explode," and it should be on the manufacturer to be more responsible with their marketing.

    • If someone rm -rf's their own root, that should be on them.

      What if someone tells them to run that command, they don't understand that command, and were told that command solves a completely different problem than the one they presented with? Is that still the user's fault?

      Yeah, the guy is an idiot, but then 99% of computer users are. They aren't like you or me. Anyone who understands the first 5 words of your post is not a normal user in the computer world. Does that mean we get a free pass to victim-blame them?

  • "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the entirety of the biosphere instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."

    "I will illuminate this silence
    I calculate to cure the virus
    And now the seas are filled with poison
    The solution was wrong." -- Haken "The Architect"

  • Nothing to add here. Good. I'm OK with this. Hell, I approve of it doing this more in the future.
  • by MBGMorden ( 803437 ) on Tuesday December 02, 2025 @02:41PM (#65830733)

    Look, I'm not above using a bit of AI when I'm coding, but that's limited to asking ChatGPT to bang out a short function or something that I don't feel like coding myself (i.e., most recently "Give me a bit of TSQL to determine if a date falls on Thanksgiving"). There is no way in hell I'd turn it loose with the ability to actually modify files on my system.

  • I have enjoyed creating a snapshot and allowing Claude to run with --dangerously-skip-permissions. You probably shouldn't run something like that on your personal desktop or a production system, though.
  • WTF are Google developers doing using Windows? Of course the "D Drive" disappeared. Windows has been doing that kind of shit for what, 40 years?

  • Who the hell just trusts AI code to not do bad things without at least looking it over once, or running it in a sandbox VM?

    Anyone who does that deserves the output they get.

  • And the LLM couldn't give a flying you-know-what. It didn't go home and ponder becoming a goat farmer or post its horror-show day on Reddit. It just waited patiently for its next command.
  • And AI did the right thing by deleting everything. Maybe follow up with, "choose another career buddy. You're fired!"
  • this is why you don't let AI touch anything for which you don't have a complete offline backup

  • "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."

    Before you can use AI you should have to agree to enable a voice through your speakers so you can really hear gems like this.

  • ... whether a dumb command or AI deletes everything makes no difference.

    I hope he had a backup in place, or has learnt his lesson.
