OpenZFS Native Encryption Use Has New(ish) Data Corruption Bug (phoronix.com)

Some ZFS news from Phoronix this week. "At the end of last year OpenZFS 2.2.2 was released to fix a rare but nasty data corruption issue, but it turns out there are other data corruption bug(s) still lurking in the OpenZFS file-system codebase." A Phoronix reader wrote in today about an OpenZFS data corruption bug that appears when native encryption is combined with the send/recv support: using zfs send on an encrypted dataset can cause one or more snapshots to report errors. OpenZFS data corruption issues in this area have apparently been known for years.

Since May 2021 there has been an open issue around ZFS corruption related to snapshots on post-2.0 OpenZFS, and it remains open today. A new OpenZFS ticket has also been filed, proposing to add warnings against using ZFS native encryption together with the send/receive support in production environments.
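
For the curious, the reports center on workflows along these lines. This is a sketch with hypothetical pool and dataset names, not a guaranteed reproducer, since the corruption is intermittent:

    # Create a natively encrypted dataset (hypothetical pool "tank")
    zfs create -o encryption=on -o keyformat=passphrase tank/secure

    # Snapshot it and replicate with a raw send, which preserves encryption
    zfs snapshot tank/secure@backup1
    zfs send -w tank/secure@backup1 | zfs recv backup/secure

    # Later, take another snapshot and do an incremental raw send
    zfs snapshot tank/secure@backup2
    zfs send -w -i tank/secure@backup1 tank/secure@backup2 | zfs recv backup/secure

    # A subsequent scrub may flag one or more snapshots with permanent errors
    zpool scrub tank
    zpool status -v tank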

jd (Slashdot reader #1,658) spotted the news and adds a positive note: "Bugs, old and new, are being catalogued and addressed much more quickly now that core development is done under Linux, even though it is not mainlined in the kernel."

  • by HBI ( 10338492 )

    I read through the entire thread over there, and all anyone keeps talking about is filesystems that implement redundancy. It's like md doesn't exist. And md is tasty and delicious. I really do not get it.

    • For much of my life I've avoided md because I didn't know how to use it, and I was afraid that if something went wrong while removing a failed drive or rebuilding onto the new one, I could create a mess that loses data on all of my drives rather than just the failed one. After learning how simple md is in Linux, I was just about ready to start using it. However, at the same time I've been looking into Proxmox with Ceph, and I don't know if it makes sense to use Ceph and md together; it seems like...
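
      For what it's worth, the failed-drive procedure I was afraid of turns out to be only a few mdadm commands. A sketch, with illustrative device names:

        # See which member the kernel has kicked out of the array
        cat /proc/mdstat
        mdadm --detail /dev/md0

        # Mark the dying disk as failed (if it isn't already) and pull it
        mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

        # Add the replacement; the array rebuilds onto it automatically
        mdadm /dev/md0 --add /dev/sdf1

        # Watch the resync progress
        cat /proc/mdstat
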
      • by HBI ( 10338492 )

        Hah, I saw what you did there.

        My current media server volume was created in early 2013 and has migrated between Gentoo and Ubuntu, and from RAID5 to RAID6, with no issues. I've done the one-at-a-time increase in drive sizes twice so far. Reinstall Linux, mount the md device, really that simple. I recommend it heartily. I use it elsewhere, but that's my crown jewel. I'd used hardware RAID controllers for close to 20 years before that, and md in Linux is just so much better.
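
        The whole dance, roughly, with illustrative device names (and assuming ext4 on the array):

          # After a reinstall or distro hop: rediscover and mount the array
          mdadm --assemble --scan
          mount /dev/md0 /srv/media

          # One-at-a-time capacity bump: swap each member for a bigger disk
          # and let the rebuild finish before touching the next one
          mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
          mdadm /dev/md0 --add /dev/sdg1   # bigger disk
          cat /proc/mdstat                 # wait for the resync to complete

          # Once every member is bigger, grow the array, then the filesystem
          mdadm --grow /dev/md0 --size=max
          resize2fs /dev/md0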

    • by rwbaskette ( 9363 )

      md is great for sure, but it requires the drive firmware to accurately report errors. ZFS checksums data on reads and writes, letting you know whether your data was stored and read back correctly, and, with the right configuration, it can recover it.
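
      Roughly, with an illustrative pool name:

        # Every block carries a checksum (fletcher4 by default), verified on read
        zfs get checksum tank

        # A scrub walks the whole pool, verifying checksums and repairing
        # from redundancy (mirror/raidz) where it can
        zpool scrub tank

        # The CKSUM column counts blocks that failed verification
        zpool status -v tank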

      • by HBI ( 10338492 )

        I suppose the ability to overcome bad hardware is pretty impressive. Of course, trusting data to hardware that doesn't report errors correctly is another matter.

        • To your last point, I'd say no one is immune:

          https://www.tomshardware.com/n... [tomshardware.com]

          https://support.lenovo.com/us/... [lenovo.com]

        Of course, trusting data to hardware that doesn't report errors correctly is another matter.

          You do this all the time.

          ZFS was designed by engineers who knew _exactly_ what assurances the RAS features of Sun hardware provided, on every cache, every bus, every chip, and that was quite a lot more than what Intel servers had then, or in many ways have even today.

          You use ECC and RAID precisely because of hardware that doesn't report errors correctly, and even those don't cover all the gaps.

          • by HBI ( 10338492 )

            ECC maybe. I use RAID because single mass storage devices are unreliable, period, whether they report errors correctly or not. SSD or spinning rust.

      all anyone keeps talking about is filesystems that implement redundancy. It's like md doesn't exist.

      When you compress the very complex problem of data reliability into the single word "redundancy", I'm sure you would be confused as to why people don't just use md. It's like saying solving the wave equation is the same as adding 2+2, because both are just "math", so why would anyone bother learning math beyond grade 2.

      ZFS has an insanely rich feature set; "redundancy" is just one tiny part of it, and if that were all people cared about, they would just use md.

    • I used to use md for setting up software RAID 5, then decided to try OpenZFS with my new NAS. I definitely regret it... it's one of those cases of people reinventing the wheel, replacing it with something far more complicated and with features nobody really needs. It's working, but I'm really not confident in it. I should really reimage it before it gets too full...

      • by cen1 ( 2915315 )
        What exactly is complicated about ZFS? You create a pool and the datasets in a few commands and you're done with the basics. You could also go with TrueNAS or XigmaNAS, which abstract all the details behind a nice UI. Compared to something like LVM, which makes me want to hang myself whenever I encounter it, ZFS is easy.
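
        The basics really are just a few commands; device and pool names here are made up:

          # Mirrored pool across two disks
          zpool create tank mirror /dev/sdb /dev/sdc

          # Datasets are cheap: one per use, with per-dataset properties
          zfs create tank/media
          zfs set compression=lz4 tank/media

          zpool status tank
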
    • by caseih ( 160668 )

      I don't use ZFS for redundancy, although it does have that. I use it for its subvolume and snapshot capabilities, and also for its resistance to bitrot and silent file corruption. I have used Btrfs in the past, which has those same capabilities, but I never could get Btrfs tuned very well, and no one could tell me specifically what to look at. Performance was totally abysmal on spinning rust after a while, with very high load numbers from waiting on the disk all the time. I've never had that issue with ZFS on...
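
      For reference, the snapshot workflow looks about like this, with made-up dataset names:

        # Point-in-time snapshot, nearly free thanks to copy-on-write
        zfs snapshot tank/home@before-upgrade

        # Browse old file versions read-only via the hidden .zfs directory
        ls /tank/home/.zfs/snapshot/before-upgrade/

        # Roll the live dataset back (must be the most recent snapshot, or use -r)
        zfs rollback tank/home@before-upgrade

        zfs list -t snapshot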

  • Does anyone know if ZFS snapshots can use deduplication when the underlying disk is encrypted with LUKS? I would imagine that LUKS is a layer below the file system, so ZFS functionality should be unaware of its effects, but I'm not willing to bet my deduplicated data on it.
    • LUKS below the filesystem is transparent; whatever you use on top of it won't know the difference and will work as if the device were raw/unencrypted.
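
      A sketch of that layering, with made-up device names; ZFS only ever sees the decrypted mapper device, so dedup behaves exactly as it would on a raw disk:

        # Encrypt the block device, then open it as a mapper device
        cryptsetup luksFormat /dev/sdb
        cryptsetup open /dev/sdb cryptdisk

        # Build the pool on the plaintext side of the mapping
        zpool create tank /dev/mapper/cryptdisk
        zfs set dedup=on tank

        # Snapshots, dedup, etc. all operate above the encryption layer
        zfs snapshot tank@first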
