
Developer Shares A Recoverable Container Format That's File System Agnostic (github.com)

Long-time Slashdot reader MarcoPon writes: I created a thing: SeqBox. It's an archive/container format (and a corresponding suite of tools) with some interesting and unique features. Basically, an SBX file is composed of a series of sector-sized blocks, each with a small header carrying a recognizable signature, an integrity check, info about the file it belongs to, and a sequence number. The result of this encoding is the ability to recover an SBX container even if the file system is corrupted, completely lost, or just unknown, no matter how much the file is fragmented.
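
To make the layout concrete, here is a rough Python sketch of how one such block could be assembled (Python being the language the SeqBox tools are written in). The field widths and the 16-bit check below are illustrative guesses, not the official SBX layout; the readme has the actual spec.

    import struct, zlib

    BLOCK_SIZE = 512   # default SBX block size
    HEADER_SIZE = 16   # signature + version + check + file UID + sequence number

    def encode_block(uid: bytes, seq: int, data: bytes) -> bytes:
        """Wrap up to 496 bytes of file data into one self-identifying block."""
        # Short (last) blocks are padded; SeqBox uses 0x1A as the pad byte.
        payload = data.ljust(BLOCK_SIZE - HEADER_SIZE, b'\x1a')
        body = struct.pack('>6sI', uid, seq) + payload   # 6-byte file UID, 4-byte seq
        check = zlib.crc32(body) & 0xFFFF                # stand-in integrity check
        return struct.pack('>3sBH', b'SBx', 1, check) + body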
  • Thanks, looks interesting. I can see some applications in long-term storage... it's better to get some data back than to lose it all.

  • by Anonymous Coward

    That's an interesting property, but what's the use case?

    How often does your filesystem get corrupted and, instead of restoring from backups, you curse the fragmented tar file that can't be reassembled?

    How practical is it to keep files in an sbx container rather than extracting them? Can apps read files inside an sbx container?

    • Re:why? (Score:5, Interesting)

      by MarcoPon ( 689115 ) on Saturday April 29, 2017 @06:10PM (#54326577) Homepage

      That's an interesting property, but what's the use case?

      I can't say I know them all, or even the best/killer ones, but I listed some in the readme. Probably the most immediate/interesting application would be on a digital camera, for photos/video.

      Can apps read files inside an sbx container?

      Yes. The blocks are of a fixed size, so the format is seekable and reading from it is far simpler than, say, reading from a ZIP file.
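
      For example, pulling a byte range straight out of a container is just offset arithmetic. A minimal sketch, assuming the default 512-byte blocks, 16-byte headers, and one leading metadata block:

          BLOCK, HDR = 512, 16
          PAYLOAD = BLOCK - HDR  # usable file bytes per data block

          def read_at(sbx, offset: int, length: int) -> bytes:
              """Read `length` bytes of the original file starting at `offset`."""
              out = bytearray()
              while length > 0:
                  blk, pos = divmod(offset, PAYLOAD)
                  n = min(length, PAYLOAD - pos)
                  sbx.seek((blk + 1) * BLOCK + HDR + pos)  # +1 skips the metadata block
                  out += sbx.read(n)
                  offset += n
                  length -= n
              return bytes(out)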

      • Re:why? (Score:4, Informative)

        by KiloByte ( 825081 ) on Saturday April 29, 2017 @06:58PM (#54326721)

        So the only failure mode this protects from is corruption of metadata while every data block remains intact. On any sane filesystem, that sounds useless: the only cases where this might happen are on filesystems that can't handle unclean shutdown (FAT, ext2) or when the disk lies about barriers. And those cameras that still use FAT have software you can't update, so you can't install that SBX thingy -- and if you could, you'd be better off switching to a better filesystem.

        In its present state, I'd suggest you scrap the whole project, it's a waste of time.

        On the other hand, it would be an entirely different story if you added some form of erasure code that operates on amounts of data bigger than a single sector (most storage devices already have per-sector erasure codes).
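
        For the record, the simplest form of what I'm describing is a single XOR parity block per group of data blocks, which lets you rebuild any one lost block in the group; real tools use Reed-Solomon to survive multi-block losses. A toy Python sketch:

            def xor_parity(blocks: list) -> bytes:
                """One parity block per group of equal-sized data blocks."""
                parity = bytearray(len(blocks[0]))
                for b in blocks:
                    for i, byte in enumerate(b):
                        parity[i] ^= byte
                return bytes(parity)

            def recover_missing(survivors: list, parity: bytes) -> bytes:
                """Rebuild the single lost block by XOR-ing parity with the survivors."""
                missing = bytearray(parity)
                for b in survivors:
                    for i, byte in enumerate(b):
                        missing[i] ^= byte
                return bytes(missing)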

        • Indeed. Give it built-in redundancy so that the data could be recovered reliably after almost any not-completely-terminal disk failure, and *then* you'd have something I'd be extremely interested in. I can't tell you how much archived data I've lost over the years due to "bit rot".

          Yeah, I should have had it archived in three different locations, but who actually does that for personal data?

          • Yeah, I should have had it archived in three different locations, but who actually does that for personal data?

            From what I've seen, a typical intelligent person learns about the importance of backups after around 30 data loss events.

      • You should read through this and see if you can adopt any of the methods mentioned, to completely eliminate data loss in the container.

        https://www.usenix.org/legacy/... [usenix.org]
        • I'm probably familiar with most of those concepts, but it's nice to see them all in one well-presented document, which is also a sort of nice historical artifact. I think I stumbled upon it a long time ago and then lost track of it. Thanks.
  • ...but this is better than a backup, how, exactly?

    • It's a bit of a different thing. Think about a digital camera that could save on an SD card both a plain JPEG and the same JPEG in an SBX container. If the SD card's file system gets corrupted (maybe the batteries gave up just when writing), your chances of getting back the JPEGs are so-so (depending on how much/if they are fragmented), but you could surely recover the SBX files.
      • What will be written first? If it's the SBX, then why wouldn't the battery give up while writing the SBX file? Your picture will be lost.
        • by MarcoPon ( 689115 ) on Saturday April 29, 2017 @07:39PM (#54326877) Homepage
          That one, yes, but not the others already saved. A fragmented JPEG, instead, is pretty difficult to recover if the file system is inconsistent. The usual recovery tools would easily find the first fragment, and then proceed from there collecting sectors in sequence, which may or may not contain the right data.
          • Wow, man, they're all over you with poor assumptions. Sometimes it's hard for people to accept that someone they don't know can do brilliant things. Good luck, and keep programming!
      • How are you going to recover an SBX file containing a JPG file if the batteries give up when writing?
        If you're saying the batteries give up after writing the data but before updating the filesystem's metadata, then any recovery program that supports that filesystem will be able to recover the data. And really, this is a problem with older shit like FAT and NTFS and ext2 (for which there are plenty of tools available).

        • by MrL0G1C ( 867445 )

          "How are you going to recover an SBX file containing a JPG file if the batteries give up when writing?"

          That's a silly strawman, it really is. I don't hear anyone suggesting you can magically recover a file that hasn't even been written yet.

  • Basically an SBX file is composed of a series of sector-sized blocks.

    What if your file system and/or hardware uses a different sector size? Didn't those change size over the last decades?

    • Re:Fail? (Score:5, Informative)

      by MarcoPon ( 689115 ) on Saturday April 29, 2017 @06:17PM (#54326593) Homepage
      The default block size used is 512 bytes, which is a suitable sub-multiple of every sector size used by most systems after the CP/M days. One example of a system it doesn't play well with is the Amiga Old File System (which uses 488 bytes per block, IIRC). It's actually the only FS/platform I found that doesn't work, among the ones I managed to test (a bit over 20; I listed them in the readme, just above the tech spec).
      • AS400 has 520 byte sectors, 512 for user data and 8 bytes for system data.
        • Interesting, didn't know that. Of course it's possible to create a new block version with a suitable block size (the 3 current ones differ just in the block size: 512, 128 or 4K bytes - mostly for experimenting and verifying that the tools work correctly with different versions), but that would be a bit like cheating. Will add the AS400 to the not-working list, thanks.
          • The other challenge is that the AS400 always distributes the sectors of every object across all disks in the system (for performance reasons: parallel IO).

            So recovery would require reading all disks to reconstruct one object.
            • I see. It surely is a corner case, but an interesting one. My only experience with AS400 was seeing some at customer premises, and occasionally having to transfer some files from/to them, but nothing more. I remember an external IBM 5 1/4" floppy drive that was the length of an arm, and of similar cost! :) Oh, and BTW, SBXScan can scan and collect block positions from multiple images, even if it would be of no help in this situation. The idea was that if one keeps 2 or more copies of an SBX file on different media, the surviving blocks from all of them can be combined.
              • Ya, probably not your target market for multiple reasons, but interesting to know about differences between systems.
                • I was searching for what to use as the file system name for the AS400 one, to add a note to the readme, but then I read again what you wrote in your first reply (sorry, it was around 04:00am here): "520 bytes sectors, 512 for user data". If that's the case, then SeqBox should work just fine. The essential thing is that an SBX block, of 512 bytes, remains whole/integral, and that seems to be the case. The only thing to keep in mind would be to use an adequate step with SBXScan when searching for the blocks (520 or 512, depending on how the image was taken).
                  • It probably depends on what exactly is in the 8 bytes. I know there is a sector index (among other things), but not sure if that is necessary for reconstruction (sequencing) or not.
        • If compression is enabled then it's 522 bytes (a 2-byte trailer).
      • Amiga OFS has 488 _user_ bytes per block. The rest is the block header, which can be used to, I don't know, recover blocks even when part of the disk is lost, for example. The actual block size was still 512 bytes, like everybody else's, because that's something the hardware generally supports.

        https://en.wikipedia.org/wiki/... [wikipedia.org]

  • I did a quick read of the code and see that it relies on a magic cookie in the first four bytes of every physical sector to identify a block. This may not work for files small enough to fit entirely within the MFT on NTFS since that data isn't guaranteed to be aligned on a physical sector. There are other filesystems that store small file segments in the metadata structures as well.
    • I think the limit for NTFS is something like 640 bytes. A 1 byte file encoded in SeqBox format would occupy at least 1 block for the data, plus 1 for the metadata (with attributes like file name, date, size, etc.), so 1024 bytes minimum.
      • The MFT limit is closer to 1K. But if your minimum size is 1K that should be fine.

        Next question - for the encoding of the file, you're putting a 16-byte header in front of every blocksize piece of data, correct? If that's the case, and if you're storing the entire block of original data after that prepended header, then how are you assuring that the spill-over piece of data will be in a contiguous block on the disk? For example, say you're encoding a single 4096-byte file using a 4K blocksize. The SBX equivalent will be 4096 + 16 bytes, spilling over into a second block.
        • Assuming a 4K block size, a 4096-byte file would end up occupying 12288 bytes: 4096 for the metadata block, then 2 data blocks of 4096 each. The last one would contain just 16 useful bytes, with the rest being padding (0x1A bytes). Of course with very small files this isn't efficient, but that's not usually a problem. Overhead is just a bit over 3% with the default 512-byte block, and less than 1% if 4KB blocks are forced.
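
          The arithmetic, as a quick sketch (hypothetical helper, assuming one metadata block plus a 16-byte header per data block):

              import math

              def sbx_size(file_size: int, block: int = 512, header: int = 16) -> int:
                  # 1 metadata block + ceil(file_size / usable-bytes-per-block) data blocks
                  return block * (1 + math.ceil(file_size / (block - header)))

              assert sbx_size(4096, block=4096) == 12288    # the 4K example above
              print(sbx_size(1_000_000) / 1_000_000)        # ~1.033: just over 3% overhead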
          • So if I'm understanding you correctly, the first 4K block is metadata-only (no user data), then each additional block can contain up to 4096 - 16 bytes of user data (the first 16 bytes of each block being the header).
            • Yes. Just keep in mind that the default blocksize is 512 bytes, as a reasonable compromise between overhead and compatibility with most file systems / platforms, older ones included. You can check the readme.md for the complete file specs, near the end.
              • Ok, I missed the data layout details at the end of the article. Rather than creating a super-set SBX file that holds all the original user data + 16-byte headers, did you consider just storing the SHA-256 of every user block of data in the SBX file? You could reconstruct the file during your scan by SHA-256-hashing every block read and matching it against the list of hashes in the SBX file.
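
                Something like this sketch (hypothetical names; note that identical blocks hash identically, so duplicates would need extra handling):

                    import hashlib

                    BLOCK = 512

                    def hash_manifest(path: str) -> list:
                        """One SHA-256 (32 bytes) per 512-byte block: ~6% overhead."""
                        with open(path, 'rb') as f:
                            return [hashlib.sha256(chunk).digest()
                                    for chunk in iter(lambda: f.read(BLOCK), b'')]

                    def scan_and_match(image: str, manifest: list) -> dict:
                        """Map block index in the original file -> offset in a raw image."""
                        want = {h: i for i, h in enumerate(manifest)}
                        found, offset = {}, 0
                        with open(image, 'rb') as f:
                            for sector in iter(lambda: f.read(BLOCK), b''):
                                h = hashlib.sha256(sector).digest()
                                if h in want:
                                    found[want[h]] = offset
                                offset += BLOCK
                        return found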
                • • I thought about something along those lines (I was considering blake2b/s as a faster hash), but I chose to do it this way considering that the SBX file could, at least in some cases like the digital camera one, be the only copy, and it's easily decoded on the fly and seekable. But keeping just a list of hashes and some metadata is surely interesting too.
                  • I think in practical applications users would prefer just storing the hash. It's just over 6% storage overhead vs 103%.
                    • It's certainly possible in some scenarios. In others it could be more practical to create an SBX file and then keep just that one, instead of having the original file plus another file to keep track of, for a larger total overhead.
                    • For archival, perhaps. But then they're denied access to the original file unless they unpack it first.
                    • I'd say for archival or read-only use. It's trivial to read from the file directly, and for some applications a simple plug-in may do the job (like the audio players that can read audio files inside ZIP archives, for example).
                      About the separate file with hashes, instead, the main issue would be that if the file system is in an inconsistent/damaged state, that file too would be inaccessible. So it would need to be kept somewhere else, and that would complicate things a lot.
                    • Realistically speaking, applications aren't going to support the notion of a metaformat like this. If there's demand for this type of redundancy, it would be better achieved by implementing it in the filesystem. And most modern file systems log their metadata updates anyway, so the likelihood of not being able to reconstruct the file extents is rather low vs the probability of media-level corruption affecting the user data itself. As for the SBX of hashes not being locatable due to metadata corruption, you can avoid that by applying a header to the SBX blocks themselves.
                    • As for the SBX of hashes not being locatable due to metadata corruption, you can avoid that by applying a header to the SBX blocks themselves.

                      OK, I see what you mean.

                    • I'll definitely experiment with creating a file with just the hashes + metadata, to then be encoded in an SBX, so that it will be possible to do both things (a standalone SBX file, or the normal file + just the hashes in an SBX). Thanks for the nice discussion (and sorry for the delay, it was about 05:00am here :) ).
  • I mean, the chances of the filesystem being corrupted without the file itself also being corrupted seem slim to none to me.

  • by Gravis Zero ( 934156 ) on Saturday April 29, 2017 @07:27PM (#54326833)

    Unlike HDD controllers, SSD controllers do wear-leveling, so there is no guarantee that your data will be written as a contiguous block of memory (regardless of what the filesystem says), only that it will be in 4096-byte blocks. Recovering deleted data from an SSD is no simple task, because it means you need to know or guess the controller's wear-leveling behavior in order to go back and find the order of previously written data. With this, you would be able to just read the raw memory even after the controller has been reset and still be able to recover the data. I think it would be a nice option to have a filesystem be able to encode user files in something like this highly recoverable format. The only real problem is that the file has to be completely rewritten even if you only modify part of it, in order to differentiate the new version from the old version.

    • Note that the 4KB block has just come up in some examples; the default blocksize is 512, and I think it's reasonable to assume that a block of that size will not be broken down into smaller parts.
      • The memory used in SSDs is all 4096-byte blocks of NAND flash memory. 512 bytes is the sector size for HDDs... though they may have changed that in recent years.

        • Sure. In general, a 512-byte sector size is considered legacy nowadays. Even then, I don't see much chance of a 512-byte block being broken down further.
        • by tlhIngan ( 30335 )

          The memory used in SSDs is all 4096-byte blocks of NAND flash memory. 512 bytes is the sector size for HDDs... though they may have changed that in recent years.

          Incorrect. Modern SSDs use large page NAND which has anywhere from 128kiB to 1MiB block sizes.

          In NAND, you can only erase on block boundaries. However, when you write, you can write on page boundaries, of which there can be anywhere from 16 to 128 pages per block. Small page NAND (old NAND) had 512-byte pages (and typically 32 pages per block, giving 16kiB blocks).

          • I only meant that "sectors" were 4096 bytes but thanks for the additional info. I suppose it only makes sense that they also use larger blocks in order to achieve higher read/write throughput rates.

    • by Dwedit ( 232252 )

      Usually when an SSD fails, you get some stupid small device. Intel SSDs give you an 8MB hard drive named "BAD CONTEXT" which can't be read from or written to, and JMicron drives give you a 4GB drive named "JM-Loader 001".
      When this happens, you don't get to see your actual disk sectors at all. Without access to the actual contents of your drive, having a container format that lets you recover data won't help.

      • I've not had that experience; however, it seems like a good reason to have open source SSD controller firmware, so that you can force it to let you access the data.

  • It seems to me this would be a lot more useful if it directly incorporated forward error correction.

    • I thought about it, but at least initially I chose to keep it simple. One can always process the file in some way before creating the SBX, for example creating a RAR archive with recovery records, and then encoding that.
      • I thought about it, but at least initially I chose to keep it simple. One can always process the file in some way before creating the SBX

        Since loss and recovery take place at the block level, it's best if you arrange for error recovery (and compression) to take block boundaries into account.

  • There is some confusion as to what this is actually doing.
    Most filesystems use special structures to store the names and locations of your files on the drive: directories, cluster bitmaps, etc. The reason why it's difficult at best to recover files from a hard drive when parts of the filesystem have been damaged is that it's difficult to identify where on the drive the files are. Outside those special filesystem structures, nothing else records what is stored where. If you lose the directory, it's hard to tell one file's data from another's on your hard drive.

    That is where SBX comes in. What it does is make sure that every physical sector that stores data for a particular file is labelled with a number identifying that file, and a sequence number, so you can reconstruct where each piece sits in the original file. Really, for the amount of overhead, something like that should be embedded into every filesystem. Basically, it's a distributed backup of all the filesystem metadata.

    Some people are criticizing this as solving non-problems. I disagree. While it isn't the solution to global warming, it is both simple and clever (and will thus suffer from a lot of people who will disparage it out of a "well, anyone could have thought of that" attitude). It won't save you from a full hardware crash. It won't save you from physically bad sectors in that file. What it will save you from is accidental deletion and loss of the filesystem's metadata structures. How often does this happen? Twice to me, from failures of a whole-disk-encryption system driver.

    I wouldn't use this for every file, but for critical ones, sure, why not. The problem is that where it would be most useful, for very volatile files that change a lot between backups (databases, etc.), is exactly where it can't really be used until/unless different applications start supporting it. So it unfortunately has limited use in the places where it would really help the most. Like I said above, this sort of thing really needs to get rolled into a filesystem. The overhead it costs is meaningless in today's storage environment.
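
    To illustrate the recovery side, here's a toy scanner in the spirit of SBXScan (the field offsets follow the illustrative layout sketched earlier on this page, not necessarily the real spec, and a real tool would also verify each block's checksum):

        import struct

        BLOCK = 512

        def scan_raw_image(path: str) -> dict:
            """Group every recognizable SBX block by file UID, ordered by sequence."""
            files = {}
            with open(path, 'rb') as f:
                while block := f.read(BLOCK):
                    if len(block) < BLOCK or block[:3] != b'SBx':
                        continue
                    uid = block[6:12]                             # illustrative offsets
                    seq = struct.unpack('>I', block[12:16])[0]
                    files.setdefault(uid, {})[seq] = block[16:]   # keep the payload
            return files

        def reassemble(blocks: dict) -> bytes:
            """Concatenate payloads in sequence order, skipping the metadata block.
            (Final padding is left in place; a real tool would trim to the file
            size stored in the metadata block.)"""
            return b''.join(blocks[i] for i in sorted(blocks) if i > 0)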

    • by Anonymous Coward

      ReiserFS already did this. You could break the filesystem and it could (in most cases) reconstruct the whole thing by scanning the disk for data.

      An amusing bug: if you ran the filesystem reconstruction on a filesystem that contained other ReiserFS filesystem images, it would turn into a mess. They fixed this by adding a mechanism to escape data that looked like the magic metadata identifiers.

      Deleted data was also resurrected. Sometimes this was useful, sometimes not so much.

      • It's somewhat similar, but not really the same thing. From what I gathered (but I'm surely not an expert in ReiserFS), ReiserFS can scan the disk to locate and recover its file structures/indexes, which in turn make it possible to find the files again. SeqBox enables one to recover an SBX container without even considering any FS structure, instead just scanning the raw bytes for the file itself, because each of its blocks is made recognizable. You can zero out all the file system info, partition table, etc.
    • Don't ZFS, ReiserFS and Btrfs all already have something similar inherent in their file systems?
      • It would seem like it, but: a) this doesn't need to be applied at the filesystem level, and b) it isn't encumbered by licensing issues, a dead project, or an experimental filesystem, in respective order.

        Okay, so it is actually experimental, but by not being filesystem-wide it is also much simpler and better able to contain failures.

        • by Anonymous Coward

          it isn't encumbered by licensing issues, a dead project, or an experimental filesystem, in respective order.

          The licensing issue can be disputed. Licensing issues are always a matter of use cases. In this case the license specifically says that it is provided without liability or warranty. While the author might be willing to provide those under a different licensing deal, the same can be said for any other filesystem.

          Primarily, I would dispute your claim that this isn't an experimental filesystem. It claims to not be a filesystem, but it ticks every checkbox for one, except perhaps block allocation, which it has to delegate to the underlying filesystem.

          • An implementation that is so thoroughly understandable from such a short description on Slashdot might be experimental, but not in the way BTRFS RAID is. This software might have bugs, but it sounds like most competent systems programmers could debug it in a few hours. People are more comfortable with implementations of simple, neat ideas like this than with millions of lines of filesystem code.
    • I can understand that a finer-grained metadata model can aid recovery, but isn't corruption of an SBX container more likely than corruption of the filesystem's metadata, which has its own backups?
  • Could this also be used when the file contents are deliberately separated? E.g., distribute the file pieces (sectors?) to different audiences/storage locations, such that one has to get cooperation from all piece-holders to retrieve the net result? E.g.: nuclear launch codes, and other less dramatic scenarios.

    • Well, it's probably not the first use case I would think of, but it will surely work, and I think I mentioned splitting in the readme. Just as SBXScan can locate and collect all the good SBX blocks from different images/devices (imagine having a copy of the same SBX file on different media, to add physical redundancy), it can surely reassemble a container from different pieces. The only restriction is that the splitting needs to happen on block boundaries (512 bytes by default), which should not really be a problem.
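
      For example (a hypothetical splitter; the only rule is to cut on multiples of the block size, so that every piece remains independently scannable):

          BLOCK = 512

          def split_sbx(path: str, parts: int) -> None:
              """Split an SBX container into `parts` pieces on block boundaries."""
              with open(path, 'rb') as f:
                  data = f.read()
              nblocks = len(data) // BLOCK
              per = -(-nblocks // parts)  # ceil division: blocks per piece
              for i in range(parts):
                  with open(f'{path}.part{i}', 'wb') as out:
                      out.write(data[i * per * BLOCK:(i + 1) * per * BLOCK])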
  • The Lisa and early Macintosh drives supported 532-byte sectors. The extra bytes were used for "tags" - basically a less sophisticated version of this scheme, and without the "block 0."

    For details on why "tags" were eliminated, see Macintosh Technote #94, "Tags," by Bryan Stearns, November 15, 1986.

    • Yes, it was a bit of a common thing on older systems, from the times when mass storage hardware was far from precise and reliable, to do things like check that a drive seek really landed the head on the requested track. At least the Mac implementation still kept the usual 512 bytes of useful data per sector (at the price of less common hardware), while, for example, Amiga OFS ended up with an odd 488 usable bytes per sector (but with common hardware).
      • by davidwr ( 791652 )

        Yes, it was a bit of a common thing on older systems, from the times when mass storage hardware was far from precise and reliable, to do things like check that a drive seek really landed the head on the requested track.

        "Modern" (probably mid-1980s and newer) hardware had firmware that would do that for you, and for hard disks at least, the firmware started keeping its own meta-data of a sort so that as far as the computer was concerned, the error rate was acceptably low unless there was an actual bad spot on the drive or some other "hard" failure.

  • I want storage that is File System Atheist!

    (And would that be like Write Only Memory?)
