Data Storage

Israeli Startup Claims SSD Breakthrough

Lucas123 writes "Anobit Technologies announced it has come to market with its first solid state drive using a proprietary processor intended to boost reliability in a big way. In addition to the usual hardware-based ECC already present on most non-volatile memory products, the new drive's processor will add an additional layer of error correction, boosting the reliability of consumer-class (multi-level cell) NAND to that of expensive, data center-class (single-level cell) NAND. 'Anobit is the first company to commercialize its signal-processing technology, which uses software in the controller to increase the signal-to-noise ratio, making it possible to continue reading data even as electrical interference increases.' The company claims its processor, which is already being used by other SSD manufacturers, can sustain up to 4TB worth of writes per day for five years, or more than 50,000 program/erase cycles — as contrasted with the 3,000 cycles typically achieved by MLC drives. The company is not revealing pricing yet."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Shikaku ( 1129753 ) on Wednesday June 16, 2010 @12:19AM (#32587190)

    Every other month there is fresh firmware to fix one problem or another, and updating it is manual labor with a boot CD. It's not something you can simply schedule at night or do while the system is online, so these drives are what I would call beta quality.

    Why can't firmware on SSD drives be upgraded like this:

    Reserve a few MB of blocks that are permanently reported as bad. The firmware updater writes the new image into those blocks (which can be scripted, since writing to them is just a dd to a fixed offset), then the controller checks a signature and, if it passes, halts all reads and writes while it upgrades the firmware.

    When it completes, all reads and writes resume. ;) Yes, I know that could be disastrous, but it seems like a good way to live-update.

  • Re:Cost? (Score:5, Interesting)

    by Anonymous Coward on Wednesday June 16, 2010 @12:19AM (#32587194)

    You have an interesting point there.

    Several years ago, maybe back in 2005, Anobit visited us and showed off what they were working on. They were little guys in the flash/solid-state business and had come up with this nifty algorithm that would let flash with really low read/write endurance perform like today's SSDs.

    They were the first (that I know of) to come up with a way to spread writes across unused portions of memory so that, on average, every bit of memory would see the same amount of wear. It wasn't until several years later that I saw on Slashdot that Intel had come up with this "new" idea in their SSDs.

    Back at the time, the Anobit technology was really cool. But unfortunately, they were prohibitively expensive and we could not use them in our rugged systems.

    Seems that they have still been hard at work over there. Very cool. They deserve the success.
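    The wear-leveling idea described in this comment can be sketched in a few lines. This is a toy model (not Anobit's or Intel's actual algorithm): the controller simply redirects each write to the least-worn free physical block, so erase wear evens out even when one logical block is written constantly.

```python
# Toy wear-leveling controller (illustrative only): every logical write
# is redirected to the physical block with the lowest erase count.

class WearLevelingController:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # per-physical-block wear
        self.mapping = {}                      # logical block -> physical block

    def write(self, logical_block, data):
        # Pick the least-worn physical block not currently holding live data.
        in_use = set(self.mapping.values())
        candidates = [p for p in range(len(self.erase_counts)) if p not in in_use]
        target = min(candidates, key=lambda p: self.erase_counts[p])
        self.erase_counts[target] += 1         # erase-before-write costs one cycle
        self.mapping[logical_block] = target

ctl = WearLevelingController(num_blocks=8)
for i in range(100):
    ctl.write(0, b"hot data")                  # hammer a single logical block
print(max(ctl.erase_counts) - min(ctl.erase_counts))  # wear stays nearly even
```

    Without the remapping, one physical block would absorb all 100 erases; with it, no block is more than one erase ahead of any other.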

  • If anything (Score:5, Interesting)

    by Gordo_1 ( 256312 ) on Wednesday June 16, 2010 @12:27AM (#32587244)

    I suspect this will eventually bring down the manufacturing costs of enterprise-class drives rather than making consumer drives "more reliable". I think reliability concerns with current consumer-oriented MLC designs are overstated.

    Anecdotally, my Intel 160GB G2 drive is going on 7 months of use as the primary drive in a daily-used Win7-64 box, and has averaged about 6GB of writes per day over that period (according to Intel's SSD Toolbox utility). At that rate it could theoretically last decades, assuming some as-yet-undiscovered manufacturing defect doesn't cut it short. Combine that with the fact that even when SSDs fail, they do so gracefully on the next write operation, and I just don't see the need for consumer-oriented drives to sport such fancy reliability tricks.
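    The back-of-the-envelope math behind "could last decades", using the figures in this comment (160GB capacity, the ~3,000 MLC program/erase cycles cited in the summary, 6GB/day of writes) and ignoring write amplification, which would shorten the real figure:

```python
# Rough endurance estimate from the numbers in the comment above.
capacity_gb = 160          # Intel G2 drive
pe_cycles = 3_000          # typical MLC program/erase endurance
writes_per_day_gb = 6      # observed average from the SSD Toolbox

total_write_budget_gb = capacity_gb * pe_cycles      # 480,000 GB of lifetime writes
lifetime_days = total_write_budget_gb / writes_per_day_gb
print(lifetime_days / 365)  # roughly two centuries, i.e. "decades" with huge margin
```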

  • by Anonymous Coward on Wednesday June 16, 2010 @12:39AM (#32587318)

    How can a solid state drive have a "signal to noise ratio"?

    It's all digital. Either the voltages are within their valid thresholds or they are not.

    Wouldn't you need the world's fastest DSP to "clean up" noisy digital signals and still maintain the type of transfer rates they claim?

    There is nothing about this breakthrough that makes any sense. Snake oil?
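    For what it's worth, flash really is analog underneath: each cell stores a charge level, and noise and drift blur adjacent levels together. A toy model (illustrative only, not Anobit's actual signal processing) of how repeated reads can raise the effective signal-to-noise ratio:

```python
# Toy model of "SNR" in flash: a cell stores one of four analog charge
# levels (2 bits per MLC cell); noise blurs a single read, but averaging
# several reads lets the controller recover the intended level.
import random

random.seed(1)
LEVELS = [0.0, 1.0, 2.0, 3.0]   # four charge levels = 2 bits per cell
stored = 2.0                    # the level actually programmed
NOISE = 0.6                     # heavy noise: single reads often misclassify

def read_once():
    return stored + random.gauss(0, NOISE)

def classify(v):
    return min(LEVELS, key=lambda lvl: abs(v - lvl))

single = sum(classify(read_once()) == stored for _ in range(1000))
averaged = sum(
    classify(sum(read_once() for _ in range(9)) / 9) == stored
    for _ in range(1000)
)
print(single, averaged)  # averaging many reads recovers far more cells
```

    Averaging nine reads cuts the noise standard deviation by a factor of three, which is exactly a signal-to-noise improvement; real controllers use finer sense thresholds and soft-decision decoding rather than literal re-reads, but the principle is the same.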

  • by Anonymous Coward on Wednesday June 16, 2010 @01:10AM (#32587482)


    Reserve a few MB of blocks that are permanently reported as bad. The firmware updater writes the new image into those blocks (which can be scripted, since writing to them is just a dd to a fixed offset), then the controller checks a signature and, if it passes, halts all reads and writes while it upgrades the firmware.

    When it completes, all reads and writes resume. ;) Yes, I know that could be disastrous, but it seems like a good way to live-update.

    Several years ago, I wrote an ATA drive firmware flash driver and utility, to allow my company's customers to upgrade firmware in the field. Let me explain how drive firmware flash works.

    Most/all modern drives (or at least Enterprise versions) support the ATA DOWNLOAD_MICROCODE command. The flash chips on the electronics board (or reserved sectors on the platters, depending on the implementation) have sufficient capacity to hold the running firmware, and to hold the new version. The new version is buffered in the drive, validated, then written to the chips/spindle, validated again, then activated and the drive reset.

    Modulo some minor drive-specific quirks, the DOWNLOAD_MICROCODE command works as specified. Other than adding model strings to the utility's whitelist, the Intel X25-Es worked without issue. While we've always recommended performing the flash from single-user mode and immediately rebooting, I've done it during normal operations plenty of times. The main things are to remember to quiesce the channel before doing the flash and to properly reinitialize it afterwards.
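    The buffer-validate-activate sequence described above can be sketched as a simulation. This is not real ATA DOWNLOAD_MICROCODE handling (the drive does all of this internally), and the SHA-256 checksum scheme here is a hypothetical stand-in for whatever validation the firmware actually performs:

```python
# Simulation of the staged firmware flash sequence: buffer, validate,
# commit to spare storage, re-validate, then activate. A failed check at
# any stage leaves the old firmware untouched.
import hashlib

def flash_firmware(drive, image: bytes, expected_sha256: str) -> bool:
    # 1. Buffer the new image and validate the download.
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        return False                       # corrupt download: abort, old fw intact
    drive["staging"] = image               # 2. Write to spare flash / reserved sectors.
    # 3. Validate the staged copy before switching over.
    if hashlib.sha256(drive["staging"]).hexdigest() != expected_sha256:
        return False
    drive["active"] = drive.pop("staging") # 4. Activate; the drive then resets.
    return True

drive = {"active": b"fw-1.0"}
new_image = b"fw-1.1"
ok = flash_firmware(drive, new_image, hashlib.sha256(new_image).hexdigest())
print(ok, drive["active"])
```

    The key property is that the running firmware is never overwritten in place, so a failed validation (or a pulled cable mid-transfer) leaves the drive bootable.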

    Posting anonymously because I'm revealing details about my job.

  • Re:If anything (Score:3, Interesting)

    by afidel ( 530433 ) on Wednesday June 16, 2010 @01:19AM (#32587506)
    It totally depends on the use case. Some of my larger SAN volumes show 2TB/day of writes, which means, according to Intel's X25-E datasheet, a 64GB drive would last ~1,000 days, or under 3 years.
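    The arithmetic behind the ~1,000-day figure, assuming a roughly 2PB lifetime-write rating for the 64GB X25-E (an assumption here; check the actual datasheet):

```python
# Lifetime of one drive absorbing the full SAN write load.
lifetime_writes_tb = 2_000   # assumed ~2 PB endurance for the 64 GB part
writes_per_day_tb = 2        # observed SAN write load
days = lifetime_writes_tb / writes_per_day_tb
print(days, days / 365)      # 1,000 days, a bit under 3 years
```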
  • Re:Big Deal. (Score:2, Interesting)

    by mlts ( 1038732 ) * on Wednesday June 16, 2010 @02:05AM (#32587732)

    Actually, I'd love something with any of the following:

    1: Noticeably better price, but without sacrificing reliability. An average enterprise HDD has a 1-million-hour MTBF under constant reads/writes. An SSD should be similar, or perhaps much better, since there are no moving parts.

    2: An archival-grade SSD that can hold data for hundreds, if not thousands, of years before so many electrons escape the cells that a 1 and a 0 become impossible to tell apart. I don't know of any media that can last more than 10 years reliably. Yes, maybe a CD-R or two may last that long, but that is more a matter of luck than anything else.

    3: SSDs using a different port than SATA. Perhaps have it interface as a direct PCI-E device with a custom bus to add more SSD capacity in a similar form factor to RAM DIMMs.

    4: An SSD built onto the motherboard. That way a laptop can be a bit thinner, since it doesn't need room for a 2.5" drive.

    5: Combine #1 and #2 and make a device like a tape library that can take SSDs in an optimized form factor and switch them in and out. That way, backups can be copied to an SSD module and the module dumped in a bin for Iron Mountain to haul off.

    6: Combine a cryptographic token and an SSD array, so one can have an encrypted hard disk where the PIN is typed on the device itself before it can be used. That way, no keylogger on a compromised PC can intercept the data. Add volumes where different PINs protect different volumes, with too many wrong guesses causing the device to zero out that volume's key, and this would be a way to back up PCs securely without needing any additional encryption software.

    7: Combine a fast flash array with a tape library for an easier way to do D2D2T backups.

    8: Put some flash onto a tape format, so a tape can be encrypted with one key, but the flash storage on the tape would store an access list of who can unlock the tape's master key. This way, a passphrase, a smart card, and a PGP/gpg key on someone's machine all work to recover data from a tape.

    9: A read-only format that can be made very cheaply with a decent capacity. If done right, this might be able to replace Blu-Ray for a movie or audio format. To boot, libraries can be made where all the disks could be readable at once.

    10: A standardized full-disk-encryption format. That way, I insert a flash disk into my camera or phone, enter a password, and it can read/write to it. Then I put it into my computer, type the passphrase, and copy the data. If the flash disk is stolen, the data is protected unless the attacker can yank the key out of the computer or phone's memory (a far harder feat than just picking up an accidentally lost flash drive).

  • Re:If anything (Score:3, Interesting)

    by timeOday ( 582209 ) on Wednesday June 16, 2010 @02:10AM (#32587758)

    Some of my larger SAN volumes show 2TB/day of writes, which means, according to Intel's X25-E datasheet, a 64GB drive would last ~1,000 days, or under 3 years.

    I don't get it. Is that 2TB/day per 64GB of storage? (Approx 40 total rewrites of your entire storage capacity per day?) Or 2TB/day spread across a much larger storage capacity? I would guess the latter, in which case the writes would be spread across a large number of drives and less intensive on each drive.
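    Either way, the per-drive write load (and hence wear) drops in proportion to the number of drives the writes are striped across. With illustrative numbers:

```python
# Per-drive write load when 2 TB/day is striped across N drives.
total_writes_tb_per_day = 2.0
for num_drives in (1, 8, 24):
    per_drive_gb = total_writes_tb_per_day * 1024 / num_drives
    print(num_drives, round(per_drive_gb, 1))  # GB/day each drive absorbs
```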

  • Better ECC (Score:2, Interesting)

    by Anonymous Coward on Wednesday June 16, 2010 @06:31AM (#32588852)

    It's just a matter of time before someone uses a stronger ECC. Today, each 512-byte sector has an extra 16 bytes for its ECC checksum, which is enough to recover one bit. Given enough space for the checksum, it's possible to recover as much data as needed. There are plenty of hardware implementations; every wireless technology designed in the last 20 years uses one, typically with extra data in the range of 1/6 to 1/2. Hard drives certainly implement stronger ECC too.

    Now the problem is where to place the extra checksums in current NAND chips, but it should be solvable. This problem is about as difficult as implementing wear leveling.
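    A minimal example of the single-bit-correcting codes described above: a Hamming(7,4) code uses 3 parity bits to protect 4 data bits and can correct any single flipped bit. Stronger codes (BCH, LDPC) extend the same principle with more redundancy.

```python
# Hamming(7,4): parity bits at codeword positions 1, 2, 4 (1-indexed);
# the recomputed parity "syndrome" directly names the corrupted position.
def encode(d):                      # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7

def decode(c):                      # c: 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3 # 0 = clean; else the corrupted position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]] # extract the data bits

data = [1, 0, 1, 1]
codeword = encode(data)
codeword[4] ^= 1                    # simulate one bit flipped in storage
print(decode(codeword) == data)     # True: the error was corrected
```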

  • Re:Cost? (Score:3, Interesting)

    by BronsCon ( 927697 ) on Wednesday June 16, 2010 @10:12AM (#32590130) Journal

    Really, they should be developing this tech for use with SLC drives. If it can make an MLC perform like an SLC, imagine what it would do for the already-faster-and-longer-lasting SLC drives.

  • Re:Cost? (Score:3, Interesting)

    by samkass ( 174571 ) on Wednesday June 16, 2010 @10:19AM (#32590194) Homepage Journal

    I don't know the details of Anobit's technology, but it sure sounds like they are, essentially, adding forward error correction to the written data. Thus, even if the data you get back is a little garbled, you can detect how garbled it is and recover the original signal, provided it's not TOO garbled. You lose some percentage of your capacity but, like RAID, you can use more of the cheaper parts to provide the same effective capacity at a lower cost.

    It sounds like a clever and, in retrospect, obvious thing to do. I wonder whether they've patented it and how much Slashdot will scream if they have.
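    The RAID analogy made concrete: a single XOR parity block lets you rebuild any one lost data block. (FEC on flash corrects corrupted bits rather than known-missing blocks, but the tradeoff of capacity for resilience is the same.)

```python
# RAID-style erasure recovery: parity = XOR of all data blocks, so any
# one missing block equals the XOR of the survivors plus the parity.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

lost_index = 1                       # pretend block 1 is unreadable
survivors = [b for i, b in enumerate(data) if i != lost_index]
rebuilt = xor_blocks(survivors + [parity])
print(rebuilt == data[lost_index])   # True
```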

  • Re:If anything (Score:3, Interesting)

    by Christian Smith ( 3497 ) on Wednesday June 16, 2010 @10:27AM (#32590294) Homepage

    IANADBA, but something like redo log volumes doesn't exactly tax a mechanical disk, being mostly sequential reads and writes, and so would be a reasonable candidate to leave on HDD. Even a cheap-as-chips 5400rpm laptop drive can sustain 23MB/s (2TB/day) of sequential traffic without breaking a sweat.
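    Sanity-checking the 23MB/s figure (2TB/day expressed as sustained bandwidth, decimal units):

```python
# 2 TB spread evenly over 86,400 seconds.
tb_per_day = 2
mb_per_s = tb_per_day * 1_000_000 / 86_400
print(round(mb_per_s, 1))  # ~23.1 MB/s
```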

    However, using the SAME (Stripe And Mirror Everything) principle and spreading the load across multiple mirrored SSDs should provide both the speed and the endurance capacity you need, with the great random performance of SSDs and less wasted space than short-stroked HDDs.

    I think it'll be interesting to compare the price/performance of all-SSD TPC benchmarks when they start coming in. High-throughput TPC configurations end up expensive partly due to the huge number of HDDs required to provide the IOPS. SSDs should make the storage simpler and reduce the number of drives required, cutting capital and management costs.
