Data Storage

Israeli Startup Claims SSD Breakthrough

Lucas123 writes "Anobit Technologies announced it has come to market with its first solid state drive using a proprietary processor intended to boost reliability in a big way. In addition to the usual hardware-based ECC already present on most non-volatile memory products, the new drive's processor will add an additional layer of error correction, boosting the reliability of consumer-class (multi-level cell) NAND to that of expensive, data center-class (single-level cell) NAND. 'Anobit is the first company to commercialize its signal-processing technology, which uses software in the controller to increase the signal-to-noise ratio, making it possible to continue reading data even as electrical interference increases.' The company claims its processor, which is already being used by other SSD manufacturers, can sustain up to 4TB worth of writes per day for five years, or more than 50,000 program/erase cycles — as contrasted with the 3,000 cycles typically achieved by MLC drives. The company is not revealing pricing yet."
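
For scale, the two headline claims are mutually consistent for a drive of roughly 150GB, as this back-of-the-envelope sketch shows (illustrative arithmetic only; it assumes ideal wear leveling and no write amplification, and the capacity figure is derived, not announced):

    # Sanity check: 4TB/day for five years vs. 50,000 P/E cycles.
    TB = 10**12
    total_written = 4 * TB * 365 * 5          # ~7,300 TB over five years
    pe_cycles = 50_000
    capacity_gb = total_written / pe_cycles / 10**9
    print(f"implied drive capacity: ~{capacity_gb:.0f} GB")   # ~146 GB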
This discussion has been archived. No new comments can be posted.

  • Cost? (Score:5, Informative)

    by Manfre ( 631065 ) on Tuesday June 15, 2010 @10:51PM (#32587034) Homepage Journal

    If we have to ask how much it costs, we definitely cannot afford it.

    • Re:Cost? (Score:5, Insightful)

      by the linux geek ( 799780 ) on Tuesday June 15, 2010 @10:54PM (#32587044)
      Early adopters will pay for continued R&D, which will then make this affordable for most people down the line. It's how these things work.
    • Re:Cost? (Score:5, Interesting)

      by Anonymous Coward on Tuesday June 15, 2010 @11:19PM (#32587194)

      You have an interesting point there.

      Several years ago, maybe back in 2005, Anobit visited us and showed off what they were working on. They were little guys in the flash/solid state business and had come out with this nifty algorithm that would allow flash with really low read/write endurance to perform like today's SSDs.

      They were the first (that I know of) to come up with a way to spread the writes across unused portions of memory so that, on average, every bit of memory would have the same amount of wear on it. It wasn't until several years later that I saw on Slashdot that Intel had come up with this "new" idea in their SSDs.

      Back at the time, the Anobit technology was really cool. But unfortunately, they were prohibitively expensive and we could not use them in our rugged systems.

      Seems that they have still been hard at work over there. Very cool. They deserve the success.
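
      A toy sketch of that wear-leveling idea (hypothetical class; real controllers track erase counts per block and remap writes through a flash translation layer):

          # Toy wear leveler: always hand out the least-worn free block,
          # so erase counts stay roughly uniform across the device.
          class WearLeveler:
              def __init__(self, num_blocks):
                  self.erase_counts = [0] * num_blocks
                  self.free_blocks = set(range(num_blocks))

              def allocate(self):
                  # Pick the free block with the fewest erases so far.
                  block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
                  self.free_blocks.remove(block)
                  return block

              def erase(self, block):
                  self.erase_counts[block] += 1
                  self.free_blocks.add(block)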

      • Re: (Score:2, Insightful)

        by bm_luethke ( 253362 )

        So, let's assume what you say is true - is this really a nice business that deserves success? Hard to say.

        Obviously if they can do all that is claimed then they "deserve success", though of course that depends on your definition of success. If success means being the richest company in the world showered with personal sex slaves then, no, they really didn't deserve that. If you mean deserve to pay their employees a slightly above average salary for their area and have a slightly above average return for thei

      • by YesIAmAScript ( 886271 ) on Wednesday June 16, 2010 @03:00AM (#32588216)

        Wear leveling was normal for NAND long before that.

        What kind of n00b are you?

        http://www.google.com/patents?vid=6850443 [google.com]

      • Better ECC (Score:2, Interesting)

        by Anonymous Coward

        It's just a matter of time before someone uses a stronger ECC. Right now each 512-byte sector has an extra 16 bytes for its ECC checksum, which is enough to recover one bit. Given enough space for the checksum it's possible to recover as much data as needed. There are a lot of implementations in hardware. Every wireless tech designed in the last 20 years uses one; typically the amount of extra data is in the range 1/6 - 1/2. Hard drives certainly implement better ECC too.

        Now the problem is where to place extra checksums in
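
        To put numbers on that tradeoff (the 16-bytes-per-512 figure and the 1/6 - 1/2 code rates are the ones quoted above; the layout is just for illustration):

            # Redundancy overhead: today's per-sector ECC vs. wireless-style FEC.
            sector = 512                    # data bytes per sector
            current_ecc = 16                # ECC bytes per sector today
            print(f"current overhead: {current_ecc / sector:.1%}")   # ~3.1%
            for rate in (1/6, 1/2):         # extra-data fractions quoted above
                print(f"rate {rate:.2f}: {sector * rate:.0f} extra bytes per sector")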

        • by Guspaz ( 556486 )

          Extra ECC data and fancy controller trickery can't get around the fact that the write limit is a limit of the underlying flash, not the controller...

          • by epine ( 68316 )

            Extra ECC data and fancy controller trickery can't get around the fact that the write limit is a limit of the underlying flash, not the controller...

            Extra ECC data and fancy controller trickery can't get around the fact that the magnetic media density limit is a limit of the underlying magnetic domains, not the controller...

            No wait! Then they invented PRML. Turns out the underlying limit was actually due to engineers lacking vision. All they needed was a new analytic frame of reference. The same deal has happened over and over again with RF spectrum. One man's noise is another man's signal. I just don't know the RF world well enough to cite exampl

      • Re: (Score:3, Interesting)

        by samkass ( 174571 )

        I don't know the details of Anobit's technology, but it sure sounds like they are, essentially, adding Forward Error Correction [wikipedia.org] to the written data. Thus, even if the data you get back is a little garbled you can detect how garbled it is and recover the original signal if it's not TOO garbled. You lose some percentage of your capacity, but, like RAID, you can use more of the cheaper parts to provide the same effective capacity at a lower cost.

        It sounds like a clever and retroactively obvious thing to do-- I wonder if th
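
        A minimal sketch of the FEC principle (a textbook Hamming(7,4) code, far weaker than the codes real drives use, and not specific to Anobit):

            # Hamming(7,4): 4 data bits + 3 parity bits; corrects any single flipped bit.
            def encode(d):                        # d = [d1, d2, d3, d4]
                p1 = d[0] ^ d[1] ^ d[3]
                p2 = d[0] ^ d[2] ^ d[3]
                p3 = d[1] ^ d[2] ^ d[3]
                return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

            def decode(c):                        # c = 7-bit codeword, possibly corrupted
                s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity check over positions 1,3,5,7
                s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity check over positions 2,3,6,7
                s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity check over positions 4,5,6,7
                pos = s3 * 4 + s2 * 2 + s1        # syndrome = 1-based error position (0 = clean)
                if pos:
                    c[pos - 1] ^= 1               # flip the bad bit back
                return [c[2], c[4], c[5], c[6]]   # recovered data bits

            word = encode([1, 0, 1, 1])
            word[4] ^= 1                          # garble one bit in "storage"
            assert decode(word) == [1, 0, 1, 1]   # data still recovered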

    • Re: (Score:3, Informative)

      by renoX ( 11677 )

      Not really: their technology is used to make MLC as robust as SLC, so if it costs more than SLC, it's useless.

      • Re: (Score:3, Interesting)

        by BronsCon ( 927697 )

        Really, they should be developing this tech for use with SLC drives. If it can make an MLC perform like an SLC, imagine what it would do for the already-faster-and-longer-lasting SLC drives.

    • by mcgrew ( 92797 ) *

      Anobit Technologies announced it has come to market with its first solid state drive using a proprietary processor

      There are open source processors?

  • Call me when it's 75% cheaper than other "solutions".

    • Cause technological advances are only progress if they mean you can get a cheaper netbook right this second.
    • Re: (Score:2, Interesting)

      by mlts ( 1038732 ) *

      Actually, I'd love something with any of the following:

      1: Noticeably better price, but without sacrificing reliability. An average HDD in the enterprise has 1 million hours MTBF with constant reads/writes. An SSD should be similar, or perhaps a lot more because there are no moving parts.

      2: An archival grade SSD that can hold data for hundreds, if not thousands of years before so many electrons escape the cells that a 1 and a 0 become impossible to tell apart. I don't know any media that can last for more

      • by Zakabog ( 603757 )

        3: SSDs using a different port than SATA. Perhaps have it interface as a direct PCI-E device with a custom bus to add more SSD capacity in a similar form factor to RAM DIMMs.

        Seriously...? [newegg.com]

      • Re: (Score:3, Informative)

        by vadim_t ( 324782 )

        1: Noticeably better price, but without sacrificing reliability. An average HDD in the enterprise has 1 million hours MTBF with constant reads/writes. An SSD should be similar, or perhaps a lot more because there are no moving parts.

        It's a tradeoff. Reliability needs redundancy, and redundancy costs money. So either take the financial hit, or wait until the reliable devices get cheap enough.

        2: An archival grade SSD that can hold data for hundreds, if not thousands of years before so many electrons escape the

        • by hitmark ( 640295 )

          i just wish tape was more available in a home use price range for archiving the increasing amount of family data.

      • by dargaud ( 518470 )

        3: SSDs using a different port than SATA. Perhaps have it interface as a direct PCI-E device with a custom bus to add more SSD capacity in a similar form factor to RAM DIMMs.

        Yes, I want SSDs that can replace CD readers in my older laptops (just slide out the whole thing), and/or SSDs that I can plug into the usually unused miniPCI port of my older laptop. None existed last time I looked.

        A standardized full disk encryption format. This way, I insert a flash disk into my camera or phone

        Yes, with an easy way to enter the password on keyboard-less devices, so I won't be afraid to pass through customs with an mp3 player.

        • MiniPCI-based SSDs, if they exist at all (I've never seen one), are doomed to forever be super-niche items. Why? Because miniPCIe SSDs became a fairly major product category with the rise of the netbook (a randomly chosen example [newegg.com]; no endorsement is implied, just to demonstrate how easy they are to find). Since basically no new laptops are coming out with miniPCI slots, only miniPCIe slots, there just isn't a whole lot of demand. If you actually meant PCIe, though, shop away! (assuming your laptop has a large
      • 2: An archival grade SSD that can hold data for hundreds, if not thousands of years before so many electrons escape the cells that a 1 and a 0 become impossible to tell apart. I don't know any media that can last for more than 10 years reliably. Yes, maybe a CD-R or two may last that long, but it is more a matter of luck than anything else.

        Meh. Copy it off and back on every five years.

        The main problem with long term data on SSD is charge leakage. That does not cause mechanical wear (unlike lots of writes). If you archive data to an SSD, then periodically re-write it, it's perfectly fresh again. Doing so will give you decades of safe storage without ever getting near the write limits. And doing so will not take much time due to the inherent speed of the media, and will get both faster and cheaper "for free" as the systems improve over time -- t

        • by hitmark ( 640295 )

          sounds like something that snake oil gibson was pushing some years ago, a program to strengthen the magnetic pattern on the HDD.

      • by mcgrew ( 92797 ) *

        I don't know any media that can last for more than 10 years reliably.

        Acid-free paper does. I have a book at home that was printed in 1886.

        • by mlts ( 1038732 ) *

          I should have stated computer media, because a quality book in a decent environment can last centuries, perhaps more as archival and preservation technologies improve.

          Digital media doesn't fare as well. Paper tape swells and gets misaligned. Punch cards can get put out of order and don't have the density to handle modern storage. Magnetic domains on tape drives get scrambled. CDs and DVDs suffer from oxidation on the dye layer. Photos fade [1]. Hard disks get mechanical issues such as bearing failure

          • by mcgrew ( 92797 ) *

            I think the way to keep an archive for life would be to continuously back it up before the original media degrades. I have a lot of CDs that are copies of CDs that are no longer readable. That's the beauty of digital media; copies are identical.

            I'm not too sure about photographic storage, as film is easily scratched and can degrade in other ways as well. Better to back up early and often.

      • by IICV ( 652597 )

        4: An SSD built onto the motherboard. This way, a laptop can be a bit thinner due to not worrying about a 2.5" drive.

        I bet you this will be standard on MacBook Pros within the next five years.

        • by mlts ( 1038732 ) *

          Other than the loss of upgradability/expandability, I wouldn't mind that. If the Flash drive were on a mPCIe card, or perhaps even a superfast MicroSD card, that would be a nice compromise between space and the ability to get a larger disk.

          • by IICV ( 652597 )

            Other than the loss of upgradability/expandability, I wouldn't mind that.

            Haven't you been paying attention to Apple's mobile products? Size and battery life trump everything, up to and including expandability and serviceability. If they can compress the MacBook by ten millimeters with a motherboard-integrated Flash drive, they're going to do it. Hell, if you open up a MacBook, the CD drive takes up the most space - followed by the hard drive; until Apple pulls an Apple (circa 1999) and removes the CD like t

            • by mlts ( 1038732 ) *

              Apple has already done that with the MBAir. I do think that the rest of the MB line will go exclusively flash once there are motherboard based SSDs that have 250GB or more.

    • Call me when it's 75% cheaper than other "solutions".

      From the description (and a lot of guesswork), it sounds a bit like they might have put in a basic RAID system, but using separate memory chips instead of drives. In terms of price vs performance/capacity, RAID has been a good solution, so this might well make sense, IF they don't try to make it out to be some black box filled with magical gold dust, rather than a simple application of existing tech in a new area.

    • Depends on what you consider cheap.... Are you talking about dollars flying out of your wallet? It seems to me that if you go with the other solutions, fewer dollars will fly much more often.

      Let's just say:
      Standard SSD sustains 3,000 cycles and costs $100
      Anobit SSD sustains 50,000 cycles and costs $500

      With the same usage, you will have gone through 16+ standard SSD drives before your Anobit SSD fails. So for 5 times the cost, you get 16 times the usage.

      If we break that down to cost per write cycle, the va
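
      Finishing that arithmetic with the same hypothetical prices (a sketch; actual pricing is unannounced):

          # Cost per write cycle, using the hypothetical prices above.
          standard = {"cycles": 3_000, "price": 100}
          anobit = {"cycles": 50_000, "price": 500}
          for name, d in (("standard", standard), ("anobit", anobit)):
              print(f"{name}: ${d['price'] / d['cycles']:.4f} per cycle")
          # standard: $0.0333 per cycle; anobit: $0.0100 -- ~3.3x cheaper per cycle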

  • by moogied ( 1175879 ) on Tuesday June 15, 2010 @10:59PM (#32587068)
    Never do what you can in hardware, in software. ...and we can't do this in hardware! :)
    • So true, and it looks like they didn't manage to do it in software. They claim to improve both durability AND performance; http://storage-news.com/2010/06/16/yet-another-ssd-breakthrough/ [storage-news.com] has a comparison of the quoted performance numbers for this drive, and they appear to be lower than quoted numbers for a competitor's MLC-based SSD.
  • by guruevi ( 827432 ) on Tuesday June 15, 2010 @11:06PM (#32587106)

    With enterprise SSDs (SLC) still in the $100/GB range, we're far away from general acceptance in the datacenter. MLC also has the problem of being slow to write to vs. SLC, which is one of the important metrics when considering SSDs to accelerate your classic spindles. SLCs are reliable enough to last for at least 3 years, even fully loaded at 3 or 6 Gbps.

    I used some Intel X-25-Ms and X-25-Es in my environment as read and write caches respectively, as they are affordable and generally get the highest scores in IOPS and throughput, and the performance is way under my expectations. The Intel X-25-Es don't work well under heavy loads on LSI controllers (they throw errors and SCSI bus resets) while the Intel X-25-Ms work fine. Every other month there is fresh firmware to fix one problem or another, and firmware updating is manual labor with a boot CD, not something you can simply schedule at night or do while the system is online, so they are what I would call beta-quality. Especially once fully filled, IOPS performance drops like a brick from ~3,000 to ~1,000, which a small set of hard drives can match, so the only thing it's still good for is latency.

    We'll see what the Vertex 2 EX brings (SandForce 1500 controller), which has an advertised 50K IOPS, although that might be more marketing than anything. I'm still waiting on a decently priced SAS SSD that can actually sustain 5,000-10,000 IOPS by itself even when fully loaded.

    • Re: (Score:3, Informative)

      by XanC ( 644172 )

      Isn't it more like $10/GB?

    • Re: (Score:3, Interesting)

      by Shikaku ( 1129753 )

      Every other month there is fresh firmware to fix one problem or another, and firmware updating is manual labor with a boot CD, not something you can simply schedule at night or do while the system is online, so they are what I would call beta-quality.

      Why can't firmware be upgraded on SSD drives thusly:

      There are X MB that are always labeled as bad blocks. The firmware updater writes the new image to those blocks (this can be scripted, since writes to these bad blocks are just a dd to a specific place), the controller checks a signature, and if it passes, it halts all reads and writes while it upgrades the firmware.

      Then when it completes, all reads and writes resume. ;) Yes, I know that can be disastrous, but it seems like a good way to live-update.

      • by Anonymous Coward on Wednesday June 16, 2010 @12:10AM (#32587482)

        ...

        There are X MB that are always labeled as bad blocks. The firmware updater writes the new image to those blocks (this can be scripted, since writes to these bad blocks are just a dd to a specific place), the controller checks a signature, and if it passes, it halts all reads and writes while it upgrades the firmware.

        Then when it completes, all reads and writes resume. ;) Yes, I know that can be disastrous, but it seems like a good way to live-update.

        Several years ago, I wrote an ATA drive firmware flash driver and utility, to allow my company's customers to upgrade firmware in the field. Let me explain how drive firmware flash works.

        Most/all modern drives (or at least Enterprise versions) support the ATA DOWNLOAD_MICROCODE command. The flash chips on the electronics board (or reserved sectors on the platters, depending on the implementation) have sufficient capacity to hold the running firmware, and to hold the new version. The new version is buffered in the drive, validated, then written to the chips/spindle, validated again, then activated and the drive reset.

        Modulo some minor drive-specific quirks, the DOWNLOAD_MICROCODE command works as specified. Other than adding model strings to the utility's whitelist, the Intel X25-Es worked without issue. While we've always recommended performing the flash from single-user mode and immediately rebooting, I've done it during normal operations plenty of times. The main things to remember are to quiesce the channel before doing the flash, and to properly reinitialize it afterwards.

        Posting anonymously because I'm revealing details about my job.
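
        A sketch of the staged flow described above (illustrative Python; the drive object and its methods are hypothetical stand-ins for what firmware does when it handles ATA DOWNLOAD_MICROCODE):

            # Staged firmware update: buffer, validate, commit, validate, activate.
            def download_microcode(drive, image):
                drive.quiesce_channel()            # stop I/O on the channel first
                drive.buffer_new_firmware(image)   # new image held alongside the running one
                if not drive.validate_buffered():  # check before committing anything
                    raise IOError("bad firmware image")
                drive.write_to_flash()             # commit to flash chips / reserved sectors
                if not drive.validate_written():   # verify the committed copy
                    raise IOError("flash verify failed")
                drive.activate_and_reset()         # switch over and reset the drive
                drive.reinit_channel()             # bring the channel back up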

        • It gets interesting if the drive is behind a RAID controller. We just did that, and it took a while to get it right and work around the bugs in pass-through mode.

    • "Especially once fully filled the IOPS performance drops from ~3000 IOPS like a brick to ~1000 IOPS which a small set of hard drives can fulfill so the only good thing it's left for is latency."

      Does your environment support trim natively? Just curious.

      My environment does not, and after a week or two I start to notice performance going south and remember to run the 'optimization' utility intel offers. This on an X-25M, G2.

      As an aside, I've noticed that your average Dell workstation cannot support two X-25's.
      • by guruevi ( 827432 )

        TRIM doesn't work when your drive is actually filled 100%. I use it as cache, not as a data carrier. Even so, in the datacenter, drives are frequently filled to such a capacity that even TRIM won't do much, and TRIM only works when you know which blocks are supposed to be empty, something a lot of data carriers in the datacenter don't know (e.g. RAID controllers, iSCSI targets, ...).

    • by jez9999 ( 618189 )

      Especially once fully filled, IOPS performance drops like a brick from ~3,000 to ~1,000, which a small set of hard drives can match, so the only thing it's still good for is latency.

      What about noise, heat, and energy usage?

    • One thing you need to be careful about with the Intel SSDs is that they have some serious firmware bugs with their SMART implementation. Issuing a SMART command while the controller is busy with other non-SMART commands can brick the SSD and require a full reset or power cycle to fix.

      If you are getting bus errors on your controllers and not issuing SMART commands then it probably isn't the SSD's fault.

      In any case, SSDs have plenty enough going for them to warrant the significantly increased cost per GB of st

    • Why don't you grab a PCIe SSD? An ioDrive or something? Those can score 150k IOPS in real-world tests, for only a couple thousand dollars. If IOPS matter more than capacity, they deliver.

  • by drizek ( 1481461 ) on Tuesday June 15, 2010 @11:24PM (#32587222)

    How is this different/better than the sandforce controllers we already have?

  • If anything (Score:5, Interesting)

    by Gordo_1 ( 256312 ) on Tuesday June 15, 2010 @11:27PM (#32587244)

    I suspect this will eventually bring down the manufacturing costs of enterprise class drives, rather than making consumer drives "more reliable". I think reliability concerns with current consumer-oriented MLC designs are overstated.

    Anecdotally, my Intel 160GB G2 drive is going on 7 months of usage as a primary drive on a daily used Win7-64 box, and has averaged about 6GB per day of writes over that period (according to Intel's SSD toolbox utility). Given that rate of use over a sustained period (which theoretically means it could last decades, assuming that some as yet undiscovered manufacturing defect doesn't cut it short) combined with the fact that even when SSDs fail, they do so gracefully on the next write operation, I just don't see the need for consumer-oriented drives to sport such fancy reliability tricks.

    • Could be they are just trying to counter all the unfounded bad opinions that seem to exist about SSD drives. For one, I have met more than one IT engineer who seems to believe that an SSD will fail (and be rendered completely unusable) after 1 or 2 years of regular use :/
      • Re: (Score:3, Interesting)

        by afidel ( 530433 )
        It totally depends on the use case. Some of my larger SAN volumes show 2TB/day of writes, which means according to Intel's X-25-E datasheet a 64GB drive would last ~1,000 days, or under 3 years.
        • Also don't forget that the size of the writes can make a big difference. If it writes in 512-byte sectors then writing one byte causes the same wear as writing 512. I've got no idea how to predict a shortened lifespan given this fact; it's highly dependent on the user's usage habits. All I can say for sure is that the only time a drive will get close to its expected lifespan is if it's used in something like a video editing environment where all writes are large, contiguous files. God help you if you ran s
          • This is called write amplification, and it depends on many factors: linearity of writes by the computer, how often the computer tells the SSD to flush dirty data to media, the size of the SSD's RAM cache, the ability of the SSD to write-combine or scatter/gather sectors, the wear leveling algorithm used by the SSD, and a few other factors.

            MLC flash uses 128K blocks. If a database or log is flushing every 1K you wind up with a 128:1 write amplification effect, for example. With some tuning (for example flus
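
            The 128:1 figure, spelled out (worst case, assuming the controller does no write-combining at all):

                # Write amplification from small flushes into large erase blocks.
                block_size = 128 * 1024     # 128K erase block
                flush_size = 1024           # database/log flushing every 1K
                print(f"worst case: {block_size // flush_size}:1")   # 128:1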

        • Re: (Score:3, Interesting)

          by timeOday ( 582209 )

          Some of my larger san volumes show 2TB/day of writes which means according to Intel's x-25e datasheet a 64GB drive would last ~1,000 days or under 3 years.

          I don't get it. Is that 2TB/day per 64GB of storage? (Approx 40 total rewrites of your entire storage capacity per day?) Or 2TB/day spread across a much larger storage capacity? I would guess the latter, in which case the writes would be spread across a large number of drives and less intensive on each drive.

          • Re: (Score:3, Informative)

            by afidel ( 530433 )
            Nope, that's 2TB/day across 20GB (I believe; I logged off my corp systems a while ago, but it's in the low 10s of GB regardless of the actual size). It's the redo log volumes for a fairly high transaction load OLTP Oracle server.
            • Re: (Score:3, Interesting)

              IANADBA, but something like the redo log volumes don't exactly tax a mechanical disk, being mostly sequential reads and writes, and so would be a reasonable candidate to leave as HDD. Even a cheap as chips 5400rpm laptop drive could sustain 23MB/s (2TB/day) sequentially without breaking a sweat.

              However, using the SAME (Stripe And Mirror Everything) principle, spreading all load across multiple mirrored SSDs should provide both the speed and endurance capacity you would need, with the great random performanc

        • If you are writing 2TB/day to a 64GB drive then you already expect to replace the thing every few years, even if it was a platter, at least in the median case.

          If you are on the extreme end, then platter failures are quite common.
          • by afidel ( 530433 )
            We lose about 1.5-2% of spindles per year; losing effectively 33+% per year of already expensive spindles doesn't really work out too well.
    • You're probably making too high of an assumption about the incremental cost to add this to consumer products. I would be surprised if it's even a single square mm of die area. All depends how they price the IP.
    • by mysidia ( 191772 )

      Makes sense... it will make it less expensive to manufacture reliable enterprise drives.

      New enterprise SSDs can be MLCs using this technology; they may be higher capacity, or provide more profits to the SSD part manufacturers, but will be just as expensive. Enterprises pay for reliability that meets the requirements for their market.

      The consumer market has a lower level of reliability... consumers aren't willing to pay as much for reliability, so reliability will be less.

      You can't provide greater reli

  • by Anonymous Coward

    How can a solid state drive have a "signal to noise ratio"?

    It's all digital. Either the voltages are within their valid thresholds or they are not.

    Wouldn't you need the world's fastest DSP to "clean up" noisy digital signals and still maintain the type of transfer rates they claim?

    There is nothing about this breakthrough that makes any sense. Snake oil?

    • by Ster ( 556540 ) on Wednesday June 16, 2010 @12:33AM (#32587572)

      Say you're talking about a 4-level MLC cell, and say it runs at 3.3V. If the voltage is in [0V, 0.825V), that's 00b; [0.825V, 1.65V) is 01b; [1.65V, 2.475V) is 10b; and [2.475V, 3.3V) is 11b. But those are analog voltages - the controller has to read the voltage, do an analog-to-digital conversion, and figure out which level it corresponds to. The ranges listed above assume perfect discrimination - in practice it's difficult to differentiate small differences, so they don't use the full range. With better A-to-D and signal processing, they can resolve the differences better, which in turn lets them get more write cycles.

      Those numbers are pulled out of the air for illustrative purposes; I have no idea what the real values are.
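
      A toy version of that read step, using the same made-up thresholds (real controllers calibrate the bands per block rather than splitting the range evenly):

          # Quantize a 4-level MLC cell's analog voltage into 2 bits.
          VDD = 3.3
          LEVELS = ("00", "01", "10", "11")

          def read_cell(voltage):
              band = min(int(voltage / (VDD / 4)), 3)   # which quarter of the range
              return LEVELS[band]

          assert read_cell(0.5) == "00"
          assert read_cell(2.0) == "10"
          # Better signal processing effectively narrows the uncertainty at each
          # band edge, so noisier (more worn) cells can still be read correctly.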

        • I'm pretty sure flash chips use analog voltage comparators internally, not A/Ds. Though, theoretically, it would be possible to mess with the thresholds for the comparators, so if a block had excessive bit errors the thresholds could be manipulated and the block re-read to determine which bits are the most likely culprits. With that information in hand, further error correction could be done.

          That is, normally ECC is calculated without any knowledge about which of the N bits of data might be erroneous. If

      • I've never seen a Flash chip with an analog interface. Citation needed.

      • the controller has to read the voltage, do an analog-to-digital conversion

        If there are only 4 levels then it makes much more sense to use comparators. The number of transistors required would be greatly reduced and the latency almost eliminated. Should one require the flexibility of being able to adjust the reference voltage, one could utilize a DtoA as a reference. DtoA circuitry is much simpler/faster than AtoD circuitry.

        • by Agripa ( 139780 )

          If there are only 4 levels then it makes much more sense to use comparators.

          A comparator is a 1-bit A to D converter. Three comparators, one per threshold between the four levels, make a 2-bit A to D converter.

          Should one require the flexibility of being able to adjust the reference voltage, one could utilize a DtoA as a reference. DtoA circuitry is much simpler/faster than AtoD circuitry.

          Internally, something like this is already done. During writes, a reference cell or cells are written which are used during reads to adjust or generate the reference
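
          In code form, the comparator bank is exactly a small flash A-to-D (same illustrative thresholds as upthread; nothing here is from a real datasheet):

              # 2-bit "flash" ADC: three comparators, thermometer code -> binary.
              THRESHOLDS = (0.825, 1.65, 2.475)

              def flash_adc(voltage):
                  thermometer = [voltage >= t for t in THRESHOLDS]   # comparator outputs
                  return sum(thermometer)   # 0..3, i.e. a 2-bit level code

              assert flash_adc(0.4) == 0
              assert flash_adc(2.0) == 2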

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      It's all digital.

      Actually, once you get far down enough, nothing is :)

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Once you get far down enough, everything is. Consider Planck time: it's the smallest quantum of time for which there can be "a difference that makes a difference".

  • We already did something very similar to this on the BAIL backup subsystem of the Cassini spacecraft many years ago, and it didn't require a "special" processor.

  • New trend (Score:3, Funny)

    by gringofrijolero ( 1489395 ) on Tuesday June 15, 2010 @11:59PM (#32587428) Journal

    The SSD will have a more powerful CPU than the computer.. All it will need is a graphics and audio chip, more RAM and.. oh... nevermind..

  • Just great, another awesome piece of tech I so desire in my machine that I can't afford. CURSES YOU!
  • The company is not revealing pricing yet.

    They are competing on reliability, so it makes sense the price would be higher.

    The fact that they are not advertising the price strongly suggests they do not intend to compete based on price, and that the price will be high.

    Marketing rule #1 is to shove all the positive aspects of your product in the customer's face.

    Don't talk about the negatives or the disadvantages, if you can avoid it.

    In this case the product's not out yet, so they can avoid talking about the high pr

  • by blind biker ( 1066130 ) on Wednesday June 16, 2010 @12:59AM (#32587710) Journal

    So we can have 50,000 instead of 3,000 rewrite cycles. That's great. However, I still like the 100,000 to 1,000,000 rewrite cycles of SLC. Actually, SLC is only 50% more expensive to manufacture (per bit) than two-bit (4-level) MLC - I really don't understand why manufacturers are so enamoured with MLC.

    • Because price is the most important factor here? Reliability has gotten good enough; what needs to happen now is a sharp reduction in price.

      • As I said, SLC is only 50% more expensive, per bit, than 4-level (2 bit/cell) MLC. That hardly amounts to a "sharp" decrease in price.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      If I'm buying 100,000 parts, SLC costs 5x more (per bit) than MLC at present. I'm pretty certain the reason is supply - there are factories churning out an ungodly amount of MLC for use in memory cards, thumbdrives, MP3 players, etc., but SLC really only finds use in the embedded (where I've used it) and enterprise-SSD space.

      MLC isn't *that* bad - the reliability issues you'll find with it are bit errors, not entire lost blocks of data. Add an extra level of error protection and plenty of spare area to han

  • by pslam ( 97660 ) on Wednesday June 16, 2010 @01:14AM (#32587774) Homepage Journal

    This sounds absolutely no different to how all wear-leveled, error correcting flash controllers work. They all use multiple levels of ECC to decrease the error rate. The 'signal processing' they're doing doesn't sound like anything new.

    If there is something new going on here, it's absolutely impossible to decode from the layman's language used in the article. All I hear is "Other vendors use X bits for ECC. We use Y bits and we do it in software instead of hardware.", which is basically just another way of saying "Other vendors have 4 blades, we have 5 blades."

    • You just don't read marketese. Are you an engineer or what? If yes, this is not for you.

    • If there is something new going on here, it's absolutely impossible to decode from the layman's language used in the article. All I hear is "Other vendors use X bits for ECC. We use Y bits and we do it in software instead of hardware.", which is basically just another way of saying "Other vendors have 4 blades, we have 5 blades."

      Well, as you can see, their dials go to eleven!

  • That's a shame... I thought they had developed a Super Star Destroyer. Nothing to see here... move along.

  • The article says this new technology boosts the number of write cycles from 3,000 to 50,000. Sounds good, but then again, SLC flash in 1991 supported 1 million writes and MLC 100,000 writes. Later consumer grade MLC flash claimed to handle 10,000 writes, Micron is selling MLC flash that supports 30,000 writes, and I recall AMD having MLCs with 100,000 writes. Maybe the 3,000-write MLC is the high density, as-cheap-as-possible kind of flash, and this new Israeli technology works on that. But unless it is cheape

  • Call me old fashioned, but "in a big way" simply doesn't cut it for me. It is language used by pikeys, or by bullies, who are a linguistic or intellectual stone's throw away from pikeys.

    Do the geek proud and make a bit of an effort when writing. After all, the typical geek reads more than Joe Average (well, "he" claims so, and I personally do anyway) and hence trains his brain in appreciating well-formed sentences.

    Besides, there are so many alternatives to "in a big way".
    • They should have used "up to X% better" or "all new" to make it clear that this is marketing BS.

    • by mcgrew ( 92797 ) *

      After all, the typical geek reads more than Joe Average

      Then why do so many slashdotters spell "lose" with two Os, even though both can be verbs and have completely different meanings? Or don't know when and when not to use an apostrophe, can't tell their from there, etc? For some of them English is a second language but most of them write as if they've never read a book before. Even someone who never reads anything but pulp fiction would do better than that.

  • Two years ago I made the huge mistake of letting my gf convince me she needed Dead Sea dirt for her chronic illness. The result: me and my rented car got nearly STONED to death by Jewish hassidic (sp?) villagers just off the main road. I understand now it's not polite to take pictures of your synagogues shirtless, but c'mon, stoning tourists without even giving them a chance to explain themselves? It did not happen to me but later I heard stories of people killed by Jewish religious students for kissing on the beach. Is
  • Based on this patent, it looks like they write, read back an analog signal, and then use any deviation from expectations to compute a compensation factor for a second write which is the actual data write. In other words, the first write is used to calibrate the data writes. I assume this calibration is done rarely or the write bandwidth would be 1/3 of a non-calibrating system.
    Variations due to process (the cell is smaller or larger than intended) only need to be calibrated once. Variations due to environ
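
    In code form, the calibrate-then-write scheme reads as below (a toy model; the cell object and its methods are hypothetical, and per the comment above a real controller would calibrate rarely and cache the correction rather than recalibrate on every write):

        # Calibration write -> read back -> compensated data write.
        def calibrated_write(cell, target_level):
            cell.program(target_level)               # calibration write
            measured = cell.read_analog()            # read back the analog level
            correction = target_level - measured     # deviation from expectation
            cell.erase()
            cell.program(target_level + correction)  # the actual, compensated data write
            return correction                        # cache this; recalibrating every
                                                     # write would cut bandwidth to ~1/3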

"There is no statute of limitations on stupidity." -- Randomly produced by a computer program called Markov3.

Working...