Intel Plans 'Overclocking' Capability On SSDs

Lucas123 writes "Anticipating it will make a 'big splash,' Intel is planning to release a product late this year or very early next that will allow users to 'overclock' solid-state drives. The overclocking capability is expected to allow users to tweak the percentage of an SSD's capacity that's used for data compression. At its Intel Developer Forum next month in San Francisco, Intel has scheduled an information session on overclocking SSDs. The IDF session is aimed at system manufacturers and developers as well as do-it-yourself enthusiasts, such as gamers. 'We've debated how people would use it. I think the cool factor is somewhat high on this, but we don't see it changing the macro-level environment. But, as far as being a trendsetter, it has potential,' said Intel spokesman Alan Frost. Michael Yang, a principal analyst with IHS Research, said the product Intel plans to release could be the next evolution of the SandForce controller, 'user definable and [with the] ability to allocate specified size on the SSD. Interesting, but we will have to see how much performance and capacity [it has] over existing solutions,' Yang said in an email reply to Computerworld."
  • Awsome (Score:5, Insightful)

    by ciderbrew ( 1860166 ) on Friday August 30, 2013 @08:56AM (#44715475)
    Time to make some watercooling blocks and special fans and make money from those with too much.
    • Over-provisioning doesn't change the temperature and they run at about 2 watts anyway.
      • With your precious data, can you really take that chance? Our elite-X watercooling blocks and special fans for power SSD users are designed to protect you*.




        *Product offers no level of protection to your SSD. T&Cs apply.
      • If you read this [tomshardware.com], making sure to click on the first 'see full content' link, you will see where the GP was going with this one.
    • Re:Awsome (Score:5, Funny)

      by Thanshin ( 1188877 ) on Friday August 30, 2013 @09:20AM (#44715669)

      Time to make some watercooling blocks and special fans and make money from those with too much.

      Wow! That would be Overclocked!

      Years ago I could buy a cheap overclocked machine and play any overclocked game I could find. Nowadays, it's not so easy. You need an overclocked watercooling block and overclocked fans.

      I only hope MS overclock their efforts and manage to get an overclocking product in time. I'd be very underclocked otherwise.

  • Sandforce... (Score:5, Insightful)

    by Knuckx ( 1127339 ) on Friday August 30, 2013 @09:03AM (#44715523)

    So, what Intel are saying is that they are going to take an SSD controller with unstable, buggy firmware - and then add a feature that allows users to modify the internal constants the firmware uses to do its job. This can only end very badly, unless Intel and SandForce do some serious testing to find and fix the data corruption issues, the problems with the drive ignoring the host, and the problems where the drive gets stuck in a busy state.

    (all problems detailed in this post have been experienced with an Intel-branded drive using a SandForce controller)

    • by c0lo ( 1497653 ) on Friday August 30, 2013 @11:14AM (#44716805)

      So, what Intel are saying is that they are going to take an SSD controller with unstable, buggy firmware - and then add a feature that allows users to modify the internal constants the firmware uses to do its job. This can only end very badly,...

      How come? Personally, I can see the benefits... run the SSD to glowing hot, write your data, then cut the power. Upon cooling down, the data will be compressed (by thermal shrinking) in a hardware mode... and it is only common sense that being hard is better than being soft when it comes to compression.

    • by Minwee ( 522556 )

      I don't see the problem there. If they did release an SSD that ran at turbo speed out of the box but threw up all over itself unless you tuned it to run at a snail's pace, then it would be fantastic for Intel.

      Not only would they be able to blow through benchmarks and post amazing scores in their ads and reviews, they would also be able to ship barely usable controllers _and_ shift the blame to the user for either fiddling with the settings or not applying the recommended stability settings when things invariably go wrong.

    • So, what Intel are saying, is that they are going to take a SSD controller with unstable, buggy firmware

      so... business as usual?

    • by w1zz4 ( 2943911 )
      I have two Intel SSDs (a 120GB 520 and a 180GB 330) and never experienced the problems you had...
  • by Anonymous Coward

    Translation:
    "It's useless, but idiot gamers will buy anything if we call it over clocked :D"

    • by h4rr4r ( 612664 )

      Pretty much that.
      If gamers really wanted fast disks they would be buying SSDs that plug into PCIe lanes.

      • by Shark ( 78448 )

        Their strategy is going to have to be different, though... With CPUs, they can overcharge and cripple chips for the privilege of overclocking, but with SandForce controllers, I doubt they'll be the only vendor.

      • Gamers usually have graphics cards taking up all the PCIe lanes. The way they are designed, they are usually big enough to cover the next slot (or two). Motherboards are even designed with two (seldom used) smaller PCIe slots below the top PCIe slot because they know that the graphics card will take up more than a single row. Also, PCIe has disadvantages, such as not being easily hot-swappable. A hot-swap tray costs very little; I've used them at work, and I'm definitely putting one in my next build.
    • by bmo ( 77928 )

      "It's useless, but idiot gamers will buy anything if we call it over clocked :D"

      And

      "A certain percentage will fry them and void the warranty and will have to buy more. They lose, we win!"

      --
      BMO

  • by Anonymous Coward

    BSOD

    If you are lucky. A silent killer of data, sneaking around like Jack the Ripper, never known by name, only by result.

    Intel

    Nuf said

  • I guess the word "Turbo" is out of favor these days.

    Time for Seagate to make some real hard drives that spin at 20000 RPM

    • We sure noticed the difference when we switched to 10,000 RPM drives. I'm waiting for SSDs to mature more before making a full commitment; hopefully we'll put them on the PCIe bus when we do.

    • by stms ( 1132653 )

      Time for Seagate to make some real hard drives that spin at 20000 RPM

      I hear the 20000 RPM drives cut roast beef extra thin.

    • I guess the word "Turbo" is out of favor these days.

      Time for Seagate to make some real hard drives that spin at 20000 RPM

      That, or they can keep doing what they already do; make the heads smaller and cram more bits onto each track. Triple the per-track density and you have a drive at 7200rpm that performs like a (completely theoretical) 21,600 rpm drive doing sequential reads. Random reads are for suckers who don't know how to cache.
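
      A minimal sketch of that arithmetic, assuming purely illustrative bits-per-track figures (sequential throughput scales with linear bit density times rotational speed, so 3x the density at 7200 RPM reads like a hypothetical 21,600 RPM drive):

      ```python
      # Sequential throughput scales with linear bit density times rotational speed.
      # The bits-per-track figures are illustrative, not real drive specs.
      def sequential_mb_per_s(rpm, bits_per_track):
          revs_per_second = rpm / 60.0
          return bits_per_track * revs_per_second / 8 / 1e6

      print(f"7200 RPM, 1x density : {sequential_mb_per_s(7200, 10_000_000):6.1f} MB/s")
      print(f"7200 RPM, 3x density : {sequential_mb_per_s(7200, 30_000_000):6.1f} MB/s")
      print(f"21600 RPM, 1x density: {sequential_mb_per_s(21600, 10_000_000):6.1f} MB/s")
      ```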

      • Or something that, AFAIK, has never been done on a production hard drive, like making the arms & heads independent for each platter and building a 3-platter drive that transparently does RAID 5 internally (i.e., presents itself to the outside world as a normal SATA drive, but internally reads & writes in parallel). Or making more intelligent use of the flash cache by caching the first N sectors of a large file in flash, and the remainder on the drive, so it can begin reading from flash immediately while more streams in from the platters.

  • by slashmydots ( 2189826 ) on Friday August 30, 2013 @09:09AM (#44715587)
    Over-provisioning already exists on a ton of different SSDs, like Samsung's and OCZ's. Intel didn't invent anything new, and the controller's MHz isn't going anywhere, nor would that be a good idea anyway. One flaw in the data and it's goodbye, boot-drive data integrity. What a useless "catching up" announcement.
  • Why would I want to use compression at all, if my goal is speed? If maximizing total capacity is not the concern, I would use none of the drive for compression. I think the point to be taken from this is that Intel is recognizing that storage capacities for SSDs are reaching the point where compression is no longer necessary to make the technology a viable alternative to mechanical drives, and we will now begin seeing the true speed potential of the technology.
    • Re: (Score:2, Interesting)

      All SSDs use compression. It's part of why they're so fast. Also, they're quite small. I find it hilarious that a lot even use encryption by default and yet the controller decrypts and spits out the data. There is actually zero encryption then, even if you plug the SSDs into another system.
      • That's counter-intuitive. "Running processes" on data does not make the data travel faster. If using compression improves speed, there is a bottleneck somewhere that allows the data to pool in cache. When the interface reaches the speeds that eliminate the bottleneck, we'll really have some fast drives.
          From the little I've read, it seems that the data is copied to a fast buffer, compressed, and then written to the drive's flash.
          I guess the buffer is necessary because the OS still sees the SSD as just another SATA spinning drive, so the controller has to do all the SSD-specific stuff like allocating blocks based on wear-balancing.

          So once it's in the buffer, it's just a matter of whether the time to compress a file and store the smaller result is faster than just storing the uncompressed file.
          I can only assume…
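
          A minimal sketch of that tradeoff, assuming an illustrative raw NAND write bandwidth and using zlib as a stand-in for whatever the controller actually does in hardware:

          ```python
          # Is compress-then-write faster than writing the raw buffer?
          # FLASH_WRITE_MBPS is an assumed figure, not a real drive spec.
          import time
          import zlib

          FLASH_WRITE_MBPS = 150  # assumed raw NAND write bandwidth

          def write_time(num_bytes):
              return num_bytes / (FLASH_WRITE_MBPS * 1e6)

          buffer = b"fairly compressible log line, repeated over and over\n" * 40_000

          start = time.perf_counter()
          compressed = zlib.compress(buffer, 1)   # fast level, like an inline engine
          compress_time = time.perf_counter() - start

          raw_cost = write_time(len(buffer))
          compressed_cost = compress_time + write_time(len(compressed))

          print(f"raw write        : {raw_cost * 1000:6.2f} ms for {len(buffer):,} bytes")
          print(f"compress + write : {compressed_cost * 1000:6.2f} ms for {len(compressed):,} bytes")
          ```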

          • by Arker ( 91948 )

            The compression should be done at a higher level, however, and if things are set up properly it almost always is when it counts. So this sounds suspiciously like the inflated connection speeds I remember from the modem days.

            You would be connected at a much lower speed than the box said, the difference being the 'expected' gain from the built-in compression. On the rare occasion that you were a total idiot and sent large amounts of uncompressed data, expectations would be met. In other cases it was mea…

            • by kasperd ( 592156 )

              So this sounds suspiciously like the inflated connection speeds I remember from the modem days.

              This sort of inflated number is alive and well to this day, but in different areas. One area where I have seen it myself is tape drives. When LTO5 was being standardized, the manufacturers could not keep up with the planned improvements in storage capacity (the plan was that, from LTO3 onward, each generation would double capacity and increase throughput by 50%). And with LTO6 they were falling even further behind.

                They have somewhat compensated for that by improving the compression ratio from a factor of 2 to a factor of 2.5. I have no idea what that number is supposed to mean, though.

                I'd imagine it compares to the upgrade from DEFLATE (used in PKZIP and Gzip) to LZMA (used in 7-Zip). As I understand the claim being made, the original algorithm compressed a representative corpus of data to 50% of its original size, and the new algorithm to 40%.
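
                A rough way to see that kind of ratio difference, using Python's zlib (DEFLATE) and lzma modules on an arbitrary stand-in sample rather than the Calgary corpus, so the exact percentages will differ from the figures quoted above:

                ```python
                # Compress the same sample with DEFLATE (zlib) and LZMA and compare sizes.
                import lzma
                import zlib

                sample = b"The quick brown fox jumps over the lazy dog. " * 2000 + bytes(range(256)) * 100

                deflate_size = len(zlib.compress(sample, 9))
                lzma_size = len(lzma.compress(sample, preset=9))

                print(f"original: {len(sample)} bytes")
                print(f"DEFLATE : {deflate_size} bytes ({100 * deflate_size / len(sample):.1f}% of original)")
                print(f"LZMA    : {lzma_size} bytes ({100 * lzma_size / len(sample):.1f}% of original)")
                ```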

                • by kasperd ( 592156 )

                  As I understand the claim being made, the original algorithm compressed a representative corpus of data

                  There are several problems with that sort of benchmark. The smallest of those problems is the question of who decides what is a representative corpus. The larger problem is that if the developers know what corpus will be used to benchmark the algorithm, they may look at that corpus when deciding what the algorithm should work like. In that case it is easy to intentionally or accidentally get an unfair advantage.

                  • The smallest of those problems is the question about who decides what is a representative corpus.

                    Wikipedia's article about LTO claims that the algorithm is based on Hifn's LZS, and benchmarks are relative to the Calgary corpus.

                    An example of something which is clearly cheating would be to define the compression such that if the input is identical to the benchmark corpus, then the compressed output is simply a single zero bit.

                    This would require the compressor and decompressor to contain an exact copy of the benchmark corpus, which would likely result in copyright problems.

                    In certain scenarios you would get around that sort of "cheating" by measuring not just the size of the compressed data but also the size of the decompression code. That however requires somebody to specify on what platform the code will have to run.

                    I assume it would run on whatever platform the drive's microcontroller uses, and compression on an MCU might not favor use of a multimegabyte corpus. But I see your point that more transparency in this benchmarking would be good for…

                    • by kasperd ( 592156 )

                      This would require the compressor and decompressor to contain an exact copy of the benchmark corpus, which would likely result in copyright problems.

                      How much are you allowed to optimize a compression algorithm for a specific input before it would be considered a violation of copyright? If I send you a gzipped version of one of my DVDs, that compressed file would be considered a copyright violation, but the gunzip binary would not. You suggest that at some point optimizing the compression algorithm would make the…

      • This had totally escaped me; you are right. In the article on SSDs, Wikipedia states "SandForce controllers compress the data prior to sending it to the flash memory. This process may result in less writing and higher logical throughput, depending on the compressibility of the data."

      • Re:Why... (Score:4, Insightful)

        by jones_supa ( 887896 ) on Friday August 30, 2013 @09:40AM (#44715867)

        All SSDs use compression.

        Citation needed.

      • by 0123456 ( 636235 )

        I find it hilarious that a lot even use encryption by default and yet the controller decrypts and spits out the data.

        Sigh.

        The Intel SSDs encrypt data so you can 'secure wipe' them by just erasing the encryption key, not to make them secure against external attack.

        But yes, I guess that's hilarious to some people.
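
        A toy illustration of that crypto-erase idea, using the third-party cryptography package (pip install cryptography) as a stand-in for the drive's on-controller AES engine:

        ```python
        # If every sector is stored encrypted, "secure wipe" only has to destroy the
        # key, not overwrite the flash.
        from cryptography.fernet import Fernet, InvalidToken

        media_key = Fernet.generate_key()                  # generated once, kept inside the drive
        stored_sector = Fernet(media_key).encrypt(b"user data as written to NAND")

        print(Fernet(media_key).decrypt(stored_sector))    # normal read path: transparent decrypt

        media_key = Fernet.generate_key()                  # secure erase: mint a new key
        try:
            Fernet(media_key).decrypt(stored_sector)
        except InvalidToken:
            print("old data is unreadable once the original key is gone")
        ```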

    • Why would I want to use compression at all, if my goal is speed? If maximizing total capacity is not the concern, I would use none of the drive for compression. I think the point to be taken from this is that Intel is recognizing that storage capacities for SSDs are reaching the point where compression is no longer necessary to make the technology a viable alternative to mechanical drives, and we will now begin seeing the true speed potential of the technology.

      Precisely! I'd think the compression would be more needed on the slower media, like HDDs, where you want to transfer less data due to the slow speeds. Here, in SSDs, where speeds are orders of magnitude higher, compression would be less necessary. So if compression is more popular in SSDs than HDDs, it would have more to do with the fact that HDDs have higher capacities than SSDs, and hence the need for SSDs to compress what wouldn't be necessary for HDDs.

    • by qubezz ( 520511 )
      >>would I want to use compression at all, if my goal is speed?

      Compression speeds up SSDs, pretty much universally. The speed of reading and writing to the memory cells is limited, but a 200MB/s data transfer speed becomes 300MB/s after the data is compressed/decompressed on the fly. The current generation of Intel drives does use compression in just this way to speed performance (but not to increase apparent size). I cannot see the advantage to disabling any compression as it is currently used with t…
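
      The parent's 200 to 300 MB/s figure is just the raw NAND bandwidth multiplied by the compression ratio; a trivial sketch using those illustrative numbers:

      ```python
      # Host-visible throughput is roughly raw NAND bandwidth times compression ratio.
      # Both inputs are the parent's illustrative figures, not real specs.
      def effective_host_mbps(raw_nand_mbps, compression_ratio):
          return raw_nand_mbps * compression_ratio

      print(effective_host_mbps(200, 1.5))   # 300.0 for compressible data
      print(effective_host_mbps(200, 1.0))   # 200.0 for incompressible data
      ```
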
  • by Gothmolly ( 148874 ) on Friday August 30, 2013 @09:31AM (#44715787)

    It's the ancient tradeoff of CPU vs. IO. When you have more of one than you need, burn it to improve the other.

  • by FuzzNugget ( 2840687 ) on Friday August 30, 2013 @09:43AM (#44715889)
    "Overclocking" is technically a misnomer. It's a sort of tweaking, but it's a bit more than that; we could call it ... twerking!
    • When a CPU / GPU / memory runs at a _fixed_ frequency (due to a clock) the term over-clocking is slang and kinda makes sense.

      What other term would you use?

      Engines typically don't function when over revving.

      • Let's say the access time for the flash - which is at the heart of an SSD - is 50ns. If you supply it with data faster than that, it won't give you reliable results, so the extra speed just won't be there. You would need more wait states before you can read from it. And writes would be even worse, since those are in the microsecond range.
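
        A quick version of that arithmetic, with an assumed word width and die count; the point is that extra interface speed has to come from interleaving across dies, not from any single flash array getting faster:

        ```python
        # With a 50 ns access time a single flash array can only be asked for so many
        # words per second; the word width and die count below are assumptions.
        ACCESS_TIME_NS = 50
        WORD_BYTES = 4                                   # assumed bus word width
        DIES = 8                                         # assumed dies interleaved by the controller

        accesses_per_second = 1e9 / ACCESS_TIME_NS
        single_die_mbps = accesses_per_second * WORD_BYTES / 1e6
        print(f"one die           : {single_die_mbps:.0f} MB/s")
        print(f"{DIES} dies interleaved: {DIES * single_die_mbps:.0f} MB/s")
        ```
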
  • Why would I want to tweak "how much data is used for compression"? If the drive compresses data internally, why not just do compression for all data?

    And, all the consumer drives are bottlenecked by the SATA bus anyway.

  • by PopeRatzo ( 965947 ) on Friday August 30, 2013 @09:47AM (#44715937) Journal

    "...gimmick?"

  • tl;dr Allow users to adjust the compressed vs. uncompressed section sizes. Compressed goes faster, but rewrites a lot more and thus wears it out faster.

  • by Theovon ( 109752 ) on Friday August 30, 2013 @10:11AM (#44716149)

    They're using "overclocking" here as a metaphor, but people seem to take it literally. Overclocking the drive would involve raising voltage and increasing clock speeds. That's probably possible. But what they're talking about appears to be to give the user the ability to influence the amount of overprovisioning on the drive. For an SSD, the physical capacity is larger than the logical capacity. This is important in order to decrease the amount of sector migration needed when looking for a block to erase. From zero, adding overprovisioning will substantially increase write performance, but at a diminishing rate as you add more extra space.

    As for compression, it does two things. It allows more sectors to be consolidated into the same page, amplifying the very limited flash write bandwidth. And it effectively increases the amount of overprovisioning. These two mean that more compressible data will have substantially higher write performance and somewhat higher read performance. (Although reads are already fast enough, on many drives, to max out the SATA bandwidth.)

    Anyhow, giving the user the ability to tweak overprovisioning seems pretty worthless to me. At best, some users will be able to increase the logical capacity, at the expense of having lousy write performance. Maybe this would help for drives where you store large amounts of media that you write once and read a lot. But how much more capacity could you get? 25%? Another knob might be compression "effort", which trades off compute time against SSD bandwidth. There's going to be a balancing point between the two, and that probably should be dynamic in the controller, not tweaked by users who don't know the internal architecture of the drive. Some writes will take longer than others due to wear leveling, migration, and garbage collection, giving the drive controller more or less free time to spend on compressing data.
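
    A toy greedy garbage-collection simulator makes the diminishing-returns point concrete; every size and the uniform random workload below are arbitrary assumptions, not a model of any real controller:

    ```python
    # More spare area means the GC victim holds fewer still-valid pages, so less
    # copying per erase (lower write amplification), with diminishing returns.
    import random

    def write_amplification(extra_blocks, logical_blocks=32, pages_per_block=64,
                            host_writes=200_000, seed=1):
        rng = random.Random(seed)
        num_blocks = logical_blocks + extra_blocks
        logical_pages = logical_blocks * pages_per_block
        valid = [set() for _ in range(num_blocks)]  # valid logical pages per block
        used = [0] * num_blocks                     # programmed slots (valid + stale)
        where = {}                                  # logical page -> block holding its copy
        active = 0                                  # block currently accepting writes
        nand_writes = 0

        def new_active():
            nonlocal active, nand_writes
            empties = [b for b in range(num_blocks) if used[b] == 0]
            if empties:
                active = empties[0]
                return
            # Greedy GC: erase the block with the fewest valid pages, copying them forward.
            victim = min(range(num_blocks), key=lambda b: len(valid[b]))
            survivors = list(valid[victim])
            valid[victim].clear()
            used[victim] = 0
            active = victim
            for lpn in survivors:
                used[active] += 1
                valid[active].add(lpn)
                where[lpn] = active
                nand_writes += 1

        def host_write(lpn):
            nonlocal nand_writes
            if lpn in where:                        # invalidate the stale copy
                valid[where[lpn]].discard(lpn)
            if used[active] == pages_per_block:
                new_active()
            used[active] += 1
            valid[active].add(lpn)
            where[lpn] = active
            nand_writes += 1

        for lpn in range(logical_pages):            # pre-fill the drive completely
            host_write(lpn)
        nand_writes = 0                             # measure steady state only
        for _ in range(host_writes):
            host_write(rng.randrange(logical_pages))
        return nand_writes / host_writes

    for extra in (2, 4, 8, 16):                     # roughly 6%, 12%, 25%, 50% spare
        print(f"{extra:2d} spare blocks -> write amplification {write_amplification(extra):.2f}")
    ```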

    • Considering SandForce sells controllers to multiple vendors, isn't the only difference between them how they choose to provision the drives? I know there can be hardware differences, but let's say we have two drives with basically the same internals. Let's also suppose that Drive X is faster than Intel's equivalent, but Intel's is cheaper (not likely, but stay with me here). Now you may be able to tweak the Intel drive's settings and get it to match or closely match Drive X for cheaper. That could be a good use…

    • They're using "overclocking" here as a metaphor, but people seem to take it literally.

      Because it is a specific technical term that shouldn't be misappropriated for something completely unrelated. This foolishness is what happens when the marketing department steers the ship. Something Intel should have learned their lesson on with the MHz wars and the P4.

      Then again maybe I've just been ahead of the curve all these years when I "overclock" a new ext[2-4] partition with the minimum superuser reserved space. I've also taken a liking to "overclocking" my tarballs by switching from gzip to bzip2.

    • You seem to be using HDD terminology for SSDs, when the analogies simply don't hold good. The terms 'sectors' and 'blocks', which mean different things in HDDs, are almost synonymous in SSDs. Essentially, they mean the minimum erasable areas that one must erase before one can write to one or more locations within that area. There is no concept of 'sectors within the same page' or anything like it. If the flash device in question supports page-mode reading or programming, it simply defines the area that can…
      • wrong.

        sector: smallest addressable unit of space

        in hdd this is the same for reads and writes. for ssd it's the smallest addressable unit but....

        blocks: no such concept with hdd. traditionally there were cylinders, heads and sectors (addressing scheme) and some folks may have used block to refer to a sector, but normally in data storage a block is the smallest addressable unit in a file system, sometimes called a cluster.

        for ssd it's different: it can only write either ones or zeros, not both. by definition, a…

        • In SSDs - which ultimately boil down to NAND flash - there is no such thing as 'sectors' the way you defined them above. The smallest addressable unit there is a word, if one is talking about reads. If one is talking about programming, one can program a page, which is smaller than a block. Also, NAND flash has to be erased first (set to all '1's) before any area of it can be programmed. The only type of flash where one can write either '0's or '1's is the EEPROMs (or E-squared), but those things are o…
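
          A tiny model of the behaviour being argued about here, with toy page sizes: erase sets every bit to 1, and programming can only pull bits down to 0, so an in-place overwrite without an erase produces garbage:

          ```python
          # Erase sets every bit in a block to 1; programming can only clear bits to 0.
          def erase(block):
              return [0b1111_1111 for _ in block]          # all bits back to 1

          def program(page, data):
              return page & data                           # programming can only pull bits down

          block = erase([0] * 4)
          block[0] = program(block[0], 0b1010_1010)
          print(f"after first program  : {block[0]:08b}")

          block[0] = program(block[0], 0b0101_0101)        # naive in-place overwrite
          print(f"naive overwrite      : {block[0]:08b}  (0s could not be raised back to 1)")

          block = erase(block)                             # whole-block erase required first
          block[0] = program(block[0], 0b0101_0101)
          print(f"after erase + program: {block[0]:08b}")
          ```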

  • by BoRegardless ( 721219 ) on Friday August 30, 2013 @11:24AM (#44716911)

    After you deal with HD & SSD failures, you are only concerned with reliability.

  • by adisakp ( 705706 ) on Friday August 30, 2013 @12:50PM (#44717855) Journal
    It is not an overclock but the ability to adjust the "spare area". This is the percentage of flash on the drive that is not exposed to the user and is used for garbage collection, write acceleration (by having pre-erased blocks), reduction of write amplification, etc. You can already emulate more spare area on existing drives if you take an SSD and format it to less than its full capacity.

    This is the SSD equivalent to short stroking a hard drive [tomshardware.com].

    It's worth noting that the higher-performance and enterprise-level drives already have much more spare area, but that results in a tradeoff of capacity for performance. They are just going to let you set this slider between consumer level (maximum capacity per $$$) and performance level (higher performance but less capacity).
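
    A small helper along those lines: given an advertised capacity and a target spare percentage, work out how much of the drive to partition and how much to leave untouched (the 7% factory spare, 512-byte LBAs, and the 240 GB example are assumptions; the unused LBAs only act as spare area if they are never written or have been TRIMmed):

    ```python
    # Work out how much of the drive to partition to hit a target total spare percentage.
    LBA_BYTES = 512

    def lbas_to_expose(total_lbas, factory_spare_pct, target_spare_pct):
        physical = total_lbas * (1 + factory_spare_pct / 100)   # rough physical flash estimate
        usable = physical / (1 + target_spare_pct / 100)
        return int(min(usable, total_lbas))

    total = 240 * 1000**3 // LBA_BYTES                          # a nominal "240 GB" drive
    expose = lbas_to_expose(total, factory_spare_pct=7, target_spare_pct=28)
    print(f"partition {expose * LBA_BYTES / 1000**3:.1f} GB of {total * LBA_BYTES / 1000**3:.0f} GB "
          f"and leave the rest untouched")
    ```
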
    • by adisakp ( 705706 )
      FWIW, a larger spare area also increases reliability, since there are more free blocks to handle any memory blocks that go bad. A larger spare area also tends to reduce write amplification and redundant data writes during garbage collection -- both of which extend the overall lifetime of the entire drive.
  • It's just unlocking more of the safety margin for general use. Either way, an OC'd CPU might fall over and you lose an online game - a FUBAR'd overclocked SSD could mean bye-bye to all your data.