Intel Hardware

Intel Responds To X25-M Fragmentation Issue

Posted by Soulskill
from the problem-what-problem dept.
Vigile writes "In mid-February, news broke about a potential issue with Intel's X25-M mainstream solid state drives involving fragmentation and performance slow-downs. At that time, after having the news picked up by everyone from CNet to the Wall Street Journal, Intel stated that it had not seen any of these issues but was working with the source to replicate the problem and find a fix if at all possible. Today Intel has essentially admitted to the problem by releasing a new firmware for the X25-M line that not only fixes the flaws found in the drive initially, but also increases write performance across the board."
  • Good for them (Score:4, Insightful)

    by mc1138 (718275) on Monday April 13, 2009 @07:30PM (#27565123) Homepage
    I'd much rather have a company own up to an issue, fix it, and move on, rather than deny it or try to use PR to quiet it away.
    • Re:Good for them (Score:5, Insightful)

      by drinkypoo (153816) <martin.espinoza@gmail.com> on Monday April 13, 2009 @07:46PM (#27565251) Homepage Journal

      Makes an interesting contrast to intel's response to the FDIV bug, eh? Between this and the whole linux driver thing I'm almost inclined to suspect that intel has learned that you have to serve your customers.

      • Please gents, think logically here. This is something they fixed about three weeks after reproducing the problem. And it was something that could easily be fixed with a firmware update. It's not like the hardware was broken.

        So, good job Intel for fixing it, but patting them on the back for admitting a problem [on a tiny user base] that was easily fixed is a delusion.

        -bullseye

        • by drinkypoo (153816) <martin.espinoza@gmail.com> on Monday April 13, 2009 @08:00PM (#27565337) Homepage Journal

          So, good job Intel for fixing it, but patting them on the back for admitting a problem [on a tiny user base] that was easily fixed is a delusion.

          You're inferring things from my post that aren't implied. What I'm saying is that intel is perhaps no longer pure, concentrated evil -- not that I want to go start sucking dicks in the executive washroom.

        • How do you know that it was easy to fix? Did you fix it? Do you know someone who fixed it?
          Or are you perhaps talking out of the wrong orifice? ^^

          • Re:Good for them (Score:4, Insightful)

            by afidel (530433) on Monday April 13, 2009 @09:05PM (#27565681)
            The fact that it was fixed AND QA'd in 3 weeks proves it was easy to fix! If you've ever worked for any large company, you know three weeks is about the minimum for this kind of thing: you need a couple of days of meetings to discuss the problem and brainstorm causes, a few days to formulate actual solutions, a few days to test, a few days (minimum) for QA, and then a day or two to package it up, coordinate with the outside content-providing group, and hand something over to marketing.
            • I am betting they had the new version ready to go and held it back because they were afraid of regression. When it became apparent they had to go forward, they decided to take the risk. That's why it only took three weeks.
            • The fact that it was fixed AND QA'd in 3 weeks proves it was easy to fix!

              You're assuming it's really fixed. Or that it was fixed without introducing new, yet-to-be-discovered problems. Technological history is replete with examples of "quick fixes" that ultimately had ancillary negative impacts. What if Intel fixed the issue by altering the wear leveling algorithm...thus shortening the projected life of the device? We wouldn't know about it until much later, and by then it would be too late.

              Note I'm not saying this is the case. Far from it. I'm just saying that it comes do

              • by drinkypoo (153816)

                - it was a quick fix that will have no detrimental effects to the product, which somehow magically escaped Intel's legendarily effective quality control processes and made it into a flagship product.

                And I have to say again, you mean like a math bug in their spanking new processor? You're acting like there's no precedent for something like this happening inside of intel, when in fact the opposite is true.

                Here's a third possibility: intel knew the problem existed, but shipped the devices anyway because they figured that nobody would hit the problem before they managed to get a patch out the door -- possibly because they had a deliverable involving shipping a certain number of units, to satisfy some custo

      • Re:Good for them (Score:5, Insightful)

        by adamkennedy (121032) <adamk@c[ ].org ['pan' in gap]> on Monday April 13, 2009 @08:28PM (#27565495) Homepage

        There's a big difference between admitting to a bug that you can fix with a low/no-cost firmware upgrade, and admitting to a bug which requires a massive recall, and announcing to the market you'll be taking a multi-million dollar loss.

        • by SWPadnos (191329)

          There's a big difference between admitting to a bug that you can fix with a low/no-cost firmware upgrade, and admitting to a bug which requires a massive recall, and announcing to the market you'll be charging them more for '486 chips until you pay for the replacements.

          There, fixed that for you.

          • by nanospook (521118)
            Maybe they didn't want to publish this small fix so soon in case a major problem became apparent and they actually had to recall the drives physically... now I'm just being paranoid...
      • Re: (Score:3, Insightful)

        by hairyfeet (841228)

        Well to be fair to Intel, as you can see here [wikipedia.org], the odds of anybody hitting the bug (hell, the odds of Intel accidentally hitting the bug) were pretty much slim to none. Nobody (including Intel) would have probably ever found out it even HAD a bug if Thomas Nicely hadn't written a program to hunt for primes and run it on a Pentium. Let's face it: It was 1994. Most folks were running simple spreadsheets and simple games on Windows 3.11 at the time. The odds that they would have actually been doing enough floating

        • Re: (Score:2, Insightful)

          Let's face it: It was 1994. Most folks were running simple spreadsheets and simple games on Windows 3.11 at the time. The odds that they would have actually been doing enough floating point number crunching to actually hit the thing was about the same as hitting the lotto while being struck by lightning.

          While you're correct that "most folks" were not going to encounter the bug, the very people that needed the (then) high-end performance of a Pentium were the ones most likely to encounter it. I was rendering 3D animations on 3D Studio for DOS back then, and it was amazingly heavy on the FPU.

          • by hairyfeet (841228)

            That is why I thought their second recall idea was the way to go and should have been done first. For those that don't remember, they originally wanted folks to PROVE they needed floating point, but when folks had a shit fit (and rightly so) they simply said if you were willing to pack up your CPU and ship it to us we will send you a replacement. The simple fact is most folks didn't bother, and the ones that did were guys like you that NEEDED floating point.

            So in hindsight I would say the only problem Intel h

            • they simply said if you were willing to pack up your CPU and ship it to us we will send you a replacement.

              most folks caring about the FDIV bug enough to pull their CPU and wait for a return

              FYI, I exchanged my P60 when that bug hit. You imply that you had to give up your processor for a few days, but that wasn't the case. They took a credit card number and placed a hold for the price of the processor. They then shipped you the new processor, and you shipped back the old processor after swapping it out (and I b

              • by hairyfeet (841228)

                That is fine if you HAVE a cc; a lot of folks I knew at the time didn't. And even fewer of those that HAD a cc AND had a Pentium affected by the FDIV bug had the skills required to change out a CPU on their own. So for them it would have been: contact Intel, give Intel a cc number, wait on the CPU, get the CPU, have a local shop change out the CPU, and then try to get the original CPU back to Intel before they got charged. For your average person that was just more PITA than it was worth for a bug that frankly the

    • Re:Good for them (Score:5, Insightful)

      by elashish14 (1302231) <profcalc4 @ g m ail.com> on Monday April 13, 2009 @08:00PM (#27565331)
      Agreed. Owning up to your mistakes, whether you're a company or an individual, is a sign of dependability and reliability. I don't know about you, but for me that's a major factor when I purchase something.
      • by eebra82 (907996)

        Agreed. Owning up to your mistakes, whether you're a company or an individual, is a sign of dependability and reliability. I don't know about you, but for me that's a major factor when I purchase something.

        That would only work in a perfect world. It's like when IBM admitted to the disk-scratching problems it had a few years ago. Even though it admitted the problem fairly early, that didn't stop people from dropping the brand.

        In reality, if Intel admitted the problems, it would go from a rumor/forum discussion to public announcement with worldwide dirt on the company's drives. Furthermore, we don't really know how many drives are really affected by this problem. I have two X-25M disks myself and have not encoun

        • Do you have any evidence, besides this IBM situation, that people drop a brand on early reports of proactively resolved product problems?

          For me, if a company publicly says a particular product may have problems, but that they will support it as far as they can and double the warranty, I will be very likely to stay with them as a customer on other products. And I might still consider the problem product.

          Now if a product gets some widely reported negative publicity on problems that may occur on a small portion of
      • Agreed. Owning up to your mistakes, whether you're a company or an individual, is a sign of dependability and reliability. I don't know about you, but for me that's a major factor when I purchase something.

        But that's the point: Intel hasn't owned up to any mistake. It's issued a new firmware with the nebulous comment that it "increases performance." There's no mention of it fixing anything that was wrong. Intel remains publicly mute on anything being wrong with the prior firmware despite numerous benchmarks and tests showing otherwise.

        If this is "owning up to your mistakes" then I'm going to have to change the definition of the phrase.

      • I'm in a similar situation now. My parents own a Samsung DVD-R120 that plays but won't record DVDs. I thought they were doing something wrong and didn't get around to checking it until this weekend 3 years after they bought it. I discover on the internet many people have the same problem with the AXAA submodel, but almost no one reports the problem with the XAA submodel.

        I'm so frustrated that Samsung didn't proactively contact customers with this device, or at least post a notice to their support forums a
    • Y'know, they contacted the blogger directly, got the actual responsible engineers to listen directly to his concerns, duly investigated and promptly resolved the issue.

      Yeah, they're somewhat restrained in their public communications. They're not PR types, they're engineers. That they've been let out of their cave to communicate with an individual member of the community is a big win, especially since they fixed it with a firmware patch. Let's not expect them to host the press conference too. That's too

      • by edmudama (155475)

        I don't think it necessarily eliminates the value proposition...

        The X25-E claims a petabyte of lifetime random writes, plus it's quite a bit faster.

        There are applications for each out there, though you're right, the majority of users will be perfectly happy with the X25-M

    • Intel could be the first among vendor/OS developers to admit drive fragmentation COULD BE an issue in certain usage patterns. MS themselves kind of admit it too, but given the negative feedback about NTFS that resulted, I think they may slowly back down from suggesting it to users.

      As a guy in the video business, you can't believe how much we are blamed, called stupid, old-fashioned, or accused of not reading OS documents when someone sees we defrag drives in certain cases. Windows, OS X, Linux: it won't really matter. When one half

      • by phayes (202222)

        Intel could be the first among vendor/OS developers to admit drive fragmentation COULD BE an issue, in certain usage patterns.

        Not quite. AFAIK, Anandtech broke the story here [anandtech.com], and though he did say that Intel was the SSD vendor least affected by the fragmentation bug, he also details that OCZ had already made great progress in resolving its issues and becoming the SSD price/performance king.

  • In other news, Microsoft has responded to reports that Windows Vista is slow, buggy, insecure and horribly bloated by releasing DOS 3.2

    • C'mon guys, this was actually funny.
      • by joaommp (685612)

        Well I laughed. But it was actually off-topic :P

        And I'm probably getting modded redundant.

        • I authored that comment. I considered it on topic because the topic is the response of a huge corporation to reports of dysfunction in its product.

          The humor was in comparing Intel's quick, honest and effective response to a product dysfunction to Microsoft's failure over decades to respond to the well documented dysfunction in its OS products.

  • by GuldKalle (1065310) on Monday April 13, 2009 @07:40PM (#27565193)

    At first I read fragmentation as in "frag grenade".
    Guess I've been playing too many violent games. Oh, that reminds me - tax reports are due tomorrow, right?

  • by AllynM (600515) * on Monday April 13, 2009 @07:46PM (#27565249) Journal

    Guys,

    You're welcome :).

    Kidding aside, it was great to have a manufacturer as large as Intel work with us and have something good come from it.

    Allyn Malventano
    Storage Editor, PC Perspective

    • by AbRASiON (589899) *

      Get all over the OCZ please, we need as much info as possible.
      We're right at the cusp of SSDs becoming reasonably priced for enthusiasts now (not just ultra-rich enthusiasts), and I for one would like to know about the 120 and 240GB OCZ drives Anand has dabbled with.

      Also, future products might be nice too. I am almost positive OCZ will have learnt a lot in the past month; we'll see some seriously good products come out within 3 to 6 months in the SSD scene, I'm sure of it.

      • by AllynM (600515) *

        I touched on this at the end of this page:

        http://www.pcper.com/article.php?aid=691&type=expert&pid=8 [pcper.com]

        OCZ is getting there, but they are trying to keep up with the IOPS of Intel's 10 channel controller with their own 4 channel controller. Something has to give. In this case, it's that their Vertex fragments fairly quickly and won't come back on its own. It *requires* a TRIM utility to be run on it to restore full write speed.

        It's a tradeoff. With the new firmware, the X25 goes *slightly* slower with ra

        • by AbRASiON (589899) * on Tuesday April 14, 2009 @12:42AM (#27566833) Journal

          Based on your examination of the situation, along with Anandtech's, and the fact that both OCZ and Intel seem to be aggressively improving these products, it seems to me it might be silly to even consider the X25-M or the Vertex.

          Something tells me the SSD scene is moving so fast that within literally 6 months, one of these 2 companies (or a competitor taking note) will have a product superior in size, speed and price to those 2.

          It's a good time to have a little bit of patience I think.
          - Scott

          • Pace of progress (Score:3, Insightful)

            by DragonHawk (21256)

            "Something tells me the SSD scene is moving so fast that within literally 6 months one of these 2 companies (or a competitor taking note) will have a product superior in size, speed and price to those 2 very very soon."

            And this is different from the rest of the computer hardware world how? :) Everything is always getting bigger, faster, cheaper, smaller, whatever.

            One thing I've learned is that, in general, one should decide on a budget and make the purchase based on what's available today. Something better is *always* coming down the pike. :)

            • by n1ckml007 (683046)
              Compared to existing technology (magnetic platters), the $/GB is still quite high.
              • "Compared to existing technology (magnetic platters), the $/GB is still quite high."

                Um... so it is, but it seems to me that your statement is kind of a non sequitur [merriam-webster.com]. My post is all about the pace of change, the slope, how fast things change from "new" to "old". Not the current state.

                • by Sj0 (472011)

                  Computer technology is progressing FAR more slowly than it used to.

                  In 1991, you'd be using a 386 or 486 with 2-4MB of RAM. By 1999, you'd be using an Athlon with at least 512MB of RAM. That's a MASSIVE difference.

                  By contrast, in 2001, you'd be using a Pentium 4 at around 2GHz with about a gig of RAM. Today, you could be using the same machine. Sure, there are some cool technologies that have come out since then -- 64 bit processors are ubiquitous, and multi-core technology is insane, but we're not looking a

                  • by n1ckml007 (683046)
                    That's a good point. I have a P4 3.2GHz that is still used as my main desktop at home. The mobo is an Abit from 2004, and it runs XP SP3 nicely.
            • by AbRASiON (589899) *

              Unlike processors, memory, DVD drives, monitors, or the vast majority of parts in a computer, SSDs are just leaving the infant stage of introduction.
              This is why you see the Intel X25-M at $700 five months ago being $400 now.

              The whole industry moves fast, but over the next 2 years SSDs will be catching up to the more 'reasonable' pace of hardware today.
              Therefore it's probably prudent to wait only a small amount of time for huge increases.

            • by maxume (22995)

              Most people only 'want' flash drives at this point, and there are improvements being made that seem to be in addition to the normal doubling cycle (i.e., real world experience is still getting fed into the basic designs of SSDs). So while I agree with you that picking a budget and then buying is the correct approach, the SSD market looks like it will be much better in 12 months, out of line with predictions based on a quick look at the current market.

    • by moon3 (1530265)
      Great research and review.

      I read it right through. Thanks to you, the best SSD drive got even better.

      Anyway, I am still hesitating to put something that lasts only 10,000 erase cycles into my system...
      • by edmudama (155475)

        The X25-M datasheet guarantees 20GB/day for 5 years. How many DVDs do you torrent each day?
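        Taking that quoted spec at face value, the implied lifetime write budget is easy to work out (a back-of-the-envelope sketch; the only inputs are the figures from the comment above):

```python
# Lifetime host-write budget implied by a "20 GB/day for 5 years" endurance spec.
GB_PER_DAY = 20
YEARS = 5
DAYS_PER_YEAR = 365

total_gb = GB_PER_DAY * DAYS_PER_YEAR * YEARS
total_tb = total_gb / 1000

print(f"{total_gb} GB total, i.e. about {total_tb:.1f} TB of host writes")
# A single-layer DVD holds ~4.7 GB, so the spec allows roughly four
# DVDs' worth of writes per day, every day, for five years.
```

        In other words, the spec covers about 36.5 TB of writes, which is the point of the torrenting quip.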

    • by JakFrost (139885) on Tuesday April 14, 2009 @01:06AM (#27566899)

      The most interesting thing is the last section on the last page.

      PC Perspective: Intel Responds to Fragmentation with New X25-M Firmware - My Theory - It Can Write Faster [pcper.com]
      by Allyn Malventano 2009-04-13

      My own personal theory is that Intel got things *too right* with their custom controller. ...

      Despite using MLC flash memory, competitors have broken the 200 MB/sec sequential write speed barrier, and have done so with only 4 channel controllers. The X25-M talks to its flash across 10 parallel channels. If the X25-M was truly flash speed limited at 80 MB/sec, other MLC flash would have to be over 6x as fast to achieve stated speeds over the fewer channels available. ...

      My hunch is they expected MLC write speeds to remain relatively low across the marketplace, and like many other products in similar chains, imposed a hard limit of 80 MB/sec to their M series drives. ...

      If an M series drive could write as fast as an E series drive, there would be considerably less market for the latter. ...

      I just think it can go faster than 80 MB/sec.
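      The per-channel arithmetic behind the quoted "over 6x" figure checks out (a sketch using only the numbers in the quote, and the idealized assumption that writes split evenly across channels):

```python
# Idealized per-channel write throughput comparison from the quoted theory.
competitor_mb_s, competitor_channels = 200, 4   # 4-channel MLC competitor
intel_mb_s, intel_channels = 80, 10             # X25-M at its observed 80 MB/s

per_channel_competitor = competitor_mb_s / competitor_channels  # 50 MB/s per channel
per_channel_intel = intel_mb_s / intel_channels                 # 8 MB/s per channel

ratio = per_channel_competitor / per_channel_intel
print(f"Competitor flash would need to be {ratio:.2f}x faster per channel")
```

      If the X25-M really were flash-limited at 8 MB/s per channel, competing MLC flash would have to sustain 6.25x that rate per channel, which is the implausibility the theory rests on.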

      I think that Allyn is onto something: if you look at the write-speed graph for the X25-M (MLC), it seems utterly perfect at 80 MB/s, almost like there is an artificial cap on the speed, while the X25-E (SLC) produces a standard waveform, like Allyn pointed out, and not an artificial flat line.

      I too believe that Intel is artificially capping the performance of this drive. They might decide to uncap it sometime in the future, once competitors start snapping at their heels, or if enough time goes by and they introduce a new MLC-based performance/server-oriented SSD product line and remove the cap then. This is very similar to the situation with processor multiplier locks, which they remove in their performance-oriented Extreme processor lines.

      I frankly don't like this kind of behavior from Intel. They know they have the upper hand, so they are doling out just enough performance to beat the competitors and satisfy current customers, while holding back to create a market for their X25-E product line with slightly higher performance.

      I think the other shoe will drop sooner or later on the 80 MB/s cap.

      Research

      I've been doing research into Solid State Disks over the last few weeks, and this article is yet another one of those for Required Reading in the course of learning about SSDs. I've even written a detailed post with links to reviews and articles. You can read up on the linked articles to get a good primer on things.

      Solid State Disk Benchmarks [slashdot.org]

      • I agree that it's a bit shady if they are capping like that, but it's up to the competitors to challenge them. If nobody can, then Intel really has earned their advantage and should make as much money off it as the market allows.

  • by Daniel Phillips (238627) on Monday April 13, 2009 @08:06PM (#27565373)

    This was foreseen: Intel will ultimately be forced to redesign their flash write algorithms [kerneltrap.org]

    The point of this is, please please please if you are an engineering manager, when you make a collective booboo, no smoke screen please! It is unlikely to go unnoticed, and nothing positive will be achieved for you, your company, your potential customers or your tech audience. Instead, just come clean, admit the problem and get busy on the fix. Down that path lies increased trust, whereas the doublespeak path only erodes credibility. I certainly will be double checking any future claims, because of how this played out.

    Anyway, big props to the team for implementing what appears to be a superior solution. Hey, how about just open sourcing that firmware and let everybody help make it even better? Just a thought.

    • Hey, how about just open sourcing that firmware and let everybody help make it even better? Just a thought.

      Not going to happen.

      They're in a competitive market of bleeding-edge technologies such as SSD storage. They will want every advantage they can get. That will require both hardware and software optimizations to make their product stand out among the competition.

      I'm sure Intel will open up the open-sourced spigot much like ATI has with older products. Just don't count on it for cutting edge products anyt

    • Ummm, what you were discussing in that link has nothing to do with what this firmware is fixing. You were discussing performance decreases over time through ordinary use. This firmware fixes a bug that (so far) has only been replicated under certain benchmark conditions, and has not yet appeared under real-world conditions.

      Don't pat yourself on the back too hard.

      • Ummm, what you were discussing in that link has nothing to do with what this firmware is fixing. You were discussing performance decreases over time through ordinary use. This firmware fixes a bug that (so far) has only been able to be replicated under certain benchmark conditions, and has not yet appeared under real world conditions.

        I have no idea what you are talking about. The issue discussed in the post and the issue addressed by intel in the new firmware are the same by all appearances.

    • by pyite (140350)

      The problems which that link discusses are general problems, not Intel's. Even in the worst case, the Intel drive is still better than all the other MLC drives. Anand did a very thorough analysis here [anandtech.com] and it's probably one of the best mainstream pieces of technical writing I've ever seen.

      He basically justifies the whole existence of Anandtech with that one article.

  • Anandtech (Score:5, Interesting)

    by MSG (12810) on Monday April 13, 2009 @08:10PM (#27565389)

    On this subject: I finally got around to reading Anandtech's very long article [anandtech.com] about the current crop of SSD drives. I feel like it was pretty educational, which is good because it took a long time to digest.

    In its discussion of performance degradation as drives are used, the article explains that individual pages of NAND memory can't be rewritten. Early in a drive's life, pages are remapped when they are rewritten by the OS. As the drive is used, it runs out of pages to remap and is forced to copy a block (typically a 512KiB collection of 4KiB pages) to cache, erase the block, and then rewrite the block with the new pages. That explains pretty well why write performance degrades, since writing to a block that holds data requires a read and an erase operation in addition to the write.

    However, that explanation also leaves open the question of how the drive prevents data loss if it loses power. Worst case, the OS issues a write, the drive copies a 512KiB block to cache and erases the block, and then loses power. Due to remapping, literally anything could be in that half a MiB. The data loss could corrupt the file that was being modified, obviously, but also any other file on the drive, or parts of the filesystem itself.

    I figure there's got to be protection against data loss built-in, but I'm not able to find details regarding any individual drive or manufacturer's approach to solving that problem. Does anyone know more about this subject?
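    The read-modify-write cycle described above can be sketched as a toy model (sizes from the comment: 512 KiB erase blocks of 4 KiB pages; all names are hypothetical, and real controllers are far more involved):

```python
PAGE_SIZE = 4 * 1024       # 4 KiB NAND page
PAGES_PER_BLOCK = 128      # 128 * 4 KiB = one 512 KiB erase block

def rewrite_page(block, page_index, new_data):
    """Update one page inside a full erase block: read-modify-erase-write.

    Between step 3 and step 4, the only copy of the entire 512 KiB block
    lives in volatile cache -- that is the power-loss window the comment
    above is asking about.
    """
    cache = list(block)              # 1. read the whole block into cache
    cache[page_index] = new_data     # 2. modify the target page in cache
    block.clear()                    # 3. erase the block (all pages gone)
    block.extend(cache)              # 4. program the updated block back
    return block

# Toy usage: a "block" is a list of page payloads.
block = [bytes([i]) * PAGE_SIZE for i in range(PAGES_PER_BLOCK)]
rewrite_page(block, 7, b"\xff" * PAGE_SIZE)
```

    The model makes the amplification visible too: changing 4 KiB forced a 512 KiB read, an erase, and a 512 KiB program.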

    • Re: (Score:3, Informative)

      by Anonymous Coward

      They have a large capacitor in the drive. The DRAM is on the SSD and behind the capacitor. If the drive detects a power failure, the data in DRAM is written to the flash memory before the capacitor loses its charge. This is my understanding.

      Cheaper SSDs may not have the on-board DRAM chip. Research it before you put these in servers. Write-optimized MLC SSDs are better geared for logging, like Sun's ZFS intent log and database logs.

    • by jhantin (252660)

      I figure there's got to be protection against data loss built-in, but I'm not able to find details regarding any individual drive or manufacturer's approach to solving that problem. Does anyone know more about this subject?

      Write-ahead would be one simple technique. Keep at least a spare block around, and don't blow away the old block until you've copied what you need to keep.
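      That spare-block idea can be sketched as a toy model (all names hypothetical; this is copy-before-erase under a single atomic remap, not any vendor's actual firmware):

```python
def safe_rewrite(mapping, blocks, logical_block, page_index, new_data, spare):
    """Copy-before-erase: fully program a spare physical block first, then
    flip the logical->physical mapping in one atomic step, so the old data
    survives a power cut at any point in the sequence."""
    old_phys = mapping[logical_block]
    new_pages = list(blocks[old_phys])
    new_pages[page_index] = new_data
    blocks[spare] = new_pages          # 1. program the spare block completely
    mapping[logical_block] = spare     # 2. single atomic remap update
    blocks[old_phys] = None            # 3. erase the old block at leisure
    return old_phys                    # freed block becomes the next spare

# Toy usage: two physical blocks, one logical block, one spare.
blocks = {0: ["a", "b"], 1: None}
mapping = {0: 0}
new_spare = safe_rewrite(mapping, blocks, 0, 1, "B", spare=1)
```

      Power loss before step 2 leaves the old block intact and mapped; power loss after step 2 leaves the new block intact and mapped. Either way there is always one complete, addressable copy.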

    • However, that explanation also leaves open the question of how the drive prevents data loss if it loses power. Worst case, the OS issues a write and the drive copies a 512KiB block to cache and erases the block, and then loses power. Due to remapping, literally anything could be in that half a MiB. The data loss could corrupt the file that was being modified, obviously, but also any other file on the drive, or parts of the filesystem itself.

      Good question. I suppose a form of data journaling could be used. P

    • by Silverfish (33092)

      I'm too lazy to go back and reread the entire Anandtech article, but if I remember correctly, it speculates on the amount of memory on Intel's controller and specifically states that Intel doesn't use the controller memory the way you describe, for the exact reason you state. Or perhaps it was the other article they did about the hiccuping drive from... was it OCZ? Either way, I feel pretty confident power loss won't cause data loss (at least not at the fault of the SSD controller)

  • by Colonel Korn (1258968) on Monday April 13, 2009 @08:14PM (#27565401)

    We've all read this by now, right?

    http://www.anandtech.com/storage/showdoc.aspx?i=3531 [anandtech.com]

    The X25 has the same problem as all the other flash drives due to the need to erase in big chunks. Post-slowdown, the X25 is still faster than almost any other SSD that's brand new, and given the same usage, the X25 maintains the huge performance advantage it has from the start. I doubt Intel can really do much to improve this behavior without using TRIM.

    I assume their "fix" will be slight tweaking of write patterns, done mostly to fool the mainstream press, which had already been acting foolish by picking up this story without noticing the subtleties (such as the problem being present in all SSDs).

    • by afidel (530433)
      Perhaps the SLC cells in the X25-E are just faster than the controller can cope with, but after leaving one writing random data for 24 hours, I saw no degradation in speed. If my numbers are right, that was about 275 writes to every cell in the SSD, assuming a 2:1 flash:usable ratio (.170GB/s*3600s/hr*24 hours*.5). I've also not heard of this kind of severe wear penalty for other SLC based devices like the FusionI/O.
      • by AllynM (600515) *

        It's not how much you write to it, it's how you write to it. Write a bunch of small files to random locations and it will fragment, dropping subsequent write speed. There is a pic of the effect on the last page of my article:

        http://www.pcper.com/article.php?aid=691&type=expert&pid=10 [pcper.com]

        What you're thinking of is essentially the response time of the flash itself. Most drives appear to be engineered to assume the flash is at its end of life and keep their timings to that level. No drive I have tested

        • by afidel (530433)
          I was doing 4K random writes to the entire drive using IOMeter, that should be as punishing as anything can possibly be from a small file write perspective.
          • by AllynM (600515) *

            With 4k writes it was likely writing at a lower speed right off the bat and kept that lower value. Try it at higher queue depths and you will get increased parallelism within the drive. Then you should see higher initial speed that will fall off as the drive fragments.

    • Re: (Score:3, Interesting)

      by AllynM (600515) *

      The problem Intel fixed is not the same thing you're thinking of. Anand's methodology was flawed, in that he was writing the OS back to the drive in sector-by-sector mode, which is effectively a large sequential write. This acts to heal drives that write-combine and is not in line with how the OS would have gotten there in reality. The subsequent writes he did accomplished nothing more than seeing how far that particular drive could fill the 'holes' in the partition (i.e. how fast it can perform small rand

      • by xianthax (963773)
        The term "sequential writes" loses meaning in regard to SSDs: a wear-leveling algorithm, by definition, is going to move those blocks around such that they are no longer sequential in nature anyway, assuming the disk is used in a random fashion, i.e. you do more than just non-stop sequential writes over its lifetime.
        • by AllynM (600515) *

          For write-combining SSDs, remapping occurs *within* blocks. You don't necessarily need to write huge sequential files to an X25 to correct this; writes only have to match the block size.

    • Re: (Score:3, Informative)

      by LordKronos (470910)

      As AllynM mentioned, this fix addresses a different problem. If you read in that anandtech article, you will see this:

      Intel's X25-M: Not So Adaptive Performance?

      The Intel drive is in a constant quest to return to peak performance, that's what its controller is designed to do. The drive is constantly cleaning as it goes along to ensure its performance is as high as possible, for as long as possible. A recent PC Perspective investigation unearthed a scenario where the X25-M is unable to recover and is stuck a

  • I thought these SSDs were designed for laptop computers. I read the installation manual, and it didn't give any instructions for what to do if you don't have a CD burner, or if you don't have an optical drive in the computer with the SSD. Or does this update work in UNetbootin [wikipedia.org]?
    • by tlhIngan (30335)

      I thought these SSDs were designed for laptop computers. I read the installation manual, and it didn't give any instructions for what to do if you don't have a CD burner, or if you don't have an optical drive in the computer with the SSD. Or does this update work in UNetbootin?

      Actually, Intel's X25 is designed for desktop/server/enterprise use, not laptop use. It's just that the form factor is the same as a 2.5" laptop SATA drive. Mostly because you can fit an affordable amount of flash and the controller i

      • Re: (Score:3, Informative)

        by edmudama (155475)

        Actually, the X25-M is for "mainstream" usage, including laptops and desktops. The X25-E is for extreme workloads, including some server usages.

        The X25-M is available both in 1.8" and 2.5" SATA form factors, which are the two most common laptop interfaces today.

        PCIe is a bit more limited in a laptop typically, and if you go that route (as a laptop manufacturer) you're generally locking yourself into a single device vendor, since you'll need custom drivers for whichever PCIe board you choose. SATA, on

      • If you're designing a laptop with an SSD, you won't go the SATA route. You'd use a spare mini-PCIe slot and use a PCIe SSD (a la the Eee and others). Intel makes a board for mini-PCIe. Saves yourself the cost of all the SATA overhead (connector, power, etc)

        Unless you want to have one motherboard for both hard disk and SSD versions of a product. Or do hard disks come in mini-PCIe now? I thought that went out with hardcards back in the mid-1980s [wikipedia.org].

  • by robvangelder (472838) on Tuesday April 14, 2009 @12:28AM (#27566759)

    I once dined at a restaurant that took my order, but minutes later realised they couldn't make it due to a stock shortage. I got a different meal, and they told me mine was free!

    The way a company recovers from a problem can actually turn into a net positive experience for the customer.

    In my case, I'm turned from an unsatisfied customer, to an advocate. For sure, I've recommended friends dine there since then.

    Every interaction is an opportunity to delight the customer, even those events that at first feel like a disaster unfolding.

  • There is an easy fix that Intel may be able to implement in their flash file system. They just have to look out for free-space wipes.

    If a block is written with a single repeated byte (e.g. all zeros), they could then free the flash sector and mark the sector as a 'monobyte' in the data structure. The advantage is that it wouldn't take any extra space at write time, so the FFS wouldn't get into the state where it's got no space to defragment blocks, and so normal ATA commands will be able to get it out of its stuck sl
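    The detection half of that idea is trivial; here is a minimal sketch, with invented names, of what a controller's classification step might look like:

```python
# Sketch of the commenter's "monobyte" idea: a sector written with one
# repeated byte needs no flash at all, only a note in the mapping table.
# SECTOR_SIZE and the return convention are assumptions for illustration.
SECTOR_SIZE = 512

def classify_write(data: bytes):
    """Return ('monobyte', value) if the sector is one repeated byte, else None."""
    assert len(data) == SECTOR_SIZE
    first = data[0]
    if all(b == first for b in data):
        return ("monobyte", first)  # free the flash sector, keep only the byte
    return None

# A zeroed sector (e.g. from a free-space wipe) is the common case:
print(classify_write(bytes(SECTOR_SIZE)))       # ('monobyte', 0)
print(classify_write(b"\x00" * 511 + b"\x01"))  # None
```

    A real controller would do this comparison in hardware, but the effect is the same: free-space wipes stop consuming erase blocks.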

  • Engineering Culture worldwide has been completely usurped by the marketing and quick-buck executive paradigm. I've been working in engineering for a decade and the notion that a product should work properly before it is released has been thrown out the window and splattered on the street.

    "Get it out the door, and worry about issues later," is the mantra. Final Product Release has become the new Beta Test phase. One look at GMail and you'll see what I am talking about. GMail has been in "Beta" for what, 5 ye

    • You're mistaken that it's really the fault of "quick-buck executives". It's the market, and the people in it.

      My wife often complains to me when some bit of software is manifesting a bug. She asks "why can't they make software that just works?!"

      The answer is, "they can, but you wouldn't buy it". Bug free software is quite expensive. The programmers that write in bug free environments are typically 4-8X less productive, on a line count basis, than programmers who work in non bug free environments.

      Unless such

  • Fragmentation problems?

    It's like playing "52 Card Pick Up", except instead of using 52 cards, you're using 80 BILLION bits!

  • I know the X25-M didn't work with bootcamp before, does anybody know if the firmware update also addresses this issue?

    • Couldn't be bothered to read the article? Check out page 7. It covers it in detail, but
      1) no it doesn't fix anything
      2) not all drives had problems in Bootcamp to begin with

      • by zurmikopa (460568)

        Nope, only had a minute or two free at the time and forgot about it later. (The article is kind of long)

        I was hoping some kind soul like yourself would have read the article and possibly others, see my question and answer it.

        Thanks :)

