Data Storage Hardware

Intel 34nm SSDs Lower Prices, Raise Performance

Vigile writes "When Intel's consumer solid state drives were first introduced late in 2008, they impressed reviewers with their performance and reliability. Intel gained a lot of community respect by addressing some performance degradation issues found at PC Perspective, quickly releasing an updated firmware that solved those problems and then some. Now Intel has its second generation of X25-M drives available, designated by a "G2" in the model name. The SSDs are technically very similar, though they use 34nm flash rather than the 50nm flash in the originals and have lower latencies. What is really going to set these new drives apart, though, both from the previous Intel offerings and from their competition, are the much lower prices allowed by the increased memory density. PC Perspective has posted a full review and breakdown of the new product line that should be available next week."
  • by CajunArson ( 465943 ) on Thursday July 23, 2009 @12:46PM (#28796927) Journal

    Fortunately I got it for about $300, so I only "lost" $100 with the new ones coming out. That having been said, I don't regret the purchase at all; it is insanely faster than any other laptop drive out there, while being completely silent and power-friendly. As for TRIM support, I've heard that Intel is not going to add it for the older drives, but I'm not sure if that is just speculation or if it's been officially confirmed by Intel (Intel not expressly saying the old drives will get TRIM support is not the same as expressly denying it). Fortunately, the drives with the newer firmware don't seem to suffer from much performance degradation, so I'm not really obsessed with TRIM anyway.

    Oh and yes, it does run Linux (Arch 64-bit to be precise) just fine.

    I can't wait for next year with the ONFI 2.1 flash chips (the new drives are not using the new ONFI standard yet) as well as 6Gb/s SATA support. At that point I'll put together a new desktop that only uses SSDs, and turn my existing desktop into a 4TB RAID 1+0 file server to handle all the big files... the perfect balance of SATA & spinning media.

    • by thms ( 1339227 )

      Fortunately, the drives with the newer [non-TRIM] firmware don't seem to suffer from much performance degradation, so I'm not really obsessed with TRIM anyway.

      I wonder how they managed that without the TRIM command, i.e. without the OS telling the drive which parts can be nulled because they are not needed anymore (a sketch of what that hint looks like follows this comment). Did they hide more pages from the OS, which are then nulled regardless, to hack together something like a buffer? But that would still show terrible write performance once that overflows. Or did they implement deep data inspection for the most common filesystems, so the drive now knows when something is deleted?

      At that point I'll put together a new desktop that only uses SSDs, and turn my existing desktop into a 4TB RAID 1+0 file server to handle all the big files... the perfect balance of SATA & spinning media.

      I'm planning the same thing once the prices are righ
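      A minimal sketch of the hint being discussed above, assuming a Linux kernel new enough to expose the FITRIM ioctl (support for which post-dates these drives); the mount point and the whole-filesystem range are placeholders. This is roughly how userspace asks the filesystem to pass "these blocks are free" down to the SSD, and it is essentially what the fstrim utility does:

      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <linux/fs.h>   /* FITRIM, struct fstrim_range */

      int main(void)
      {
          /* Ask the filesystem mounted at "/" to pass discard (TRIM) hints
           * for all of its free space down to the SSD.  Without a hint like
           * this, the controller only learns a block is dead when something
           * overwrites it. */
          struct fstrim_range range = { .start = 0, .len = ~0ULL, .minlen = 0 };
          int fd = open("/", O_RDONLY);
          if (fd < 0) { perror("open"); return 1; }

          if (ioctl(fd, FITRIM, &range) < 0)   /* needs root and FS support */
              perror("FITRIM");
          else
              printf("trimmed %llu bytes\n", (unsigned long long)range.len);

          close(fd);
          return 0;
      }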

    • Re: (Score:2, Troll)

      by obarthelemy ( 160321 )

      It's always fun to read bleedin' edgers rationalize how they didn't pay over the top for immature first tries that soon got obsoleted.

      So, yes, you only overpaid $100 for a drive which Intel hasn't yet come out and said will never get TRIM, and is 25%+ slower than the new one. Congrats.

      I've got some oil here that will do wonders for your hair! It is expensive, too.

      • by LordKronos ( 470910 ) on Thursday July 23, 2009 @03:50PM (#28799289)

        I'm not the person you were replying to, but I too bought an X25-M 80GB back in April (though I only paid $300, so I only overpaid by $75). That said:
        1) I've enjoyed the increased performance over the last 4 months. I've done a lot of work that benefited from it, so I feel I've gotten back at least a good portion of that $75 in the form of increased productivity (I use this computer for my business).
        2) I've had no performance complaints from the new drive. Compared to my old drive, there are nearly zero times that I'm waiting on disk I/O anymore, so even if it is a little slower (and look at the charts in the article... it's not 25% slower), I'm not really noticing where it could be improved.
        3) Obsolete? I do not think that word means what you think it means. My G1 drive is neither "No longer in use" nor "Outmoded in design, style, or construction". It has been surpassed (very slightly) by a newer model, but if that translates to obsolete, then I guess anyone who isn't paying $1000 for a Core i7-975 CPU is also buying obsolete hardware. And of course, anyone who does buy a Core i7-975 for $1000 will promptly be mocked by you when the price drops to $900 or a new model 1/3 GHz faster comes out or something.

        • To me, Trim-less, and at least 25% slower is obsolete. That would be "design".

          I'm happy for you if you think you got your money's worth. After much reading, I finally decided not to get one for the new PC I just ordered.

          • To me, Trim-less, and at least 25% slower is obsolete.

            Again, it is not 25% slower. Most of the tests show 10% at most. Then again, if you are going to compare it to any other drive (you know, other than the drive that was announced only 2 days ago and can't actually be bought from any retailer yet), even the old "slow" model was leaps and bounds above any traditional hard drive on the market for the majority of tasks performed by most users.

        • by Twinbee ( 767046 )

          People like you who are hot on the heels of new technology - we owe you a "thanks". Otherwise, new tech would never get off the ground. (Same with the Sony OLED TV: super expensive, but I'm grateful to all the people who can afford to buy and, in some sense, 'test' it.)

      • Re: (Score:3, Interesting)

        by CajunArson ( 465943 )

        So I'm assuming you are typing your comment in from somebody else's computer, because following your impeccable logic nobody should ever buy any piece of computer technology, since something else is going to come along and make it obsolete. I can also say that if you were not a hypocrite, you'd wake up every single day and loudly thank everyone who does buy technology, because if nobody went out and paid for computers, they would not exist for you to act like a smarmy bitch on.
        I assure y

  • Good move (Score:4, Funny)

    by Junior J. Junior III ( 192702 ) on Thursday July 23, 2009 @12:47PM (#28796945) Homepage

    Getting the prices lower is definitely a move in the right direction. I'm looking forward to moving to SSD in the near future, and not having to worry about hard drive crashes anymore.

    • by Itninja ( 937614 )

      ...not having to worry about hard drive crashes anymore

      God, I hope you are never in the IS department at my company. Or any company for that matter.

      • You mean it may be naive to expect zero failures with the new drives?

        I wouldn't be surprised if the failure profile between moving-parts devices and solid state devices were radically different.

        • by BobMcD ( 601576 )

          Perhaps, but that wouldn't support the current notion of forced obsolescence.

          My suspicion is that they are of an equivalent quality level, and nothing greater than that.

  • by MagicMerlin ( 576324 ) on Thursday July 23, 2009 @12:48PM (#28796973)
    While hard drives will continue to live on for a good while yet where $/GB considerations are paramount (especially archival type applications), the performance advantages of flash drives will soon trump the decreasing cost advantage both for workstation (x25-m) and server (x25-e) environments. The case for flash in servers is even more compelling, where we measure drives in terms of IOPS and a single Intel flash drive performs 10 or 20 times better than the best hard drives on the market for a fraction of the power consumption. Understandably, many IT managers are cautious about adopting new technologies, especially when the failure characteristics are not completely known, but I suspect the advantages are so great that minds are going to start changing, quickly.
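    To put rough numbers on that IOPS gap: the figures below are ballpark spec-sheet values assumed for illustration (on the order of 180 random 4KB IOPS for a 15k RPM disk, and roughly 35,000 read / 3,300 write IOPS for an X25-E), not measured results.

    #include <stdio.h>

    int main(void)
    {
        /* Ballpark 4 KB random I/O figures, assumed for illustration. */
        const double hdd_iops       = 180.0;    /* 15k RPM disk */
        const double ssd_read_iops  = 35000.0;  /* X25-E, reads */
        const double ssd_write_iops = 3300.0;   /* X25-E, writes */

        printf("15k disks to match one X25-E on reads:  %.0f\n",
               ssd_read_iops / hdd_iops);
        printf("15k disks to match one X25-E on writes: %.0f\n",
               ssd_write_iops / hdd_iops);
        return 0;
    }

    Even the write-side ratio lands in the "10 or 20 times" range claimed above; the read side is far more lopsided.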
    • Comment removed based on user account deletion
      • by afidel ( 530433 )
        I bet one drive could saturate the PERC 6i; I know it can saturate the HP P400 no problem. In fact I got about 2x the 4k random write IOPS when I used it in a workstation with the Intel ICH as I did when it was connected to the P400.
      • by Fweeky ( 41046 )

        I've tried an X25-M on a few servers with LSI SAS controllers (as used by PERC 6i, though I don't think I've used that exact chip) and been disappointed to encounter IO hangs and other drives disappearing randomly; even just having an X25-M plugged in is enough to seemingly make the controller rather unhappy. Doesn't appear to be a driver problem, unless it's one shared by FreeBSD, Linux and Solaris.

        Hopefully Intel will do an SAS version at some point; they could compete against 15kRPM drives rather well, I

    • While SSDs may be the new kids on the block and show signs of superiority, hard drives retain a bit of an advantage over their non-moving, solid state counterparts. Hard drives can take more write overs than SSDs. Flushing the cache to the actual media is still faster on HDD than SSD. SSDs are still very susceptible to static discharge versus HDD due to more surface area having sensitive parts.

      I do agree with the parent. SSD are a big thing and they have some important advantages. However, let's not go p
      • SSDs are still very susceptible to static discharge versus HDD due to more surface area having sensitive parts.

        Well actually, my X25-M drive has no circuitry exposed other than the SATA and power connectors. Everything else is completely enclosed, so unless the case is likely to transmit enough of the charge to the circuitry (I have no idea whether or not it would), SSDs should be LESS susceptible to that problem.

        And while you are examining the downsides of SSDs, it's also fair to say that data recovery fr

    • by afidel ( 530433 )
      The x-25e is great, and I use it in a few situations, but at 8x the cost per GB of 15k FC I'm not moving to it wholesale. It's true that for $10K I could get as many IOPS as my $200K EVA, but it would only have the storage of a single drive in the array.
      • The x-25e is great, and I use it in a few situations, but at 8x the cost per GB of 15k FC I'm not moving to it wholesale. It's true that for $10K I could get as many IOPS as my $200K EVA, but it would only have the storage of a single drive in the array.

        ...for 5% of the price, and trivially built without proprietary protocols, hardware, or software support. Let's compare apples to apples and spend $200k on some SAS/SATA enclosures, good RAID cards, and Intel X25-Es, and see who is kicking whose ass. many,

        • by afidel ( 530433 )
          Watt/IOP they crush HDDs; Watt/GB the opposite is true. I use my SAN for a heck of a lot more than just databases, so I need a much more balanced approach. I have VMs, email, bulk file storage, content management, and various drives from application servers all mounted in the same array. If you're big enough to have arrays dedicated to just databases, then for sure SSDs are the future for that niche, but I doubt that's more than 25% of the SAN market.
          • Watt/GB the opposite is true.

            Actually, not true. The 80GB X25-M uses 0.15 watts at load. That's 0.001875 watt/GB. Scaling up to 2TB, you are talking about 3.75 watts total under load. At idle, the X25-M is 0.06 watts. That's 0.00075 watt/GB, or 1.5 watts for 2TB. I don't know if any magnetic hard drive can match that, much less a 2TB model.

            Then again, it's a silly comparison at the moment, since your electric cost per kwh would have to be insane before you'd recover the price difference of the drive itself
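            The same arithmetic, with an assumed ~8 W for a typical 2TB 7200 RPM drive thrown in for comparison (that wattage is a placeholder, not a measured figure):

            #include <stdio.h>

            int main(void)
            {
                /* X25-M figures quoted above (80 GB: 0.15 W active, 0.06 W idle)
                 * plus an assumed ~8 W for a 2 TB 7200 RPM drive. */
                const double ssd_gb = 80.0,   ssd_active_w = 0.15, ssd_idle_w = 0.06;
                const double hdd_gb = 2000.0, hdd_w = 8.0;

                printf("SSD active: %.6f W/GB (%.2f W scaled to 2 TB)\n",
                       ssd_active_w / ssd_gb, ssd_active_w / ssd_gb * hdd_gb);
                printf("SSD idle:   %.6f W/GB (%.2f W scaled to 2 TB)\n",
                       ssd_idle_w / ssd_gb, ssd_idle_w / ssd_gb * hdd_gb);
                printf("HDD:        %.6f W/GB (%.2f W for 2 TB)\n",
                       hdd_w / hdd_gb, hdd_w);
                return 0;
            }

            At these (assumed) numbers the flash still comes out ahead per GB; the real sticking point, as noted above, is the cost per GB of getting to 2TB of flash in the first place.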

      • I'm interested in seeing hybrid drives, say, in a laptop form factor, with 50% of the storage on flash media and 50% on platters. If the system is smart and can move write-once-read-often data to the flash partition, and can keep oft-changing files on the platters, that'd be awesome.

        Put my OS on SSD for super-fast booting. Put my photo library for fast browsing, but if I start editing a picture, put the edit data on the platters until I'm done. I'm sure some of the decision-making could be done by the
    • Because everyone knows how Ferraris have made trucks redundant so quickly!

  • Having gotten 2 out of 3, does Intel make a trifecta here, or is there some lurking downside (e.g. limited write cycles etc.)?
  • AnandTech writeup (Score:5, Informative)

    by tab_b ( 1279858 ) on Thursday July 23, 2009 @01:02PM (#28797137)
    AnandTech [anandtech.com] has a nice writeup too. If the price curve drops like the first-gen X-25M [diskcompare.com] we should all be happy pretty soon.
    • by Spoke ( 6112 )

      I suspect we'll see the 2nd gen X-25M launch at prices similar to the current X-25M, and then drop down over the next couple of months to the $225/80GB that you can get them for in 1,000-unit quantities.

      The competition for these Intel drives is at least 2-3x behind in random IOPS. Too bad the streaming write performance didn't go up significantly, because that's the only place where the Intel drives lag behind their competition.

        I suspect we'll see the 2nd gen X-25M launch at prices similar to the current X-25M, and then drop down over the next couple of months to the $225/80GB that you can get them for in 1,000-unit quantities.

        Although they aren't yet in stock, zipzoomfly is already listing the price at $223.25 (though you can't preorder).

        Too bad the streaming write performance didn't go up significantly, because that's the only place where the Intel drives lag behind their competition.

        Actually, for the G1 versions, the enterprise versio

        • by Spoke ( 6112 )

          Although they aren't yet in stock, zipzoomfly is already listing the price at $223.25 (though you can't preorder).

          Nice! If they actually end up selling at that price at launch, I will be impressed.

          I have a Vertex and while the performance has been great, it doesn't seem to be very mature compared to regular disks.

          For example, I've personally had these problems with it:

          1. Firmware flash tool doesn't work on all computers. Have to remove it and move it to another computer to flash it.
          2. Have seen the drive

  • by ironwill96 ( 736883 ) on Thursday July 23, 2009 @01:18PM (#28797337) Homepage Journal

    ..and it is fantastic. This was the largest performance increase I've seen on computers in over a decade. I was going to go with a VelociRaptor because I knew how important drive access latency was, but then Intel patched the fragmentation issue that was worrying me.

    I got mounting rails to fit the drive into my desktop case, so I'm using it as my primary desktop drive for the OS, some applications (Adobe Design Premium Suite runs great on it! Photoshop CS4 loads in 3-4 seconds!), and my main games. I then have a 1.5 TB secondary drive to store my data and music collection etc. I paid around $430 for my 80GB Intel X25-M, so being able to get the 160GB for that same price is a fantastic improvement. I will definitely only be going SSD in my machines from now on. Everything loads faster, and I get consistently fast boot times even after months of usage.

    It is amazing to see Windows XP load up and then all of the system tray apps pop up in a few seconds. You can immediately start loading things like e-mail and Firefox as soon as the desktop appears, and there is no discernible lag on first load like you get with spinning SATA drives while they are still trying to load system tray applications.

  • reliability? (Score:4, Insightful)

    by Goffee71 ( 628501 ) on Thursday July 23, 2009 @01:26PM (#28797437) Homepage
    How can reviewers be impressed by reliability when they've only had the units for, at most, a year? When these things hit the five-year mark running perfectly well with no data loss in the home/work environment, then I'll be interested.

    Ok, they may have been stress tested in factories by the manufacturers, but reviewers don't do that sort of work.
    • If you can get a regular hard drive to the five year mark running perfectly well with no data loss, you can consider yourself moderately lucky. Rotating media is what RAID was invented for.

      All you'd need to do to demonstrate to me the greater reliability of an SSD is drop it and a regular hard drive onto the table a couple of times while they're running and see which one keeps running. That would be enough to get me impressed by increased reliability. Regular hard drives are delicate beasts.

      • If you can get a regular hard drive to the five year mark running perfectly well with no data loss, you can consider yourself moderately lucky.

        There's nothing lucky about it. Unless you are straining the drive constantly or don't have adequate ventilation in your box, an HDD lasting 5 years if not longer has been a pretty mundane thing for quite some time.

        • Having recently come from a job supervising two rows of racks of servers, the hard disk failure rate seemed to match well with a 3 year expected lifetime.
          • Well, drives in servers are also put through far more strain than a home desktop, so you would expect them to fail earlier than the 5-year mark expected of a consumer drive in a home PC.

      • by dfghjk ( 711126 )

        "Rotating media is what RAID was invented for."

        Poor grammar aside, you need an education on what RAID was really developed to address.

    • by sshir ( 623215 )
      The troubling aspect of it all is that an SSD's controller is a kind of black box.
      As a result, reliability is application specific! Much more so than with regular spinning drives.
      And I'm not talking about the "flash cell rewrite limit". The thing is, the controller uses undisclosed/patented/whatever algorithms to place your writes at particular addresses on the flash. They need to be tricky because of the 4k-write/512k-erase problem of flash technology.
      So if you do a "right" combination of small and large writes you
      • Oh, and you know all the algorithms that are on your hard drive controller off the top of your head, do you? Or those on your motherboard? Or your OS? Or the applications you run on them? Especially with Intel, I do trust the marketplace to have some influence in them testing their drives really well before supplying them to customers. If these drives start failing in large numbers they'll have serious problems.
      • Me too. That's why I personally audited the command queueing, bad sector replacement, error checking, and head positioning algorithms in my mechanical disk.
    • Most of my HDDs (Maxtor, WD, Seagate) over the past ten years have not lasted more than 2 or 3 years... My last system drive (WD 320GB) died after ~6 months; I just finished the RMA a few weeks ago.

    • Re:reliability? (Score:5, Informative)

      by AllynM ( 600515 ) * on Friday July 24, 2009 @05:20AM (#28804869) Journal

      My personal X25-M (the one that started all of my reviews and Intel's subsequent patching of the fragmentation slowdown issue with the X25-M series) has had over 10 TB of data written to it. Most of that was sequential writes spanning the entire drive (HDTach RW passes). SMART attribute 5 on that drive is currently sitting at a whopping "2". That equates to only 8 bad flash blocks. It's actually been sitting at 2 for a while now, so those blocks were likely early defects.

      I suspect it will take most users *way* over a year to write greater than 10 TB to their 80 GB OS drive. Considering mine is still going strong after that much data written, I don't think there's anything to worry about.

      Allyn Malventano
      Storage Editor, PC Perspective

  • Would you run DeFrag on an SSD like you do on an HD? After all, sequential reads are still sequential reads.
    • Except that SSDs randomly relocate data, so doing a software defragment doesn't make files any more contiguous.

    • AFAICT, sequential vs. random loses its meaning with SSDs. The access time to any arbitrary block is equal, regardless of whether it's right next to the current one or on a different chip on the other end of the board.

      • Not only that, but there's no way a standard defrag program would be able to tell where data is physically located with an SSD. Block addresses are mapped by the controller to actual locations because wear-leveling needs to be able to move data behind the scenes. This is transparent to the OS; the disk will still report back the same data for a given logical block address, but said data can be physically located anywhere.
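        A toy picture of that indirection, with invented table contents: the OS only ever sees the left-hand column, so a defragmenter that makes logical addresses contiguous says nothing about where the data physically ends up.

        #include <stdio.h>

        int main(void)
        {
            /* Toy flash translation layer: logical block address (what the
             * OS sees) -> physical flash page (what the controller chose).
             * Real controllers keep this map private and reshuffle it for
             * wear leveling; the values here are invented. */
            const int phys_of_logical[8] = { 37, 4, 512, 9, 100, 6, 250, 11 };

            for (int lba = 0; lba < 8; lba++)
                printf("logical %d -> physical %d\n", lba, phys_of_logical[lba]);
            return 0;
        }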
    • Re: (Score:2, Informative)

      by sshir ( 623215 )
      Actually, surprisingly, you do need to run a kind of defragmentation.
      Just not the usual one.
      That's because flash is written in pages (4k?) but can only be erased in blocks of 512k. So what happens is that the controller has to do an insane job of juggling your writes and rewrites to spread or combine them or whatever... on the fly...
      As a result, after intensive use, the address space becomes fragmented, just like the memory heap in regular software after lots of allocations/deletions.
      Currently, the only way
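      A quick sense of the mismatch being described, using the approximate sizes quoted above (4k pages, 512k erase blocks; real geometries vary):

      #include <stdio.h>

      int main(void)
      {
          /* Approximate flash geometry from the comment above. */
          const int page_kb = 4, erase_block_kb = 512;
          const int pages_per_block = erase_block_kb / page_kb;   /* 128 */

          /* Worst case: updating one 4k page in a full block means copying
           * the other live pages elsewhere before the block can be erased. */
          printf("pages per erase block: %d\n", pages_per_block);
          printf("worst-case KB moved to rewrite 4 KB: %d\n",
                 pages_per_block * page_kb);
          return 0;
      }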
      • There seem to be some defragmentation applications that say they can change some of the characteristics of the writing. I would be very wary of using these kinds of applications - it's uncertain that they'll do any good.

        For the Vertex drive there is an application that can perform the TRIM command for unused sectors. It's quite new, so I would check whether it fits your OS - and use it only if there is no native TRIM support in the OS, of course.

        For these kind of Intel drives (especially the latest): unless you do very

    • Any kind of memory can become fragmented after some time in use. Defragging in the traditional sense may not be as necessary as before, since the memory addressing scheme is much faster and, therefore, read operations for address spaces far apart are not going to be a problem. I mean, what's the difference if the next segment of code/data is FFFFFFFF away from the last address? Nothing! There are no heads to move from location 'X' to location 'Y'; therefore, the throughput is sustained. Traditio

  • These SSDs contain a RAM cache that's powered by the host PC IO bus. Why don't they have a battery in the SSD? The OS thinks that everything ACKed as sent to the storage unit is written, but a power failure kills the cache before it's flushed. A little battery charged off the host PC IO bus would make these drives even more reliable than spinning discs.

    • Re: (Score:3, Informative)

      by RoboRay ( 735839 )

      I think the UPS will cover that.

    • Re: (Score:2, Insightful)

      I suspect they have a capacitor large enough to finish committing their buffers. At least they seem to see little performance degradation with write barriers, and do retain all the files they should when I pull the power while writing. (I didn't do a proper test, but it seems to work correctly, assuming your OS does.)

      (And for the record, any OS that still thinks anything the HD acks is written is living in a dream world; it hasn't been true for 15 years on consumer disks.)
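      For reference, the userspace half of that contract on a POSIX system looks roughly like this (the filename is a placeholder): write() only hands the bytes to the kernel, and fsync() is the explicit request to push them toward stable storage.

      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>

      int main(void)
      {
          const char msg[] = "data we actually care about\n";
          int fd = open("/tmp/example.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (fd < 0) { perror("open"); return 1; }

          /* A successful write() only means the kernel has the bytes;
           * they may still sit in OS and drive caches. */
          if (write(fd, msg, sizeof msg - 1) != (ssize_t)(sizeof msg - 1))
              perror("write");

          /* fsync() asks the OS to flush to the device; whether the drive's
           * own RAM cache is flushed depends on barriers/flush commands
           * being honored all the way down. */
          if (fsync(fd) < 0)
              perror("fsync");

          close(fd);
          return 0;
      }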

    • Most mechanical disks also have a RAM cache which is not backed by a battery and have the same limitation.
    • Because if the PC itself does not have time to properly shut down, your data will be cut off halfway anyway. A proper journaling FS would take care of any FS problems, at least. The only thing you would gain is 32 MB of data saved. But if that data were the start of a file write instead of a read, you might be worse off. You might consider ZFS if you are really paranoid, so you can roll back.

      If the flash drive is not busy it might be hard to catch it when there is data in the cache. These things have such in

    • Re: (Score:3, Insightful)

      by shiftless ( 410350 )

      The OS thinks that everything ACKed as sent to the storage unit is written,

      What does it matter what the OS "thinks"? When power is lost, all of its "thoughts" disappear. When you power it back on it reloads its "thoughts" from the DISK, thus there can be no confusion.

  • Was 50 nm. WTF? (Score:2, Interesting)

    by HiggsBison ( 678319 )

    (Yes, I know the new parts are 34 nm)

    I thought the progression of feature size went: 90 nm, 65 nm, 45 nm, 34 nm.

    But the graphics processors seem to be using 55, and these SSDs are being reduced from 50.

    I thought they had to pour gazillions into standardizing fab construction, steppers, and all the equipment. So is some plant manager stumbling in with a hangover one morning and accidentally setting the big dial for 50 or 55 or something? What's the deal here?

  • That's what Anandtech found out during "desktop" testing.

    (And, I assume, OS, Apps and Documents loads)

    That's it. 25% faster during the, what, 1% of the time your PC spends actually loading stuff off the disk?

    The rest of the time, you get nothing.

    That's not worth $200 to me.

    On the Enterprise front, I wouldn't know how compelling that is (or not). But on the consumer front ...

    • by Kneo24 ( 688412 )

      It all boils down to how you value your time. Don't rush to be so skeptical when there's clearly a market out there for them already. You may not value your time in that way as much as a person who already shelled out the money for an SSD.

      Personally, from my own first hand experience, I think it's worth it. Everything just feels more responsive. I normally don't do the whole early adopter thing even though I have some FU money lying around, but this time I did do it. The difference you notice is just like

    • I've had a Shuttle XPC on my desk for a while - i.e. the drive light is visible at a glance. So anytime I'm noticing a lag or delay, I'll glance at the drive light, and more often than not it's 'on' as some app churns a ton of data to the drive for whatever reason. Point is, I find myself glancing at the drive light often enough that I can see where an SSD would be a huge improvement. If the 80GB drives drop to $150 or so at some point - expect more people to start thinking it's worthwhile...
    • That's compared to the first generation X25-M. If you've got one of those, by all means keep it (I plan to). If you DON'T already have an SSD, then getting one is often regarded as one of the most cost effective performance upgrades you can make at this point in time. Of course, that will depend on what you do. If gaming is your thing, then a faster hard drive isn't going to mean much as long as you've got sufficient ram.

      • Nope, that was compared to a rotating HD...

        Strangely, the benchmark disappeared from their review a couple of hours after they posted it. They must have gotten a call from their advertisers.

  • the new product line that should be available next week.

    I am fighting the urge to head down to Puget Systems in Auburn, WA and see if they really have the SSDSA2MH160G2 [pugetsystems.com] for sale for $490.55. My guess is it isn't quite ready to be sold yet and was merely indexed by Google.

    Must. Control. Checkbook.
  • Part HDD, part SSD?

    During operation, the SSD data is mirrored onto the HDD in the background, or, better yet, the HDD is larger and the most frequently used data is kept on the SSD but you get the whole capacity of the HDD.

    • My most used data is OS + applications. An SSD is big enough to hold both. Data, especially multimedia, can be kept on an HDD. Backups can be made to an HDD. You would need special chips and such to put everything together. There were some hybrid drives (OK, with a minimum of slower flash and less wear leveling) but they failed. If it is ever really required, I expect people would be able to do it in the OS.

    • of doubling production costs and increasing complexity.
    • Well, aside from just putting your OS and most commonly used apps on an SSD, what you're describing is a hybrid drive. You CAN buy these. I think Samsung makes a bunch. But apparently, you can emulate that in software (any HDD + SSD), at the cost of some processor overhead I guess. Microsoft has their implementation called ReadyBoost that's integrated into Vista and 7. No idea how well it works though.
  • I might be in the market for an SSD soon, so I put some notes together based on my reading of the articles on the topic and elsewhere. I thought I'd share them here so I can just Google them later.

    • The first and second-gen Intel X25-M disks don't have a huge performance delta. (The G2 is slightly faster in most cases.)
    • Sequential read is maxed out around 260MB/sec on all high-performance SATA-II SSDs (the bus arithmetic behind that ceiling is sketched after this list).
    • The M models suck at sequential writes, but the E models are great.
    • The M models (MLC) outperform all other dis
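    The bus arithmetic behind that ~260MB/sec ceiling, assuming the usual SATA-II numbers (3.0Gb/s line rate, 8b/10b encoding, some protocol overhead on top):

    #include <stdio.h>

    int main(void)
    {
        /* SATA-II: 3.0 Gb/s on the wire; 8b/10b encoding means 8 payload
         * bits for every 10 line bits. */
        const double line_gbps = 3.0;
        const double payload_mb_s = line_gbps * 1e9 * (8.0 / 10.0) / 8.0 / 1e6;

        printf("theoretical SATA-II payload ceiling: %.0f MB/s\n", payload_mb_s);
        /* Prints 300 MB/s; real drives top out around 260-280 MB/s once
         * command and framing overhead are accounted for. */
        return 0;
    }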
