Data Storage Hardware

Intel 34nm SSDs Lower Prices, Raise Performance 195

Vigile writes "When Intel's consumer line of solid state drives was first introduced late in 2008, it impressed reviewers with its performance and reliability. Intel gained a lot of community respect by addressing some performance degradation issues found at PC Perspective, quickly releasing an updated firmware that solved those problems and then some. Now Intel has its second generation of X25-M drives available, designated by a "G2" in the model name. The SSDs are technically very similar to the originals, though they use 34nm flash rather than 50nm flash and offer reduced latency. What is really going to set these new drives apart, though, both from the previous Intel offerings and from their competition, is the much lower pricing allowed by the increased memory density. PC Perspective has posted a full review and breakdown of the new product line, which should be available next week."
This discussion has been archived. No new comments can be posted.

  • by CajunArson ( 465943 ) on Thursday July 23, 2009 @12:46PM (#28796927) Journal

    Fortunately I got it for about $300, so I only "lost" $100 with the new ones coming out. That said, I don't regret the purchase at all; it is insanely faster than any other laptop drive out there, while being completely silent and power-friendly. As for TRIM support, I've heard that Intel is not going to add it for the older drives, but I'm not sure if that is just speculation or if it's been officially confirmed (Intel not expressly saying the old drives are getting TRIM support is not the same as Intel expressly denying it). Fortunately, the drives with the newer firmware don't seem to suffer from much performance degradation, so I'm not really obsessed with TRIM anyway.

    Oh and yes, it does run Linux (Arch 64-bit to be precise) just fine.

    I can't wait for next year with the ONFI 2.1 FLASH chips (the new drives are not using the new ONFI standard yet) as well as 6Gbit SATA support. At that point I'll put together a new desktop that only uses SSDs, and turn my existing desktop into a 4TB RAID 1+0 file server to handle all the big files... the perfect balance of SATA & spinning media.

  • by MagicMerlin ( 576324 ) on Thursday July 23, 2009 @12:48PM (#28796973)
    While hard drives will continue to live on for a good while yet where $/GB considerations are paramount (especially archival type applications), the performance advantages of flash drives will soon trump the decreasing cost advantage both for workstation (x25-m) and server (x25-e) environments. The case for flash in servers is even more compelling, where we measure drives in terms of IOPS and a single Intel flash drive performs 10 or 20 times better than the best hard drives on the market for a fraction of the power consumption. Understandably, many IT managers are cautious about adopting new technologies, especially when the failure characteristics are not completely known, but I suspect the advantages are so great that minds are going to start changing, quickly.
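    The "10 or 20 times better" claim is easy to sanity-check with rough numbers. Here is a back-of-envelope sketch in Python; the per-drive figures are illustrative assumptions for the sake of the arithmetic, not vendor specs:

```python
import math

# Illustrative figures (assumptions, not measured specs): a fast 15K RPM
# hard drive sustains on the order of ~200 random IOPS, while an early
# SLC SSD was measured in the thousands.
hdd_iops = 200        # assumed random IOPS for a single 15K RPM HDD
ssd_iops = 4000       # assumed sustained random IOPS for a single SLC SSD

ratio = ssd_iops / hdd_iops
print(f"One SSD ~ {ratio:.0f}x the random IOPS of one HDD")

# To match one SSD on random IOPS you would need roughly this many spindles:
print(f"HDDs needed to match: {math.ceil(ratio)}")
```

    With those assumed figures, matching a single SSD would take a 20-spindle array, before even counting the power draw of those spindles.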
  • by slack_justyb ( 862874 ) on Thursday July 23, 2009 @01:13PM (#28797261)
    While SSDs may be the new kids on the block and show signs of superiority, hard drives retain a few advantages over their non-moving, solid state counterparts. Hard drives can take more overwrites than SSDs. Flushing the cache to the actual media is still faster on an HDD than on an SSD. And SSDs are more susceptible to static discharge than HDDs, since more of their surface area consists of sensitive parts.

    I do agree with the parent. SSDs are a big thing and they have some important advantages. However, let's not put the cart before the horse and declare that the era of the SSD is upon us. Cost, durability, performance, and longevity are all areas where SSDs need improvement. In some respects SSDs win hands down, but they don't win by enough to justify their incredibly high prices. So it is a bit premature to start waving the banners right now.
  • by clawsoon ( 748629 ) on Thursday July 23, 2009 @01:35PM (#28797533)
    If you can get a regular hard drive to the five year mark running perfectly well with no data loss, you can consider yourself moderately lucky. Rotating media is what RAID was invented for.

    All you'd need to do to demonstrate to me the greater reliability of an SSD is drop it and a regular hard drive onto the table a couple of times while they're running and see which one keeps running. That would be enough to get me impressed by increased reliability. Regular hard drives are delicate beasts.

  • by Anonymous Coward on Thursday July 23, 2009 @02:42PM (#28798291)

    You get a small savings if the OS does not have to issue a bunch of small commands to get the fragments but can issue one larger command on the SATA bus.

    Also, defragging can put files into contiguous blocks, since many defragmenters move the whole file around anyway.

    The improvement is not quite as dramatic as doing it to a 10-year-old system with a nearly full drive that's never been defragmented, but there is a measurable difference.

    If you value your data, a defragment may not hurt either, as having a contiguous blob of data often makes it easier to recover.

    Am I saying defragment it all the time? No. Every couple of months probably would not be out of the question, though. Not as necessary as it used to be, but useful...

  • Re:Oooh. (Score:4, Interesting)

    by CopaceticOpus ( 965603 ) on Thursday July 23, 2009 @02:55PM (#28798453)

    Let's make some wild predictions based on recent price trends (found here [mattscomputertrends.com]). Over the last few years, flash memory has been improving in GB/$ at a rate of 185% per year. Meanwhile, hard drives have slowed to only 42% improvement per year.

    Based on these trends, here is the estimated cost of 10 TB using either technology:

    July 2009: Platter = $750 [newegg.com], Flash = $28,125 [google.com]

    July 2010: Platter = $528 [google.com], Flash = $9,868 [google.com]

    July 2014: Platter = $130 [google.com], Flash = $150 [google.com]

    July 2019: Platter = $23 [google.com], Flash = $0.80 [google.com]

    July 2024: Platter = $4 [google.com], Flash = $0.004 [google.com]

    In July 2024, a 10 PB flash drive would cost $42 [google.com]! Of course, we can't assume these trends will continue, but it seems a good bet that we won't be worrying about the size of our mp3 collections. The traditional hard drive may only have five years of competitive life remaining.
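    For anyone who wants to reproduce the table, here is a minimal Python sketch of the extrapolation, assuming the cost of a fixed capacity divides by (1 + annual GB/$ gain) each year, starting from the July 2009 figures above:

```python
# Starting points: the July 2009 cost of 10 TB quoted in the table.
platter_2009 = 750.0
flash_2009 = 28125.0

def projected_cost(start_cost, annual_gain, years):
    """Cost of the same capacity after `years` of GB/$ improving by `annual_gain`."""
    return start_cost / (1 + annual_gain) ** years

# 42%/yr improvement for platters, 185%/yr for flash, per the stated trends.
for years in (0, 1, 5, 10, 15):
    p = projected_cost(platter_2009, 0.42, years)
    f = projected_cost(flash_2009, 1.85, years)
    print(f"July {2009 + years}: platter ${p:,.2f}, flash ${f:,.2f}")
```

    Running this reproduces the table to within rounding: $528 vs. $9,868 in 2010, and roughly $4 vs. $0.004 in 2024.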

  • Was 50 nm. WTF? (Score:2, Interesting)

    by HiggsBison ( 678319 ) on Thursday July 23, 2009 @03:18PM (#28798805)

    (Yes, I know the new parts are 34 nm)

    I thought the progression of feature size went: 90 nm, 65 nm, 45 nm, 34 nm.

    But the graphics processors seem to be using 55, and these SSDs are being reduced from 50.

    I thought they had to pour gazillions into standardizing fab construction, steppers, and all the equipment. So is some plant manager stumbling in with a hangover one morning and accidentally setting the big dial for 50 or 55 or something? What's the deal here?

  • Re:Oooh. (Score:3, Interesting)

    by Randle_Revar ( 229304 ) <kelly.clowers@gmail.com> on Thursday July 23, 2009 @03:59PM (#28799375) Homepage Journal

    >Sequential read on the SSD is over 6x faster, and sequential write is 2x faster,
    >but for the performance where it matters the difference is much more noticeable.
    >Random read on the SSD is nearly 140x faster, and random write is over 40x faster.

    So
    >Not random writes, not sequential reads, and not anything not HD-related.
    is wrong.

    It also seems to me that you don't really need to say
    >[no performance increases on] anything not HD-related.
    or
    >They don't help with anything CPU-, RAM-...-intensive
    when you are talking about hard drive upgrades.

    And of course it does help with I/O intensive stuff if that I/O is to the HD.

    My RAM and CPU speed are fine, but my second upgrade (when I can afford it) will be an SSD (my first upgrade will be a video card - I currently have an Intel x3100, good for bleeding edge Xorg stuff, but low-powered, and Radeon[HD] will be catching up before I can afford it).

    P.S. I wish Slashdot would quote like a mail client (or a *chan). Also, the preview should not leave out blank lines that will be present in the final post.

  • by CajunArson ( 465943 ) on Thursday July 23, 2009 @04:33PM (#28799817) Journal

    So I'm assuming you are typing your comment in from somebody else's computer, because by your impeccable logic nobody should ever buy any piece of computer technology, since something else will come along and make it obsolete. I can also say that if you are not a hypocrite, you'd wake up every single day and loudly thank everyone who does buy technology, because if nobody went out and paid for computers, they would not exist for you to act like a smarmy bitch on.
        I assure you that the new drive's performance is quite fine for the amount of money I paid for it, and (because I'm a lot smarter than you) I was quite aware that newer and better drives were on the horizon, but I still made my purchase and have no regrets. Since I use my laptop for work that you can't even comprehend, I know I'm getting the value out of it that I put into it, making it a fair deal.

  • by LordKronos ( 470910 ) on Friday July 24, 2009 @08:05AM (#28805505)

    Wear leveling does not extend the drive life in any way...It simply causes it to maintain capacity as long as possible

    But that IS extending the life. Without wear leveling, if I've got an 80GB drive and I store 50GB of data on it which I frequently modify, then after X years that 50GB will be worn out and I'll be left with 30GB. That isn't enough for me to use, so essentially the drive is dead as far as I'm concerned. Now consider a drive with wear leveling. After X years, I will only have used up 5/8 of the write cycles across the entire drive. I can still use the drive for another 0.6X years. Wear leveling has extended the useful life of the drive by 60%.
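    The 0.6X figure follows directly from the fractions involved. A quick Python check, assuming (as above) an 80 GB drive with a constantly rewritten 50 GB working set, with wear leveling spreading those writes uniformly over all 80 GB:

```python
capacity_gb = 80
working_set_gb = 50

# After X years, the unleveled drive's 50 GB of occupied cells is fully
# worn out. With leveling, the same total writes consumed only 50/80 of
# every cell's write cycles.
fraction_used = working_set_gb / capacity_gb   # 5/8 of cycles used after X years
remaining = 1 - fraction_used                  # 3/8 of cycles left
extra_life = remaining / fraction_used         # in units of X years

print(f"Fraction of cycles used after X years: {fraction_used}")
print(f"Extra life from wear leveling: {extra_life:.1f}X years")
```

    That is, the drive lasts 0.6X years longer: a 60% extension of its useful life, exactly as stated.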

    But even for more typical usage, it's possible for wear leveling to actually extend the number of writes that can be done, if it works in certain ways. For this to make sense, you have to understand how SSD storage is organized. Much like an HDD, which is organized into sectors, clusters, platters, etc., we have a similar organization with SSDs: bytes are grouped into pages, and multiple pages are grouped into blocks (and it goes on from there).
    The smallest group of data which you can write on an SSD is a page. However, the smallest group you can erase is a block.

    SSDs don't allow you to overwrite a page with your new data. Instead, you must first erase it and then write the new data to it. The problem here is that you have to erase a block at a time, but the rest of the pages in the block could already contain other data. So what happens is that the controller copies all of the pages that you don't want to modify from that block into cache, erases the block, writes back all of the pages that are staying the same, and then writes your new page. Now surely you can see the problem here... you only intended to write to one single page, but you've also used up a write cycle for every single page that you DIDN'T modify.

    So how can wear leveling help this? Well, let's say a block consists of 10 pages, and only 9 of those pages are filled. You now want to modify one of those 9 pages. Instead of doing an erase, which uses up a write cycle on 9 of the 10 pages, the wear leveling can simply say "OK, I won't erase page 4...instead I'll just remember that I don't care about the data stored there. I'll also write this new data for page 4 into page 10 and remember that the data is now stored there". Thus, to make that modification, we only use up a write cycle on a single page instead of on 9 pages. The next time we make a write, we'll have to erase the entire block and write to 9 of those pages. However, we'll once again have an empty page, so on the 3rd write we can do the same thing we did the first time. As a result, instead of every modification writing to 9 pages, it averages 5 page writes per modification (alternating between 1 and 9 pages).
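    The alternating 1-and-9 pattern can be checked with a toy simulation of the scheme just described (10 pages per block, 9 live pages, one spare; purely illustrative, not how any real controller works):

```python
PAGES_PER_BLOCK = 10
live_pages = 9                                # pages holding current data

page_writes = []                              # page writes per logical modification
free_pages = PAGES_PER_BLOCK - live_pages     # one spare page to start
for _ in range(100):                          # modify one logical page 100 times
    if free_pages > 0:
        # Redirect the write into the spare page; the old copy goes stale.
        page_writes.append(1)
        free_pages -= 1
    else:
        # No spare left: erase the block and write back the 8 unchanged
        # live pages plus the 1 modified page = 9 page writes.
        page_writes.append(live_pages)
        free_pages = PAGES_PER_BLOCK - live_pages

avg = sum(page_writes) / len(page_writes)
print(f"Average page writes per modification: {avg}")
```

    The simulation alternates between 1 and 9 page writes and averages exactly 5, matching the arithmetic above.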

    Of course, the wear leveling can be extended to perform the same type of thing across multiple blocks. The advantage here would be that, as lots of data gets modified, each page may eventually be migrated out of a block without the block having to be erased. Eventually, the block could end up empty, and then we can erase it without pointlessly rewriting any of the pages (or if it's almost empty, we'll only rewrite a few).

    Another thing wear leveling can do is recognize that some blocks never seem to get modified, and shuffle that data to a different spot on the drive, so that you don't end up with some blocks suffering almost no write wear while others are reaching their limit.

    I don't know which specific techniques current SSD drives implement, but these are a few possibilities. I'm sure there are others.
