Intel 34nm SSDs Lower Prices, Raise Performance
Vigile writes "When Intel's consumer line of solid state drives was first introduced late in 2008, the drives impressed reviewers with their performance and reliability. Intel earned a lot of community respect by addressing performance degradation issues found at PC Perspective, quickly releasing an updated firmware that solved those problems and then some. Now Intel has its second generation of X25-M drives available, designated by a "G2" in the model name. The SSDs are technically very similar to the originals, though they use 34nm flash rather than 50nm and offer reduced latency. What will really set these new drives apart, both from the previous Intel offerings and from the competition, are the much lower prices made possible by the increased memory density. PC Perspective has posted a full review and breakdown of the new product line, which should be available next week."
I've got one of the G1 Drives (Score:4, Interesting)
Fortunately I got it for about $300, so I only "lost" $100 with the new ones coming out. That having been said, I don't regret the purchase at all: it is insanely faster than any other laptop drive out there, while being completely silent and power-friendly. As for TRIM support, I've heard that Intel is not going to add it for the older drives, but I'm not sure if that is just speculation or if it's been officially confirmed (Intel not expressly saying the old drives are getting TRIM is not the same as expressly denying it). Fortunately, the drives with the newer firmware don't seem to suffer from much performance degradation, so I'm not really obsessed with TRIM anyway.
Oh and yes, it does run Linux (Arch 64-bit to be precise) just fine.
I can't wait for next year with the ONFI 2.1 flash chips (the new drives are not using the new ONFI standard yet) as well as 6Gb/s SATA support. At that point I'll put together a new desktop that only uses SSDs, and turn my existing desktop into a 4TB RAID 1+0 file server to handle all the big files... the perfect balance of solid state & spinning media.
Re: (Score:2)
Fortunately, the drives with the newer [non-TRIM] firmware don't seem to suffer from much performance degradation, so I'm not really obsessed with TRIM anyway.
I wonder how they managed that without the TRIM command, i.e. without the OS telling the drive which parts can be nulled because they are no longer needed. Did they hide extra pages from the OS which are then nulled regardless, to hack together something like a buffer? But that would still show terrible write performance once it overflows. Did they implement deep data inspection for the most common filesystems, so the drive now knows when something is deleted?
At that point I'll put together a new desktop that only uses SSDs, and turn my existing desktop into a 4TB RAID 1+0 file server to handle all the big files... the perfect balance of SATA & spinning media.
I'm planning the same thing once the prices are right
Re: (Score:2, Troll)
It's always fun to read bleedin' edgers rationalize how they didn't pay over-the-top for immature first tries that soon got obsoleted.
So, yes, you only overpaid $100 for a drive that Intel hasn't come out and said will ever get TRIM, and that is 25%+ slower than the new one. Congrats.
I've got some oil here that will do wonders for your hair! It is expensive, too.
Re:I've got one of the G1 Drives (Score:5, Insightful)
I'm not the person you were replying to, but I too bought an X25-M 80GB back in April (though I only paid $300, so I only overpaid by $75). That said:
1) I've enjoyed the increased performance over the last 4 months. I've done a lot of work where I've benefited from the increased performance, so I feel I've gotten at least a good portion of that $75 in the form of the value of increased productivity (I use this computer for work for my business).
2) I've had no performance complaints from the new drive. Compared to my old drive, there are nearly zero times that I'm waiting on disk I/O anymore, so even if it might be a little slower (and look at the charts in the article... it's not 25% slower), I'm not really noticing where it could be improved.
3) Obsolete? I do not think that word means what you think it means. My G1 drive is neither "no longer in use" nor "outmoded in design, style, or construction". It has been surpassed (very slightly) by a newer model, but if that translates to obsolete, then I guess anyone who isn't paying $1000 for a Core i7-975 CPU is also buying obsolete hardware. And of course, anyone who does buy a Core i7-975 for $1000 will promptly be mocked by you when the price drops to $900 or a model 1/3 GHz faster comes out.
Re: (Score:2)
To me, TRIM-less and at least 25% slower is obsolete. That would be the "design" part.
I'm happy for you if you think you got your money's worth. After much reading, I finally decided not to get one for the new PC I just ordered.
Re: (Score:2)
Again, it is not 25% slower. Most of the tests show 10% at most. Then again, if you are going to compare it to any other drive (you know, other than the drive that was announced only two days ago and can't actually be bought from any retailer yet), even the old "slow" model was leaps and bounds above any traditional hard drive on the market for the majority of tasks performed by most users.
Re: (Score:2)
People like you who are hot on the heels of new technology - we owe you a "thanks". Otherwise, new tech would never get off the ground (same with the Sony OLED TV: super expensive, but I'm grateful to all the people who can afford to buy and, in some sense, 'test' it).
Re: (Score:3, Interesting)
So I'm assuming you are typing your comment in from somebody else's computer, because following your impeccable logic, nobody should ever buy any piece of computer technology, since something else is going to come along and make it obsolete. I can also say that if you are not a hypocrite, you'd wake up every single day and loudly thank everyone who does buy technology, because if nobody went out and paid for computers, they would not exist for you to act like a smarmy bitch on.
I assure y
Good move (Score:4, Funny)
Getting the prices lower is definitely a move in the right direction. I'm looking forward to moving to SSD in the near future, and not having to worry about hard drive crashes anymore.
Re: (Score:2)
God, I hope you are never in the IS department at my company. Or any company for that matter.
Re: (Score:2)
You mean it may be naive to expect zero failures with the new drives?
I wouldn't be surprised if the failure profiles of moving-parts devices and solid state devices were radically different.
Re: (Score:2)
Perhaps, but that wouldn't support the current notion of forced obsolescence.
My suspicion is that they are of an equivalent quality level, and nothing greater than that.
the era of the SSD is here (Score:5, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I've tried an X25-M on a few servers with LSI SAS controllers (as used by PERC 6i, though I don't think I've used that exact chip) and been disappointed to encounter IO hangs and other drives disappearing randomly; even just having an X25-M plugged in is enough to seemingly make the controller rather unhappy. Doesn't appear to be a driver problem, unless it's one shared by FreeBSD, Linux and Solaris.
Hopefully Intel will do an SAS version at some point; they could compete against 15kRPM drives rather well, I
the era of the SSD is not far away. (Score:2, Interesting)
I do agree with the parent. SSDs are a big thing and they have some important advantages. However, let's not go p
Re: (Score:2)
Well actually, my X25-M drive has no circuitry exposed other than the SATA and power connectors. Everything else is completely enclosed, so unless the case is likely to transmit enough of the charge to the circuitry (I have no idea whether or not it would), SSDs should be LESS susceptible to that problem.
And while you are examining the downsides of SSDs, it's also fair to say that data recovery fr
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Actually, not true. The 80GB X25-M uses 0.15 watts at load. That's 0.001875 watts/GB. Scaling up to 2TB, you are talking about 3.7 watts total under load. At idle, the X25-M is 0.06 watts. That's 0.00075 watts/GB, or 1.5 watts for 2TB. I don't know if any magnetic hard drive can match that, much less a 2TB model.
Then again, it's a silly comparison at the moment, since your electric cost per kWh would have to be insane before you'd recover the price difference of the drive itself.
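For anyone who wants to redo that arithmetic, here is a quick bc check (a sketch that assumes 2TB = 2000GB and uses the 0.15W/0.06W figures quoted above):

echo "scale=6; 0.15/80" | bc        # ~0.001875 W/GB under load
echo "scale=2; 0.15*2000/80" | bc   # ~3.75 W for 2TB under load
echo "scale=2; 0.06*2000/80" | bc   # ~1.50 W for 2TB at idle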
Re: (Score:2)
Put my OS on SSD for super-fast booting. Put my photo library there for fast browsing, but if I start editing a picture, put the edit data on the platters until I'm done. I'm sure some of the decision-making could be done by the
Re: (Score:2)
Because everyone knows how Ferraris have made trucks redundant so quickly!
Re: (Score:2)
Faster, Cheaper, Better (Score:2)
AnandTech writeup (Score:5, Informative)
Re: (Score:2)
I suspect we'll see the 2nd gen X25-M launch at prices similar to the current X25-M, and then drop down over the next couple of months to the $225/80GB that you can get them for in 1,000-unit quantities.
The competition for these Intel drives is at least 2-3x behind in random IOPS. Too bad the streaming write performance didn't go up significantly, because that's the only place where the Intel drives lag behind their competition.
Re: (Score:2)
Although they aren't yet in stock, zipzoomfly is already listing the price at $223.25 (though you can't preorder).
Actually, for the G1 versions, the enterprise versio
Re: (Score:2)
Nice! If they actually end up selling at that price at launch, I will be impressed.
I have a Vertex and while the performance has been great, it doesn't seem to be very mature compared to regular disks.
For example, I've personally had these problems with it:
1. Firmware flash tool doesn't work on all computers. Have to remove it and move it to another computer to flash it.
2. Have seen the drive
I have a G1 Intel X25-M (Score:5, Informative)
..and it is fantastic. This was the largest performance increase I've seen on computers in over a decade. I was going to go with a VelociRaptor because I knew how important drive access latency was, but then Intel patched the fragmentation issue that was worrying me.
I got mounting rails to fit the drive into my desktop case, so I'm using it as my primary desktop drive for the OS, some applications (Adobe Design Premium Suite runs great on it! Photoshop CS4 loads in 3-4 seconds!), and my main games. I then have a 1.5 TB secondary drive to store my data and music collection etc. I paid around $430 for my 80GB Intel X25-M, so being able to get the 160GB for that same price is a fantastic improvement. I will definitely only be going SSD in my machines from now on. Everything loads faster, and I get consistently fast boot times even after months of usage.
It is amazing to see Windows XP load up and then all of the system tray apps pop up in a few seconds. You can immediately start loading things like e-mail and Firefox as soon as the desktop appears, and there is no discernible lag on first load like you get with spinning drives while they are still trying to load the system tray applications.
Re: (Score:2)
And here I thought I was the only one who wanted to reply that way...
reliability? (Score:4, Insightful)
OK, they may have been stress tested in factories by the manufacturers, but reviewers don't do that sort of work.
Compared to rotating media... (Score:2, Interesting)
All you'd need to do to demonstrate to me the greater reliability of an SSD is drop it and a regular hard drive onto the table a couple of times while they're running and see which one keeps running. That would be enough to get me impressed by increased reliability. Regular hard drives are delicate beasts.
Re: (Score:2)
If you can get a regular hard drive to the five year mark running perfectly well with no data loss, you can consider yourself moderately lucky.
There's nothing lucky about it. Unless you are straining the drive constantly or don't have adequate ventilation in your box, an HDD lasting 5 years if not longer has been a pretty mundane thing for quite some time.
Re: (Score:2)
Re: (Score:2)
Well, drives in servers are also put through far more strain than a home desktop, so they would be expected to fail earlier than the 5-year mark of a consumer drive in a home PC.
Re: (Score:2)
Lucky enough that I would recommend regular backups rather than depend on your luck with the hard drive.
Wouldn't you?
Based entirely on my own experience and that of those around me? No, not really. For extremely critical information, sure, but I don't really bother backing anything up as it's pretty much all replaceable and I've never really had a hard drive fail before the 5 year mark. By the time I've ever had a drive fail it's been probably 8-10 years old and is storing nothing of extreme value anyway so anything that may get lost is easily replaced.
Re: (Score:2)
Considering this topic is about consumer SSDs I figured we were talking about home desktops. Of course in a business environment you would back things up because it is critical information and as I said in my post:
For extremely critical information, sure,
Re: (Score:2)
I've not always held on to a single hard drive for 5 years, but I've never had a desktop hard drive fail on me, ever. I've probably owned 20 different drives over the past 15 years.
I have had exactly one laptop drive fail, but that was almost certainly due to having the laptop fall off the passenger seat repeatedly while using it for GPS.
Re: (Score:2)
"Rotating media is what RAID was invented for."
Poor grammar aside, you need an education on what RAID was really developed to address.
Re: (Score:2)
As a result, reliability is application specific! Much more so than with regular spinning drives.
And I'm not talking about the "flash cell rewrite limit". The thing is, the controller uses undisclosed/patented/whatever algorithms to place your writes at particular addresses on the flash. They need to be tricky because of the 4k-write/512k-erase problem of the flash technology.
So if you do a "right" combination of small and large writes you
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Most of my HDDs (Maxtor, WD, Seagate) over the past ten years have not lasted more than 2 or 3 years... My last system drive (WD 320GB) died after ~6 months - Just finished the RMA a few weeks ago.
Re:reliability? (Score:5, Informative)
My personal X25-M (the one that started all of my reviews and Intel's subsequent patching of the fragmentation slowdown issue with the X25-M series) has had over 10 TB of data written to it. Most of those were sequential writes spanning the entire drive (HDTach RW passes). SMART attribute 5 on that drive is currently sitting at a whopping "2". That equates to only 8 bad flash blocks. It's actually been sitting at 2 for a while now, so those blocks were likely early defects.
I suspect it will take most users *way* over a year to write greater than 10 TB to their 80 GB OS drive. Considering mine is still going strong after that much data written, I don't think there's anything to worry about.
Allyn Malventano
Storage Editor, PC Perspective
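If you want to check the same counter on your own drive, smartctl (from the smartmontools package) will dump the raw SMART attributes; /dev/sda below is just a placeholder for whatever device your SSD actually is:

smartctl -A /dev/sda    # look at ID 5, the reallocated-block counter discussed above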
Would You Run DeFrag on an SSD? (Score:2)
Re: (Score:2)
Except that SSDs remap data internally, so doing a software defragment doesn't actually make files any more contiguous on the flash.
Re: (Score:2)
AFAICT, sequential vs. random loses its meaning with SSDs. The access time to any arbitrary block is equal, regardless of whether it's right next to the current one or on a different chip on the other end of the board.
Re: (Score:2)
Re: (Score:2, Informative)
Just not the usual one.
That's because flash is written in pages (4k?) but can be erased only in blocks of 512k. So what happens is that the controller has to do some insane job of juggling your writes and rewrites to spread or combine or whatever... on the fly...
As a result, after intensive use, the address space becomes fragmented, just like the memory heap in regular software after lots of allocations/deletions.
Currently, the only way
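A rough illustration of why those mismatched sizes hurt (a sketch using the 4k page / 512k erase block figures quoted above; the worst case, where no pre-erased block is available, is an assumption):

echo "512/4" | bc    # => 128: pages the controller may have to relocate just to rewrite one 4k page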
Re: (Score:2)
There seem to be some defragmentation applications that claim they can change some of the characteristics of the writing. I would be very wary of using these kinds of applications - it's uncertain that they'll do any good.
For the Vertex drive there is an application that can perform the TRIM command for unused sectors. It's quite new, so check whether it supports your OS - and only use it if there is no native TRIM support in the OS, of course.
For these kind of Intel drives (especially the latest): unless you do very
Re: (Score:2)
Any kind of memory can become fragmented after some time in use. Defragging in the traditional sense may not be as necessary as before, since the memory addressing scheme is much faster and, therefore, read operations for address spaces far apart are not going to be a problem. I mean, what's the difference if the next segment of code/data is FFFFFFFF away from the last address? Nothing! There are no heads to move from location 'X' to location 'Y', so the throughput is sustained. Traditio
No Battery? (Score:2)
These SSDs contain a RAM cache that's powered by the host PC IO bus. Why don't they have a battery in the SSD? The OS thinks that everything ACKed as sent to the storage unit is written, but a power failure kills the cache before it's flushed. A little battery charged off the host PC IO bus would make these drives even more reliable than spinning discs.
Re: (Score:3, Informative)
I think the UPS will cover that.
Re: (Score:2, Insightful)
I suspect they have a capacitor large enough to finish committing their buffers. At least they seem to see little performance degradation with write barriers, and do retain all the files they should when I pull the power while writing. (I didn't do a proper test, but it seems to work correctly, assuming your OS does.)
(And for the record, any OS that still thinks anything the HD acks is written is living in a dream world, it hasn't been true for 15 years on consumer disks.)
Re: (Score:2)
Re: (Score:2)
Because if the PC itself does not have time to properly shut down, your in-flight data will be cut off anyway. A proper journaling FS would take care of any FS problems at least. The only thing you would gain is 32 MB of data saved. But if that data were the start of a file write instead of a read, you might be worse off. You might consider ZFS if you are really paranoid, so you can roll back.
If the flash drive is not busy it might be hard to catch it when there is data in the cache. These things have such in
Re: (Score:3, Insightful)
The OS thinks that everything ACKed as sent to the storage unit is written,
What does it matter what the OS "thinks"? When power is lost, all of its "thoughts" disappear. When you power it back on it reloads its "thoughts" from the DISK, thus there can be no confusion.
Was 50 nm. WTF? (Score:2, Interesting)
(Yes, I know the new parts are 34 nm)
I thought the progression of feature size went: 90 nm, 65 nm, 45 nm, 34 nm.
But the graphics processors seem to be using 55, and these SSDs are being reduced from 50.
I thought they had to pour gazillions into standardizing fab construction, steppers, and all the equipment. So is some plant manager stumbling in with a hangover one morning and accidentally setting the big dial for 50 or 55 or something? What's the deal here?
25% faster game level loads. (Score:2)
That's what Anandtech found out during "desktop" testing.
(And, I assume, OS, Apps and Documents loads)
That's it. 25% faster during the, what, 1% of the time your PC spends actually loading stuff off the disk?
The rest of the time, you get nothing.
That's not worth $200 to me.
On the Enterprise front, I wouldn't know how compelling that is (or not). But on the consumer front ...
Re: (Score:2)
It all boils down to how you value your time. Don't rush to be so skeptical when there's clearly a market out there for them already. You may not value your time in that way as much as a person who already shelled out the money for an SSD.
Personally, from my own first hand experience, I think it's worth it. Everything just feels more responsive. I normally don't do the whole early adopter thing even though I have some FU money laying around, but this time I did do it. The difference you notice is just like
Re: (Score:2)
Re: (Score:2)
That's compared to the first generation X25-M. If you've got one of those, by all means keep it (I plan to). If you DON'T already have an SSD, then getting one is often regarded as one of the most cost effective performance upgrades you can make at this point in time. Of course, that will depend on what you do. If gaming is your thing, then a faster hard drive isn't going to mean much as long as you've got sufficient ram.
Re: (Score:2)
Nope, that was compared to a rotating HD...
Strangely, the benchmark disappeared from their review a couple of hours after they posted it. They must have gotten a call from their advertisers.
Next week? (Score:2)
I am fighting the urge to head down to Puget Systems in Auburn, WA and see if they really have the SSDSA2MH160G2 [pugetsystems.com] for sale for $490.55. My guess is it isn't quite ready to be sold yet and was merely indexed by Google.
Must. Control. Checkbook.
How about a hybrid model? (Score:2)
Part HDD, part SSD?
During operation, the SSD data is mirrored onto the HDD in the background, or, better yet, the HDD is larger and the most frequently used data is kept on the SSD but you get the whole capacity of the HDD.
Re: (Score:2)
My most used data is OS + applications. An SSD is big enough to hold both. Data, especially multimedia, can be kept on an HDD. Backups can be made to HDD. You would need special chips and such to put everything together. There were some hybrid drives (OK, with a small amount of slower flash and less wear leveling) but they failed. If it is ever really required, I expect people would be able to do it in the OS.
Sounds like a good way (Score:2)
Re: (Score:2)
executive summary (Score:2)
I might be in the market for an SSD soon, so I put some notes together based on my reading of the articles in the topic and elsewhere. I thought I'd share them here so I can just Google them later.
$3 per GB (Score:5, Insightful)
Re:Oooh. (Score:5, Informative)
Last year, when the X25-M first came out, the 80 gig version cost $595, or just a little less than $7.50/gig. Now the same 1st gen drive costs $314 with a $10 discount and free shipping on Newegg, or about $3.92/gig.
The new 2nd generation 80 gig drive sells for $225, or $2.81/gig. If it follows the same price trend as the 1st gen model, around this time next year it should cost ~$125, or about $1.56/gig.
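The same per-gig arithmetic in bc form, with the year-out estimate just reapplying the observed ~47% year-over-year drop (an assumption, not a quoted price):

echo "scale=2; 595/80" | bc        # launch:   ~$7.44/gig
echo "scale=2; 314/80" | bc        # G1 today: ~$3.92/gig
echo "scale=2; 225/80" | bc        # G2 today: ~$2.81/gig
echo "scale=2; 225*314/595" | bc   # G2 a year out at the same ratio: ~$118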
Here are the quick Xbench results of the 5400rpm 160 gig drive in my two-year-old MacBook Pro:
Compare those to the results of the new drive here: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3607&p=4 [anandtech.com]
Sequential read on the SSD is over 6x faster, and sequential write is 2x faster, but for the performance where it matters the difference is much more noticeable. Random read on the SSD is nearly 140x faster, and random write is over 40x faster.
Couple that performance difference with the lower power consumption, lower noise, and higher threshold for damage, and it's a no-brainer: this is the single most price-efficient upgrade you can make to a laptop to boost overall performance, responsiveness, and battery life.
I wish I could justify buying one now, but I can't. However, 12 to 18 months from now I will probably be shopping around for a new laptop, and when I do I won't be settling for anything but an SSD. The benefits are just too great to ignore.
Re:Oooh. Questions Still Remain... (Score:2)
As long as they don't wear out in months instead of years. I'm still leery of just how quickly you can start killing one of these when it's hosting the swap file. And I have yet to hear data on just how many R/W cycles 34nm cells are good for.
Re: (Score:2)
If you're really that worried about it, just throw a VelociRaptor or something in your machine, put your swap file on that, and use the SSD for everything else.
Re: (Score:2)
I bought a 4GB Gigabyte iRAM box specifically for the swap file on an SSD system.
Re: (Score:2)
Unless your system is maxed out on ram, I don't see the point. 4GB of extra ram will give you the same ability as a 4GB swap file. I've never had any problems running either windows or linux with no swap as long as you have sufficient ram (under windows, the only downside is that I think it won't be able to give you any debug info if the entire OS crashes, because the swap file is where it dumps the crash log)
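If you want to verify that a Linux box really isn't touching its swap before you remove it, two stock commands will show you:

free -m            # the "Swap:" row shows total and in-use swap
cat /proc/swaps    # lists active swap areas; an empty list means no swap is configured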
Swap file that doubles as a hibernation file (Score:2)
I've never had any problems running either windows or linux with no swap as long as you have sufficient ram (under windows, the only downside is that I think it won't be able to give you any debug info if the entire OS crashes, because the swap file is where it dumps the crash log)
Unless you have the system set to share one file between swap and hibernation, and your combined swap-and-hibernation file is smaller than RAM. I've read comments in other articles telling how someone had to close programs before the computer could hibernate properly; otherwise, it would just suspend.
Re:Oooh. Questions Still Remain... (Score:5, Informative)
This has been covered many times, and the numbers are good. I can't recall the article, but basically if you write 20GB per day, you'll get more than 5 years out of it thanks to wear leveling and extra spare area (SSDs actually have more capacity than they make available to you). Now, you might scoff at that, but:
1) 20GB/day is a lot for the typical user.
2) People who routinely do more than 20GB/day probably need a lot more storage than SSDs currently provide (you are talking about filling the drive in 4 days), so you probably won't be using an SSD for those purposes anyway.
3) People who buy into SSDs at this point in time are typically more on the cutting edge, and are likely to have moved on before the drive wears out.
4) When the drive finally does start having problems, my understanding is that it won't just fail and you'll have lost data. The failure should happen on write, and if it fails to write that will be detectable. If it writes successfully, then it should be readable. If it does fail, I believe that part will just be marked inaccessible and the data will be written somewhere else. The drive should (again, as far as I know) provide details of the failure to SMART and other disk utilities, so the problem can be detected before it progresses to a critical stage. This is much better than magnetic media, where the typical failure is that you go to read data and it is suddenly inaccessible.
Of course, this is all just what I've read about previous generations. I have no data about the 34nm, but I have no reason to suspect it's any worse.
PS. If you want to know how much you currently write to disk and you run a Linux system, check out /proc/diskstats. The 10th column should be the number of sectors written. Each sector is 512 bytes, so take value*512/1024/1024/1024 and you'll get the number of GB each device has written since bootup.
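To sanity-check that 20GB/day figure, here is a back-of-envelope endurance ceiling (the 10,000 erase cycles per MLC cell comes from a comment further down; treating write amplification as negligible is my assumption):

echo "80*10000" | bc           # 800,000 GB of raw write endurance on an 80GB drive
echo "800000/20/365" | bc -l   # ~109 years at 20GB/day, so 5 years is a very safe floor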
Re: (Score:2)
The interesting difference between SSDs and platter-based drives is that a write failure is not a warning sign that your heads are about to crash and you're going to lose the whole drive; it's just a failure of that one sector. Given extreme use over an extended period, the sectors would start to fail one by one, but no data should be lost: the drive capacity would start to shrink, but the rest of the drive would be fine. Head crashes will be a thing of the past, thank god.
Even when the entire device's writes a
Re: (Score:2)
Re:Oooh. Questions Still Remain... (Score:4, Informative)
Cool, thanks for the tip!
awk '/[sh]d[a-z] / {print $10*512/1024/1024/1024}' /proc/diskstats
Re: (Score:2)
Re: (Score:2)
When the drive finally does start having problems, my understanding is that it won't just fail and you'll have lost data. The failure should happen on write, and if it fails to write that will be detectable. If it writes successfully, then it should be readable. If it does fail, I believe that part will just be marked inaccessible and the data will be written somewhere else. The drive should (again, as far as I know) provide details of the failure to SMART and other disk utilities, so the problem can be detected before it progresses to a critical stage.
That's what I've read too, but my experience has been different. I had two of the first affordable SSDs, made by OCZ and with the infamous JMicron controller. I was having serious issues with data corruption quite soon after OS installation and I wasn't sure if it was something with the controller and Linux. I ended up using some *nix utility designed to fill the drive with a byte combination and then read it back and see if it was correct. There was apparently a multi-megabyte section of the drive that
Re:Oooh. Questions Still Remain... (Score:4, Informative)
Yes, I'm aware of what is in that document (that's how I figured out what the columns were to begin with). That document skips the first 3 columns of the output in its numbering (major device number, minor device number, and device name); it considers column 4 to be field 1. Not sure why they wrote the document that way, but PsychiKiller's command above uses awk to grab the 10th column, and that does indeed give you the number of sectors written.
Re: (Score:3, Interesting)
But that IS extending the life. Without wear leveling, if I've got an 80GB drive and I store 50GB of data on it which I frequently modify, then after X years that 50GB will be worn out and I'll be left with 30GB. That isn't enough for me to use, so essentially the drive is dead as far as I'm concerned. Now consider a drive with wear leveling. After X years, I will only have used up 5/8 of th
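Filling in the arithmetic that comment starts (a sketch of the same 80GB drive / 50GB of frequently modified data scenario):

echo "scale=2; 80/50" | bc    # => 1.60: wear leveling spreads writes over all cells, so wear-out takes 1.6x longer
echo "scale=3; 50/80" | bc    # => .625: the 5/8 of total cell endurance consumed after the same X years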
Re: (Score:2)
Intel rated the first generation X25-Ms at 100GB/day for 5 years; I'd be surprised if these were significantly worse.
Re: (Score:2)
One assumes they are MLC, which are still good for about 10,000 write cycles. SLCs are good for 100,000.
The controller does a very good job of cycling "sectors" used, so the whole disk gets good use, rather than the same areas being overwritten constantly. The MTBF for SSDs is much higher than for conventional drives as a result, although the figure is less relevant as it's much more down to usage than anything else.
Keep enough free space on the drive for the controller to do its cycling, don't use it for constant
Re: (Score:2)
If they'd use OUM for their memory modules they'd not have to worry about r/w cycles for a drive that is used for swap.
Re:Oooh. (Score:4, Interesting)
Let's make some wild predictions based on recent price trends. (Trends found [mattscomputertrends.com] here [mattscomputertrends.com]). Over the last few years, flash memory has been increasing in GB/$ at a rate of 185% per year. Meanwhile, hard drives have slowed to only 42% improvement per year.
Based on these trends, here is the estimated cost of 10 TB using either technology:
July 2009: Platter = $750 [newegg.com], Flash = $28,125 [google.com]
July 2010: Platter = $528 [google.com], Flash = $9,868 [google.com]
July 2014: Platter= $130 [google.com], Flash = $150 [google.com]
July 2019: Platter= $23 [google.com], Flash = $0.80 [google.com]
July 2024: Platter= $4 [google.com], Flash = $0.004 [google.com]
In July 2024, a 10 PB flash drive would cost about $4 [google.com]! Of course, we can't assume these trends will continue, but it seems a good bet that we won't be worrying about the size of our mp3 collections. The traditional hard drive may only have five years of competitive life remaining.
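Those jumps are easy to audit as straight compound-growth arithmetic (185%/yr means flash cost divides by 2.85 each year, 42%/yr means platter cost divides by 1.42; starting prices are the July 2009 figures above):

echo "scale=2; 28125/2.85^5" | bc     # flash, July 2014:   ~$150 for 10TB
echo "scale=2; 750/1.42^5" | bc       # platter, July 2014: ~$130 for 10TB
echo "scale=4; 28125/2.85^15" | bc    # flash, July 2024:   ~$0.004 for 10TB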
Re: (Score:3, Funny)
Let's make a few predictions based on recent trends:
July 2007: number of wives = 0
July 2009: number of wives = 1
July 2011: number of wives = 2
July 2013: number of wives = 3
July 2015: number of wives = 4
July 2017: number of wives = 5
July 2019: number of wives = 6
July 2021: number of wives = 7
Gosh, I'll need to implement wear levelling soon, too.
Extrapolation: almost as good as copulation.
Re: (Score:2)
"boost overall performance, responsiveness, and battery life"
That's the crux: SSDs boost performance pretty much only when doing random reads. Not random writes, not sequential reads, and not anything that isn't HD-related. Basically, you're boosting boot times, app launches, game level loads... and anything else that has to do with disk access, exclusively.
$200-400 is a lot to pay for a boost, even sizeable, in those rare occasions. They don't help with anything CPU-, RAM- or I/O-intensive. And cost pretty much the
Re: (Score:2)
So what the hell are you talking about? Did we catch you talking about something you haven't researched at all, again?
Re: (Score:2)
Try and put things in perspective. On a desktop computer, what % of the time is spent doing disk access? Actually, what % of the time is spent doing blocking disk access, because background ones are not really noticeable, fast or slow.
Anandtech found sequential writes to be 50% faster than an HD (192 vs 120 MB/s). That's good, but not incredible, especially if your OS or HD does any kind of write caching.
Same remark for random writes, though SSD's advantage is much larger then: small % of time spent doing t
Re: (Score:2)
Try and put things in perspective. On a desktop computer, what % of the time is spent doing disk access? Actually, what % of the time is spent doing blocking disk access, because background ones are not really noticeable, fast or slow.
It isn't about what % of the time is spent doing disk I/O. On my system a low % of time is also spent rendering 3D graphics; just the same, I have a good 3D graphics card (8800GT).
Anandtech found sequential writes to be 50% faster than an HD (192 vs 120 MB/s).
The only HDs that push that much write speed are also expensive. We aren't talking about those $100/TB drives here, which are going to struggle with 60MB/sec sequential write. Drives like the VelociRaptor approach $1/GB. If you are in the market now for a high performance HD, the jump to SSD isn't as big as people make it out to be.
Higher level tests show really negligible performance gains.
Re: (Score:3, Interesting)
>Sequential read on the SSD is over 6x faster, and sequential write is 2x faster,
>but for the performance where it matters the difference is much more noticeable.
>Random read on the SSD is nearly 140x faster, and random write is over 40x faster.
So
>Not random writes, not sequential reads, and not anything not HD-related.
is wrong.
It also seems to me that you don't really need to say
>[no performance increases on] anything not HD-related.
or
>They don't help with anything CPU-, RAM-...-intensive
Re: (Score:2)
http://images.anandtech.com/graphs/intelx25mg2perfpreview_072209165207/19505.png [anandtech.com]
tests say sequential write = 50% faster than an HD, not twice as fast. Maybe 2x faster than older SSDs, but not than HDs. Again, that's 50% when you're doing disk writes, which really is not that often. Plus those disk writes need to be "blocking", not done in the background.
My point is that SSDs boost performance in the very rare cases where
1- you're doing "blocking" disk IO
2- SSD are significantly faster than HDs
That's not a lot.
Re: (Score:2)
A high performance 10K RPM drive is going to cost $0.66/GB at best, while a 15K RPM drive will easily cost $1.00/GB or even a lot more. SSDs are considerably faster than these things, and look even better compared to the commodity drives.
Re: (Score:2)
Define laughably. The first SSD I bought, around '94-95ish, was 128KB. At the same time, my laptop had a 60MB hard disk. The hard disk was worth about £80, the SSD cost £30, so the SSD cost around 180 times as much per unit of storage. The SSD had a few serious limitations. The transfer speed was very slow, but most importantly it was a single cell, so the only way of reclaiming space after deleting/modifying a file was to copy everything off, format it, and copy everything back.
Thi
Re: (Score:2)