Intel's First SSD Blows Doors Off Competition
theraindog writes "Intel is entering the storage market with an ambitious X25-M solid-state drive capable of 250MB/s sustained reads and 70MB/s writes. The drive is so fast that it employs Native Command Queuing (originally designed to hide mechanical hard drive latency) to compensate for latency the SSD encounters in host systems. But how fast is the drive in the real world? The Tech Report has an in-depth review comparing the X25-M's performance and power consumption with that of the fastest desktop, mobile, and solid-state drives on the market."
Oh Yeah? (Score:5, Funny)
My SBDs will blow THEIR doors off.
Re: (Score:2)
Re: (Score:2)
Re:Oh Yeah? (Score:4, Funny)
That's the beauty of it. You will never know!
Re: (Score:3, Informative)
--sabre86
Re:Oh Yeah? (Score:5, Funny)
That's not pedantry, it's rigour.
Thank you for taking the time to correct my misuse of a sixty year old acronym in an off-hand quip replying to a fart joke. It is this level of attention to detail which makes slashdot what it is.
Well, a step in the right direction (Score:5, Insightful)
A step in the right direction, but at $600 per 1000 I am gonna wait a bit longer before jumping on the SSD bandwagon.
Re:Well, a step in the right direction (Score:5, Funny)
Why? They're almost free at 60 cents each :-P
Re: (Score:2, Funny)
Why? They're almost free at 60 cents each :-P
Verizon cents.
Re: (Score:2)
I'm happy with my WD Velociraptor for right now. The Velociraptor is $300 for 300GB which is still steep, but it beat or matched the tested SSDs in quite a few tests.
The Velociraptor even beat the Intel SSD in several tests such as Windows Boot time (and it creamed it on anything that involved large amounts of writing / content creation since the Velociraptor gets 107MB/s write compared to 80MB/s).
Re: (Score:2)
Re: (Score:3, Funny)
Plus RAID-0 ain't all it's cracked up to be. I had a Dell XPS600 with RAID 0 and one of the drives went kaput. Guess what happens to all the other drives then? They're useless. Four drives in RAID-0 means you have four times the chance of having a dead weight for a system.
RAID0: Optimised for failure.
Re: (Score:3, Insightful)
Plus RAID-0 ain't all it's cracked up to be. I had a Dell XPS600 with RAID 0 and one of the drives went kaput. Guess what happens to all the other drives then? They're useless.
RAID-0 is exactly what it's cracked up to be. It just may not have been what you're looking for.
Re: (Score:3, Interesting)
The review is slashdotted at the moment so I can't RTFM, but...
If a velociraptor beat an SSD in boot time, well, something is wrong with their test, or perhaps the bios was waiting on the SSD to initialize (entirely possible based on the added intelligence on their controller chipset). I just went from an SLC SSD to a velociraptor and the difference is painful. Boot time is slower. The system is just 'laggier'.
You can't judge the differences between SSD and HDD from charts and graphs on review sites.
Re:Well, a step in the right direction (Score:5, Interesting)
A step in the right direction, but at $600 per 1000 I am gonna wait a bit longer before jumping on the SSD bandwagon.
I'd place an order for one this instant if I could. My company uses a relatively small database, on the order of 40GB of online data. It's running on 4 SCSI-320 Cheetah 32GB, 15K RPM drives in RAID 0. By all accounts, this single SSD would out-seek the Cheetahs, meaning that our website can serve more customers and more quickly. This is a total no-brainer for a lot of applications, even at the current price.
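For what it's worth, the "out-seek" claim can be ballparked from mechanical latencies alone. The numbers below are assumed typical figures for a 15K drive, not measured specs:

```python
# Random-read IOPS estimate for a 15K RPM drive from its mechanical delays.
avg_rotational_ms = 0.5 * 60_000 / 15_000  # half a revolution at 15,000 RPM
avg_seek_ms = 3.5                          # assumed average seek for a 15K Cheetah
hdd_iops = 1000 / (avg_rotational_ms + avg_seek_ms)
print(round(hdd_iops))                     # roughly 182 per spindle
```

Even four of those striped only gets you into the ~700 IOPS range, while reviews put the X25-M's random reads in the thousands, which is why a single SSD can out-seek the whole array.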
Re:Well, a step in the right direction (Score:5, Informative)
Before rushing to buy these for database use, I would want a good look at MTBF values. Especially MTBF values for really heavy use, which may be completely different from estimated desktop use.
Are you sure? (Score:3, Insightful)
Quote
4 SCSI-320 Cheetah 32GB, 15K RPM drives in RAID 0.
End Quote
What company would really want to run their DB on a RAID 0 (striped) disk setup? Doesn't this put it at risk from a single spindle failure?
Re: (Score:3, Insightful)
What company would really want to run their DB on a RAID 0 (striped) disk setup?
One who replicates the data to slower backup systems.
Does this not put it at risk from a single spindle failure?
If those were the only spindles involved, sure.
Re:Well, a step in the right direction (Score:4, Informative)
It's running on 4 SCSI-320 Cheetah 32GB, 15K RPM drives in RAID 0.
I hope you know how volatile RAID 0 can be. A problem with any single one of those drives will screw up the whole works until you can restore from a backup. I can understand wanting to avoid RAID 5/6 if there are a lot of writes to your DB, since write performance on those arrays is notoriously bad, and RAID 1 would double the hardware cost, but the ability to stay up and hot-swap in drives after a failure is priceless.
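The write penalties behind that RAID 5/6 warning can be sketched with the classic rule-of-thumb model; the per-drive IOPS figure is an assumed round number, not a spec:

```python
# Rough effective-IOPS model for small random writes on common RAID levels.
# Classic per-write I/O penalties: RAID 0 = 1, RAID 1 = 2,
# RAID 5 = 4 (read data, read parity, write data, write parity), RAID 6 = 6.
def random_write_iops(n_drives, drive_iops, penalty):
    return n_drives * drive_iops / penalty

DRIVE_IOPS = 180  # assumed figure for one 15K RPM disk
for name, penalty in [("RAID 0", 1), ("RAID 1", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(name, int(random_write_iops(4, DRIVE_IOPS, penalty)))
```

With the same four spindles, RAID 5 delivers a quarter of the random-write IOPS of RAID 0, which is exactly the trade-off the parent is weighing.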
Re: (Score:3, Interesting)
I hope you know how volatile RAID 0 can be.
Oh yeah, but we can do a bare-metal recovery in an acceptable amount of time, so a failure is more along the lines of "dangit, break out the tapes".
To answer other posters while I'm at it:
That chassis is maxed out on RAM. We could buy a newer, bigger system but this SSD would serve about the same ends for a lot less money and effort. Besides, at some point you have to flush those cached writes out to disk. Right now, that is sometimes a bottleneck on our system. If we could magically make those writes s
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
Re:Well, a step in the right direction (Score:4, Interesting)
I hope you know how volatile RAID 0 can be. A problem with any single one of those drives will screw up the whole works until you can restore from a backup
Oh my, pardon me, I am rolling on the floor laughing, biting the carpet and frightening the cat (ROFLBTCAFTC).
I remember reading these exact same arguments in articles written during the early days of computing, when people were complaining of the multi-platter nature of modern disk packs. These started hitting the market around 1963 I think. The argument went -- if you stack all those platters together, the failure of one platter would trash the entire set! Oh noes...
Re:Well, a step in the right direction (Score:4, Insightful)
And...what? It doesn't?
Re:Well, a step in the right direction (Score:5, Insightful)
Have you tried just putting 16GB of RAM in the database server? Nearly 16GB of cache for a 40GB database should work pretty well.
More generally, it's time to start thinking about DB servers that satisfy all reads from memory. It won't be long before the RAM available in a commodity server is larger than many shops' databases. Your caching model would want to be very different if you know you can cache everything.
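As a toy illustration of how much the caching model shifts, here's a hypothetical LRU simulation; the block counts and uniform access pattern are assumptions, not a model of any real database workload:

```python
import random
from collections import OrderedDict

def hit_rate(dataset_blocks, cache_blocks, n_reads, seed=0):
    """Simulate uniform random reads through an LRU cache; return hit fraction."""
    rng = random.Random(seed)
    cache = OrderedDict()
    hits = 0
    for _ in range(n_reads):
        block = rng.randrange(dataset_blocks)
        if block in cache:
            hits += 1
            cache.move_to_end(block)  # refresh recency
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / n_reads

# 40 "GB" dataset with a 16 "GB" cache, vs. a cache that holds everything
print(round(hit_rate(40_000, 16_000, 200_000), 2))
print(round(hit_rate(40_000, 40_000, 200_000), 2))
```

Under uniform access the 16/40 cache settles near a 40% hit rate, while the everything-fits cache climbs toward 100% as it warms up - a qualitatively different regime.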
Re: (Score:3, Insightful)
Disclaimer: I work in the data warehousing industry.
Re:Well, a step in the right direction (Score:4, Insightful)
It won't be long before the RAM available in a commodity server is larger than many shops' databases.
First law of data: data always expands to fill all available storage.
Second law: doubling your storage only buys you half the extra time you expected.
Final law: no storage is ever enough.
Re: (Score:2)
Well it's about $600 in bulk. I imagine the retail price will be a bit more than that. But suppose you can get one for just $600. What else could you do with the money?
You can buy 8 gigabytes of RAM for about $150 (you can even get ECC for that price if it doesn't have to be the fastest clocked RAM). So $600 would let you pimp out your server with 32 gigabytes of RAM - actually, not so much these days. I'd bet that for many applications the RAM will give a better performance increase than going to SSDs
Re: (Score:2, Informative)
Replying to you, since you seem serious, as opposed to sibling.
That's $600 per 80GB drive, with a minimum order of 1000.
You can't buy a single drive for $600. Or at least, not from Intel.
but is it fast enough (Score:2, Funny)
to run vista, or do you need a RAID array of these drives.
Re: (Score:3, Funny)
to run vista, or do you need a RAID array of these drives.
Vista does a lot better with slow hard drives than XP or most other operating systems, thanks to superfetch or whatever silly name they give to the precache of apps.
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:2)
Funny, it improves the speed on my DS4400's, DS4500's and DS4800's....
Re: (Score:2)
RAID doesn't improve speed, at least not by a large amount. RAID will save you time if a drive died and you can get your data back quicker. As for normal performance speed you are just as good with 1 drive.
Uh.... What ?
Re:but is it fast enough (Score:5, Informative)
Re:but is it fast enough (Score:5, Informative)
That depends entirely on what kind of RAID [wikipedia.org] we're talking about...
Re: (Score:2)
RAID is an acronym for Redundant Array of Inexpensive Disks. There is no Redundant part in RAID 0.
Yes, and "hacker" means "enthusiastic computer explorer". Really, give up. RAID 0 is still recognized as RAID, regardless of what the "R" should mean, by everyone.
Re: (Score:2)
The point is, most people don't know or care what an acronym means anymore and it just becomes a term unto itself such as RADAR and PATRIOT Act.
More Details and Benchmarks Here (Score:5, Informative)
The PCMark Vantage tests are especially impressive: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/?page=7 [hothardware.com]
Damn it intel (Score:5, Funny)
You were only supposed to blow the bloody doors off!
Re: (Score:2)
Do you think I overreacted, Hal?
It's not the speed, it's the storage (Score:5, Insightful)
Re: (Score:2)
Yeah, i have 3TB of HDD's on my desktop. Someone let me know when they make 3TB SSD's that i can afford. :)
-Taylor
Re:It's not the speed, it's the storage (Score:4, Interesting)
At current improvement rates, I think that you're looking at 7-10 years before SSD becomes cheaper than 3.5" form factor drives for sheer storage. We seem to have been lagging at around a terabyte for a while. Meanwhile, it seems that SSD is doubling in capacity per $ at its 'sweet spot' each year at the moment.
Going by performance improvements, it'll only be 2-4 years before companies start replacing their platters with solid state for intensive database operations, especially those biased towards reads. Those 10k-15k RPM drives are significantly more expensive and store less than 7200/5400 RPM drives.
The article mentions $595. Looking it up, a 300GB 15K HD is $400 OEM. That's nearly four times the size of the 80GB SSD mentioned in the article. Figure on a doubling each year, and that'd be about 3 years before the SSD exceeds current models. Figure in the lower power requirements and such, and I can see SSDs selling well before reaching parity based purely on size - their improved seek time, lower power demands, etc...
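Treating those prices as given, the cost-per-gigabyte gap and the time a yearly doubling would take to close it work out like this (a back-of-the-envelope sketch, not a forecast):

```python
import math

# Assumed street prices from the post above: $595 for an 80GB SSD,
# $400 for a 300GB 15K RPM disk.
ssd_per_gb = 595 / 80
hdd_per_gb = 400 / 300
ratio = ssd_per_gb / hdd_per_gb
years_to_parity = math.log2(ratio)  # if SSD GB-per-dollar doubles each year
print(round(ratio, 1), round(years_to_parity, 1))
```

About a 5.6x gap, or roughly 2.5 doublings - consistent with the ~3-year guess.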
Re:It's not the speed, it's the storage (Score:5, Insightful)
Or you split up your expectations.
Honestly, how much space do you need for the OS and programs? Have an SSD for these functions, and a traditional HDD for pure space requirements. That'd be more economical too, at least in the short term.
Re: (Score:2)
Re: (Score:2)
I have a 74GB Raptor, split into two 36GB halves (XP/Ubuntu), and I get by on that as my OS drive. Seems enough for any programs I use plus 4 current-ish games (WoW, Oblivion, CoD4, Civ4, plus any expansions for each). It's a little restrictive, but I really don't regularly play 4 games anyway, so I'm fine with tossing one for Spore or whatever comes next. Admittedly, if I didn't have a 360 and did all my gaming on the PC, this probably wouldn't be sufficient.
This isn't to say you're doing it wrong or anythi
Re: (Score:2)
Seconded.
What gets really interesting is if you start thinking about these access times and such on your swap partition/file/drive/whatever. It's a hell of a lot less expensive than a ton of extra RAM, but still performs quite well, especially in random access. 80GB of an Intel SSD is still a lot cheaper than the equivalent amount of RAM, too.
Re: (Score:2)
Actually, it is the speed - the write speed. Most SSD's on the market right now have extremely slow write speeds, to the point that it can make running an OS off them quite painful.
First get performance to parity with hard drives on write (they already kill them on reads due to lack of seek times), and then start ramping up the capacity. I expect we'll see both of these well underway by the end of next year. 200GB SSD, anyone?
Nice, now maybe Vista will be snappy (Score:2)
These things cut latency by 2 orders of magnitude. Defrags are no longer necessary. 250MB/s damn near saturates the newest SATA gear.
Write/Read speed parity would be nice.
More details and Benchmarks here (Score:4, Informative)
Benchmarks start here: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/?page=4 [hothardware.com]
Blows doors off? I call bullshit. (Score:2, Interesting)
If anyone's seen the results, it's in first place in speed but not in a "door blowing manner". It's just slightly faster than the next guy. "Blows doors off" reads like marketing spooge trying to overhype something that has a small or no advantage over the next contender. Misleading title.
Re:Blows doors off? I call bullshit. (Score:5, Insightful)
If anyone's seen the results, it's in first place in speed but not in a "door blowing manner". It's just slightly faster than the next guy.
Pardon me, but it is "blowing down the doors" (and the house too) in some tests, like this one [techreport.com]. More than 3x the number of transactions of the second fastest flash drive? 7x faster than the slowest SSD drive? And the traditional HDDs are so crushed at the bottom I can't make out a ratio, but 30x or more? That is just ownage of the highest level. Yes, the write speeds aren't exactly compelling but for IO and read-heavy uses it's completely mindblowing.
Re: (Score:2)
Re:Blows doors off? I call bullshit. (Score:5, Funny)
Pardon me, but it is "blowing down the doors" (and the house too)
Yes, the write speeds aren't exactly compelling but for IO and read-heavy uses it's completely mindblowing
Great, first the doors, then the house and now your mind...
I guess if there's anything we've learned, it's that this drive really blows.
Re: (Score:2)
I'm being anal but ... you realize IO implies reading AND writing, right?
Re: (Score:2)
I'm being anal but ... you realize IO implies reading AND writing, right?
Short answer: you have the read and write performance you'd see in normal laptop/desktop/workstation use, which consists of fairly mixed sizes and randomness. Then you have what is typically database transactions - a huge number of small reads/writes which tend to saturate the controller, not the actual medium, unless the underlying medium is extremely fast to respond. Those specifically interested will check out read IOPS, write IOPS and various mixes for various block sizes, but in many ways it's a separate metric
Re: (Score:2)
These Intel drives are $595. Your $4,500 would buy 7 of these, for 560GB of storage, and 1750MB/s read / 490MB/s write in aggregate. Slice the speeds in half because you'll never balance loads that well, and you still get 875MB/s / 245MB/s. Slower writes but faster reads and 7 times the capacity.
Another option is to run your database/whatever entirely in ram.
I haven't priced machines with 64GB of RAM this month, but it was a little spendy last time I looked.
Re: (Score:2)
Re: (Score:2)
They did blow the doors off the competition because they actually have engineers that get it. They were able to make an MLC-based flash disk that is not only faster in every manner but has an amazing MTBF. This brings cheaper SSDs within reach. Look at how thorough the assessment of their MTBF calculations is, and it really shows they paid attention to every detail.
Re:Blows doors off? I call bullshit. (Score:5, Insightful)
If you read the article, NCQ actually makes sense. The Intel drive actually finishes requests before the CPU gets around to asking "are you done yet?". That time between the drive finishing and the drive being told what to do next is spent idle. By supporting NCQ, the drive can convince the CPU to send large batches of commands and get rid of that latency.
It's faster for the same reason that FTP is faster than IRC DCC. FTP just keeps sending bytes as long as the other end doesn't close the connection. IRC DCC sends a packet, waits for a reply, sends the next packet, and so on.
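A toy throughput model makes the same point with numbers; the microsecond figures here are made up for illustration, not measured from the X25-M:

```python
# Toy model of why command queuing helps a fast drive: with one outstanding
# command, the drive sits idle for the full host round-trip between requests;
# with a deep queue, the host keeps commands waiting so the drive never idles.
def throughput(service_us, host_latency_us, queue_depth):
    """Commands completed per second, assuming the drive services serially."""
    if queue_depth * service_us >= host_latency_us + service_us:
        return 1e6 / service_us  # queue deep enough: drive never goes idle
    return queue_depth * 1e6 / (host_latency_us + service_us)

SERVICE = 100    # us the SSD needs per request (fast!)
HOST_RTT = 100   # us the host takes to notice completion and send more work

print(int(throughput(SERVICE, HOST_RTT, 1)))   # stop-and-wait, like DCC
print(int(throughput(SERVICE, HOST_RTT, 32)))  # NCQ-style deep queue, like FTP
```

At queue depth 1 the drive idles for the whole host round-trip after every command; at depth 32 it stays saturated, doubling throughput in this example.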
One test they never run - FRAGMENTATION (Score:3, Interesting)
Since SSDs don't really have "sectors", do they fragment files the same way as HDDs?
Also, what would the defrag speeds be?
Re:One test they never run - FRAGMENTATION (Score:5, Informative)
Re:One test they never run - FRAGMENTATION (Score:5, Informative)
Yes, it would wear the disk out faster, but your original premise is flawed.
Clustering locations would allow for accessing large chunks of data with one fetch, instead of lots of little fetches. If you're old enough, think back to the Blitter on the Amiga and moving contiguous chunks of memory as opposed to fragmented blocks.
Remember, RAM can get fragmented just as badly as a hard drive.
Commercial uses don't fragment (Score:4, Interesting)
What would be interesting would be to put an Oracle database block interface on these puppies, instead of the normal filesystem interface. Then you'd just have the database say to the storage "get me block X" and it appears. No filesystem overheads - which, given the speed of these things, could turn out to be significant.
Looks like we'll be back on RAW "disks" for databases. Plus ça change!
Re:One test they never run - FRAGMENTATION (Score:5, Informative)
A good SSD has wear-leveling and write-combining techniques that keep the SSD "defragmented" automatically.
And it doesn't matter if the FS clusters are far apart as long as they are close to the SSD's hardware cluster sizes or the SSD intelligently combines them (which is what I believe Intel is doing since they claim a write amplification of only 1.1).
It's possible that the Samsung SLC chip stores data for the wear-leveling and write-combining operations which would remap the MLC in a non-fragmented way.
BTW, let me give you a naive wear-leveling / write-combining algorithm. I'm sure Intel has a better one because they've invested millions of dollars of research and the one I'm about to present to you could be done by a CS101 student:
1) You have a bit more than 80GB free for an 80GB drive (extra memory to take care of bad sectors, just like a normal hard drive, plus a small amount required for the wear-leveling / write-combining)
2) You treat most of the storage as a ring buffer that consists of blocks on two levels: the native block size and a subblock size. The remaining storage (or alternate storage which may be the Samsung SLC chip on the MLC drives) is used to journal your writes and wear-leveling.
3) You combine all writes aligned to the subblock size into a native block, write them out to the next free native block in the ring buffer, and keep a counter for the write to the block. If you run into a used block, increment a counter (for wear leveling); if the counter is below a certain value, you skip it and move to the next free block; otherwise you move the used block (which has been stagnant) to a more frequently written-to free block (which will now take less of a burden since it's had a stagnant block moved into it).
4) Anytime you make a write, the new sectors are updated in the memory area used for journaling / wear-level / sector remapping.
Assuming your reads can be done fairly quickly at the subblock level, it never matters if you have to "seek" for the reads and the drive won't fragment on writes because they are combined into native block sizes.
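For the curious, the core of steps 2-3 can be sketched in a few lines. This is a deliberately stripped-down toy (no journaling, no bad-block handling, no hot/cold block migration), just the ring-buffer write-combining and remapping idea:

```python
# Toy flash translation layer: logical subblock writes are buffered, combined
# into one native-sized block, appended at the ring cursor, and the
# logical -> physical map is updated. Real firmware adds journaling, garbage
# collection, and bad-block handling.
SUBS_PER_BLOCK = 4  # subblocks per native flash block

class NaiveFTL:
    def __init__(self, n_blocks):
        self.wear = [0] * n_blocks  # erase count per native block
        self.mapping = {}           # logical subblock -> (block, slot, data)
        self.cursor = 0             # next native block in the ring
        self.pending = []           # buffered (logical, data) writes

    def write(self, logical, data):
        self.pending.append((logical, data))
        if len(self.pending) == SUBS_PER_BLOCK:
            self._flush()

    def _flush(self):
        blk = self.cursor
        self.cursor = (self.cursor + 1) % len(self.wear)
        self.wear[blk] += 1  # one erase+program per native block
        for slot, (logical, data) in enumerate(self.pending):
            self.mapping[logical] = (blk, slot, data)
        self.pending = []

ftl = NaiveFTL(n_blocks=8)
for i in range(32):                   # 32 subblock writes -> 8 block programs
    ftl.write(logical=i % 4, data=i)  # hammer the same 4 logical subblocks
print(ftl.wear)                       # erases spread across all 8 blocks
```

Even though the workload hammers the same four logical subblocks, the erases end up spread evenly across all eight native blocks, which is the whole point of the scheme.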
Re: (Score:2)
Even tech without this will usually allow lists or queued fetching to hide the overhead of many little fetches.
The important thing is to have the subblock size to be at least large enough that the time penalty for switching native blocks is minimal compared to the actual time of
Re: (Score:2)
So do those RAM defraggers even work? Or do they not help?
Re: (Score:2)
I have no idea if the products work. RAM does get fragmented, but nothing a quick reboot won't fix. Hard drives need explicit defragmenting, but that scares me with RAM. I don't want Program A trying to move crap around in memory. Actually, I can't even see *how* they work, moving other program's data around. If program B expects to find a data block at $C000, it better be there and not bounced around by some defrag program.
I personally wouldn't waste my time on them.
Re: (Score:3, Interesting)
I don't mean to attack directly, but you seem to be just well informed enough to be dangerous. First, you seem to think a quick reboot is something that should be no big deal and happen rather often. This is kind of appalling. If you need to reboot a computer often (more than to install new hardware), something serious is wrong with it or its OS.
Secondly, this phrase, "Hard drives need explicit defragmenting" is misleading as all hell. Hard drives do not need defragmenting. They're made of platters, he
Re: (Score:2)
Generally not. What a RAM defragger does, all it CAN do, is request a metric fuckton (not imperial... people often get those confused) of memory, which shoves almost everything currently running into swap, and then it releases it, so hopefully the OS reads the pages back from the swap in larger cohesive blocks.
In short, no, there's no reason for it. If there was, it'd be recommended for use on memory-hungry server applications in the enterprise, and I have never seen that. Operating systems have improved
Re: (Score:2)
You really don't need to defrag your SSD / USB flash drives. Just as there are defrag utilities for your hard drives, there are defrag utilities for the RAM in your PC. The last time I ran one of those was perhaps 10 years ago. Do a Google search for RAM Defrag and you will find them. The times I've done it with RAM were to clean up after programs with memory leaks, not for real defrag use.
The fact is in very few cases do you ever want to do this. The benefits just are not there to justify an
Re: (Score:2)
There's only one benefit you can have from defragmenting a solid state disk -- you free up space.
On a heavily fragmented drive, the information on how to jump all over the disk to read the file has to be stored somewhere. Depending on the file system, it can either be in a block allocation map or file allocation table (BAM or FAT), which grows quicker the more fragmented the disk gets, or in continuation blocks (extents), where the end of a file block tells the file system "jump to sector NNNNN block MMMM"
Re: (Score:2)
SSDs don't have seek times, so all blocks have the same access time, which means fragmentation isn't an issue.
Re: (Score:2)
Re: (Score:2)
Re:One test they never run - FRAGMENTATION (Score:4, Informative)
You can't grow a file in the middle. There's no filesystem call that can do that.
Fragmentation only happens if you append to a file, but that kind of fragmentation shouldn't be a problem for an SSD, because all blocks (except the last) will be full, and an SSD doesn't read the 'next' block any faster than any other block.
Gonna Take a Little While Yet (Score:2, Insightful)
However, they're going to need to get a lot cheaper, and we're going to need to see capacities in the hundreds of gigabytes before they start to take off, but take off they will.
Re: (Score:2)
Re: (Score:3, Informative)
Write rates aren't THAT impressive, good but meh.
Less heat depends on the device; I've seen plenty of HOT SSDs, presumably due to the density of silicon in them and their being first-generation devices.
Better power consumption ... where? Every SSD I've seen lacks a power-saving mode, and in power-saving mode, as a general rule, mechanical drives are less hungry than SSDs.
They are really only compelling if you need fast seek times or for use in a laptop where shock (head strikes) is a potential issue at this p
Re:Gonna Take a Little While Yet (Score:4, Informative)
Here's my concern in a nutshell:
Assuming a degenerate workload, with a naive algorithm that never remaps existing data except when it is written, death is swift. Assume a 256 KB flash block. Assume a 4 GB flash device with 2% spare. Assume 70 MB/sec. transfer rate. Assume TCQ/NCQ so that you can queue up requests without waiting for the previous request to complete. At 2%, you have about 81.92 MB of spares, or about 328 spares. You have to erase a block containing 256KB at once (one entire flash block). Write random data on a single data block over and over without caching. At 70 MB/sec. divided by a 256 KB block, you can write 280 blocks per second. That comes to about 1.17 seconds to go through all of the spares once. With a 10,000 erasure limit, that means you destroy all the spares in about 3.25 hours. At that point, no further writes can occur because erasing and rewriting a block in place is inherently unsafe. Obviously for a 60 GB disk, multiply the numbers by 15. Even with 100,000 cycle flash, one could kill a drive with a naive algorithm in a matter of weeks. Okay, so it wouldn't be quite that fast because you'd have to issue write cache flush instructions between each write, but you're in the ballpark.
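Re-running that arithmetic under the stated assumptions (a quick sanity-check script using the same numbers):

```python
# Degenerate-workload wear-out estimate: 4 GB device, 2% spare area,
# 256 KB flash blocks, 70 MB/s sustained writes, 10,000 erase cycles.
KB, MB, GB = 1024, 1024**2, 1024**3

spare_bytes = 0.02 * 4 * GB
spares = spare_bytes / (256 * KB)        # native blocks in the spare pool
blocks_per_sec = 70 * MB / (256 * KB)    # erase/program rate at full speed
secs_per_pass = spares / blocks_per_sec  # one trip through every spare
hours_to_death = 10_000 * secs_per_pass / 3600

print(round(spares), round(secs_per_pass, 2), round(hours_to_death, 2))
```

That's ~328 spares, 280 block writes per second, and roughly 3.25 hours to burn through 10,000 cycles on the spare pool.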
On the flip side, with a typical workload, a drive would likely last several years even with such a naive algorithm. This is why I'm concerned. It is quite possible for a company to implement a remarkably naive wear leveling algorithm and mostly get away with it except for a few unlucky people who end up with data loss. We saw this in the HD industry not too long ago with IBM claiming after the fact that their drives were not designed for continuous use. With such a history of reliability corner-cutting from storage vendors, I think there's good reason to expect better transparency from the flash drive vendors about how they are doing wear leveling, particularly if these products are expected to be used in enterprise installations as this drive supposedly is. Fool me once and all that....
I won't even get into the question of how one can possibly achieve anything approaching a 1.1 write amplification rate short of building custom flash chips that allow per-page erasure.... Maybe for certain synthetic workloads, but not for a degenerate workload (e.g. write blocks sequentially with a stride length of the same size as (or larger than) the physical flash block size until you exceed the capacity of the write cache, rinse, repeat).... Otherwise, that seems at least an order of magnitude lower than is plausible. I'd have to see white papers explaining exactly how they're doing this miraculously good wear leveling before I'd trust any low-cycle-count SSDs in anything resembling a production server....
Increase the capacity and lower the price (Score:2)
Other than that, we can already say that the days of magnetic media are numbered. The technology is here, we now only need to wait a bit. I give it three to four years at most.
NAND versus Memristor? (Score:2)
How different is NAND flash memory compared to Memristor technology and would Memristors make a better SSD?
Re: (Score:2, Insightful)
Real use for SSD (Score:3, Insightful)
Western Digital blah blah, 2.5" mobile blah blah. How do they compare to the mainline Hitachi and Seagate 15k Fibre Channel? EMC's SSD offerings? I want to know what I can expect for data warehousing on Oracle RAC.
Re: (Score:2)
Grab a price/performance ratio on all of those you listed, compare it to the ratio this Intel SSD has, and get back to me. Then put them in a RAID config. Not to mention... how many of those will fit into a 2.5" form factor? I don't think any of them. This is big news for mobile speed, and for compact datacenter needs.
far from the first (Score:2)
http://www.pcworld.com/businesscenter/article/149792/intel_launches_smaller_ssd_for_netbooks_minidesktops.html [pcworld.com]
Intel appears to have actually jumped into the SSD fray before this.
Unfortunately, reviews have been lackluster.
SSD on PS3? (Score:5, Interesting)
Re: (Score:2)
Yes. Much like the OS on a computer being installed on an SSD is a lot like the TRS80, Commodore, Apple2, Atari, etc with the OS built into a ROM-- except this time around we get to write to it freely.
Solid state storage should be very interesting in the coming years. Especially if there becomes a way to issue more writes before failure.
Thinking about using SSD for external backup (Score:2, Insightful)
Anyone know about the general longevity of these devices?
The shelf life of a hard drive isn't incredibly impressive.
Price is over-rated (Score:5, Interesting)
I ended up buying a refurb Dell laptop for around $1000 with a 64 gig SSD. Was it the latest and greatest? Nope. But it was about $150-200 more than a similarly equipped computer with a traditional drive (which, of course, was larger). Since the only significant problems I've ever had with my two prior Dell laptops (admittedly a small sample) involved the hard drive, going with the SSD (especially when you include the "cool" factors -- both temperature and nerd-ism) was an easy decision.
But the point is that as SSDs become more prevalent, they become available at cheaper prices. I'm sure that as the Intel drives are rolled out, the "obsolete" drives currently on the market will continue to fall in price and become available to bottom-dwelling cheap-o-s like me who may not be able to justify $1000, but can rationalize $200 without a whole lot of difficulty.
It's only a teeny bit faster than my Velociraptor (Score:2)
In fact my VR destroys it in write speed...I'll stick with it for now.
Write Speed (Score:2)
Why don't they put 1GB of RAM on the thing with a battery and create a huge write cache? That ought to make the write speed almost a non-issue.
Re:Your SSDs (Score:5, Funny)
Those're STDs.
Re:Your SSDs (Score:5, Funny)
Those're STDs.
It burns when I read/write
Re: (Score:2, Funny)
Re: (Score:2, Informative)
Re:where is the (Score:5, Funny)
Probably right next to the dlsyexia tag.
Re:where is the (Score:5, Funny)
The preferred spelling is lysdexia.