All Solid State Drives Suffer Performance Drop-off 150
Lucas123 writes "The recent revelation that Intel's consumer X25-M solid state drive had a firmware bug that drastically affected its performance led Computerworld to question whether all SSDs can suffer performance degradation due to fragmentation issues. Vendors are well aware that the specifications listed on drive packaging represent burst speeds achieved only with sequential writes; after use, performance drops markedly over time. Drives with better controllers tend to level out, but others can suffer lasting performance problems. Still not fully baked are benchmarking standards, expected out later this year from several industry organizations, that should eventually compel manufacturers to list actual performance for sequential and random reads and writes as well as the drive's expected lifespan under typical conditions."
To test (Score:5, Interesting)
Just put SSDs into Usenet or torrent servers and use them as /var/log mount points... you'll soon see real-world results for how well they hold up compared to old-fashioned hard drives!
Re: (Score:2, Interesting)
Fill it with applications, an OS (Mac, Win, Linux across 3 drives), MP3s, lots of JPEGs, text files, and short and long movie files (2~50~650 MB).
Get the RAM down to 1-2 GB and let the OSes thrash as they page in/out, and watch the 3 computers over a few weeks.
Automate some HD-intensive tasks on the small amount of space left, and let them run 24/7.
Hope that Mac or Linux will keep files in different ways and use the little space in strange ways too. We can hope OS X and t
Just a small dip in performance (Score:5, Informative)
And this seems like the sort of issue that will be resolved in the next generation, anyway.
Re:Just a small dip in performance (Score:4, Funny)
Yeah, that's a great solution. Wipe a nice fat drive array, then start over. Right. Wipe it. Got it.
Re:Just a small dip in performance (Score:5, Informative)
Totally different issue. The problem here is inherent in all flash drives, but can be mitigated by clever controller design. AnandTech made an extensive report on this issue [anandtech.com].
Re:Just a small dip in performance (Score:5, Informative)
Not all flash drives have the issue. What really happens is that your small-write IOPS will be on the low side and your sequential writes will always be full speed *unless* you implement some form of write combining. Write combining cheats a bit by taking small random writes and writing them in a more sequential fashion to the flash itself.
The catch is that when you come past that now fragmented area, the controller has to play musical chairs with the data while trying to service the write originally requested by the OS. End result - slower write speed.
Some well behaved controllers (Intel, Samsung) will take a little extra time to defragment the block while it's servicing the sequential write. Optimized controllers (Intel M series) will now rarely fall below their advertised write speed of 80 MB/sec.
Other, more immature controllers leave the data fragmented and simply move the whole block elsewhere. This results in compounded fragmentation, which can eventually drop write speed to 1/3 to 1/5 of its speed when new.
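A toy model of that "musical chairs" merge cost (block size, counts, and function names here are illustrative only, not taken from any real controller):

```python
# Toy model: count flash-sector copies a controller must perform when a
# sequential OS write lands on a region fragmented by earlier random writes.
# All numbers are illustrative (128 KiB block / 512 B sectors).

SECTORS_PER_BLOCK = 256  # 128 KiB erase block holds 256 x 512 B sectors

def merge_cost(block_map, lba_start, n_sectors):
    """Sectors the controller must relocate to service a sequential
    write of n_sectors starting at lba_start.
    block_map: dict mapping lba -> physical erase-block id."""
    blocks_touched = {block_map[lba]
                      for lba in range(lba_start, lba_start + n_sectors)}
    # Every touched physical block must be read, merged, and rewritten whole.
    return len(blocks_touched) * SECTORS_PER_BLOCK

# Clean drive: the region lives in one erase block -> one block rewritten.
clean = {lba: 0 for lba in range(256)}
# Fragmented drive: prior random writes scattered it over 8 erase blocks.
fragged = {lba: lba % 8 for lba in range(256)}

print(merge_cost(clean, 0, 256))    # 256 sector copies
print(merge_cost(fragged, 0, 256))  # 2048 sector copies, ~8x the work
```

The model ignores real-world details (spare area, partial-block merges), but it shows why the same sequential write gets slower once the underlying blocks are fragmented.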
I authored the original articles on the matter:
http://www.pcper.com/article.php?aid=669 [pcper.com]
http://www.pcper.com/article.php?aid=691 [pcper.com]
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:2)
I've been curious if this is an issue when using a RAW flash device and a modern file system designed for flash. Would you get any better performance and simplicity by moving the wear leveling and write combining up to the OS level?
Re: (Score:2)
File systems designed for flash drives should bring an improvement. Typically, when you delete something on your computer, the filesystem does not actually remove the file from the device; it just marks the area taken by the file as available. However, this takes place at a level above where the drive lives, so the flash drive does not know what is important and what has been "deleted" - all it sees is data on the drive. This means the drive can't always just have an area available to write a file
Re: (Score:3, Funny)
Authoritative comments, on Slashdot? Are you sure you're on the right site?
Re: (Score:2, Insightful)
"Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."
Right, the answer to every Linux problem. Redefine the unavoidable to "not a problem" while the other guy "has it" and "it is bad".
Re:Just a small dip in performance (Score:5, Informative)
To elaborate on the issue, think about how regular disks have sectors that tend to be around 512 bytes. This expectation is "baked" into a lot of filesystems, because of how ubiquitous it is.
This has to do with fragmentation of device sectors when those sectors don't conform to filesystem expectations. That is, the filesystem writes out a 512 byte sector, but the flash drive can only commit 128kb slices at a time. Each of those slices contains 256 sectors. That's not a good situation, because if the filesystem or the controller attempting to correct for the filesystem's writes handles the situation poorly, then that 128kb slice soon becomes fragmented. The fragmentation within that block causes your problems. The problem is when contiguous 512-byte sectors can no longer be allocated contiguously in the 128kb slices. Say you have a few sectors there, a few there, a larger chunk there, maybe even a whole 128kb slice that all corresponds to a single linear mapping of filesystem sectors.
All filesystems that write out blocks smaller than 128kb (or whatever size the flash drive uses) have this problem. The problem is twofold: first, filesystems were written with hard disk drive specifications in mind, and those specifications became so ubiquitous, with so much momentum behind them, that writing for a different set of hardware is now difficult. Second, because of the inertia in expecting every SATA device to be a hard disk - an expectation both filesystem writers and hardware manufacturers have come to share - there is no "alternative" or dynamic method of addressing a disk.
Everything, to the filesystem, is a spinning disk. If you made a disk that actually had 128kb sectors, you would probably encounter the same problem these flash drives have. The stupid realization now, a decade or more on from when many of these standards were drafted, is that these standards permitted very little, if any, deviation. All SATA disk devices are written with the expectation that they are spinning media. All SATA disk devices and the filesystem and the OS have expected this, and certain particular numerical definitions for things like sectors for so long, that no one ever thought to write a standard that would allow filesystem writers to create a filesystem that reasonably deals with new devices.
And because the SATA layer is so simplified and dumbed down for spinning media, there's no way in the SATA communication channel to manually defragment a sector. Apparently SATA/SAS are so set in stone, so stubborn, that Intel and every other SSD manufacturer has to emulate a spinning media device, and there's no other good way to do it.
Literally, the filesystem cannot fix this problem, the OS cannot fix this problem. Because at that level of operation, the disk is seen as another dumb spinning disk with 512 byte sectors. You can't actually "see" where those sectors are located or attempt to defragment them from that level. The only way to do it is to issue a SATA command that wipes the entire drive, which allows the controller to flush its internal sector mapping.
Re:Just a small dip in performance (Score:4, Interesting)
Come again? I'm not aware of SSDs doing any mapping at that level of granularity; to do so would mean that a 256 GB hard drive would require something like 2 GB of storage (assuming an even 32-bit LBA) just to hold the sector mapping table, and that would have to be NOR flash or single-level NAND flash, making it really freakishly expensive.
All SSDs that I'm aware of work like this: you have a logical block address (sector number). These are grouped (contiguously) into pages (those 128KB "slices" you referred to). If you have a 128 KB page size, then sectors 0-255 would be logical page 0, sectors 256-511 logical page 1, and so on.
These logical pages are then arbitrarily mapped to physical pages. This results in a mapping table that is a much more plausible 8 megabytes in size for a 256 GB flash drive.
Thus, any fragmentation within a flash page is caused entirely by the filesystem being fragmented, not by anything happening in the flash controller. The performance degradation from full flash has nothing to do with fragmentation. It is caused by the flash controller having to erase an already-used page before it writes to it. The only way to avoid that is to erase unmapped spare pages ahead of time.
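To put numbers on the two mapping granularities being compared (a back-of-envelope sketch; the 4-byte entry size is an assumption, matching the 32-bit LBA mentioned above):

```python
# Back-of-envelope mapping-table sizes from the thread's numbers:
# per-sector mapping vs per-page (128 KiB) mapping on a 256 GB drive.

DRIVE = 256 * 10**9          # 256 GB of user capacity
SECTOR = 512                 # bytes per logical sector
PAGE = 128 * 1024            # 128 KiB "slice"
ENTRY = 4                    # assume a 32-bit entry per mapping

per_sector_table = DRIVE // SECTOR * ENTRY   # one entry per 512 B sector
per_page_table   = DRIVE // PAGE * ENTRY     # one entry per 128 KiB page

print(per_sector_table / 2**30)  # ~1.86 GiB -- roughly the "2 GB" quoted
print(per_page_table / 2**20)    # ~7.45 MiB -- roughly the "8 MB" quoted
```

Both figures in the comment check out, which is exactly why per-page mapping is the plausible design and per-sector mapping is not.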
Re: (Score:2)
I believe you are wrong, though I'm a bit fuzzy in the head at the moment.
http://hardware.slashdot.org/comments.pl?sid=1227271&cid=27883769 [slashdot.org]
Re: (Score:2)
I think I see part of the confusion in this thread. I've read some of the papers and they confused me at first. There are two terms called a "block". There is a logical block (512 bytes) and there is a flash block, which is a group of flash pages that must be erased together. Flash pages can be reordered within flash blocks, at least if you're talking about temporary log blocks. I haven't found any evidence that reordering is done in 512-byte quantities. Technical papers tend to use the word "sector"
Re: (Score:2)
> I'm not aware of SSDs doing any mapping at that level of granularity;
> to do so would mean that a 256 GB hard drive would require something
> like 2 GB of storage (assuming an even 32-bit LBA) just to hold the
> sector mapping table, and that would have to be NOR flash or single-level
> NAND flash, making it really freakishly expensive.
I'm sorry to inform you that, having said that, you've shown you have no clue about SSDs. NAND flash is specifically designed to accommodate mapping needs becaus
Re: (Score:2)
Yes, I know that extra NAND storage is traditionally used for the mapping. I was initially thinking this would increase write amplification by an order of magnitude or more until the device was full. I have since thought through this a few dozen more times and convinced myself that this could be avoided.
Re: (Score:2)
a 256 GB hard drive would require something like 2 GB of storage (assuming an even 32-bit LBA) just to hold the sector mapping table
That's part of the 7.4% difference between 256 GB and 256 GiB.
Re: (Score:2)
It's true that all modern hard drives can hold more data than they tell the operating system they can, but it has nothing to do with the GB vs. GiB thing - that's all about the difference between 2^30 and 10^9.
Re: (Score:2)
It's true that all modern hard drives can hold more data than they tell the operating system they can, but it has nothing to do with the GB vs. GiB thing - that's all about the difference between 2^30 and 10^9.
What I'm trying to say is that the underlying flash chips can hold 256*2^30 bytes, but the controller allocates closer to 256*10^9 bytes of that to user data. The rest is for sector remapping tables and for spare sectors to replace erase blocks that have been erased too many times.
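A quick check of that arithmetic (a sketch; how the spare area is actually allocated varies by vendor):

```python
# The 2^30 vs 10^9 gap: raw flash is built in binary sizes, but drives are
# sold in decimal GB, so roughly 7.4% of raw capacity is left over for
# mapping tables and spare erase blocks.

raw  = 256 * 2**30   # what 256 "binary" GB of NAND actually holds
sold = 256 * 10**9   # what a 256 GB drive advertises to the OS
spare = raw - sold

print(round(spare / sold * 100, 2))  # 7.37 -- the ~7.4% mentioned upthread
```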
Re: (Score:2)
To be honest, I do not see why SSDs can't use a more reasonable sector size. It seems to me that if SSD manufacturers want to replace hard drives, they should have known that most filesystems use block sizes below 4-8 KB -- and that they do this for good reason.
Hard drives can easily handle 128 kB blocks as well, but performance suffers when using such large blocks (mostly due to inefficient cache use -- hard drives these days take about the same time to randomly read/write 512 bytes as 128 kB).
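Rough arithmetic behind that claim (all figures are illustrative typical values for a desktop drive of the era, not measurements):

```python
# Why a 128 kB random I/O costs a hard drive about the same as 512 bytes:
# seek + rotational delay dwarfs the transfer time. Illustrative figures.

seek_ms  = 9.0     # assumed average seek + rotational latency, ms
mb_per_s = 100.0   # assumed sustained transfer rate, MB/s

def random_io_ms(nbytes):
    """Total time for one random I/O: fixed positioning cost + transfer."""
    return seek_ms + nbytes / (mb_per_s * 1e6) * 1e3

print(random_io_ms(512))         # ~9.005 ms
print(random_io_ms(128 * 1024))  # ~10.31 ms -- ~14% slower for 256x the data
```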
Re: (Score:2)
The reason flash memory doesn't use such small erase blocks is that the cost of the flash die goes up as the erase-block size goes down, since the circuitry that enables erasing a block must be replicated more often.
Re:Just a small dip in performance (Score:5, Insightful)
Grandparent, and the whole chain, are *way* off base here. SSD LBA remap table fragmentation has absolutely nothing to do with file system fragmentation. ext3 will cause just as much of a slowdown as NTFS would. I share the same appreciation for Linux as everyone else around here, but in this case it is in no way the magic bullet we might like it to be.
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:2)
Hold out your hands, you just won a dozen internets!
Re:Just a small dip in performance (Score:5, Informative)
Yeah, but the SSD wear leveling sits at a level below RAID. When you write to a specific area of the SSD, the wear leveling can remap that to wherever it needs.
Re: (Score:2)
Re:Just a small dip in performance (Score:5, Funny)
So this is a non issue for Windows users?
First generation SSD immaturity... (Score:2, Interesting)
... these things aren't going to be a big deal in the long run. I mean, who wasn't expecting some amount of technological immaturity? We shouldn't forget, though, that even with its immaturity it's still much faster than hard disk drives - but the SATA interface controller was not designed to handle such high speeds, not to mention much software is not geared or optimized for SSD usage.
Still price has come down considerably on many SSD's over the last 6 months, I was thinking about picking up an X-25 M for a
Re: (Score:2)
"This has nothing to do with technological immaturity or firmware bugs. It's simply a matter of SSDs using an optimization that hard drives can't due to their non-trivial seek times, and people are starting to realize that that optimization doesn't always work."
Actually it does, you should check out ANAND's article here:
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8 [anandtech.com]
There definitely IS some immaturity (broadly speaking) if you go through the entire article about how other SSD's were designed an
Re: (Score:2)
summary: SSDs are faster than HDDs, but SSDs degrade in a bad way due to fragmentation, and some have a stutter. But the fastest/best SSDs, even when degraded, are still faster than the fastest HDDs (they are also very expensive in comparison). If you use an SSD for /usr or something like that, which doesn't get a lot of writes/changes (and use the noatime mount option), you should have a more responsive system.
Re: (Score:2)
Thanks for that! :)
"Drastically affected its performance" (Score:5, Informative)
"Drastically affected its performance"
This is patently false. What's really happening is that SUSTAINED WRITE PERFORMANCE decreases by about 20% on a full drive compared to a fresh one. You might say 20% is too much, and I'd probably agree with you, except that ONLY sustained write performance is being affected.
Your read speed will not decrease. Your read latency will not increase. Unless you're using your SSDs as the temp drive for a high definition video operation (And why the hell would you for that? Platter drives are far better suited to that task between sequential write speed and total storage space) then you have nothing to worry about.
This happens on all drives, as the article title correctly states. The solution is a new command (TRIM) that lets the drive pre-erase blocks as the filesystem frees them, so the odds that you have to erase-then-write as you go along are decreased. Win7 knows how to do this.
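A sketch of why pre-erasing helps the write path (the costs are made-up illustrative numbers, not real device timings):

```python
# Why TRIM helps: with it, the controller can erase freed blocks in the
# background, so a later write finds a pre-erased block instead of paying
# for an erase on the write path. Timings below are illustrative only.

ERASE_COST, PROGRAM_COST = 2.0, 0.2   # ms per block, made-up figures

def write_block(pre_erased_pool):
    """Return the time (ms) to service one block write."""
    if pre_erased_pool > 0:
        return PROGRAM_COST            # block already erased ahead of time
    return ERASE_COST + PROGRAM_COST   # must erase inline, on the write path

print(write_block(pre_erased_pool=4))  # 0.2 ms
print(write_block(pre_erased_pool=0))  # 2.2 ms -- the "erase-then-write" hit
```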
Nonetheless, it is totally overblown and your SSD will perform better than any platter based drive even when totally full.
Re: (Score:2)
Unless you're using your SSDs as the temp drive for a high definition video operation (And why the hell would you for that? Platter drives are far better suited to that task between sequential write speed and total storage space)
Not if you use torrents; they're very much non-sequential, downloading tons of pieces from all over the file. When I had a regular HDD as my OS disk, it was big, so I'd download torrents to it - that always slowed the machine down; it was better to use a different disk, but leaving hundreds of GBs unused didn't seem to make sense either. With an SSD you hardly notice the torrents running. I usually download to the SSD, watch it, then move it to the file server. Everyone should get an SSD really, it's the greatest revolution sin
Re:"Drastically affected its performance" (Score:5, Informative)
20% is too little. I've seen drives, even SLC drives, drop by more than 50%. Only some drives bounce back properly. Others rely on TRIM to clean up their fragmentation mess.
A more important note is that some initial TRIM implementations have been poorly implemented, resulting in severe data corruption and loss:
http://www.ocztechnologyforum.com/forum/showthread.php?t=54770 [ocztechnologyforum.com]
I posted elsewhere regarding the fragmentation issue here:
http://hardware.slashdot.org/comments.pl?sid=1227271&cid=27883769 [slashdot.org]
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:2)
"your SSD will perform better than any platter based drive even when totally full"
I suggest you first read up on that:
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8 [anandtech.com]
Not just any SSD, some have stutter, some degrade in very bad ways, I would say: "if you choose wisely your SSD will perform better than any platter based drive. But you won't be buying the cheapest SSD" or something of that nature.
Good SSD's are very expensive in comparison to HDD's.
good article on the reasons behind this (Score:5, Informative)
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=4 [anandtech.com]
Anandtech has a very detailed article that explains all about this and some ways to recover the lost speed (sometimes).
Wait just a minute... (Score:4, Funny)
...you mean to tell me that fragmentation *reduces* the performance of storage???
Re:Wait just a minute... (Score:5, Informative)
...you mean to tell me that fragmentation *reduces* the performance of storage???
Fragmentation on hard disks reduces performance because of the time it takes to physically move the disk heads around. There are no physical heads to move in an SSD, so it's perfectly reasonable to assume that that mechanism of performance loss will not occur on SSDs, and therefore that it's a non-issue. I did a small test [googlepages.com] years ago on the effects of flash memory fragmentation in a PDA, and I, and most people I discussed the matter with, were quite surprised by the results at the time. I never got a good technical explanation of why the performance hit was so large. I doubt the same mechanism is at work as in modern SSDs, but it's sort of relevant anyway.
Re:Wait just a minute... (Score:5, Informative)
The reason the performance hit is large is because writing to SSDs is done in blocks. Fragmentation causes part-blocks to be used. When this happens, the SSD must read the block, combine the already-present data with the data it's writing and write the block back, rather than just overwriting the block. That's slow.
Re: (Score:2)
Since the memory controllers in SSDs deliberately distribute your data across the flash memory, "fragmentation" in its usual sense is pretty meaningless.
Re: (Score:2)
But if a certain write needs to modify 100 blocks instead of 10 due to fragmentation, fragmentation is a major performance factor.
Re: (Score:2)
A real skill needing real support. The cheaper units are in a race to the bottom with whatever they can buy off the shelf.
Other firms try to separate the high end from pro desktop users.
If you want a memory controller that works, you will pay.
No brand is ready to upset that mix at this time.
Old stock to sell before they can hire professionals at the low end.
At the top end, why end a good thing?
My X301 still runs fast (Score:2, Interesting)
My PATRIOT ssd still runs fast (Score:2)
I've used it as a desktop drive for four months so far, and, using hdparm -T as a benchmark (I know, I know, but it's on a desktop!) it has the same throughput as it ever did. I download torrents to it.
It would copy completed torrents to a platter-drive at 60MB/sec when new and will still do 60MB/sec now.
I don't see a problem. Opera/Firefox open in less than one second (by wristwatch!) instead of ten on a platter drive (Pentium-M, 1.6 GHz). The whole computer seems more responsive -- even modern in terms of
Re: (Score:2)
hdparm -T? How is that an indication of your SSD?
From the manual page:
"This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test."
Re: (Score:2)
I purchased a Lenovo X301 with a 120 GB flash drive last September and have been nothing but pleased with the performance of the drive. I boot Vista and also run openSUSE in a vm. The drive speed is high and consistent. The drive in the X301 is supposed to have better controllers than some, and it certainly does better than a USB stick.
Any theoretical problems with write speed don't appear to me to affect typical real world use.
I also have an SSD in a MacBook Air, and one thing I am very pleased with is the consistent speed of the SSD (less so with the Air and its prevalent heating issues, never fixed by Apple). I assume the entire issue is way overblown: there might be some degradation, but given that it occurs only during continuous writes, which is a rare situation, you won't notice it in real-world use. In fact, normal operation is usually a mixture of reads, random writes, and calculation cycles, and the advantage over normal HDs really is huge!
Just use an intellgent defragger (Score:5, Funny)
One that can relocate MFTs, most used files and swap to the chips on the outer edge of the circuit board, where the throughput is faster.
Re: (Score:3, Funny)
NAND is the culprit (Score:5, Informative)
The fundamental problem with NAND-based solid-state drives is that they use NAND flash memory--the same stuff that you find in USB flash drives, media cards, etc.
The advantages of NAND are that it is both ubiquitous and cheap. There are scads of vendors who already make flash-memory products, and all they need to do to make an SSD is slap together a PCB with some NAND chips, a SATA 3Gb/s interface, a controller (usually incorporating some sort of wear-leveling algorithm), and a bit of cache.
The disadvantages of NAND include limited write/erase cycles (typically ~10K for multi-level cell drives) and the fact that writing new data to a block involves copying the whole block to cache, erasing it, modifying it in cache, and rewriting it.
This isn't a problem if you're writing to blank sectors. But if you're writing, say, 4KB of data to a 512KB block that previously contained part of a larger file, you have to copy the whole 512KB block to cache, edit it to include the 4KB of data, erase the block, and rewrite it from cache. Multiply this by a large sequence of random writes, and of course you'll see some slowdown.
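The write amplification in that example works out as follows (a sketch using the comment's own block sizes):

```python
# Write amplification for the read-modify-write cycle described above:
# updating 4 KB inside a 512 KB erase block moves the whole block.

BLOCK = 512 * 1024   # erase-block size from the comment
WRITE = 4 * 1024     # what the OS actually asked to write

bytes_moved = BLOCK               # read whole block, erase, rewrite it all
amplification = bytes_moved / WRITE
print(amplification)              # 128.0 -- 128x more flash traffic than requested
```

Multiply that factor across a long run of random writes and the slowdown follows directly.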
SSDs will always have this problem to some degree as long as they use the same NAND flash architecture as any other flash media. For SSDs to really effectively compete with magnetic media they need to start from scratch.
Of course, then we wouldn't have the SSD explosion we see today, which is made possible by the low cost and high availability of NAND flash chips.
Re: (Score:2)
A smart device might do things a bit differently. It will not do your described cycle of read-block/change-data/erase/write-same-block. Instead it will buffer up changes until it has a full block and then write it to a _different_ block - one that is already pre-erased. There is no need to store sectors in the original order - just keep a table with sector locations.
A small capacitor makes it safe to delay writing, by storing enough power to do an emergency flush during power loss.
I am sure makers of these
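A minimal sketch of the log-structured buffering scheme described above (all class and variable names are made up for illustration; a real controller also handles flushes on power loss and partial buffers):

```python
# Buffer small writes until a full block accumulates, then program the
# whole thing into a pre-erased block and record where each sector went.

BLOCK_SECTORS = 256  # illustrative: 128 KiB block / 512 B sectors

class LogBuffer:
    def __init__(self):
        self.pending = []     # (lba, data) pairs waiting to be flushed
        self.mapping = {}     # lba -> (block_id, slot) after flush
        self.next_block = 0   # next pre-erased block to program

    def write(self, lba, data):
        self.pending.append((lba, data))
        if len(self.pending) == BLOCK_SECTORS:
            self.flush()

    def flush(self):
        # Program the whole buffered block in one pass -- no inline erase,
        # and no need to keep sectors in their original order.
        for slot, (lba, _) in enumerate(self.pending):
            self.mapping[lba] = (self.next_block, slot)
        self.next_block += 1
        self.pending.clear()

buf = LogBuffer()
for lba in [7, 3, 900, 12] * 64:   # 256 scattered small writes
    buf.write(lba, b"x")
print(buf.next_block)              # 1 -- a single block program serviced them all
```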
Re:NAND is the culprit (Score:4, Informative)
Samsung has begun manufacture of their PRAM which promises to be a replacement for NAND:
http://www.engadget.com/2009/05/05/samsungs-pram-chips-go-into-mass-production-in-june/ [engadget.com]
Wikipedia writeup on PRAM:
http://en.wikipedia.org/wiki/Phase-change_memory [wikipedia.org]
This type of "flash" memory will make much better SSD drives in the near future.
Re:NAND is the culprit (Score:4, Interesting)
This is excellent news. As you allude, PRAM will finally make good on the promise of solid state storage. It will allow for both higher reliability and deterministic performance, without the ludicrous internal complexity of Flash based devices.
I can't help but cringe every time I hear the terms Flash and SSD used interchangeably. If anything, the limitations inherent to Flash devices described by the GP mean they have more in common with a hard disk, as they also have an inherent physical "geometry" which must be considered.
PRAM will basically look like a simple linear byte array, without all the nonsense associated with Flash. Even if Flash retains a (temporary) advantage in density, it will never compete with hard disks on value for bulk storage, nor will it ever compete with a proper SSD on a performance basis. It makes for a half-assed "SSD", and I can't wait for it to disappear.
Wear leveling and power management (Score:2)
Power management can turn off sections of the flash memory. This is good, of course, to reduce battery consumption in laptops and netbooks. But the process of turning a section's power off and then turning another section's power on can slow down the access. With very random access, expect that to simply happen a lot. So random hopping around the storage, while not as slow as a mechanical hard drive, will be slower than sequential.
Add wear leveling into the picture and you have a layer of memory transla
Re: (Score:3, Informative)
SSDs will always have this problem to some degree as long as they use the same NAND flash architecture as any other flash media. For SSDs to really effectively compete with magnetic media they need to start from scratch.
Of course, then we wouldn't have the SSD explosion we see today, which is made possible by the low cost and high availability of NAND flash chips.
Or...I dunno, maybe they could create a filesystem specifically for NAND flash [wikipedia.org].
http://en.wikipedia.org/wiki/JFFS2 [wikipedia.org]
Re: (Score:2)
Or...I dunno, maybe they could create a filesystem specifically for NAND flash.
It makes much more sense for existing filesystems to include awareness of SSD and use them accordingly. ZFS is doing this; eventually others will, too.
Therefore Headline is FALSE! (Score:2)
So according to what you're saying, and what the Anandtech article said, the headline is just plain Wrong!
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=1 [anandtech.com]
The slowdown is only particular to NAND Flash. Dynamic RAM based solid state drives don't suffer from this phenomenon. (Gigabyte i-Ram and ACARD ANS-9010) However, they are definitely also Solid State Drives.
Re: (Score:2)
Re: (Score:3, Interesting)
Is there a fundamental reason why they can't just shrink the block size?
Re: (Score:2)
It increases the cost of the flash dies, or it reduces performance (by interleaving across fewer dies).
Re: (Score:2)
Both means should be provided. That way we can still have SSD devices emulating regular hard drives. But even if the direct layer access is not provided, a standard command code to identify block sizes, and a standard command code to erase a block (e.g. discard it from wear leveling translation for now) would be partly useful. One could erase all the blocks and thus reset the wear leveling over the whole device. Of course, you better have a few backup copies of the data.
Not News (Score:3, Interesting)
This is old news, and both the Intel drives and the OCZ Vertex have updated firmwares/controllers that remedy (but do not completely solve) the issue.
When we get support for TRIM, it will be even less of an issue, even on cheapo drives with crappy controllers/firmware.
The issue won't be completely solved ever, because of how SSD arranges flash memory and how flash memory can't really be overwritten in a single pass.
See anandtech's write up if you want details.
http://www.anandtech.com/printarticle.aspx?i=3531 [anandtech.com]
Re: (Score:2)
Meh, if you look at the random write data rate of the OCZ drive (which uses a controller chip from a 3rd-party company), these SSDs totally obliterate the other drives in write speed, save for the expensive Intel ones.
We'll just wait a bit and buy either OCZ or another party that uses the controller with a stable firmware. Currently you have to be on the lookout for bad/old firmware from OCZ, or buy a drive that messes up write performance, or one darn expensive one from Intel.
Of course, if you mostly start
Re: (Score:2)
OCZ Vertex drives have new controllers with good firmware.
Get your facts right.
Re: (Score:2)
The Anandtech article is dated 18 March of this year. They had just received some new firmware from OCZ. Are you saying that all the drives in the channel are already flashed with this firmware? Otherwise it is more prudent to say that OCZ drives *may* have new controllers with good firmware. Although with the current run on Vertex drives, the probability of getting one with the correct firmware is increasing. My retailer, however, does not list the firmware version of the thing.
Personally I'll just wait a bit longer until
Re: (Score:2)
IMPORTANT NOTE: To continually improve and optimize the Vertex SSD for the latest platforms OCZ will constantly release new firmware updates. Detailed firmware information can be found on our support forums and a step-by-step flashing guide is available here
All VERTEX drives contain the good controller.
Just upgrade the firmware when you get it if you're worried. Either way, the good controller with the "bad" firmware is still awesome.
The OTHER OCZ drives are using the older crappy controller all other cons
Re: (Score:2)
Found it.
As far as I know, this is one of the only reviews (if not the only one) at the time of publication that's using the new Vertex firmware. Everything else is based on the old firmware, which did not make it to production. Keep that in mind if you're looking to compare numbers or wondering why the drives behave differently across reviews. The old firmware never shipped thanks to OCZ's quick acting, so if you own one of these drives, you have a fixed version.
Re: (Score:2)
Rereading my post it is a bit strong though. I'll have some coffee and let my mood and writing skills get up to par again.
What I was trying to say is that you may not get a drive with the latest firmware if you are shopping at your local store. Even if you can put the firmware up directly, it *will* cost you time and inconvenience, and the chances of another bug popping up are much higher than when you buy, for instance, a hard drive.
Sorry if I've offended you in any way.
Re:Not News (Score:5, Informative)
Intel has solved theirs about 95%, but they are helped by their write speeds being limited to 80 MB/sec. With the new firmware, it is *very* hard to get an X25-E to drop below its rated write speed.
http://www.pcper.com/article.php?aid=691&type=expert&pid=5 [pcper.com]
OCZ has not yet solved it. They currently rely on TRIM, and in my testing that alone is not sufficient to correct the fragmentation buildup. IOPS falls off in this condition as well.
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:3, Informative)
Correction to my last. I was speaking of X25-M, not E.
Re: (Score:2)
http://www.pcper.com/article.php?aid=691&type=expert&pid=5 [pcper.com]
OCZ has not yet solved it. They currently rely on TRIM, and in my testing that alone is not sufficient to correct the fragmentation buildup. IOPS falls off in this condition as well.
Sorry, but that article does not say anything about Vertex drives, it does not say anything about firmware updates of Vertex drives and I've seen nothing in the Anandtech article requiring you to use TRIM commands to get the more balanced performance.
As a storage editor, I would like you to point to an article refuting Anand's claims about the Vertex. Mods, someone just chiming in as an editor of a PC mag should not get free mod points, even if they provide a link. Although the part about the Intel drives is in
Partly True (Score:2)
Re: (Score:2)
What I understood is that OCZ relies on a (single) controller which is more like a regular CPU underneath and can be updated through firmware. The latest firmware should solve the problem, but you need to back up and restore all your data for it to work. See the Anandtech article for more details.
If I remember correctly the dual controller path is true for other SSD drives using the Micron controller. Not the Vertex.
Re: (Score:2)
1 print page! (Score:2)
http://www.computerworld.com/action/article.do?command=printArticleBasic&taxonomyName=Storage&articleId=9132668&taxonomyId=19 [computerworld.com]
Anandtech talked about this back in March (Score:2)
Tom's Hardware (Score:4, Insightful)
About 2 months ago.
Re: (Score:2)
Which article would that be, or did you mean Anandtech?
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=1 [anandtech.com]
Re: (Score:2)
Re: (Score:2)
Probably one of the best, most information-packed articles I've seen in ages.
This is definitely a limitation of the technology (Score:2)
However...
As long as they can get the read, write, random read, and random write performance to be substantially better than a hard disk across the board, I don't care too much.
Example: many many years ago, on my 286, I extracted a floppy disk with 1,800 .ico files on it (754 bytes?) in a zip file.
That took about an hour to do.
I then learned about 'smartdrv.sys' (or was it EXE?)
The time to do it went from an hour to about 30 to 60 seconds.
The way FAT16 worked on my machine with a 20MB drive and a 286 CPU
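The speedup the poster describes is what a disk cache buys you on FAT16: every small-file create re-reads and re-writes the same FAT sectors, and a cache like SMARTDRV turns most of those into memory hits. A minimal sketch with hypothetical access counts (the 3-accesses-per-file model is my simplification, not SMARTDRV's actual behavior):

```python
# Toy model: disk accesses to create n small files on FAT16,
# with and without a sector cache. Numbers are illustrative only.

def fat_accesses(n_files, cached):
    """Count physical disk accesses to create n_files small files.

    Uncached: each file re-reads the FAT sector, writes its data
    sector, and writes the FAT sector back.
    Cached: the FAT sector stays in memory and is written back once.
    """
    if cached:
        return 1 + n_files + 1   # one FAT read, n data writes, one FAT write-back
    return n_files * 3           # FAT read + data write + FAT write per file

print(fat_accesses(1800, cached=False))  # uncached: thousands of accesses
print(fat_accesses(1800, cached=True))   # cached: roughly one per file
```

Even this crude model shows a ~3x cut in physical accesses; on a real floppy, where each access also paid a long seek plus rotational delay, the wall-clock win was far larger.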
Anandtech explained this months ago (Score:2)
Once again I'm shocked by how terrible Slashdot is for anything hardware related. Just as has been said every time anyone has mentioned these pathetic articles from magazines like Computerworld (!), THIS ISN'T NEWS - ANANDTECH EXPLAINED IT VERY CLEARLY MONTHS AGO.
http://www.anandtech.com/storage/showdoc.aspx?i=3531 [anandtech.com]
Even without reading an article, I'm surprised this isn't intuitively obvious to most Slashdot users. I'm also surprised that the majority of hardware articles posted here come from jokes like C
Re: (Score:2, Funny)
Re:All? (Score:4, Insightful)
Re: (Score:2, Informative)
Can a fail be insightful?
Re:All? (Score:5, Funny)
Re: (Score:2)
It can be a revelation.
It was when one happened to me, on one of the two "3 million hour MTBF" SSD drives I purchased approximately a month before.
Re: (Score:2)
Re:All? (Score:5, Funny)
Not *ALL* (Score:3, Informative)
Looks like it. They're all borked. Every single one of them. I said so in the title, and I only bother reading the title in Slashdot stories these days.
http://4onlineshop.stores.yahoo.net/an5insax1ram.html [yahoo.net]
The ANS9010 and 9010B suffer no such issues since they are RAM-based. They also have a CF backup slot in addition to a backup battery. Very slick, and a better solution for a boot drive than a typical SSD if you absolutely must have maximum speed. Pricing with RAM is comparable to an enterprise-level SSD.
Re:HMMMMMMM (Score:5, Informative)
Think I'll stick with the tried and true IDE/SATA tech.
Psst: SSD drives connect via SATA.
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2, Informative)
Actually, SSDs don't currently support TRIM. The last I read, the manufacturers can't agree on a common way to do it (there are 2 competing ideas about how to implement it). They are actually trying to do the right thing by working out their differences and implementing it all the same, rather than both doing their own thing. Until they come to an agreement, they can't release any firmware supporting TRIM. For the same reason, Windows 7 doesn't support TRIM either, and it is supposed to be shipping without
Re:Not a bug. (Score:5, Informative)
Re: (Score:2)
Pardon my knowledge for being ever so slightly out of date. As of less than a month ago, that wasn't the case:
April 13, 2009
http://www.pcper.com/article.php?aid=691&type=expert&pid=8 [pcper.com]
"...when Microsoft finally adds it to Windows 7 (it's not in there yet)"
Also regarding the trim support on the Vertex:
"The Indilinx guys have gone ahead and implemented a draft form of TRIM and OCZ has released a (very) beta version of a TRIM tool for use with their Vertex series drives...As of this writing the tool caus
Re: (Score:2)
It's true that TRIM is very much in its infancy, but the clouds aren't as dark in SSDs' future as they once were (even as little as a month ago). Many [ocztechnology.com], many [intel.com], many [samsung.com] companies see SSDs as the future of storage, and I'm inclined to agree with them. With that kind of muscle propelling development and increased consumer interest fuelling funding, the landscape is and will continue to change very rapidly.
My own take on things, FWIW, is that tapes will go the way of the floppy and spinning disks will become near
Re: (Score:2)
Tests show that at least flash SSDs do have non-zero seek times; seeking is 10-100 times faster than with hard disks, but there is still a penalty.
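The shape of such a test is easy to sketch: read the same set of blocks from a file sequentially and then in shuffled order, and compare wall-clock times. This is only a sketch — on a real run you would bypass the OS page cache (e.g. with O_DIRECT or by dropping caches), which this naive version does not do, so on a warm cache the two times converge:

```python
# Minimal sequential-vs-random read benchmark sketch (no cache bypass;
# real measurements need O_DIRECT or cold caches to see the device).
import os
import random
import tempfile
import time

def bench(size_mb=16, block=4096):
    """Return [sequential_seconds, random_seconds] for reading a
    size_mb scratch file in block-sized chunks."""
    n = size_mb * 1024 * 1024 // block
    chunk = os.urandom(block)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        for _ in range(n):
            f.write(chunk)
        path = f.name
    offsets_seq = [i * block for i in range(n)]
    offsets_rand = offsets_seq[:]
    random.shuffle(offsets_rand)
    times = []
    for offsets in (offsets_seq, offsets_rand):
        with open(path, 'rb') as f:
            t0 = time.perf_counter()
            for off in offsets:
                f.seek(off)
                f.read(block)
            times.append(time.perf_counter() - t0)
    os.unlink(path)
    return times
```

On a spinning disk (cold cache) the random pass is dramatically slower; on flash the gap shrinks to that smaller-but-non-zero penalty the poster mentions.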
Re: (Score:2)
Firewire.
Re: (Score:2)
Re: (Score:2)
In case someone is wondering: no technical parts of this article say anything interesting. USB is certainly not a good replacement. USB 2 is way too slow and has terrible latency. USB 3 will be better, but it won't go faster than current SATA-2. And no, the write speed of SSDs won't surpass SATA-2 for any reasonable number of chips in a hard-drive replacement.
The only real competitor for SATA currently is the PCI-e bus. If you would create an SSD with seriously low latency and many parallel chips, hooking
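The interface comparison above is simple bandwidth arithmetic. A back-of-envelope sketch, where the effective-throughput figures are my rough assumptions for the era (raw link rates are much higher than what you actually get):

```python
# Rough effective throughputs in MB/s (assumed, not measured):
# USB 2.0's 480 Mbit/s raw link delivers far less in practice;
# SATA-2 is 3 Gbit/s raw; a few PCIe gen1 lanes beat both.
EFFECTIVE_MB_S = {
    "USB 2.0": 35,
    "SATA-2": 300,
    "PCIe x4 gen1": 1000,
}

def transfer_seconds(megabytes, interface):
    """Idealized time to move `megabytes` over the given interface."""
    return megabytes / EFFECTIVE_MB_S[interface]

for iface in EFFECTIVE_MB_S:
    print(iface, round(transfer_seconds(4096, iface), 1), "s for 4 GB")
```

Moving 4 GB takes minutes over USB 2 versus seconds over SATA-2 or PCIe, which is why PCIe-attached flash is the interesting escape hatch once the drive itself outruns SATA.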
Re: (Score:2)
Ext3 not fragmenting is a myth anyway. Ext3 fragments just as badly with multiple concurrent writes going on (as happens on a drive with log files or lots of slowly updating files -- like downloads).
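The mechanism is easy to see with a toy allocator (a deliberate simplification — real ext3 uses block groups and reservation windows that soften, but don't eliminate, this effect): if free blocks are handed out in order while two files grow slowly in parallel, their blocks interleave and each file ends up in many tiny extents.

```python
# Toy model: two files appended alternately under a next-free-block
# allocator interleave block-by-block and fragment each other.

def allocate_interleaved(n_blocks_each):
    """Simulate files A and B each appending n_blocks_each blocks,
    one block at a time, taking turns."""
    disk = []
    for _ in range(n_blocks_each):
        disk.append("A")   # file A appends a block
        disk.append("B")   # then file B appends one
    return disk

def extents(disk, name):
    """Number of contiguous runs of `name`'s blocks (1 == unfragmented)."""
    runs, prev_owner = 0, None
    for owner in disk:
        if owner == name and prev_owner != name:
            runs += 1
        prev_owner = owner
    return runs

layout = allocate_interleaved(100)
print(extents(layout, "A"))   # every block is its own extent
```

With alternating appends, a 100-block file lands in 100 separate extents; written back-to-back it would be a single extent. That is exactly the log-file/download workload the post describes.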