"Limited Edition" SSD Has Fastest Storage Speed 122
Vigile writes "The idea of having a 'Limited Edition' solid state drive might seem counter-intuitive, but regardless of the naming, the new OCZ Vertex LE is based on the new Sandforce SSD controller that promises significant increases in performance, along with an improved ability to detect and correct errors in the data stored in flash. While the initial Sandforce drive was called the 'Vertex 2 Pro' and included a super-capacitor for data integrity, the Vertex LE drops that feature to improve cost efficiency. In PC Perspective's performance tests, the drive was able to best the Intel X25-M line in file creation and copying duties, had minimal fragmentation or slow-down effects, and was very competitive in IOs per second as well. It seems that current SSD manufacturers are all targeting Intel, and the new Sandforce controller is likely the first to be up to the challenge."
Can you hack one in? (Score:2)
Is the cap just left off the board so you can solder one in yourself, or has the board been redesigned without it?
Re: (Score:2)
There's a blank space with pads, yes, but I don't think it would be a good idea to just solder one in there. Going out on a limb here, but the supercap may require firmware support to actually complete the writes when the main power is yanked. Then again, maybe it's just wired in parallel with the power from the SATA connector, dunno. I'm not much help :)
Re: (Score:1)
Re: (Score:2)
If it doesn't require firmware, someone will figure it out. It's unlikely to require firmware support, though, unless they specifically designed it that way. I can believe some other parts might be omitted (maybe a diode or jumper, as mentioned), but I bet it's no biggie. Now, what's the SMT supercap part they use?
Re: (Score:1)
Splicing two 1500 uF caps inline on the 5V and 12V PSU lines respectively would be cheaper, more effective, and safer than attempting surgery on this little PCB. It won't void the warranty, and it'll provide significantly more reserve current than the tiny cap normally soldered to that pad. And it will power the entire SSD drive for moments after the cord is yanked, so you won't have to worry about whether soldering a tiny cap to that pad requires a drive firmware update as the reserve "lights out" current
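A back-of-envelope hold-up calculation is worth doing before attempting either mod; in this Python sketch the 1 A drive draw and 0.5 V droop budget are assumptions, not measured figures:

def holdup_seconds(cap_farads, droop_volts, load_amps):
    # A cap on a supply rail buys roughly t = C * dV / I of run time
    # before the rail droops below tolerance.
    return cap_farads * droop_volts / load_amps

# Assumed: one 1500 uF cap, a 0.5 V droop budget, and 1 A of drive draw.
print(holdup_seconds(1500e-6, 0.5, 1.0))  # 0.00075 s, well under a millisecond

So whether caps of that size really cover the "lights out" window depends entirely on how quickly the controller can finish in-flight writes.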
oh god... (Score:1)
"I was so eager to test it that I pounded on this drive all night "
Possible poor choice of words?
Re:oh god... (Score:5, Funny)
"I was so eager to test it that I pounded on this drive all night "
Possible poor choice of words?
"Er, I was testing IOs per second."
Re: (Score:2)
Maybe you were testing table insertions in SQL*Plus?
"improve cost efficiency" - press releases on /. (Score:2)
Re: (Score:1)
Calling the article a 'press release' unfairly tarnishes OCZ. Their press release is still full of press release though:
http://www.ocztechnology.com/aboutocz/press/2010/362 [ocztechnology.com]
Re: (Score:2)
Re: (Score:1)
Right, but this isn't PR, PC Perspective thinks they are a news site (and they didn't simply parrot the OCZ press release).
"to lower the cost" (Score:2)
"improve cost efficiency"
should be
"to lower the cost"
Misleading title (Score:5, Informative)
Crucial RealSSD C300: http://www.tweaktown.com/reviews/3118/crucial_realssd_c300_256gb_sata_6gbps_solid_state_disk/index5.html [tweaktown.com]
Fusion-IO: http://storage-news.com/2009/10/29/hothardware-shows-first-benchmarks-for-fusion-io-drive/ [storage-news.com]
Re:Misleading title (Score:5, Informative)
- We included some early C300 results with the benches. The C300 will read faster (sequentially) under SATA 6Gb/sec, but it is simply not as fast in most other usage.
- Fusion-IO - good luck using that for your OS (not bootable). Fast storage is, for many, useless unless you can boot from it.
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:3, Informative)
"Fusion-IO - good luck using that for your OS (not bootable)."
Not until Q4, when we release the firmware upgrade to get it working.
Then, your point will be moot.
Re: (Score:1)
Re:Misleading title (Score:4, Informative)
I've got a copy of the fusion-IO faq from early 2008 that reads as follows:
> Will the ioDrive be a bootable device?
> This feature will not be included until Q3 2008
...Then it was promised for the Duo (and never happened). ...Then it was promised for the ioXtreme, and even that was released without the ability.
Don't get me wrong, I'm a huge fan of fusionIO, but you can only fool a guy so many times before he gives up hope on a repeatedly promised feature.
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:2)
Re: (Score:2)
"Seriously? It's going to take you over three years to write the two hundred or so lines of x86 assembly required to let the BIOS see your product as a disk?"
Not everything uses BIOS, you know. There's more than just BIOS and EFI.
Then to add to that - NOT ALL BIOS ARE THE SAME.
Something tells me you've never done hardware development before.
Re: (Score:2)
Then to add to that - NOT ALL BIOS ARE THE SAME.
They aren't exactly the same, but come on... during bootup they are 99.5% the same, meaning that they all have the same I/O interrupts... they leverage the interrupt vector table at the same location... that's the table where you install your own I/O handlers during your device's initialization...
I can say for certain that your problem is not incompatible BIOSes; it is almost certainly a programmer who doesn't know what he's doing selling you a load of horseshit.
Re: (Score:3, Interesting)
Not everything uses BIOS, you know. There's more than just BIOS and EFI.
True, but we're talking about booting Windows and Linux, maybe *BSD and Solaris. That basically means BIOS. EFI if you want to boot OS X too, but EFI anywhere else will emulate a BIOS for the purpose of interfacing with disk controllers. You need just enough working to get the boot loader to read the kernel.
Then to add to that - NOT ALL BIOS ARE THE SAME.
And yet every ISA, PCI, or PCIe IDE or SCSI controller manages to work with every BIOS that supports the correct bus. How? Because this stuff has not changed significantly for a decade. That's one
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
So use a regular SSD for the OS, and multiple ioDrives for heavy DB work, and whatever else you can throw at it?
Re: (Score:2)
I don't think the OS needs any kind of fast media for boot. Just boot from USB stick or similar and set the Fusion-IO as root device. The USB stick will be fast enough to transfer the 20-40MB that are required to load the kernel.
Re: (Score:2)
I, for one, will never buy another OCZ product again. I bought a "Solid Series" a little over a year ago when newegg reviews (about a dozen at the time) only had good things to say about them. They were pretty fast in the beginning.
About half-a-year later, the thing started stuttering for seconds on end, much worse than any non-broken spinning disk I encountered. It was a little over half full, that's it. Turns out that they put in crappy controllers, I guess. Not fully sure. Now the company says they
Re: (Score:1)
Welcome to jumping on a new technology; you got burned, as every drive of the time except Samsung's and Intel's used the same JMicron controller. OCZ actually went and designed some cache and paired controllers into their middle offering (I forget the name), and I believe switched to Samsung controllers and single-level cell flash for a time on the high end. (I don't know what their current offerings are)
Everyone else for the most part kept selling parts that used the same chip as the OCZ Value Series.
Re: (Score:2)
Their Solid Series 2 is pretty good. Ridiculously cheap. It's reliably fast for read speeds, at least. But stay away from any of the older SSDs that had those horrible JMicron controllers.
How hard can it be? (Score:5, Interesting)
I'm kinda fed up waiting for the SSD manufacturers to get their act together. There's just no reason for drives to be only 10-50x faster than mechanical drives. It should be trivial to make them many thousands of times faster.
I suspect that most drives we're seeing are too full of compromises to unlock the real potential of flash storage. Manufacturers are sticking to 'safe' markets and form factors. For example, they all seem to target the 2.5" laptop drive market, so the SSD controllers I've seen so far are all very low power (~1W), which seriously limits their performance. Also, very few drives use PCI-e natively as a bus; most consumer PCI-e SSDs are actually four SATA SSDs attached to a generic SATA RAID card, which is just... sad. It's also telling that it's a factor of two cheaper to just go and buy four SSDs and RAID them using an off-the-shelf RAID controller! (*)
Meanwhile, FusionIO [fusionio.com] makes PCI-e cards that can do 100-200K IOPS at speeds of about 1GB/sec! Sure, they're expensive, but 90% of that is because they're a very small volume product targeted at the 'enterprise' market, which automatically inflates the price by a '0' or two. Take a look at a photo [fusionio.com] of one of their cards. The controller chip has a heat sink, because it's designed for performance, not power efficiency!
This is reminiscent of the early days of the 3D accelerator market. On one side, there was the high-performing 'enterprise' series of products from Silicon Graphics, at an insane price, and at the low end of the market there were companies making half-assed cards that actually decelerated graphics performance [wikipedia.org]. Then NVIDIA happened, and now Silicon Graphics is a has-been because they didn't understand that consumers want performance at a sane price point. Today, we still have SSDs that are slower than mechanical drives at some tasks, which just boggles the mind, and on the other hand we have FusionIO, a company with technically great products that decided to try to target the consumer market by releasing a tiny 80GB drive for a jaw-dropping $1500 [amazon.com]. I mean... seriously... what?
Back when I was a young kid first entering university, SGI came to do a sales pitch, targeted at people doing engineering or whatever. They were trying to market their "low-end" workstations with special discount "educational" pricing. At the time, I had a first-generation 3Dfx accelerator in one of the first Athlons, which cost me about $1500 total and could run circles around the SGI machine. Nonetheless, I was curious about the old-school SGI hardware, so I asked for a price quote. The sales guy mumbled a lot about how it's "totally worth it" and "actually very cost effective". It took me about five minutes to extract a number. The base model, empty, with no RAM, drive, or 3D accelerator, was $40K. The SSD market is at exactly the same point. I'm just waiting for a new "NVIDIA" or "ATI" to come along, crush the competition with vastly superior products with no stupid compromises, steal all the engineers from FusionIO, and then buy the company for its IP for a bag of beans a couple of years later.
*) This really is stupid: 256GB OCZ Z-Drive p84 PCI-Express [auspcmarket.com.au] is $2420, but I can get four of these 60GB OCZ Vertex SATA [auspcmarket.com.au] at $308 each for a total of $1232, or about half. Most motherboards have 4 built-in ports with RAID capability, so I don't even need a dedicated controller!
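As a quick check on that footnote's math, using the prices quoted above:

z_drive_per_gb = 2420 / 256           # PCI-e Z-Drive: ~9.45 per GB
raid0_per_gb = (4 * 308) / (4 * 60)   # four 60GB Vertex drives: ~5.13 per GB
print(z_drive_per_gb, raid0_per_gb, z_drive_per_gb / raid0_per_gb)  # ~1.8x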
Re:How hard can it be? (Score:5, Informative)
Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes. They also target the 2.5" SATA market because it gives them an immediate in, straight into new desktops and systems, without consuming a slot that the high-performance people who would buy these are likely shoving an excess of games into. The high end is already using those slots for storage.
Believe me, the industry -is- looking into ways of getting SSDs onto faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.
No, it's because the thing's running a Xilinx Virtex-5 FPGA. It also costs a ton as it's using 96GB of SLC NAND, and is part of a fairly modular design that is reused in the ioDrive Duo and ioDrive Quad.
If you're referring to the older JMicron drives that failed utterly at 4K random reads/writes, then you're mistaken. That was the case of a shit controller being exposed. Even the Indilinx controllers, which paled next to the Intel chip, outclassed mechanical drives at the same task.
If you think that's bad, consider that the Virtex-5 they're using on it costs on the order of $500 for the chip itself. You linked the "pro" model, which supports multiple devices in the same system in some fashion. You want this one [amazon.com], which is only $900. Both models use MLC NAND, and neither is really intended for mass-market buyers (you can't boot from them, after all.)
Re: (Score:3, Interesting)
Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes. They also target the 2.5" SATA market because it gives them an immediate in, straight into new desktops and systems, without consuming a slot that the high-performance people who would buy these are likely shoving an excess of games into. The high end is already using those slots for storage.
Believe me, the industry -is- looking into ways of getting SSDs onto faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.
From what I gather, the performance limit is actually largely in the controllers; otherwise FusionIO's workstation-class cards wouldn't perform as well as they do despite using a relatively small number of MLC chips. Similarly, if the limit were caused by the flash, then why do Intel's controllers shit all over the competition? The Indilinx controllers got significant speed boosts from a mere firmware upgrade! There's a huge amount of headroom for performance, especially for small random IOs, where the
Re: (Score:2)
I also disagree that people are running out of expansion slots. On the contrary, other than a video card, I haven't had to use an add-in card for anything for the last three machines I've purchased.
It used to be that you had a dedicated slot for your graphics card (AGP or PCIe), maybe an AMR or CNR slot that no one actually used, and all the other slots were PCI. High-end server/workstation boards had PCI-X, but even there in general you could still put most cards in most slots (unless the card manufacturer w
Re:How hard can it be? (Score:4, Interesting)
> Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes.
There's the thing. Most SSDs are only using the legacy transfer mode of the flash. The newer versions of ONFi support upwards of 200MB/sec transfer rates *per chip*, and modern controllers are using 4, 8, or even 10 (Intel) channels. Once these controllers start actually kicking the flash interface into high gear, there will be no problem pegging SATA or even PCI-e interfaces.
Allyn Malventano
Storage Editor, PC Perspective
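To put rough numbers on that (the per-chip rate and channel counts are the ones quoted above; real controllers will land somewhat lower):

per_chip_mb_s = 200  # newer ONFi modes, per the figure quoted above
for channels in (4, 8, 10):
    print(channels, "channels:", per_chip_mb_s * channels, "MB/s")
# SATA 3Gb/sec tops out near 300 MB/s and SATA 6Gb/sec near 600 MB/s,
# so even a 4-channel controller at full ONFi speed can saturate the link.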
Re: (Score:2)
When do you see the introduction of bootable PCIe FusionIO type cards for the consumer?
Re: (Score:2)
Their market is the extreme I/O-per-second niche, and that niche will never grow to include consumers, who don't need 100,000+ IOPS. They just want more bandwidth, and flash can provide that as well as a more-than-enough (1000+) boost in IOPS (and even today the price point for Flash SSDs, whi
Re: (Score:2)
I said "FusionIO type cards". IE, an SSD integrated directly onto a PCIe card. It doesn't need to be insanely fast and expensive, or developed by FusionIO.
Re: (Score:2)
The reason is that while SATA 3.0 is still barely adopted, we need a SATA 4.0 that is at least 4 times faster. Since this cannot happen any time soon, the only solution is a different pipe, such as the PCI-e lanes.
Re: (Score:2)
Believe me, the industry -is- looking into ways of getting SSDs on to faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.
How about the PCIe bus? It's already reasonably mature technology and there's a huge installed base. They can build small cards and huge cards.
I'm looking for an SSD for the OS and programs to reside on, mounted read-only almost all the time (only writing when I need to upgrade it). This does not need sheer density, as 16GB will be sufficient (that's GB, not TB). What I want is sheer SPEED. Speed of access and speed of transfer. Single-level cells, not multi-level cells, are all that would be needed. A
Re: (Score:2)
Re: (Score:2)
"FWIW, the FusionIO product is not a simple drive replacement the way an SSD is. It doesn't boot and requires drivers to operate, plus the 'control logic' is not self-contained but rather part of the driver."
Everything you address is fixed at the end of this year with a firmware upgrade.
Re: (Score:2)
Promises, promises. I like FusionIO, I have 8 of the cards. But they have been promising this fix in a few quarters since they released the cards, man.
C//
Re: (Score:2)
Everything you address is fixed at the end of this year with a firmware upgrade.
Funny, they've been saying that exact thing for the past two years. Fortunately this time we can trust them. You know, because the year ends in a zero.
Re: (Score:2, Interesting)
Re: (Score:3, Interesting)
Yah. And that's the one overriding advantage to SSDs in the SATA form factor: they have lots and lots of competition. The custom solutions (the PCI-e cards and the flash-on-board or daughter-board systems) wind up being relegated to the extreme application space, which means they are sold for tons of money, because they can't do any volume production and have to compete against the cheaper SATA-based SSDs on the low end. These bits of hardware are thus solely targeted at the high-end solution space wher
Re: (Score:3, Interesting)
You are basically saying contradictory things:
"lots and lots of competition" is the opposite of an "overriding advantage". It's a huge disadvantage. No company wants to enter a market with massive competition.
The PCI-e cards aren't any more "custom" than the SATA drives. Is a 3D accelerator a "custom" PCI-e card? What about a PCI-e network card? Right now, a SATA SSD and a PCI-e SSD are actually more or less the same electronics, except that the PCI-e card also has a SATA controller built in.
There's zero nee
Re: (Score:3, Insightful)
I think you're missing the point. The SATA form factor is going to have much higher demand than any PCI-e card, period, for the simple fact that PCI-e is not really expandable while SATA is. SATA has a massive amount of infrastructure and momentum behind it for deployments running the gamut from small to large. That means SATA-based SSD drives are going to be in very high volume production relative to PCI-e cards. It DOES NOT MATTER if the PCI-e card is actually cheaper to produce, it will still be price
Re: (Score:3, Insightful)
I think you're missing the point. The SATA form factor is going to have much higher demand than any PCI-e card, period, for the simple fact that PCI-e is not really expandable while SATA is.
I think you're missing *my* point. The PCI-e standard is for expansion slots. You know, for... expansion. There already are 1TB SSD PCI-E cards, and you can plug at least 4 into most motherboards, and 6-8 into most dual-socket server or workstation boards. Just how much expandability do you *need*?
Keep in mind that 99% of the point of SSD is the speed. It finally removes that hideous mechanical component that's been holding back computing performance for over a decade now. Nothing stops you from having a couple of 2TB spinning disk drives in there for holding your movies and photos and all that junk that doesn't need 100K IOPS.
Re: (Score:2)
I believe that there will be a true Hard Card revival because of the facts of this current market.
SATA 3.0 adoption will be slow (motherboards with 6Gb/s SATA are noticeably more expensive) and
Re: (Score:2)
I believe that there will be a true Hard Card revival because of the facts of this current market.
This current market? Laptops are now, what, 60% of total PC sales? They passed the 50% mark a year or so back, but I haven't been paying attention much since then. Laptops don't have multiple internal PCIe slots. There is some advantage in custom form-factors for fitting inside a laptop, but the 1.8" and 1" hard disk form factors are a logical place to go. Maybe a PCIe bus rather than SATA sounds sensible, but it needs more wires (and more motherboard traces), which makes things much more expensive an
Re: (Score:2)
If a PCI-e SSD at the same price as an equal capacity SATA drive provided literally 100 times the performance, would people ignore it because.. wait... it's a funny shape for a drive?
No, of course not. But it cannot happen, because you have to recoup your driver creation and maintenance costs for plural operating systems.
C//
Re: (Score:2)
I think you're missing *my* point. The PCI-e standard is for expansion slots. You know, for... expansion. There already are 1TB SSD PCI-E cards, and you can plug at least 4 into most motherboards, and 6-8 into most dual-socket server or workstation boards. Just how much expandability do you *need*?
I think you've missed the boat entirely. I'm a small business owner, and as a business owner I buy the cheapest computers that allow my employees to get their work done. This means they're mATX form factor and, as you stated earlier, everything is on the board (video, sound, networking) and they're lucky to have even a PCIe x16 slot for a video card upgrade. So where are all the business desktops with four or more PCIe slots? I've never seen one on an mATX business-class board, but I have seen plenty of boards with
Re: (Score:2)
Keep in mind that 99% of the point of SSD is the speed. It finally removes that hideous mechanical component that's been holding back computing performance for over a decade now. Nothing stops you from having a couple of 2TB spinning disk drives in there for holding your movies and photos and all that junk that doesn't need 100K IOPS.
You may have missed it, but the desktop is dead. The major markets for SSDs are notebooks and servers. Modern servers are 1U or blade and have ~0 available PCI-e slots. Notebooks don't have any PCI-e slots either, and manufacturers can't yet make models without support for regular hard drives.
Re: (Score:2)
The PCI-e cards aren't any more "custom" than the SATA drives.
You don't have to write driver software for all of the individual platforms you might support if you pick SATA. So yes, in that sense SATA is less "custom" than the PCIe interface, because the PCIe approach requires that much more customization work.
C//
Re: (Score:2)
Meanwhile, there's already several 20-80 Gbps PCI-e ports on every motherboard
ROFLMAO
Pretty much every board has one x16 slot (though in some cases it may be x8 or even x4 electrical). However, given that most desktop users buying SSDs will probably be using this for graphics, and that some boards don't like anything but a graphics card in this slot, it can't really be considered a general-purpose slot.
The remaining PCIe slots on most boards (if there are any, there are still machines being made w
Re: (Score:2)
"We Lose Money On Each Unit, But Make It Up Through Volume"
Take a look at memory sticks and memory cards - they're just one of the dumbest chips possible wrapped in a few cents of plastic. Multiply it up to desired SSD size. It actually comes out to quite a bit in parts before you start trying to build an SSD out of it. Now I haven't looked at FusionIO's products in a while but their early products at least were basically banks of RAM with a battery powered backup. Neat, but didn't really help unless you co
Re: (Score:2)
Re: (Score:2)
*) This really is stupid: 256GB OCZ Z-Drive p84 PCI-Express [auspcmarket.com.au] is $2420, but I can get four of these 60GB OCZ Vertex SATA [auspcmarket.com.au] at $308 each for a total of $1232, or about half. Most motherboards have 4 built-in ports with RAID capability, so I don't even need a dedicated controller!
Let me just point out, I bought 2 SSD drives and used my onboard RAID, only to find out that I was limited to 1 PCIe lane due to the onboard controller's design, and thus was running at the speed of a single one of my SSDs instead of 2, realizing no performance gains from RAID0.
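That bottleneck is easy to see on paper; the figures below are illustrative (one PCIe 1.x lane, a typical SATA SSD of the era), not the poster's measurements:

lane_mb_s = 250       # rough usable bandwidth of one PCIe 1.x lane
per_drive_mb_s = 230  # plausible sequential read for one SSD of that era
drives = 2
# RAID0 behind a one-lane controller is capped at the lane, not the sum:
print(min(drives * per_drive_mb_s, lane_mb_s))  # ~250 MB/s, i.e. one drive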
Not really impressed with OCZ (Score:4, Interesting)
At least not the Colossus I bought. Write speeds are great, but read speeds suck compared to the Intels. The Colossus doesn't even have NCQ for some reason! There's just one tag. The Intels beat the hell out of it on reading because of that. Sure, the 40G Intel's write speed isn't too hot, but once you get to 80G and beyond it's just fine.
The problem with write speeds for MLC flash-based drives is, well, a bit oxymoronic: with the limited durability you don't want to be writing at high sustained bandwidths anyway. The SLC stuff is more suited to it, though of course we're talking at least 2x the price per gigabyte for SLC.
--
We've just started using SSDs in DragonFly-land to cache filesystem data and meta-data, and to back tmpfs. It's interesting how much of an effect the SSD has. It only takes 6GB of SSD storage for every 14 million or so inodes to essentially cache ALL the meta-data in a filesystem, so even on 32-bit kernels with their 32-64G swap configuration limit the SSD effectively removes all overhead from find, ls, rdist, cvsup, git, and other directory traversals (64-bit kernels can do 512G-1TB or so of SSD swap). So it's in the bag for meta-data caching.
Data-caching is a bit more difficult to quantify but certainly any data set which actually fits in the SSD can push your web server to 100MB/s out the network with a single SSD (A single 40G Intel SSD can do 170-200MB/sec reading after all). So a GigE interface basically can be saturated. For the purposes of serving data out a network the SSD data-cache is almost like an extension of memory and allows considerably cheaper hardware to be used... no need for lots of spindles or big motherboards sporting 16-64G of ram. The difficulty, of course, is when the active data-set doesn't fit into the SSD.
Even using it as general swap space for a workstation has visible benefits when it comes to juggling applications and medium-sized data sets (e.g. videos or lots of pictures in RAW format), not to mention program text and data that would normally be thrown away overnight or by other large programs.
Another interesting outcome of using the SSD as a cache instead of loading an actual filesystem on it is that it seems to be fairly unstressed when it comes to fragmentation. The kernel pages data out in 64K-256K chunks and multiple chunks are often linear, so the SSD doesn't have to do much write combining at all.
In most of these use-cases read bandwidth is the overriding factor. Write bandwidth is not.
-Matt
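The meta-data sizing above implies a per-inode footprint you can back out directly from the numbers given:

ssd_bytes = 6e9  # 6GB of SSD swap, per the figures above
inodes = 14e6    # covers roughly 14 million inodes
per_inode = ssd_bytes / inodes
print(per_inode)         # ~430 bytes of cached meta-data per inode
print(40e9 / per_inode)  # a 40GB SSD would cover ~93 million inodes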
Re: (Score:3, Interesting)
Matt,
Totally with you on the Colossus not being great on random I/O; that's why we reviewed one:
http://www.pcper.com/article.php?aid=821&type=expert&pid=7 [pcper.com]
The cause is mainly that RAID chip. It doesn't pass any NCQ, TRIM, or other ATA commands on to the drives, so they have no choice but to serve each request in a purely sequential fashion. The end result is that even with 4 controllers on board, the random access of a Colossus looks more like that of just a single Vertex SSD.
Allyn Malventano
Storage Editor, PC Perspective
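A rough model of what losing NCQ costs on random access; the 0.25 ms service time below is an assumed flash read latency, not a figure from the review:

service_ms = 0.25  # assumed per-request flash read latency
for queue_depth in (1, 4, 32):
    # Ideal scaling: throughput grows with requests in flight
    # (real drives flatten out well before QD32).
    print("QD", queue_depth, "->", int(queue_depth * 1000 / service_ms), "IOPS")
# Without NCQ the queue depth is pinned at 1, so four internal
# controllers still behave like a single drive on random access.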
Don't we want raw access + NILFS? (Score:1)
Re:Don't we want raw access + NILFS? (Score:4, Funny)
I think I would prefer MILFS on top, don't you?
Re: (Score:2)
Anyone else... (Score:2)
Anyone else agree that SSD speeds are plenty fast for the tasks given to them? When I shop for SSDs I look for a reputable company whose drives don't stutter like crazy on reads and writes, at the lowest price. I've owned Intel X25-Ms as well as other brands and I can't tell the difference in performance. Of course, the benchmarks do paint different numbers.
But who is REALLY gonna notice that 0.03ms difference in "seek time" for one SSD over another, or 150MB/sec versus 220MB/sec sequential? SSDs these day
Re: (Score:2)
No. I want speed. I want to be able to suck 16GB of the OS out of the SSD and into RAM in 0.03ms. So there :-)
Re: (Score:2)
By and large, for ordinary user-space apps and workloads you are certainly right. But even some home users do intense things, such as video encoding or 3D rendering, which, because of the high-intensity I/O that can be associated with them, will certainly benefit from faster disk. Now, if one hasn't already upgraded to an SSD, I'll say this: one is missing out on the best upgrade you can do for your daily experience of your computer, barring a really nice monitor.
C//
Limited Edition = artificial scarcity (Score:2)
It should be illegal to label products like this. The only thing limited is the mental capacity of those who buy it because of this label. ;)
Re: (Score:2)
I specifically avoid products with such a label because I know that means I can't replace it if it fails. One exception is Mountain Dew's limited edition with real sugar (but that's not something that fails).
Re: (Score:2)
I specifically avoid products with such a label because I know that means I can't replace it if it fails.
I agree, it's stupid, and unless they pre-determine the run size it's meaningless, but the general statement above applies to most computer parts if your window is much bigger than a year.
Photoshop on a Monochrome Mac? (Score:2)
Photoshop 1.0 actually ran on a B&W Mac? Seriously? What's the point in that?
Although, if anyone knows where I can find a copy of this for my Mac Plus, let me know...
Re: (Score:2)
Sorry, wrong article. Somehow got redirected after logging on.
Re: (Score:3, Insightful)
Because we're talking about the home/enthusiast market, which is completely different (including and especially in price point) from the enterprise storage market.
Re: (Score:3, Informative)
Why does it matter if they get their blazing fast speed by fragmenting all the data all over the place? On hard disks fragmentation is a bad thing, on SSDs it's a good thing, what's your point?
Re:Marketspeak, or as normal people call it: lies. (Score:5, Insightful)
If "almost a halt" is 200MB/s read speeds as opposed to 260, I think I can live with it before I upgrade to my TRIM firmware, which negates the whole issue... whoops, I started using TRIM on my home drives months ago.
Seriously, the SSD market has exploded in the last 12 months. It's gone from being an expensive tool useful to enthusiasts to a not-quite-as-expensive-but-faster-than-any-number-of-hard-drives-can-provide utility that's worth five times its price, especially for enterprise users.
* Proud owner of 1 Intel SSD, 3 OCZ SSDs, and administrator of about 3TB of SSD SAN and >8GB FusionIO cache with a bunch of spinning magnetic domains in the background that we can't get rid of fast enough
Re: (Score:2)
Why just for "enterprise" users, and what does Star Trek have to do with it anyway?
Would SSDs make a big difference for people who create and edit sound or video? If you tell me it'll improve the performance of my digital audio workstation or video editing software, I'll blow my fat tax refund check on some SSDs right now. Can I just hook up SSDs to the SATA controller on my machine?
Hell, I'm just full of questions. Irish coffees tend to do that to me.
Re: (Score:1, Informative)
If you're doing non-linear sound and video editing with multiple simultaneous streams coming out of many files, and those files are greater than the amount of your available RAM (common), then yes, an SSD would make a big difference. You'd also experience comparatively blazing boots on your workstation. And yes, SSDs will connect to your SATA controller. As far as the system is concerned, they are hard drives. Very, very fast hard drives.
Re: (Score:1)
Make sure you get either an Intel X25-M (though the biggest one they offer is 160GB) or something with an Indilinx controller (OCZ Vertex, for example, up to 256GB). Stay away from anything with a JMicron controller - those drives might be cheaper for bigger sizes, but the performance is crap.
Re:Marketspeak, or as normal people call it: lies. (Score:5, Insightful)
Irish coffees bring out the best in everyone ;)
Reason I started using them at home was video editing - not very useful for encoding, when you can rarely outpace your CPU's capability to encode stuff, but for random seeking/non-linear stuff/extracting streams/muxing, SSDs are a boon. Depending on your workload you can even get away with using crappy SSDs that are shit at random workloads but awesome at sequential.
TBH though, you'll get the most noticeable improvement from using it as your system drive; apps start almost instantly and there's never any thrashing as $bloaty_app loads. Heck, my Linux machines boot in 5s with the comparatively cheap OCZ Agility drives; the difference is less noticeable in Windows, however. Try running a laptop off an SSD for a month and then go back to a mechanical drive - the apparent slowness will drive you crazy :)
The benefits for enterprise users are especially large, because 20k of SSD can replace 100k of fibre channel whilst getting 10x the performance and greater reliability. Plus Picard totally loves SSDs, as he can rest his tea, Earl Grey, hot, on them without risking Data loss.
Re: (Score:3, Informative)
Try running a laptop off an SSD for a month and then go back to a mechanical drive - the apparent slowness will drive you crazy :)
Not to mention the battery life hit of going back: HD -> SSD got me 40% longer battery life on my netbooks. About 11 hours in total now, which is the way it should be. Plus no more worries about vibrations, decreased heat, and it's quieter.
Re: (Score:2)
Thanks, MrNemesis. That's exactly what I'm going to do. I'm happy with the data throughput that SATA drives with big caches give me for streaming samples or video, but it would be great to have my system a little peppier.
I'm going to haunt the online stores later today, as soon as I get some breakfast.
I'm liking mine. (Score:1)
I'm using the Patriot Torqx m28 [newegg.com] that I got at Fry's. Peppy doesn't begin to describe it. I'm seeing 8800 small random read IOPS with Iometer, and 28000 sequential. Compare this with about 180 IOPS for a 15k SAS hard drive - it's over 40 times as fast. Boot time is well under 30 seconds. My Core 2 Duo laptop is usable for work again, and I can finally work with virtual machines in a reasonable manner.
Any one you get is going to be better than spinning disk, but the newer ones really are much better and m
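Sanity-checking that comparison with the poster's own numbers:

ssd_iops = 8800  # small random reads, per the Iometer run above
sas_iops = 180   # the 15k SAS figure quoted above
print(ssd_iops / sas_iops)  # ~48.9, so "over 40 times as fast" holds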
Re: (Score:1)
You can hook it up to the SATA controller.
Though if you have a spare PCIe slot, that would probably give you more throughput. Of course, the model that plugs into PCIe is different from the SATA model.
It's likely you would notice the benefit regardless.
Re: (Score:2)
>8GB FusionIO cache with a bunch of spinning magnetic domains in the background that we can't get rid of fast enough
Is that supposed to be TB? Don't ioDrives come in 160GB multiples?
Mind you, if I had 8TB of ioDrives, there'd be no need for anything else. Each one of those has read speeds of close to 1GB/sec, and enough IOPS to beat a dozen of the next best competitor. Now if only they cost 15x less per GB.
Re: (Score:2)
I've got a question here, if you don't mind me asking:
Are SSDs more prone to errors than disk drives? If so, why?
There seems to be some strangeness about SSDs and if I try to go read some technical papers on them on a Friday night when I've half a snoot full, it's going to make me all headache-y.
And regarding this "trim" function, can't they just make the nodes smaller?
And is this problem just going to go away once they get the manufacturing capacity to make gigantic SSDs the way they make gigantic hard drives?
Re: (Score:2)
Re: (Score:3, Informative)
This is completely backwards. It is hard drives which fail without warning. See Google's recent paper on the futility of S.M.A.R.T. And when an HDD fails, your data is _gone_. The best you can hope for is spending huge amounts of money to put the platters into another drive and read the data back. The predominant failure mode for flash is erase-cycle endurance exhaustion, at which point the flash reverts to being read-only. Compared to an HDD, the flash failure mode is hugely desirable. You can als
Re: (Score:2)
Except that most modern SSDs actually have a 50,000,000 erase cycle limit, not 100,000. For reference, an X25-M writing continuously as fast as it could, constantly, wouldn't hit this until 140 years.
The other nice thing here is that with hard disks, the risk of disk failure is constant relative to the capacity of the device; with SSDs, the risk halves as the capacity doubles, because the controller can spread its writes out more.
SSDs really are *massively* more reliable than HDDs. They last for longer
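For anyone who wants to run the wear-out arithmetic themselves, the general formula is simple; the example figures below are hypothetical (a common 10,000-cycle MLC rating), not the 50,000,000-cycle figure claimed above:

def endurance_years(capacity_gb, erase_cycles, write_mb_s, write_amp=1.0):
    # Total bytes writable before wear-out, divided by the sustained rate.
    total_bytes = capacity_gb * 1e9 * erase_cycles / write_amp
    return total_bytes / (write_mb_s * 1e6) / (3600 * 24 * 365)

# Hypothetical: a 160GB drive, 10,000-cycle MLC, 100 MB/s sustained writes.
print(endurance_years(160, 10_000, 100))  # ~0.5 years at this worst case

Real workloads write far below the sustained maximum, and the result scales linearly with the cycle rating, so which rating you believe dominates the answer.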
Re: (Score:1)
This is completely backwards. It is hard drives which fail without warning.
I hate to break it to you but SSDs also fail without warning and go completely dead. Visit the PC Enthusiast forums and do some searches. 100% dead SSDs are very common despite their short time in the market. It's scary how many of these things are going belly up after only months of use.
Platter-based disks and SSDs both have big, complex controller boards, and those controllers are subject to failure. The only thing you're missing with SSDs is the risk of mechanical failure. If a platter-based disk is t
Re: (Score:3, Informative)
The predominant failure mode for flash is erase cycle endurance exhaustion, upon which time the flash reverts to being read-only. Compared to a HDD the flash failure mode is hugely desirable.
At my company a few years ago we purchased two OCZ SSD drives (using the now infamous JMicron controllers). They were for two identical systems, but we kept having problems with the first one we were setting up (using Linux). Everything would seem fine at first, but the system would start crashing and become unbootable within hours. We formatted and started over a couple of times replacing various pieces of hardware. Eventually we narrowed it down to the SSD by using a command (I forget the command use
Re: (Score:2)
So in other words you bought some cheap crap from a fly-by-night called "OCZ" and installed it in your servers, and were surprised when it didn't work. Do you also buy hard drives from no-name Taiwanese, or do you stick to Seagate, Western Digital, and Hitachi?
Crap is crap, whether HDD or SSD. And for that matter, memory. You can buy some janky erroneous DRAM from OCZ as well, but I don't recommend it.
Re: (Score:2)
So in other words you bought some cheap crap from a fly-by-night called "OCZ" and installed it in your servers, and were surprised when it didn't work.
The same "cheap fly-by-night" that this review is about?
Re: (Score:3, Informative)
I've had one in my laptop for about 8 months and write gigabytes to it every day, particularly suspending VMware images to disk. It still writes at 140 MB/s sustained (to an ext3 filesystem, not just raw write speed). That might be slower than when it was new, I don't remember, but it destroys any laptop hard drive. This drive was expensive though, like $800 IIRC
Re: (Score:2)
I've heard they're working with a Trim function thingy to remedy this, but I haven't really paid attention since.
If you're going to take the time out to post and bitch, at least read up and know what you're bitching about. They've had TRIM for a while now, and Indilinx firmware can collect fragmented nodes during idle time. [engadget.com]
These are solved problems you're bitching about.
What do you do for an encore? Complain about how annoying it is to get across town in a horse and buggy?
Re: (Score:2)
Disgusting!
Re: (Score:1)
Up until 2003, I used an old PowerMac as my main machine, running Mac OS 7.6.1. I kept the system folder on a RAM disk and booted off that. Blazing fast, but it had the bad habit of losing all data whenever there was a power failure (nothing a periodic mirror-to-disk couldn't remedy, though). I still use that old PowerMac all the time (running Mac OS 9.2.2 now) and keep a pers
Re: (Score:2)
I think Seinfeld, rather, had it right. "Limited to what, how many you can sell?"
Re: (Score:3, Insightful)
Re: (Score:2)
Maybe in 1995 [wikipedia.org].
Re: (Score:2)
Re: (Score:2)
I see a few reasons:
1: Most people will be running Windows, which pretty much means you use NTFS whether you like it or not (or you could use FAT32, but that isn't exactly going to be any better). Even on Linux, I'd imagine the number of clueless newbies who would set up a standard filesystem on the device and quickly ruin it would be pretty high. That means high RMA expenses and pissed-off users.
2: Putting the wear-leveling control on the drive puts the drive manufacturer in control of it. That means th