Intel Unveils SSDs With 6Gbit/Sec Throughput 197
CWmike writes "Intel announced a new line of solid-state drives (SSDs) on Monday that are based on the serial ATA (SATA) 3.0 specification, which doubles I/O throughput compared to previous generation SSDs. Using the SATA 3.0 specs, Intel's new 510 Series gets 6Gbit/sec. performance and thus can take full advantage of the company's transition to higher speed 'Thunderbolt' SATA bus interfaces on the recently introduced second generation Intel Core processor platforms. Supporting data transfers of up to 500MB/sec, the Intel SSD 510 doubles the sequential read speeds and more than triples the sequential write speeds of Intel's SATA 2.0 SSDs. The drives offer sequential write speeds of up to 315MB/sec."
Finally, decent write speed from Intel ... (Score:2)
on a (rich) consumer SSD. But, while I'm loving all the Marvell / Sandforce / Intel hypersonic speed-worthiness, how about a decently fast, really affordable solid state drive? How much longer will these be 20x the per GB cost of a HDD?
Re: (Score:3)
Re: (Score:3)
Having been a somewhat early adopter of SSDs, I got bitten a couple of times by the JMicron bug - a pox on them. I've decided I won't buy another SSD until I can get a 128 GB drive for $100
Re: (Score:3)
It uses a Marvell controller, not Intel controller (Score:2)
Having been a somewhat early adopter of SSDs, I got bitten a couple of times by the JMicron bug
Everything I've read so far suggests that if you are buying SSDs you want to go with Intel.
Note that this new Intel SSD is the first Intel-branded SSD that uses a non-Intel controller. It uses the same Marvell controller [techreport.com] used in the well-regarded Crucial RealSSD C300.
I've also read about Intel's great combo of performance and robustness, but that reputation is mostly a result of Intel's controllers. JMicron, a manufacturer of SSD controllers, got its buggy reputation from early JMicron-based SSDs. Marvell's controller performance has been proven in many reviews, but Intel's "endorsement" gives
Re: (Score:2)
"mostly a result of Intel's controllers"
Actually, the controller is just the hardware; it's the firmware running on it that makes it smart. Intel owns that firmware, I would think.
Re: (Score:2)
Thanks for that tip. I've been buying computer and printer memory from Crucial, but it was great to read the review on A&D and see how well their drives scored.
Re: (Score:2)
As for
Re: (Score:2)
Never had any issues with my Vertex 2 waking up from hibernation.
Re:Finally, decent write speed from Intel ... (Score:5, Interesting)
The cost of an SSD is paid back by the speed, not the capacity. What I find strange is that shops list SSDs by EUR/GB instead of EUR per MB/s. The speed is the defining attribute, not the capacity.
Re: (Score:2)
> Your MP3's and movies do not require the high throughput.
And more importantly, your MP3s and movies do not require the random reads and writes that are an SSD's greatest strength.
Re: (Score:2)
Re: (Score:3)
That is what ZFS with L2ARC (a 'level 2' cache) does: it uses the SSD as a cache in front of the slower but bigger disks.
On Linux, a fairly new development called bcache does something similar
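For the curious, wiring an SSD in as L2ARC is a one-liner on ZFS (a minimal sketch; the pool name 'tank' and the device path are just placeholders):

# add the SSD as a level-2 read cache for an existing pool
zpool add tank cache /dev/sdb
# the device then shows up under a "cache" section in the pool layout
zpool status tank

Note that L2ARC only accelerates reads; synchronous writes are handled separately by adding a log (ZIL) device.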
Re: (Score:2)
Re: (Score:2)
>>IMHO SSD's should be used as "something in between your HDD and your memory"
You're talking about what is essentially Vista/Win7's readyboost feature, using flash as a cache between RAM and HDD.
You could do this now. Just buy an external SSD and plug it in.
Re: (Score:2)
Having said that: Yes, I do think cache is a good use of SSD's. Just not the best use imaginable.
Re: (Score:2)
The speed of these devices on SATA3 (500 MB/sec) is almost that of PC133 DRAM (570MB/sec). While you could make that argument for a modern system, anything spec'd to run on a P3/1st gen Athlon "should" be able to use the HD as raw memory with similar performance to those machines.
Re: (Score:2)
Unlikely. If it's sitting on a plain-Jane PCI bus, the max throughput is only 133 MB/s (32 bits * 33 MHz). And the latency is likely 10 to 100 times higher.
Re: (Score:2)
IMHO SSD's should be used as "something in between your HDD and your memory"
Unfortunately, only ZFS and Windows Vista or later with Readyboost and Superfetch will actually DO this without application support. There is an experimental dm-cache module for Linux which does this (it's a block-level cache) but it does not work on contemporary kernels and appears to have been abandoned.
Your idea has a little merit, though, because hybrid drives already exist. Of course, they have even poorer driver support.
Re: (Score:2)
HDD tech is established... it's like complaining that 1.5 TB disks cost $100 when a backup tape the same size costs $50. You're paying for speed, and essentially today's devices help pay the companies to make tomorrow's devices as well.
The thing I really want to see is better syncing of devices. Really I don't NEED more than 64GB or 128GB on a laptop, certainly not for my day job... and I'm one of the biggest users of HDD storage at my company. If there was a good, solid way to sync to a 1TB drive easily an
Re: (Score:2)
>>HDD tech is established... it's like complaining that 1.5 TB disks cost $100 when a backup tape the same size costs $50.
Except two or three high speed HDDs in a RAID configuration will outperform a SSD on most tasks, and cost about 10x less.
I keep looking at SSDs, but their price and performance just aren't where they need to be to buy one other than... just 'cause they're cool. If they even had a 512GB model available for $200 (which is twice as much as I paid for my three 500GB drives 6 years ago)
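For reference, a stripe like the parent describes takes only a few commands with mdadm (a sketch; the device names, filesystem, and mount point are placeholders):

# stripe three drives into one md device (no redundancy)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/raid

Sequential throughput roughly scales with the number of spindles, but as the reply below points out, random I/O is where an SSD still wins.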
Re: (Score:2)
No, a mediocre SSD will crush a very expensive RAID-5 in read tests. If you go up to something like RAID-15 across 30 drives or so, yeah, the RAID will be better. An SSD really is much faster; otherwise no one would be all that excited about paying 20x the cost.
Re: (Score:2)
Except two or three high speed HDDs in a RAID configuration will outperform a SSD on most tasks, and cost about 10x less.
Of course, however ...
(1) SSDs own platter drives in random read/write. What do people do on their machines? Mostly random reads/writes!
(2) SSDs consume less power, are dead silent, and have no moving parts.
(3) You can't cram three 15k SCSI drives into a MacBook. Well ... maybe you could, but it would look very funny and be way less portable.
Re: (Score:2)
-- How much longer will these be 20x the per GB cost of a HDD?
The spread will come down, but there will be a spread until long after no one is using HDDs. http://en.wikipedia.org/wiki/File:Disruptivetechnology.gif [wikipedia.org]
I'd agree with your 20x right now at, say, the 256 GB - 500 GB levels. But let's say when SSDs are around 3 TB the spread might only be 10x. At 20 TB the spread might only be 5x. And maybe at 20 TB people start to switch en masse. That at first drives the spread back up close to 10x, but within 2 years, the spr
Re: (Score:2)
Except that even "spinning iron rust" drives themselves have slowed down; they don't go up in capacity every six to eight months like they used to. Maybe parallel recording has already hit its limits?
Re: (Score:2)
I meant perpendicular, not parallel.
It would be nice to have a 15-second window to edit our posts.
"Thunderbolt SATA bus interfaces"? (Score:5, Informative)
The SATA 3 ports on Cougar Point platform have nothing to do with Thunderbolt.
Re: (Score:2)
Is there any reason you can't transfer hard drive data over PCIe?
Re: (Score:2)
Re: (Score:2)
Somebody is confused.
With Intel's naming (lack of) conventions I'm surprised anyone ISN'T confused.
Re: (Score:3)
Thunderbolt only supports two protocols, DisplayPort and PCI-E. Other controllers can hang off the end of the PCI-E channel and drive other protocols from there but Thunderbolt itself is certainly only DisplayPort and PCI-E [intel.com].
Re: (Score:2)
That makes it simpler to implement as a device. It's basically the consumer equivalent of the 10Gb Ethernet / 8Gb Fibre Channel adapters on servers that can speak Ethernet or Fibre Channel... (and virtualize anything else)
Consumers care about video and audio devices mostly, maybe a smattering of other things. This brings back the external PCI enclosure again and should open up Macs to things like robotics, test equipment, etc. that require dedicated hardware cards most Macs can't have.
Re: (Score:2)
Could this also possibly mean external GPUs? I wouldn't mind something half the size of a Mac mini stacked underneath if it meant a dedicated GPU with its own RAM.
Re: (Score:2)
Re: (Score:2)
That's the only way to feed the darn Thunderbolt bandwidth! Even to max out Thunderbolt you'd need at least three of these in a RAID of some fashion, meaning something to control them, like a Drobo.
Wear usage? (Score:2)
I know this problem has (probably) been satisfactorily addressed, but if one were to use such a super-fast drive for an application that had extremely heavy usage (swap space for the OS or a program like Photoshop), wouldn't it cause those sectors to be read/written many, many times very quickly? Doesn't each "cell" have a limited number of times it can be accessed before it fails? (on the order of 100,000 I think). And wouldn't that cause the drive to fail (sooner rather than later because it is so fast
Re:Wear usage? (Score:4, Informative)
Again, I'm sure the SSD drive manufacturers have looked at this problem very closely, I'm just concerned that's all.
So, look up the specs, then. Current write cycles are over 1,000,000 per cell. Modern wear-leveling algorithms combined with extra blocks and ECC mean that it's more likely that some other component will fail before your SSD will.
Besides, if you were really concerned, and not just trolling, wouldn't you have the same issues with your hard drive, too? Doubly so in a laptop?
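If you'd rather measure than worry, most SSDs expose their remaining wear through SMART (a sketch, assuming smartmontools is installed and an Intel drive sits at /dev/sda):

# dump the vendor SMART attributes; on Intel SSDs attribute 233
# (Media_Wearout_Indicator) counts down from 100 as erase cycles are consumed
sudo smartctl -A /dev/sda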
Re: (Score:2)
Far from it. Depending on the litho, at 25nm for instance you're down to 3,000 program/erase cycles. And even at 34nm, you're still little better than 10k cycles. The overprovisioning and ECC required at these scales is massive.
But yes, studies have been done and it takes an industrial strength workload to kill an SSD. If one of these is in your home machine, you likely won't kill it. If you think you might, then you should already have practices in place to
Re: (Score:2)
Re:Wear usage? (Score:5, Informative)
But yes, studies have been done and it takes an industrial strength workload to kill an SSD. If one of these is in your home machine, you likely won't kill it. If you think you might, then you should already have practices in place to deal with disk failure.
Just as important to note: the failure mode for flash memory is to become read-only; in other words, it simply becomes impossible to delete what is written on your drive, which is a perfect reminder to get a new one. Given that this sad event will be nearly ten years from now, it should be dirt cheap to buy a replacement drive.
When you do, though, don't forget to remove the metal shell on the old drive and cook it in the microwave for a minute or two to destroy your old data. It's not like you're going to be able to sell the drive used anyway.
Re: (Score:3)
Just as important to note is the failure mode for flash memory is for it to become read-only;
Are you sure? I was under the impression that it worked like EPROM, where the bits were set high by the erase cycle and data was written by grounding the bits which needed to be zero.
That being the case, it's more likely that the data would be corrupted (since it would fail to set bits anymore) which is actually the sort of thing one of my old USB keys started to do.
'Course, there might well be logic in the controller to detect this and put the drive into read-only mode when it runs out of non-defective
Re: (Score:2)
No, it doesn't work that way. Each cell holds a charge that encodes a single bit or multiple bits. If wear leveling doesn't move the cell's contents to another cell while the drive is turned on, the charge will leak and your pattern will be all ones, all zeroes, or somewhere in between. The directory structure will also be corrupted. Yeah, you won't be able to write new data, but you probably won't reliably read what's already written either.
ugly opportunity for malware (Score:2)
How about a vicious piece of malware? Could a piece of code be written to circumvent the wear-leveling algorithm and carpet-bomb your SSD with repetitive writes so that it's worthless overnight? It could be a real PITA in cases like the MacBook Air, where the SSD is built into the mobo. It's not just a case of paying for a new SSD as a replacement.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
From what I've read, it would take continuously overwriting the entire drive for years to cause it to fail.
Re: (Score:2)
I don't know about the first model(s), but the latest MacBook Air uses SSD modules [fosketts.net], it's not built into the motherboard.
Re: (Score:2)
So, look up the specs, then. Current write cycles are over 1,000,000 per cell. Modern wear-leveling algorithms combined with extra blocks and ECC mean that it's more likely that some other component will fail before your SSD will.
Are you looking at MTBF numbers or something? Expensive 34nm flash has 10,000/cell, cheap 34nm 5000/cell and 25nm is down to 3000/cell.
And yes, you can kill an SSD that way, already done it. Granted, I tortured it in pretty much every way possible by running torrents and Freenet 24/7 and keeping it 90%+ full all the time. It died after about 1.5 years with 7000 writes/cell average, 15000 highest.
Fortunately for me I still managed to sneak it in as a warranty repair, even though they aren't supposed to do t
Re: (Score:2)
So, look up the specs, then. Current write cycles are over 1,000,000 per cell.
That's only almost true (it's about 100,000 rewrite cycles per cell), and only for SLC cells. SSDs unfortunately use MLC cells with a rewrite cycle count of around 5,000.
But hey, never miss an opportunity to call someone a troll, right?
Re: (Score:2)
Too bad Newegg lists 17 SLC SSDs. They are also insanely expensive at $10/GB. What was that about trolling?
Re: (Score:2)
Re: (Score:3)
Insofar as actual swap (as in paging physical ram in and out) goes, it depends heavily on the amount of ram the machine has and the type of work load. Mostly-read workloads with data sets in memory that are too large can be cached in swap without screwing up a SSD. A large dirty data set, however, will cause continuous heavy writing to swap space and that can shorten the life of SSD-based swap very easily.
A system which needs to page something out will stall badly if the pageout activity is rate-limited,
Re: (Score:2)
This is why I believe in hybrid setups. The other replies to this are talking about mere millions of writes.
There are no replies talking about mere millions of writes. The one reply that did mention 1 million was talking about per-cell figures and is also very wrong. Furthermore, we do not speak of write limits with SSDs; we speak of erase limits. We should learn the difference before being critical. SSDs write in 4K pages but erase in blocks. OCZ is currently using 128 pages per block, so blocks are 512K on their devices.
Do you know how many writes the swap file gets every hour you use your computer?
You are attempting to invoke a nebulous unknown to support your argument for hybrids. When you k
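To put the page/block distinction in numbers: in the pathological case a single 4K page update forces the controller to rewrite an entire 512K erase block (a sketch using the figures quoted above):

# worst-case write amplification with 4K pages and 512K erase blocks
echo "512 / 4" | bc    # prints 128
# real controllers coalesce writes and remap pages, so observed factors are far lower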
Re:Wear usage? (Score:4, Interesting)
Do you know how many writes the swap file gets every hour you use your computer
Not specifically, but my OS - like pretty much any modern OS - gives me a disk I/O total since the last reboot. I last rebooted 11 days ago. Since then, I've written just under 50GB, and read 27.6GB. I don't think reads wear out flash, so we'll ignore the second number. That works out at 1660GB written per year, if the last 11 days have been representative. Assuming perfect wear levelling, that's 6.4 rewrite cycles per year for a 256GB SSD. A drive that can 'only' handle 3,000 rewrites will therefore wear out after about 450 years. If it does a tenth as well as that, then it will last for almost as long as hard drives have existed.
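Redoing that arithmetic for your own machine is straightforward (a sketch; plug in your own daily write volume in GB, drive capacity, and rated erase cycles):

# daily = GB written per day, size = drive capacity in GB, cycles = rated erase cycles
awk -v daily=4.5 -v size=256 -v cycles=3000 \
    'BEGIN { print cycles / (daily * 365 / size), "years" }'
# with ~4.5GB/day (50GB over 11 days) this prints roughly 470 years,
# the same ballpark as the estimate above, assuming ideal wear levelling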
The advantage of hybrids is the same as the advantage of CPU cache - you get almost the same performance as a very large SSD for a much lower price. Given infinite funds, you'd build a computer that had gigabytes of SRAM, but it's much more expensive than DRAM, so you only have a few megabytes of SRAM and a lot of DRAM, which gives you almost as good performance but costs vastly less (the relative complexity means that the lower bound for SRAM is about six times the price per bit of DRAM - in practice it's higher). If someone else is paying, I'd take the pure SSD solution. If I had to balance price against performance, then ZFS's L2ARC is currently likely to be the best solution (unless it's a laptop, where space is an issue).
It's worth noting that Oracle is currently looking at using rotating disks as L3ARC, with tapes as the persistent storage. Writes remain fast, because ZFS is copy-on-write, so every write to the tape is just an append operation, but L3 cache misses are very expensive (you need to seek the tape, which can take several seconds, rather than several milliseconds) - for write-heavy workloads, it can give very good performance per dollar.
Re: (Score:2)
Re: (Score:2)
It's a large number of writes, not reads. Something like swap space gets allocated, and even after it no longer reflects memory it stays empty for "a while". As long as "a while" is an hour or more on average, you are fine.
Re: (Score:2)
Swapping to flash is dumb. But more importantly, with modern memory prices, swapping is dumb. You're not going to end up in swap unless an application misbehaves, and then the swapping is going to bring your system to a standstill. Better to run out of memory and let the OOM system kill processes. You can get 4GB SODIMMs now. Swap? That's over.
Laptop Backup Times... (Score:2)
ought to be severely reduced if you can pass the info off at those data rates to a similarly fast external drive.
Then if you want to archive to a "slow" spinning hard drive, the external SSD could supply the data at the slower rate of the HD
Re: (Score:2)
That's, effectively, 5 disks to go slightly over their read speed. Then again, you'd get 10 TB (=9.09 TiB) on the external enclosure, while the biggest of these will be 250 GB.
Why not battery backed RAM straight on the bus? (Score:2)
With some lazy writing to solid-state-chips.. :D
Yeah, I am dreaming.. Sigh!! :-/
Re: (Score:3)
The DDRDrive X1 almost fits your design. It's not on the memory bus, but on the PCI-e bus as a storage device. Bit pricey though per gigabyte.
Re: (Score:2)
That's how expensive SCSI controllers work. They have big read/write caches of RAM to speed up transactions, and the RAM has its own battery should the power blink. That allows the captured transactions to be written to disk or recovered from the journal on the next IPL. In today's world, that's a lot of hardware to throw at a simple problem, though; definitely not for the "consumer" or even "prosumer".
3rd generation X-35M? (Score:2)
IOPS? (Score:2)
IOPS?
It's important to know. [blogspot.com]
Only makes sense (Score:2)
Now that 6GBit/s SATA ports are becoming commonly available on motherboards it's only natural that SSDs follow. Normal HDs can't really take advantage of a 6Gb/s port but SSDs can. These high speed ports will make port multiplier enclosures more useful as well.
There's certainly a lot of use for this sort of thing. SSDs can already replace far more expensive DRAM (and the far more expensive motherboards needed to support lots of DRAM) for numerous problem sets, including mostly-read database accesses. Th
Re: (Score:2)
Re: (Score:2)
No new SLC? (Score:2)
Don't forget about Sandforce/OCZ (Score:5, Informative)
Sandforce has already announced [anandtech.com] its new sata3 controller. On paper it looks like it will have much faster sequential writes than Intel, but it sounds like it will also have a shorter lifetime and shorter data retention times due to the use of 25nm NAND. Intel is wisely sticking with 34nm. It may be more expensive to manufacture, but is superior tech. I can only hope that OCZ changes their mind and decides to at least offer a more expensive 34nm version. OCZ won't be shipping their Vertex 3 drives until Q2 so Intel will have a big head start in the market.
The NAND industry seems to be doing its best to encourage ignorance on the disadvantages of smaller process sizes from the consumer POV and the ignorance seems to be widespread. Getting the facts on this issue can be a bit difficult. Here is a good thread on the topic.
http://forums.anandtech.com/showthread.php?t=2142742 [anandtech.com]
The following post sums it up better than I could. Note his point about data retention times as well. That is a point that is often ignored when the focus is solely on write cycles.
As flash cells are shrunk, they become less good. This is a fundamental feature of the technology. The overall volume of the cell becomes smaller, so less electrons can be stored in the cell (so the signal picked up by the electronics is weaker and less clear, so you get a higher error rate) and the insulating barriers around the cell must be made thinner, in order to save space - allowing the electrons to leak out of the cell more easily (reducing power off data retention time). The thinner insulation also wears out more quickly (reducing life cycles)
It's difficult to define a 'fundamental' limit for flash, because it may be possible to work around poor performance, and as yet unknown new manufacturing techniques and semiconductor materials may be developed. However, it has been suggested in the scientific literature that 18-22 nm is the realistic limit. Beyond that, the performance/reliability/lifespan of the flash would be too poor, no matter how much wear levelling, and how sophisticated the ECC codes were.
Enterprise grade SSD flash, will need higher specifications than flash for toy cameras. Enterprise applications are unlikely to tolerate 18 nm flash with 100 write cycles and one lost sector per 100 GB of data stored. However, this probably would be acceptable for toys or throwaway devices.
Some more coverage of the topic:
http://techon.nikkeibp.co.jp/article/HONSHI/20090528/170920/ [nikkeibp.co.jp]
NAND Flash memory quality is also beginning to drop. Chips manufactured using 90nm-generation technology in 2004-05, for example, were assured for about 100,000 rewrites and data retention of about a decade. As multi-level architecture and smaller geometry are introduced, quality is showing a sharp decline. The 30nm 2-bit/cell chips expected to enter volume production in 2009-10 may well end up with a rewrite assurance of no more than 3,000 cycles, and a data retention time of about a year. The first 3-bit/cell chips are hitting the market now, with only a few hundred rewrites.
http://hardforum.com/showthread.php?t=1502663 [hardforum.com]
Flash memory works by trapping electrons. Over time these electrons leak away, until the charge is too small for the data to be read any more. With smaller feature sizes (34 nm instead of 45 or 65 nm) this leakage is more significant and fewer electrons can be stored per bit, thus the time during which the stored value can be maintained is decreased.
http://www.corsair.com/blog/force25nm/ [corsair.com]
Re: (Score:2)
Re: (Score:2)
Is a 32nm version good enough for you?
Yes. Thanks for the link. A lot could change between now and Q2, but that is nice to see. I suspect they would reserve the right to change to different memory at any time. I will keep my fingers crossed that I will be able to purchase a 32nm SF-2000 series drive that maxes out the sata3 interface in both sequential reads and writes. Still, unless consumers become more aware that smaller process size is a bad thing for NAND SSDs I think all of the manufacturers will eventually be peddling NAND at 25nm and be
Sod SATA (Score:3)
Give us fucking SAS already.
Firmware updates (Score:2)
OCZ on the other hand only offers an .EXE tool (32bit only!) that needs an Internet connection and only works if your SSD has an MBR partition style and at least one NTFS formatted partition o
disappointing (Score:2)
Re: (Score:2)
If it's twice as fast as all the other SATA devices I have that are rated 3Gb/s then that'll only average about 40MB/s with peaks around 80MB/s..
Re:that's smokin' (Score:5, Informative)
Your devices are not rated at 3Gb/s; the SATA connection was. This device is: "Supporting data transfers of up to 500MB/sec, ..."
Maybe just read the summary :)
Re: (Score:2)
When transferring files between SATA hard drives on my desktop I usually get around 90 MB/s. I suspect it's your devices that are the problem.
Re: (Score:2)
Guys, don't forget to take into account cached disk data/buffers; use free to see how much you've got. Just run hdparm -tT to see the difference between cached reads and non-cached reads. If this sounds too technical, just do a test with a 20 GB file. This should be enough to make sure cached disk data doesn't give you an illusion of speed. Also read man hdparm under the -t section.
90 MB/s seems on the upper end to me, while 40 MB/s is on the lower end, but I have seen it with generic drivers.
Also, try to
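Another way to keep the page cache honest is to read with O_DIRECT (a sketch; 'bigfile' stands for any file larger than RAM, and iflag=direct needs GNU dd):

# read straight from the disk, bypassing the page cache
dd if=bigfile of=/dev/null bs=1M iflag=direct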
Re: (Score:2)
How to clear your disk cache:
echo 3 | sudo tee /proc/sys/vm/drop_caches
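One small addition: flush dirty pages first, since drop_caches only discards clean cache:

sync
echo 3 | sudo tee /proc/sys/vm/drop_caches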
By default, bonnie++ will test using file sizes that are twice your RAM, to make sure that disk caches get overrun.
You'll really want to use bonnie++ or iozone instead of hdparm if you're comparing HDs and SSDs, since SSDs really only shine with lots of small files. The hdparm results would be rather meaningless.
For my part, I'd rather drop $100 on a RAM upgrade than $200 on an SSD. Once you have all your files read into disk cache in
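A minimal bonnie++ run looks like this (a sketch; /mnt/test is a placeholder for a directory on the drive under test, and -s should be at least twice your RAM if you override the default):

# -d: test directory, -s: working file size, -n: small-file count (multiples of 1024)
bonnie++ -d /mnt/test -s 16g -n 16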
Re: (Score:2)
I have never heard of "sata mobos". I heard about motherboards that had a SATA controller hardwired into them. Using the right driver for that specific controller might help in gaining speed.
http://slashdot.org/comments.pl?sid=2016546&cid=35345062 [slashdot.org]
Re: (Score:2)
It is hard to imagine the great lengths of time you would have to invest in finding a collection of SATA 2.0 hardware that bad, so it's almost certainly your partition alignment or drivers.
You do know that modern drives require 4096-byte partition alignment, while most older OSes presume that 512 bytes is good enough?
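Checking the alignment is quick (a sketch, assuming parted and util-linux fdisk; /dev/sda is a placeholder):

# ask parted whether partition 1 starts on an optimal boundary
sudo parted /dev/sda align-check optimal 1
# or eyeball it: the start sector should be a multiple of 8 (8 x 512B = 4096B)
sudo fdisk -lu /dev/sda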
Re: (Score:2)
Re: (Score:2)
...30MB/s is about the fastest I've seen from a laptop drive, and that was when it was completely new so every write was a sequential write.
Then you'll like this. This was just run off my Atom 270 netbook (HP Mini 110c) with a 500gb Samsung drive and Kubuntu 10.10 -
wizard@wizard-netbook:~$ sudo hdparm -tT /dev/sda
[sudo] password for wizard:

/dev/sda:
 Timing cached reads: 1228 MB in 2.00 seconds = 613.74 MB/sec
 Timing buffered disk reads: 312 MB in 3.01 seconds = 103.70 MB/sec
wizard@wizard-netbook:~$ ;-)
Re: (Score:2)
write test on the same netbook:
wizard@wizard-netbook:~$ sudo dd if=/dev/zero of=/tmp/output.img bs=8k count=256k && sudo rm /tmp/output.img
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 19.916 s, 108 MB/s
wizard@wizard-netbook:~$
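One caveat with that invocation: without a sync, dd mostly measures how fast the page cache absorbs 2GB, not the disk. Adding conv=fdatasync makes dd wait for the data to actually reach the platters (a sketch):

sudo dd if=/dev/zero of=/tmp/output.img bs=8k count=256k conv=fdatasync && sudo rm /tmp/output.img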
Re: (Score:2)
But would you prefer it be used for good or awesome?
$584 for 256GB (Score:3)
$584 for 256GB
Re: (Score:2)
$584 for 256GB
That's $584 for 250GB, in lots of 1000.
Re: (Score:2)
So I'm not sure what makes the Intel drive better than the Crucial one which has been around for many months (and gone through some pains and fixes... ).
A supposedly nerd site like Slashdot linking to a low-info press release or marketing article on Computerworld is stupid.
Re: (Score:2)
Software/tuned hardware is what makes the difference.
The Intel X25-M, etc. was better than its competitors at the time because of the software, because its performance did not degrade over time.
Re: (Score:2)
I would still stick with the recommendations from: http://www.hardware-revolution.com/best-hard-drive-best-ssd-december-2010/ [hardware-revolution.com]
For the same price or less, you can build a bigger / faster / cheaper RAID of OCZ Vertex2 SSDs. And you wouldn't even have to upgrade your motherboard to 6Gbps SATA 3.0
Re: (Score:2)
that's not awful.
The point of these is to feed devices like Drobo anyway. So three would get you 460GB with one failure. That would also nicely fill up a Thunderbolt connection to one or two computers with speeds and latency way faster than the built in drive.
Re: (Score:2)
Re: (Score:2)
I wonder why they outsourced their controller chip, previous incarnations used an Intel controller.
Marvell acquired some of Intel's sub-divisions (including their wireless and embedded groups) - this happened a couple of years ago when Intel decided to focus heavily on their main processor and support chipset lineup. Given that, there is very likely a tight integration between the two companies, so no real reason for Intel to duplicate work and design their own controller chip.
Re: (Score:2)
Where are you seeing that they're using a Marvell controller? I didn't see that in any of the TFAs.
Re: (Score:2)
The perfect complement to Sandy Bridge
Actually, yeah. The problem with Cougar Point (the Platform Controller Hub that goes along with Sandy Bridge, not the Sandy Bridge chip itself, BTW) is with the SATA 3.0Gbps ports. So to get the maximum performance of these new SSDs, you wouldn't use those ports anyway.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It does seem more than a little daft to not simply come up with a PCI-E-for-disks standard, and give us a nice little flexible PCI-E cable. However, since SATA disks are supposed to work on SAS I guess there's still some reason for SATA to exist...
Re: (Score:2)
Re: (Score:2)
Oh. Er. Why?