640GB PCIe Solid-State Drive Demonstrated
Lisandro writes "TG Daily reports that the company Fusion io has presented a massively fast, massively large solid-state flash hard drive on a PCIe card at the Demofall 07 conference in San Diego. Fusion io is promising sustained data rates of 800 MB/sec for reading and 600 MB/sec for writing. The company plans to start releasing the cards at 80 GB and will scale to 320 and 640 GB. '[Fusion io's CTO David Flynn] set the benchmark for the worst case scenario by using small 4K blocks and then streaming eight simultaneous 1 GB reads and writes. In that test, the ioDrive clocked in at 100,000 operations per second. "That would have just thrashed a regular hard drive," said Flynn. The company plans on releasing the first cards in December 2007 and will follow up with higher capacity versions later.'"
Re:Uhh, Price? (Score:5, Informative)
Re:And another question. (Score:4, Informative)
They appear to use normal DRAM as the drive's working memory, then write it out permanently to the NAND flash at shutdown or when the buffer fills.
I would assume a small battery keeps the DRAM powered long enough to dump the data to flash afterwards.
http://www.theinquirer.net/gb/inquirer/news/2007/09/26/hitachi-reckons-solid-state [theinquirer.net]
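A minimal Python sketch of that battery-backed write-back idea (all names and capacities hypothetical, not anything Fusion io has documented): writes land in DRAM, and dirty blocks get dumped to NAND in one pass at "shutdown" or when the cache fills.

```python
class WriteBackCache:
    def __init__(self, nand, capacity_blocks):
        self.nand = nand            # dict-like backing store (the flash)
        self.capacity = capacity_blocks
        self.dirty = {}             # block number -> data held in DRAM

    def write(self, block, data):
        self.dirty[block] = data
        if len(self.dirty) >= self.capacity:
            self.flush()            # the "memory full" case

    def read(self, block):
        # Serve from DRAM if present, else fall back to NAND.
        return self.dirty.get(block, self.nand.get(block))

    def flush(self):
        # The "shutdown" case: the battery keeps DRAM alive just long
        # enough to write everything out to flash in one bulk pass.
        self.nand.update(self.dirty)
        self.dirty.clear()

nand = {}
cache = WriteBackCache(nand, capacity_blocks=8)
cache.write(0, b"hello")   # lands in DRAM only
cache.flush()              # "shutdown": dump to flash
```

The point of the scheme is that the slow, wear-limited NAND only ever sees large batched writes, never the individual small ones.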
$30 bucks a gig (Score:3, Informative)
That means their low-end 80GB drive will run around $2,400 US, depending on tax, shipping, and retail pricing.
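For the lazy, the arithmetic at the quoted ~$30/GB across the announced capacities:

```python
# Back-of-the-envelope pricing at ~$30/GB (estimate, not a list price).
price_per_gb = 30
for capacity_gb in (80, 320, 640):
    print(capacity_gb, "GB ->", "$" + str(capacity_gb * price_per_gb))
# The 80 GB card comes out to $2,400 before tax and shipping.
```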
Re:Lifespan? (Score:1, Informative)
Re:Gb or GB??? (Score:1, Informative)
> Flynn said the card has 160 parallel pipelines that can read data at 800 megabytes per second and write at 600 MB/sec.
That's approximately 10 times faster than a 10K RPM SATA drive, and about 7 times faster than 15K SCSI.
Most importantly: seek time is effectively zero.
> Flynn set the benchmark for the worst case scenario by using small 4K blocks and then streaming eight simultaneous 1 GB reads and writes. In that test, the ioDrive clocked in at 100,000 operations per second. "That would have just thrashed a regular hard drive," said Flynn.
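A quick sanity check on those worst-case numbers: 100,000 ops/s on 4K blocks is itself roughly 400 MB/s of purely random I/O, i.e. still half the quoted sequential read rate.

```python
# Worst-case throughput implied by the quoted benchmark figures.
iops = 100_000            # operations per second at 4K block size
block_bytes = 4096
throughput_mb_s = iops * block_bytes / 1e6   # decimal megabytes
print(throughput_mb_s)    # 409.6 MB/s of random 4K I/O
```

For comparison, a 15K RPM disk at ~5 ms per seek manages on the order of a couple hundred random IOPS, which is where the "would have just thrashed a regular hard drive" remark comes from.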
Wow, $19,200 for 640GB (Score:2, Informative)
Re:Oblig. (Score:3, Informative)
Once you get above $500 desktop computers, it doesn't much matter. A properly tuned system will only use swap, if at all, to drop a few MB from RAM to disk because it's just never accessed. A server that swaps during use is just not set up properly.
Just as expensive as RAM (Score:2, Informative)
If you really have such an environment, I would think fixing your HA setup would be the first priority: duplicating your servers so they can take over each other's jobs, and providing redundant power. I don't even want to know how many transactions/second a properly memory-stored database can do (once you get rid of the filesystem and driver layers, which this thing would require anyway); I'm sure many, many more. While disk won't take as many transactions/sec, you can always back dirty RAM to disk in huge chunks (1 MB blocks or more) to avoid needing 100K I/O ops.
I just don't see any advantage here that properly tuning your server and process environment couldn't achieve with commodity, unspecialized, cheap, easy-to-replace parts and a few brains.
Re:Uhh, Price? (Score:5, Informative)
Of course, large back then meant 4G, and the average hard disk was 9G. This is evolutionary, not revolutionary.
Re:Oblig. (Score:1, Informative)
It's not a magic number, as some people assume for some reason. What they mean is this:
Statistically speaking, some of your bits are going to be incorrect in some places after a mean number of writes of approximately 1,000,000. At that point, you will have actual physical locations that won't read back what you wrote. You could probably compensate for that the way hard drives do for bad sectors, but it's almost impossible to tell when your memory has gone bad.
Maybe the company has found a way to compensate.
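For what it's worth, the HDD-style compensation isn't hard to sketch. Here's a toy Python version of bad-block remapping (entirely hypothetical, not Fusion io's actual controller logic): worn blocks get retired and transparently redirected to a pool of spares.

```python
class RemappingFlash:
    def __init__(self, spares):
        self.store = {}             # physical block -> data
        self.remap = {}             # logical bad block -> spare block
        self.spares = list(spares)  # reserved spare blocks
        self.bad = set()            # blocks known to be worn out

    def _resolve(self, block):
        # Follow any remapping from logical to physical block.
        return self.remap.get(block, block)

    def write(self, block, data):
        target = self._resolve(block)
        if target in self.bad:          # write would land on a worn cell
            target = self.spares.pop()  # retire it, use a spare instead
            self.remap[block] = target
        self.store[target] = data

    def read(self, block):
        return self.store.get(self._resolve(block))

flash = RemappingFlash(spares=[100, 101])
flash.write(5, b"old data")
flash.bad.add(5)            # pretend block 5 wore out
flash.write(5, b"new data") # transparently lands on a spare
```

The hard part, as the parent says, is detecting the wear in the first place; real controllers rely on write-verify failures and ECC error rates for that, which this sketch waves away.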
Re:Oblig. (Score:5, Informative)
"Worn out" flash doesn't spontaneously change state; bits just get stuck and don't erase correctly.
I don't know how flash drives actually handle this, but it isn't magic or impossible to fix.
Also, the lifetime of modern flash is long enough that it is hardly an issue any more, even for normal desktop use. Maybe you don't want to use it for swap *IF* you swap a lot, but given the cost is in the same ballpark as RAM, you could just buy more RAM.
Talking to the company at demo (Score:5, Informative)
That being said, a few of the guys there said they pretty much expect these (at the beginning) to sell best to companies looking to build really, really fast database servers. NOT as SCSI SAN replacements (it's silly to spend $100,000 on capacity you could get for $10,000 in hard drives). Eventually, as the price drops... I know a handful of people who would EASILY pay $1,000 to get one of these in a gaming rig, even if it were only 100 gigs. But that's already a third of the current price (assuming it's around $30 a gig).
Another thing that came up in the conversations: since these are tiny, think about the cost per server rack, and the cost of the electricity to run it all. Take those into consideration and these are actually less expensive than most people would think. A massive rack of hard drives can cost a lot of money in a co-location facility, and a lot of electricity to run. But then again, we're talking about savings on servers, not general in-home use.
When this gets to about a third of its current price, that's when you'll see these things become TRULY mainstream, both for the average company and for home users (be it rich ones who need the latest and greatest).
Fusion-io [fusionio.com] -- Link to their site.
Re:Oblig. (Score:4, Informative)
The specs for a 256Mb NAND flash memory chip [alldatasheet.com] by Samsung (which is by far the biggest NAND flash manufacturer today) quote 100K write/erase cycles, and this is for an IC commonly used in USB pendrives. The figure usually gets worse with increased memory size, since the memory "element" (the floating gate) becomes smaller. For example, modern 16Mb chips, which are the ones I have experience with, usually quote 1 million W/E cycles of endurance.
But, it felt good stroking my ego a bit more
Re:Oblig. (Score:3, Informative)
Re:Oblig. (Score:3, Informative)
-nB
Re:Oblig. (Score:3, Informative)
Re:write limit? (Score:5, Informative)
Considering that this drive is 640GB, that means you would need to write somewhere in the region of 61 PETABYTES of information.
You'd have to write to the drive at a perfect 800 MB/s for 941 days to hit that mark.
At that full write speed of 800 MB/s, it could last as long as 30 years if it can handle 1M writes per cell.
At the end of the day, semiconductors this large and high quality are certainly better than tiny bits of rust on rapidly spinning platters.
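The endurance arithmetic above, redone in a few lines (assuming perfect wear leveling and 100K cycles per cell; with decimal units it comes out to ~926 days rather than 941, depending on which flavor of gigabyte and petabyte you use):

```python
# Worst-case lifetime: every cell written to its endurance limit,
# at the drive's full sequential write speed, with perfect wear
# leveling. Assumptions, not manufacturer figures.
capacity_bytes = 640 * 10**9
cycles_per_cell = 100_000
total_writable = capacity_bytes * cycles_per_cell   # 6.4e16 B = 64 PB
write_rate = 800 * 10**6                            # 800 MB/s, flat out
days = total_writable / write_rate / 86400
print(round(days))   # ~926 days; roughly 10x that at 1M cycles
```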
Re:Oblig. (Score:2, Informative)
The limited write cycles of flash drives are pretty much a non-issue. You probably shouldn't put a swap partition on one, though.
Re:Oblig. (Score:4, Informative)
Hard disks have a fragmentation issue because sequential accesses are much faster than random ones with a spinning disk. Each time the next sector to get isn't right after the previous one, the head must seek to the start of the next track. Solid state "disks" have true random access, where accessing blocks in random order costs no more than sequential accesses. So while solid state will fragment, it doesn't matter for performance or reliability.
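A toy model makes the point: charge a seek for every non-contiguous jump on the disk, and charge nothing extra on flash (the timing numbers are illustrative, not measurements of any real device).

```python
def read_time_ms(blocks, seek_ms, per_block_ms):
    # Sum per-block transfer time; add a seek whenever the next
    # block isn't contiguous with the previous one.
    total = 0.0
    prev = None
    for b in blocks:
        if prev is not None and b != prev + 1:
            total += seek_ms
        total += per_block_ms
        prev = b
    return total

fragmented = [0, 7, 1, 9, 2, 5]        # file scattered across the medium
defragged = sorted(fragmented)

disk_frag = read_time_ms(fragmented, seek_ms=8.0, per_block_ms=0.1)
disk_defrag = read_time_ms(defragged, seek_ms=8.0, per_block_ms=0.1)
flash_frag = read_time_ms(fragmented, seek_ms=0.0, per_block_ms=0.1)
flash_defrag = read_time_ms(defragged, seek_ms=0.0, per_block_ms=0.1)
```

The disk pays ~8 ms for every jump, so defragmenting helps it a lot; the flash numbers come out identical regardless of layout.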
Why the product rocks.... (Score:3, Informative)
If you're looking to run a BLAST/Darwin query on 50K files to find the closest match to an unknown DNA sequence, either you need to recode a bunch of software to use SQL, or you snag a piece of hardware that gives database-level performance. 80 gigs at $2,400 is a bloody bargain.
The device is also 10x faster in bandwidth than a normal drive, which is comparable to a SAN but not such a power hog.
So really, the tradeoff rocks for small files. It doesn't have controller-interface latency, so it's really quick. It should mask a good chunk of hard-drive-based lag.
Storm
FAQ (Score:2, Informative)
Re:write limit? (Score:3, Informative)
Re:Wow (Score:2, Informative)
Re:Oblig. (Score:3, Informative)
AFAIK, for flash, sequential access is still faster than random access, even for NOR flash.
It's quite amazing how slow some flash is (especially NAND flash). Some parts take 1 ms for a random access, and others as much as 7 ms.
For comparison, a 15K RPM drive has a random access time of about 5-6 ms. So with RAID 10 you might get similar or faster speeds than a flash drive that costs 10x the total price of a RAID 10 system with similar capacity.