Data Storage Technology

640GB PCIe Solid-State Drive Demonstrated

Lisandro writes "TG Daily reports that the company Fusion io has presented a massively fast, massively large solid-state flash hard drive on a PCIe card at the Demofall 07 conference in San Diego. Fusion is promising sustained data rates of 800Mb/sec for reading and 600Mb/sec for writing. The company plans to start releasing the cards at 80 GB and will scale to 320 and 640 GB. '[Fusion io's CTO David Flynn] set the benchmark for the worst case scenario by using small 4K blocks and then streaming eight simultaneous 1 GB reads and writes. In that test, the ioDrive clocked in at 100,000 operations per second. "That would have just thrashed a regular hard drive," said Flynn. The company plans on releasing the first cards in December 2007 and will follow up with higher capacity versions later.'"
  • Re:Uhh, Price? (Score:5, Informative)

    by morgan_greywolf ( 835522 ) on Friday September 28, 2007 @02:14PM (#20785489) Homepage Journal
    FTFA:

    So how much will these cards cost? Flynn told us that the company is aiming to beat $30 dollars a GB, something that should seem very cheap to large corporations, adding "You can drop ship or Fedex this card and be up and running in a few minutes... you can't do that with a storage area network."
    So, let's say they get to $29 a GB, a reasonable price for NAND flash-based memory devices. 640 GB × ~$30/GB = $19,200. Sorry, but that doesn't beat an inexpensive SAN on price. I recently priced out a 12 TB iSCSI SAN for a little more than that, and even a 1-2 TB fibre SAN from IBM should be around the same price.
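
    A back-of-the-envelope comparison of those numbers (the SAN price below is an illustrative assumption based on the quote above, not a vendor figure):

        # Rough cost-per-GB comparison; the SAN price is an assumption.
        iodrive_price_per_gb = 30.0        # Fusion io's stated target
        iodrive_capacity_gb = 640
        san_price_usd = 22_000.0           # hypothetical 12 TB iSCSI SAN quote
        san_capacity_gb = 12 * 1024

        iodrive_total = iodrive_price_per_gb * iodrive_capacity_gb
        print(f"ioDrive 640 GB: ${iodrive_total:,.0f} at ${iodrive_price_per_gb}/GB")
        print(f"12 TB SAN:      ${san_price_usd:,.0f} at ${san_price_usd / san_capacity_gb:.2f}/GB")
        # -> $19,200 vs. roughly $1.79/GB for the SAN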

  • by LiquidCoooled ( 634315 ) on Friday September 28, 2007 @02:15PM (#20785523) Homepage Journal
    Hitachi are saying that they have solved the overwrite problem (or at least mitigated it by a factor of 100).

    They appear to want to use normal DRAM for the drive's working storage, then write it permanently to the NAND flash at shutdown or when the memory fills.
    I would assume this involves keeping a small battery charged and dumping the data later on.

    http://www.theinquirer.net/gb/inquirer/news/2007/09/26/hitachi-reckons-solid-state [theinquirer.net]
  • $30 bucks a gig (Score:3, Informative)

    by niola ( 74324 ) <jon@niola.net> on Friday September 28, 2007 @02:19PM (#20785581) Homepage
    According to the article, they are looking at pricing of about $30 a gig. That is pretty pricey.

    That means their low-end 80 GB drive will run around $2,400+ US, depending on tax, shipping, and retail markup.
  • Re:Lifespan? (Score:1, Informative)

    by Anonymous Coward on Friday September 28, 2007 @02:27PM (#20785701)
    Your concern is answered fifty times in every flash HD article here.
  • Re:Gb or GB??? (Score:1, Informative)

    by Anonymous Coward on Friday September 28, 2007 @02:28PM (#20785729)
    RTF Article

    > Flynn said the card has 160 parallel pipelines that can read data at 800 megabytes per second and write at 600 MB/sec.

    That's roughly 10 times faster than 10K SATA
    and about 7 times faster than 15K SCSI.

    Most importantly: seek time ≈ 0.

    > Flynn set the benchmark for the worst case scenario by using small 4K blocks and then streaming eight simultaneous 1 GB reads and writes. In that test, the ioDrive clocked in at 100,000 operations per second. "That would have just thrashed a regular hard drive," said Flynn.
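
    A quick sanity check of those ratios (the sustained rates for the 10K and 15K drives are ballpark assumptions, not measurements):

        # Ballpark sustained-read comparison; disk figures are assumptions.
        iodrive_mb_s = 800     # per the article
        sata_10k_mb_s = 80     # assumed typical 10K SATA sustained read
        scsi_15k_mb_s = 115    # assumed typical 15K SCSI sustained read
        print(f"vs 10K SATA: {iodrive_mb_s / sata_10k_mb_s:.0f}x")  # ~10x
        print(f"vs 15K SCSI: {iodrive_mb_s / scsi_15k_mb_s:.0f}x")  # ~7x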

  • by Acecoolco ( 1012419 ) on Friday September 28, 2007 @02:48PM (#20786031) Homepage Journal
    Damn expensive: $19,200 for 640 GB... I want it, but I can't afford it. Josh
  • Re:Oblig. (Score:3, Informative)

    by jamie ( 78724 ) * Works for Slashdot <jamie@slashdot.org> on Friday September 28, 2007 @02:53PM (#20786101) Journal

    Once you get above $500 desktop computers, it doesn't much matter. A properly tuned system will only use swap, if at all, to drop a few MB from RAM to disk because it's just never accessed. A server that swaps during use is just not set up properly.

  • by mathx ( 988938 ) on Friday September 28, 2007 @02:55PM (#20786141)
    Considering that large quad-socket boards have room for 8 DIMMs per CPU, we're looking at 128 GB+ per machine, and soon to increase. 640 GB isn't that much bigger. Since RAM is on the memory bus and not a PCI-* bus, it's going to be faster than these drives, though it's more expensive right now. By the time these cards are at $30/GB, RAM will be a lot less than that. The automatic persistence (without having to schedule flushes to disk, as RAM would need) is the only advantage, so you're putting a high price on that one ability: a database with constant accesses that must reach permanent, loss-proof storage before a power outage.

    If you really have such an environment, I would think fixing your HA setup would be the first priority: duplicating your servers so they can take over each other's jobs, and providing redundant power. I don't even want to know how many transactions/second a properly memory-resident database can do (once you get rid of the filesystem and driver layers, which this thing would require); I'm sure many, many more. While disk won't take as many transactions/sec, you can always flush dirty RAM to disk in huge chunks (1 MB blocks or more) to avoid needing 100K IO ops.

    I just don't see any advantage here that properly tuning your server and process environment couldn't achieve with commodity, unspecialized, cheap, easy-to-replace parts and a few brains.
  • Re:Uhh, Price? (Score:5, Informative)

    by walt-sjc ( 145127 ) on Friday September 28, 2007 @02:59PM (#20786187)
    Um, we were using large RAM disks (the kind that hooked up to SCSI and had a built-in UPS and a disk to dump RAM contents to) many years ago (eight now?) to speed up databases. They were limited by the SCSI bus, but access time and latency were near zero (which was awesome).
    Of course, 'large' back then meant 4 GB, and the average hard disk was 9 GB. This is evolutionary, not revolutionary.

  • Re:Oblig. (Score:1, Informative)

    by Beardo the Bearded ( 321478 ) on Friday September 28, 2007 @03:03PM (#20786235)
    Thank you. I came in here to say that. Flash can only be written a limited number of times. Reading is basically unlimited, but writing... ugh... wears it out a little. (Not strictly true, but close enough for this crowd.)

    It's not a magic number, as some people assume for some reason. What they mean is this:

    Statistically speaking, some of your bits are going to read back incorrectly after a mean number of writes on the order of 1,000,000. At that point you will have actual physical locations that won't read back what you wrote. You could probably compensate for that the way HDDs do for bad sectors, but it's very hard to tell when your memory has gone bad.

    Maybe the company has found a way to compensate.

  • Re:Oblig. (Score:5, Informative)

    by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Friday September 28, 2007 @03:25PM (#20786553)
    You can tell if flash is bad, if worst comes to worst, by reading after writing. Or by reading after erasing and checking that every bit actually returned to the erased (all-1s) state.

    'Worn out' flash doesn't spontaneously change state. Bits just get stuck and don't erase correctly.

    I don't know how flash drives actually handle this, but it isn't magic or impossible to fix.

    Also, the lifetime of modern flash is long enough that it is hardly an issue any more, even for normal desktop use. Maybe you don't want to use it for swap *IF* you swap a lot, but given the cost is in the same ballpark as RAM, you could just buy more RAM.
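
    A minimal sketch of that read-after-erase check (the device API here is hypothetical; erased NAND reads back all 1s):

        # Hypothetical block-device API, for illustration only.
        BLOCK_SIZE = 128 * 1024

        def block_is_healthy(dev, block_no):
            dev.erase(block_no)                    # hypothetical erase call
            data = dev.read(block_no, BLOCK_SIZE)  # hypothetical read call
            # Erased flash should be all 0xFF; any other byte means bits
            # are stuck and the block should be retired.
            return all(b == 0xFF for b in data)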
  • by shdowhawk ( 940841 ) on Friday September 28, 2007 @03:30PM (#20786633)
    Having talked to people at Demo, what it pretty much came down to is this: is this a product we should be excited about? Definitely. Is it something that will do well right away? Not at all. The price has to drop before this becomes a really valid and useful tool for the general public or the average company. But there are a lot of companies out there willing to pay top dollar to get these. Hopefully those big companies buy them up and fund this project as quickly as possible. Seven of these side by side at 320/640 GB apiece is a scary, powerful server.

    That being said, a few of the guys there said they pretty much expect these (at the beginning) to sell best to companies looking to build really, really fast database servers, NOT as SCSI SAN replacements (it's silly to spend $100,000 on something you could get for $10,000, hard-drive-space-wise). Eventually, as the price drops... I know a handful of people who would easily pay $1,000 to put one of these in a gaming rig even if it were only 100 GB. But that is already a third of the current price (assuming it's around $30 a gig).

    Another thing that came up in the conversations: since these are tiny, think about the cost per server rack, and the cost of the electricity to run it. Taking those into account, these are actually less expensive than most people would think. A massive rack of hard drives can cost a lot of money at a co-location facility, and a lot of electricity to run. But then again, we're talking about savings on servers, not general home use.

    When this gets to about a third of its current price, that's when you will see these things become truly mainstream, both for the average company and for home users (be it rich ones who need the latest and greatest).

    Fusion-io [fusionio.com] -- link to their site.

  • Re:Oblig. (Score:4, Informative)

    by Lisandro ( 799651 ) on Friday September 28, 2007 @03:55PM (#20787023)
    > Your childish "uh oh" introduction, your completely un-cited "a lot of issues" comment, and your vague "I recall" interruption reveal the fact that you're spouting off some crap on a subject you have no direct, real-world experience with. Find another subject with which to stroke your ego kid, because you're looking like a pompous dumbass on this one.

    The spec sheet for a 256Mb NAND flash memory chip [alldatasheet.com] by Samsung (by far the biggest NAND flash manufacturer today) quotes 100k write/erase cycles, and this is an IC commonly used in USB pen drives. The figure tends to get worse as memory sizes increase, since the memory "element" (the floating gate) becomes smaller. For example, modern 16Mb chips, which are the ones I have experience with, usually quote 1 million W/E cycles of endurance.

    But, it felt good stroking my ego a bit more :) Thanks!
  • Re:Oblig. (Score:3, Informative)

    by Anonymous Coward on Friday September 28, 2007 @04:20PM (#20787373)
    You can also tell if FLASH is verging toward failure on any particular block by timing the erase cycle. As the part ages out, the erase cycle takes longer. You can also use that information to enhance wear leveling.
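
    Sketched in code, that erase-timing heuristic might look like this (the API, baseline, and threshold are all assumptions):

        import time

        def erase_is_healthy(dev, block_no, baseline_s, slowdown=3.0):
            start = time.monotonic()
            dev.erase(block_no)          # hypothetical erase call
            elapsed = time.monotonic() - start
            # Worn blocks take progressively longer to erase; retire a
            # block early once its erase time drifts well past baseline,
            # and feed the timing back into the wear-leveling allocator.
            return elapsed < baseline_s * slowdown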
  • Re:Oblig. (Score:3, Informative)

    by networkBoy ( 774728 ) on Friday September 28, 2007 @05:01PM (#20787953) Journal
    No, you would boot directly from it; the GP doesn't understand BIOS very well. This could hook into the boot process much the same way any RAID controller does.
    -nB
  • Re:Oblig. (Score:3, Informative)

    by AK Marc ( 707885 ) on Friday September 28, 2007 @05:07PM (#20788035)
    Not in Windows. It has a swap file by default, and if you disable swap, it still swaps. With swap enabled, if you leave something running in the background and come back hours later, it's painfully slow, because everything except the one running task has been swapped to disk. "Tuning" may be possible, but it's well outside the realm of the average user, and even after completely disabling swap in every setting, you still see swapping happen.
  • Re:write limit? (Score:5, Informative)

    by TinyManCan ( 580322 ) on Friday September 28, 2007 @05:15PM (#20788115) Homepage
    Somewhere between 100k and 1 million times.

    Considering that this drive is 640 GB, that means you would need to write somewhere in the region of 61 PETABYTES of information.

    You'd have to write to the drive at a perfect 800 MB/s for about 941 days to hit that mark.

    And if it can handle 1M writes per cell, it could last roughly 30 years at the full 800 MB/s write speed.

    At the end of the day, semiconductors this large and high quality are certainly better than tiny bits of rust on rapidly spinning platters.
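
    Reproducing that arithmetic (assuming binary units and the 100k cycle figure; the results land close to the parent's):

        capacity = 640 * 2**30           # 640 GB in bytes
        cycles = 100_000                 # assumed per-cell endurance
        speed = 800 * 2**20              # 800 MB/s in bytes per second

        total_bytes = capacity * cycles
        print(f"{total_bytes / 2**50:.0f} PB")             # ~61 PB
        print(f"{total_bytes / speed / 86_400:.0f} days")  # ~948 days
        # With 1M cycles per cell, the same math gives roughly 26 years.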

  • Re:Oblig. (Score:2, Informative)

    by nuzak ( 959558 ) on Friday September 28, 2007 @05:29PM (#20788265) Journal
    The wear leveling that pretty much all flash chips do now puts them on par with mechanical HDDs in terms of lifetime. Furthermore, dead cells are blocked off, so the storage space simply shrinks a little after a few years; it doesn't fail catastrophically.

    The limited write cycles of flash drives are pretty much a non-issue. You probably shouldn't put a swap partition on one, though.
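
    A toy illustration of the remapping idea behind wear leveling (entirely hypothetical, and far simpler than a real flash translation layer):

        class ToyWearLeveler:
            """Directs each logical write to the least-worn physical block."""

            def __init__(self, n_physical):
                self.erase_counts = [0] * n_physical
                self.mapping = {}                  # logical -> physical

            def write(self, logical):
                in_use = set(self.mapping.values()) - {self.mapping.get(logical)}
                free = [i for i in range(len(self.erase_counts)) if i not in in_use]
                phys = min(free, key=lambda i: self.erase_counts[i])
                self.erase_counts[phys] += 1       # one more erase/program cycle
                self.mapping[logical] = phys
                return phys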
  • Re:Oblig. (Score:4, Informative)

    by Eccles ( 932 ) on Friday September 28, 2007 @06:03PM (#20788639) Journal
    > Oh yeah, how does a solid state drive handle fragmentation? I have heard that they don't fragment, but not from reliable sources, and I just don't see how that is possible unless there is some built-in mechanism to close gaps on the fly or something.

    Hard disks have a fragmentation issue because, on a spinning disk, sequential accesses are much faster than random ones: each time the next sector isn't right after the previous one, the head must seek to wherever the next fragment starts. Solid-state "disks" have true random access, where reading blocks in random order costs no more than reading them sequentially. So while solid-state storage does fragment, it doesn't matter for performance or reliability.
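
    A crude cost model of the difference (the seek and transfer times are illustrative assumptions):

        def hdd_read_ms(blocks, fragments, seek_ms=8.0, xfer_ms=0.1):
            # Every fragment costs a seek; every block costs transfer time.
            return fragments * seek_ms + blocks * xfer_ms

        def ssd_read_ms(blocks, xfer_ms=0.1):
            # No head to move, so fragment count is irrelevant.
            return blocks * xfer_ms

        print(hdd_read_ms(1000, 1))      # contiguous file:  108 ms
        print(hdd_read_ms(1000, 50))     # fragmented file:  500 ms
        print(ssd_read_ms(1000))         # same either way:  100 ms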
  • by tempest69 ( 572798 ) on Friday September 28, 2007 @06:42PM (#20789085) Journal
    This device is all about the IOs/second; a 12 TB SAN can't come close.

    If you're looking to run a BLAST/Darwin query over 50k files to find the closest match to an unknown DNA sequence, either you recode a bunch of software to use SQL, or you snag a piece of hardware that gives database-level performance. 80 gigs at $2,400.00 is a bloody bargain.

    The device also has 10x the bandwidth of a normal drive, which is comparable to a SAN, but it's not such a power hog.

    So really, the tradeoff rocks for small files. Without a disk-controller interface adding latency it's really quick, and it should mask a good chunk of hard-drive-induced lag.

    Storm

  • FAQ (Score:2, Informative)

    by ZapmanFBM ( 1136713 ) on Friday September 28, 2007 @07:00PM (#20789251)
    For a lot of these questions, you can look at the FAQ here [fusionio.com]
  • Re:write limit? (Score:3, Informative)

    by TinyManCan ( 580322 ) on Friday September 28, 2007 @08:54PM (#20790191) Homepage
    Certainly they are using wear leveling. They are probably (information is thin) also over-allocating storage, so that 10-15% of the cells can fail before impacting the advertised free space on the device. In practice you will see your full 640 GB of storage, and you could write 61 PB of data to the very first sector over and over without ever noticing an issue. Before the last 'extra' block is used up, you'll get an alert and replace the device.
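
    Roughly how that over-allocation plays out (the 15% spare figure is the parent's guess; the code is a hypothetical sketch):

        ADVERTISED_BLOCKS = 1_000_000
        SPARE_BLOCKS = int(ADVERTISED_BLOCKS * 0.15)   # assumed 15% extra
        LOW_WATER = SPARE_BLOCKS // 20                 # alert threshold

        spare_pool = SPARE_BLOCKS

        def retire_block(bad_block):
            # Remap the failed block onto a spare; advertised capacity
            # stays constant until the spare pool runs low.
            global spare_pool
            spare_pool -= 1
            if spare_pool <= LOW_WATER:
                print("ALERT: spare blocks nearly exhausted, replace device")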
  • Re:Wow (Score:2, Informative)

    by eatont9999 ( 1036392 ) on Saturday September 29, 2007 @12:21AM (#20791233)
    I hope you guys realize that a maximum data transfer rate of 800 Mb/s is only 100 MB/s. We don't know whether that's sustained or peak, so we aren't even guaranteed that much. Maybe it doesn't even matter. A few years ago they came out with this thing called SCSI U320, an interface that supports drive speeds of up to 320 MB/s per channel. With a decent controller and a RAID 5, you can get very close to 320 MB/s sustained. Don't even get me started on SAS arrays! Sure, it may be a little too enterprise-level for some people, but before spending money on unproven hardware, I would choose a proven, robust interface. Now, if the author made a mistake and these things actually run at 800 MB/s, then we have a new discussion involving 4-10 Gb/s Fibre SAN arrays, but I'll leave that for another time. Sorry to shit in anyone's hat, but those are the facts, man.
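
    The unit arithmetic behind that first sentence:

        megabits_per_s = 800
        megabytes_per_s = megabits_per_s / 8   # 8 bits per byte
        print(megabytes_per_s)                 # 100.0 -- well under U320's 320 MB/s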
  • Re:Oblig. (Score:3, Informative)

    by TheLink ( 130905 ) on Saturday September 29, 2007 @09:39AM (#20792909) Journal
    "Solid state "disks" have true random access, where accessing blocks in random order costs no more than sequential accesses"

    AFAIK, for flash, sequential access is still faster than random access, even for NOR flash.

    It's quite amazing how slow some flash is (especially NAND flash). Some parts take 1 ms for a random access, and others as much as 7 ms.

    For comparison, a 15K rpm drive has a random access time of about 5-6 ms. So a RAID10 array might get you similar or faster speeds than a flash drive costing 10x the total price of a RAID10 system of similar capacity.
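
    Putting those access times side by side (the flash latencies are the parent's examples; per-device IOPS is just the reciprocal):

        # ops/s = 1 / access time
        for name, ms in [("slow flash", 7.0),    # worst NAND cited above
                         ("fast flash", 1.0),    # better NAND cited above
                         ("15K disk", 5.5)]:     # assumed 15K rpm random access
            print(f"{name}: ~{1000 / ms:.0f} random ops/s per device")
        # A RAID10 of several 15K disks can therefore rival slow flash on IOPS.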
