Hynix 48-GB Flash MCP
Hal_Porter writes to let us know that the third-largest NAND chip maker, Hynix, has announced that it has stacked 24 flash chips in a 1.4-mm-thick multi-chip package. It's not entirely clear from the article whether the resulting 48-GB device is a proof of concept or a product. The article extrapolates to 384 GB of storage in a single package, sometime down the road. Hal_Porter adds: "It's not clear whether it's possible to write to the chips in parallel; if so, the device should be pretty damn fast. The usual objection to NAND flash as a hard-drive replacement is lifetime: NAND sectors can only be written about 100,000 times before they wear out, but wear leveling can spread writes evenly across at least each chip. I worked out that the lifetime should be much longer than a typical magnetic hard disk's. There's no information on cost yet, and frankly it sounds like an expensive proof of concept, but it shows the sort of device that will take over from small hard disks in the next few years."
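For a sense of the numbers behind that lifetime claim, here is a back-of-the-envelope sketch in Python; the endurance, capacity, and daily write volume are illustrative assumptions, not Hynix figures.

# Back-of-the-envelope flash lifetime estimate under ideal wear leveling.
# All figures below are illustrative assumptions, not Hynix specifications.

capacity_gb = 48              # capacity of the stacked MCP
erase_cycles = 100_000        # assumed NAND endurance per sector
writes_per_day_gb = 20        # assumed daily write volume (fairly heavy use)

# With perfect wear leveling, every byte can be rewritten erase_cycles times,
# so the total volume that can be written over the device's life is:
total_writable_gb = capacity_gb * erase_cycles

lifetime_days = total_writable_gb / writes_per_day_gb
print(f"Total writable volume: {total_writable_gb:,} GB")
print(f"Estimated lifetime: {lifetime_days / 365:.0f} years")
# Roughly 650 years at these assumptions; even with a 10x write-amplification
# penalty the figure stays far beyond a typical hard disk's service life.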
Database servers (Score:5, Interesting)
swap space / tmpfs / caching (Score:2, Interesting)
Flash lifespan in perspective (Score:5, Interesting)
Your mileage may vary, but I'd bet that 99% of users would never keep their computer (especially a laptop, which is the more likely application for flash-based drives) long enough to see the disk fail from wear.
Re:Why only 100,000 times (Score:3, Interesting)
I've never seen a study conclude that the write limitation on NAND flash-based devices has a significant impact. Some studies have cited worst-case scenarios of 50 years of continuous operation. It is far more likely that the device will physically fail for other reasons than from NAND erase wear. In any case, I've never seen anyone claim that a solid-state disk is going to fail before a mechanical magnetic disk simply due to NAND erase wear. Indeed, the articles that actually go into it make pretty strong claims that the endurance of flash media is far above that of current mechanical-electromagnetic designs: three or four times the lifespan.
Re:swap space / tmpfs / caching (Score:2, Interesting)
You can get them pretty easily for $20 a pop.
Amazingly enough, Amazon has 2GB SD cards cheaper than Newegg: $15 a pop (no free shipping, though!)
That is $30 for 4GB, or $60 for 8GB.
Not quite enough to get Vista up and running, but it should do fine for a standalone Linux box.
I wonder what the throughput would be if a proper hardware controller was put in place and you had 50 of those things in parallel.
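Just to put rough numbers on that thought, here is a quick Python estimate of the aggregate bandwidth from striping across many cards; the per-card speeds and controller efficiency are assumptions, not measurements of any real product.

# Rough aggregate-throughput estimate for striping across many SD cards.
# Per-card speeds and controller efficiency are assumed values.

cards = 50
per_card_write_mb_s = 5       # assumed sustained write speed of a cheap 2GB SD card
per_card_read_mb_s = 10       # assumed sustained read speed
controller_efficiency = 0.8   # assumed loss to bus/controller overhead

agg_write = cards * per_card_write_mb_s * controller_efficiency
agg_read = cards * per_card_read_mb_s * controller_efficiency
print(f"Aggregate write: {agg_write:.0f} MB/s, aggregate read: {agg_read:.0f} MB/s")
# About 200 MB/s write and 400 MB/s read at these assumptions -- well above
# a single hard disk, provided the controller can really drive all 50 cards
# in parallel.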
Re:swap space / tmpfs / caching (Score:2, Interesting)
There will be several million shortly...
# Mass storage: 1024 MiB SLC NAND flash, high-speed flash controller;
# Drives: No rotating media.
From the OLPC Spec [laptop.org]
Re:Database servers (Score:3, Interesting)
promises data retention of 10 years. I would guess that it will function longer than that, but only if you refresh the data.
What about RAID? (Score:5, Interesting)
I'm talking about RAID + flash cards.
Flash cards are everywhere and, although their cost per GB is rather high, a 1GB card is easily affordable (a 1GB microSD card goes for less than 10 euros) and prices are dropping constantly. If someone decided to build a RAID card reader, we could easily get a foot in the door. For about 60 euros it would be possible to get anything from a slowish but reliable 6GB flash drive to a speedy and snappy 1GB flash drive.
So why exactly hasn't anyone thought of this? We already have IDE CF card readers, some models supporting two drives, that can be had for about 6 euros. Why not a RAID flash card reader?
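For what it's worth, here is a minimal Python sketch of the capacity/redundancy trade-off such a RAID card reader could offer with six 1GB cards; the parent doesn't specify a RAID level, so this just contrasts plain striping with a full mirror.

# Capacity/redundancy trade-off for a hypothetical RAID built from six
# 1 GB flash cards (~10 euros each). Figures follow standard RAID behavior.

cards = 6
card_gb = 1

# RAID 0 (striping): full capacity, no redundancy, I/O spread over all cards.
raid0_capacity = cards * card_gb          # 6 GB
raid0_tolerated_failures = 0

# RAID 1 (n-way mirror): one card's capacity, survives all but one card failing.
raid1_capacity = card_gb                  # 1 GB
raid1_tolerated_failures = cards - 1

print(f"RAID 0: {raid0_capacity} GB, tolerates {raid0_tolerated_failures} failed cards")
print(f"RAID 1: {raid1_capacity} GB, tolerates {raid1_tolerated_failures} failed cards")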
Re:swap space / tmpfs / cacheing (Score:3, Interesting)
http://www.geekstuff4u.com/product_info.php?manuf
Not it, but close. Also way too expensive.
Re:Flash lifespan in perspective (Score:2, Interesting)
To prevent data loss, these drives will require a good CRC algorithm or a RAID configuration that can repair damaged files when they are moved to new sectors. It might also be possible to convert random access into sequential access by moving a file to the end of a circular stream buffer every time it is written to, but this would lead to fragmentation problems that might be impossible to solve.
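To illustrate the circular-buffer idea, here is a toy Python sketch in which every rewrite of a file lands at the current head of a ring of sectors, so writes are spread evenly instead of hammering one spot; the sector size, sector count, and file table are made-up, and a real design would also need garbage collection.

# Toy circular-log write scheme: each rewrite goes to the next sector in a
# ring, spreading wear across the medium. Sector count/size are arbitrary.

SECTORS = 16
SECTOR_SIZE = 512

class CircularLogDisk:
    def __init__(self):
        self.sectors = [b""] * SECTORS   # simulated flash sectors
        self.head = 0                    # next sector to write
        self.file_table = {}             # filename -> current sector index

    def write(self, name, data):
        assert len(data) <= SECTOR_SIZE
        # The old copy is simply abandoned; its sector is reused when the head
        # wraps around. A real design would have to skip sectors that still
        # hold live data (garbage collection), which this toy version ignores.
        self.sectors[self.head] = data
        self.file_table[name] = self.head
        self.head = (self.head + 1) % SECTORS

    def read(self, name):
        return self.sectors[self.file_table[name]]

disk = CircularLogDisk()
for i in range(40):                      # 40 rewrites spread over 16 sectors
    disk.write("log.txt", f"revision {i}".encode())
print(disk.read("log.txt"))              # b'revision 39'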