512GB Solid State Disks on the Way 186
Viper95 writes "Samsung has announced that it has developed the world's first 64Gb (8GB) NAND flash memory chip using a 30nm production process, which opens the door for companies to produce memory cards with up to 128GB capacity."
Cost? (Score:2, Interesting)
Re:Four times the memory in three days (Score:2, Interesting)
oh yeah and I agree with the other posts. Call me when it's on its way to my budget, not just store shelves lol.
I bet the HD makers are going to be pissed! (Score:5, Interesting)
What about IOPS? (Score:3, Interesting)
Re:Cost? (Score:3, Interesting)
Re:Four times the memory in three days (Score:3, Interesting)
Re:What about IOPS? (Score:2, Interesting)
Individual flash chips have terrible write performance, mostly due to the slow block erase time. However, you always use multiple chips in high-capacity storage devices (anything larger than an MP3 player), and you can start doing fancy tricks with interleaving, or just plain have way more buffer memory to hide the erase time. If you really want to crank out even higher performance, you stick multiple NAND interfaces on the controller chip and drive them all in parallel.
If you stack about 4-8 chips in a device, you start getting stream throughput comparable to a 15k drive. Also bear in mind that the chips we're talking about here are already stacked 4-8 internally anyway! The limiting factor will probably end up being the NAND flash bus (or number of busses) connecting the controller to the flash chips.
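A back-of-the-envelope model of that interleaving, in Python. All the timing numbers here are illustrative placeholders, not from any datasheet: while one chip is busy programming a page, the controller keeps the others fed, until the shared NAND bus itself becomes the bottleneck.

```python
# Illustrative N-way NAND interleaving model (made-up numbers, not
# from a datasheet): while one chip is busy programming, the
# controller streams pages to the others.

PAGE_SIZE = 2048          # bytes per page
PROGRAM_TIME = 200e-6     # seconds to program one page into a chip
BUS_TIME = 50e-6          # seconds to clock one page over the NAND bus

def write_throughput(chips):
    # Each chip absorbs one page per PROGRAM_TIME once kept busy;
    # past a certain point the shared bus caps the total.
    per_chip = PAGE_SIZE / PROGRAM_TIME
    bus_limit = PAGE_SIZE / BUS_TIME
    return min(chips * per_chip, bus_limit)

for n in (1, 2, 4, 8):
    print(f"{n} chip(s): {write_throughput(n) / 1e6:.1f} MB/s")
```

With these placeholder timings, throughput scales linearly up to 4 chips and then flattens out at the bus limit, which matches the point above: past a handful of chips, the NAND bus (or the number of busses) is what you have to widen.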
Flash Already Close to Discs (Score:1, Interesting)
Flash is as little as $64 for 8GB (USB), i.e. $8/GB. Removing the redundant USB connectors and packaging and putting it in a single enclosure the size of a notebook drive would give an 80GB flash drive for somewhere closer to $500 than to $800.
FWIW, a 4GB microdrive is $30, or $7.50/GB.
These numbers show that a flash drive competing directly with a disc drive is already right around the corner. By the time 2010 comes around, what will mainly be different is the upper capacity, around 1TB, with flash probably cheaper than discs.
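The per-gigabyte arithmetic behind those comparisons, as a quick Python check (the prices are the street figures quoted above and will date quickly):

```python
# Price-per-gigabyte comparison using the figures quoted above.

def dollars_per_gb(price_dollars, capacity_gb):
    return price_dollars / capacity_gb

flash_usb = dollars_per_gb(64, 8)    # $64 for an 8GB USB stick
microdrive = dollars_per_gb(30, 4)   # $30 for a 4GB microdrive

print(f"Flash (USB stick): ${flash_usb:.2f}/GB")
print(f"4GB microdrive:    ${microdrive:.2f}/GB")
```

That is $8.00/GB versus $7.50/GB: already within pennies of each other, which is the whole point.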
Re:Debunking SSD life cycle issues (Score:1, Interesting)
Re:I bet the HD makers are going to be pissed! (Score:5, Interesting)
Bandwidth is always measured as 1 MB/s = 10^6 bytes/s, or 1 Mb/s = 10^6 bits/s. Should "1 MB" then take 1.05 seconds to transfer over a 1 MB/s data link? This holds for all forms of Ethernet, SCSI, ATA, PCI, and any other protocol I have looked up. If 1 MB/s does not equal 1 MB per 1 s, someone should be shot; that is just not OK.
mega = 10^6 in all other fields. Including other computer terms -- 1 MHz, 1 MFLOP, 1 megapixel, etc.
Computer RAM is the only thing that has consistently been labeled using binary approximations to the SI units. And for as long as I can remember (computing magazines in the 80s), people have acknowledged that 1 MB = 2^20 is an *approximation* and that mega = 10^6.
Mega = 10^6 is right; mega = 2^20 is wrong. End of story. It happens to be technically convenient to manufacture and use RAM in powers of 2. No such constraint applies to hard drives, so there is no reason to use the base-2 prefixes for them. Stupid OSs should be changed to use the SI prefixes when reporting file sizes. RAM should be labeled using the "base-2" prefixes, but those are admittedly somewhat annoying due to lack of familiarity, and since nobody uses base-10 RAM, it isn't a big deal.
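For what it's worth, here is how fast the binary "approximation" drifts from the SI value as the prefixes get bigger, which is exactly why this mattered little in the kilobyte era and a lot for gigabyte drives (plain Python, nothing assumed beyond the definitions):

```python
# Gap between the binary approximations (2^10k) and the SI
# prefixes (10^3k): the discrepancy grows with every step.

prefixes = ["kilo", "mega", "giga", "tera"]
for i, name in enumerate(prefixes, start=1):
    si = 10 ** (3 * i)
    binary = 2 ** (10 * i)
    gap = (binary - si) / si * 100
    print(f"{name}: 2^{10 * i} is {gap:.1f}% larger than 10^{3 * i}")
```

The gap runs 2.4% at kilo, 4.9% at mega, 7.4% at giga, and 10.0% at tera, which is also where the "cheap 5-10% savings" figure in the other replies comes from.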
I already boot from a 4GB memory card. (Score:5, Interesting)
I already boot and run my main Internet-facing server (Ubuntu) from a 4GB SSD memory card to minimise power consumption, and I have more than 50% of the space free, i.e. it wasn't that hard to do.
http://www.earth.org.uk/low-power-laptop.html [earth.org.uk]
I'm not being that clever about it: I'm using ext3 rather than any wear-levelling SSD-friendly fs, and simply minimising spurious write activity, e.g. by turning down log verbosity. And laptop-mode helps a lot, of course.
Now that machine also has a 160GB HDD for infrequently-accessed bulk data (so the HDD is spun down and in a power-conserving sleep mode most of the time), and it would be good to get that data onto SSD too. But a blend, as in many memory/storage systems, gives a good chunk of the maximum performance and power savings for reasonable cost.
Rgds
Damon
Re:I bet the HD makers are going to be pissed! (Score:4, Interesting)
Yes, the HDD manufacturers did it because it was a cheap 5-10% saving, but there were plenty of excuses, and not all of them were bad. It was confusing every time computer science bumped into one of the other sciences, telecommunications in particular, which inevitably used the SI prefixes. However, instead of actually fixing the problem, it only grew into an even greater mess, invalidating pretty much every rule of thumb because the OS would invariably report something else. That's pretty much proof they didn't want to fix anything, just grab some extra profit.
After that, it was a big mess with next to no interest in solving it. That's when the people at the IEC (not SI, and certainly not pushed by the HDD manufacturers) finally said that these units are FUBAR, and that the only long-term solution is to abandon the SI prefixes and make new, admittedly ugly ones, the names in particular. At that point we're talking 50 years of computer science usage against 200 years of the other sciences, with retards messing up the boundary. I think they're ugly as hell, but they're also the only way forward from here.
Re:I bet the HD makers are going to be pissed! (Score:5, Interesting)
IO has always been a mixture and compromise. Punched cards could hold 12 * 72 bits (7094 row binary) or 12 * 80 bits (column binary, but don't try to read it with the main card reader). Try to fit THAT into your "powers of 10" scenario!
For the current set of IO devices, capacity measurement was defined by marketing. I saw arguments about it in the trade journals when it was being fought out over hard disks. AFAIK, companies decided independently the choice that was, to them, most advantageous. It was powers of 10. This was not appreciated by any single customer that I was aware of. Some despised it, some didn't care, nobody was in favor. (Yeah, it was a small sample, but it's one that I was aware of. Most didn't care, and many of those weren't interested in understanding.)
But block allocations of RAM are done in powers of two, and these are frequently mapped directly to IO devices, so a mismatch creates problems. Disk files were (possibly) created as an answer to this problem. (7094 drum storage didn't have files. Things were addressed by drum address. If a piece went bad, you had to patch your program to avoid it. UGH! Tape was for persistent data; drum storage was transient, just slightly more persistent than RAM.) Drum addresses were tricky. I never did it myself, but some people improved performance by timing their instructions so that the drum head would arrive right before the data they wanted to read or write, to limit rotational lag. (Naturally this was all done in assembler, so you could count out exactly how many milliseconds of execution time you were committing, and if you knew the drum rotation speed, and the latency...)
So things tended to be stored in powers of two positions on the drum, unless a piece went bad.
Disks, when they first appeared, were slower than drums, but more capacious. (They were still too expensive and unreliable to use for persistent storage.) But the habit of mapping things out in powers of two transferred from drum storage to disk storage, and when files were introduced (I'm not sure when that was) the habit transferred again. This wasn't all blind habit; lots of the I/O techniques that had been developed depended on powers of two. So programmers thought of capacity in powers of two. This didn't make any sense to accountants, managers, etc. When computer equipment started being sold by the Megabyte, it made sense to the manufacturers to claim powers-of-10 Megabytes for storage, as they could claim larger sizes. (This wasn't as significant for Kilobytes, as 1024 is pretty close to 1000.) It not only made sense to the manufacturers, it also made sense to the accountants who were approving the orders. And when the managers started specifying the equipment... well, everything switched over to being measured in powers of 10.
No conspiracy. Just system dynamics. And programmers still think of storage in powers of 2, because that's what they work in. (This is less true when you work in higher-level languages, but if you don't take advantage of the powers of two that the algorithms are friendly with, it will cost you in performance, even if you don't realize it.)
Re:There are times...... (Score:5, Interesting)
Nano Nano (Score:5, Interesting)
Researchers Develop Technology to Make Terabyte Thumb Drives Possible [gizmodo.com]
Makes a mere 512GB flash chip look a bit sad, doesn't it?
Re:No.. where did you learn this? It's wrong. (Score:3, Interesting)
This is all very well, but you are totally wrong. Go download the datasheet for a popular flash part. Guess what? The capacity is an exact power of 2.
I'm not just making this up. NAND is naturally base-2 capacity sized. Yes, there is sparing, but pages are normally 2048 bytes (or larger these days), with a few extra bytes per 512 for ECC. The non-ECC areas are still power-of-2 sized, and the chip area itself is square and ends up holding a power-of-2 number of pages. End result: a power of 2. I've been working on this stuff for about 6 years now; I'm not just coming up with it randomly.
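A sketch of that geometry in Python. The specific numbers here (2048-byte pages, 16 spare bytes per 512 data bytes, 64 pages per block, 4096 blocks) are typical illustrative values rather than any one part's datasheet, but the structure is the point: every data-area dimension is a power of two, so the advertised capacity is too, with the ECC spare area sitting on top.

```python
# Typical NAND geometry (illustrative values, not a specific part):
# the data area is power-of-2 everywhere; the spare (ECC) bytes
# ride alongside and are not counted in the advertised capacity.

DATA_PER_PAGE = 2048      # power-of-2 data bytes per page
SPARE_PER_512 = 16        # extra ECC/bookkeeping bytes per 512 data bytes
PAGES_PER_BLOCK = 64      # power of 2
BLOCKS = 4096             # power of 2

spare_per_page = DATA_PER_PAGE // 512 * SPARE_PER_512     # 64 bytes
usable = DATA_PER_PAGE * PAGES_PER_BLOCK * BLOCKS          # advertised size
raw = (DATA_PER_PAGE + spare_per_page) * PAGES_PER_BLOCK * BLOCKS

print(f"usable capacity: {usable} bytes = 2^{usable.bit_length() - 1}")
print(f"raw capacity (with spare): {raw} bytes")
```

With these values the usable capacity comes out to exactly 2^29 bytes (a 4Gbit part), while the raw capacity including spare is slightly larger and not a power of two, which is consistent with the datasheet behaviour described above.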