Data Storage

512GB Solid State Disks on the Way 186

Viper95 writes "Samsung has announced that it has developed the world's first 64Gb (8GB) NAND flash memory chip using a 30nm production process, which opens the door for companies to produce memory cards with up to 128GB capacity."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Cost? (Score:2, Interesting)

    by Jeff DeMaagd ( 2015 ) on Sunday October 28, 2007 @11:38AM (#21148403) Homepage Journal
    Capabilities aren't very important if they aren't affordable. So maybe some government contractors can afford those things now, I don't think it would be that interesting to the consumer until SSDs get to a tenth of the cost.
  • by ILuvRamen ( 1026668 ) on Sunday October 28, 2007 @11:49AM (#21148475)
    maybe they created a controller that could read and write from them simultaneously so it's double the read/write speed. I hope so cuz it better be able to beat my SATA drives in read/write speed, otherwise I don't really care how fast the seek time is, cuz any file over like 100KB would be slower to open on it than on a normal hard drive.
    oh yeah and I agree with the other posts. Call me when it's on its way to my budget, not just store shelves lol.
  • by schnikies79 ( 788746 ) on Sunday October 28, 2007 @11:54AM (#21148505)
    It's not so easy to use 1,000,000 = 1MB with this system. Unless they do it anyway.
  • What about IOPS? (Score:3, Interesting)

    by KrackHouse ( 628313 ) on Sunday October 28, 2007 @11:58AM (#21148531) Homepage
    Does anybody know how well flash SSDs perform in RAID arrays? 15kRPM SAS drives are horrendously expensive so if I could plug a couple small flash drives into my RAID card (RAID 0) I'd be a happy camper. Can't find benchmarks anywhere and flash drives have horrible write speeds which means they have terrible OLTP performance.
  • Re:Cost? (Score:3, Interesting)

    by JackMeyhoff ( 1070484 ) on Sunday October 28, 2007 @11:59AM (#21148535)
    They already use http://www.bitmicro.com/ [bitmicro.com]
  • by Jeff DeMaagd ( 2015 ) on Sunday October 28, 2007 @12:00PM (#21148537) Homepage Journal
    The seek times of SSDs should make it such that trying to read and write from the storage array at the same time would seem kind of pointless. It also increases the costs. It would probably go the way of FB-DIMM. FB-DIMM is supposed to allow simultaneous reads and writes to different memory cards, but it's too expensive and has other problems limiting its performance. Now, if the controller designer can apply something like that to a hard drive array, then maybe that would be nice. I think it might be possible to do that in software, make it like a software RAID. Maybe JBOD drive concatenation allows this, I don't know.
  • Re:What about IOPS? (Score:2, Interesting)

    by pslam ( 97660 ) on Sunday October 28, 2007 @12:44PM (#21148767) Homepage Journal

    Does anybody know how well flash SSDs perform in RAID arrays? 15kRPM SAS drives are horrendously expensive so if I could plug a couple small flash drives into my RAID card (RAID 0) I'd be a happy camper. Can't find benchmarks anywhere and flash drives have horrible write speeds which means they have terrible OLTP performance.

    Individual flash chips have terrible write performance, mostly due to the slow block erase time. However, you always use multiple chips in high-capacity storage devices (anything larger than an MP3 player), and you can start doing fancy tricks with interleaving, or just plain have way more buffer memory to hide the erase time. If you really want to crank out even higher performance, then you stick multiple NAND interfaces on the controller chip and drive them all in parallel.

    If you stack about 4-8 chips in a device, you start getting stream throughput comparable to a 15k drive. Also bear in mind that the chips we're talking about here are already stacked 4-8 internally anyway! The limiting factor will probably end up being the NAND flash bus (or number of buses) connecting the controller to the flash chips.
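
    To put a number on the interleaving idea, here is a toy Python model of striping page writes round-robin across several NAND chips; the page size and program time are illustrative round figures of the era, not from any particular datasheet:

        # Toy model of interleaving: stripe page writes across several NAND
        # chips so that one chip's slow program time is hidden behind the
        # others. Timings are illustrative, not from a real part.
        PAGE_SIZE = 2048        # bytes per NAND page (typical large-block NAND)
        PROGRAM_TIME_US = 200   # time one chip stays busy programming a page

        def stream_write_mbps(num_chips):
            # Effective streaming write rate with pages striped across
            # num_chips chips, assuming the bus itself is not the bottleneck.
            pages_per_second = num_chips * (1_000_000 / PROGRAM_TIME_US)
            return pages_per_second * PAGE_SIZE / 1_000_000  # MB/s

        for n in (1, 2, 4, 8):
            print(f"{n} chip(s): ~{stream_write_mbps(n):.1f} MB/s")
        # 1 chip: ~10.2 MB/s ... 8 chips: ~81.9 MB/s, i.e. 15k-drive territory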

  • by Doc Ruby ( 173196 ) on Sunday October 28, 2007 @12:48PM (#21148789) Homepage Journal
    Notebook drives currently cost as little as about $50:80GB, or about $0.63:GB, which is a good size for a mobile device, and almost the largest available.

    Flash is as little as $64:8GB (USB), or $8:GB. Removing the redundant USB connectors and packaging and putting the chips in a single unit the size of a notebook drive would give an 80GB Flash drive for somewhere closer to $500 than to $800.

    FWIW, a 4GB microdrive is $30, or $7.50:GB.

    These numbers show that a Flash drive competing directly with a disc drive is already right around the corner. By the time 2010 comes around, what will mainly be different is the upper capacity, around 1TB, with Flash probably cheaper than discs.
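
    For anyone checking the arithmetic, the per-GB figures above come out as follows (a quick Python rendition of the 2007 street prices quoted in this comment; the "$X:YGB" notation means price per capacity):

        # Cost-per-GB arithmetic for the street prices quoted above.
        options = {
            "80GB notebook HDD": (50.0, 80),
            "8GB USB flash stick": (64.0, 8),
            "4GB microdrive": (30.0, 4),
        }
        for name, (price, gb) in options.items():
            print(f"{name}: ${price / gb:.3f}/GB")
        # 80GB notebook HDD: $0.625/GB
        # 8GB USB flash stick: $8.000/GB
        # 4GB microdrive: $7.500/GB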
  • by Anonymous Coward on Sunday October 28, 2007 @01:11PM (#21148921)

    This has been discussed before.
    Indeed, it has been discussed before. However, the wear leveling algorithms used in flash media appear to be a trade secret, and it's not documented anywhere (that I could find) how they work. Do they make use of some knowledge of the filesystem (FAT/FAT32 for USB sticks) to determine which sectors are currently unused? Or do they have only a small pool of "spare" sectors to cycle through? What happens if I use a journaling filesystem?
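
    Since the real algorithms are secret, here is only a hypothetical Python sketch of the "spare pool" scheme the parent asks about; real controllers are far more involved (garbage collection, ECC, possibly filesystem awareness), and nothing here reflects any vendor's actual design:

        # Hypothetical "spare pool" wear leveling: every logical rewrite goes
        # to the least-worn free physical block instead of rewriting in
        # place, so wear spreads even if one sector is hammered.
        import heapq

        class ToyWearLeveler:
            def __init__(self, num_blocks):
                self.mapping = {}                    # logical -> physical block
                self.erase_counts = [0] * num_blocks
                self.free = [(0, b) for b in range(num_blocks)]  # (wear, block)
                heapq.heapify(self.free)

            def write(self, logical):
                if logical in self.mapping:          # retire the old block:
                    old = self.mapping[logical]      # erase it and return it
                    self.erase_counts[old] += 1      # to the free pool
                    heapq.heappush(self.free, (self.erase_counts[old], old))
                _, new = heapq.heappop(self.free)    # least-worn free block
                self.mapping[logical] = new

        wl = ToyWearLeveler(num_blocks=8)
        for _ in range(100):
            wl.write(0)                              # hammer one "sector"
        print(wl.erase_counts)                       # wear is spread, ~12-13 each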
  • by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Sunday October 28, 2007 @01:26PM (#21149061)
    The IBM Winchester line of drives from the 70s was always labeled in units of 1 MB = 10^6 bytes. It is just completely false that hard drives have always been labeled using binary prefixes. Digging around, it appears that early PC/workstation drives in the early 80s were mixed. Some used 2^20, some used 10^6. In the late 80s, consumer hard drives made by Seagate, WD, etc. all converged on 2^N for a few years, before switching to 10^6 in the early 90s.

    Bandwidth is always measured as 1 MB/s = 10^6 bytes/s, or 1 Mb/s = 10^6 bits/s. Should 1 MB take 1.04 seconds to transfer over a 1 MB/s data link? This includes all forms of Ethernet, SCSI, ATA, PCI, and any other protocol I have looked up. If 1 MB/s does not equal 1 MB per 1 s, someone should be shot; that is just not OK.

    mega = 10^6 in all other fields. Including other computer terms -- 1 MHz, 1 MFLOP, 1 megapixel, etc.

    computer RAM is the only thing that has consistently been labeled using binary approximations to the SI units. And as long as I can remember (computing magazines in the 80s) people have acknowledged that 1 MB = 2^20 is an *approximation* and that mega=10^6.

    Mega = 10^6 is right. Mega = 2^20 is wrong. End of story. It happens that it is technically convenient to manufacture and use RAM in powers of 2. No such constraint applies to hard drives, so there is no reason to use the base-2 prefixes. Stupid OSs should be changed to use the SI prefixes when reporting file sizes. RAM should be labeled using the "base-2" prefixes, but they are admittedly somewhat annoying due to lack of familiarity, and since nobody uses base-10 RAM, it isn't a big deal.
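
    The size of the discrepancy being argued about here is easy to put in numbers (Python):

        # A drive sold as "512 GB" (decimal, as the label intends) versus
        # what an OS reporting in binary units will display.
        advertised = 512 * 10**9
        print(advertised / 2**30)   # ~476.8, the "GB" figure such OSes show
        # The gap grows with each prefix: 2.4% at kilo (2^10 vs 10^3),
        # 4.9% at mega, 7.4% at giga, 10.0% at tera.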
  • by DamonHD ( 794830 ) <d@hd.org> on Sunday October 28, 2007 @01:30PM (#21149097) Homepage
    Hi,

    I already boot/run my main Internet-facing server (Ubuntu) from a 4GB SSD memory card to minimise power consumption, and I have more than 50% of the space free, ie it wasn't that hard to do.

    http://www.earth.org.uk/low-power-laptop.html [earth.org.uk]

    I'm not being that clever about it: using ext3 rather than any wear-leveling SSD-friendly fs, and simply minimising spurious write activity, eg by turning down verbosity on logs. And laptop-mode helps a lot, of course.

    Now, that machine also has a 160GB HDD for infrequently-accessed bulk data (so the HDD is spun down and in a power-conserving sleep mode most of the time), and it would be good to get that data onto SSD too. But a blend, as in many memory/storage systems, gives a good chunk of the maximum performance and power savings for reasonable cost.

    Rgds

    Damon
  • by Kjella ( 173770 ) on Sunday October 28, 2007 @02:24PM (#21149475) Homepage

    Hell, they even convinced SI. SI have absolutely no authority or experience with determining computer units, and the "solution" they came up with is even more confusing and ugly. How do you tell if MeB or MiB is 2^20 or 10^6? Muppets.
    I think you're doing a bit of revisionist history yourself. SI was there first. The SI units have always been in powers of ten, and had been used in all other branches of science long before there was a "computer science". It was computer scientists who originally redefined them to be powers of two, and in the computer world it was so for several decades. It was confusing, but not more so than "if it ends in -bytes, it's a power of 2". Except the floppy drive, which is 1.44 "MB" = 1.44*1000*1024 bytes (1987), or the modem speeds, which were reported as 1 kbps = 1000 bps (1972) because that's how electrical engineers talked, or Ethernet, which ran at 10Mbit/s = 10,000,000 bits/s (1980). This led to a "bytes are powers of two, bits are powers of ten" rule, which made all sorts of fuck-ups possible.

    Yes, the HDD manufacturers did it because it was a cheap 5-10% savings, but there were plenty of excuses, and not all of them bad. It was confusing every time computer science bumped into one of the other sciences, telecommunications in particular, which inevitably used the SI prefixes. However, instead of actually fixing the problem, it only became an even greater mess, invalidating pretty much every rule of thumb because the OS would invariably report something else. That's pretty much proof they didn't want to fix anything, just grab some extra profit.

    After that, it was a big mess with next to no interest in solving it. That's when the people at the IEC, not SI, and certainly not pushed by HDD manufacturers, finally said that these units are FUBAR, and that the only way to get a long-term solution is to abandon the SI prefixes and make new and ugly ones, particularly the names. At that point we're talking 50 years of computer-science use against 200 years of other sciences, with retards messing up the boundary. I think they're ugly as hell, but they're also the only way to go forward from here.
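
    The floppy example above is worth working out, since it really is a third convention in the wild, matching neither SI nor binary prefixes (standard 3.5" HD geometry, in Python):

        # The "1.44 MB" floppy uses a mixed unit: 1 "MB" = 1000 * 1024 bytes.
        capacity = 2 * 80 * 18 * 512     # sides * tracks * sectors * bytes
        print(capacity)                  # 1474560 bytes
        print(capacity / (1000 * 1024))  # 1.44 "floppy MB"
        print(capacity / 10**6)          # ~1.47 SI MB
        print(capacity / 2**20)          # ~1.41 MiB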
  • Sorry, but for certain algorithms it's important that you are working in powers of 2, and that was always called Mega-whatever (bits, bytes, words); or, more commonly, Kilo-whatever, which was 2^10 whatevers.

    IO has always been a mixture and compromise. Punched cards could hold 12 * 72 bits (7094 row binary) or 12 * 80 bits (column binary, but don't try to read it with the main card reader). Try to fit THAT into your "powers of 10" scenario!

    For the current set of IO devices, capacity measurement was defined by marketing. I saw arguments about it in the trade journals when it was being fought out over hard disks. AFAIK, companies independently decided on the choice that was most advantageous to them. It was powers of 10. This was not appreciated by any single customer that I was aware of. Some despised it, some didn't care, nobody was in favor. (Yeah, it was a small sample, but it's one that I was aware of. Most didn't care, and many of those weren't interested in understanding.)

    But block allocations of RAM are done in powers of two, and these are frequently mapped directly to IO devices. So having a mismatch creates problems. Disk files were (possibly) created as an answer to this problem. (7094 drum storage didn't have files. Things were addressed by drum address. If a piece went bad, you had to patch your program to avoid it. UGH! Tape was for persistent data; drum storage was transient... just slightly more persistent than RAM.) Drum addresses were tricky. I never did it myself, but some people improved performance by timing their instructions so that the drum head would be right before the data they wanted to read or write, to limit lag. (Naturally this was all done in assembler, so you could count out exactly how many milliseconds of execution time you were committing, and if you knew the drum rotation speed, and the latency...)
    So things tended to be stored at powers-of-two positions on the drum, unless a piece went bad.

    Disks, when they first appeared, were slower than drums, but more capacious. (They were still too expensive and unreliable to use for persistent storage.) But the habit of mapping things out in powers of two transferred from drum storage to disk storage. When files were introduced (not sure when that was) the habit transferred. This wasn't all blind habit; lots of the I/O techniques that had been developed were dependent upon powers of two. So programmers thought of capacity in powers of two. This didn't make any sense to accountants, managers, etc. When computer equipment started being sold by the Megabyte, it made sense to the manufacturers to claim powers-of-10 Megabytes for storage, as they could claim larger sizes. (This wasn't as significant for Kilobytes, as 1024 is pretty close to 1000.) It not only made sense to the manufacturers, it also made sense to the accountants who were approving the orders. And when the managers started specifying the equipment... well, everything switched over into being measured in powers of 10.

    No conspiracy. Just system dynamics. And programmers still think of storage in powers of 2, because that's what they work in. (This is less true when you work in higher-level languages, but if you don't take advantage of the powers of two that the algorithms are friendly with, it will cost you in performance, even if you don't realize it. A small illustration follows.)
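
    One small example of the power-of-two friendliness meant here, in Python: with a power-of-two table size, a modulo reduction collapses to a bit mask, which is a single cheap instruction in any language.

        # Reducing an index into a table: with a power-of-two size,
        # "i % size" can be computed as "i & (size - 1)".
        size = 1024                    # 2**10
        for i in (5, 1023, 1024, 5000):
            assert i % size == i & (size - 1)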

  • by owlnation ( 858981 ) on Sunday October 28, 2007 @03:03PM (#21149777)

    Maybe human beings are just porn's way of making more porn.
    The great thing about Slashdot is that there really are some incredibly smart and funny people (two things that usually go together) here. Take the above quote, for example: it is both funny and deeply profound. It is a Hall of Fame quote. Thank you, it made my day.
  • Nano Nano (Score:5, Interesting)

    by shmlco ( 594907 ) on Sunday October 28, 2007 @04:08PM (#21150399) Homepage
    Okay, how about a terabyte in a form factor small enough for a thumb drive, that costs one-tenth the price of traditional flash memory and is a staggering 1000 times more energy efficient?

    Researchers Develop Technology to Make Terabyte Thumb Drives Possible [gizmodo.com]

    Makes a mere 512GB flash chip look a bit sad, doesn't it?
  • by pslam ( 97660 ) on Sunday October 28, 2007 @11:09PM (#21153451) Homepage Journal

    This is all very well but you are totally wrong. Go download a datasheet of a popular FLASH part. Guess what? The capacity is an exact power of 2.

    I'm not just making this up. NAND is naturally base-2 capacity sized. Yes, there is sparing, but pages are normally 2048 bytes (or larger these days) with a few extra bytes per 512 for ECC. The non-ECC areas are still power-of-2 based, and the chip area itself is square and ends up holding another power-of-2 number of pages. End result: a power of 2. I've been working on this stuff for about 6 years now; I'm not just coming up with it randomly.
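
    A rough Python sketch of the layout pslam describes, using typical large-block NAND figures of the period rather than any specific part: each 2048-byte data page carries a spare area of 16 bytes per 512-byte sub-page for ECC and metadata, and the quoted capacity counts only the data area, so it stays an exact power of 2.

        # Typical large-block NAND geometry of this era (illustrative figures).
        page_data = 2048                      # data bytes per page
        page_spare = (page_data // 512) * 16  # 64 spare bytes for ECC/metadata
        pages_per_block = 64
        blocks = 8192                         # a "1 GB"-class die

        data_bytes = page_data * pages_per_block * blocks
        print(data_bytes == 2**30)            # True: exactly 1 GiB of data area
        print(page_data + page_spare)         # 2112 physical bytes per page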
