
Data Storage IT

Samsung Unveils 64-Gbit Flash Memory Chip

Posted by kdawson
from the thanks-for-the-memories dept.
Lucas123 writes "The chips can be combined to create a 128-GB flash storage device capable of holding up to 80 DVD movies or 32,000 MP3 music files. The chip was created using 30-nanometer processing technology that was developed with Samsung's self-aligned double patterning technology. Manufacturing will start in 2009; but the article quotes a Gartner analyst who reminds us, 'Samsung has had a difficult time adhering to its timelines for mass production due to the complexity of MLC architectures and ever shrinking process geometries.'"
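A quick sanity check of the summary's capacity figures, working backwards from the claimed counts (this assumes decimal units, 1 GB = 10^9 bytes; the per-file sizes are implied by the article's own numbers, not measured):

```python
# Back-calculate the per-file sizes implied by "80 DVD movies or
# 32,000 MP3 files" on a 128-GB device (decimal units assumed).

capacity_gb = 128

movies = 80
gb_per_movie = capacity_gb / movies            # implied size of one "movie"
print(f"{gb_per_movie:.1f} GB per movie")      # 1.6 GB: a compressed rip, not a full DVD

songs = 32_000
mb_per_song = capacity_gb * 1000 / songs       # implied size of one MP3
print(f"{mb_per_song:.1f} MB per song")        # 4.0 MB: roughly a 4-minute track at 128 kbit/s
```

The 1.6 GB figure is the crux of the comment thread below: it is far smaller than a dual-layer DVD image, so "DVD movies" here can only mean compressed rips.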
This discussion has been archived. No new comments can be posted.

  • by T-Bone-T (1048702) on Thursday October 25, 2007 @10:48AM (#21113645)
    I have no idea how they got 80 movies from 128GB. DVD ISOs tend to be 7-10GB and divx rips tend to be 700MB in which case you get either 10-15 movies or over 160 movies.
  • by J0nne (924579) on Thursday October 25, 2007 @11:05AM (#21113881)

    I have no idea how they got 80 movies from 128GB. DVD ISOs tend to be 7-10GB and divx rips tend to be 700MB in which case you get either 10-15 movies or over 160 movies.
    Most recent DVD rips are of the 2-CD variety, so 1.4 GB total per movie (which gives us about 85 movies). You can see they know exactly what people use them for ;).
  • by networkBoy (774728) on Thursday October 25, 2007 @11:17AM (#21114087) Homepage Journal
    You're thinking of NOR devices.
    NAND organized flash has good write speeds but poor read speeds and NOR is the other way round.
    The controller has a lot to do with overall performance as well.

    Finally, Hynix has demonstrated a 22 die stack, but not in HVM. Samsung could *possibly* do a 16 die stack, but I'm betting on two packages, each with 8 die when this comes out.
  • by Ledsock (926049) on Thursday October 25, 2007 @11:21AM (#21114157)
    It's gigabits, not gigabytes. Therefore: 128 Gbit / 8 = 16 GB, and 16 × 1024 MB / 1400 MB ≈ 11.7 movies.

    Also, they specified DVD movies. Rips from DVDs are usually called AVIs, DivX, XviD, or whatever. If you compress a standard 2 layer DVD down to a little less than a single layer, then you might be able to get 4 crammed in that space, but there'd be some heavy compression.
  • 30nm? (Score:2, Informative)

    by keithjr (1091829) on Thursday October 25, 2007 @11:42AM (#21114495)
    I don't like that the article doesn't state any projected costs. 30nm is on the bleeding edge of process sizes, and I'd be surprised if they don't take a pretty severe hit to chip yield as a result. We'll see.
  • by Torne (78524) on Thursday October 25, 2007 @12:25PM (#21115213)

    remapping failed blocks from a small pool of reserved good ones
    Is that before or after you save data to that block?!

    During. Flash blocks fail while you are writing to them (or more specifically, when you are reading back the data to verify the write), so you have the data you wanted to write right there to save to another block. Flash blocks, under normal circumstances, don't go bad when they are just storing data or having it read out.

    Now for the serious part of the discussion: how does flash determine when a block has failed? I know regular hard disks use this feature too, but how do they determine that a block has failed? If a block fails, how would it be able to recover the data contained there? How does wear leveling fit into securely erasing flash storage? Even if you overwrite a block, how can you be sure it was really overwritten?

    Flash block remapping normally works by detecting write failures as above, so you don't need to recover any data. HDDs do it using ECC, usually by marking sectors as bad after errors are detected and corrected (so unless a sector is so bad it's gone past the ECC correction threshold, you keep your data).

    Wear levelling makes it impossible to securely erase flash storage without taking flash-chip specific measures.
  • by egoproxy (1114835) on Thursday October 25, 2007 @07:46PM (#21121625) Homepage
    According to the IEC standard, the binary equivalent of Zillion would be Zibibyte.
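The write-verify-remap behavior described in the thread above (a block only "fails" while being written, at which point the pending data is still in hand and can be redirected to a spare) can be sketched roughly as follows. This is a toy model, not a real controller: all names are hypothetical, and real firmware verifies with ECC rather than a plain read-back compare:

```python
# Toy sketch of write-time bad-block remapping, as described in the
# thread: a block fails when the just-written data reads back wrong,
# so the data to salvage is still available. All names hypothetical.

class FlashDevice:
    def __init__(self, num_blocks: int, num_spares: int):
        self.blocks = {}                      # physical block -> stored data
        self.bad = set()                      # blocks retired after a failed write
        self.remap = {}                       # logical block -> spare block
        self.spares = list(range(num_blocks, num_blocks + num_spares))
        self.failing = set()                  # test hook: blocks that corrupt writes

    def _raw_write(self, block: int, data: bytes) -> None:
        # A failing block silently stores corrupted data.
        self.blocks[block] = data if block not in self.failing else b"\x00" * len(data)

    def write(self, logical: int, data: bytes) -> int:
        """Write and verify; on mismatch, retire the block and retry on a spare."""
        while True:
            physical = self.remap.get(logical, logical)
            self._raw_write(physical, data)
            if self.blocks[physical] == data:          # read-back verification passed
                return physical
            self.bad.add(physical)                     # verification failed:
            self.remap[logical] = self.spares.pop(0)   # retire block, grab a spare

dev = FlashDevice(num_blocks=4, num_spares=2)
dev.failing.add(2)                 # make physical block 2 corrupt every write
where = dev.write(2, b"payload")   # the data is transparently redirected to a spare
print(where, dev.bad)
```

Note that nothing here triggers on reads: as the comment says, a block that is merely storing data is not retired, which is why this scheme needs no data recovery step.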
