
Samsung 256GB SSD is World's Fastest

i4u submitted one of many holiday weekend slow news day stories which starts "Samsung Electronics announced today the world's fastest, 2.5", 256GB multi-level cell (MLC) based solid state drive (SSD) using a SATA II interface. Performance data of the new Samsung 256GB SSD features a sequential read speed of 200 megabytes per second (MB/s) and sequential write speed of 160MB/s. The Samsung MLC-based 2.5-inch 256GB SSD is about 2.4 times faster than a typical HDD. Furthermore, the new 256GB SSD is only 9.5 millimeters (mm) thick, and measures 100.3x69.85 mm. Samsung is expected to begin mass producing the 2.5-inch, 256GB SSD by year end, with customer samples available in September. A 256GB capacity is getting large enough to replace hard drives for good — now the prices just need to come down further for large capacity SSDs."
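The "2.4 times faster" claim in the summary implies a baseline HDD sequential read speed in the low 80s of MB/s; a quick sanity check (the SSD figure is from the summary, the HDD baseline is derived, not stated):

```python
# Sanity check of the summary's speedup claim.
# ssd_read_mb_s comes from the article; the HDD baseline is inferred.
ssd_read_mb_s = 200.0
claimed_speedup = 2.4
implied_hdd_mb_s = ssd_read_mb_s / claimed_speedup
print(f"Implied HDD sequential read: {implied_hdd_mb_s:.1f} MB/s")  # ~83.3
```

That is consistent with a good 7200 RPM desktop drive of the era, so the comparison point appears to be a desktop HDD rather than a slower 2.5-inch laptop drive.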
This discussion has been archived. No new comments can be posted.

  • by Eccles ( 932 ) on Monday May 26, 2008 @09:06AM (#23543641) Journal
    Looking at a hard drive, it's got lots of moving parts, the need for sealing, etc. One would think that in the long run a solid state drive that is just a few chips and connecting logic would be cheaper to produce once you have the facilities.
  • This is good news... (Score:3, Interesting)

    by Tastecicles ( 1153671 ) on Monday May 26, 2008 @09:09AM (#23543665)
    ...if it can cope with high definition capture it'll be handy for me and my shutterbug family who're always out with various still and video cameras. Nothing worse than shortdropping a notebook and killing the hard disk.
  • by StCredZero ( 169093 ) on Monday May 26, 2008 @09:37AM (#23543881)
    This pales in comparison to the Fusion-io ioDrive [tgdaily.com]. The videos show tests where they are running 8 operations at the same time, at blazing speeds: copying multiple DVDs in 5 seconds, and swapping a blizzard of 4KB blocks nearly as fast as RAM. Instead of 2 channels, their cards use 160 channels at the same time. This gives a single card the parallel random access bandwidth of a 1,000-disk SAN.

    http://www.tgdaily.com/content/view/34065/135/ [tgdaily.com]

    At $30 per gigabyte, it would be great to have a 10-gig for OS and your current favorite MMO game.
  • by SD-Arcadia ( 1146999 ) on Monday May 26, 2008 @09:45AM (#23543933) Homepage
    Once the prices come down and the tech matures a little more, a nice small 32-64GB SSD for the apps and a 1TB+ for storage should be a great overall solution. This could even happen in form of an elegant hybrid unit.
  • Re:MLC, not SLC. (Score:2, Interesting)

    by Anonymous Coward on Monday May 26, 2008 @09:45AM (#23543935)
    Normal SSD drives die in a matter of months on a typical developer's machine so it shouldn't be that hard to test.

    I have first-hand experience with this, so I laugh when people say flash drives will last longer than their mechanical counterparts. The rewrite cycle count needs to be way, way higher than it currently is. Wear leveling can only do so much, and it just gets worse as the drive gets full.
  • Re:Random write ops? (Score:5, Interesting)

    by Fweeky ( 41046 ) on Monday May 26, 2008 @10:10AM (#23544135) Homepage
    Every benchmark I've seen on SSDs has shown random IOPS of between 20 and 120/sec, ranging from cheaper consumer drives to more expensive enterprisey models; writing single blocks to random locations completely demolishes their performance, because such small writes often require the drive to erase huge blocks.

    New techniques try to avoid this by basically turning random writes into sequential ones; once you've erased a 4+MB block, you put all new writes into that block (you can turn a 1 into a 0 without an expensive erase cycle) and remap it similarly to how it's done with wear leveling. I'm not aware of anyone actually doing this yet, though.
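The scheme described here, appending every write to a freshly erased block and keeping a logical-to-physical remap table, is essentially a log-structured flash translation layer. A minimal sketch (class name and block geometry are illustrative):

```python
class LogStructuredFTL:
    """Turn random logical writes into sequential physical writes.

    Every write appends at the current write pointer; a remap table
    records where each logical page now physically lives.
    """

    PAGES_PER_BLOCK = 1024  # e.g. a 4MB erase block of 4KB pages

    def __init__(self):
        self.remap = {}          # logical page -> (block, page offset)
        self.current_block = 0
        self.write_ptr = 0

    def write(self, logical_page, data):
        if self.write_ptr == self.PAGES_PER_BLOCK:
            self.current_block += 1  # claim the next erased block (GC elided)
            self.write_ptr = 0
        self.remap[logical_page] = (self.current_block, self.write_ptr)
        self.write_ptr += 1

    def locate(self, logical_page):
        return self.remap.get(logical_page)

# Scattered logical pages land in consecutive physical slots:
ftl = LogStructuredFTL()
for page in (907, 12, 511):
    ftl.write(page, b"data")
print(ftl.locate(907), ftl.locate(12), ftl.locate(511))
```

Garbage collection (reclaiming blocks whose pages have all been remapped elsewhere) is the hard part omitted here; it is what makes real FTL performance workload-dependent.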
  • by Jeppe Salvesen ( 101622 ) on Monday May 26, 2008 @10:16AM (#23544185)
    How would this perform for index tablespaces and logfiles? I imagine lifetime/health will have to be monitored, but that's already being done with regular platterspinners.
  • by 4D6963 ( 933028 ) on Monday May 26, 2008 @11:51AM (#23545105)

    A most interesting and pertinent question! I think that if such a memory reached the speed of RAM with the capacity of a HDD, we could merge the two concepts into a central memory used for everything. The first real gain with that type of design is that instead of loading (uncompressed) files from the HDD into RAM, you could simply point to them and access them directly. Virtual machines could benefit greatly from that by pausing and resuming their execution instantly, for all their virtualised RAM would live in a file that would simply be pointed to. The same could happen for regular programs. They could have their whole memory space in a file (the OS would take care of it), and if a program were prematurely killed you could resume its execution state.

    Likewise, it would remove the concept of RAM space, as well as of virtual memory; that is, the OS wouldn't use a single file to put everything in, but rather as many files as it needs, one per program (for example). With such a concept, the execution of everything I mentioned (programs, OS, virtual machines) could be paused and resumed instantly.

    As for the actual booting of the machine, I'm sure a clever use of it (say, copying read-only pieces of memory that are hardware/configuration-independent to another space in memory where they could be modified; or not, maybe you could have a partially read-only OS) would greatly speed things up.

    Somehow I can see that happening in embedded devices, not so soon in desktop machines, but we'd still have to wait for SSD memory to be fast enough.
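The "point to files instead of loading them" idea exists today in limited form as memory-mapped files; a storage tier as fast as RAM would just make the mapping the only tier. A sketch with Python's mmap module (the file path is illustrative):

```python
import mmap
import os
import tempfile

# Create a small file, then access it as if it were RAM: the OS pages
# it in on demand rather than the program copying it into a buffer.
path = os.path.join(tempfile.mkdtemp(), "state.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 0)  # map the whole file
    mem[0:5] = b"hello"             # write through the mapping
    mem.flush()                     # persist; a restarted process can remap it
    mem.close()

with open(path, "rb") as f:
    print(f.read(5))                # b'hello'
```

This is the germ of the "single-level store" the parent describes: program state lives in a file, and resuming is just remapping, not reloading.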

  • Re:Random read ops? (Score:2, Interesting)

    by aharth ( 412459 ) on Monday May 26, 2008 @12:09PM (#23545269) Homepage
    How about random reads? I've benchmarked a 16G Samsung SSD and the standard Linux file systems (ext2, ext3) seem to cache read blocks in the (main memory) file system buffers.

    Doing so seems to diminish some of the possible overall system performance improvements: if I have an SSD, I want to use main memory for either HD I/O caching or programs. Caching disk blocks from the fast SSD in main memory seems suboptimal.
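The double-caching objection can be quantified with a simple expected-latency model (all latency figures below are illustrative assumptions, not benchmark results): RAM spent caching blocks a fast SSD can already serve cheaply saves far less time than the same RAM spent in front of a slow HDD.

```python
# Expected read latency under a RAM cache with a given hit rate.
def expected_latency_us(hit_rate, device_latency_us, ram_latency_us=0.1):
    return hit_rate * ram_latency_us + (1 - hit_rate) * device_latency_us

hdd_us, ssd_us = 8000.0, 100.0  # illustrative per-read access times
for hit_rate in (0.0, 0.9):
    print(f"hit rate {hit_rate:.0%}: "
          f"HDD {expected_latency_us(hit_rate, hdd_us):.0f} us, "
          f"SSD {expected_latency_us(hit_rate, ssd_us):.2f} us")
```

A 90% hit rate saves roughly 7200 microseconds per read over an HDD but only about 90 over the SSD in this model, which is the commenter's point: that RAM would do more good given to programs (or left caching the slower tier).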
  • by menace3society ( 768451 ) on Monday May 26, 2008 @12:27PM (#23545457)
    I think the problem is, if you try to use the flash as a sort of buffer, you end up with the worst of both worlds. You're still subject to the same mechanical failure risks for long-term storage as a simple hard disk, but you're also limited by the number of writes you can do to the buffer.

    You can accomplish the same thing, with fewer flaws, by just having two drives.
  • I like SSD but.... (Score:2, Interesting)

    by jeffc128ca ( 449295 ) on Monday May 26, 2008 @01:08PM (#23545953)
    I've been waiting for something to get around the hard drive speed bottle neck for a long time. I do a lot of data analysis on huge data sets, mostly financial market data. I end up doing a massive amount of reads and writes to hard drives which slows things down a lot.

    My main fear with SSD's is the wearing out of blocks and bits. Typical data sets I work with are about 2 gigabytes. I run scripts against the data to look at various patterns and generate forecasting data. I could read and write that data six or eight hundred times in a day's testing. Well over a terabyte a day. How soon before an SSD craps out on me at that pace?

    I would love to have an SSD for the blazing fast access times, but I don't want to have to replace it every six months. I'd pay extra for it, probably 2 to 3 times the traditional hard disk amount. But it has to last a few years at least. The other option of going 64 bit, adding huge amounts of DRAM, and running a RAM disk isn't financially sound at the moment.
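A rough lifetime estimate for that workload is possible, assuming MLC endurance of around 10,000 program/erase cycles per cell, perfect wear leveling, and no write amplification (every one of those is an idealizing assumption, and write amplification in particular would cut the result substantially):

```python
# Idealized flash endurance: total writable bytes = capacity * P/E cycles.
def drive_lifetime_days(capacity_gb, pe_cycles, daily_writes_gb):
    total_write_capacity_gb = capacity_gb * pe_cycles
    return total_write_capacity_gb / daily_writes_gb

# 256GB MLC drive; ~1.5TB/day of rewrites, as in the workload above.
days = drive_lifetime_days(capacity_gb=256, pe_cycles=10_000,
                           daily_writes_gb=1536)
print(f"~{days:,.0f} days (~{days / 365:.1f} years) under ideal assumptions")
```

Even the ideal figure lands in the few-years range the commenter asks for, so the real answer hinges on how far write amplification and imperfect wear leveling drag it down.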
  • Which is why I have been wondering: you know how a HDD has a bit of RAM as a cache to make writes more efficient? Why aren't they doing that for SSDs? It seems like it would be a lot more efficient if, let's say, an SSD with 4MB erase blocks had 8MB of cache on it, so the data can be buffered and written in efficient blocks. I'll admit I haven't gotten to read up on the algorithms used with SSDs, so I don't know if there is a technical problem with that idea or not, but it would seem to me to be a "best of both worlds" kind of solution.

    And speaking of "best of both worlds", what happened to everything going hybrid? A couple of years back, all you read was that they were going to add anywhere from 256MB to 8GB of flash in addition to the 8-16MB of RAM cache on HDDs to boost data access and make them even more efficient at writes. What happened? I know I would personally like a hybrid that had, say, 8GB on it, so I could have the OS stored in flash with my data and swap stored on the platters. But that is my take on it anyway, YMMV.
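The RAM-cache idea in the parent comment, buffering small writes until a full erase-block's worth can be programmed in one pass, can be sketched as a write-back cache (sizes and names are illustrative):

```python
class CoalescingWriteCache:
    """Buffer small writes in RAM; flush one erase block's worth at a time."""

    def __init__(self, erase_block_bytes=4 * 1024 * 1024):
        self.erase_block_bytes = erase_block_bytes
        self.pending = {}        # offset -> data, held in RAM
        self.pending_bytes = 0
        self.flushes = 0         # count of full-block programs issued

    def write(self, offset, data):
        self.pending[offset] = data
        self.pending_bytes += len(data)
        if self.pending_bytes >= self.erase_block_bytes:
            self.flush()

    def flush(self):
        # A real drive would program one erase block sequentially here,
        # turning many small random writes into a single large one.
        self.pending.clear()
        self.pending_bytes = 0
        self.flushes += 1
```

The catch, which may be why drives of this era didn't advertise it, is volatility: buffered writes in RAM are lost on power failure unless the drive adds a capacitor or battery to drain the cache, which is exactly the enterprise-controller trick.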

  • BAARF (Score:3, Interesting)

    by tepples ( 727027 ) <tepples.gmail@com> on Monday May 26, 2008 @01:43PM (#23546383) Homepage Journal

    Easier to miniaturize, certainly. Right now they're doing massive RAID 0 to get performance; I wonder what it'd be like if they could do RAID 1/5/6, for example: more or less forget hard disk crashes, just replace some flash plug-in modules in your SSD. OK, the electronics could still fry, and it could get lost or stolen, but mechanical failure seems to be the typical killer.
    I've read that RAID 3/4/5 is unreliable [baarf.com]. As capacities grow, it takes longer to reconstruct a new spare from the surviving drives when one dies. In fact, BAARF contends that capacities have grown to the point that it's likely that another drive will fail during reconstruction. Are there any big drawbacks to RAID 6?
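The rebuild-risk argument can be made concrete. Rebuilding means reading every sector of every surviving drive, and each bit read carries a small chance of an unrecoverable read error (URE); using the common consumer spec-sheet figure of one URE per 10^14 bits (an assumption, and vendors' figures vary), the chance of a clean rebuild drops quickly with array size:

```python
# Probability that a rebuild reads all surviving data without hitting a URE.
def rebuild_success_probability(surviving_drives, drive_tb, ure_per_bit=1e-14):
    bits_read = surviving_drives * drive_tb * 8e12  # TB -> bits
    return (1 - ure_per_bit) ** bits_read

for drive_tb in (0.5, 1.0):
    p = rebuild_success_probability(surviving_drives=6, drive_tb=drive_tb)
    print(f"6 x {drive_tb} TB survivors: {p:.1%} chance of a clean rebuild")
```

With six 1TB survivors the model gives only about a 62% chance of rebuilding without an error, which is BAARF's point. RAID 6's second parity answers the question asked here: a URE (or a whole second drive failure) during rebuild is still recoverable; the main drawbacks are the extra parity drive and a heavier write penalty.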
  • Re:Random write ops? (Score:1, Interesting)

    by Anonymous Coward on Monday May 26, 2008 @01:48PM (#23546451)

    [O]nce you've erased a 4+MB block, you put all new writes into that block [...] I'm not aware of anyone actually doing this yet, though.
    Some filesystems (for example ZFS) will do this.
  • by canuck57 ( 662392 ) on Monday May 26, 2008 @03:38PM (#23547543)

    Looking at a hard drive, it's got lots of moving parts, the need for sealing, etc. One would think that in the long run a solid state drive that is just a few chips and connecting logic would be cheaper to produce once you have the facilities.

    Given a sufficient amount of time, solid state SSDs will likely overtake hard drives. But I think many industry analysts are far too quick to predict wide adoption of SSD media over hard drives. It will be slow. And I heard the same predictions 10 years ago.

    Three huge problems stand in the way of SSD adoption:

    • density: hard drives always seem to be many steps ahead in density
    • cost: a $100 hard drive buys 1TB; what would a 1TB SSD cost?
    • write speed/reliability issues with SSD

    Oh, SSDs will creep in, but I don't expect them to wipe out hard drives any time soon. I will say that when a 640GB SSD is under $250, its adoption will soar for laptops. By that time, 1TB mechanical hard drives will be under $100 and 2 or perhaps 3TB drives may exist as well. But we are optimistically 4-5 years from that point. For the data center, even longer: the write and cost issues must be totally resolved there, as some drives in busy systems go nuts on writes and can't afford a hit. If I want to buy 10PB of storage and SSD is twice the price, it will lose.

    What you might see in widespread adoption first is, say, an affordable 64GB SSD for the OS and an adjunct 1TB hard drive for raw storage.

  • by Joce640k ( 829181 ) on Monday May 26, 2008 @10:15PM (#23550873) Homepage
    Have you seen the average notebook buyer in a shop? It's all a game of "find the biggest number".

    As a geek I'm always being asked if such-and-such a laptop is "fast enough", if XX is enough disk space, etc.

    People have no idea what the numbers mean, or how they compare to the numbers six months ago. They don't even know the difference between RAM and hard disk. All they know is they don't want low numbers.

    Bottom line ... "disk capacity" is a number on the little label so it has to keep increasing no matter what.
