Data Storage Hardware

Intel's First SSD Blows Doors Off Competition

theraindog writes "Intel is entering the storage market with an ambitious X25-M solid-state drive capable of 250MB/s sustained reads and 70MB/s writes. The drive is so fast that it employs Native Command Queuing (originally designed to hide mechanical hard drive latency) to compensate for latency the SSD encounters in host systems. But how fast is the drive in the real world? The Tech Report has an in-depth review comparing the X25-M's performance and power consumption with that of the fastest desktop, mobile, and solid-state drives on the market."
  • by Anonymous Coward on Monday September 08, 2008 @02:56PM (#24923327)
    This article at HotHardware has a few additional tests that show real-world usage models as well as synthetic benchmarks: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/ [hothardware.com]

    The PCMark Vantage tests are especially impressive: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/?page=7 [hothardware.com]
  • by MojoKid ( 1002251 ) * on Monday September 08, 2008 @03:01PM (#24923405)
    This review at HotHardware shows some additional data, including a few real-world usage models such as the PCMark Vantage tests: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/ [hothardware.com]

    Benchmarks start here: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/?page=4 [hothardware.com]
  • by mooingyak ( 720677 ) on Monday September 08, 2008 @03:10PM (#24923543)

    That depends entirely on what kind of RAID [wikipedia.org] we're talking about...

  • by bunratty ( 545641 ) on Monday September 08, 2008 @03:11PM (#24923557)
    The reason you defrag a hard disk is that the time to read a file is much less if the drive doesn't have to do random-access seeks while reading the file. SSDs perform well whether they need to seek randomly or not, so why would there be a need to defrag an SSD? I would think it would only wear out the drive faster.
  • by Anonymous Coward on Monday September 08, 2008 @03:16PM (#24923621)

    Replying to you, since you seem serious, as opposed to sibling.

    That's $600 per 80GB drive, with a minimum order of 1000.
    You can't buy a single drive for $600. Or at least, not from Intel.

  • by GreyWolf3000 ( 468618 ) * on Monday September 08, 2008 @03:21PM (#24923699) Journal
    Yeah, but the reason it speeds up mechanical hard drives is that your kernel can schedule I/O on multiple spindles, effectively parallelizing your I/O. Flash chips don't have to batch up a lot of transactions in memory and then block the process for long periods of time. Flash does not typically operate synchronously with the bus it's connected to, so you could get some speed benefit by accessing multiple banks in tandem, but probably not as much.
  • by chill ( 34294 ) on Monday September 08, 2008 @03:23PM (#24923723) Journal

    Yes, it would wear the disk out faster, but your original premise is flawed.

    Clustering locations would allow for accessing large chunks of data with one fetch, instead of lots of little fetches. If you're old enough, think back to the Blitter on the Amiga and moving contiguous chunks of memory as opposed to fragmented blocks.

    Remember, RAM can get fragmented just as badly as a hard drive.

  • by Gloy ( 1151691 ) on Monday September 08, 2008 @03:31PM (#24923843)
    System boot time is a function of many different factors, of which storage read and write speeds are only two.
  • by TheSunborn ( 68004 ) <mtilsted.gmail@com> on Monday September 08, 2008 @03:54PM (#24924177)

    You can't grow a file in the middle. There is no filesystem call that can do that.

    Fragmentation only happens if you append to a file, but that kind of fragmentation should not be a problem for an SSD, because all blocks (except the last) will be full, and an SSD doesn't read the 'next' block any faster than any other block.

  • by adisakp ( 705706 ) on Monday September 08, 2008 @04:00PM (#24924243) Journal
    You never want to defrag SSDs. It just wears out the disk.

    A good SSD has wear-leveling and write-combining techniques that keep the SSD "defragmented" automatically.

    And it doesn't matter if the FS clusters are far apart as long as they are close to the SSD's hardware cluster sizes or the SSD intelligently combines them (which is what I believe Intel is doing since they claim a write amplification of only 1.1).

    It's possible that the Samsung SLC chip stores data for the wear-leveling and write-combining operations which would remap the MLC in a non-fragmented way.

    BTW, let me give you a naive wear-leveling / write-combining algorithm. I'm sure Intel has a better one, because they've invested millions of dollars in research, and the one I'm about to present could be done by a CS101 student:

    1) You have a bit more than 80GB free for an 80GB drive (extra memory to take care of bad sectors, just like a normal hard drive, plus a small amount required for the wear-leveling / write-combining).

    2) You treat most of the storage as a ring buffer that consists of blocks on two levels: the native block size and a subblock size. The remaining storage (or alternate storage which may be the Samsung SLC chip on the MLC drives) is used to journal your writes and wear-leveling.

    3) You combine all writes, aligned to the subblock size, into a native block, write them out to the next free native block in the ring buffer, and keep a counter for the write to that block. If you run into a used block, you increment a counter (for wear levelling); if the counter is below a certain value, you skip to the next free block, otherwise you move the used block (which has been stagnant) to a more frequently written-to free block (which will now take less of a burden, since it's had a stagnant block moved into it).

    4) Any time you make a write, the new sectors are updated in the memory area used for journaling / wear-leveling / sector remapping.

    Assuming your reads can be done fairly quickly at the subblock level, it never matters if you have to "seek" for the reads and the drive won't fragment on writes because they are combined into native block sizes.
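
    For concreteness, here is a minimal sketch of that naive scheme in Python. Everything in it (the NaiveFTL name, the block and subblock sizes, the wear threshold) is made up for illustration, it omits the stagnant-block relocation from step 3, and it is not Intel's controller logic.

        NATIVE_BLOCK = 256 * 1024               # erase-block ("native block") size in bytes
        SUBBLOCK = 4 * 1024                     # write granularity exposed to the host
        SUBS_PER_BLOCK = NATIVE_BLOCK // SUBBLOCK
        WEAR_SKIP = 8                           # skip a block once it is this far above average wear

        class NaiveFTL:
            def __init__(self, n_blocks):
                self.n_blocks = n_blocks
                self.erase_count = [0] * n_blocks   # per-block erase counter (step 3)
                self.mapping = {}                   # logical subblock -> (block, slot) journal (step 4)
                self.pending = {}                   # write-combining buffer
                self.head = 0                       # next candidate block in the ring (step 2)

            def write(self, logical_subblock, data):
                """Journal one subblock write; flush once a full native block has accumulated."""
                self.pending[logical_subblock] = data
                if len(self.pending) >= SUBS_PER_BLOCK:
                    self._flush()

            def _pick_block(self):
                """Walk the ring, skipping blocks that are much more worn than average."""
                average = sum(self.erase_count) / self.n_blocks
                for _ in range(self.n_blocks):
                    candidate = self.head
                    self.head = (self.head + 1) % self.n_blocks
                    if self.erase_count[candidate] <= average + WEAR_SKIP:
                        return candidate
                return self.head                    # everything is evenly worn: just take the next one

            def _flush(self):
                """Combine pending subblock writes into one native-block write (step 3)."""
                target = self._pick_block()
                self.erase_count[target] += 1       # one erase per native-block write
                batch = list(self.pending.items())[:SUBS_PER_BLOCK]
                for slot, (logical, _data) in enumerate(batch):
                    self.mapping[logical] = (target, slot)  # remap the logical address (step 4)
                    del self.pending[logical]

    Because every flush goes to a fresh native block and the logical-to-physical map absorbs the "fragmentation", reads may land anywhere in the ring, which, as noted above, costs next to nothing on flash.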
  • by arth1 ( 260657 ) on Monday September 08, 2008 @04:12PM (#24924407) Homepage Journal

    Before rushing to buy these for database use, I would want a good look at MTBF values. Especially MTBF values for really heavy use, which may be completely different from estimated desktop use.

  • by BitZtream ( 692029 ) on Monday September 08, 2008 @04:14PM (#24924427)

    Write rates aren't THAT impressive, good but meh.

    Less heat depends on the device; I've seen plenty of HOT SSDs, presumably due to the density of silicon in them and their being first-generation devices.

    Better power consumption ... where? Every SSD I've seen lacks a power-saving mode, and in power-saving mode, as a general rule, mechanical drives are less hungry than SSDs.

    At this point in time they are really only compelling if you need fast seek times, or for use in a laptop where shock (head strikes) is a potential issue.

  • by Dancindan84 ( 1056246 ) on Monday September 08, 2008 @04:22PM (#24924555)

    It's running on 4 SCSI-320 Cheetah 32GB, 15K RPM drives in RAID 0.

    I hope you know how volatile RAID 0 can be. A problem with any single one of those drives will screw up the whole works until you can restore from a backup. I can understand wanting to avoid RAID 5/6 if there are a lot of writes to your DB, since write performance on those arrays is notoriously bad, and RAID 1 would double the hardware cost, but the ability to stay up and hot-swap in drives after a failure is priceless.
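
    To put a rough number on that exposure: with striping, the array is lost if any one member fails. The per-drive annual failure rate below is an assumed figure for illustration, not a spec for those drives.

        drive_afr = 0.03                            # assumed per-drive annual failure rate (illustrative)
        drives = 4
        array_afr = 1 - (1 - drive_afr) ** drives   # RAID 0: any single failure takes out the array
        print(f"{array_afr:.1%}")                   # roughly 11.5% per year, with no redundancy to absorb it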

  • Re:Oh Yeah? (Score:3, Informative)

    by sabre86 ( 730704 ) on Monday September 08, 2008 @05:13PM (#24925381)
    Maybe I'm being a bit pedantic, but in the dive bomber context, the SBD isn't the category of ship borne dive bombers, it's a specific one. SBD stands for Scout Bomber, Douglas -- aka, the Dauntless [wikipedia.org] -- in the pre-1962 Navy naming scheme [wikipedia.org].

    --sabre86
  • by dgatwood ( 11270 ) on Monday September 08, 2008 @07:11PM (#24926799) Homepage Journal

    Here's my concern in a nutshell:

    Assuming a degenerate workload, with a naive algorithm that never remaps existing data except when it is written, death is swift. Assume a 256 KB flash block. Assume a 4 GB flash device with 2% spare. Assume 70 MB/sec. transfer rate. Assume TCQ/NCQ so that you can queue up requests without waiting for the previous request to complete. At 2%, you have about 81.92 MB of spares, or about 328 spares. You have to erase a block containing 256KB at once (one entire flash block). Write random data on a single data block over and over without caching. At 70 MB/sec. divided by a 256 KB block, you can write 280 blocks per second. That comes to about 1.17 seconds to go through all of the spares once. With a 10,000 erasure limit, that means you destroy all the spares in 2.38 hours. At that point, no further writes can occur because erasing and rewriting a block in place is inherently unsafe. Obviously for a 60 GB disk, multiply the numbers by 15. Even with 100,000 cycle flash, one could kill a drive with a naive algorithm in about four months. Okay, so it wouldn't be quite that fast because you'd have to issue write cache flush instructions between each write, but you're in the ballpark.

    On the flip side, with a typical workload, a drive would likely last several years even with such a naive algorithm. This is why I'm concerned. It is quite possible for a company to implement a remarkably naive wear leveling algorithm and mostly get away with it except for a few unlucky people who end up with data loss. We saw this in the HD industry not too long ago with IBM claiming after the fact that their drives were not designed for continuous use. With such a history of reliability corner-cutting from storage vendors, I think there's good reason to expect better transparency from the flash drive vendors about how they are doing wear leveling, particularly if these products are expected to be used in enterprise installations as this drive supposedly is. Fool me once and all that....

    I won't even get into the question of how one can possibly achieve anything approaching a 1.1 write amplification rate short of building custom flash chips that allow per-page erasure.... Maybe for certain synthetic workloads, but not for a degenerate workload (e.g. write blocks sequentially with a stride length of the same size as (or larger than) the physical flash block size until you exceed the capacity of the write cache, rinse, repeat).... Otherwise, that seems at least an order of magnitude lower than is plausible. I'd have to see white papers explaining exactly how they're doing this miraculously good wear leveling before I'd trust any low-cycle-count SSDs in anything resembling a production server....
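
    For what it's worth, the degenerate-workload arithmetic above is easy to check. This just replays the figures quoted in the comment (4 GB device, 2% spare area, 256 KB flash blocks, 70 MB/s, 10,000-cycle flash); nothing vendor-specific is assumed.

        spare_bytes = 0.02 * 4 * 1024**3              # about 81.92 MB of spare area
        spare_blocks = spare_bytes / (256 * 1024)     # about 328 spare blocks
        blocks_per_sec = 70 * 1024 // 256             # about 280 block writes per second
        pass_seconds = spare_blocks / blocks_per_sec  # about 1.17 s to cycle through the spares once
        hours_to_exhaust = 10_000 * pass_seconds / 3600
        print(round(spare_blocks), round(pass_seconds, 2), round(hours_to_exhaust, 2))
        # Taken at face value these figures work out to roughly 3.25 hours; scale by 15
        # for a 60 GB device and by 10 for 100,000-cycle flash.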

  • by Anonymous Coward on Monday September 08, 2008 @07:24PM (#24926959)

    You should probably check out Texas Memory Systems. They sell a number of solutions to your problem.

  • by Anonymous Coward on Tuesday September 09, 2008 @01:02AM (#24929395)

    There is no concept of "modify a physical block" in flash devices. Most flash devices will write a new block whenever you modify even a byte in one. It's done partly for wear-leveling, and partly because flash can only erase whole groups of blocks or program a block at a time; there's no clearing a single block and writing it again in place.

    I know I'm not explaining it well. Just google JFFS2 and YAFFS2 and see how they work. I believe there is a good article from IBM.

    Ah! here it is - http://www.ibm.com/developerworks/library/l-flash-filesystems/

    -1tsm3
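
    As a toy illustration of that copy-on-write behaviour (names and sizes invented for the sketch; real controllers and flash filesystems are far more involved):

        BLOCK_SIZE = 128 * 1024                          # a typical NAND erase-block size (varies by part)

        flash = {0: bytes(BLOCK_SIZE), 1: None, 2: None} # physical blocks; None means pre-erased
        mapping = {0: 0}                                 # logical block 0 currently lives in physical block 0
        erased = [2, 1]                                  # pool of pre-erased physical blocks

        def modify_byte(logical_block, offset, value):
            """Change one byte by programming a whole new physical block (copy-on-write)."""
            old_phys = mapping[logical_block]
            data = bytearray(flash[old_phys])            # read back the entire old block
            data[offset] = value                         # change a single byte in RAM
            new_phys = erased.pop()                      # take a pre-erased block
            flash[new_phys] = bytes(data)                # program it in one shot
            mapping[logical_block] = new_phys            # remap; the old block is bulk-erased later

        modify_byte(0, 42, 0xFF)
        assert flash[mapping[0]][42] == 0xFF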
