
Intel Takes SATA Performance Crown With X25-E SSD

theraindog writes "We've already seen Intel's first X25-M solid-state drive blow the doors off the competition, and now there's a new X25-E Extreme model that's even faster. This latest drive reads at 250MB/s, writes at 170MB/s, and offers ten times the lifespan of its predecessor, all while retaining Intel's wicked-fast storage controller and crafty Native Command Queuing support. The Extreme isn't cheap, of course, but The Tech Report's in-depth review of the drive suggests that if you consider its cost in terms of performance, the X25-E actually represents good value for demanding multi-user environments."
  • by Enderandrew ( 866215 ) <enderandrew.gmail@com> on Monday November 24, 2008 @10:25PM (#25881127) Homepage Journal

    This just screams dedicated database storage.

    • by beakerMeep ( 716990 ) on Monday November 24, 2008 @10:36PM (#25881203)
      Is it just me or have we gone full-frontal-funnyfarm with the analogies and adjectives here?
      • by symbolset ( 646467 ) * on Monday November 24, 2008 @11:27PM (#25881611) Journal

        You be the judge [techreport.com]. I would consider a factor of 80x improvement in IO/s over the best HDD, and 2x your best competitor (yourself), "wicked-fast door blowing screams" if you're looking at transaction processing for a database or other IOPS-bound application. This is not the review that's overzealous about a 4% processor speed improvement. Stripe that across 5 or 10 of these bad boys and the upside potential is, um, noticeable? If we can't get a little enthusiastic about that, what does merit it? A flame paint job and racing stripes? A Ferrari logo? The next step up from here is RAMdisk. Yeah, it's not going to make Vista boot in 4 seconds. Is that the metric that's driving you?

        Capacity is still lacking at 32GB, but obviously they could expand it now, and 64GB will be available next year. Naturally, if they wanted to make a 3.5" form factor, they could saturate the bandwidth of the interface and stuff 320GB into a drive with no problem, courting the folks who can (and most definitely would) pay $10,000 for that premium product (HINT HINT). Obviously the price bites, but they can get it for this, so why not? Naturally, for challenging environments (vibration, rotation, dropping under use, space applications, heat) it's a big win all the way around. Isn't SATA 3.0 (6Gbps) due soon?

        I think I foresaw some of these improvements here some years ago. I'm glad to see them in use. If I were to look forward again, I would say that it might be time to abandon the euphemism of a hard disk drive for flash storage, at least for high-end devices. You can already reconfigure these chips in the above-mentioned 320GB drive to saturate a PCIe 2.0 x4 link (20 gigatransfers/s), which makes a nice attach for InfiniBand DDR x4. The SATA interface provides a useful abstraction, but the useful part is that it is only an abstraction -- you don't need to keep the cylinder/block/sector metaphor once you accept the utility of the abstraction.
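
        Rough numbers for the striping and interface-saturation claims, using the drive figures from the summary; the HDD IOPS baseline and the usable-bandwidth figures are assumptions, so treat this as back-of-the-envelope only:

          # Back-of-the-envelope: what striping N of these drives could deliver,
          # and where common interfaces top out. Illustrative numbers only.
          DRIVE_READ_MBPS = 250        # X25-E sequential read, from the summary
          HDD_IOPS = 300               # assumed random IOPS for a fast mechanical drive
          SSD_IOPS = 80 * HDD_IOPS     # "a factor of 80x improvement in IO/s"
          SATA2_MBPS = 300             # ~usable bandwidth of a 3Gbps SATA port
          PCIE2_X4_MBPS = 4 * 500      # PCIe 2.0: ~500MB/s usable per lane, per direction

          for n in (1, 5, 10):
              print(f"{n:2d} drives: ~{n * DRIVE_READ_MBPS} MB/s sequential read, "
                    f"~{n * SSD_IOPS} random IOPS")

          print(f"one drive uses ~{DRIVE_READ_MBPS / SATA2_MBPS:.0%} of its SATA port; "
                f"a PCIe 2.0 x4 link (~{PCIE2_X4_MBPS} MB/s) absorbs about "
                f"{PCIE2_X4_MBPS // DRIVE_READ_MBPS} of them reading flat out")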

        • by afidel ( 530433 )
          Hmm, at 4K DB IOPS for $719, that compares very favorably with my SAN's 250K DB IOPS for ~$250K. Now, for the same 7.5TB of RAID 10 storage it would cost $337,050 without controllers, so the SAN still wins out, but things are getting very interesting. I would expect to see drives like this make it into a high-performance storage tier from SAN vendors very soon if they don't already have such an option in their lineup.
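
          The cost-per-IOPS arithmetic behind that, using only the figures quoted in this sub-thread:

            # $/IOPS for one X25-E vs. the SAN, plus the RAID 10 capacity math.
            ssd_price, ssd_iops = 719, 4_000        # "4K DB IOPS for $719"
            san_price, san_iops = 250_000, 250_000  # "250K DB IOPS for ~$250K"
            print(f"SSD: ${ssd_price / ssd_iops:.2f} per IOPS")   # ~$0.18
            print(f"SAN: ${san_price / san_iops:.2f} per IOPS")   # ~$1.00

            # Capacity is where the SAN still wins: 7.5TB of RAID 10 means 15TB raw,
            # i.e. roughly 469 of these 32GB drives, which is about the $337,050 quoted.
            drives = (7.5e12 * 2) / 32e9
            print(f"drives for 7.5TB RAID 10: ~{drives:.0f}, roughly ${drives * ssd_price:,.0f}")
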
          • by symbolset ( 646467 ) on Tuesday November 25, 2008 @12:13AM (#25881941) Journal

            Now plug these things into your SAN -- because they plug right in -- and do the math again. 50% price premium for 80x the aggregate IOPS and 10x the bandwidth? Your SAN needs new connectors to handle the speed.

            This is a slam dunk. Admit it.

            • by afidel ( 530433 )
              Depends on what I need, but it does allow for an interesting mix. Throw 300GB 10K disks for bulk storage fronted by a couple GB of write cache and these drives for things like transaction logs and you are looking at a real winning combination. Might bring down price too, since you will need so many fewer spindles for the storage, and should definitely bring down power consumption. The hardest part would be new installations where you don't know what your biggest users will be and what your mix will be. One of
              • Re: (Score:3, Informative)

                by DXLster ( 1315409 )

                "our Lotus Notes servers are almost as rough on the SAN as the DB servers, huge numbers of IOPS and almost as much storage."

                I have no idea if you fall into this category, but many, MANY Domino administrators who implement with a SAN do so in a suboptimal fashion. A Domino server should have considerable resources on local drives even if you are committed to a SAN for your primary data storage. All system NSFs should be stored on local drives, as well as your transaction log (you are using transaction logs

                • by afidel ( 530433 )
                  We don't HAVE local drives; we use HP 1U servers with HBAs and boot from SAN. It's much cheaper to use a pizza box with SAN storage than to build a big beefy local storage server, especially when we built ours 2.5 years ago. SFF drives have increased local storage spindle counts pretty significantly though, so the outlook might be different today. It's also significantly easier to manage growth on the SAN; we unfortunately lost the war on quotas, so planning for growth is basically analyzing trends and grow
              • by drsmithy ( 35869 )

                Throw 300GB 10K disks for bulk storage fronted by a couple GB of write cache and these drives for things like transaction logs and you are looking at a real winning combination.

                Like this [sun.com] ?

                One of our biggest surprises is that our Lotus Notes servers are almost as rough on the SAN as the DB servers, huge numbers of IOPS and almost as much storage.

                Sounds like your Lotus servers might need some more RAM ?

                • by afidel ( 530433 )
                  Nope, we generally run with a couple GB free, email is just a high I/O application. I've always tried to build my email servers like a DB server (separate raid 10's for email and logs) because modern email servers ARE database servers.
        • $ per GB is the deal killer here. Most of our customers balk at spending more on regular drive space when you are talking about terabyte DBs. I could only see these being good for maybe log or temp space until the price comes way down.

    • by Jah-Wren Ryel ( 80510 ) on Monday November 24, 2008 @10:48PM (#25881297)

      This just screams dedicated database storage.

      NO, THIS JUST SCREAMS DEDICATED DATABASE STORAGE!!!

      filter
      fodder

    • My first thought is builds. I have to do Windows CE 5.0 builds all the time and they're almost entirely I/O bound. I've also compiled XFree86 before at another job. It seems like the really large compiles are mostly I/O bound. The CPU doesn't peg, but the hard drive light stays lit.

      Something like this would be fantastic for development. I really want one.

      • by GigaplexNZ ( 1233886 ) on Monday November 24, 2008 @11:22PM (#25881567)
        Try using a RAM disk [wikipedia.org].
        • Re: (Score:3, Informative)

          by cheater512 ( 783349 )

          I often do compiles (Gentoo) on a ram disk.

          Linux desktop systems don't use anywhere near the amount of RAM modern systems have, so just make a tmpfs mount and the compiles fly. :)

          • Re: (Score:3, Informative)

            by Godji ( 957148 )
            I'm a Gentoo user too. My CPU and hard drive are decent (Core 2 Duo 3.33 GHz, Western Digital 500 GB RE2). I build on the root filesystem. I've never seen an I/O-bound build; it's always the CPU. What are you people talking about?
            • I mainly do it on my laptop, so the HDD isn't as fast as a desktop's.

              Still, it does make a reasonable difference, especially for big compiles. Try it out for yourself.
              Merging is significantly faster since it's copying RAM to the hard drive instead of hard drive to hard drive.

              • by Godji ( 957148 )
                I've used it on a laptop before, and I didn't notice I/O boundness either. Also, for large builds, the merge time is insignificant compared to the compilation time.
          • I often do compiles (Gentoo) on a ram disk.

            So do I, but it's built-in and my system refers to it as "cache". Why not let the OS decide what to store in RAM? It's really good at that kind of stuff.

    • Did you even read the summary?

      We've already seen Intel's first X25-M solid-state drive blow the doors of the competition

      Oh, gimme more of that door knob!

    • Re: (Score:3, Interesting)

      by Eunuchswear ( 210685 )

      So why the fuck isn't it SAS?

  • Considering I have a couple of HP DL380 G5s with 2.5" 72GB 15K SAS drives that each set me back about $600 (after education discount), the cost of this drive ($738.84) with a truckload of performance to boot is a heck of a deal.

    • Re: (Score:2, Insightful)

      by Hal_Porter ( 817932 )

      Yup, and it uses SLC chips so it has an enormously long lifetime, around 70 years according to Intel.

      • 70 years of doing what exactly?
        It's entirely workload-dependent.

      • by afidel ( 530433 )
        That's at a measly 100GB of writes per day; in a DB server it could easily see 10-100x that, so between 7.5 years and 8 months.
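
        The scaling behind that estimate, anchored only to the roughly 72-year-at-100GB/day figure Intel quotes (cited elsewhere in this thread):

          # Endurance scaling: quoted lifetime at 100GB/day, divided by the write load.
          years_at_100gb_per_day = 72
          for multiplier in (1, 10, 100):   # "it could easily see 10-100x that"
              years = years_at_100gb_per_day / multiplier
              print(f"{multiplier:3d}x load: ~{years:.1f} years (~{years * 12:.0f} months)")
          # -> 10x gives ~7 years, 100x gives well under a year (8-9 months)
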
        • Re: (Score:2, Insightful)

          Since you seem to know about this, how long would a normal disk last in that environment?

          • by afidel ( 530433 ) on Tuesday November 25, 2008 @02:35AM (#25882889)
            Well, I just checked, and a hot disk in my SAN has done a bit over 15M 128K writes in the last 3 weeks, so about 1.92TB in 21 days, or close to 100GB per day. I have replaced 3 drives out of 150 in the last 2.5 years (well, 5 total, but 2 were precautionary from the SAN vendor when trying to troubleshoot another issue). This is a pretty lightly utilized SAN; we need it more for capacity than pure I/O. I can see a busy installation doing 10x what we do without even pushing the same hardware to its limit.
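
            Sanity-checking that back-of-the-envelope figure with the same numbers, in decimal units:

              # 15M writes of 128K each over 21 days, as quoted above.
              total_bytes = 15e6 * 128e3
              per_day = total_bytes / 21
              print(f"~{total_bytes / 1e12:.2f} TB total, ~{per_day / 1e9:.0f} GB/day")
              # -> ~1.92 TB total, ~91 GB/day, i.e. "close to 100GB per day"
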
        • by Splab ( 574204 )

          It sure depends on your environment; my database would love these drives. We are going to be nowhere near 100GB of writes a day for a long time, but the massive I/O increase would come in handy for our reads.

    • SAS has taken over SCSI's role as the drive of choice to sell to customers who are willing to overpay for the cachet of owning a premium product.

      You could have guessed that from the full name: Serial Attached SCSI. I guess if you can't buy a new media technology from Sony, this'll work.

      • 15K drives exist for a reason (at least they did until now), and they're only available in SAS or FC. I suspect the SAS version is actually the cheap one.

        • And neither one is as reliable or has the IOPS of three standard SATA drives that cost 50% of the price for 10x the storage.

          Math. It's a wonderful thing. Use it with your salesman.

          • by Splab ( 574204 )

            So is ignorance apparently.

            SAS is duplex, SATA isn't. I'll take one SAS drive over 3 SATA drives any time when it comes to performance.

            • Re: (Score:3, Interesting)

              by symbolset ( 646467 )

              Here's where I would go with a useful link [techworld.com]. "Duplex" doesn't necessarily mean what you think it means in this context. Using the term bare is misleading, perhaps as the marketing person who invented the meme intended it to be.

              "FYI, SAS full duplex means that one channel can be used for data traffic and the other channel can be simultaneously used for command traffic. Both channels cannot be simultaneously used for data. So when Mr Batty says 6Gb/s is available and that's 4x SATA I, he is technically correct, but end users will not see 4x performance."

              If you can't sell on the features, it's ok for some people to make stuff up when they're selling. But not us, here, ok? Let's be honest with one another here around the water cooler.

          • by drsmithy ( 35869 )

            Math. It's a wonderful thing. Use it with your salesman.

                Have you used your "math" to figure out how much more tripling the rack space requirement and doubling the power consumption will cost?

            • There are actually a few compelling use cases... being short of space is one of them. Server consolidation has freed up a lot of rack space lately, though, so most people have the space.

              And we're talking about drives that burn under one watt running full out. How many do your SAS 15K RPM drives burn?

              • by drsmithy ( 35869 )

                And we're talking about drives that burn under one watt running full out. How many do your SAS 15K RPM drives burn?

                Which SATA drives use less than a watt *at all*, let alone "full out" ?

    • It is. But those HP DL380 G5 systems were a silly design. For the same price, you can put in 6 3.5" drives in other layouts and get up to 3 times the overall storage with no perceptible speed loss. Those G5s are too much price for too little performance: if I'm going to invest in 8x SAS drives, and spend the electricity and cooling on them, I want to get some significant storage space from it.
  • Seems to me the *target* for this drive would be the same buyer as 15K SAS/SCSI drives. Those are suspiciously absent from the tests...
    • by Anonymous Coward on Tuesday November 25, 2008 @01:21AM (#25882361)

      Anyone who tells you SSDs are a replacement for disks is at best talking about some niche workloads and at worst trying to sell you a line of the old BS. SSDs render 15k rpm disks obsolete all right, but not in the way you're suggesting.

      To get the same capacity as you'd get out of those hot, expensive disks - which is not a lot given what you're paying for them - you'd need to spend much more, and you'll likely find that performance levels off quickly when you saturate your HBAs and/or CPUs and/or memory bus and/or front-end connectivity. Much better to combine a few of these with slow, cool, cheap disks to maximise both performance and capacity at a lower price than the 15k disks.

      Let's take an example.

      Suppose you have 14 146GB 15k rpm disks. They cost you $2000 apiece, or $28000. Each one gives you 300 IOPS, for a total of 4200 (we'll ignore the costs and inefficiencies of the hardware RAID dinosaur you're probably using these with; if we didn't, you might start to feel stupid about it). So you spent about $6.67/IOPS or $14/GB, plus the power and cooling to keep those disks spinning. Not cheap. Not particularly fast. Not really great in any way.

      Suppose instead that you want to replace them with these 80GB SSDs. You'll probably pay your vendor around $1400 for them (figure 60% margin like they're getting on those FC drives you've been buying from them). Now you need 26 of them to get the same capacity, costing you $36400. But you get about 12000 read IOPS each (write latency suspiciously omitted from this fluff piece, but we'll dubiously assume it's similar - it almost certainly isn't anywhere close) for a total of 312000. Too bad your HBA can do only about 140000, so you'll max out there on random reads. And if we're talking about block sizes larger than 512 bytes, latency will be higher. So you've spent $0.26/IOPS, which is great, and you've saved money on operating costs as well. But you actually spent a lot more in total - $18/GB - and woe unto you if you need more capacity; demand for storage tends to double every 12-18 months, and adding in 80GB chunks at $18/GB is going to hurt. Sure, prices will drop, but not fast enough to be competitive with the multi-TB disks we're already seeing today.

      Finally, suppose instead that you buy 2 of these SSDs to act as log devices and then buy 4 1TB 7200rpm SAS disks for $350 each. You've spent $4200 and you've gotten 24000 IOPS. That's $0.18/IOPS or about $1/GB, and you've actually spent much less in absolute terms as well. You're still spending only a tiny fraction in power and cooling of what you were spending on the original all-disk solution, and you've got twice as much total storage capacity. Best of all, you can now grow your storage in two dimensions, not just along a line fixed by the disk vendors. Need more IOPS? Add another SSD or two. Need more capacity or streaming bandwidth? Add some more rotating rust.
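
      The same three scenarios, reduced to a quick sketch using only the prices, capacities and IOPS figures assumed above (the HBA cap is applied where it binds, and the hybrid's spinning disks are counted for capacity only, as in the text):

        # Three storage mixes: all 15K disks, all SSDs, and 2 SSDs + 4 big 7200rpm disks.
        HBA_IOPS_CAP = 140_000

        def summarize(name, parts):
            cost = sum(price * n for price, _, _, n in parts)
            gb = sum(size * n for _, size, _, n in parts)
            iops = min(sum(io * n for _, _, io, n in parts), HBA_IOPS_CAP)
            print(f"{name:8s} ${cost:6d}  {gb:5d} GB  {iops:6d} IOPS  "
                  f"${cost / iops:.3f}/IOPS  ${cost / gb:.2f}/GB")

        #                  (price $, GB, IOPS, count)
        summarize("15K rpm", [(2000, 146, 300, 14)])
        summarize("all SSD", [(1400, 80, 12_000, 26)])
        summarize("hybrid",  [(1400, 80, 12_000, 2), (350, 1000, 0, 4)])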

      This approach gives you the best of all worlds, something you can't get by blindly replacing all your disks with SSDs. In other words, you get to pick the spot along the performance/cost/capacity curve that's right for your application. Using only SSDs, only slow disks, or only expensive disks doesn't do that. Upon a moment's thought, this should be obvious: when your computer needs to perform better, adding DRAM is usually the best way to make that happen. When it needs to store more data, adding disks is the way to go. You don't add disks to improve performance (one hopes... if you need to do that, your storage vendor is probably taking you to strip clubs) and you don't add DRAM to increase storage capacity. This is no different. Flash occupies an intermediate spot in the memory hierarchy and has to be thought of that way. It's exciting to see the prices fall and capacities rise like they have, but I don't think a lot of people really understand yet just how SSDs are going to change things.

  • I still think the biggest deterrent is lifetime. I want to buy an Aspire One, but I'm pretty disappointed at some of the things that I'll have to do with the SSD. No swapping, no journaling, no logging or timestamps. Sounds like it's still a step backwards to me. Still needs a little more time.
    • by elashish14 ( 1302231 ) <profcalc4@@@gmail...com> on Monday November 24, 2008 @10:50PM (#25881319)
      Nevermind. "Even with 100GB of write-erase per day, it'll take more than 72 years to burn through the drive." I should RTFA. But still, much room for improvement.
      • A 72 year lifespan? How much more improvement do you need? It seems like price is the only remaining hurdle for SSDs.

      • by catch23 ( 97972 )

        Why do you need improvement with that? I assume you'll probably replace the drive in 10 years... 10 years ago, consumer PCs used 8GB hard drives. I don't see many 8GB hard drives lying around today. Assuming you'll replace it in 10 years, you can do 700GB of write-erase per day, which means you could reflash the entire drive 20 times a day... How often do you do that on your Aspire One?

    • by Anpheus ( 908711 )

      Why no swapping, journaling, logging or timestamps? Wear leveling is pretty standard fare for SSDs, and AFAIK at least three of those won't write a significant amount of data to the disk.

  • This costs $22/GB, and has a write speed of 170 MB/s. A 2GB stick of DDR2-800 costs $12-$20/GB, and has a speed of 6400MB/s. So we have a case where slow storage actually costs more than much faster (but less permanent) storage. I wonder how much a couple extra batteries would cost...
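
    Rough ratios from those figures (taking the midpoint of the RAM price range; purely illustrative):

      # DRAM vs. this SSD, per the numbers above: price per GB and write bandwidth.
      ssd_price_per_gb, ssd_write_mbps = 22, 170
      ram_price_per_gb, ram_mbps = 16, 6400      # midpoint of the $12-$20/GB range
      print(f"RAM: ~{ram_mbps / ssd_write_mbps:.0f}x the write bandwidth "
            f"at ~{ram_price_per_gb / ssd_price_per_gb:.1f}x the price per GB")
      # -> roughly 38x the bandwidth for ~0.7x the price, volatility aside
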
    • Re: (Score:3, Interesting)

      by ltmon ( 729486 )

      It's pretty much apples and oranges. Even with batteries (which I wouldn't trust) RAM has different characteristics in power consumption, heat output, storage density etc. By the time you address these challenges you'd have... an SSD.

      Plus the SSDs get their long life from having more raw storage than advertised, and dynamically shutting down dead areas and bringing in reserve areas as it ages. Your sums would have to take into account the cost of this "hidden" storage.

      As an aside the best use for these thin

      • Why would the calculations have to take into account the hidden storage?
        That's overhead for using SSD technology. It's unnecessary for any other storage.

        Adding RAM purely for disk cache will increase performance far more than using SSDs.
        Cheaper, too.

        • Re: (Score:3, Informative)

          by wisty ( 1335733 )

          Yes, but some legacy operating systems can only address 4GB of RAM (including the graphics card). Also, some hardware may not be able to take more RAM. I can't think of any machine where 64GB of RAM is very cheap.

      • Comment removed based on user account deletion
      • by vidarh ( 309115 )
        You can buy systems that use DRAM with battery backup and present it as a "disk". Some use standard interfaces, some use PCI cards to get higher throughput. They all have one thing in common: they are far more expensive than SSDs. In fact, one company quoted me $250,000 for a 64GB unit.

        There are cheaper ones, but I don't know of any that can compete with SSDs on price - if there had been, more people would've been using them as hard disk replacements. As it is, the primary market for these units are ext

  • Let's see...$720 [newegg.com] for 32GB ($22/GB) versus $278 for 256GB [newegg.com] ($1/GB.)

    Keep in mind that you could buy two of those 256GB drives, mirror them, and exceed (in all likelihood) the performance of the Intel drive, and have eight times as much storage. Since reliability is pretty unproven, having them in a mirror means your ass is suitably covered.

    The absolute lowest storage density (SAS doesn't come in anything less than 36GB, and 300GB is the top-end) at $22/GB, when $4/GB is the norm for SAS drives (that's a p

  • by phr1 ( 211689 ) on Monday November 24, 2008 @11:11PM (#25881475)
    PC-6400 RAM is around 15 dollars a GB now, and the 6400 stands for MB/s, i.e. RAM is over 20x faster than this flash drive and has no write-wear issues or random-write slowness. The only thing wrong with it is volatility, but in an enterprise environment you can use a UPS and/or maintain a serial update log on a fast hard disk or RAID (since the log is serial, the flash drive's ultrafast seek advantage doesn't apply). There is just a narrow window where this $21/GB 32GB flash drive is really cost-effective.
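
    A minimal sketch of that serial-update-log idea: all data lives in RAM, every update is appended and fsync'd to a plain log file on a cheap disk or RAID, and the log is replayed after a power loss. The file name and record format here are made up for illustration:

      import json, os

      LOG_PATH = "updates.log"   # hypothetical log file on an ordinary hard disk / RAID

      class RamStoreWithLog:
          """In-memory key/value store made durable by a serial append-only log."""

          def __init__(self):
              self.data = {}
              if os.path.exists(LOG_PATH):      # replay the log after a crash or power loss
                  with open(LOG_PATH) as f:
                      for line in f:
                          rec = json.loads(line)
                          self.data[rec["k"]] = rec["v"]
              self.log = open(LOG_PATH, "a")

          def put(self, key, value):
              # Sequential append + fsync: no seeks, so a plain disk keeps up.
              self.log.write(json.dumps({"k": key, "v": value}) + "\n")
              self.log.flush()
              os.fsync(self.log.fileno())       # don't acknowledge until it's on the platter
              self.data[key] = value            # all reads are served straight from RAM

          def get(self, key):
              return self.data.get(key)

      store = RamStoreWithLog()
      store.put("hello", "world")
      print(store.get("hello"))
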
    • by dark_requiem ( 806308 ) on Monday November 24, 2008 @11:28PM (#25881613)
      Products that use RAM as the storage media have been around for years. They're exactly what you're describing: a few standard DDR DIMMS and a battery on a PCI card, usually. However, no one in an enterprise environment would actually trust data to such a device, and they never really took off. Home users don't generally have the power and data backup capacity to safely use such a device (and not even the most hardened masochist wants to reinstall or restore everything whenever a breaker goes), and enterprise users can't tolerate the risk level. Sure, you can have backup power, but the risk of losing data and downtime restoring it just isn't tolerable in most enterprise environments.
      • by Splab ( 574204 )

        Actually, RAM-SAN has addressed these issues; the only thing remaining is the price tag. Think they come in at around 430,000 IOPS, with full drive backup plus batteries in a single "box". Would love to get one of those for my database.

        • by Splab ( 574204 )

          Ok, got my facts wrong. 3.2 Million IOPS, 24GB/s sustainable random data access.

          And a linky:
          http://www.superssd.com/products/tera-ramsan/ [superssd.com]

          • Yup, violin [violin-memory.com] basically sells the same thing - in a yellow box, too.
            The problems with these boxes, until recently, have been price and (obviously) persistence.

            Since data is never really persisted you'd only buy them in addition to a traditional SAN (or SSD nowadays), not as a replacement.
            When you have the dough you can do interesting things with them, though. I know a company that does most of their transaction processing on violins (financial sector, sick throughput) and uses spindles in a write-behind fashi

            • by Splab ( 574204 )

              You should read up on Tera Ram-San - the data is indeed persistent; it even comes with an internal backup, making sure it can dump all data to its internal discs before shutting down.

              • ...unless it fails, I suppose?

                Admittedly I haven't read up on them, but most people I know wouldn't be comfortable with such a "persist on shutdown" option - because the interesting scenario is when the box doesn't get a chance to shut down.

                • by Splab ( 574204 )

                  Well, for the box not to get a chance to shut down would require something very terminal happening, and in that case you are going to have to run down and grab your backups anyway.

                  Also, as I recall, the system will periodically flush everything to discs so it doesn't have to make the full write in case of emergencies.

                  • Flushing to disk imho can not work so well when the box is loaded with more IOPS than the spindles can handle. It's the same problem that every database server has (RAM vs WAL vs tablespace).

                    And about the shutdown issue: when you're talking about a scale where such RAM-based SANs get serious consideration, unknown risk factors like "terminal happenings" usually cannot be tolerated.
                    Building in enough safeguards to make such a failure sufficiently unlikely is usually more expensive than buildi

                    • Flushing to disk imho can not work so well when the box is loaded with more IOPS than the spindles can handle.

                      Most high-IOPS systems are high because of a large number of random reads. The spinning disk would only need linear writes, which should make it easy to sustain maximum transfer rates. That Tera-RamSan appears to use 128GB modules, so each would include its own spinning backup disk. Modern disks can easily sustain 100MB/s; a full dump would take about 20 minutes.

                      I suspect they would at least implemen

  • by nitsnipe ( 1332543 ) on Monday November 24, 2008 @11:18PM (#25881523)

    What happens when the read-write cycles on this run out?

  • Weak test system (Score:5, Interesting)

    by dark_requiem ( 806308 ) on Monday November 24, 2008 @11:18PM (#25881525)
    I would have liked to see them test this drive in a much more powerful system. I mean, a P4 with 1GB RAM, and a fairly dated chipset (955X) as the SATA controller? No one is going to put a drive like this in a system that old. I'd guess that we might see different results on a more powerful system. At some point in those tests, other components of this fairly slow (by today's standards) machine could become the bottleneck. Throw some serious power behind it, and you can be sure that you're not bottlenecked, and the full power of the drive shows. Can't say for sure if this is actually the case, as I don't have a drive to test, but it's a definite possibility. Hopefully someone else does a similar review with a more powerful testbed.
  • NCQ? (Score:3, Interesting)

    by melted ( 227442 ) on Monday November 24, 2008 @11:59PM (#25881851) Homepage

    Why the heck would a drive that has uniform, low-latency random access even NEED NCQ? NCQ was designed to optimize the seek order in mechanical drives with heads.

    • Re:NCQ? (Score:4, Informative)

      by Anonymous Coward on Tuesday November 25, 2008 @12:31AM (#25882019)
      From the article: "The storage controller is an Intel design that's particularly crafty, supporting not only SMART monitoring, but also Native Command Queuing (NCQ). NCQ was originally designed to compensate for the rotational latency inherent to mechanical hard drives, but here it's being used in reverse, because Intel says its SSDs are so fast that they actually encounter latency in the host system. It takes a little time (time is of course relative when you're talking about an SSD whose access latency is measured in microseconds) between when a system completes a request and the next one is issued. NCQ is used to queue up to 32 requests to keep the X25-E busy during any downtime between requests."
    • Re: (Score:2, Informative)

      by Anonymous Coward

      read here: http://techreport.com/discussions.x/15374

      Since we had her cornered, we took the opportunity to quiz Huffman about a few other matters. One of those was the interaction of Native Command Queuing and SSDs. We're familiar with NCQ as a means of dealing with the seek and rotational latency inherent in mechanical hard drives, but wondered what need there was for NCQ with SSDs. (Intel's just-announced SSDs have NCQ listed prominently among their specifications.) She said that in the case of SSDs, NCQ has the primary benefit of hiding latency in the host system. If you look at a bus trace, said Huffman, there's quite a bit of time between the completion of a command and the issuance of another one. Queuing up commands can keep the SSD more fully occupied.
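
      A toy model of why queuing helps even with no seek latency: with one outstanding command, every host-side gap stalls the drive; with 32 queued, the drive stays busy. The service time and host gap below are made-up illustrative numbers, not Intel figures:

        # Toy throughput model: IOPS with and without command queuing.
        service_us = 85      # assumed per-request time inside the SSD (microseconds)
        host_gap_us = 60     # assumed host-side gap between completing one command
                             # and issuing the next

        iops_qd1 = 1e6 / (service_us + host_gap_us)   # the drive idles during every gap
        iops_qd32 = 1e6 / service_us                  # 32 queued requests hide the gap
        print(f"QD=1:  ~{iops_qd1:,.0f} IOPS")
        print(f"QD=32: ~{iops_qd32:,.0f} IOPS  (~{iops_qd32 / iops_qd1:.1f}x)")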

  • Ooh, now that could be a dealmaker in the server room. With RAID-5 reaching its limits for magnetic media, a rack of these could be a viable replacement.

    Of course a server room has different priorities to the average gamer:

    1. Reliability
    2. Reliability
    3. Capacity
    4. Price
    5. Speed

    • Re: (Score:2, Insightful)

      by troll8901 ( 1397145 )

      With RAID-5 reaching its limits for magnetic media, a rack of these could be a viable replacement.

      * bracing self for long discussion on RAID levels, file systems, and a certain Unix OS *

    • by vidarh ( 309115 )
      RAID5 is not for performance but for reliability, and it's only "reaching its limits" for large-capacity systems, where the time to rebuild the RAID after a drive failure is getting close to the point where the risk of a second drive failure during the rebuild becomes an issue.

      These drives are intended for high-performance setups.

      It's not RAID vs. these drives - nothing prevents you from using them in RAID setups, and most people using them in servers probably will.
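
      A rough sketch of the rebuild-window arithmetic behind that "reaching its limits" worry; drive size, rebuild rate, array width and failure rate are all assumptions picked for illustration:

        # Rough RAID 5 rebuild-window risk estimate (illustrative numbers only).
        drive_tb = 1.0              # assumed capacity per drive
        rebuild_mb_per_s = 30       # assumed rebuild rate on a busy array
        drives_in_array = 8         # assumed array width
        annual_failure_rate = 0.03  # assumed AFR per drive

        rebuild_hours = drive_tb * 1e6 / rebuild_mb_per_s / 3600
        # Chance that one of the remaining drives fails before the rebuild finishes,
        # assuming independent failures spread evenly over the year (8766 hours).
        p_second_failure = (drives_in_array - 1) * annual_failure_rate * rebuild_hours / 8766
        print(f"rebuild ~{rebuild_hours:.1f} h, second-failure risk ~{p_second_failure:.2%}")
        # Bigger drives stretch the rebuild window, which is exactly the concern above.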

  • I believe you meant to say, ``blow the doors off the competition,'' but let's just say the "cost" of those drives ``blows chunks.''

    Extreme for enterprise

    Solid-state drives use either single-level or multi-level cell flash memory. The former stores one bit per memory cell (a value of 0 or 1) while the latter is capable of storing two bits per cell (with possible values of 00, 01, 10, and 11). Obviously, MLC flash has a significant advantage on the storage density front. However, that advantage comes at th

  • It's the late 60s and a groupie is invited backstage after a particularly mega concert featuring all the great bands of the day. After a while, a bit of pot has been smoked, some tabs dropped, and plenty of booze swigged, and so things start to swing. The groupie first goes down on Ray Manzarek, then Jim Morrison, and finally Rob Krieger and John Densmore get theirs. Groupie's not done though, and is just getting started on Jimi Hendrix when Michael Caine bursts in, and shouts...

    "Oi! You're only supposed
  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday November 25, 2008 @05:18AM (#25883895) Homepage

    The earlier model of Intel SSD had some serious performance degradation [matbe.com] after a few hours of heavy use. (The article is in French, but it says that after a ten-minute torture test with IOmeter writing small blocks to the drive, and even after waiting an hour for the drive to 'recover', performance drops by about 70%.) I wonder if they have fixed this bug with the new model?

  • by bconway ( 63464 ) on Tuesday November 25, 2008 @09:10AM (#25885383) Homepage

    We've already seen Intel's first X25-M solid-state drive blow the doors of the competition, and now there's a new X25-E Extreme model that's even faster. This latest drive reads at 250MB/s, writes at 170MB/s

    Yet, five articles down the Slashdot homepage:

    Samsung said it's now mass producing a 256GB solid state disk that it says has sequential read/write rates of 220MB/sec and 200MB/sec, respectively.

    I'm pretty sure improved write speed is the part that people are interested in with SSDs these days.

  • by zrq ( 794138 ) on Tuesday November 25, 2008 @09:27AM (#25885605) Journal

    From the techreport article :

    NCQ was originally designed to compensate for the rotational latency inherent to mechanical hard drives, but here it's being used in reverse, because Intel says its SSDs are so fast that they actually encounter latency in the host system.

    Is it time to look at connecting these chips directly to the motherboard, avoiding the added complexity of driving what is essentially a block of memory via a serial interface designed to control spinning discs? If the SLC memory chips were mapped into the main memory address space, it should be possible to make them look like a 32G or 64G (NV)RAM drive on a Unix/Linux system. Mount '/' and '/boot' on the (NV)RAM drive and install the OS on it. Presto - very fast boot and load times. You can still use traditional spinning disc(s) for large data, mounted as separate data partitions.

    It would need some thought as to which parts of the filesystem went on spinning disc and which parts went on the (NV)RAM partition. But that is why Unix/Linux has all of the tools for mounting different parts of the filesystem on different partitions. Back in the olden days, most systems had a combination of small fast(ish) discs and big(ish) slow discs, and tweaking fstab to mount different parts of the filesystem on different discs was a standard part of the install process. Most desktop systems now have one huge disc, and the standard Linux install dumps everything on one big '/' partition, but all the tools for optimizing the partition layout are still there.

    How about an ultra-quiet desktop workstation with no moving parts, the OS installed on (NV)RAM disc, and user data dragged across the network from a fileserver (e.g. NFS-mounted /home)?

  • ZFS in recent Solaris Nevada and FreeBSD CURRENT supports a feature called L2ARC, the Level 2 Adaptive Replacement Cache. This is basically the ZFS filesystem/metadata cache, backed by so-called cache devices.

    So, you can get your 32GB SSD, shove it in front of your n-TB ZFS array, and it'll use it to help accelerate random reads. 32GB of storage is a bit feeble, but 32GB of cache... that's rather compelling, especially if your storage is otherwise backed by cheap and cheerful 7200RPM disks.

  • by dh003i ( 203189 ) <dh003iNO@SPAMgmail.com> on Tuesday November 25, 2008 @10:37AM (#25886535) Homepage Journal

    The Fusion IOdrive is faster... about 510MB/s according to DVNation. At $30/GB, that's not bad. Granted, the Intel one is $22/GB, but the IOdrive has about twice the performance and is only priced at about 1.4x the price per GB of the Intel SSD.
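
    Putting the two sets of numbers side by side (figures as quoted: $30/GB and ~510MB/s for the IOdrive, $22/GB and 250MB/s reads for the X25-E):

      # Price per GB vs. throughput, Fusion IOdrive vs. Intel X25-E, per the figures above.
      iodrive_price_per_gb, iodrive_mbps = 30, 510
      x25e_price_per_gb, x25e_mbps = 22, 250
      print(f"IOdrive: ~{iodrive_price_per_gb / x25e_price_per_gb:.1f}x the price per GB "
            f"for ~{iodrive_mbps / x25e_mbps:.1f}x the quoted throughput")
      # -> ~1.4x the price per GB for ~2x the throughput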

  • Hmm, odd that they seem to have missed out on the leading enterprise SSDs. I think Tom's Hardware reviewed them a while back. Samsung SSDs, from what I remember, were as cost-effective as they get but generally were slower than either Memoright or Mtron?
