SSD Prices On Parity With High-End HDD By 2011

kgagne writes "EMC executives were heavily pitching the virtues of solid state disk drives at their annual users conference in Las Vegas, saying that SSD will not only be on price parity with high-end Fibre Channel disk drives by the end of 2010 or early 2011, but that NAND memory will solve all sorts of read/write issues created by spinning disk technology. EMC's CEO and its storage platforms chief said the company will do everything it can to drive SSD prices down, and adoption up, by deploying them in their products. One issue might be that EMC is using SSD from STEC, which is being sued by Seagate for patent infringement." The article also mentions some of the work EMC has been doing to make sure SSD is enterprise-class reliable, such as developing "wear leveling" software.


  • SSD from STEC (Score:5, Insightful)

    by quarrel ( 194077 ) on Saturday May 24, 2008 @01:49PM (#23530082)
    > One issue might be that EMC is using SSD from STEC, which is being sued by Seagate for patent infringement.

    Why is this an issue? If EMC thinks the technology is a winner, and they don't have a stake in a particular player (of course they have to choose a supplier, but that hardly indicates a long-term commitment), then what do they care who wins?

    One of the great things about being in EMC's shoes is that you want these things commoditized.

    Either way, the sooner SSD is directly competitive, the better. They're ICs: you churn them out and only worry about yield. HDDs are mechanical and will always have their mechanical shortcomings.

    --Q
    • Re: (Score:2, Insightful)

      by vax ( 251660 )
      If Seagate has the patent, then they'd better start making some damn SSD drives. (That are actually on the market.)

      It would be good to have some competition anyway to drive the prices down.

      Still, I can't wait for 100GB SSD drives. Finally, a laptop for gigging that can handle a beating.

      Really, once these are standard in laptops, I think you will see more robust laptops on the market, since the spinning disks have always been one of the quickest parts to fail (well, assuming that the laptop has a decent cooling design).
  • Overlords (Score:1, Interesting)

    by ickleberry ( 864871 )
    I've had 4 hard drives fail on me in the past year and a half, so I for one welcome our new SSD-based overlords.

    My laptop and server already run off SSD, and with any decent bit of wear-leveling it is near impossible to wear out an SSD.
    • Re: (Score:1, Funny)

      by Anonymous Coward
      Dude, you've got to stop buying your drives from Lucky's Hard Drive and Bait Shop.
    • I've never had a drive fail, not even my first 10MB hard drive. Man, that was way better than the two 360K floppies.
    • My laptop and server already run off SSD, and with any decent bit of wear-leveling it is near impossible to wear out an SSD.

      I'd be more worried about the controller chip. I've had a few USB thumbdrives go bad on me this way, rendering the data on them inaccessible unless you're really good at soldering. As a matter of fact, I've had more USB thumbdrives go bad on me in the past few years than hard drives, though admittedly I'm not carrying hard drives around in my pocket, so it isn't a fair comparison.
  • Will they be competitive with mid-range priced hard drives? You can get 500GB for $100 these days.
    • Re:But when (Score:4, Insightful)

      by pyite ( 140350 ) on Saturday May 24, 2008 @03:27PM (#23530914)
      Will they be competitive with mid-range priced hard drives? You can get 500GB for $100 these days.

      In a few years. Right now SSDs perform so incredibly well in terms of IOPS (I/O Operations Per Second) that enterprise storage folks are eyeing them longingly. They just need a little more space for the money. Until then, it's very possible we'll see SSDs used more as caching components in front of more antiquated spinning media.
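
      A toy read-through cache, sketched below in Python, makes the "caching components" idea concrete. The class and the backing-read callback are illustrative stand-ins of mine, not anything from the article; real tiering inside an array controller is far more involved.

          from collections import OrderedDict

          class TieredStore:
              """Toy model of an SSD read cache in front of an HDD."""

              def __init__(self, hdd_read, cache_blocks=1024):
                  self.hdd_read = hdd_read            # slow backing-store read function
                  self.ssd = OrderedDict()            # LRU map standing in for the SSD
                  self.capacity = cache_blocks

              def read(self, block_no):
                  if block_no in self.ssd:            # fast path: served from the "SSD"
                      self.ssd.move_to_end(block_no)
                      return self.ssd[block_no]
                  data = self.hdd_read(block_no)      # slow path: spinning media
                  self.ssd[block_no] = data
                  if len(self.ssd) > self.capacity:   # evict the least-recently-used block
                      self.ssd.popitem(last=False)
                  return data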

      • Or, use them for different purposes. Do your backups to a RAID, boot from read-only flash, and keep your database on SSD.

        I mean, even in the old days they could choose between punch cards, tape, and teletype for data storage and retrieval.
      • Yeah, but currently the largest SSDs are only 64 GB, and cost about $1000. By 2011 will you be able to get a solid state drive that holds 500 GB? Will it cost anywhere close to $100? Probably not. By "price comparable" do they mean you will be able to get a solid state drive for under $200? Probably, but the capacity will be much lower than even the high-end hard drives they are price-matching. You can get a 146 GB, Fibre Channel, 15K RPM drive for under $350. In 3 years, I don't think you'll be ab
    • More than six years away, following from current price points and reduction trends, which is to say "there is no predicting".

      C//
    • Once you've experienced a 120MB/sec read+write SSD, there is no turning back. You'd get a drive for 100 dollars even if it were only 64GB.
       

    • Will they be competitive with mid-range priced hard drives? You can get 500GB for $100 these days.


      The other thing I am curious to know is when we are likely to get SSDs with similar read/write performance to current mechanical HDs.
    • by rubeng ( 1263328 )
      More like $80 [diskcompare.com] for 500GB, which is about 16 cents/gigabyte. The cheapest SSDs, at least listed on that site, are $5.16/GB, so there's still about a factor of 32 difference.
  • by Bananatree3 ( 872975 ) on Saturday May 24, 2008 @02:24PM (#23530394)
    Spinning disk hard drives are the mode, median, and mean today. You can grab a 1TB platter hard drive for under 200 bucks. It may not last as long as an SSD, but at that price you can certainly buy a bunch of backup drives for a lot less than a 1TB solid state drive.

    However, SSD is the future wave, as it Just Works better than platter drives. A high-quality, high-density, low-priced SSD would knock the socks off any platter drive today if it were available. Platter drives will be the mainstream market for a while because of cost and size availability. However, as SSDs become cheaper and hold more space, they WILL push platter drives out.

    • Re: (Score:2, Interesting)

      by arbiter1 ( 1204146 )
      I've had platter hard drives go 8+ years, and one I know of is 9 years old and still going strong.
      • Longevity (Score:4, Insightful)

        by Bananatree3 ( 872975 ) on Saturday May 24, 2008 @02:42PM (#23530532)
        I agree that high-quality platter drives will last a long, long time. The issue is that anything with moving parts is inherently more prone to breakage than a device with no moving parts. An SSD with no rewrite issues would in principle be inherently longer lasting.

        Platter drives are here to stay for a while. Once SSDs get the bugs worked out and the price drops to current platter drive levels, there will be a large migration.

        • Re: (Score:3, Insightful)

          Comment removed based on user account deletion
          • by deroby ( 568773 )
            Well, my SD card came with software to recover data that was 'lost'. In essence, I think it allows me to scan the 'raw' chips and look for 'recognizable blocks'. Come to think of it, I guess there won't be that much difference. When you delete a file (on purpose, by accident, or due to some malware), it's actually just the file entry that gets overwritten; the actual contents scattered over the disk / chips remain as is. Unless your filesystem incorporates some kind of 'secure erasing', you shoul
            • I've done this before with a HEX editor. Just look for "EXIF", and then copy the following megabyte-ish data segment to a file. You usually get a pretty usable picture out of the data. That's even after files have been "deleted" (AKA, references removed from the FAT).
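
              A minimal sketch of that carving trick in Python, assuming the raw device has been dumped to a file first; the "disk.img" name and the one-megabyte cap are my own illustrative choices, and real carvers are smarter about finding the true end of each image.

                  import re

                  SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker (the EXIF segment follows)
                  EOI = b"\xff\xd9"       # JPEG end-of-image marker
                  MAX_SIZE = 1024 * 1024  # the "megabyte-ish" segment from the post

                  with open("disk.img", "rb") as f:
                      data = f.read()

                  for i, m in enumerate(re.finditer(re.escape(SOI), data)):
                      start = m.start()
                      end = data.find(EOI, start, start + MAX_SIZE)
                      chunk = data[start:end + 2] if end != -1 else data[start:start + MAX_SIZE]
                      with open(f"carved_{i:04d}.jpg", "wb") as out:
                          out.write(chunk)   # often a usable picture, even for "deleted" files
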
          • Well, laptops would get it first more than likely, since SSDs are known for using less power. One issue with platters is that it's harder on the drive when you power off the computer and restart it all the time; it's hard on the motors.
          • Also, what about data recovery.... If the SSD does as I assume it does and actually erases on delete
            Whether a file gets wiped when you delete it is entirely up to the file system, not the hard drive. Your disk only reads and writes what the file system tells it to.
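
            To illustrate that division of labor, here is a toy FAT-style model in Python. FAT really does flag a free directory entry by setting its first byte to 0xE5; everything else here is made up for the sketch. Note the drive itself just writes whatever sector holds the directory entry and never learns that the data clusters are now garbage.

                DELETED = 0xE5   # FAT's "this directory entry is free" marker

                directory = {"photo.jpg": {"first_byte": ord("p"), "clusters": [12, 13, 14]}}
                clusters = {12: b"JFIF...", 13: b"...", 14: b"..."}   # fake on-disk data

                def delete(name):
                    # Deletion touches only the directory entry; the data
                    # clusters are merely forgotten, not overwritten.
                    directory[name]["first_byte"] = DELETED

                delete("photo.jpg")
                assert clusters[12] == b"JFIF..."   # still there for an undelete tool
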
            • Re: (Score:3, Interesting)

              Comment removed based on user account deletion
                What I meant was that, thanks to the wear leveling going on under the file system, there might be more of a chance of a deleted file being destroyed, as its "empty" sector gets overwritten by data relocated from elsewhere to even out the wear across sectors.

                If you undeleted a file on a system that was managing wear leveling behind the scenes, doesn't it seem more likely that area would be allowed to "cool down" for a while since it just had a file in it?

                With a normal filesystem it's more random.

                However what
              • I'm not sure it works that way, as how is the drive supposed to know what kind of file system you're using? The drive just sees a bunch of data in sectors, and doesn't know what is important and what is unimportant (as in, marked as 'deleted' by the OS), as the file system keeps track of that kind of thing. I would assume then that the wear leveling algorithms won't trash anything, as they cannot know what they can trash, therefore I would guess that an SSD would be the same as an HDD in terms of recovery
              • Re: (Score:2, Interesting)

                by orkysoft ( 93727 )
                I think the hard drive would not know about file systems, and would actually swap the data between those sectors, but keep the old sector numbering, so it would be invisible to the higher layers, just like virtual memory works.
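
                A toy sketch of that remapping, assuming a simple block-mapped flash translation layer (real controllers are far more elaborate): the logical sector numbers the OS sees never change, while the physical block behind each one migrates to the least-worn free block.

                    class ToyFTL:
                        def __init__(self, n_blocks):
                            self.flash = [b""] * n_blocks
                            self.erase_counts = [0] * n_blocks
                            self.mapping = {}                 # logical sector -> physical block
                            self.free = set(range(n_blocks))  # physical blocks not mapped to anything

                        def write(self, logical, data):
                            # Write to the least-worn free block instead of rewriting in place.
                            target = min(self.free, key=lambda p: self.erase_counts[p])
                            self.free.remove(target)
                            if logical in self.mapping:       # the old block gets recycled
                                self.free.add(self.mapping[logical])
                            self.erase_counts[target] += 1
                            self.flash[target] = data
                            self.mapping[logical] = target    # OS-visible address unchanged

                        def read(self, logical):
                            return self.flash[self.mapping[logical]]
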
          • It should be just the opposite. Since fragmentation isn't really a big issue on SSDs, and since wear leveling would use the free space that's been written to the fewest times, it seems like a sector marked as "free" during a delete wouldn't be overwritten until the rest of the disk filled up.
        • I'm not so sure I agree. I've had way more RAM chips die than I've had hard drives die. I've only had 1 hard drive die in my personal computers. RAM chips, I've had 3 or 4 die. Video cards, I've had 3 die. I've seen many network cards go in my life. Countless power supplies. Most of the stuff that does die is the hardware-only stuff. It seems counter-intuitive, but when I think about it, it actually makes sense. The hardware parts that die always have very tightly packed circuits, and are very complex
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        Then again, I've had IBM Deathstar* hard drives die in months. Anecdotal evidence does not help the situation, though.

        The reality of the matter is that solid state is simply more reliable** than mechanical devices in most cases.

        *IBM Deskstar
        ** reliable is relative to the usage.
        • by mikael ( 484 )
          I had one in a desktop workstation - at least it gave a warning that something was up: it kept making a grating/grinding noise.

          I've lost a Travelstar laptop drive in a similar way - another family member helpfully moved the laptop into direct sunlight during a bright Summer day, and left the laptop lid down. The poor hard disk drive overheated and fried out (gave a whining noise until the laptop powered down).

          The only people who could really make any fair comparisons would be search engine/internet archive
      • I've had a Maxtor 6.4GB hard drive since pre-millennium times. It's been in a computer case that's been dropped down a flight of stairs; the MBR destroyed itself (for OS purposes you couldn't install an OS to it, but it would hold data; this happened pre-2000); it's been hit with a hammer (frustration at said MBR fault); it's been connected to a PC whose power supply blew spectacularly; and finally it was connected (as slave, and on the same power spur) to a 200GB Maxtor Diamond Plus when said 200GB drive short-circuited it
        • by Ma8thew ( 861741 )
          I always read about how people get through dozens of hard drives, but I've never had one fail. Is it possible everyone else abuses their hard drives, or that I have optimum atmospheric conditions where I am? Maybe my frequent backups are warding off failures.
    • Re: (Score:2, Informative)

      by hostyle ( 773991 )
      -1 Re-Iterating The Damned Obvious
    • by Znork ( 31774 )
      Eventually, without a doubt. Although by that time SSD may have little to do with what we regard as SSD today.

      How it plays out depends on how the hard disk manufacturers deal with it; personally I'd prefer they play to their strength and simply forget about speed and concentrate on their forte: storing lots of bits.

      I already have all the fast storage I need (if I wanted more I could stripe over more disks). Bulk storage, however, is something I'm permanently short of; I could easily use up to the petabyte range
    • Re: (Score:3, Insightful)

      by renoX ( 11677 )
      The future is very near!
      Sure, currently buying a 1TB solid state drive would be too expensive, but do we really need it?

      No: on my HDD, I have two partitions: one of 30 GB for the OS and the software (which still has a lot of free space), and a big one for the data.

      Replacing the OS & software partition with an SSD would bring 99% of the benefits of having a 'full' SSD: fast boot time, fast application startup, etc. Especially as we can use a part of the SSD as a cache for the HDD.
      So IMHO, we don't r
      • Which is great for desktops, but what about laptops? You don't have space for 2 drives in a laptop. Sure, if you're just doing work on the laptop (and your work doesn't include editing video), you could probably get by with a 64 GB drive. But many people use a laptop as their main computer. For them, 64 GB probably won't suffice.
    • You can grab a 1TB platter hard drive for under 200 bucks
      Obviously, you haven't tried to purchase 1TB of EMC DMX disk lately. High-end storage is NEVER cheap. EMC will tell you, "These drives fail less, thereby giving you higher uptime. However, you will pay a premium for them." It also allows them to add yet another layer of storage tiering.

      Also, SSDs, if they have a higher MTBF, will enable EMC to cut costs by having fewer CEs out there replacing drives.

  • by epiphani ( 254981 ) <epiphani@dal . n et> on Saturday May 24, 2008 @02:40PM (#23530516)
    Given that many filesystems are designed specifically with the spinning magnetic disk in mind, what open source filesystems are out there that will work to the advantages of solid state storage? Has anyone started thinking about that one as something to address before the major switches start taking place?

    Or... does solid state storage take care of those oddities in firmware with the whole automatic write leveling technology?
    • Re: (Score:3, Interesting)

      by v1 ( 525388 )
      The driver level can make certain assumptions about the physical drive, such as seek time, and, for example, work to decrease disk fragmentation. Fragmentation is a very minor issue with SSDs. So there will be a minor performance hit (from the maximum possible in the SSD) due to the things the drivers and OS do to try to get the most performance out of an HDD.

      The only adaptation I can see is trying to minimize wear on certain blocks, but from the looks of it the SSDs are being designed with wear leveling in mind
      • Re: (Score:1, Insightful)

        by Anonymous Coward

        The only adaptation I can see is trying to minimize wear on certain blocks, but from the looks of it the SSDs are being designed with wear leveling in mind so I doubt even that will matter to the software.

        Actually, with proper software you'd probably like to do the opposite - try to wear out certain blocks as fast as possible. This way the lossage is more predictable and the rest of the disk is kept in better shape. The point being that bad sectors aren't really a big deal if you're prepared for them.

        • Re: (Score:3, Informative)

          by kesuki ( 321456 )

          The only adaptation I can see is trying to minimize wear on certain blocks, but from the looks of it the SSDs are being designed with wear leveling in mind so I doubt even that will matter to the software.

          Actually, with proper software you'd probably like to do the opposite - try to wear out certain blocks as fast as possible. This way the lossage is more predictable and the rest of the disk is kept in better shape. The point being that bad sectors aren't really a big deal if you're prepared for them.

          When NAND memory fails, it can fail in such a way as to make the ENTIRE flash memory device unreadable... this is from real-world NAND memory devices failing in real-world use. All of a sudden, not wear leveling seems like a suicidal mode of wear... if the entire chip can short out from a single block failing.

          "In case of a massive damage,

          * If the device is not accessible at all (circuitry failure), no software can even attempt the recovery. Physical intervention is required.

      • I recently ran across a document that described plans by Microsoft and hard disk vendors to support large physical block sizes on PCs. I don't know when products will be showing up on retail shelves, but it's in the development pipeline.
    • JFFS2 for one.

    • by oever ( 233119 )
      Flash drives have been around for a while, you know. And so have the filesystems:
      YAFFS [wikipedia.org]
      JFFS2 [wikipedia.org]
      LogFS [wikipedia.org]

      • Re: (Score:3, Interesting)

        by 4e617474 ( 945414 )

        Flash drives have been around for a while, you know. And so have the filesystems:

        YAFFS and JFFS2 look to me like they might be showing their age.

        From Wikipedia:

        "YAFFS2 is similar in concept to YAFFS1, and shares much the same code... The main difference is that YAFFS2 needs to jump through significant hoops to meet the "write once" requirement of modern NAND flash.

        YAFFS2 now supports "checkpointing" which bypasses normal mount scanning, allowing very fast mount times. Mileage will vary, but mount times of c. 3 seconds for 2 GB have been reported.

        Measuring mount times in seconds per gigabyte is not encouraging for the design goals we're talking about here. The disadvantages section of the JFFS2 article pretty well speaks for itself, but note

        "All nodes must still be scanned at mount time."

        Overcoming that hurdle was how YAFFS2 even moved up to the seconds per gigabyte range - the introductory paper for LogFS says

        "On the authors notebook, mounting an empty JFFS2 on a 1GiB USB stick takes around 15 minutes. That is a little slower than most users would expect a ïlesystem mount to happen."

        The developer's g

    • by Korin43 ( 881732 )
      I found at [wikipedia.org] least [wikipedia.org] 3 [wikipedia.org] for Linux. I also found some references to Microsoft's "FFS2" and M-System's "TrueFFS", but I can't find any info about them.
      • Please do not link that way, using words like "at" and "least" as the link text. Hypertext (as in HyperText Transfer Protocol) was designed to link complex words or phrases to more details about those particular things. So, for example, if you're talking about a link to a page about JFFS, you link the term, JFFS, not "this other page" or "see here", or anything like that.

        It does involve having to think about how you write a phrase sometimes, but means that everyone has a consistent interface, knowing what
    • by tooyoung ( 853621 ) on Saturday May 24, 2008 @03:32PM (#23530984)

      Given that many filesystems are designed specifically with the spinning magnetic disk in mind, what open source filesystems are out there that will work to the advantages of solid state storage? Has anyone started thinking about that one as something to address before the major switches start taking place?
      No, no one has even considered that yet. I'll alert the academic world while you clue in the industry.
      • Given that many filesystems are designed specifically with the spinning magnetic disk in mind, what open source filesystems are out there that will work to the advantages of solid state storage? Has anyone started thinking about that one as something to address before the major switches start taking place?

        No, no one has even considered that yet. I'll alert the academic world while you clue in the industry.

        While you've got those chaps on the line, there are a few other topics I wanted to bring up that probably nobody has considered.

        1. Did they notice we don't have enough oil?
        2. It's been hot this spring, maybe someone should get on that.
        3. Sometimes when I travel I can't watch YouTube. We need this fixed!
        4. It seems like we ought to have some way to get into space by climbing a line or something, rather than using those big rocket engines.
        5. It'd be really awesome if we could modify plant and animal life genetically
      • Come on, HDD and SSD have different seek/read/write characteristics.

        I'm sure several algorithms that affect filesystem performance were written with the former's characteristics implicitly in mind.

        Maybe some new approaches become plausible, given this underlying change? Historically, being the first to understand and exploit the advantages of a new technology makes a huge difference.
  • SSD will not only be on price parity with high-end Fibre Channel disk drives

    Yeah, right, just what I buy for my home system right now. The really high-end expensive stuff.

    For nearly all of us, this isn't news until SSD is competitive at the consumer disk drive level.

    And competitive means price and projected lifetime. Watching my SSD start dying in pieces after only weeks or months doesn't match current hard drive reliability.

    • by amorsen ( 7485 )

      For nearly all of us, this isn't news until SSD is competitive at the consumer disc drive level.
      Come on, Slashdot is a nerd site! When did nerds have to wait for technology to become mainstream?
      • Good point. It'll be a long time though before this stuff is obsolete and we can fish it out of a dumpster.
    • Re:Yeah, Right (Score:4, Interesting)

      by Courageous ( 228506 ) on Saturday May 24, 2008 @03:28PM (#23530924)

      Well. These drives (FC, SCSI, SAS) are 10% of the market, very lucrative, and quite important for data center operations, server rooms, and so forth.

      Projected lifetime for modern SSD drives is now getting to the point where they are more likely to be discarded due to technological obsolescence than they are to significantly deteriorate, BTW.

      The projected intersection curve is further than six years out for SATA SSD price parity. That's an eternity in technological time, which is to say, there is no predicting it.

      Price per unit of storage is by far not the only deciding factor, even in the consumer market. Flash can scale up performance much more quickly than spinning media. You can expect flash performance to more than double annually from here on out, I would say. You would of course be right to be wondering how the SATA and SAS busses will keep up.

      Look at FusionIO (http://www.fusionio.com) to see how flash will accelerate in performance. These devices have 160 internal channels in order to make the bytes flow at the rate they do. You can think of it as a sort of 160-wide RAID-0 striping mechanism.

      $2400 for one card is of course way out of consumer space. However, two points: 1) the cost of the flash in the system will drop to a fraction of its current price within two years, and 2) the ASICs on board this device will be "paid for" within the same period, allowing them to charge only a small fraction of their current price.

      Expect other similar products to develop soon.

      When FusionIO proves out the market for these devices--and mark my words, they will--competitors will follow in their footsteps, like bees drawn to honey.

      C//
        To summarize the great post above: price is only one dimension. Performance is another, and SSDs totally blow away any HDD competitors. HDDs have been around a very long time, and already use caching and other sophisticated trickery to break barriers. SSDs are already faster than HDDs out of the gate, so just imagine how blazing fast they will be by the time they catch up in price.

        In fact, I doubt they will ever catch up in price, because HDDs will ALWAYS have to be cheaper to sell. There is no equatin
  • They keep sending out press releases, but when do they plan on making product available?

    I need four of 64GB or more. Price not important, but they must perform well and be reasonably reliable. SAS preferred.
    • http://www.fusionio.com./ [www.fusionio.com] These products can be ordered now, although it will be more than two months for delivery (there is intense demand).
      • by kesuki ( 321456 )
        You had a typo in that URL: an extra period. And those devices are meant for rack-mounted server boards with four 4x PCIe slots available, although they are a low-profile board, so if they supply a low-profile backplane as well, then you can get them in a rack-mount server. (Not sure if "low profile" means 1U or 2U; I am not a sysadmin.)

        http://www.fusionio.com/ [fusionio.com]
        • There is always an implied dot at the end of any FQDN. It's just that usually we omit it. Notice how the web server does answer the request, but errors out because its config files do not point to the requested host.
          • by kesuki ( 321456 )
            You read Wikipedia too much, man. I've never in any literature seen a mention of a 'trailing dot', not even when I configured BIND on FreeBSD.

            Seriously, I've been using the Internet since 1994, and not once has 'an implied trailing dot' been mentioned to me anywhere, except in the Wikipedia article (and apparently in the RFCs, since Wikipedia cites them).

            I don't ever read RFCs, and apparently DNS resolvers automatically add a trailing dot, but this was the first time I'd ever heard of needing one. And iron
            • Hehehe, that's funny; I didn't expect it to be such an exotic bit of knowledge. I read about it when I was studying how DNS works: one of the first things I learnt was that at the top of the global DNS tree there is a single entity from which everything else descends, and it is called ".". So the FQDN of a host always ends in ".".
              In my defense, I never read RFCs either :P
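
              A quick way to see both halves of this from Python ("example.com" is just a stand-in host): the resolver treats the dotted and undotted forms identically, and the error in the link above happens a layer up, where the browser sends the dotted name in the HTTP Host header and the server's virtual-host configuration has no matching entry.

                  import socket

                  for name in ("example.com", "example.com."):
                      addrs = {ai[4][0] for ai in socket.getaddrinfo(name, 80)}
                      print(name, "->", sorted(addrs))   # same addresses either way
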
    • I Don't Know [overclockers.co.uk]
    • http://www.newegg.com/Store/SubCategory.aspx?SubCategory=636&Category=15&name=Solid-State-Disks [newegg.com]

      Have you even looked? I see at least three 64GB ones, and one 128GB. Price is their biggest disadvantage.
      • by amorsen ( 7485 )

        Have you even looked?
        Yes, I have looked. None of those dare rate their drives for enterprise use, and none are from EMC.
    • Try this list: http://www.storagesearch.com/ssd-fastest.html [storagesearch.com]
  • Is data recovery possible from these devices if the media is damaged, or otherwise unreadable?

    The article doesn't mention numbers in terms of power savings, but I'm looking forward to SSD-based RAID at the same power cost as a single Winchester HDD.

  • by fermion ( 181285 ) on Saturday May 24, 2008 @04:14PM (#23531264) Homepage Journal
    If we reflect back on the floppy disk days, we see that it was not only cost, but density, that killed the floppy oh so many years ago. A floppy was no longer useful for installing apps. MS often needed upward of 10 disks to ship an app. While 1.44 MB was big enough to hold most files, we were entering a period in which one could no longer survive with a single 3.5" disk. The CD-R, then the DVD-RW, made sense as they could replace the floppy, though in many ways at a higher cost, due to their higher density. The fact that CD was cheaper than other optical solutions made it a good choice. What did finally kill the floppy was the availability of USB drives for the sneakernet. Though expensive, they too had a density benefit, as well as not requiring additional hardware, other than a USB port; USB ports were initially scarce on MS Windows machines, and the drivers buggy.

    I think that density, not price, is going to drive the SSD market as well. We need space on our small computers, and the mechanical solution is not keeping up. I believe this is why Apple went to flash memory for the iPods, although initially they were dedicated to hard drives. My iPod mini only has 4 GB, the same as the nano that replaced it. The new nanos have more memory than even the EOL minis. The microdrive, though a good tech, was not scaling. The larger physical-size hard disks are now up to 160GB, but that is small for modern times, in which many of us have a terabyte sitting on our home machine.

    So I think we will pay SSD prices if they give us more space. The problem right now is that we pay more for an SSD, and get less space. We pay $1000 to Apple or practically anyone else for a 64GB SSD. That is paying money for nothing. Wait until we can buy a MacBook Pro with a terabyte SSD for $4000, or a MacBook Air with a 250GB SSD for $2000. Then we will see SSD laptops flying off the shelf.

    Of course, for low-end machines many will stick with HDD for many years, just like people entered the 21st century still storing things on floppy. Of course this will hasten the downfall of HDD, as cheap unreliable HDDs will take an even bigger share of the market than they have today, and, just like today, users will attribute the high failure rate to a problem with the technology, and not to the fact that they chose to buy a cheap hard drive. With the last major mechanical part gone, computers will become much more reliable, just as when stereos, for better or worse, left vacuum tubes behind.

    I also hope that the DVD drive as a standard goes away soon, and applaud Apple for making the MacBook Air drive-free. The main reason for a DVD drive, other than installing software, is that we cannot rip our DVDs to a more convenient format. I would much rather carry around a couple of flash drives than a bag of DVDs. It would seem that in not too many years, shipping software on USB dongles would be just as cost-effective. Already 4GB flash costs less than $10.

    • by RKBA ( 622932 )

      "Already 4GB flash cost less than $10."

      Yes, but a DVD+R only costs about fifty cents. It will be interesting to watch flash prices.
    • by hdon ( 1104251 )

      I think that density, not price, is going to drive the SSD market as well. We need space on our small computers, and the mechanical solution is not keeping up. I believe this is why Apple went to flash memory for the iPods, although initially they were dedicated to hard drives. My iPod mini only has 4 GB, the same as the nano that replaced it. The new nanos have more memory than even the EOL minis.

      I'm pretty confident that Apple's reason for switching to solid state flash memory in their handheld electronics (and now/soon their laptops as well) was that iPods were notorious for mechanical failure, as they are often put through quite a bit of physical abuse.

  • Comment removed based on user account deletion
  • Flash is just another layer in the memory hierarchy.

    Hierarchy (rough latencies sketched below):

    registers
    cache
    RAM
    flash
    hard disk
    tape
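
    Rough access latencies per tier, to show where flash slots in (circa-2008 ballpark figures of my own, not from this thread):

        latency_seconds = {
            "registers": 4e-10,   # well under a nanosecond
            "cache":     5e-9,    # a few nanoseconds (L2)
            "RAM":       1e-7,    # on the order of 100 ns
            "flash":     5e-5,    # tens of microseconds per read
            "hard disk": 1e-2,    # ~10 ms of seek plus rotation
            "tape":      3e1,     # seconds to minutes to load and seek
        }
        for tier, s in latency_seconds.items():
            print(f"{tier:9} ~{s:.0e} s")

    Each step is several orders of magnitude slower than the one above it, which is why flash fits as a genuine new layer between RAM and disk rather than a replacement for either neighbor.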

    • Right, but I would say the vast majority of home users have dropped tape from their memory hierarchy (and a good number of other users, too), and we hope to drop hard disk from the hierarchy next. These are albatrosses around our collective necks.
    • I don't even use a tape deck anymore. When I started listening to these things called audio CDs around 2005, I realized I couldn't go back!
  • On June 15th, Mtron will start shipping the 1000 series MLC drives. Put these in an array with the right software and you end up with price/GB parity with 36GB 15K 2.5" SAS drives and about 12x the random IO performance.

    HDD Array:

    8 Seagate Savvio 2.5" HDDs: $350ea $2,800
        configured raid-10
    1 SAS raid controller $600
    Total cost for 144 GB $3,400 or $23.61/GB

    SSD Array:

    6 Mtron 1025-32 2.5" SSDs: $290ea $1,740
        configured raid-5
    1 SATA raid controller $250
    MFT Software License $1,250
    Total Cost for 144 GB $3,240 or $22.50/GB

    HDD Performance:
        4K and 8K read IOPS: 250/2000 (single-threaded/multi-threaded)
        4K and 8K write IOPS: 1200

    SSD Performance:
        4K read IOPS: 8000/48000 (single-threaded/multi-threaded)
        8K read IOPS: 6000/36000 (single-threaded/multi-threaded)
        4K write IOPS: 40000
        8K write IOPS: 22000

    These performance numbers are with the MFT driver in place. Without MFT, the 4K random write performance is about 140 IOPS (>250x slower).

    Endurance for these SSDs in this configuration is good enough to overwrite the entire array with random data three times a day (500GB of random updates/day) for about five years.
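
    A back-of-the-envelope check of that endurance figure; the program/erase rating and the write-amplification factor below are my assumptions, not vendor numbers from this post:

        raw_capacity_gb = 6 * 32     # six Mtron 1025-32 drives
        pe_cycles = 10_000           # typical MLC rating of the era (assumed)
        write_amplification = 2.0    # assumed FTL/MFT overhead

        budget_gb = raw_capacity_gb * pe_cycles / write_amplification
        years = budget_gb / 500 / 365    # at the post's 500 GB of updates/day
        print(f"~{years:.1f} years")     # ~5.3, consistent with the claim above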

    These drives make a wicked mail server (EasyCo just moved one of its mail servers to mirrored MLC flash and the difference is amazing).

    Sorry for the blatant advert, but SSDs are here now.

    Doug Dumitru
    EasyCo LLC
    http://managedflash.com/ [managedflash.com]
    +1 610 237-2000 x2
  • With my limited knowledge: platters currently provide the best storage per buck, but SSDs provide better random access (although after timing my iPod touch vs. my 60GB 5G iPod, I've come to realize that an SSD can be much, much slower - thanks, Steve). Data centres with very specific needs will, I'm sure, plump for one or the other, depending upon what those needs are. I'm sure eventually we might all go SSD, but that's way, way off IMHO. What the majority of us need is some more intelligence in the b
  • Seems like phase change RAM [wikipedia.org] would have much more desirable properties (high write performance, a much higher number of writes a single memory cell can take before it's damaged) for the uses discussed.
