Alienware Puts 64GB Solid-State Drives In Desktops

Lucas123 writes "In the face of Seagate's announcement this week of a new hybrid drive, Dell subsidiary Alienware just upped the ante by doubling the capacity of its desktop solid-state disk drives to 64 GB. Dell has remained silent on the solid-state disk front since announcing a 32-GB solid-state option for its Latitude D420 and D629 ATG notebook computers earlier this year. Now, Alienware seems to be telling users to bypass hybrid drives altogether. 'Hybrid we consider to be a Band-Aid approach to solid state,' said Marc Diana, Alienware's product marketing manager. 'Solid state pretty much puts hybrid in an obsolete class right now.'"
  • many write cycles? (Score:2, Interesting)

    by MancunianMaskMan ( 701642 ) on Thursday October 11, 2007 @08:41AM (#20938445)
    Can someone comment on the durability of these (presumably flash-based) devices? What if the OS decides to write to certain sectors all the time?
  • life time? (Score:2, Interesting)

    by revisionz ( 82265 ) on Thursday October 11, 2007 @08:41AM (#20938447)
    How long are solid-state drives supposed to last, compared to a hard drive?
  • by Ritz_Just_Ritz ( 883997 ) on Thursday October 11, 2007 @08:41AM (#20938449)
    I would pay the extra price for solid-state disks on my computer tomorrow, but I can't help but be a bit nervous about the limits of flash memory in terms of the number of times a cell can be written to. On a well-exercised machine, how do they proactively monitor this and/or avoid corrupting data when one of those cells can't reliably flip bits anymore? I'm not too stressed about it if I get a corrupt picture on my digital camera because of that, but I use my computer for real work.

    Best,
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Thursday October 11, 2007 @08:42AM (#20938459)
    Comment removed based on user account deletion
  • by CastrTroy ( 595695 ) on Thursday October 11, 2007 @08:52AM (#20938537)
    The OS has no say in which physical sectors are written to. The drive contains its own map of the sectors and does the wear-leveling itself. The OS may think it's writing to sector X, but that's only a logical sector; the data could actually land in physical sector A, B, or C. At least that's how I understand it. Of course, this only makes sense with solid-state drives, because they don't have variable seek times depending on where the data is physically placed.
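To make that logical-to-physical mapping idea concrete, here is a minimal wear-leveling sketch in Python. The class, names, and block granularity are illustrative assumptions only, not any vendor's actual flash translation layer; real firmware tracks erase blocks and pages with far more bookkeeping.

```python
# Toy flash translation layer: the host addresses logical sectors, but each
# write is steered to whichever physical block currently has the lowest
# erase count. Purely illustrative; not any real controller's firmware.

class ToyFTL:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks    # wear per physical block
        self.free_blocks = set(range(num_physical_blocks))
        self.logical_to_physical = {}                    # logical sector -> physical block

    def write(self, logical_sector, data):
        # Pick the least-worn free physical block for this write.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)

        # Retire the block that previously held this logical sector.
        old = self.logical_to_physical.get(logical_sector)
        if old is not None:
            self.erase_counts[old] += 1   # erasing it costs one program/erase cycle
            self.free_blocks.add(old)

        self.logical_to_physical[logical_sector] = target
        # (real firmware would program `data` into `target` here)


ftl = ToyFTL(num_physical_blocks=8)
for _ in range(1000):
    ftl.write(logical_sector=0, data=b"same logical sector every time")
print(ftl.erase_counts)   # wear ends up spread across all blocks, not piled on one
```

Even hammering a single logical sector spreads the program/erase cycles across the whole pool, which is why the "the OS keeps writing the same sectors" worry above mostly doesn't apply.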
  • by Anonymous Coward on Thursday October 11, 2007 @08:59AM (#20938619)
    I have a 60GB drive on one of my well-used desktops. On any given day I write anywhere from 250MB to around 10GB total to it, measured at the OS level.

    So maximum, absolute maximum on a busy day, I write 10GB to it, or 60GB worth of writes in an entire week.

    Given firmware that spreads writes out over cells, that means in one week, I would write to every single cell in the flash drive less than once. That's in a hypothetical SUPER busy week, something I've never done.

    With a maximum of 100,000 writes before the flash dies, that gives me about 100,000 weeks until the flash wears out, or a bit under 2,000 years.

    That's at the absolute, positively busiest use I've ever noticed myself putting on my desktop's drive.

    Now, given that I'm usually closer to the 2GB mark than 10GB worth of writes, I could probably keep going for something like 10,000 years of normal use, my own death notwithstanding.
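As a sanity check on that arithmetic, here is the same back-of-the-envelope estimate in Python. It assumes perfect wear leveling, no write amplification, and the 100,000-cycle endurance figure quoted above, all of which are optimistic simplifications.

```python
# Back-of-the-envelope flash endurance estimate. Assumes perfect wear
# leveling and no write amplification, both optimistic simplifications.
drive_capacity_gb = 60
busiest_week_writes_gb = 60      # the "absolute busiest week" figure above
cycles_per_cell = 100_000        # quoted program/erase endurance

full_drive_writes_per_week = busiest_week_writes_gb / drive_capacity_gb  # 1.0
weeks_until_worn_out = cycles_per_cell / full_drive_writes_per_week
print(weeks_until_worn_out / 52)      # ~1923 years at worst-case usage

# A more typical ~2GB/day (14GB/week) stretches that out much further.
typical_writes_per_week = 14 / drive_capacity_gb
print(cycles_per_cell / typical_writes_per_week / 52)   # ~8200 years
```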

  • by Amiga Lover ( 708890 ) on Thursday October 11, 2007 @09:09AM (#20938703)
    I have an old Mac laptop, a Powerbook 1400, which was sadly limited to 64MB of RAM from the factory. Combined with a slow internal HD, using VM to get more out of it slows the machine to a crawl. The solution to its limited RAM? Add a flash RAM PC card, make the VM page to it, and you have a pretty quick workaround.

    It's a reasonably well-known hack, and I used this Powerbook with flash-based VM storage from 2001 to 2003 as one of my main internet machines for browsing and image editing, so it got a real workout in that time. It's been resting for a few years, but it still fires up OK. I know of perhaps a dozen other people who've done this, and I have NEVER known a flash VM card to die.

    In short, the longevity issue doesn't need solving; it isn't an issue for anything short of running something like eBay's database server on it.
  • by The Incredible Mr. L ( 26085 ) on Thursday October 11, 2007 @09:43AM (#20939069)
    Funny, I was checking out the Dell options the other day after finding out my company has a discount.

    They offer a 128GB solid-state drive option on their XPS M1730 notebook.

    I don't know how long they've offered that, but it seems Dell does have the option.
  • by Burz ( 138833 ) on Thursday October 11, 2007 @09:44AM (#20939079) Homepage Journal
    Earth to Lumpy:

    Flash drives have had wear-leveling as standard for several years.

    Now, back to your ultra-scuzzy crap kickers. :-D
  • by SlashdotOgre ( 739181 ) on Thursday October 11, 2007 @10:00AM (#20939259) Journal
    A couple of years ago (fall 2005) I did my senior engineering project in college using embedded Linux devices that relied on 512MB flash cards (CF) as the only storage. The devices were basically Soekris boards running Debian with some highly custom WiFi drivers/software designed for mesh-networking research. After my project, I was hired on by the research institute that funded it, so I got to play with these things for a while. Nearly every mesh node that used flash ran into "hard drive" issues within a year (we suspected the failure frequency was directly related to how often we used the devices). Most of the time it was simply the MBR becoming corrupt, which you could fix by mounting the card on a Linux computer, chroot'ing, and re-running LILO; but in a few cases we had to replace the entire card due to corruption. These devices had fairly typical desktop/laptop usage patterns (booted daily), and we were nowhere near the 3-5 year estimates most people give flash drives.
  • by WindBourne ( 631190 ) on Thursday October 11, 2007 @10:22AM (#20939585) Journal
    I recently switched my home servers to using a SanDisk 4GB flash drive for / (with frequently written directories moved to disk: /home, /opt, and parts of /var such as /var/log). The system now loads in about 1/3 of the time. It is also quieter (the regular disks sleep when not in use, and the fan that used to run all the time now runs infrequently), and the temperature dropped 5 degrees. I would expect my electricity usage has dropped as well, as evidenced by the lower heat.

    All in all, I have no doubt that within a year flash will be all the rage.
  • by afidel ( 530433 ) on Thursday October 11, 2007 @10:25AM (#20939637)
    You could run Windows well on flash without too much trouble: use a ramdrive and redirect TMP and TEMP to it, disable swap, and set your browser to use TMP for its cache or disable the cache altogether. Turn off timestamping on file access and it's even better. At that point, if your flash averages 500K writes before failure, you have a drive that will last many years, probably longer than your average HDD.
  • by Anonymous Coward on Thursday October 11, 2007 @10:50AM (#20940027)

    Pet peeve: MTBF is not life expectancy; it's the average time between failures if you replace the drives before they are expected to die. Commonly quoted MTBF figures currently run anywhere between 50 and 150 years (mostly made-up numbers), whereas life expectancy is in the 3-5 year range (at best).


    W...T...F... Okay, let's walk through this with a candle. Say a candle is expected to burn for 10-15 hours: 10 under a strong wind, 15 if it's a bit oxygen-starved (I'm making this up).

    So, if you replace a 10-15 hour candle every eight hours (i.e., your words, "before they are expected to die"), THOSE are the conditions under which you can start counting MTBF???

    So, if I go:
    Time (hrs) - ten-hour candle #
    0 - #1
    8 - #2
    16 - #3
    24 - #4
    32 - #5
    40 - #6
    48 - #7
    in a controlled environment (inside instead of outside, etc.), then, since I can go on forever that way as far as the candles are concerned, a candle has an MTBF in the trillions of years+???

    (if they simply don't go out on their own).
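To make the objection concrete, here is a tiny simulation of that candle scenario in Python. The 10-15 hour lifetime is the made-up figure from the comment above; the point is simply that preventive replacement produces zero observed failures, so there is no mean time between failures to compute from that data (the reply further down spells this out).

```python
import random

# Candles with a (hypothetical) uniform 10-15 hour lifetime, each one swapped
# out after 8 hours of service, i.e. before it is expected to die.
random.seed(0)

failures = 0
for _ in range(1_000_000):            # one million 8-hour service intervals
    lifetime_hours = random.uniform(10, 15)
    if lifetime_hours < 8:            # did the candle go out before replacement?
        failures += 1

print(failures)   # 0 -- no failures observed, so MTBF is undefined here,
                  # not "trillions of years"
```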
  • by SlashdotOgre ( 739181 ) on Thursday October 11, 2007 @10:58AM (#20940135) Journal
    Unfortunately, I didn't have the opportunity to investigate it much further (and I no longer work for that research institute). From what I recall, we partitioned the cards into two volumes. The first volume was set read-only and contained static OS files (e.g. /etc, /lib, /[s]bin...), and we had a second partition for logging (which obviously could and did fill up). I believe the read-only volume was larger than the space actually used, so we never filled the cards completely; it's probably fair to estimate we hovered around 60-85% most of the time. All the CF cards were off-the-shelf components bought in one big purchase (so it may have been related to that batch); they were typical cards you'd throw into a camera, and I'm unsure what speed they were. When I was hired on, I was actually developing embedded devices that would work over the mesh network provided by the nodes mentioned above, so I didn't get to try larger cards, etc. (but that's an interesting theory and would have been good to test). I would also have been curious to leave one node on the whole time (not rebooted like the others) and see if it failed around the same point.
  • by cliff45 ( 108620 ) on Thursday October 11, 2007 @11:37AM (#20940811) Homepage
    OK, don't kill this BEFORE you read it....

    Since it's so easy to get "old" data off a hard drive once it's been written, have the ultra-security experts looked at RAM-based drives for storing data that should never be recoverable later? If you used a regular disk to boot a fully configured OS into a RAM-based drive and then ran the machine from there, you could theoretically have a non-recoverable data store. Long-term files would be written to a USB flash drive. There would be no "ghost image" to read back off a magnetic device; just pull the plug and BAM, your "history" really is history (inside the computer, anyway).

    Does flash technology leave a phantom image after it's erased, the way magnetic storage does?
  • by midicase ( 902333 ) on Thursday October 11, 2007 @11:46AM (#20940961)
    Our company manufactures embedded devices that run off CF cards (typically the cheapest 512MB we can source). In five years, the only failures have been attributed to bad CF cards themselves. Now, each time we receive a different batch of cards, we scrutinize them with many days of stress testing.
  • by TooMuchToDo ( 882796 ) on Thursday October 11, 2007 @12:16PM (#20941377)
    Now think about this. You saved some electricity by switching to flash, and cut heat output too. What happens when Google does a cost-benefit analysis, sees how much power it could save across its entire cluster farm in both energy usage and cooling, and swaps everything out? It would be a great energy-conservation win, and it would help bring down the cost of flash (economies of scale).
  • by afroborg ( 677708 ) on Thursday October 11, 2007 @12:39PM (#20941687)
    in a controlled environment (inside instead of outside, etc.), then, since I can go on forever that way as far as the candles are concerned, a candle has an MTBF in the trillions of years+???

    Not quite. If you don't experience any failures, then you can't calculate an MTBF, because there are no failures to take the mean time between. That doesn't imply infinite reliability, just that not enough data has been collected.

    From Wikipedia [wikipedia.org]:

    MTBF and life expectancy

    MTBF is not to be confused with life expectancy. MTBF is an indication of reliability: a device (e.g. a hard drive) with an MTBF of 100,000 hours is more reliable than one with an MTBF of 50,000 hours. However, this does not mean the 100,000-hour MTBF HD will last twice as long as the 50,000-hour MTBF HD. How long the HD will last depends entirely on its life expectancy. A 100,000-hour MTBF HD can have a life expectancy of 2 years while a 50,000-hour MTBF HD can have a life expectancy of 5 years, yet the HD that's expected to break down after 2 years is still considered more reliable than the 5-year one. Using the 100,000-hour MTBF HD as an example and putting MTBF together with life expectancy, it means the HD should on average fail once every 100,000 hours, provided it is replaced every 2 years. Another way to look at this: if there are 100,000 units of this drive, all in use at the same time, and any failed drive is put back in working order immediately after the failure, then 1 unit is expected to fail every hour (due to the MTBF factor).


    People often use MTBF to mean life expectancy, and even within engineering disciplines this is a common misconception. The concept of MTBF is only relevant to certain theoretical models of wear-out anyway, and even though it is quoted for a lot of products it is often a meaningless quantity. The numbers and testing conditions can (as your example shows) be modified to produce just about any MTBF that the tester wants to prove. For most products with a wear-out failure mechanism, Weibull [wikipedia.org] analysis provides a much more accurate estimation of the life span of the product.

    Reliability engineering and analysis is hard. It is decidedly counterintuitive at times, and most engineers have never been trained in it. It is a massive subject, and anyone who has worked in warranty analysis or design-for-reliability will agree with me: it creates a hell of a lot of work for everyone involved. A lot of it only makes sense when you start looking at large-volume production (I design electronics for household appliances, which is BIG volume, so reliability is extremely important). I have been on several training courses about this stuff, I use it all the time in my daily job, and I still barely understand half of it. That's not because I'm dumb (although this is /., so I'm sure you'll all tell me that I am), but because it is a lifetime's work to become an expert in reliability. Have a look at the work of Dorian Shainin for more information.
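To put numbers on the fleet example in the quoted passage, here is a quick check in Python. It assumes a constant failure rate while drives are within their service life, which is the usual simplification behind an MTBF figure.

```python
# Fleet-level reading of MTBF, using the figures quoted above: 100,000 drives
# in service, each rated at a 100,000-hour MTBF and retired within a 2-year
# service life. Assumes a constant failure rate while in service.
fleet_size = 100_000
mtbf_hours = 100_000
service_life_hours = 2 * 365 * 24          # ~17,520 hours

failures_per_hour = fleet_size / mtbf_hours
print(failures_per_hour)                   # 1.0 -> about one failed drive per hour

# Per drive, the expected number of failures over its 2-year service window
# is still small, which is why MTBF says little about individual lifespan.
print(service_life_hours / mtbf_hours)     # ~0.18 expected failures per drive
```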
  • Actually, (Score:1, Interesting)

    by Anonymous Coward on Thursday October 11, 2007 @12:55PM (#20941901)
    Are you aware of how they do their setup? All of their work systems have ramdisks that are loaded from elsewhere. As long as a system doesn't go down, its ramdisk remains valid. Somehow I doubt they will change their ramdisks.
  • by Plekto ( 1018050 ) on Thursday October 11, 2007 @02:03PM (#20942819)
    Yes, Windows memory requirements basically quadruple with virtual memory turned off (which is really what it is - no different than using system RAM for video, for instance, and just as much of a speed killer).

    Windows is a frighteningly bloated beast. But I'm pretty much preaching to the choir here I suspect.

    The way to deal with the swap file is a ramdisk: 3 gigs for Windows (assuming you're NOT stupid enough to be running Vista), and the remaining 1 gig that Windows doesn't usually touch becomes the swap file. Problem solved. You've just tricked Windows into using real RAM instead of the hard drive (as it should have been doing all along).

    It nearly quadruples speed in XP, btw.
  • by torkus ( 1133985 ) on Thursday October 11, 2007 @02:17PM (#20943005)
    Since wear leveling has been addressed (repeatedly) in the replies, I'll skip it.

    Instead, let's talk about how your 3-year-old U320 drive will kick the crap out of blah blah blah.

    In raw transfer speed, probably. SSDs do fall behind by varying degrees in raw transfer. However, raw transfer is rarely the most important aspect of a hard drive.

    Far more important is seek time. That's why your fancy SCSI drives spin at 10K or 15K RPM: the 4ms average seek gives them a big advantage over the 7-10ms of standard desktop hard drives. What's the seek time on SSDs? Generally around 100us, or 0.1ms. So if you'd sacrifice 2/3 of your drive capacity (1TB vs. 150-300GB for 15K) to halve your seek time, what would you sacrifice to improve it by at least an order of magnitude?

    Random seek is critically important for most servers and for many home uses too. In testing with SSDs, Windows boot time improved by about 20-30% depending on the situation. App load times also showed substantial improvements. Try throwing a sizable DB on an SSD and you'll be amazed at the performance, even without caching.

    So yes, for raw backups, very high data-rate streaming, etc., your SCSI drives might win out. For the majority of applications, SSD > U320 15K SCSI.
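To show why the seek-time gap dominates, here is a rough Python comparison based on the figures in that comment, with 8ms taken as a representative desktop seek time. It treats each random 4KB read as costing one full seek/access and ignores rotational latency, queuing, and transfer time, so the numbers are ballpark only.

```python
# Rough random-I/O throughput implied by the quoted seek/access times.
# Each random 4KB read is assumed to cost one full seek; rotational latency,
# queuing and transfer time are ignored, so treat these as ballpark figures.
drives = {
    "7200 RPM desktop (8 ms seek)": 8e-3,
    "15K RPM SCSI (4 ms seek)":     4e-3,
    "early SSD (0.1 ms access)":    0.1e-3,
}

for name, seek_seconds in drives.items():
    iops = 1.0 / seek_seconds                  # random operations per second
    mb_per_s = iops * 4096 / 1e6               # 4KB per random read
    print(f"{name}: ~{iops:,.0f} IOPS, ~{mb_per_s:.1f} MB/s random 4K")
```

The halved seek of a 15K SCSI drive roughly doubles random IOPS over a desktop drive, while the SSD's 0.1ms access time buys another factor of about forty, which is the order-of-magnitude argument the comment is making.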