Intel's Braidwood Could Crush SSD Market

Lucas123 writes "Intel is planning to launch its native flash memory module, code named Braidwood, in the first or second quarter of 2010. The inexpensive NAND flash will reside directly on a computer's motherboard as cache for all I/O and it will offer performance increases and other benefits similar to that of adding a solid-state disk drive to the system. A new report states that by achieving SSD performance without the high cost, Braidwood will essentially erode the SSD market, which, ironically, includes Intel's two popular SSD models. 'Intel has got a very good [SSD] product. But, they view additional layers of NAND technology in PCs as inevitable. They don't think SSDs are likely to take over 100% of the PC market, but they do think Braidwood could find itself in 100% of PCs,' the report's author said."
  • Not so sure (Score:5, Interesting)

    by mseeger ( 40923 ) on Friday September 04, 2009 @08:22AM (#29309633)
    When given similar performance but a slightly higher price, I would prefer the SSD. I can't take the onboard flash to the next PC as I can with an SSD. Hard disks have a higher life expectancy than mainboards (I usually find some good use for old HDs; I never did for old mainboards). Unless the SSD costs 2-3 times as much as the flash on the mainboard, I believe SSDs will still be used. But maybe this will lead to lower SSD prices.
    • Re:Not so sure (Score:5, Insightful)

      by Dogtanian ( 588974 ) on Friday September 04, 2009 @08:41AM (#29309785) Homepage

      I can't take the onboard flash to the next PC as I can with an SSD.

      Not really a big deal; if it becomes commonplace, most PCs will eventually have it (or something like it) as standard anyway and you won't be bothered about it.

      • Re:Not so sure (Score:5, Interesting)

        by jackharrer ( 972403 ) on Friday September 04, 2009 @10:53AM (#29311209)

        "(...)if it becomes commonplace, most PCs will eventually have it (...)"

        Which opens an interesting hole. That flash on the motherboard will hold some data to speed up system startup, i.e. the first n opened files. If the flash is big enough, it will also hold quite a lot of user documents. Unless documents can be marked as "not to be cached", it will add an extra headache when getting rid of old systems. We already have that problem with 419ers buying old PCs and smartphones, gangs dumpster diving, etc.

        Also try to explain to customers that they will need to erase flash they cannot see in the system (and will most probably not even know about!) or destroy the chip before throwing away an old system. With HDDs it's already quite hard, and those are big, visible and have been around for ages.

        • Re: (Score:3, Insightful)

          by oldspewey ( 1303305 )
          One would hope these motherboards would come with a BIOS option to erase the onboard flash.
        • Re: (Score:3, Interesting)

          by Hurricane78 ( 562437 )

          Why not offer a simple tool? I'd name it "Last Shutdown", and it would be kind of like saying goodbye to your old computer (in style).

          It would first ask if you have saved all your personal data outside the computer and/or removed that storage from the system.
          Then it would go and:
          - safely delete all the hard drives
          - safely delete all the flash storage/caches
          - equalize all other residues
          - safely delete all RAM content
          - empty all caches
          - etc.
          While showing a nice animation fitting to the theme.

          When done, it would
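
          In case it helps make the idea concrete, here is a minimal, purely illustrative Python sketch of the overwrite pass such a tool might perform. Everything in it is an assumption: the device list is a placeholder, a real tool would also want ATA/NVMe secure-erase for flash, and it is destructive by design, so treat it as a sketch rather than something to run as-is.

          import os

          DEVICES = ["/dev/sdX"]   # hypothetical placeholder; listing real devices makes this DESTRUCTIVE
          CHUNK = 4 * 1024 * 1024  # overwrite 4 MiB of random data at a time

          def wipe(device: str) -> None:
              with open(device, "r+b", buffering=0) as dev:
                  total = dev.seek(0, os.SEEK_END)   # size of the device
                  dev.seek(0)
                  written = 0
                  while written < total:
                      n = min(CHUNK, total - written)
                      dev.write(os.urandom(n))       # single random-data pass
                      written += n
                  os.fsync(dev.fileno())             # make sure it actually hit the media

          if __name__ == "__main__":
              for d in DEVICES:
                  print(f"Overwriting {d} ...")
                  wipe(d)
              print("Done. Motherboard flash caches and RAM would still need their own wipe steps.")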

      • Re:Not so sure (Score:5, Interesting)

        by Garridan ( 597129 ) on Friday September 04, 2009 @12:53PM (#29312875)

        Seems to me that this article is a thinly-veiled marketing trick. Somebody publishes a paper, "Will Intel product A beat Intel product B?", and presto, we've got buzz about product A which doesn't even come close to competing with product B (which is a market leader, dontchaknow), and increased buzz about product B. Then, people chime in with their arguments and counterarguments about which product is better... and Intel wins no matter what. Both product lines are probably going to succeed independently of one another.

        That said, Braidwood sounds awesome to me, especially because my servers talk to a storage box over NFS and a fast onboard cache sounds great. But I want fast local storage too, and 16GB is nothing, so I want large-capacity SSD drives. I really don't see these as competing products. This is just a slashvertisement. Move along, folks.

        • Re: (Score:3, Interesting)

          by Rei ( 128717 )

          My concern about this product is that flash degrades with each write cycle, so the smaller the disk, the faster you wear through it. Since this sounds like just a small buffer, I'd have concerns about it having a short lifespan.

    • Re:Not so sure (Score:5, Insightful)

      by Z00L00K ( 682162 ) on Friday September 04, 2009 @08:53AM (#29309903) Homepage Journal

      Whoever modded the parent as a troll must be weird.

      That said - I'm more worried about exhausted flash on the motherboard. Have all avenues actually been considered here, or is that a built-in best-before date that new motherboards will have?

      • That's what's worrying me too.

        MLC still hasn't improved its lower durability bound: 100K erase cycles. (Flash companies often advertise only the upper bound: 1-2.5Mln erase cycles. SLC has it at about 1Mln cycles.) And 100K erases for the flash - especially if one puts a FS's journal there - is really not that much, since the journal has to be updated often while the operation itself might not even reach the disk. E.g. on many systems the sequence will touch only the FS journal: create temp file, work with it for a sp

        • Re: (Score:2, Troll)

          by ErikZ ( 55491 ) *

          That's about 68 erases per sector PER DAY if I want my new SSD to last 4 years.

          In 4 years I expect SSD tech to be much cheaper, faster, and able to contain far more data. I am not concerned.

          I'd also use these on servers, barring a DB server or some kind of caching server.
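
          For anyone who wants to sanity-check that figure, the back-of-envelope math (using the 100K erase-cycle lower bound quoted upthread and assuming perfect wear leveling) is just:

          # 100K erase cycles spread evenly over a 4-year lifetime
          ERASE_CYCLES = 100_000
          LIFETIME_DAYS = 4 * 365

          erases_per_block_per_day = ERASE_CYCLES / LIFETIME_DAYS
          print(f"{erases_per_block_per_day:.1f} erases per block per day")  # ~68.5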

          • Re: (Score:3, Insightful)

            by Hognoxious ( 631665 )

            That's about 68 erases per sector PER DAY if I want my new SSD to last 4 years.

            TFA says it'll be used for an I/O cache, so I suspect it'll get hit slightly more often than that.

          • Re:Not so sure (Score:4, Interesting)

            by jackharrer ( 972403 ) on Friday September 04, 2009 @10:59AM (#29311267)

            Actually not. It is going to store, quite permanently, only some files used to speed up system processing. There are not going to be any journals, and the filesystem will be highly optimised for this kind of usage. That is from a press release I read somewhere. So even MLC will last a long time, as writes will be very limited. The only issue is that, to drive costs down, the controller is also going to be scaled down, so no great magic as with SSDs. So if somebody hacks that flash to use it as an HDD, it will wear quite badly and quickly.

          • Re: (Score:3, Insightful)

            by gabebear ( 251933 )
            If a 16GB Braidwood used a revolving cache, where any data not already in flash was read from disk and written over the oldest data in flash, then you would see very few erase cycles per day per block. You would need to do more than 16GB of disk IO to eat up one of the 100K erase cycles.

            With intelligent cache techniques you should be able to get the erase-cycle count for each block very low.
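
            A rough model of that argument in a few lines of Python (all three figures are illustrative assumptions, not Braidwood specs):

            # A 16GB cache written strictly in rotation erases each block once per 16GB of cached I/O
            CACHE_GB = 16
            ERASE_CYCLES = 100_000        # endurance lower bound quoted upthread
            DAILY_CACHED_IO_GB = 32       # assumed I/O flowing through the cache per day

            erases_per_block_per_day = DAILY_CACHED_IO_GB / CACHE_GB
            years_to_limit = ERASE_CYCLES / erases_per_block_per_day / 365
            print(f"{erases_per_block_per_day:.1f} erase cycles per block per day")  # 2.0
            print(f"~{years_to_limit:.0f} years before any block hits its limit")    # ~137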
    • Re:Not so sure (Score:5, Informative)

      by vidarh ( 309115 ) <vidar@hokstad.com> on Friday September 04, 2009 @09:31AM (#29310231) Homepage Journal
      It's *cache*. It's not meant to be moved, and it doesn't prevent you from moving the hard drive. Nor does it prevent you from using an SSD, it just means the performance reasons for using an SSD may get significantly reduced.
      • So it only caches reads then?

      • Re:Not so sure (Score:5, Interesting)

        by Spoke ( 6112 ) on Friday September 04, 2009 @02:26PM (#29314593)

        Which brings up an interesting design thought:

        Battery-backed (BBU) RAID controllers with volatile RAM cache are very common in the server market because of the huge performance increase for small random writes.

        The RAM cache lets the controller cache writes and then send them to the disk in batches, performing write combining so multiple small writes get turned into larger writes, reducing the number of disk seeks required to store the data. Also think of the case where your controller has a 512MB cache and you write 200MB to disk. The controller can say OK as soon as it's written to RAM (a fraction of a second), whereas your typical fast disk these days will take 2 seconds.

        Without having a battery to back up the volatile RAM cache, you could lose a lot of data if the server lost power, but with it, you can go at least a couple of days without losing data.

        So now, let's replace that 512MB BBU RAM cache with a 16GB SLC SSD. You won't quite get the burst speed of the BBU RAM controller, but in sustained server loads performance should be a lot better. The SSD will also be able to store a lot more data for reads. If the controller is smart and only uses the SSD for caching random read patterns, you could get close to SSD performance for a lot of workloads but still have 1TB of disk storage.
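
        The write-combining idea at the heart of that design is easy to sketch. The toy Python class below is an illustration only (the class name, block size, flush threshold and the backing-store interface are all made-up assumptions): small writes are absorbed in a staging area and flushed as a few large sequential writes.

        class WriteBackCache:
            """Toy write-combining cache: stages small block writes and flushes
            them as coalesced runs of adjacent blocks. Illustrative sketch only."""

            def __init__(self, backing, block_size=4096, max_dirty=1024):
                self.backing = backing        # assumed to expose write_at(offset, data)
                self.block_size = block_size
                self.max_dirty = max_dirty
                self.dirty = {}               # block number -> latest data for that block

            def write(self, block_no, data):
                self.dirty[block_no] = data   # rewrites of the same block are absorbed here
                if len(self.dirty) >= self.max_dirty:
                    self.flush()

            def flush(self):
                # Merge runs of adjacent dirty blocks into single large writes,
                # turning many small seeks into a few sequential ones.
                run_start, run_data = None, []
                for block_no in sorted(self.dirty):
                    if run_start is not None and block_no == run_start + len(run_data):
                        run_data.append(self.dirty[block_no])
                    else:
                        if run_start is not None:
                            self.backing.write_at(run_start * self.block_size, b"".join(run_data))
                        run_start, run_data = block_no, [self.dirty[block_no]]
                if run_start is not None:
                    self.backing.write_at(run_start * self.block_size, b"".join(run_data))
                self.dirty.clear()

        A flash-backed version of that staging area is what would let the controller acknowledge writes immediately and still survive a power cut, which is exactly the BBU-replacement scenario described above.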

    • Re: (Score:3, Interesting)

      by gmuslera ( 3436 )
      What if you take it as a "cache", one that survives reboots, but where, if you really want data persistence, you back it up to a more transportable device? It will probably be pretty fast (maybe faster than normal SSDs, at least as far as the bus connection goes), and keeping, e.g., the most requested files, database slaves for fast queries, swap/temp partitions or even the OS on it could improve typical PC performance a lot.
      • What if you take it as a "cache", one that survives reboots,

        The big thing I see here isn't surviving intentional reboots for efficiency - i.e. stuff cached pre-boot would still be available without a spinning-disk read post-boot. For that matter, I'd be wary of such a feature (it would need to be well implemented and very well tested to deal with odd circumstances like disk connections being rearranged physically between shutdown and restart).

        The two big advantages here are standard cache/buffer behaviour during active system use, and written data surviving an unexpect

    • Re:Not so sure (Score:5, Insightful)

      by toppavak ( 943659 ) on Friday September 04, 2009 @10:29AM (#29310913)
      Didn't they already try this with their Turbo Memory stuff? I seem to recall the general consensus [anandtech.com] being that it doesn't really offer any remarkable benefits. Regardless of how fast the cache is, eventually you run apps or open files that can't live on it 24x7 and you're going to revert to magnetic HD performance limits. This might improve battery life and performance for some apps, but it's not going to give you the across-the-board speed and battery life boosts that SSDs do. While this would certainly result in a better experience for the average computer user, I feel like it's going to be relegated to a middle ground between HDDs and SSDs, augmenting the low end but by no means obsoleting the high end.
    • Re: (Score:3, Interesting)

      by Krneki ( 1192201 )
      The boards I use hardly ever fail, but HDDs, oh boy, they like to pop those bad blocks.
    • Re: (Score:3, Insightful)

      by Fweeky ( 41046 )

      I would expect they'd be using some sort of slot, something like this [scan.co.uk]. Motherboard manufacturers aren't exactly going to be thrilled at the idea of putting yet more expensive components on there, but they might be happy to hook up a small ZIF socket like some of them do with CF.

      Intel actually had some weird ZIF-connected SSDs up for preorder a while ago, but they appear to have disappeared.

      Either way, it's nice to see some hybrid storage stuff which isn't ZFS L2ARC (zpool add tank cache /d

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Friday September 04, 2009 @08:23AM (#29309643)
    Comment removed based on user account deletion
    • Capacity is still an issue though. Although in enterprise storage SSDs offer a lower cost per transaction and provide a real benefit, those setups still have massive amounts of HDDs for storage on the lower tier. Outside of work, where I would be classed as a standard consumer, it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.
      • by Anonymous Coward on Friday September 04, 2009 @08:34AM (#29309739)

        Just goes to show how warped a professional's perspective really is. Standard consumer with 4TB of data? Really?

        • Re: (Score:3, Insightful)

          by kimvette ( 919543 )

          The RIAA and MPAA would have you believe that each man, woman, and child has downloaded at least that much in illegal movies and music.

        • Re: (Score:3, Insightful)

          by KC7JHO ( 919247 )
          Ya really, no one will ever need more than 64k, right?
        • by camperdave ( 969942 ) on Friday September 04, 2009 @08:54AM (#29309915) Journal
          Consumer electronics store shelves are packed with terabyte sized hard drives. 4TB may be a little ahead of the curve, but not by much.
          • Re: (Score:3, Informative)

            No, actually having 4TB of data is way ahead of the curve. The "standard consumer" has maybe 100GB worth of (non-OS) data on a drive, even if the drive is 1TB.
            • by gclef ( 96311 )

              I think people are going to have a lot more than that when recording HDTV with a Tivo-alike device. 1TB works out to about 100-ish hours (yes, I'm rounding heavily) of HD video. Tivo certainly has users who record & keep that much video.

        • A friend of mine studies architecture. He stores several TB in DSLR photos and renderings on his desktop machine. Another friend of mine stores all his audio CDs, DVDs and BluRays and lots of TV recordings on a little server for his HTPC, he recently reached 3TB.

          They are not "standard consumers", but they are not hard-core nerds, either. Storage is so cheap to acquire (and so easy to use) that people can afford to not delete anything ever again. Whether that is sensible is a whole other point. But the resul

        • Re: (Score:2, Informative)

          by linzeal ( 197905 )
          Everyone I know has at least a few TB of data on burnt CDs and DVDs. It would be nice to be able to consolidate your multimedia stuff into one storage device. I'm running 8 terabytes of data on 10 1TB hard drives in an ATA over Ethernet [sourceforge.net] setup [freshmeat.net] in RAID 6. So yeah, I'm probably not your average consumer, but being able to access thousands of hours of movies, TV and home video without having to pay Netflix or watch the ads on Hulu is pretty nice.
      • Capacity is still an issue though. [..] it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.

        Go back and read what he said. It's clear that he was talking about the near to middle future, not the current situation:-

        Sooner or later, no moving parts beats moving parts. The magnetic disk makers have done an amazing job so far, but eventually they're going to lose out to solid-state.

        Flash memory is at present growing in capacity much faster than magnetic drives. (Actually, it's growing at the rate that the latter grew at during the 1990s and early 2000s.) Of course, it's still got a long way to go to catch up, and, like hard drives, it's not guaranteed that it'll keep that rate of growth forever. Still, the current shape of solid state's curve has it intersecting th

        • I don't think SSDs have that long a way to go to catch up. SSD: 512GB in a 2.5" box; hard disk: 640GiB in a 2.5" box. That's only a small difference, and until last week, the SSD was in the lead. The only reason SSDs trail in the 3.5" drive stakes is the same reason they trail on price: not enough factories have been built yet. Fix that (it probably won't take long), and SSDs will very rapidly be competing with HDDs capacity-wise.

        • Re: (Score:3, Interesting)

          by Znork ( 31774 )

          Flash memory is at present growing in capacity much faster than magnetic drives.

          If magnetic drives really push the capacity growth that might not hold; magnetic drives have shrunk in size and increased rotational speeds to decrease latency during that time as well. If they just simply give up the performance race and go for vast capacity they could move back to 5 1/4 full height disks. Can you imagine the amount of data you could stick on that surface area with modern technology? I wouldn't be surprised if

      • by dissy ( 172727 ) on Friday September 04, 2009 @10:41AM (#29311059)

        Capacity is still an issue though.

        Not really for most people.

        The last few 'standard consumers' I worked on systems for were all quite upset at being forced into purchasing a 'way too big' 300GB hard drive, simply because any drive under 100GB is both very hard to find and likely expensive in comparison. 500GB was a waste to them, when they only sync their camera once a month and have Office and a couple of games installed.

        Outside of work where I would be classed as a standard consumer, it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.

        You are not allowed to use "standard consumer" and "4TB of data" in the same sentence :P
        Careful, they might swoop in and hole punch a warning into your geek card!

        Anything >= 2TB is far, far above the standard consumer. Even 1TB is far above the average consumer, although 1TB still falls well within the power user and average gamer groups.

        • Re: (Score:3, Informative)

          by Spoke ( 6112 )

          For customers who only want/need 100GB of storage, SSDs are the way to go. They do currently cost a lot more than rotating storage, but an SSD makes a HUGE difference in the apparent performance of many day-to-day tasks.

          A good 120GB SSD like the OCZ Agility costs about $300 compared to $40 for a 160GB SATA drive, so the price premium is huge.

          BTW - I'm not sure why you say drives smaller than 300GB are hard to find - or why your customers complain about it. NewEgg has a ton of drives smaller than that with t

    • by Kjella ( 173770 )

      Perhaps. But they still have a long way to go on $/GB. Just checking my local price guide, it's 0.09 $/GB for 1.5TB HDDs and 3.75 $/GB for the cheapest SSD. But yeah, booting off an SSD and having an HDD for media, sure.

  • by IcephishCR ( 7031 ) on Friday September 04, 2009 @08:26AM (#29309667) Journal

    Now if only they could start following the server-side folks and place an internal USB connector inside. Then MS and others could give us the OS on its own USB drive (read-only), we could use the hard drive for updates and programs, and we could enhance security as well...

    • Comment removed based on user account deletion
      • This used to be a huge PITA before GRUB supported UUIDs as groot values. It was an even bigger one before Linux would do it. On Windows you need special tricks, because Windows doesn't like to be installed there; it works on some netbooks with "special" BIOS. I think the specialness is at least partly from their EFIness but I'm just kind of firing in the dark here. I have a 4G Surf and an Aspire One, both will allegedly play this trick. Actually, I have a DT Research DT366 which seems to have some sort of U

        • by drsmithy ( 35869 )

          On Windows you need special tricks, because Windows doesn't like to be installed there; it works on some netbooks with "special" BIOS. I think the specialness is at least partly from their EFIness but I'm just kind of firing in the dark here. I have a 4G Surf and an Aspire One, both will allegedly play this trick. Actually, I have a DT Research DT366 which seems to have some sort of USB disk emulation mode also.

          Basically, it just needs to appear to the OS as a "fixed disk" rather than a "removable disk".

          • This is why I keep hoping someone will produce a complete computer architecture designed to be virtualized, so that I can genuinely run Windows and Linux (for example) and have both have access to the hardware. I'm tired of deciding who can access the video card. On a system with unified memory it seems especially silly that I can't do this gracefully.

    • by zrq ( 794138 ) on Friday September 04, 2009 @09:17AM (#29310105) Journal

      Why a USB connector? That causes the same problem as making SSD cards use the SATA interface - the serial interface becomes slower than the things it is connected to.

      What I would like to see is a set of sockets on the motherboard, mapped into the main memory address space (not PCI), a physical switch on the board to make them read only and software in the BIOS to make them look like a bootable disk.

      Four sockets with 16 or 32G in each would give you enough space to store the entire OS. I don't know how Windows would handle it, but in a Unix or Linux based system it would be fairly easy to mount the devices as read only partitions and map them into the filesystem. This would be ideal for a server system, mapping the entire OS into the main memory address space and making it read only.

      In fact all the BIOS would need to do is make the first 100M visible as a boot partition, and leave the OS to handle the rest.
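
      There is no such socket today, but the user-space half of the idea (a read-only, memory-mapped view of a flash region) can already be sketched on Linux. Everything below is an assumption for illustration: the device path is a placeholder for whatever node the firmware might expose, and a real implementation would live in the kernel rather than in a Python script.

      import mmap, os

      DEVICE = "/dev/mtdblock0"   # hypothetical placeholder for the exposed flash region

      fd = os.open(DEVICE, os.O_RDONLY)
      try:
          length = os.lseek(fd, 0, os.SEEK_END)              # size of the region
          view = mmap.mmap(fd, length, prot=mmap.PROT_READ)  # map it read-only
          header = view[:512]                                # e.g. peek at a boot sector
          print(f"mapped {length} bytes read-only, first bytes: {header[:16].hex()}")
          view.close()
      finally:
          os.close(fd)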

      • What I would like to see is a set of sockets on the motherboard, mapped into the main memory address space (not PCI), a physical switch on the board to make them read only and software in the BIOS to make them look like a bootable disk.

        The one issue here is address space. Unless people do a wholesale migration to 64-bit, it won't be possible to simply map the address space of such a device into memory.

  • by Rogerborg ( 306625 ) on Friday September 04, 2009 @08:27AM (#29309675) Homepage
    What does it do, scream "Nooooooooo!" and throw itself underneath the hard drive in slow motion?
  • HW buffer for drives (Score:3, Interesting)

    by Keruo ( 771880 ) on Friday September 04, 2009 @08:28AM (#29309681)
    Sounds like a good plan. Throw cheap battery-backed memory, 4-16GB, onboard to act as a transparent buffer between the hard drive(s) and the system.
    Fast IO is ensured as most operations happen in memory, and data loss isn't an issue as the memory is battery backed.
    RAID cards have done this for ages, but it's becoming a real option for desktops as memory prices keep declining.
    16GB might be overkill for most purposes; you could get away with 2GB if the system is used only for low-power tasks like surfing and email.
    • by natehoy ( 1608657 ) on Friday September 04, 2009 @08:36AM (#29309747) Journal

      I agree, but why would Intel want to use flash memory for this? RAM is faster, has the capability of a LOT more read/write cycles, and could be backed up by a small battery in the case of short power outages (or maybe a battery big enough to run the hard drive long enough to flush the write buffer, as others have said).

      This is essentially a cache, which means it's going to get a lot of reads and writes. Under those circumstances, the flash memory's going to wear out relatively quickly and unless it's easily replaceable it means everyone's going to need to buy new motherboards every year. How could forcing people to replace motherboards annually possibly benefit Intel? Oh, wait...

      • Re: (Score:2, Insightful)

        by Truekaiser ( 724672 )

        It's called planned obsolescence, due to the amount of read/write cycles on the I/O and the fact that all flash memory is limited to a certain number. If they integrate this into the motherboard, it means the motherboard has an expiration date they can predict and design around. In such a situation the flash will mostly last about a year or two.

        Speed has nothing to do with it, because you're /still/ bound to the data flush to the disk drive, which will be much slower. Data security between crashes seems to only be a

      • by drsmithy ( 35869 )

        I agree, but why would Intel want to use flash memory for this?

        Because, according to TFA, it's 1/4 the price of DRAM.

        This is essentially a cache, which means it's going to get a lot of reads and writes. Under those circumstances, the flash memory's going to wear out relatively quickly and unless it's easily replaceable it means everyone's going to need to buy new motherboards every year.

        The SLC flash they're talking about will almost certainly last longer than the hard drives it is caching.

      • This is essentially a cache, which means it's going to get a lot of reads and writes.

        No, it doesn't mean that. It's a disk cache, not a memory cache. Meaning, only file operations will hit it. The number of writes will be just the same as on the SSD drives which millions of people already have.

        It won't replace SSD drives anyway. A bigger cache doesn't help much at all after a point, and RAM is getting so cheap most systems have plenty of file caching. What the SSD drive gives you is near-instant access to

      • Re: (Score:3, Interesting)

        by gabebear ( 251933 )
        The main thing this would do that battery-backed DRAM wouldn't is allow for quick boot and hibernate, which is something the enterprise people generally don't care about. The flash looks like it will be replaceable via a DIMM-like slot. http://news.cnet.com/8301-13924_3-10258748-64.html [cnet.com] and http://www.hardware.info/en-UK/news/ymiclpqWwpyaaJY/Computex09_Intel_P55_motherboard_gallery/ [hardware.info]

        The other thing this does is bypass the "slow" SATA interface. We have laptop SSD drives that saturate SATA 3.0 and ne
    • Re: (Score:2, Informative)

      by erple2 ( 965161 )

      Sounds like a good plan. Throw cheap battery backed memory, 4-16Gb onboard to act as a transparent buffer between harddrive(s) and system.

      Do you mean gigabit or gigabyte? Also, 16 gigabytes of RAM right now isn't very cheap at all. The cheapest DDR2 memory I've seen is about $12.50 per gigabyte, so that's an additional $200 per 16 gigabytes. Is that a good price to pay for some potential increase in speed? IMO, that's what I'd call "extremely hard to justify" for a consumer.

      RAID cards have done this for ages, but it's becoming real option for desktops as memory price keeps declining.

      Meh, even the most expensive RAID cards loaded up with tons of RAM aren't as fast as a couple of Intel SSDs right now, so why bother with the expense?

  • by bboy_doodles ( 170264 ) on Friday September 04, 2009 @08:33AM (#29309723)

    There have also been rumors, however, that Braidwood has been canceled, at least in the near term:
    http://www.dvhardware.net/article37368.html [dvhardware.net]

  • by BESTouff ( 531293 ) on Friday September 04, 2009 @08:33AM (#29309727)
    If the onboard flash is a cache, that means it will be used frequently, so it will wear faster. Won't that mean you're more likely to corrupt your data, even if your HD is still good?
    • by John_Booty ( 149925 ) <johnbooty@booty p r o j e c t . o rg> on Friday September 04, 2009 @08:58AM (#29309953) Homepage

      SLC flash memory, which the article claims Braidwood will use, is an order of magnitude or two more durable (in terms of write cycles) than MLC flash memory, which is what is used in most consumer-level devices like Intel's X-25M SSDs.

      Wear-leveling and overprovisioning should ensure a long life for the memory used in a scheme like Braidwood. Intel, generally speaking, knows what they're doing in this area. Now if only I could afford one of their drives...

    • Re: (Score:2, Interesting)

      by jcaplan ( 56979 )
      No. When flash fails it becomes unwritable, not unreadable. Your data is safe, your capacity declines.
    • Reliability: 100 TB (Score:3, Informative)

      by tepples ( 727027 )

      If the onboard flash is a cache, that means it will be used frequently, so it will wear faster. Won't that mean you're more likely to corrupt your data, even if your HD is still good?

      NAND flash chips are generally guaranteed for at least 100,000 erases per block. As I understand this Braidwood chip, it's a non-volatile ring buffer [wikipedia.org] for data writes. Ring buffers are the easiest thing to wear-level, meaning you can just multiply the cache capacity devoted to writes (let's say 2 GB) by the longevity guarantee to get 200 TB of buffered writes before any failure occurs. And not all blocks on a flash chip fail after the same number of writes; you'll just start to lose ring buffer capacity over
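
      Spelled out, the arithmetic looks like this (the figures are the assumptions stated above plus the 20GB/day example workload quoted elsewhere in the thread, not Braidwood specs):

      # 2 GB write region x 100,000 guaranteed erases per block
      WRITE_REGION_GB = 2
      ERASE_GUARANTEE = 100_000
      DAILY_WRITES_GB = 20

      total_buffered_writes_tb = WRITE_REGION_GB * ERASE_GUARANTEE / 1000
      years = total_buffered_writes_tb * 1000 / DAILY_WRITES_GB / 365
      print(f"{total_buffered_writes_tb:.0f} TB of buffered writes")   # 200 TB
      print(f"~{years:.0f} years at {DAILY_WRITES_GB} GB/day")         # ~27 years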

  • by MasterOfGoingFaster ( 922862 ) on Friday September 04, 2009 @08:50AM (#29309875) Homepage

    Funny - this very thing was being discussed around 1985 (I think), but using battery-backed RAM as a way to reduce boot time. The thinking was people wouldn't put up with a computer that took 30 seconds to start, and if we didn't have a 2-5 second boot time (equal to a TV), the personal computer would never fly. But since it took from 1985 (80386 chip) to 1995 (Windows 95) for a 32-bit OS to become popular, maybe 25 years is reasonable.

    Or not. Man, this industry moves at a snail's pace in a lot of areas. Why do we still live with the x86 instruction set? Is "the year of UNIX" here yet?

    Anyway, three competitors will emerge:

    - Someone will put NAND directly on the drive, and get an instant speed improvement. All the tech sites will rave about it and it will be an instant must-have item.

    - Their competitor will figure out a way to put the OS files in NAND, for fast booting, via a utility or firmware. The marketing war begins.

    - The third competitor will work with Microsoft or Apple to get OS support for fast boot. Apple will get there first and you'll see a commercial on TV with the Mac guy wondering why the PC guy takes the entire commercial to wake up.

    In a single-drive system, the cost will be about the same. Doing it on the drive will create an instant performance boost on any machine, and be well worth the estimated $10 added cost.

    • by MobyDisk ( 75490 )

      - Someone will put NAND directly on the drive, and get an instant speed improvement. All the tech sites will rave about it and it will be an instant must-have item.

      Several manufacturers did this, but it didn't offer much benefit over the existing DRAM caches that are on the drives. Further evidence of this is that Microsoft's ReadyBoost [wikipedia.org] does this, and provides no major benefit. Bottom line: Just get more RAM in your machine, or buy a drive with a bigger cache.

      - Their competitor will figure out a way to put the OS files in NAND, for fast booting, via a utility or firmware. The marketing war begins.

      Already covered. Windows XP and above create a Prefetch folder that holds the files needed during bootup in a nice contiguous block. Once you do that, putting it into NAND doesn't matter since seek time becomes mo

    • Why do we still live with the x86 instruction set?

      Three reasons:

      • The x86 and x86-64 instruction encodings are fairly dense, which lets more instructions fit in the level 1 instruction cache. Code density is the same thing that inspired ARM to invent the Thumb encoding.
      • Path dependence [wikipedia.org]: The x86 architecture is a known quantity with economies of scale from Intel and AMD. Case in point: Mac computers switched from IBM and Freescale CPUs to Intel CPUs in 2006 because they were cheaper for the same performance.
      • More path dependence: Not all software is shipped as so
  • From the article:

    Braidwood, which is expected to offer anywhere from 4GB to 16GB capacity, will only raise the cost of a PC by about $10 to $20 per system, according to Jim Handy, the Objective Analysis analyst who authored the report.

    When comparing that cost increase with the overall cost of a brand new PC it doesn't raise any red flags. Nonetheless, what must be said is that, as this Braidwood technology "resides directly on the motherboard" (i.e., it's yet another component embedded in a motherboard

    • by drsmithy ( 35869 )

      Is a 4GB buffer really capable of successfully buffering all that data?

      Run some benchmarks before and after disabling the 16-32MB cache on your hard disk. You might be surprised.
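
      A crude way to run that experiment, as a Python sketch (the file path, block size and iteration count are arbitrary choices, and the drive's write cache is toggled outside the script, e.g. with hdparm -W0 / -W1 on Linux):

      import os, time

      TEST_FILE = "cache_test.bin"
      BLOCK = os.urandom(4096)
      ITERATIONS = 2000

      with open(TEST_FILE, "wb") as f:
          start = time.perf_counter()
          for _ in range(ITERATIONS):
              f.write(BLOCK)
              f.flush()
              os.fsync(f.fileno())   # push each small write past the OS cache
          elapsed = time.perf_counter() - start

      print(f"{ITERATIONS} synced 4KiB writes in {elapsed:.2f}s ({ITERATIONS / elapsed:.0f} IOPS)")
      os.remove(TEST_FILE)

      Run it once with the drive cache enabled and once disabled; the difference is roughly what that small cache buys you for synchronous small writes.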

  • by thue ( 121682 ) on Friday September 04, 2009 @09:40AM (#29310335) Homepage

    The buffer should obviously be on the hard disk. That way the data on the disk will always be in sync, even if there are writes buffered in the flash cache when the computer loses power. I can't see a good reason to put it on the motherboard instead. Especially as most consumer systems have exactly one HDD.

    The article says that the flash buffer could work for "all system IO". I can only think of optical disks and flash drives as possibilities other than hard disks. But optical disks are interchangeable, so they have to be reread on each use anyway, and could just as well be cached in RAM. And it makes no sense to cache flash drives in a flash cache...

  • Hybrid drives are a few years old [wikipedia.org], but apparently not very popular.

    Samsung makes some [samsung.com] with 256 MB of on-drive NAND flash.

    I do have to question the effectiveness in multiple-drive scenarios. And they talk about 4 GB of space - how do you avoid getting your page file stored on it? And how quickly will the 4 GB be worn out and read-only? From the latest AnandTech article on SSDs [anandtech.com]:

    Intel estimates that even if you wrote 20GB of data to your drive per day, its X25-M would be able to last you at least 5 years. Real

  • What nonsense (Score:5, Insightful)

    by PopeRatzo ( 965947 ) * on Friday September 04, 2009 @10:51AM (#29311177) Journal

    Is this the latest FUD? That if a company brings out a successful product that's priced cheaply it'll "erode the market"?

    How did the "market" become so sacred that it must be preserved at all costs by keeping prices high? It's really funny the crap that'll come out of an MBA's mouth. He'll be all for "free markets" until someone comes along with a better product, and then he'll start to squeal that the "market" is under siege.

    Good for Intel.

  • by Gothmolly ( 148874 ) on Friday September 04, 2009 @01:08PM (#29313053)

    Just add the extra 32GB of RAM to the OS, and let it more intelligently manage the data.
