



Intel's Braidwood Could Crush SSD Market
Lucas123 writes "Intel is planning to launch its native flash memory module, code named Braidwood, in the first or second quarter of 2010. The inexpensive NAND flash will reside directly on a computer's motherboard as cache for all I/O and it will offer performance increases and other benefits similar to that of adding a solid-state disk drive to the system. A new report states that by achieving SSD performance without the high cost, Braidwood will essentially erode the SSD market, which, ironically, includes Intel's two popular SSD models. 'Intel has got a very good [SSD] product. But, they view additional layers of NAND technology in PCs as inevitable. They don't think SSDs are likely to take over 100% of the PC market, but they do think Braidwood could find itself in 100% of PCs,' the report's author said."
Not so sure (Score:5, Interesting)
why flash? (Score:1, Interesting)
I mean, why not put cheaper DDR RAM on the motherboard, with a big capacitor or battery to allow it to flush all writes out to disc when the power stops?
In fact, why bother putting ram as an IO cache when you could add the RAM to the motherboard anyway and allow the OS to cache writes. Intel - stop thinking like this, and just hand out free 1GB DRAM sticks with every motherboard, job solved.
Ohh - maybe they could take it to the next step... (Score:4, Interesting)
Now if only they could start following the server-side folks and place an internal USB connector inside. Then MS and others could give us the OS on its own USB drive (read-only), we could use the hard drive for updates and programs, and we could enhance security as well...
HW buffer for drives (Score:3, Interesting)
Fast I/O is ensured as most operations happen in memory, and data loss isn't an issue as the memory is battery-backed.
RAID cards have done this for ages, but it's becoming a real option for desktops as memory prices keep declining.
16GB might be overkill for most purposes; you could get away with 2GB if the system is used only for light tasks like surfing and email.
On-Drive NAND also quite likely (Score:4, Interesting)
Funny - this very thing was being discussed around 1985 (I think), but using battery-backed RAM as a way to reduce boot time. The thinking was people wouldn't put up with a computer that took 30 seconds to start, and if we didn't have a 2-5 second boot time (equal to a TV), the personal computer would never fly. But since it took from 1985 (80386 chip) to 1995 (Windows 95) for a 32-bit OS to become popular, maybe 25 years is reasonable.
Or not. Man, this industry moves at a snail's pace in a lot of areas. Why do we still live with the x86 instruction set? Is "the year of UNIX" here yet?
Anyway, three competitors will emerge:
- Someone will put NAND directly on the drive, and get an instant speed improvement. All the tech sites will rave about it and it will be an instant must-have item.
- Their competitor will figure out a way to put the OS files in NAND, for fast booting, via a utility or firmware. The marketing war begins.
- The third competitor will work with Microsoft or Apple to get OS support for fast boot. Apple will get there first and you'll see a commercial on TV with the Mac guy wondering why the PC guy takes the entire commercial to wake up.
In a single-drive system, the cost will be about the same. Doing it on the drive will create an instant performance boost on any machine, well worth the estimated $10 added cost.
Re:How about the reliability ? (Score:2, Interesting)
Re:why flash? (Score:2, Interesting)
> Your OS doesn't always have time to shut down properly. Don't think anyone's fond of the idea of having their last couple of saves go poof because Windows crashed.
So, what happens if my PC crashes because of some hardware failure and I have to plug in a different HDD for some reason? Or plug the HDD into a different mainboard? All the things I thought I wrote to the disk will be gone. In fact, the file system might be inconsistent if this thing doesn't honor flush requests. But if it does honor flush requests then nothing is gained, it'll still be the OS that does all the caching.
Well, it'll still be a great read cache; a 4-16GB read cache is more than most people have as a RAM cache, so it'll be good for something.
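The write-back vs. write-through distinction being argued here can be sketched in a few lines (hypothetical Python, not Intel's design; real caches work on disk blocks through the controller):

```python
# Hypothetical sketch of the argument above: a write cache only buys
# speed while it is allowed to defer flushes; honouring every flush
# makes it no better than the OS's own caching.

class WriteBackCache:
    def __init__(self):
        self.disk = {}    # the spinning HDD
        self.cache = {}   # the NAND buffer: dirty blocks not yet on disk

    def write(self, block, data):
        # Fast path: acknowledge as soon as the block is buffered.
        self.cache[block] = data

    def read(self, block):
        # Serve from the buffer first, fall back to the disk.
        return self.cache.get(block, self.disk.get(block))

    def flush(self):
        # Commit every dirty block; only now is the data durable.
        self.disk.update(self.cache)
        self.cache.clear()

c = WriteBackCache()
c.write(0, b"last save")
assert c.read(0) == b"last save"   # visible through the cache...
assert 0 not in c.disk             # ...but gone if power dies here
c.flush()
assert c.disk[0] == b"last save"   # durable only after the flush
```

If the buffer is moved to a different machine before `flush()`, the dirty blocks go with it, which is exactly the swap-the-HDD problem described above.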
Re:Ohh - maybe they could take it to the next step (Score:5, Interesting)
Why a USB connector? That causes the same problem as making SSD cards use the SATA interface: the serial interface becomes slower than the things it is connected to.
What I would like to see is a set of sockets on the motherboard, mapped into the main memory address space (not PCI), a physical switch on the board to make them read only and software in the BIOS to make them look like a bootable disk.
Four sockets with 16 or 32GB in each would give you enough space to store the entire OS. I don't know how Windows would handle it, but in a Unix or Linux based system it would be fairly easy to mount the devices as read-only partitions and map them into the filesystem. This would be ideal for a server system, mapping the entire OS into the main memory address space and making it read-only.
In fact all the BIOS would need to do is make the first 100M visible as a boot partition, and leave the OS to handle the rest.
The flash buffer should be on the HDD (Score:5, Interesting)
The buffer should obviously be on the hard disk. That way the data on the disk will always be in sync, even if there are writes buffered in the flash cache when the computer loses power. I can't see a good reason to put it on the motherboard instead. Especially as most consumer systems have exactly one HDD.
The article says that the flash buffer could work for "all system IO". I can only think of optical disks and flash drives as possibilities other than hard disks. But optical disks are interchangeable, so they have to be reread on each use anyway, and could just as well be cached in RAM. And it makes no sense to cache flash drives in a flash cache...
Re:Not so sure (Score:3, Interesting)
Re:The writing's on the wall. (Score:5, Interesting)
Capacity is still an issue though.
Not really for most people.
The owners of the last few systems I have worked on for 'standard consumers' were all quite upset at being forced into purchasing a 'way too big' 300GB hard drive, simply because any drive under 100GB is both very hard to find and likely expensive in comparison. 500GB was a waste to them, when they only sync their camera once a month and have Office and a couple of games installed.
Outside of work, where I would be classed as a standard consumer, it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.
You are not allowed to use "standard consumer" and "4TB of data" in the same sentence :P
Careful, they might swoop in and hole punch a warning into your geek card!
Anything >= 2TB is far, far above the standard consumer. Even 1TB is far above the average consumer, although 1TB still falls well within the power-user and average-gamer groups.
Re:Not so sure (Score:5, Interesting)
"(...)if it becomes commonplace, most PCs will eventually have it (...)"
Which opens an interesting hole. That flash on the motherboard will hold some data to speed up system startup; that means the first n opened files. With flash big enough, it will also hold quite a lot of user documents. Unless documents can be marked as "not to be cached", it will add an extra headache when getting rid of old systems. We already have this problem with 419ers buying old PCs and smartphones, gangs dumpster-diving, etc.
Also, try explaining to customers that they will need to erase flash they cannot see in the system (and most probably won't even know about!) or destroy the chip before throwing away an old system. It's hard enough already with HDDs, and those are big, visible, and have been around for ages.
Re:Not so sure (Score:4, Interesting)
Actually not. It is going to store, quite permanently, only some files used to speed up system processing. There is not going to be any journaling, and the filesystem will be highly optimised for this kind of usage; that is from a press release I read somewhere. So even MLC will last a long time, as writes will be very limited. The only issue is that, to drive costs down, the controller is also going to be scaled down, so no great magic as with SSDs. So if somebody hacks that flash to use it as an HDD, it will wear quite badly and quickly.
Re:The writing's on the wall. (Score:3, Interesting)
Flash memory is at present growing in capacity much faster than magnetic drives.
If magnetic drives really push capacity growth, that might not hold; magnetic drives have shrunk in size and increased rotational speeds to decrease latency during that time as well. If they simply give up the performance race and go for vast capacity, they could move back to 5 1/4" full-height disks. Can you imagine the amount of data you could stick on that surface area with modern technology? I wouldn't be surprised if a 25TB disk could be produced today, at a price not much higher than the cost of current high-capacity disks.
Sure, latency would stink, but it's still faster to wait those extra 20ms for any HD video you've ever recorded to start than to get off the sofa and locate some physical media.
Using SSDs for latency-sensitive stuff and slower magnetic media for bulk storage is one possible way it can play out. It may change in the future, but (outside my professional capacity) I've found that not having enough storage has beaten not having fast enough storage every single time.
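A back-of-envelope version of that 25TB guess (assumed numbers: roughly 500GB per 3.5" platter surface at 2009-era densities, a 1" hub, and room for ten platters in a full-height drive):

```python
import math

def surface_area(diameter_in, hub_in=1.0):
    # Usable annulus of one platter surface, in square inches.
    return math.pi * ((diameter_in / 2) ** 2 - (hub_in / 2) ** 2)

GB_PER_SQ_IN = 500 / surface_area(3.5)   # assumed 500GB per 3.5" surface
SURFACES = 10 * 2                        # ten platters, two surfaces each

capacity_tb = GB_PER_SQ_IN * surface_area(5.25) * SURFACES / 1000
print(round(capacity_tb, 1))             # ~23.6, in the 25TB ballpark
```

Each 5.25" surface holds about 2.4x the data of a 3.5" one, so most of the gain comes from the platter count, not the area per platter.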
Re:Not so sure (Score:3, Interesting)
Re:why flash? (Score:3, Interesting)
Well, obviously the volatile drives aren't much faster than Intel's SSDs. Most SSDs are already starting to bump against the upper limit of what you can get out of SATAII when doing sequential reads.
The first ones I saw were for the PCI-slot and that one is limited to 133 MB/s and 266 MB/s for 64 bit PCI, both of which are lower rates than SATAII.
PCI Express [wikipedia.org] of course starts at 250 MB/s per lane and tops out at 1 GB/s per lane for the latest version. Compare that to DDR3 [wikipedia.org], which peaks at 12.8 GB/s per channel. To saturate a PCIe x16 link, we could settle for three DDR3 channels.
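A quick sanity check on those figures (theoretical peaks as quoted; the three-channel figure works out if you count PCIe's full-duplex nature, i.e. traffic in both directions at once):

```python
import math

PCIE_GB_PER_LANE = 1.0       # GB/s per lane, per direction (latest spec)
DDR3_GB_PER_CHANNEL = 12.8   # GB/s peak per channel (DDR3-1600)

# An x16 link moving data both ways at once shifts 32 GB/s in total.
x16_total = 16 * PCIE_GB_PER_LANE * 2
channels = math.ceil(x16_total / DDR3_GB_PER_CHANNEL)
print(channels)              # 3
```

One direction alone (16 GB/s) would already be covered by two channels; the third is only needed for the full-duplex case.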
Size is another concern of course, as most of these things tend to use sockets to plug the memory into.
So, you could try to top out a system with 160 GB of DDR3 RAM (which would require 30 modules), costing $14,099.7 [newegg.com]. And I'm not entirely sure how you'd fit 30 modules of RAM onto a single PCIe card, even a full-length one. This setup would obviously only be performance-limited by the PCIe bus and the card's memory controller.
Now, HP StorageWorks' IO Accelerator [hp.com] 'only' provides about 700 MB/s depending on the workload, but costs slightly more than half of the DDR3 solution at $7,700 [hp.com].
The biggest problems with the PCIe-based volatile solutions are fitting enough memory to be useful and that you're fucked if there's a bad power outage. The non-volatile PCIe solutions' biggest problem is that they're hideously expensive compared to regular SSDs, and the only advantage they have over RAID-0'ed SSDs is IO performance, as raw speed is faster if you RAID a few of Intel's SSDs on a good controller.
And all the PCIe-based storage mechanisms have one huge problem: they're not bootable.
Re:HW buffer for drives (Score:3, Interesting)
The other thing this does is bypass the "slow" SATA interface. We have laptop SSDs that saturate SATA 3.0Gb/s, and newer drives should be able to saturate the upcoming SATA 6.0Gb/s. I don't know what kind of bandwidth is going to be available on this new flash slot, but I hope it's a LOT.
Re:Not so sure (Score:5, Interesting)
Seems to me that this article is a thinly-veiled marketing trick. Somebody publishes a paper, "Will Intel product A beat Intel product B?", and presto, we've got buzz about product A which doesn't even come close to competing with product B (which is a market leader, dontchaknow), and increased buzz about product B. Then, people chime in with their arguments and counterarguments about which product is better... and Intel wins no matter what. Both product lines are probably going to succeed independently of one another.
That said, Braidwood sounds awesome to me, especially because my servers talk to a storage box over NFS, and fast onboard cache sounds great to me. But, I want fast local storage too, and 16GB is nothing, so I want large-capacity SSD drives. I really don't see these as competing products. This is just a slashvertizement. Move along, folks.
Re:Not so sure (Score:3, Interesting)
My concern about this product is that flash degrades with each write cycle, so the smaller the device, the faster you wear through it. Since this sounds like just a small buffer, I'd have concerns about it having a short lifespan.
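A rough lifetime estimate under assumed numbers (16GB of MLC rated for 10,000 erase cycles, perfect wear levelling, and a generous 20GB of writes per day):

```python
CAPACITY_GB = 16
ERASE_CYCLES = 10_000      # assumed MLC rating
WRITES_GB_PER_DAY = 20     # assumed desktop write load

# With perfect wear levelling, total lifetime writes = capacity * cycles.
total_writable_gb = CAPACITY_GB * ERASE_CYCLES
years = total_writable_gb / WRITES_GB_PER_DAY / 365
print(round(years, 1))     # ~21.9 years at this load
```

Halve the capacity and the lifetime halves with it, which is exactly the "smaller buffer wears faster" worry; a heavier write load or a worse cycle rating shortens it further.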
Re:Not so sure (Score:5, Interesting)
Which brings up an interesting design thought:
Battery backed up (BBU) RAID controllers with volatile RAM cache are very common in the server market because of the huge performance increase of small random writes.
The RAM cache lets the controller cache writes and then send them to the disk in batches while performing write combining, so multiple small writes get turned into larger writes, reducing the number of disk seeks required to store the data. Also think of the case where your controller has a 512MB cache and you write 200MB to disk: the controller can say OK as soon as it's written to RAM (a fraction of a second), whereas your typical fast disk these days will take 2 seconds.
Without a battery to back up the volatile RAM cache, you could lose a lot of data if the server lost power, but with it, you can go at least a couple of days without losing data.
So now, let's replace that 512MB BBU RAM cache with a 16GB SLC SSD. You won't quite get the burst speed of the BBU RAM controller, but in sustained server loads performance should be a lot better. The SSD will also be able to store a lot more data for reads. If the controller is smart and only uses the SSD for caching random read patterns, you could get close to SSD performance for a lot of workloads but still have 1TB of disk storage.
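The write combining described above can be sketched as merging adjacent dirty sectors into contiguous runs, so several small random writes become a few sequential ones (a toy illustration; real controllers track dirty data at much finer granularity):

```python
def combine(sectors):
    """Merge a set of dirty sector numbers into contiguous runs."""
    runs, run = [], []
    for s in sorted(sectors):
        if run and s != run[-1] + 1:
            runs.append((run[0], run[-1]))   # close the current run
            run = []
        run.append(s)
    if run:
        runs.append((run[0], run[-1]))
    return runs

# Eight small random writes collapse into three seeks instead of eight.
dirty = {7, 8, 9, 100, 101, 4000, 4001, 4002}
print(combine(dirty))   # [(7, 9), (100, 101), (4000, 4002)]
```

The win scales with how clustered the writes are: a database updating adjacent index pages combines beautifully, truly random scattered writes barely at all.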
Re:Not so sure (Score:3, Interesting)
Something just occurred to me - this is probably why WD and Seagate aren't worried about SSDs. They know they can just slam a crapload of cache onto their HDDs to vastly improve performance, and they already have the capacity advantage.
Re:Not so sure (Score:3, Interesting)
Why not offer a simple tool. I'd name it "Last Shutdown" and it would be kind of like saying goodbye to your old computer (in style).
It would first ask if you have saved all your personal data outside the computer and/or removed that storage from the system.
Then it will go and
- securely erase all hard drives
- securely erase all flash storage/caches
- sanitize any other residual data
- clear all RAM content
- empty all caches
- etc.
While showing a nice animation fitting to the theme.
When done, it would automatically shut off and change the boot sector to something that only lets you boot from an external bootable medium (e.g. an OS installer), with a fitting message telling you what to do. (That this is a fresh computer and you first need to install an OS.)
A more advanced version could look a bit nicer (colorful graphics) and then offer to automatically configure a DHCP connection and download and install a Linux distribution of choice. This could be used by companies selling computers without an OS installed.
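A minimal sketch of the wiping step (hypothetical Python; a real "Last Shutdown" would enumerate actual block devices, issue vendor secure-erase commands for any hidden flash caches, and confirm with the user before touching anything):

```python
import os

def zero_fill(path, block_size=1024 * 1024):
    # Overwrite every byte of a file with zeros. (A real block device
    # would need its size queried differently, e.g. via an ioctl.)
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            chunk = min(block_size, remaining)
            f.write(b"\x00" * chunk)
            remaining -= chunk
        f.flush()
        os.fsync(f.fileno())   # make sure the zeros actually hit the media

# Demo against an ordinary file standing in for a drive:
with open("fake_disk.img", "wb") as f:
    f.write(b"secret documents" * 64)
zero_fill("fake_disk.img")
with open("fake_disk.img", "rb") as f:
    assert f.read() == b"\x00" * 1024
os.remove("fake_disk.img")
```

A single zero pass is enough to defeat casual dumpster-diving; wear-levelled flash is the harder case, since remapped blocks can survive an overwrite, which is why the vendor secure-erase command matters.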