Rambus Takes Another Shot At High-End Memory
An anonymous reader writes "Tom's Hardware is running an article about Extreme Data Rate memory (XDR DRAM for short), which was developed by Rambus and has now entered mass production in Samsung's fabs. Right now, Rambus says the memory is only for high-bandwidth multimedia applications such as Sony's Cell processor, but the company ultimately hopes to push XDR into PCs and graphics cards by 2006. Time will tell if Rambus has learned from the mistakes it made with RDRAM a few years ago."
It will be awhile (Score:5, Interesting)
Re:It will be awhile (Score:2)
Re:It will be awhile (Score:5, Interesting)
Unfortunately, no one seems to be pushing for this despite the headaches it would remove. All you'd have to do is make your memory controller able to receive data faster (like going from DDR333 to DDR400). Plus, with the memory not directly connected, memory makers would not only compete evenly (since the user wouldn't need to know the difference between DDR2 and XDR except speed and price), but they could add other things like an extra cache level in front of the memory just by replacing RAM. And it would mean that the computer you bought today would take the memory that was available 3 years from now. Right now SDRAM costs a FORTUNE. But if you had a computer that takes FBDIMMs, instead of paying $50 a stick for 256MB SDRAM sticks, you could buy at today's DDR prices (say 512MB for $25 or whatever it is today).
Just think, you wouldn't need to buy new types of RAM for your PC every 2 years.
Re:It will be awhile (Score:5, Interesting)
And (since I think it's serial, instead of parallel like current RAM) it would SERIOUSLY decrease the pin counts of the Opteron and northbridges. Think if you could have quad-channel memory in your desktop as an option. Right now the CPU would need THOUSANDS of pins to do that. But you might be able to do it with the current 939 pins on an Opteron if you used FBDIMMs.
Ah, dreams.
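A rough pin-budget sketch of the parent's point, in C. Both per-channel signal counts below are assumptions for illustration (something like 130 signals for a parallel DDR channel's 64-bit data bus plus address/command/strobes, versus roughly 70 for a narrow serial FB-DIMM-style link) — not datasheet figures.

```c
#include <stdio.h>

/* Illustrative pin-budget arithmetic: parallel vs. serial memory channels.
 * Both per-channel signal counts are assumptions, not datasheet figures. */
#define PARALLEL_SIGNALS_PER_CHANNEL 130  /* 64-bit data + addr/cmd/strobes */
#define SERIAL_SIGNALS_PER_CHANNEL    70  /* narrow differential lanes      */

int main(void) {
    for (int ch = 1; ch <= 4; ch++) {
        printf("%d channel(s): parallel ~%4d signal pins, serial ~%4d\n",
               ch,
               ch * PARALLEL_SIGNALS_PER_CHANNEL,
               ch * SERIAL_SIGNALS_PER_CHANNEL);
    }
    return 0;
}
```

Add power and ground pins on top (often as many again, or more) and the quad-channel parallel case heads toward the pin counts the parent is complaining about.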
Re:It will be awhile (Score:2, Interesting)
Re:It will be awhile (Score:5, Interesting)
IMHO, FB-DIMM is just Intel's hedge against Rambus going bust. The point of Rambus was to reduce pincount per chip by narrowing the channel between the memory controller and the chip, and to decouple the notion of a memory controller controlling specific banks of memory. FB-DIMMs are meant to solve the same problem, except Rambus is already shipping.
Reducing pincount is one important reason for FB-DIMM, but the real reason for it is to get out of the capacity/speed tradeoff game. See, many systems need lots of memory. However, with current DDR-400 or DDR2-667 you can only put two DIMMs per channel. If you want more RAM than fits on two DIMMs, you have to reduce the speed. FB-DIMM gets around this problem by using point-to-point links between the devices.
Yes, this increases latency a little bit, but there really isn't any other practical way to increase speed without reducing capacity. However, FB-DIMM compensates for the increased latency by allowing many outstanding transactions on each channel; because of this, latency under high load is actually supposed to be lower than for traditional RAM tech with the same specs.
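One way to see the "lower latency under load" claim is Little's law: sustained bandwidth is roughly requests in flight × bytes per request / latency. A minimal sketch, with made-up latency and concurrency numbers purely for illustration:

```c
#include <stdio.h>

/* Little's law applied to a memory channel:
 * sustained bandwidth ~= requests in flight * bytes per request / latency.
 * All figures are illustrative assumptions, not measurements. */
int main(void) {
    double bytes_per_req = 64.0;     /* one cache line */
    double lat_classic   = 50e-9;    /* 50 ns loaded latency, assumed      */
    double lat_fbdimm    = 65e-9;    /* higher per-access latency, assumed */

    double bw_classic = 4.0  * bytes_per_req / lat_classic;  /* ~4 in flight  */
    double bw_fbdimm  = 16.0 * bytes_per_req / lat_fbdimm;   /* ~16 in flight */

    printf("few outstanding, low latency:   %.1f GB/s\n", bw_classic / 1e9);
    printf("many outstanding, more latency: %.1f GB/s\n", bw_fbdimm / 1e9);
    return 0;
}
```

With enough requests in flight the channel stays busy, so under heavy load individual requests queue for less time than on a channel that can only overlap a few.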
if it increases latency, "no, thanks" (Score:4, Insightful)
What kills RAM nowadays in common scenarios is latency. Whenever there's a cache miss, or a mis-prediction makes you flush the CPU's pipeline and start again, what causes the CPU to stall is latency. You get to wait until that request is processed by the RAM controller, is actually delivered by the RAM, makes its way back through the RAM controller, and only then you can finally resume computing. That's latency, in a nutshell.
And it's already _the_ problem, and it's gotten steadily worse. A modern CPU has to wait as many cycles for a word from RAM as an ancient 8086 would have if you ran it with an HDD instead of RAM. It's _that_ bad.
That's why everyone is putting a ton of cache and/or inventing work-arounds like HyperThreading. And even those only work so far.
And again, it's only getting worse. DDR did increase bandwidth, but did bugger-all for latency. Your average computer may well transfer two words per clock cycle with DDR, but it still has the same 3-cycle CAS latency SDR had. And DDR2 has made it even worse.
So FBDIMM's great big advantage is that it lets you have _more_ latency? Well, gee. That's as much of a solution as a kick in the head is a cure for a headache.
As I've said, "no, thanks." If Intel wants to go into fantasy land and add yet another abstraction layer just for the sake of extra latency, I'm starting to think Intel has plain old lost its marbles.
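For what it's worth, the stall described above is easy to measure with a pointer chase: each load depends on the previous one, so the CPU can't overlap them, and a random visiting order defeats the prefetchers. A minimal sketch (the array size and the use of rand() are arbitrary choices for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pointer-chase microbenchmark: every load depends on the previous one,
 * so nothing can be overlapped and each miss pays the full DRAM round trip. */
#define N (1u << 24)  /* 16M entries * 8 bytes = 128 MB, far bigger than cache */

int main(void) {
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: builds one big random cycle, so the chase
     * visits all N slots in an order the prefetchers can't predict. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* j in [0, i-1] */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t start = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p]; /* serialized loads */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("~%.0f ns per dependent load (p=%zu)\n", secs / N * 1e9, p);
    free(next);
    return 0;
}
```

On typical hardware this should report something close to the full round trip to DRAM per load, versus a nanosecond or two when the same loop fits in cache.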
ancient? You newbie . . . (Score:2)
>from RAM as an ancient 8086 would have if you ran it
>with a HDD instead of RAM.
"ancient"? *gasp* [*insert chest-clutching sequence here*]
I knew those new-fangled sixteen-bit machines with their wait states were a bad idea. Back to the 8-bit machines! No wait states! In fact, we can pair the 6502 with the 650 and have *two* processors running flat out on the same memory.
64k should be enough for anyone. Especially if you have a second double
Re:if it increases latency, "no, thanks" (Score:2)
> What kills RAM nowadays in common scenarios is latency.
No problem, to reduce latency RAM makers should just increase the speed of light..
Easy!
Re:It will be awhile (Score:2)
Wow. I had no idea. I'm kind of an old guy, but I'm used to CPUs having address and data pins, and some power and ground pins, and a very few bus control signals, and that's basically it. Well, clearly that's not the case here, even if they used a 512-bit-wide data bus, which I kind of doubt that they did (though it would be a great way to increase bandwidth).
In fact, about 10 years ago I decided that the hot future CPUs would have bandwidth issues (right on the money), and that the w
Re:It will be awhile (Score:2)
DRM.
Re:It will be awhile (Score:3, Insightful)
Re:It will be awhile (Score:2)
Commodification of a product reduces profit to "normal" returns. Economic profit and accounting profit aren't the same; "normal" accounting profits are "zero" economic profit.
Being the first out with something yields a short-term economic profit. As it becomes a commodity, profits drop to normal rates.
Look at memory prices--the first one out with a new size gets to charge a high price, which drops as others leap in.
hawk
Re:It will be awhile (Score:2)
But standardization also increases the size of the market, which can increase profit. It also reduces risks.
Are you holding off buying an HD DVD player to see whether Blu-Ray or HD-DVD wins? Standardization would reduce the risk to the consumer, and to the producer, that they'll invest in the unpopular choice.
So, on the whole, standardization is frequently viewed as a good thing.
Re:It will be awhile (Score:5, Interesting)
Funny abstraction layers, with everything agnostic of everything else, are a nice CS theoretician's fantasy. In a CS theory utopia everything should be abstracted, or better yet virtualized. Any actual hardware or other implementation details should be buried 6 ft deep, under layer after layer of abstraction, or better yet emulation.
The problem is that reality doesn't work that way. Every such abstraction layer, such as buffering and translating some generic RAM interface, costs time. Every single detail you play agnostic about runs you the risk of doing something extremely stupid and slow. (E.g., from another domain: I've seen entirely too many program implementations that, in the quest to abstract away and ignore the database, end up with a flurry of connections just to save one stupid record.) Performance problems, here we come.
The AMD64 runs fast precisely because it has one _less_ level of abstraction and virtualization. Precisely because the CPU does _not_ play agnostic and let the northbridge handle the actual RAM details. No, it knows all about the RAM, and it uses it better that way.
So adding an abstraction layer right back (even one that just moves the northbridge onto the RAM stick) would solve... what? Shave some 10% off the performance? No, thanks.
Or you mention SRAM. Well, the only advantage of SRAM is that it's faster than DRAM. Adding an extra couple of cycles of latency to it would be just a bloody stupid way to get DRAM performance out of expensive SRAM. Over-priced, under-performing solutions, here we come.
Wouldn't it be easier to just stick to DRAM _without_ extra abstraction layers to start with? You know, instead of then having to pay a mint for SRAM just to get back to where you started?
Not meant as a flame. Just a quick reflection on how the real world is that-a-way, and utopias with a dozen abstraction layers are in the exact opposite direction.
Latency (Score:4, Interesting)
Oh, I agree with your abstraction comment.
Putting faster things into an FBDIMM just won't do that much, because the memory is still physically in the same spot. I did an extensive study of this back before 1990 and found the same results, and the consolidation of L2 and even the northbridge onto the CPU shows that it's still valid today. Main memory is going to be slow. Main memory is always going to be slow, because that's a side effect of being "big". Main memory is always going to be "big" as long as the appetite for bits exceeds what can fit onto one chip. Learn to live with it.
Incidentally, DRAM latency grows beyond the minimum the moment you multiplex row and column addresses. There is a tRCD(max) spec where access is purely row-limited, but in practice that's just about impossible - access is almost always limited by column access. Trade speed for pins.
Beyond that, even SDR traded off latency for bandwidth, compared to EDO. (I've designed both.) I don't think DDR is that bad a deal, compared with SDR, though I haven't actually done a DDR design myself. At the very least, DDR offers the half-cycle latency options, and the DDR designs have been architected to scale far higher in frequency than SDR ever was.
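To put rough numbers on the row/column multiplexing cost: a closed-row access pays tRCD, then CAS latency, then the burst transfer, all in bus clocks. A toy calculation with plausible DDR-400-era timings (the values are assumptions, not from any particular datasheet):

```c
#include <stdio.h>

/* Toy arithmetic for a closed-row DRAM access:
 * total ~= tRCD (row-to-column delay) + CL (CAS latency) + burst transfer,
 * all in bus clocks. Values are plausible-looking assumptions only. */
int main(void) {
    double clock_mhz  = 200.0;             /* DDR-400: 200 MHz bus clock */
    double ns_per_clk = 1000.0 / clock_mhz;

    int trcd  = 3;       /* RAS-to-CAS delay, clocks          */
    int cl    = 3;       /* CAS latency, clocks               */
    int burst = 4;       /* 8-word burst at 2 words per clock */

    printf("closed-row access: ~%.0f ns to the last word of the burst\n",
           (trcd + cl + burst) * ns_per_clk);
    return 0;
}
```

Notice that doubling the transfer rate shrinks only the burst term; tRCD and CL stay put, which is exactly the latency complaint made elsewhere in this thread.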
Re:Latency (Score:2)
Decades later and we still have access times in the order of 10ms... That really really sucks.
DRAM isn't that bad because the SRAM caches work pretty well in many cases - e.g. you have a loop zooming along at near CPU speed (in SRAM), reading and writing processed data out at DRAM speed. That's fine because the loop often isn't fast enough to move data in and out at SRAM speed.
You are more likely to hit the HDD speed limits first (or run out of DRAM
Re:Latency (Score:2)
Decades later and we still have access times in the order of 10ms... That really really sucks.
First, if DRAM were far faster, computers would be far faster. Memory performance affects CPU performance far more than disk performance does. The slowness of disks can be made up for using DRAM cache, just like SRAM cache makes up for the slowness of DRAM.
Second, modern hard drives are down to about 8.5ms, which is a lot better than the 28ms I remember
Why do we use DRAM in this day and age? (Score:3, Interesting)
Who needs a gig of RAM when you can have a gig of cache?
If they need swap space, they can always write back out directly to a disk-based swap file.
Re:Why do we use DRAM in this day and age? (Score:2, Informative)
Re:Why do we use DRAM in this day and age? (Score:5, Informative)
That, and the benefits of cache go DOWN as the size of the cache goes up. Past a MB or two the marginal benefit is small. Also, as the number of address lines goes up, access gets slower. And finally, a bigger bottleneck is that "external memory" is external.
So unless you want to pay for a CPU with a GB of onboard "memory" in the form of SRAM... the benefits won't be that high.
Tom
Re:Why do we use DRAM in this day and age? (Score:2)
To clarify: the incremental benefit from cache decreases as cache size increases. Performance with a large cache is not worse than performance with a small cache, but the law of diminishing returns starts to kick in over a few MB. A computer with 4GB of SRAM running at the same speed as current cache memory (assuming no memory controller bottleneck) and no cache would be faster than on
Re:Why do we use DRAM in this day and age? (Score:2)
See, that's wrong. On the machine with a small cache and large DRAM every cache hit is 2 cycles. Even the 512KB L2s that AMD/Intel use are upwards of 13 cycles to access [or more]. It takes time for the address bits to settle, and the larger the cache, the longer it takes for the signal to settle (more distance).
With 4GB of cache EVERY cache hit would have a larger access time of [say, for conversation] 30 cycles.
So let's compare acces
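The comparison the parent is starting can be sketched with the standard average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. The hit times follow the post above; the 2% miss rate and the 200-cycle DRAM penalty are assumptions for illustration:

```c
#include <stdio.h>

/* AMAT = hit time + miss rate * miss penalty.
 * Hit times follow the parent post; the 2% miss rate and the
 * 200-cycle DRAM penalty are assumptions for illustration. */
int main(void) {
    double amat_small = 2.0 + 0.02 * 200.0;  /* small fast cache + DRAM       */
    double amat_huge  = 30.0;                /* 4GB SRAM: every "hit" is slow */

    printf("small cache + DRAM: %.1f cycles on average\n", amat_small);
    printf("4GB SRAM, no cache: %.1f cycles on average\n", amat_huge);
    return 0;
}
```

Under these assumptions the small-but-fast cache averages 6 cycles against 30, which is the parent's point.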
Re:Why do we use DRAM in this day and age? (Score:5, Interesting)
Re:Why do we use DRAM in this day and age? (Score:4, Insightful)
This means that for SRAM to be useful, it has to be paired with a lower-latency interconnect. Some apps would benefit tremendously from 128M of what would amount to an L3 cache, even to the point that the $400 or so extra it would cost might be worth it. It's clear however that the market doesn't consider that a worthwhile expenditure.
Although newer system architectures such as AMD's Opteron platform are moving to more closely attached RAM, the engineering and manufacturing challenges involved in attaching memory as tightly as it is to a GPU have so far proven more expensive than the payoffs warrant. With improvements in manufacturing and interconnect technology, I'm sure we'll see ever-tighter CPU-memory integration. I doubt, however, that the technology will move to SRAM or an SRAM equivalent, simply because the performance/heat trade-off isn't favorable. Saving a few ns of latency on the memory chips is peanuts compared to the tens of ns of latency in the connection to the CPU, which is probably a much more tractable problem.
Uh. (Score:2)
As for 128MB level 3 cache, some servers do have that.
If it's worth it, they often do put the stuff in.
It's just like battery backed RAM HDDs. If DRAM was cheap enough more of us would be using that instead of klunky spinning discs. As it is, it's cheaper for most people to buy GBs of RAM and use that as disk cache AND space to _execute_ programs, rather than buy the niche RAM-HDD product and not be able to execute programs directl
Re:150million / 6 = 25 million... (Score:2, Informative)
25million bits / 8 = about 3MB.
Parent poster is correct.
Re:Why do we use DRAM in this day and age? (Score:2)
Re:Why do we use DRAM in this day and age? (Score:2)
Why do you think there is only a MB or two at the most?
Re:Why do we use DRAM in this day and age? (Score:4, Funny)
This is like saying why paint your walls with off-white stuff when you can coat them in a layer of gold that resists tarnish?
Well, for one thing, it's greatly more expensive.
Re:Why do we use DRAM in this day and age? (Score:2, Interesting)
As demand for gold increases, the cost of gold rises because of the scarcity of the element. SRAM production, however, can be ramped up to whatever level is necessary to meet demand.
Higher efficiency in the manufacturing of SRAM would lead to lower prices (though not necessarily lower than the current chip prices).
Re:Why do we use DRAM in this day and age? (Score:5, Interesting)
For a given area of silicon, you could have 1 gigabit of DRAM or 128 Megabit of SRAM. Is it worth that trade-off? One can make more chips, but making chips uses a lot of expensive and toxic chemicals, and fab time isn't free either.
Re:Why do we use DRAM in this day and age? (Score:2)
Re:Why do we use DRAM in this day and age? (Score:2)
Size Matters (Score:2)
1-bit DRAM cell = 1 transistor and a capacitance (not necessarily a physical capacitor, just something that acts like one)
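A standard SRAM cell, by contrast, takes six transistors per bit. That ratio is where the "150 million / 6" arithmetic upthread comes from; a toy version of the budget (the 150M-transistor figure echoes that post and is purely illustrative):

```c
#include <stdio.h>

/* Transistor-budget arithmetic: 1T1C DRAM cell vs. 6T SRAM cell.
 * The 150M-transistor budget echoes the figure used upthread and is
 * purely illustrative. */
int main(void) {
    long transistors = 150000000L;
    long sram_bits = transistors / 6;  /* 6 transistors per SRAM bit        */
    long dram_bits = transistors / 1;  /* 1 transistor (+ cap) per DRAM bit */

    printf("SRAM: ~%ld Mbit (~%ld MB)\n",
           sram_bits / 1000000, sram_bits / 8 / 1000000);
    printf("DRAM: ~%ld Mbit (~%ld MB)\n",
           dram_bits / 1000000, dram_bits / 8 / 1000000);
    return 0;
}
```

The real density gap is larger still, because a 1T1C cell also packs tighter physically than six interconnected logic transistors.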
Re:Why do we use DRAM in this day and age? (Score:2)
And SRAM keeps increasing in density, unlike floppies... just not quickly enough to catch up with DRAM. It isn't by any stretch of the imagination a dead tech. It just isn't (currently) cost-effective for use as main memory at the capacity needed.
Never mind (Score:5, Interesting)
Re:Never mind (Score:2)
Re:Never mind (Score:2)
Recently got 2x512MB for $137. Yay Newegg!
Good marketing sense (Score:5, Interesting)
Re:Good marketing sense (Score:2, Interesting)
Re:Good marketing sense (Score:3, Insightful)
Maybe the consoles, but those are usually sold at a loss to get people to buy games. When it comes to HDTV... I don't know about you, but I don't see someone shelling out $7,000 as being price sensitive when a larger screen DLP projection TV goes for thousands less.
And one of the applications they were talking about was high-end video cards. A high end consumer video card costs more than a 250gb SATA ha
Re:Good marketing sense (Score:2)
Maybe the consoles, but those are usually sold at a loss to get people to buy games. When it comes to HDTV... I don't know about you, but I don't see someone shelling out $7,000 as being price sensitive when a larger screen DLP projection TV goes for thousands less.
Consoles are driven by available games and have a perceived maximum price (probably around $300). They are not, for the most part, sold at a loss. That would be illegal for Sony to do. Also, I saw a nice HDTV LCD for $4k over at Best Buy last
Re:Good marketing sense (Score:2, Insightful)
also at extremetech (Score:3, Informative)
Pathetic attempt at FPing (Score:5, Interesting)
In the long run, if they can't significantly drop manufacturing prices to (let's say) 150% or even 200% of "regular" (by that date) RAM, the speed boost a computer with "XDR DRAM" gets compared to (again, let's say) "PC800 RDRAM" won't be significant... and I'll bet (regular) people would rather choose 8 GB of "PC800 RDRAM" over 2 GB of "XDR DRAM" any time of the day.
Bottom line: they're either stuck with "specialty hardware" (like graphics cards or high-end servers) or they have to drop (manufacturing) prices rapidly if they want to keep selling.
Re:Pathetic attempt at FPing-Apple Core. (Score:2)
In my (?defence?), I live in Romania, and I highly doubt there's more than a couple thousand of them within the borders of the country.
Re:Pathetic attempt at FPing-Apple Core. (Score:2)
Kind of like Apple?
Apple delivers value for price and doesn't really charge that much more in the first place.
Wait for it.... (Score:3, Funny)
latency? (Score:5, Insightful)
People seem to forget that the "Random" part of RAM is kinda crucial.
Tom
Re:latency? (Score:5, Informative)
Re:latency? (Score:2)
What I'm impressed with is that they actually kept some engineers through their whole lawsuit period, and appar
Re:latency? (Score:2)
They did... it's called the PlayStation2 [playstation.com]. If you look closely, you'll see the main memory is Rambus RAM. Which makes sense, because the PS2 is really more of a graphics engine than a general purpose computer.
Re:latency? (Score:2)
Re:latency? (Score:2)
Just like the new Geforce 6200 that utilizes pci-express and streams its textures from main memory ;)
It is quite possible to increase latency tolerance, and graphics cards happen to be one application that is RELATIVELY latency insensitive in
Re:latency? (Score:2)
Re:latency? (Score:2)
That used to be true: graphics cards had fixed functionality and long pipelines, so if you knew that you'd need pixel (51, 96) of a texture two hundred steps down the pipeline you'd just ask the cache to fetch it in plenty of time.
Today, though, as they become more and more programmable, they're starting to see latency problems similar to CPUs. It's hard to predict what data a
Re:latency? (Score:2)
Re:latency? (Score:2)
Re:latency? (Score:2)
Well by gosh, you're right! I totally forgot about that whole "Random" thing.
Unfortunately, when I went to jog my recollection a bit and write a program that writes to "Random" places in memory, I got all manner of interesting screens. Did you know Windows has a "Fuchsia Screen of Death"? I didn't.
Rambus seems to forget (Score:5, Insightful)
I have adamantly refused to purchase any system that would use their memory for years, and more to the point, have made that decision for others that depend on me making that decision. That's a lot of computers over the years we're talking about. I am also far from alone.
Re:Rambus seems to forget (Score:2)
All they have is IP. They don't make anything.
If they were to sell their IP, chances are it would end up in some larger mega-IP house, which would ram the stick up even further.
Re:Rambus seems to forget (Score:2)
First of all, they were on a committee of memory manufacturers whose purpose was to design the next generation of memory, SDRAM. It was in the manufacturers' best interest to form an open standard that could be produced easily.
The strange thing is that Rambus is not a manufacturer. They are an IP company. I don't know why they were on that committee in the first place. What would be in it for them as an IP company to help produce an *open* standa
submarine patents (Score:4, Informative)
It's a tough act for Rambus to carry out; on the one hand, they have to deal with a small group of manufacturers who have (reportedly) been trying to defraud them and put them out of business, on the other hand, they have to rely on that same small group of manufacturers for all of their future revenue, so aggravating them too much is probably also a bad idea.
Of course, it's also possible that the judge was Just Plain Wrong, and Rambus was just trying to get submarine patents in place while they were a member of JEDEC. I don't have the expertise to make that judgement.
Re:submarine patents (Score:2)
I know Rambus got the fine reduced, but AFAIK the fraud conviction still stands.
Re:submarine patents (Score:2)
Unfortunately for anyone who believes in speedy justice for all, this bozo has now allowed new claims of fraud (in this case, that Rambus destroyed discoverable documents) in order to muddy the
I don't know where you're getting that (Score:2)
Re:I don't know where you're getting that (Score:2)
Re:Rambus seems to forget (Score:2)
Incidentally, SDRAM always offered significantly mo
The numbers don't lie! (Score:2)
Re:The numbers don't lie! (Score:5, Insightful)
If you were dealing with slightly different steppings of the same CPU (I assume a P4?), it's possible you had two CPUs of the same clock speed where the newer stepping was less efficient per clock. The P4s have been tweaked over time to be less and less efficient per clock, in order to facilitate higher clock speeds. RDRAM was popular with the very first generation of P4s, so it'd be logical that the benchmark you saw may have used a newer core. That shouldn't explain a 20% speed difference, but it's an example of a small thing that may have contributed to making the memory system appear to be the determining factor in performance.
Re:The numbers don't lie! (Score:5, Interesting)
3.06 GHz Pentium 4, 512KB cache, 533MHz FSB, RDRAM
3.00 GHz Pentium 4, 1MB cache, 800MHz FSB, DDR400 RAM
The DDR system is only 86% as fast as the RDRAM system (the RDRAM system is 16% faster). This is despite the DDR system having been purchased almost two years later, and having more cache!
The DDR system does pull ahead for compositing tasks (by quite a bit - in some cases it's twice as fast). I assume this is due to the larger cache.
But ray tracing takes about 90% of my total render times, so it's far more important to optimize. I am disappointed that I can't buy hardware today with the same RAM performance as I got two years ago.
Re:The numbers don't lie! (Score:2, Informative)
3.00 GHz Pentium 4, 1MB cache, 800MHz FSB, DDR400 RAM
You're probably comparing a Prescott to a Northwood. They're fundamentally different processors -- way more than a remap from 130nm to 90nm, but they share enough, I guess, for Intel to continue branding it Pentium 4. For example, Prescott has longer L1 latency than Northwood, twice the L2 latency of Northwood, and a longer mispredict penalty (11 more stages). All those latencies add up to not-as-good pe
Re:The numbers don't lie! (Score:2)
Both systems have the same amount of RAM and the same OS, right? And we are talking about the same version of the software?
Re:The numbers don't lie! (Score:3, Informative)
Re:The numbers don't lie! (Score:2)
That statement is impossible because Pixar's RenderMan (prman) is not a ray tracer.
It uses coordinate space transforms and shaders to render, much like a modern 3D video card would (albeit prman only does this after dicing the models to be rendered into millions of quarter-pixel-sized micropolygons, and allows arbitrarily complex shaders of many type
Re:The numbers don't lie! (Score:2)
Re:The numbers don't lie! (Score:2)
Sorry.
Re:The numbers don't lie! (Score:2)
Time Will Tell? (Score:5, Informative)
Well, Rambus has expanded their latest lawsuit blitz to include DDR2 patent claims [infoworld.com], so do you think they've learned?
Why does RAM suck so much? (Score:5, Interesting)
2. RAM changes too quickly. I buy RAM for one computer, it's only for that computer. No portability.
I get a hard drive, I can put that in my new system. I get a new mouse, can use that on my new system. Display? Yep. Graphics card? Most likely.
RAM? Not likely.
IMHO they need to standardize RAM like AGP or PCI-X. That way users feel more comfortable investing in it... you can upgrade and keep your RAM.
Re:Why does RAM suck so much? (Score:2, Interesting)
> That way users feel more comfortable investing in it...
> you can upgrade and keep your RAM.
You may be interested in FB-DIMMs [theinquirer.net], if they ever come out. Basically a standard (buffered) interface to all RAM you might want to put on there. Just make a new buffer chip and you're set.
Re:Why does RAM suck so much? (Score:2)
Funny that PCI-X and AGP are on their way out...
Re:Why does RAM suck so much? (Score:2)
Mycroft
FYI (Score:2)
Re:Why does RAM suck so much? (Score:2)
In many cases the 'blurb' under each item is just ripped from the marketing. Unless there is a PCI-X version of ATI's X500 through X800 chipsets I haven't heard of, or nvidia 6?00 chipsets.
And frankly I expect the misuse/confusion to only get worse, considering how both PCI-X and PCI Express are pronounced.
Re:Why does RAM suck so much? (Score:2)
A problem with your idea is that it is like wanting to keep your CPU but upgrade your computer. You don't want a slow CPU? Why would you want slow memory? Memory you bought two years ago is only going to hobble a brand-new chip.
Re:Why does RAM suck so much? (Score:2)
But those days are long gone. It really doesn't matter any more.
Those days may be coming back, if RAMBUS manages to finally take control of the memory industry with its stupid patents. Then you can kiss all that cheap memory goodbye.
Rambus is NOT memory technology but (Score:2)
I would hope some tech company makes a push for optical fiber interconnects!
Wooooops. Maybe I will start one.
Rambus is BAD NEWS (Score:2)
They pulled a fast one on the industry and then tried suing everyone in an effort to bully companies into licensing agreements.
They are a VERY shady company. Very unscrupulous and litigious. I would never deal with them knowing their past.
Rambus Shambus! (Score:2, Insightful)
Wasn't Rambus run out of PCs due to their crooked practices anyway? What makes them think people will forget? Didn't think I was going to hear that name again. (shakes head in grief)
TFA has short memory (Score:4, Informative)
Actually, RDRAM was introduced around 1995, and was used by industry heavyweights such as SGI and Nintendo.
Gillette just called. They want their razor back. (Score:3, Funny)
May as well call it Extreme Data Rate 3D Titanium Mach 5 Turbo 2k5 Deluxe Edition, or some such...
Re:Gillette just called. They want their razor bac (Score:2)
Sincerely,
Rambus CEO
Re:Gillette just called. They want their razor bac (Score:2)
From a Business Perspective (Score:2)
After AMD's success, Intel will follow suit, and with both major players doing things the same way the market will become quite stable.
However, if XDR DRAM can impress before Intel gets the memory controller on die, then it may be included and have 10 blissful years of monopoly. Intel made that mistake before (in my mind it plays out like a Microsoft tie-in dea
*We* Should Not Let Them (Score:3, Interesting)
This should be like a Usenet death penalty. The free market is there to reward those companies that serve their customers and punish those that do not. It is a good system, but it tends to have a short attention span. Tell your friends. Tell your purchasing department. Keep Rambus from coming back from the dead and send a message to other companies that think about abusing submarine patents. It's the same thing as harsh criminal sentencing, except that the free market has a far better track record of responding to exemplary punishment (that is to say: if you support harsh criminal sentencing, you should support this on the same ideological grounds, and if you don't support harsh criminal sentencing because it doesn't work, you should still support this because it does).
Re:No thanks to the GPL - [snore....] (Score:2)
Seriously - don't you have something better to do? Anything?
[yawn]
Re:Sorry... (Score:2)
That all takes time.
With signal speeds nearing the speed of light, the distance from end to end of a typical DIMM becomes significant. A single extra transistor on the way introduces several % of delay. Serial delays things even more - chop the 64-bit signal into 10-inch-long pulses (moving at the speed of light!) and you're down to a 1GHz clock. And how much potential, how many extra electrons can you fit at 1.3V in 10 inches of wire? And they must suffice to change polarity often
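The 10-inch figure checks out as rough arithmetic: light covers 10 inches in a bit under a nanosecond, which corresponds to a clock just above 1GHz. A quick sanity check (signals in real copper traces travel at roughly half to two-thirds of c, so the true numbers are worse):

```c
#include <stdio.h>

/* How long does one 10-inch pulse take at the speed of light?
 * Real copper traces propagate at roughly 0.5-0.7c, so this is
 * the optimistic upper bound. */
int main(void) {
    double c     = 3.0e8;         /* m/s */
    double len_m = 10 * 0.0254;   /* 10 inches in metres */

    double t = len_m / c;
    printf("10 in at c: %.2f ns per pulse -> ~%.1f GHz\n",
           t * 1e9, 1.0 / t / 1e9);
    return 0;
}
```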
Re:Sorry... (Score:2)
Also, Opterons (and some Athlon 64s) support dual-channel RAM. That comes to the same thing...