Will Rambus Go Bust?
retep writes: "32BitsOnline has an interesting article about how the new memory standard RAMBUS may go bust. Essentially, a bunch of missteps with Intel's Camino chipset, high costs, the rise in popularity of alternative CPUs such as the Athlon, and a lack of performance may prove its undoing. I remember a story in Wired just a year or two ago praising RAMBUS for its innovative tactics; look what's happened now."
Tom says rambus sucks. Good enough for me. (Score:1)
don't always trust the hand that feeds you [tomshardware.com]
nuff said.
Re:Rambus=phft! (Score:1)
Interleaving on x86? (Score:1)
I have the option enabled, although I haven't done any benchmarks. No obvious performance diff. Any suggestions on how to benchmark whether this option does anything?
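One rough way to check: time a big memory copy with the option toggled in the BIOS and compare the throughput numbers. Here's a minimal, hypothetical sketch in Python; a compiled STREAM-style benchmark would give far less noisy figures, since the interpreter adds a lot of overhead.

```python
# Crude main-memory bandwidth probe: run once with interleaving
# enabled and once with it disabled, then compare the MB/s figures.
import time

N = 64 * 1024 * 1024          # 64 MB: large enough to blow past the caches
src = bytearray(N)

best = 0.0
for _ in range(5):
    t0 = time.perf_counter()
    dst = bytes(src)          # one sequential read + write pass
    dt = time.perf_counter() - t0
    best = max(best, 2 * N / dt / 1e6)   # count read and write traffic
print(f"~{best:.0f} MB/s effective copy bandwidth")
```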
The problem with Rambus compared to SDRAM... (Score:4)
The consequence of this is that compilers would have to be optimized for that kind of memory access - i.e. accessing a few pages is expensive (and slow) under Rambus, slower than under SDRAM. Accessing many pages is more effective.
The question is, why did Intel choose this kind of tradeoff? Was there no alternative that did not increase the latency by a factor of 10 (according to the link to Tom's Hardware)?
Re:The problem with Rambus compared to SDRAM... (Score:1)
Sounds like the "Read-ahead" feature which has been in disk caching software since the onset of it all. Amazing how things tend to come full circle. :-)
The Real Problem (Score:3)
Re:One world: DUH (Score:4)
As far as USB/Firewire goes, though - it isn't royalties that have slowed Firewire acceptance. Intel had included USB in every chipset since the LX several years ago - in fact, USB support was in silicon before any OS had support for it. That's why it was on motherboards - It's part of the chipset whether or not you want it, so you might as well build the ports. Firewire royalties are tiny (below $1/port), and it's split between Apple and the other patent holders (I believe TI and Canon are in the group, too). Firewire would have been adopted quicker had Intel followed through with their earlier plans to include it in newer chipsets.
The other thing that sped acceptance of USB versus Firewire is that USB 1.0 was ready a (relatively) long time ago, and Firewire is only a couple of years old. The DV cameras that really take advantage of Firewire have just begun to be priced appropriately for the casual camcorder buyer. Sony and Apple build the ports onto virtually all their systems, and are selling them as fast as they can build them.
Also a Good Thing for USB - since the CPU controls the bus and it's a simple protocol, it's well-suited for cheap, simple peripherals like modems, digital still cameras, low-end scanners, audio devices, etc. Firewire aims a lot higher.
- -Josh Turiel
Rambus was doomed from the start (Score:5)
SDRAM took off as a standard, and other chipset makers adopted it - and extended it to PC100 and PC133 from the original PC66.
CPU speeds accelerated faster than anyone planned (a year ago, 600 MHz was state of the art!)
Rambus was late to market, as were the systems designed to use it. This gave SDRAM more of an opportunity to become entrenched.
Rambus has proven to be difficult to manufacture to this point, with horrible yields.
And finally, SDRAM turned out to be a lot more scalable than anyone anticipated at the beginning.
If Intel had expected DDR PC133 SDRAM, Rambus might never have made it out of the starting blocks in the first place. But given the lead time on their chipset and CPU design cycles, they had to make a call based on what the trend appeared to be - and they bet on the wrong one. The 810 chipset is a lot more important to Intel right now than they had expected it to be, and the 815 wasn't even planned - they also were hoping to retire BX by now. Some of their supply problems of late have been driven by this misforecast. When the dust settles, I expect to see Rambus slowly squeezed out of the mainstream and Intel to quietly write off their investment. It seemed like a good idea at the time...
- -Josh Turiel
Speaking of The Register (Score:1)
The first one [theregister.co.uk] says that Kingston Technologies is dropping prices on some of its Rambus RIMMs, 35% average, and as much as 68%.
The second one [theregister.co.uk] says "Micron...will demo three platforms using double data rate (DDR) memory at WinHec 2000 in New Orleans next week."
Look for a dual processor platform, a dual processor dual controller platform for the workstation and server markets, and a uniprocessor system, all running 266MHz memory modules and using a 133MHz front side bus.
Yes, it's been very entertaining reading the Register articles about how everybody kept badmouthing Rambus and the stock price kept climbing in response, until the other day when investors finally tripped over a clue.
Re:The problem with Rambus compared to SDRAM... (Score:1)
Right, it's actually 400MHz, double-pumped. But Rambus calls it 800MHz. Which really makes sense, because even though the _clock_ is only a 400MHz sine wave, the data/address signals operate at 800MHz, and that's really what matters.
Re:One world: DUH (Score:2)
I can get PC-100 SDRAM for about US$100-$110 per 128 MB DIMM; PC-133 SDRAM for about US$130-$145 per 128 MB DIMM; and 800 MHz RDRAM for about US$700 per 128 MB RIMM.
No wonder people aren't so interested in RDRAM. If my guess of US$180-$195 for a DDR-SDRAM 128 MB DIMM is true, then NOBODY is going to buy RDRAM in the long run.
Re:A nit: CPU speeds (Score:1)
Well, there are problems with raw clock speed. I realize you said 'as fast as', implying you weren't expecting the raw MHz levels, but just to check with other people...
At 1GHz, the cycle time is 1 ns. In 1 ns, light will travel roughly 30 cm... about a foot. Electrical signals in traces travel about half that. So if your high-speed bus lines are more than six inches long, the clock at one end of the board will be a full cycle ahead of the other end. At 7.5 GHz, the electrical signals will travel 2 cm: less than an inch. With the synchronous CPU designs in use now, everything running at the higher speed has to be smaller than that amount.
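For what it's worth, the arithmetic above as a small sketch (assuming signals propagate at roughly half the speed of light in a PCB trace):

```python
# How far a signal travels in one clock period.
C = 3.0e8            # speed of light, m/s
V = 0.5 * C          # rough signal velocity in FR4 traces (assumption)

for f_ghz in (1.0, 7.5):
    period = 1.0 / (f_ghz * 1e9)                 # seconds per cycle
    print(f"{f_ghz} GHz: {V * period * 100:.1f} cm per cycle")
# -> 1.0 GHz: ~15 cm (about six inches); 7.5 GHz: ~2 cm
```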
Solutions? The usuals: decrease the feature size to shrink the CPU; integrate more circuitry onto the CPU itself to avoid long traces; separate the CPU clock from the system clock even further... the unusual one is to design a more asynchronous CPU, that doesn't require a single clock standard across the whole chip. While there's been a fair bit of work done on that, it requires throwing out one of the great simplifying design assumptions, and makes verifying the correctness of the design a whole lot harder.
-- Bryan Feir
Re:The problem with Rambus compared to SDRAM... (Score:1)
So you have 400 MHz x 2 x 16 bits, which is still 1.6 GBytes/sec.
Re:A nit: CPU speeds (Score:1)
As an engineering road map, Moore's Law has made Intel one of the largest corporations in the world and very, very rich. Why change it? I'll guess that your forecast for 2004 will be spot on.
--
Intel has done their best to quash Firewire (Score:1)
Then they changed their mind, and instead have been chasing USB 2.0. One thing to realize is that, since IBM went off with the PS/2, Intel has basically been controlling the PC spec, especially since they basically control the chipset market. I wonder if they were afraid to let third parties, including Apple, control one of the standard features in a PC.
(As for camcorders being the most obvious application -- Sony plans to push iLink across their entire product lineup. As Digital TV and other things become more widely adopted, there will be more consumer pressure for 1394 on PCs. It's the applications, stupid! USB had much more obvious applications, because PCs had always lacked a good standard external expansion bus. All that parallel port crap was quickly and happily killed in the face of USB.)
--
Re:One word: DUH (Score:1)
--
Re:MCA, ISA, and EISA (Score:1)
And EISA did catch on -- in the pre-PCI days Compaq used it heavily in their successful server lineup. Because IBM was basically MIA in the PC server market in those days, I would guess that by 1995 EISA had a much larger installed base than MCA.
--
Re:The Real Problem (Score:1)
One of the big reasons the PS/2 line never caught on is that the CPUs were consistently behind what others were shipping.
--
Re:The problem with Rambus compared to SDRAM... (Score:2)
You should optimize the cache controller, not the compiler. Instead of keeping every accessed page open, you should start keeping open the pages where an access stream has started.
And I would not be amazed if the 64-bit Intel CPUs have such a cache controller. Intel has proved that it can plan very far ahead so far...
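Purely as an illustration (this is not a claim about any actual Intel controller), such a policy might look like this: hold a page open only once a second access suggests a stream has started, instead of caching every page ever touched. The class name and the two-hit threshold are made up for the sketch.

```python
from collections import defaultdict

class SpeculativeOpenPagePolicy:
    """Keep pages open only where an access stream has started."""
    def __init__(self, max_open=4):
        self.hits = defaultdict(int)   # accesses seen per page
        self.open_pages = []           # pages currently held open
        self.max_open = max_open

    def access(self, addr, page_size=4096):
        page = addr // page_size
        self.hits[page] += 1
        if self.hits[page] >= 2 and page not in self.open_pages:
            self.open_pages.append(page)        # stream detected: keep open
            if len(self.open_pages) > self.max_open:
                self.open_pages.pop(0)          # evict the oldest open page
        return page in self.open_pages          # True = fast-path access
```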
Re:One world: DUH (Score:2)
To be precise, $0.25 a port.
Contrary to FireWire, where this fee is split seven ways, the fee for USB goes directly to Intel.
Are you certain about this? As far as I know, royalty-free licenses are available for the core USB 1.1 specs, and these licenses are handled by a non-profit consortium (the USB-IF, I think it was?) that Intel set up together with Microsoft and a bunch of other big companies.
Also, I believe that Intel plans (but has not yet officially announced) to offer free licenses for USB 2.0 as well.
Re:a more clever title... (Score:1)
1) Doesn't contain the words "first" or "post"
2) Is actually funny.
In defense of Rambus (Score:2)
Re:The problem with Rambus compared to SDRAM... (Score:2)
Something else, and I could be wrong about this, but I don't think RAMBUS really qualifies as a serial protocol. It sends 16 bits at a time, and hence has the same sort of timing constraints as a "parallel" solution. If you really tried to send single bits over a single channel to avoid skew problems, you would have to clock that channel at 6.4GHz just to keep up with lowly PC100 memory. Good luck!
Re:The problem with Rambus compared to SDRAM... (Score:2)
Problem is that Intel alone (like M$) can't set a standard.
To set a standard you need more than one supplier and more than one firm using it.
Look at FireWire (IEEE 1394), which became a standard after two years.
With the introduction of DDR SDRAM (double data rate SDRAM), I think Intel has a big problem.
The fact is that AMD, Apple, and other companies are going for DDR SDRAM instead of RAMbus.
DDR SDRAM costs almost the same as normal SDRAM.
Second, every RAM manufacturer can make DDR SDRAM.
RAMbus uses a different process, and because of patent issues it is more expensive to manufacture than DDR SDRAM.
In the end, Intel will switch to 133 MHz SDRAM and DDR SDRAM.
Re:One world: DUH (Score:3)
Nobody really likes new technologies that add significant cost through royalties -- see FireWire (ahem) IEEE 1394. USB was free, which is why everyone had those controllers on their motherboards years before they had anything to plug into them.
Wrong!!!
Like FireWire, adding USB costs money.
To be precise, $0.25 a port.
Contrary to FireWire, where this fee is split seven ways, the fee for USB goes directly to Intel.
At first you had to pay $1 a port for FireWire to the FireWire consortium (consisting of Apple, Sony, JVC, Intel (yep, Intel is a member of it too!!!), and three other companies).
Second, because Intel built USB into their chipsets, USB was on the market a little bit longer.
The biggest problem was not the availability of USB but the drivers and support.
That's the biggest difference between FireWire and USB.
FireWire is a much more mature technology and is aimed at video and data storage instead of input and output devices like mice and printers.
Re:Interleaving on x86? (Score:1)
Re:Interleving memory banks (Score:1)
Every interleaving method I've seen implemented gave each bank of RAM its own set of control and data lines. When an access was fired off, it was done to both banks. Bank one was used for odd-addressed memory words and bank two got even-addressed memory words. When the data came back from the RAM it was all loaded into the motherboard cache. On average (assuming random accesses), the next word of memory is in the cache half the time. In practice, code accesses are helped the most, since code uses long sequential runs of memory. Data achieves a better than 1.5x speedup, since you often have data locality, as in stack frames and records.
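The odd/even mapping described above boils down to a single modulo on the word index; a toy sketch (the 4-byte word size is an assumption):

```python
WORD = 4  # bytes per memory word (assumption)

def bank_for(addr):
    return (addr // WORD) % 2      # bank 0 = even words, bank 1 = odd

for a in range(0, 32, WORD):
    print(f"addr {a:2d} -> bank {bank_for(a)}")
# Sequential accesses ping-pong 0,1,0,1,... which is why code, with its
# long sequential runs, benefits the most.
```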
Re:The problem with Rambus compared to SDRAM... (Score:3)
The consequence of this is that compilers would have to be optimized for that kind of memory access - i.e. accessing a few pages is expensive (and slow) under Rambus, slower than under SDRAM. Accessing many pages is more effective.
And that's a really difficult optimization as you basically have to optimize data accesses as well as code. Data is where it is, locality really can't be improved easily. Systems can be tuned to grab a few pages in a row, but then that may still slow you down if you only used data or code on one of them.
Interleving memory banks (Score:4)
Whatever happened to interleaving memory banks for more speed? It does raise the pin counts, but package technologies have been developed that mitigate that. I have an old 486DX2-66 motherboard that does interleaving between two banks of RAM. Each cache load loaded two memory words into the external cache instead of one. It won't lead to better write performance, but read rates will nearly double (you get something like a 1.8x effective increase).
Re: Rambus revisited (Score:1)
Here's hoping that Rambus goes down the flaming road to hell, and that the majority of the non-corporate investors bail out before they get hurt too much more.
Re:much cheaper than Greyhound. ;-) (Score:1)
If ye'd been readin' dis-a-here line-o-talk very well you-da known dat da ticket for RAMBUS eez much more eexpenseeve than da Greyhound, can'ta go uppa da hills very queek, and will probably go wheels up afore you get to CA.
BTW, whatcha been smokin' and where can I get some?
Rambus=phft! (Score:4)
Re:Interleving memory banks (Score:1)
Actually, the next Alpha 21364 chips will have a Rambus memory controller integrated onto the chip die.
Re:The problem with Rambus compared to SDRAM... (Score:2)
Now, for memory, this doesn't help us today (or even for the next couple of years probably). But five or ten years down the line, a method like RAMBUS's might prove to have been the correct long term choice.
However, playing Devil's Advocate, I'd also have to believe that the SDRAM type technologies are going to push into higher and higher clock rates as well (due mainly to the relatively short paths that the signals must travel), and give RDRAM no clear performance win for quite a while. Also, it isn't clear to me that RDRAM is the right implementation of an idea, even if the idea is a good one.
But the main point is that while RAMBUS doesn't have a clear performance advantage now, it may in 5 years or so. But I wouldn't want to bet the farm on it.
Re:A more tempered look at DRDRAM (Score:1)
Original article (Score:2)
I often wonder about why some articles are accepted and others are rejected on /.
Dual Channel Rambus vs DDR (Score:2)
-Aaron
Question from the past (Score:1)
Wazzzzup!!!!
Re:The problem with Rambus compared to SDRAM... (Score:2)
If there's predictability in the access stream, two or in some cases three levels of cache have already stripped it out. We're into the totally unpredictable range now. Interrupts, table-based branches, database accesses, that kind of thing. That's why even relatively great changes in DRAM performance only make 5% or so differences in system performance -- most of the time the access hits cache and it doesn't matter.
Re:RAMBUS in PlayStation 2 (Score:2)
Dunno where you got that. What I am saying is that the relatively small volumes Sony commands aren't going to be enough to materially alter the market economies of scale. In DRAM, ( volume => low cost ) and ( low cost => volume ), which should be recognizable as positive feedback. Whoever loses the edge in volume will effectively disappear, and you can bet that Sony engineers are furiously preparing a contingency product using DDR SDRAM.
Re:RAMBUS in PlayStation 2 (Score:2)
Shrinking RDRAM doesn't do that much for the power consumption. That's because the worst of the power is in the RAC, which sucks (apt term in this case) so much (juice) because it's an open-drain constant-current analog interface driving many milliamps onto the I/O lines. The DLL also gobbles electrons. Neither of these gets smaller or less power-hungry with process technology, which puts RDRAM on a nasty track in cost and yield terms. (And no, I really don't want to go into why they don't scale.)
As for Sony, the situation has changed. Obviously a different system controller would be needed, but present DDR parts provide more bandwidth than the RDRAMs do with about the same pincount, less power, and fewer external components. Check out the memory on the GeForce boards.
Re:Interleving memory banks (Score:3)
Since DRAMs are inherently much wider internally than externally, it's much more economical to do this inside the DRAM itself. SDRAM etc. read an entire row at once and then shift out chunks in very high speed bursts. Meanwhile they have four or more internal banks which can be accessed for overlapping row accesses. Interleaving would require either making the data bus twice as wide (fuggedaboutit) or switching ownership of the databus on every clock (which is even dumber). Thus, no more interleaving.
Re:RAMBUS in PlayStation 2 (Score:3)
This isn't even in the noise. DRAM volumes are measured in millions per day, not per year. Industry unit volumes are on the order of thirty billion devices per year. Somehow I doubt that Playstations will make a serious impact on that.
Re:A more tempered look at DRDRAM (Score:3)
Which is why DDR parts are moving to x16 wide rather than x8 for the largest volumes. Remember when DRAM was x1 and only a few x4 parts were around? x32 and x64 are on the roadmap for later generations. Basically, the width grows more slowly than the devices' size because the increasing appetite for RAM (thanks, Bill!) keeps raising the level of granularity that anyone really wants (who does 32 MB main-memory granularity any more?) Small systems are a bit different, which is why graphics controllers use x32 parts today.
Re:The problem with Rambus compared to SDRAM... (Score:4)
DDR-SDRAM is actually less parallel than Rambus. DDR has a strobe (clock) line for each eight data lines, where Rambus uses a common clock for sixteen. DDR-II also moves to a dedicated differential strobe. The result is that the skew within byte lanes (the main limit on the transfer rate for source-synchronous signaling schemes) is lower and therefore the data rate can be higher.
In contrast, Rambus tries to keep the clock rate low by doing four transfers per clock cycle. Since the data has a frequency half of the transfer rate, more width, and is single-ended, there's really no engineering advantage to the 4x multiplier. There is a honking great cost, though, since it requires the devices to oversample the data in between clock edges. This leads to more jitter (sampling-point variability) and makes the Rambus interface (RAC) very complex and expensive. The primarily-analog circuitry in the RAC is one of the main reasons that RDRAMS have such hideously low yield -- it's just ugly getting all of those comparators and timing paths to match up in a cost-means-everything DRAM process which is basically oriented to making lots and lots of cheap capacitors.
Early DDR devices are already running at over 400 MT/s (million transfers per second) in small systems such as graphics controllers and Transmeta's low-power systems. JEDEC is now putting the finishing touches on DDR and most of the hairy design work is moving to DDR-II. As usual, the second pass mostly applies the lessons learned on the first pass. DDR-II will have less legacy support and will remove some features in the "nice but not worth the speed cost" category.
The objective is to run early parts with >400 MT/s data rates in large systems -- or in other words, your 72-bit DIMM is going to have more than twice the bandwidth of those still-not-available 800 MT/s Rambus parts -- and the DDR-II parts won't require water cooling, either.
Oh, yeah? (Score:1)
Sure IBM made $$ from MCA, BUT it cost them marketshare. Yours is the first comment from inside or outside IBM that hasn't concluded that MCA was an unmitigated disaster for IBM.
Re:The problem with Rambus compared to SDRAM... (Score:1)
However, once they come out, they'll be well worth grabbing. Rambus won't be able to keep up, especially in large memory configurations, since latency increases with each RIMM added...
Re:The problem with Rambus compared to SDRAM... (Score:1)
Oops.
Yeah, thanks.
Re:The problem with Rambus compared to SDRAM... (Score:1)
(And here's a link to a search on Pricewatch [pricewatch.com] too...)
RAMBUS in PlayStation 2 (Score:2)
Re:RAMBUS in PlayStation 2 (Score:2)
This kind of thinking irks me.
Re:The problem with Rambus compared to SDRAM... (Score:2)
--
Re:The problem with Rambus compared to SDRAM... (Score:1)
Think about stuffing a server with 2GB of RIMMs (where each 256MB RIMM costs around $1500-2000 apiece)... the memory itself would cost more than the rest of the server components. Also, having 8 RIMMs would really crank up the latency to access RAM (even if the chipset supports dual channels, that's 4 RIMMs per channel).
I doubt Rambus will go bust, given the PlayStation 2 sales (and other computers/consoles that might be based on Rambus). But if the yields of high-density Rambus barely improve and prices don't go down enough, then there is a chance that Rambus will lose out. Rambus is a nice technology... but could it be too good?
Re:Interleving memory banks (Score:1)
> but package technologies have been developed that mitigate that.
Well, interleaving memory banks is alive and kicking in non-Wintel-cheapo-PC systems, aka so-called "workstations". Suns, HPs, and Alphas can all use the speedup effect of interleaving memory banks if you stick enough RAM modules into them. And due to the incredible braindeadness of Intel and Rambus, they will never ever use Rambus RAMs for their systems. And high cost is not the reason!
just my 2 cents
--
Re:Interleving memory banks (Score:1)
Rambus and sales... (Score:1)
Re:The problem with Rambus compared to SDRAM... (Score:2)
133MHz DDR = 266MHz x 64 bits x 0.65 (bus optimization) = 1.382 GBytes/sec
400MHz RAMBUS = (400MHz x 2/clock cycle) x 16 bits = 1.6 GBytes/sec
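The same arithmetic in code, for anyone who wants to play with the parameters; note the 0.65 "bus optimization" factor is this post's derating estimate, not a spec number:

```python
def gbytes_per_sec(mhz, transfers_per_clock, bus_bits, efficiency=1.0):
    # peak bandwidth = clock * transfers/clock * bus width in bytes
    return mhz * 1e6 * transfers_per_clock * (bus_bits / 8) * efficiency / 1e9

ddr    = gbytes_per_sec(133, 2, 64, efficiency=0.65)   # PC133 DDR, derated
rambus = gbytes_per_sec(400, 2, 16)                    # PC800 RDRAM, peak
print(f"DDR:    {ddr:.3f} GBytes/sec")     # ~1.383
print(f"Rambus: {rambus:.3f} GBytes/sec")  # 1.600
```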
During the initial design stages, they found that RAMBUS chips often overheated and burned out, so some genius thought up the idea of having a RAMBUS RIMM turn off when it isn't doing much work... rah, that was pretty stupid! To power up the RIMM again takes an *ETERNITY* in computer time!
Oh, and RAMBUS costs heaps as well, because of all the patent rubbish.
Bottom line: DDR is CHEAPER, FASTER, and BETTER than RAMBUS.
Ekapshi.
Re:The problem with Rambus compared to SDRAM... (Score:2)
However, even with a trace cache, DDR SDRAM will probably outperform RAMBUS, because its bandwidth is higher and its latency is lower.
DDR -> 266 MHz x 64 bits = 17024 Mbit/sec;
Initial latency -> 6 ns + 2 CAS cycles.
RAMBUS -> 800 MHz x 16 bits = 12800 Mbit/sec;
Initial latency -> 50 ns + 2 CAS cycles.
DDR is simply faster. It's also going to be cheaper, because it uses the same access protocol as SDRAM, so northbridge chips, sockets, and the chips themselves can be manufactured with minor changes.
If RAMBUS succeeds, it will be only because Intel shoved it down people's throats. AMD has officially announced that they are in the DDR camp.
Re:The problem with Rambus compared to SDRAM... (Score:2)
The Athlon platform has the capability to scale to a 400MHz bus with Quad Data Rate. RAMBUS would have to be at 6.4GHz to keep up, and it would still have higher latency. RAMBUS has very little chance of catching up to DDR.
Rambus can have lower latency and higher bandwidth (Score:1)
However, Rambus (and other serial memory interfaces) greatly reduces the pin count necessary for a given bandwidth. A performance system could use this to put the memory controller on the processor, eliminating one part of the latency. Also, multiple memory channels could be put in one system to increase the total bandwidth.
The current designs from Intel are just a hack of their current chipset and do not try to take advantage of the possibilities.
RAMBUS = Beta (Score:1)
mark
Re:The Real Problem (Score:1)
and the lesson here is.... (Score:2)
the broader your market, the broader your sales.
The wired effect ? (Score:4)
Re:Hah! (Score:1)
Re:One world: DUH (Score:1)
If it didn't make things even more offtopic, I would like to be set straight...
--
One world: DUH (Score:4)
Nobody really likes new technologies that add significant cost through royalties -- see FireWire (ahem) IEEE 1394. USB was free, which is why everyone had those controllers on their motherboards years before they had anything to plug into them.
Rambus would have a great chance if it were not commanding a 500% premium, and instead cost only 10-15% more than current SDRAM. If they could get the prices down, which they won't, Intel would be able to release a four-channel solution that would significantly reduce its latency (this is coming) and increase its performance. As the technology became financially reasonable for everyone to use, mass production would bring the price down even further.
Besides, who wants to go back to paying 1995 prices for RAM?
--
Down in flames, hopefully. (Score:2)
God bless The Register. Between BOFH and the pin they poke in the New Economy bubble, it's essential reading.
-carl
Re:Interleaving on x86? (Score:1)
Re:The problem with Rambus compared to SDRAM... (Score:1)
800 MHz RDRAM? Where can you get that? If you're thinking of PC800 RDRAM, remember that it is actually clocked at 400MHz...
Compared to / compared with (Score:1)
BTW thanks for the interesting information.
Re:Rambus was doomed from the start (Score:1)
Re:A more tempered look at DRDRAM (Score:1)
The bandwidth/pin count ratio is very important for servers that need huge bandwidth. You can almost put 3 Rambus interfaces on a chip for the pin cost of one DDR SDRAM interface. That's something that will make many chip designers take a good hard second look at Rambus.
Also, something Tom Pabst never really covered was how much of the Camino performance pitfall was due to the chipset and how much was due to the memory protocol; it's perfectly possible to implement a MUCH more efficient Rambus memory controller than Camino does.
Having said all this, I think Rambus screwed up in many ways; they are notoriously arrogant in the industry, created a product that doesn't yield well, and tried to decommoditize a market, which, although possible, is definitely going against the flow in the industry. But that doesn't mean their technology is uninteresting or that Intel was smoking something when they decided to back Rambus.
DDR SDRAM would have taken a lot longer to come about if JEDEC thought they were sitting pretty with the next generation of mass-produced DRAM.
Bye Bye Wintel (Score:2)
Rambus Patents (Score:2)
I'm not an EE nor a patent lawyer, so I cannot say if they have a legit claim. If they did win, however, they would have some deep pockets to dip into.
I wouldn't count them out. Not yet at least.
-- It ain't over till it's over.
Why is this so difficult to figure out (Score:2)
Proprietary standard + 5 times as expensive = going to go bust
Yes, Rambus is better than what we have now, and it might even be better than DDR (debatable), but this is irrelevant. A Ferrari is technically superior to a Camaro in hundreds of ways, yet you rarely see one. History is full of failed superior proprietary standards. This one will be no different.
Re:RAMBUS in PlayStation 2 (Score:1)
wrong - he's not talking about *pins* (Score:1)
More recent articles on Tom's:
Dissecting Rambus [tomshardware.com]: the March 15 article that perhaps (?) triggered Rambus' recent stock dump.
Rambus Revisited [tomshardware.com]: Second article, April 3rd.
Re:The Real Problem (Score:2)
I didn't think that IBM's MCA program was a disaster. From a financial point of view it was quite successful. All the bigger HW manufacturers were pumping out proprietary hardware back then; IBM was no exception. Not only did the PS/2 offer a different architecture, it also offered IBM and its partners a lucrative market in proprietary "must haves". Remember, these were the days of Bigco and the like going with single-provider solutions. IBM was a big player in this market with its early OS/2 offerings, Token Ring enterprise solutions, and seamless integration with its SNA world of products. The MCA push was all a big part of their overall business plans.
Granted, things have changed a lot now and being proprietary is no longer such a great idea, but in that business cycle it did make a substantial amount of money and improved IBM's marketshare in the PC arena when it really needed it. Zenith was their big competitor at the time, and they were getting crushed.
Another example of a technology ahead of its time. (Score:1)
Now if only I could optimize gcc for rambus memory.. :)
Re:The problem with Rambus compared to SDRAM... (Score:2)
The numbers are accurate, but DDR memory isn't actually being manufactured yet
Actually, DDR memory modules are being manufactured. [celestica.com]
A more tempered look at DRDRAM (Score:5)
1 - I've been involved in the design of DRDRAM for several years, now. I've been in memory design for 18 years, also. I'm slightly more informed on this than the average geek-on-the-street.
2 - I really don't like the principle behind DRDRAM. Proprietary things are supposed to eventually become commodities, not the other way around. Memory has long been THE commodity in a computer, and here they are trying to make it go the other way. But it's technically interesting, my contributions won't make or break the whole scheme, and the kids gotta eat.
Cons:
Fundamentally, for at least the near future, it simply takes more silicon to implement the Rambus interface. No matter how much learning you do, that area doesn't go away. Perhaps after the spec stabilizes fully, it may be possible to come up with better fully custom circuit implementations, but that's at least a little ways in the future. Plus in the performance race, it's possible that the spec may never stabilize sufficiently before a given generation is obsolete.
It's very complex. My boss would have slapped me silly had I ever even thought of coming up with this. In years past I've been slapped silly for coming up with stuff a fraction of this complexity.
Latency - Obvious, though there is a second side to this, under Pros.
Wash:
The frequencies are high, and the margins tight. I suspect EVERYONE is going to have to cope with the same realm, sooner or later. DRDRAM is simply a bit ahead of its time on this one, and is taking the pain first. I remember when it was tough getting the whole chip to run at 100MHz for SDRAM, or even 150 ns for page mode DRAM.
Pros:
Granularity - don't discount this one. Presently DIMMs are made with 8 chips, each organized with X8 outputs. That says that 64Mb technology makes 64MB DIMMs. It also says that 256Mb technology makes 256MB DIMMs, even though mainstream PCs today are only now making the transition from 64MB to 128MB. That's part of the reason we've dropped the 4X-per-generation habit, and are bringing out 128Mb SDRAM, because the market just isn't ready for 256Mb. 512Mb and 1Gb are on the drawing boards and early hardware now, so this problem is going to get worse. A single 1Gb chip holds 128MB. (Obviously)
Pin count - As more integration happens, the reduced pincount of DRDRAM may become a bigger factor. It's a simple matter of 168 vs 55, though the 55 need to be at a higher frequency. It's simply easier to integrate a DRDRAM interface and have enough pins to do all of those other things, like an AGP bus.
Banking (Latency) - While simple latency is poorer, under situations with multiple threads of access (multithreading and/or DMA streams) the higher bank count of Rambus becomes an advantage. If a bank is left open, or even if it has just been closed following a prior operation, you need to wait a 'restore time' before you can access that bank again. With DRDRAM there are usually more accessible banks, so odds are better that the next access will be to a bank that is currently closed. Even if the simple latency is longer, if you don't have to pay the 'restore time' penalty, the effective latency becomes shorter. This doesn't show up unless you have multiple memory access streams, though.
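A crude way to see the banking argument: under uniformly random accesses, with one recently used bank still in its restore window, the chance the next access stalls falls off as 1/banks. This is a toy model, not a simulation of real DRDRAM timing:

```python
for banks in (4, 8, 16, 32):
    p_stall = 1.0 / banks          # next access hits the busy bank
    print(f"{banks:2d} banks: {p_stall:.1%} chance of paying restore time")
```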
Re:The problem with Rambus compared to SDRAM... (Score:1)
Jeff
Re:The problem with Rambus compared to SDRAM... (Score:3)
133MHz DDR = 266MHz x 64bits = 2.128 Gbytes/sec
800MHz RDRAM = 800MHz x 16 bits = 1.6 Gbytes/sec
Rambus is crap whichever way you look at it.
Jeff
RamBus DRDRAM vs. DDR SDRAM (Score:5)
The author explains why he thinks DDR SDRAM is better than DRDRAM and shows once again that MHz isn't everything.
--
Rambus is not going away (Score:1)
Re:Hah! (Score:1)
Re:The problem with Rambus compared to SDRAM... (Score:1)
Remember that 1 mbit = 1024 bit and 1 gbit = 1024 mbit = 1048576 bits.
Grtz, Jeroen
Re:Bye Bye Wintel (Score:1)
Re:Interleving memory banks (Score:1)
Apartment6 [apartment6.org]
Re:The problem with Rambus compared to SDRAM... (Score:2)
I think this analysis is a bit flawed - if you give Rambus the same 64 pins that you allow the DDR solution, you find Rambus parts winning in bandwidth by a large margin, plus you can have independent accesses going on across the four different channels, which might be nice if you have the CPU, AGP, and PCI all contending for memory at once.
I think rambus has done some very cool stuff. When they first introduced their technology (1991!) it was really gee-whiz compared to fast-page mode DRAM.
Their problems are latency, die size penalty, royalty costs, the care that needs to go into designing a PC board for them, and one no one else has mentioned - test costs.
The testers used to test rambus parts are hideously expensive, slow, and can't test as many devices at once which leads to major throughput and cost problems on a factory floor compared to DDR SDRAM which uses an incremental improvement to the testers already in use.
Re:RAMBUS in PlayStation 2 (Score:1)
If the PS2 had been manufactured using SDRAM, it would have required a 6-layer board instead of the 4-layer one that it actually uses, and the board would have had to be bigger to accommodate the traces. For Sony, the increased cost of the memory was irrelevant compared to the savings in other areas. I think that RAMBUS will not die, but it will find a niche in things like the PS2. It is doomed to forever be a low-volume product. It is the wrong technology for high-performance architectures at the present moment.
It will be interesting to see how the Alpha 21364 performs; putting the controller on the chip might sort out some of the latency issues, but something will still need to be done about the heat (perhaps a shrink).
-dp
A nit: CPU speeds (Score:1)
CPU speeds accelerated faster than anyone planned (a year ago, 600 MHz was state of the art!)
A little while ago, I did a quick and dirty calculation using Moore's Law vs. my first computer, an 8088 @ 4.77 MHz. I didn't look at any computers that I purchased since 1984, only the original. I did account for the speed benefits from improvements in 286, 386, 486, PI, and PII systems, not just raw MHz.
The results? Damn close to what I'm using now.
So, if I can predict 15 years of CPU speeds by using Moore's Law and an old 8088 as input data, why can't manufacturers?
BTW: In about 5 years -- 2004 -- the same calculations show that I'll be using the equivalent of a 2.8 GHz PII system... and that's behind the curve. Top-of-the-line systems will be about as fast as a PII running at 7.5 GHz.
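Here's that back-of-the-envelope projection as a script. All the parameters are assumptions, it models equivalent performance rather than raw clock, and the answer is very sensitive to the doubling period you pick (18 vs. 24 months roughly brackets the figures above):

```python
# Moore's-law projection from a 4.77 MHz 8088 bought in 1984.
BASE_MHZ, BASE_YEAR = 4.77, 1984

def projected_mhz(year, doubling_years):
    return BASE_MHZ * 2 ** ((year - BASE_YEAR) / doubling_years)

for year in (1999, 2004):
    for dbl in (1.5, 2.0):
        ghz = projected_mhz(year, dbl) / 1000
        print(f"{year}, doubling every {dbl} yr: ~{ghz:.1f} GHz-equivalent")
```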
Imagine how investors would suffer... (Score:1)
Re:a more clever title... (Score:1)
-----BEGIN GEEK CODE BLOCK-----
v.3.12
GCS d-(--) s+: a-- C+++$>++++$$ UL++$>++++$$ P+>++++$ L++>++++$ E--- W++$>++
a more clever title... (Score:4)
-----BEGIN GEEK CODE BLOCK-----
v.3.12
GCS d-(--) s+: a-- C+++$>++++$$ UL++$>++++$$ P+>++++$ L++>++++$ E--- W++$>++
Re:The problem with Rambus compared to SDRAM... (Score:1)
...is in chipset overhead. AFAIK, it takes more control circuitry to run DRDRAM, as well as a faster clock. Therefore, it's going to run hotter in the chipset. A big step for DDR SDRAM might be to drop the voltage even further (if it hasn't already). The original Pentiums, at 60 and 66 MHz, ran too hot because of their 5V design. With the Pentiums at 75 MHz and beyond, Intel dropped the voltage to 3.3V, and the trend has continued to the present day, though it's reaching the theoretical minimum that must be there to switch the gates.
So if a pure 1.8V system comes out...CPU core and I/O lines, cache, chipset, RAM, everything...it would run a lot cooler. Until everyone trades that in for faster, of course. The only thing I can see holding back this design are bridge chips required to run the PCI slots at 3.3V. Perhaps it's time for a 1.8V/133MHz PCI bus and 128-bit AGP II?
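The reason the voltage is such a big lever: CMOS switching power goes roughly as P = C x V^2 x f, so it falls with the square of the supply. A quick sketch of the 5V -> 3.3V -> 1.8V progression at fixed frequency and capacitance:

```python
for v in (5.0, 3.3, 1.8):
    rel = (v / 5.0) ** 2      # switching power relative to the 5 V design
    print(f"{v:.1f} V supply: {rel:.0%} of the 5 V switching power")
# -> 3.3 V is ~44% and 1.8 V is ~13% of the 5 V figure.
```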
Re:The problem with Rambus compared to SDRAM... (Score:1)
1 mbit = 1024 bit and 1 gbit = 1024 mbit = 1048576 bits
Huh?? Unless computer math was rewritten since last I checked:
1 Gbit = 1024 Mbit = 1048576 Kbit = 1073741824 bits
That, in turn, is:
134217728 Bytes = 131072 KBytes = 128 MBytes = .125 GBytes (1/8 of a GByte)
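In code, for the skeptical:

```python
gbit = 1024 ** 3                # 1 Gbit
print(gbit)                     # 1073741824 bits
print(gbit // 8)                # 134217728 Bytes
print(gbit // 8 // 2**20)       # 128 MBytes = 1/8 of a GByte
```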
MCA, ISA, and EISA (Score:1)
I didn't think that IBM's MCA program was a disaster.
Absolutely right. The disaster was that IBM wanted everybody to pay retroactive royalties on the ISA stuff they had used, which prompted the creation of EISA: a faster, 32-bit, backwards-compatible ISA bus. IBM is pretty lucky it never caught on. As it is, everyone just continued using ISA, since it was the lowest common denominator, and MCA choked and died. With MCA's death came the demise of Plug&Play. (No, really -- you just put the card in, swapped the reference and options (drivers) disks a couple of times, and it ran!)
Re:MCA, ISA, and EISA (Score:1)
Oops, that's right. I was speaking from a home PC point of view.