Hardware

Will Rambus Go Bust?

retep writes: "32BitsOnline has an interesting article about how the new memory standard RAMBUS may go bust. Essentially, a bunch of missteps with Intel's Camino chipset, high costs, the rise in popularity of alternative CPUs such as the Athlon and a lack of performance may prove its undoing. I remember a story in Wired just a year or two ago praising RAMBUS for its innovative tactics; look what's happened now."
  • by Anonymous Coward
    dissecting rambus [tomshardware.com]
    don't always trust the hand that feeds you [tomshardware.com]
    nuff said.
  • by Anonymous Coward
    Part of the licensing deal with RAMBUS prohibits you from saying bad things about it publicly. But you can judge what the industry thinks about it by their actions. The Itanium demo box is a good example. Also, Nintendo never criticized the RAMBUS in the N64, but they aren't using it for the Dolphin.
  • I've always wondered: on my motherboard (Epox EP51-MVP3E-M, VIA MVP3 chipset), my BIOS has an option for DRAM bank interleaving. Is that option actually DOING anything at all? I find it somewhat hard to believe that such a basic mobo (Super7, $100 1.75 years ago) has bank interleaving.

    I have the option enabled, although I haven't done any benchmarks. No obvious performance diff. Any suggestions on how to benchmark whether this option does anything?
  • by Telcontar ( 819 ) on Wednesday April 19, 2000 @01:00AM (#1124385) Homepage
    The problem with Rambus compared to SDRAM is a higher latency. This means the CPU has to wait longer until the memory is read, but can read memory faster from then on.
    The consequence of this is that compilers would have to be optimized for that kind of memory access - i.e. accessing a few pages is expensive (and slow) under Rambus, slower than under SDRAM. Accessing many pages is more effective.
    The question is, why did Intel choose this kind of tradeoff? Was there no alternative that did not increase the latency by a factor of 10 (according to the link to Tom's Hardware)?
  • You should optimize the cache controller, not the compiler. Instead of keeping all accessed pages you should start keeping pages where access has started.

    Sounds like the "Read-ahead" feature which has been in disk caching software since the onset of it all. Amazing how things tend to come full-circle. :-)

  • by Digital Commando ( 2881 ) on Wednesday April 19, 2000 @02:27AM (#1124387)
    The real problem with Rambus is that Intel tried to squeeze the market for RAM, the same way it has tried with chipsets, graphics, and networking. And it is getting kicked in the teeth everywhere. Didn't they learn anything from IBM's MCA disaster?
  • by jht ( 5006 ) on Wednesday April 19, 2000 @04:07AM (#1124388) Homepage Journal
    You're dead right on the Rambus pricing issue - it costs way too much. Part of that is the royalty factor, but it's more (right now) caused by low yields on Rambus parts and a very small number of makers. There's no flood of RDRAM on the market to drive prices down the way there is with SDRAM. If yields were equivalent to SDRAM and more people were making the parts, the prices would be a lot more competitive, but still somewhat pricier because of the royalty issue.

    As far as USB/Firewire goes, though - it isn't royalties that have slowed Firewire acceptance. Intel has included USB in every chipset since the LX, several years ago - in fact, USB support was in silicon before any OS had support for it. That's why it was on motherboards - it's part of the chipset whether or not you want it, so you might as well build the ports. Firewire royalties are tiny (below $1/port), and they're split between Apple and the other patent holders (I believe TI and Canon are in the group, too). Firewire would have been adopted more quickly had Intel followed through with their earlier plans to include it in newer chipsets.

    The other thing that sped acceptance of USB versus Firewire is that USB 1.0 was ready a (relatively) long time ago, and Firewire is only a couple of years old. The DV cameras that really take advantage of Firewire have just begun to be priced appropriately for the casual camcorder buyer. Sony and Apple build the ports onto virtually all their systems, and are selling them as fast as they can build them.

    Also a Good Thing for USB - since the CPU controls the bus and it's a simple protocol, it's well-suited for cheap, simple peripherals like modems, digital still cameras, low-end scanners, audio devices, etc. Firewire aims a lot higher.

    - -Josh Turiel
  • by jht ( 5006 ) on Wednesday April 19, 2000 @02:50AM (#1124389) Homepage Journal
    When Rambus was being designed, EDO RAM was the current standard, and Rambus is competitive in a head-to-head with EDO - with latencies in the same class or faster but much better transfer speeds. SDRAM was intended as a stopgap measure to provide a memory technology that could keep up with the faster Pentium/Pentium II systems during the wait for Rambus to make it to market. But a few things happened to screw it all up:

    SDRAM took off as a standard, and other chipset makers adopted it - and extended it to PC100 and PC133 from the original PC66.

    CPU speeds accelerated faster than anyone planned (a year ago, 600 MHz was state of the art!)

    Rambus was late to market, as were the systems designed to use it. This gave SDRAM more of an opportunity to become entrenched.

    Rambus has proven to be difficult to manufacture to this point, with horrible yields.

    And finally, SDRAM turned out to be a lot more scalable than anyone anticipated at the beginning.

    If Intel had expected DDR PC133 SDRAM, Rambus might never have made it out of the starting blocks in the first place. But given the lead time on their chipset and CPU design cycles, they had to make a call based on what the trend appeared to be - and they bet on the wrong one. The 810 chipset is a lot more important to Intel right now than they had expected it to be, and the 815 wasn't even planned - they also were hoping to retire BX by now. Some of their supply problems of late have been driven by this misforecast. When the dust settles, I expect to see Rambus slowly squeezed out of the mainstream and Intel to quietly write off their investment. It seemed like a good idea at the time...

    - -Josh Turiel
  • Two stories of interest on The Register right now.
    The first one [theregister.co.uk] says that Kingston Technologies is dropping prices on some of its Rambus RIMMs by 35% on average, and by as much as 68%.
    The second one [theregister.co.uk] says "Micron...will demo three platforms using double data rate (DDR) memory at WinHec 2000 in New Orleans next week."
    Look for a dual processor platform, a dual processor dual controller platform for the workstation and server markets, and a uniprocessor system, all running 266MHz memory modules and using a 133MHz front side bus.
    Yes, it's been very entertaining reading the Register articles about how everybody kept badmouthing Rambus and the stock price kept climbing in response, until the other day when investors finally tripped over a clue.
  • 800 MHz RDRAM? Where can you get that? If you're thinking of PC800 RDRAM, remember that it is actually clocked at 400MHz...

    Right, it's actually 400MHz, double-pumped. But Rambus calls it 800MHz, which really makes sense: even though the _clock_ is only a 400MHz sine wave, the data/address signals operate at 800MHz, and that's really what matters.
  • All you need to do is look at the current street prices for RAM.

    I can get PC-100 SDRAM for about US$100-$110 per 128 MB DIMM; PC-133 SDRAM for about US$130-$145 per 128 MB DIMM; and 800 MHz RDRAM for about US$700 per 128 MB RIMM.

    No wonder people aren't so interested in RDRAM. If my guess of US$180-$195 for a DDR-SDRAM 128 MB DIMM is true, then NOBODY is going to buy RDRAM in the long run.
  • BTW: In about 5 years -- 2004 -- the same calculations show that I'll be using the equivalent of a 2.8 GHz PII system...and that's behind the curve. Top of the line systems will be about as fast as a PII running at 7.5 GHz

    Well, there are problems with raw clock speed. I realize you said 'as fast as', implying you weren't expecting the raw MHz levels, but just to check with other people...

    At 1GHz, a cycle time is 1ns. In 1ns, light will travel roughly 30cm... about a foot. Electrical signals in traces cover about half that. So if your high-speed bus lines are more than six inches long, the clock at one end of the board will be a full cycle ahead of the other end. At 7.5 GHz, the electrical signals will travel 2cm: less than an inch. With the synchronous CPU designs in use now, everything running at the higher speed has to be smaller than that amount.
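    A minimal Python sketch of that arithmetic, assuming the same rough figure of ~15 cm/ns for signal propagation in PCB traces (about half the speed of light):

        # How far a signal gets in one clock period, assuming ~15 cm/ns
        # propagation speed in PCB traces (roughly 0.5c, as estimated above).
        SPEED_IN_TRACE_CM_PER_NS = 15.0

        def distance_per_cycle_cm(clock_ghz):
            period_ns = 1.0 / clock_ghz
            return SPEED_IN_TRACE_CM_PER_NS * period_ns

        for f in (1.0, 7.5):
            print(f"{f} GHz -> {distance_per_cycle_cm(f):.1f} cm per cycle")
        # 1.0 GHz -> 15.0 cm per cycle; 7.5 GHz -> 2.0 cm per cycle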

    Solutions? The usuals: decrease the feature size to shrink the CPU; integrate more circuitry onto the CPU itself to avoid long traces; separate the CPU clock from the system clock even further... the unusual one is to design a more asynchronous CPU, that doesn't require a single clock standard across the whole chip. While there's been a fair bit of work done on that, it requires throwing out one of the great simplifying design assumptions, and makes verifying the correctness of the design a whole lot harder.

    -- Bryan Feir
  • You're nitpicking a bit: yes, the clock is at 400 MHz, but data is also transferred when the clock goes up and when the clock goes down.

    So you have 400 MHz x 2 x 16 bits, which is still 1.6 GBytes/sec.
  • Moore's Law was intended to be a mandate (to Intel's engineering and marketing departments) as much as it was intended to be a prediction. So, it shouldn't be a shock that Intel has followed through on their promise to double performance/price every 18 months.

    As an engineering road map, Moore's Law has made Intel one of the largest corporations in the world and very, very rich. Why change it? I'll guess that your forecast for 2004 will be spot on.
    --
  • Intel initially agreed to include 1394 as a standard feature in their motherboard chipsets. This would give pretty much universal FireWire adoption for almost no cost to the end user, just like USB.

    Then they changed their mind, and instead have been chasing USB 2.0. One thing to realize is that ever since IBM went off with the PS/2, Intel has basically been controlling the PC spec, especially since they basically control the chipset market. I wonder if they were afraid to let third parties, including Apple, control one of the standard features in a PC.

    (As for camcorders being the most obvious application -- Sony plans to push iLink across their entire product lineup. As Digital TV and other things become more widely adopted, there will be more consumer pressure for 1394 on PCs. It's the applications, stupid! USB had much more obvious applications, because PCs had always lacked a good standard external expansion bus. All that parallel port crap was quickly and happily killed in the face of USB.)
    --
  • Microsoft was actually the biggest thing holding USB back for many years. They were in charge of writing the Windows drivers (duh), but they were years late and very buggy.
    --
  • EISA was plug-n-play, working in a very similar manner to MCA. Certain ISA cards could also be allocated on EISA machines (3Com NICs, for example).

    And EISA did catch on -- in the pre-PCI days Compaq used it heavily in their successful server lineup. Because IBM was basically MIA in the PC server market in those days, I would guess that by 1995, EISA had a much larger installed base than MCA.
    --
  • More like dope up an MCA bus for a 286. Meanwhile Compaq is benchmarketing your ass with an ISA 386 machine.

    One of the big reasons the PS/2 line never caught on is that the CPUs were consistently behind what others were shipping.
    --
  • Hence, by going for the compiler you're optimizing in the wrong place.

    You should optimize the cache controller, not the compiler. Instead of keeping all accessed pages you should start keeping pages where access has started.

    And I would not be amazed if the 64-bit Intel CPUs have such a cache controller. Intel has proved that it can plan very far ahead...
  • Like FireWire, adding USB costs money.
    To be precise, $0.25 a port.
    Contrary to FireWire, where this fee is split seven ways, the fee for USB goes directly to Intel.


    Are you certain about this? As far as I know, royalty-free licenses are available for the core USB 1.1 specs, and these licenses are handled by a non-profit consortium (the USB-IF, I think it was?) that Intel set up together with Microsoft and a bunch of other big companies.

    Also, I believe that Intel plans (but has not yet officially announced) to also offer free licenses for USB 2.0.

  • Finally! A first post that:

    1) Doesn't contain the words "first" or "post"

    2) Is actually funny.
  • The technology for RDRAM is a little dated; it was being developed to contend with EDO RAM, which was a slow beast. Rambus RAM does have the capability to reduce its latency to where it is about the same speed as SDRAM; the problem lies in Intel's chipset with only a single memory channel. DDR SDRAM is really cool because the fab process is so similar to regular SDRAM, which means we can pick it up for a low price. The DDR people claim a 2.6GB/s transfer rate, which is true, but that is burst transfer; it fares much worse under sustained transfer. Given a good chipset and better fab techniques, RDRAM could feasibly end up all over the place. The real killer with Rambus is the stupid licensing; if they would lower their fees a good deal and let volume make up the difference, everyone would be much happier.
  • This is the argument I always hear, that RAMBUS will scale better. But, somehow, I'm just not buying it. Right now, high-end RDRAM is about three times as fast as high-end SDRAM (400MHz vs. 133MHz) with the advantage of being DDR. This is enough to beat out SDRAM despite transferring 1/4 the data at a time (16 bits vs. 64 bits). But soon DDR-SDRAM will hit the market, and RDRAM needs to clock four times as fast to hit the same bandwidth. It seems that 133MHz DDR-SDRAM will be the first version to hit the street. RDRAM will need to clock at 532MHz just to have the same bandwidth, a hefty increase over the current 400MHz top speed of RAMBUS today. If SDRAM manages to hit 250MHz (slower than the slowest RAMBUS available today), RAMBUS will have to clock at a full 1GHz to keep up. How long do you think it's going to be before you can buy a motherboard that runs at 1Gig?
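    A minimal Python sketch of that scaling arithmetic (both interfaces are assumed to transfer twice per clock, so only the width ratio matters):

        # Clock a 16-bit double-pumped RDRAM channel would need to match
        # a 64-bit double-pumped (DDR) SDRAM bus at a given clock.
        def rdram_clock_to_match(sdram_clock_mhz, sdram_width=64, rdram_width=16):
            # The 2x-per-clock factors cancel; the width ratio remains.
            return sdram_clock_mhz * sdram_width / rdram_width

        print(rdram_clock_to_match(133))  # 532.0 MHz, as above
        print(rdram_clock_to_match(250))  # 1000.0 MHz -- a 1GHz part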

    Something else, and I could be wrong about this, but I don't think RAMBUS really qualifies as a serial protocol. It sends 16 bits at a time, and hence has the same sort of timing constraints as a "parallel" solution. If you really tried to send single bits over a single channel to avoid skew problems, you would have to clock that channel at 6.4GHz to keep up with lowly PC100 memory. Good luck!
  • I think the problem is that Intel wants to set a standard.
    The problem is that Intel alone (like M$) can't set a standard.
    To set a standard you need more than one supplier and more than one firm using that standard.
    Look at FireWire (IEEE 1394), which has become a standard after two years.
    With the introduction of DDR SDRAM (double data rate SDRAM), I think Intel has a big problem.
    The fact is that AMD, Apple and other companies are going for DDR SDRAM instead of Rambus.
    DDR SDRAM has almost the same price as normal SDRAM.
    Second, every RAM manufacturer can make DDR SDRAM.
    Rambus uses a different process, and because of patent issues it is more expensive to manufacture than DDR SDRAM.
    In the end, Intel will switch to 133 MHz SDRAM and DDR SDRAM.


  • Nobody really likes new technologies that add significant cost through royalties -- see FireWire (ahem) IEEE 1394. USB was free, which is why everyone had those controllers on their motherboards years before they had anything to plug into them.

    Wrong!!!
    Like FireWire, adding USB costs money.
    To be precise, $0.25 a port.
    Contrary to FireWire, where this fee is split seven ways, the fee for USB goes directly to Intel.
    At first you had to pay $1 a port for FireWire to the FireWire consortium (consisting of Apple, Sony, JVC, Intel (yep, Intel is a member of it too!!!) and three other companies).
    Second, because Intel built USB into their chipsets, USB was on the market a little bit longer.
    The biggest problem was not the availability of USB but the drivers and support.
    That's the biggest difference between FireWire and USB.
    FireWire is a much more mature technology and is aimed at video and data storage instead of input and output devices like mice and printers.

  • Do you have an even number of memory modules? The old ASUS-SP3G MB that I have requires memory modules in pairs for interleaving.
  • Every interleaving method I've seen implemented gave each bank of RAM its own set of control and data lines. When an access was fired off, it was done to both banks. Bank one was used for odd-addressed memory words and bank two got even-addressed memory words. When the data came back from the RAM it was all loaded into the motherboard cache. On average (assuming random accesses), the next word of memory is in the cache half the time. In practice, code accesses are helped the most, as code uses long sequential runs of memory. Data achieves a better than 1.5x speedup, as you often have data locality, as in stack frames and records.
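    A toy Python sketch of that odd/even split (assuming 64-bit memory words, just for illustration):

        # Two-way interleave: even-numbered words go to bank 0, odd-numbered
        # words to bank 1, so sequential accesses alternate between banks.
        WORD_BYTES = 8  # assumed 64-bit words

        def bank_for_address(addr):
            word_index = addr // WORD_BYTES
            return word_index % 2

        print([bank_for_address(a) for a in range(0, 64, WORD_BYTES)])
        # [0, 1, 0, 1, 0, 1, 0, 1] -- a sequential run ping-pongs between banks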

  • The problem with Rambus compared to SDRAM... is a higher latency. This means the CPU has to wait longer until the memory is read, but can read memory faster from then on.
    The consequence of this is that compilers would have to be optimized for that kind of memory access - i.e. accessing a few pages is expensive (and slow) under Rambus, slower than under SDRAM. Accessing many pages is more effective.

    And that's a really difficult optimization as you basically have to optimize data accesses as well as code. Data is where it is, locality really can't be improved easily. Systems can be tuned to grab a few pages in a row, but then that may still slow you down if you only used data or code on one of them.

  • by Bryan Andersen ( 16514 ) on Wednesday April 19, 2000 @02:24AM (#1124410) Homepage

    Whatever happened to interleaving memory banks for more speed? It does raise the pin counts, but package technologies have been developed that mitigate that. I have an old 486DX2-66 motherboard that does interleaving between two banks of RAM. Each cache load loaded two memory words into the external cache instead of one. It won't lead to better write performance, but read rates will nearly double (you get something like a 1.8x effective increase).

  • Everyone, repeat everyone should read both of the articles you mentioned. Especially toward the end where it discusses Rambus' attempts to subvert the JEDEC standards discussions by sneaking part of the spec into their patent application(s), and the industry's responses.

    Here's hoping that Rambus goes down the flaming road to hell, and that the majority of the non-corporate investors bail out before they get hurt too much more.

  • Ye silly AC...

    If ye'd been readin' dis-a-here line-o-talk very well you-da known dat da ticket for RAMBUS eez much more eexpenseeve than da Greyhound, can'ta go uppa da hills very queek, and will probably go wheels up afore you get to CA.

    BTW, whatcha been smokin' and where can I get some?

  • by Khan ( 19367 ) on Wednesday April 19, 2000 @01:15AM (#1124413)
    Well, I can tell you that yesterday at COMDEX, I saw an Itanium chip running in a demo box. When I asked what kind of ram it was using, I was told that it was PC133. When I asked why it wasn't using RAMBUS, the guy stuttered and said that he wasn't allowed to comment on rambus. 'Nuff said.
  • Well, interleaving memory banks is alive and kicking in non-Wintel-cheapo-PC systems, aka so-called "workstations". Suns, HPs, and Alphas can all use the speedup effect of interleaving memory banks if you stick enough RAM modules into them. And due to the incredible braindeadness of Intel and Rambus, they will never ever use Rambus RAMs for their systems. And high cost is not the reason!

    Actually the next Alpha 21364 chips will have a Rambus memory controller integrated onto the chip die.

  • However, one must consider Intel's (and Rambus's) long-term strategy. In the future, density will continue to increase, allowing greater amounts of memory per module (mitigating the RDRAM latency issue somewhat), and the idea is that boosting frequency down the line (for RDRAMs) will be easier than adding more lines to SDRAM, or trying to boost SDRAM frequency (due to its more parallel nature). This is the same argument that all newer high speed serial protocols are using (FireWire, USB, future SCSI), based on the idea that serial protocols can be clocked MUCH higher than parallel.

    Now, for memory, this doesn't help us today (or even for the next couple of years probably). But five or ten years down the line, a method like RAMBUS's might prove to have been the correct long term choice.

    However, playing Devil's Advocate, I'd also have to believe that the SDRAM type technologies are going to push into higher and higher clock rates as well (due mainly to the relatively short paths that the signals must travel), and give RDRAM no clear performance win for quite a while. Also, it isn't clear to me that RDRAM is the right implementation of an idea, even if the idea is a good one.

    But the main point is that while RAMBUS doesn't have a clear performance advantage now, it may in 5 years or so. But I wouldn't want to bet the farm on it.
  • Alright, someone out there in moderator space, stop downchecking the trolls and boost this one up to 5, please. Excellent.
  • I submitted this report [inqst.com] about DDR vs RAMBUS months ago to /. and it was rejected. Now that everyone else has picked up on it, it suddenly becomes news. InQuest [inqst.com] has other articles [inqst.com] about Rambus as well. In addition to /., I also submitted the article to Tom's Hardware, where he later used it as a reference for his current article.

    I often wonder about why some articles are accepted and others are rejected on /.

  • Here [inqst.com] is another InQuest [inqst.com] article comparing dual-channel Rambus to DDR. Like their first article, it is an interesting read. Note that the InQuest articles don't really go into politics like Tom's, but they do give some good benchmark data.

    -Aaron

    Rambus is already bust: look at how much trouble it has caused already, and all the flaws it has. No one wants a kludge on their motherboard.

    Wazzzzup!!!!

  • And that's a really difficult optimization as you basically have to optimize data accesses as well as code. Data is where it is, locality really can't be improved easily. Systems can be tuned to grab a few pages in a row, but then that may still slow you down if you only used data or code on one of them.

    If there's predictability in the access stream, two or in some cases three levels of cache have already stripped it out. We're into the totally-unpredictable range now. Interrupts, table-based branches, database accesses, that kind of thing. That's why even relatively great changes in DRAM performance only make 5% or so differences in system performance -- most of the time the access hits cache and it doesn't matter.
  • So you're saying that a company that manages to make, say, 50 million dollars a year is completely worthless, and that you have to make billions a year in order to be a *real* company?

    Dunno where you got that. What I am saying is that the relatively small volumes Sony commands aren't going to be enough to materially alter the market economies of scale. In DRAM, ( volume => low cost ) and ( low cost => volume ), which should be recognizable as positive feedback. Whoever loses the edge in volume will effectively disappear, and you can bet that Sony engineers are furiously preparing a contingency product using DDR SDRAM.
  • It will be interesting to see how the Alpha 21364 performs; putting the controller on the chip might sort out some of the latency issues, but something will still need to be done about the heat (perhaps a shrink).

    Shrinking RDRAM doesn't do that much for the power consumption. That's because the worst of the power is in the RAC, which sucks (apt term in this case) so much (juice) because it's an open-drain constant-current analog interface driving many milliamps onto the I/O lines. The DLL also gobbles electrons. Neither of these gets smaller or less power-hungry with process technology, which puts RDRAM on a nasty track in cost and yield terms. (And no, I really don't want to go into why they don't scale.)

    As for Sony, the situation has changed. Obviously a different system controller would be needed, but present DDR parts provide more bandwidth than the RDRAMs do with about the same pincount, less power, and fewer external components. Check out the memory on the GeForce boards.
  • by overshoot ( 39700 ) on Wednesday April 19, 2000 @05:36AM (#1124423)
    Basically, interleaving moved onchip. What interleaving did was allow you to overlap accesses when the access time was dictated by the handshake between the controller and the RAM. By having two out-of-phase banks of memory transferring at once the bandwidth could be doubled.

    Since DRAMs are inherently much wider internally than externally, it's much more economical to do this inside the DRAM itself. SDRAM etc. read an entire row at once and then shift out chunks in very high speed bursts. Meanwhile they have four or more internal banks which can be accessed for overlapping row accesses. Interleaving would require either making the data bus twice as wide (fuggedaboutit) or switching ownership of the databus on every clock (which is even dumber). Thus, no more interleaving.
  • by overshoot ( 39700 ) on Wednesday April 19, 2000 @06:04AM (#1124424)
    RAMBUS memory is being used in the PlayStation 2. Considering that 2 million systems have shipped in Japan and the PS2 hasn't been released to the rest of the world yet, I think RAMBUS is going to get some nice business. Remember, the original PlayStation has sold over 75 million units.

    This isn't even in the noise. DRAM volumes are measured in millions per day, not per year. Industry unit volumes are on the order of thirty billion devices per year. Somehow I doubt that Playstations will make a serious impact on that.
  • by overshoot ( 39700 ) on Wednesday April 19, 2000 @06:13AM (#1124425)
    Granularity - don't discount this one. Presently DIMMs are made with 8 chips, each organized with X8 outputs. That says that 64Mb technology makes 64MB DIMMs. It also says that 256Mb technology makes 256MB DIMMs, even though mainstream PCs today are only now making the transition from 64MB to 128MB. That's part of the reason we've dropped the 4X-per-generation habit, and are bringing out 128Mb SDRAM, because the market just isn't ready for 256Mb. 512Mb and 1Gb are on the drawing boards and early hardware now, so this problem is going to get worse. A single 1Gb chip holds 128MB. (Obviously)

    Which is why DDR parts are moving to x16 wide rather than x8 for the largest volumes. Remember when DRAM was x1 and only a few x4 parts were around? x32 and x64 are on the roadmap for later generations. Basically, the width grows more slowly than the devices' size because the increasing appetite for RAM (thanks, Bill!) keeps raising the level of granularity that anyone really wants (who does 32 MB main-memory granularity any more?) Small systems are a bit different, which is why graphics controllers use x32 parts today.
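    A minimal Python sketch of the granularity arithmetic (single-rank modules on a 64-bit bus; the chip densities and widths are the ones discussed above):

        # Smallest DIMM you can build when every chip on it must help fill
        # a 64-bit data bus.
        BUS_WIDTH_BITS = 64

        def min_dimm_mb(chip_density_mbit, chip_width_bits):
            chips = BUS_WIDTH_BITS // chip_width_bits  # chips needed to span the bus
            return chips * chip_density_mbit // 8      # Mbit per chip -> MB total

        print(min_dimm_mb(64, 8))    # 64 MB  (8 x8 chips of 64 Mb)
        print(min_dimm_mb(256, 8))   # 256 MB (8 x8 chips of 256 Mb)
        print(min_dimm_mb(256, 16))  # 128 MB (4 x16 chips -- why wider parts help)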
  • by overshoot ( 39700 ) on Wednesday April 19, 2000 @05:54AM (#1124426)
    the idea is that boosting frequency down the line (for RDRAMs) will be easier than adding more lines to SDRAM, or trying to boost SDRAM frequency (due to its more parallel nature).

    DDR-SDRAM is actually less parallel than Rambus. DDR has a strobe (clock) line for each eight data lines, where Rambus uses a common clock for sixteen. DDR-II also moves to a dedicated differential strobe. The result is that the skew within byte lanes (the main limit on the transfer rate for source-synchronous signaling schemes) is lower and therefore the data rate can be higher.

    In contrast, Rambus tries to keep the clock rate low by doing four transfers per clock cycle. Since the data has a frequency half of the transfer rate, more width, and is single-ended, there's really no engineering advantage to the 4x multiplier. There is a honking great cost, though, since it requires the devices to oversample the data in between clock edges. This leads to more jitter (sampling-point variability) and makes the Rambus interface (RAC) very complex and expensive. The primarily-analog circuitry in the RAC is one of the main reasons that RDRAMS have such hideously low yield -- it's just ugly getting all of those comparators and timing paths to match up in a cost-means-everything DRAM process which is basically oriented to making lots and lots of cheap capacitors.

    Early DDR devices are already running at over 400 MT/s (million transfers per second) in small systems such as graphics controllers and Transmeta's low-power systems. JEDEC is now putting the finishing touches on DDR and most of the hairy design work is moving to DDR-II. As usual, the second pass mostly applies the lessons learned on the first pass. DDR-II will have less legacy support and will remove some features in the "nice but not worth the speed cost" category.
    The objective is to run early parts with >400 MT/s data rates in large systems -- or in other words, your 72-bit DIMM is going to have more than twice the bandwidth of those still-not-available 800 MT/s Rambus parts -- and the DDR-II parts won't require water cooling, either.
  • Not a troll????
    Sure IBM made $$ from MCA, BUT it cost them market share. Yours is the first comment from inside or outside IBM that hasn't concluded that MCA was an unmitigated disaster for IBM.

  • The numbers are accurate, but DDR memory isn't actually being manufactured yet, while it is (theoretically) possible to find (really expensive) 800 MHz RIMMs.

    However, once they come out, they'll be well worth grabbing. Rambus won't be able to keep up, especially in large memory configurations since latency decreases with each RIMM added...
  • "Judging from the rest of the sentence, you said it backwards."

    Oops.
    Yeah, thanks. :)
  • I guess that was hearsay from somewhere else. I was speaking in terms of actual DIMMs available though; I still haven't seen or heard of those being anywhere...

    (And here's a link to a search on Pricewatch [pricewatch.com] too...)

  • RAMBUS memory is being used in the PlayStation 2. Considering that 2 million systems have shipped in Japan and the PS2 hasn't been released to the rest of the world yet, I think RAMBUS is going to get some nice business. Remember, the original PlayStation has sold over 75 million units.
  • So you're saying that a company that manages to make, say, 50 million dollars a year is completely worthless, and that you have to make billions a year in order to be a *real* company?

    This kind of thinking irks me.
  • Plus, RAMBUS creates more heat than conventional SDRAM (SDR or DDR). More heat means case designers may have to redesign their cases, and it would be extremely hard to use in laptops. Also, the more RIMMs you install, the higher the latency. RAMBUS is a serial memory technology, so you degrade performance as you increase memory in the system. Serial memory technologies can be advantageous when reading large blocks of memory, but for normal everyday tasks (office software, CPU-intensive work, games, etc.), it falls short.
    --
  • I think I heard somewhere that Intel is choosing DDR SDRAM for some of their newer server-level chipsets due to the cost (and availability) of Rambus RIMMs.

    Think about stuffing a server with 2GB of RIMMs (where each 256MB RIMM costs around $1500-2000 apiece)... the memory itself would cost more than the rest of the server components. Also, having 8 RIMMs would really crank up the latency to access RAM (even if the chipset supports dual channels, that's 4 RIMMs per channel).

    I doubt Rambus will go bust, because of the Playstation 2 sales (and other computers/consoles) that might be based on Rambus. If the yields of high-density Rambus barely improve and prices don't go down enough, then there is a chance that Rambus will lose out. Rambus is a nice technology... but could it be too good? ;)
  • > Whatever happened to interleaving memory banks for more speed? It does raise the pin counts,
    > but package technologies have been developed that mitigate that.

    Well, interleaving memory banks is alive and kicking in non-Wintel-cheapo-PC systems, aka so-called "workstations". Suns, HPs, and Alphas can all use the speedup effect of interleaving memory banks if you stick enough RAM modules into them. And due to the incredible braindeadness of Intel and Rambus, they will never ever use Rambus RAMs for their systems. And high cost is not the reason!

    just my 2 cents


    --
  • The 840 chipset uses interleaved RAMBUS to effectively halve the latency and double the bandwidth. Even Tom's Hardware [tomshardware.com] reports that it's faster than DDR.
  • Speaking from the perspective of working for an unnamed memory vendor, Rambus isn't doing so hot. We currently offer 64MB and 128MB PC800 RDRAM and haven't sold a single unit, while every day we ship out 60+ units of SDRAM. Even proprietary 512MB kits of Sun SPARC memory are doing better sales-wise. RDRAM has a very limited future if these sales trends continue.
  • The bus efficiency of RAMBUS is about 80%, while DDR is at about 65% IIRC. Even with bus efficiency factored in, DDR comes out to about 1.382 GBytes/sec, while RAMBUS comes out at 1.28 GBytes/sec.

    133MHz DDR = 266MHz x 64 bits x 0.65 (bus efficiency) = 1.382 GBytes/sec

    400MHz RAMBUS = (400MHz x 2/clock cycle) x 16 bits x 0.80 (bus efficiency) = 1.28 GBytes/sec
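    A minimal Python sketch of that arithmetic, using the same rough efficiency figures (treat them as the poster's estimates, not measurements):

        # Effective bandwidth = clock * transfers/clock * bytes/transfer * efficiency
        def effective_gb_per_s(clock_mhz, transfers_per_clock, width_bits, efficiency):
            return clock_mhz * 1e6 * transfers_per_clock * (width_bits / 8) * efficiency / 1e9

        print(effective_gb_per_s(133, 2, 64, 0.65))  # ~1.38 GB/s for PC133 DDR
        print(effective_gb_per_s(400, 2, 16, 0.80))  # 1.28 GB/s for PC800 RDRAM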

    During the initial design stages, they found that RAMBUS chips often overheated and burned out, so some genius thought up the idea of having a RAMBUS RIMM turn off when it isn't doing much work... rah, that was pretty stupid! To power up the RIMM again, it takes an *ETERNITY* in computer time!

    Oh, and RAMBUS costs heaps as well, because of all the patent rubbish.

    Bottomline: DDR is CHEAPER, FASTER, and BETTER than RAMBUS.

    Ekapshi.
  • The Willamette (the next generation of 32-bit Intel CPUs) and the Merced (64-bit) will have a trace cache, which keeps pages based on a trace of code flow. This is probably a similar approach.

    However, even with a trace cache, DDR SDRAM will probably outperform RAMBUS, because its bandwidth is higher and its latency is lower.

    DDR -> 266 MHz x 64 bits = 17024 Mbit/s;
    Initial latency -> 6 ns + 2 CAS cycles.

    RAMBUS -> 800 MHz x 16 bits = 12800 Mbit/s;
    Initial latency -> 50 ns + 2 CAS cycles.
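    A rough Python sketch of what those numbers mean for a single cache-line fill (the 64-byte line size is an assumption, and the 2.128 and 1.6 GB/s peak figures come from elsewhere in this thread; all of it is back-of-the-envelope):

        # Time to fetch one cache line = initial latency + transfer time.
        def fetch_time_ns(line_bytes, latency_ns, peak_gb_per_s):
            transfer_ns = line_bytes / peak_gb_per_s  # GB/s is the same as bytes/ns
            return latency_ns + transfer_ns

        LINE = 64  # assumed cache-line size in bytes
        print(fetch_time_ns(LINE, 6, 2.128))  # DDR:   ~36 ns
        print(fetch_time_ns(LINE, 50, 1.6))   # RDRAM: ~90 ns -- latency dominates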

    DDR is simply faster. It's also going to be cheaper, because it uses the same access protocol as SDRAM, so chipsets, sockets, and the chips themselves can be manufactured with minor changes.

    If RAMBUS succeeds, it will be only because Intel shoved it down people's throats. AMD has officially announced that they are in the DDR camp.
  • I don't think RAMBUS will ever have a clear performance advantage. With SDRAM, they can not only add more parallel streams (DDR or QDR or 8DR), they can increase the clock while maintaining low latency.

    The Athlon platform has the capability to scale to a 400MHz bus with Quad Data Rate. RAMBUS would have to be at 6.4GHz to keep up, and it would still have higher latency. RAMBUS has very little chance of catching up to DDR.
  • Intel's chipsets and processors are the problem. Their processors can only handle a data bus of 64 bits @ 133MHz. When you put memory on a chipset at the other end of this, a single Rambus channel exceeds the bandwidth available on the data bus. The latency sucks because you have the latency through the chipset added to the inherent latency of the Rambus protocol.
    However, Rambus (and other serial memory interfaces) greatly reduces the pin count necessary for a given bandwidth. A performance system could use this to put the memory controller on the processor, eliminating one part of the latency. Also, multiple memory channels could be put in one system to increase the total bandwidth.
    The current designs from Intel are just a hack of their current chipset and do not try to take advantage of the possibilities.
  • Doesn't it ring a familiar tune? People have to pay royalties just to make RAMBUS technology. What on earth were they thinking!

    mark
  • MCA was good. IBM already had MCA periphs, a mature reference, and they knew it kicked ISA's teeth out. VLB was a twinkle in someone's eye, periphs were a year or so out, and it wasn't all that spiffy. IBM needed/wanted a faster, better bus right then. So you go to the R&D people and tell them to dope up an MCA bus for a 486. IBM is happy, their customers get a better bus with faster periphs and are happy. Too bad everybody got all pissy when IBM actually wanted money to use IBM's bus design.
  • ...don't tie your company's future to one company's success, even/especially if it is Intel.

    the broader your market, the broader your sales.
  • by steve.m ( 80410 ) on Wednesday April 19, 2000 @01:57AM (#1124445) Journal
    Didn't Wired also say that interactive movies, push content and Iridium were 'the next big thing'? Look where they are now.

  • By the way, that was just in the last three months.
  • my goof -- I won't let it happen again. :^)

    If it didn't make things even more offtopic, I would like to be set straight...

    --

  • by LocalYokel ( 85558 ) on Wednesday April 19, 2000 @02:51AM (#1124448) Homepage Journal
    Regardless of what these EE majors and other so-called experts have to say, the real reason Rambust is going nowhere is cost. Sure, Rambus is not offering a significant (if any) performance benefit, but even if it did, the price would have to match performance.

    Nobody really likes new technologies that add significant cost through royalties -- see FireWire (ahem) IEEE 1394. USB was free, which is why everyone had those controllers on their motherboards years before they had anything to plug into them.

    Rambus would have a great chance if it were not commanding a 500% premium and instead cost only 10-15% more than current SDRAM. If they could get the prices down, which they will not, Intel would be able to release a four-channel solution that would significantly reduce its latency (this is coming) and increase its performance. As the technology became financially reasonable for everyone to use, mass production would bring the price down even further.

    Besides, who wants to go back to paying 1995 prices for RAM?

    --

  • The Register has been running an ongoing series of stories about how Rambus is a huge conspiracy and anybody who invests in them is a dupe.

    God bless The Register. Between BOFH and the pin they poke in the New Economy bubble, it's essential reading.

    -carl
  • I guess every interleaving memory scheme requires the memory in pairs (or even quadruples for 4-way interleaving)
  • 800MHz RDRAM = 800MHz x 16 bits = 1.6 Gbytes/sec

    800 MHz RDRAM? Where can you get that? If you're thinking of PC800 RDRAM, remember that it is actually clocked at 400MHz...

  • Just a note of information for those who've read too much badly written English and therefore can be excused for not knowing better: when comparing points that differ, you compare one with the other: "Lead is very heavy compared with hydrogen"; when comparing things in order to pick out similarities, you compare one to the other: "Shall I compare thee to a summer's day". Having said all that, the verb "compare" is probably not the best choice in this case. But I can understand that you were in a hurry, as I am now, and didn't have time to parse your headline :)
    BTW thanks for the interesting information.
  • Intel has made a pretty significant investment and commitment to Rambus. They didn't get to be the giant of a company that they are by being stupid, but lately their decisions haven't seemed to be all that well thought out. I wouldn't be too surprised to see them stick with Rambus just out of an unwillingness to admit they're wrong. Intel could be up for some really rough roads in the future. AMD is looking like a really good investment right now.
  • Pin count - As more integration happens, the reduced pincount of DRDRAM may become a bigger factor. It's a simple matter of 168 vs 55, though the 55 need to be at a higher frequency. It's simply easier to integrate a DRDRAM interface and have enough pins to do all of those other things, like an AGP bus.

    The bandwidth/pin count ratio is very important for servers that need huge bandwidth. You can almost put 3 Rambus interfaces on a chip for the pin cost of one DDR SDRAM interface. That's something that will make many chip designers take a good hard second look at Rambus.

    Also, something Tom Pabst never really covered was how much of the Camino performance pitfall was due to the chipset and how much was due to the memory protocol; it's perfectly possible to implement a MUCH more efficient Rambus memory controller than Camino does.

    Having said all this, I think Rambus screwed up in many ways; they are notoriously arrogant in the industry, created a product that doesn't yield well, and tried to decommoditize a market, which, although possible, is definitely going against the flow in the industry. But that doesn't mean their technology is uninteresting or that Intel was smoking something when they decided to back Rambus.

    DDR SDRAM would have taken a lot longer to come about if JEDEC thought they were sitting pretty with the next generation of mass-produced DRAM.

  • Yet another great sign that the Wintel monopoly is broken. How long can it be before Microsoft is no longer regarded as a monopoly in their own right? Weeks (Kernel 2.4), Months (KDE 2) or Years (US DoJ does something or wine runs all windows apps), or simply until we make the world believe?
  • I think Rambus has already seen the writing on the wall. Too little, too late. So what's the company left to do? Well, I think it's obvious, considering all the lawsuits they have filed.

    I'm not an EE nor a patent lawyer, so I cannot say if they have a legit claim. If they did win, however, they would have some deep pockets to dip into.

    I wouldn't count them out. Not yet at least.

    -- It ain't over till it's over.
  • Here's the formula to tell if a technology is going to go bust:

    Proprietary standard + 5 times as expensive = going to go bust

    Yes, Rambus is better than what we have now, and it might even be better than DDR (debatable), but this is irrelevant. A Ferrari is technically superior to a Camaro in hundreds of ways, yet you rarely see one. History is full of failed superior proprietary standards. This one will be no different.

  • I strongly doubt Sony would prepare a contingency product of any kind. The technical specifications, the development tools and the code are all based around the implementation created, and they are probably in no hurry to introduce any potential incompatibilities or other such problems. It is interesting that Sony is not the first console system to utilize Rambus RAM; the Nintendo 64 actually takes that cake. I'm not sure how Nintendo was getting it, but if I'm not mistaken, Toshiba has some vested interest not only in Rambus research but also in the Playstation 2 console.
  • The 64 for SDRAM vs. the 16 for RDRAM is not the number of pins but the bus width. RDRAM has a 2-byte (16-bit) bus while SDRAM has an 8-byte (64-bit) bus. So RDRAM needs to be clocked 4 times as fast as SDRAM just to provide the same throughput.

    More recent articles on Tom's:

    Dissecting Rambus [tomshardware.com]: the March 15 article that perhaps (?) triggered Rambus' recent stock dump.
    Rambus Revisited [tomshardware.com]: Second article, April 3rd.
  • Not a troll, just too lazy to log out and back in. . .

    I didn't think that IBM's MCA program was a disaster. From a financial point of view it was quite successful. All the bigger HW manufacturers were pumping out proprietary hardware back then; IBM was no exception. Not only did the PS/2 offer a different architecture, it also offered IBM and its partners a lucrative market in proprietary "must haves". Remember, these were the days of Bigco and the like going with single-provider solutions. IBM was a big player in this market with its early OS/2 offerings, Token Ring enterprise solutions and seamless integration with its SNA world of products. The MCA push was all a big part of their overall business plans.

    Granted, things have changed a lot now and being proprietary is no longer such a great idea, but in the business cycle it did make a substantial amount of money and improved IBM's market share in the PC arena when it really needed it. Zenith was their big competitor at the time, and they were getting crushed.

  • I imagine, if RDRAM had come out in 2003, it would have become the best thing since sliced bread. The same thing with OS/2; it was just ahead of its time. I feel kind of sad, because _WE_ are the people who are going to have to deal with SDRAM as it becomes obsolete, and then we'll have to beg Intel's forgiveness... :)

    Now if only I could optimize gcc for rambus memory.. :)

  • The numbers are accurate, but DDR memory isn't actually being manufactured yet

    Actually, DDR memory modules are being manufactured. [celestica.com]
  • by dpilot ( 134227 ) on Wednesday April 19, 2000 @03:45AM (#1124463) Homepage Journal
    Two forenotes:

    1 - I've been involved in the design of DRDRAM for several years now. I've also been in memory design for 18 years. I'm slightly more informed on this than the average geek-on-the-street.

    2 - I really don't like the principle behind DRDRAM. Proprietary things are supposed to eventually become commodities, not the other way around. Memory has long been THE commodity in a computer, and here they are trying to make it go the other way. But it's technically interesting, my contributions won't make or break the whole scheme, and the kids gotta eat.

    Cons:

    Fundamentally, for at least the near future, it simply takes more silicon to implement the Rambus interface. No matter how much learning you do, that area doesn't go away. Perhaps after the spec stabilizes fully, it may be possible to come up with better fully custom circuit implementations, but that's at least a little ways in the future. Plus in the performance race, it's possible that the spec may never stabilize sufficiently before a given generation is obsolete.

    It's very complex. My boss would have slapped me silly had I ever even thought of coming up with this. In years past I've been slapped silly for coming up with stuff a fraction of this complexity.

    Latency - Obvious, though there is a second side to this, under Pros.

    Wash:

    The frequencies are high, and the margins tight. I suspect EVERYONE is going to have to cope with the same realm, sooner or later. DRDRAM is simply a bit ahead of its time on this one, and is taking the pain first. I remember when it was tough getting the whole chip to run at 100MHz for SDRAM, or even 150 ns for page mode DRAM.

    Pros:

    Granularity - don't discount this one. Presently DIMMs are made with 8 chips, each organized with X8 outputs. That says that 64Mb technology makes 64MB DIMMs. It also says that 256Mb technology makes 256MB DIMMs, even though mainstream PCs today are only now making the transition from 64MB to 128MB. That's part of the reason we've dropped the 4X-per-generation habit, and are bringing out 128Mb SDRAM, because the market just isn't ready for 256Mb. 512Mb and 1Gb are on the drawing boards and early hardware now, so this problem is going to get worse. A single 1Gb chip holds 128MB. (Obviously)

    Pin count - As more integration happens, the reduced pincount of DRDRAM may become a bigger factor. It's a simple matter of 168 vs 55, though the 55 need to be at a higher frequency. It's simply easier to integrate a DRDRAM interface and have enough pins to do all of those other things, like an AGP bus.

    Banking (Latency) - While simple latency is poorer, under situations with multiple threads of access (multithreading and/or DMA streams) the higher bank count of Rambus becomes an advantage. If a bank is left open, or even if it has just been closed following a prior operation, you need to wait a 'restore time' before you can access that bank again. With DRDRAM there are usually more accessible banks, so odds are better that the next access will be to a bank that is currently closed. Even if the simple latency is longer, if you don't have to pay the 'restore time' penalty, the effective latency becomes shorter. This doesn't show up unless you have multiple memory access streams, though.
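    A toy Python sketch of that argument (assuming uniformly random, independent accesses, which real workloads aren't; it only illustrates the trend):

        # Chance that the next access lands in the bank you just used,
        # and therefore has to wait out the restore time.
        def conflict_probability(num_banks):
            return 1.0 / num_banks

        for banks in (4, 16, 32):
            print(banks, "banks ->", f"{conflict_probability(banks):.1%}", "chance of reusing the last bank")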

    No summary
  • To generate anything approximating a square wave at 400MHz would involve harmonic frequencies of many GHz. The Rambus clock will look something like a sine wave on the PCB tracks. Maybe it'll be a bit squarer once it gets onto the die, though.

    Jeff
  • You're right about the latency of RDRAMs being higher, but the raw throughput of 133MHz DDR SDRAM is faster than the fastest (800MHz) RDRAMs.

    133MHz DDR = 266MHz x 64bits = 2.128 Gbytes/sec
    800MHz RDRAM = 800MHz x 16 bits = 1.6 Gbytes/sec

    Rambus is crap whichever way you look at it.

    Jeff
  • by Halo1 ( 136547 ) on Wednesday April 19, 2000 @01:26AM (#1124466)
    I think you may find the following article quite interesting: "DDR SDRAM vs. RAMBUS DRDRAM" [macosrumors.com]. It's not written by the mosr staff (it was actually a reaction to a feature containing incorrect/skewed information), so all you mosr-haters can calm down already.

    The author explains why he thinks DDR SDRAM is better than DRDRAM and shows once again that MHz isn't everything.



    --
  • What everyone seems to be forgetting here is that Rambus is in lots of other markets besides PC memory, which it clearly isn't going to be good for. However, I'm sure that it will become dominant as an interconnect technology in routers, and it also has a bright future in embedded systems. Will all of this justify the current valuation? Probably not. But the company certainly has prospects to have continued profitability for several years.
  • The business market really is the sweetest of them all; businesses don't care about price as much as end users do, and they upgrade regularly.
  • Actually 12800 divided by 8 is 1600 million bytes. Which divided by 1048576 (1GBit) = 1.5258789 GigaBytes.

    Remember that 1 mbit = 1024 bit and 1 gbit = 1024 mbit = 1048576 bits.

    Grtz, Jeroen

  • Never underestimate the dark side of the Source light side yedi hacker you are? A nice .sgi you have. is strong in Linux Torvalds and you the source Grtz, Jeroen
  • Yes, Supermicro sells a quad Xeon server motherboard with quad interleaved memory channels, onboard SCSI-160, 10/100 Ethernet, and 64-bit PCI at 66MHz =) Good stuff. I want one. (I'd rather have an Athlon solution, but I just can't have that right now, can I?)


    Apartment6 [apartment6.org]
  • I think this analysis is a bit flawed - if you give Rambus the same 64 pins that you allow the DDR solution, you find Rambus parts winning in bandwidth by a large margin, plus you can have independent accesses going to the four different banks, which might be nice if you have the CPU, AGP, and PCI all contending for memory at once.

    I think rambus has done some very cool stuff. When they first introduced their technology (1991!) it was really gee-whiz compared to fast-page mode DRAM.

    Their problems are latency, die size penalty, royalty costs, the care that needs to go into designing a PC board for them, and one no one else has mentioned - test costs.

    The testers used to test rambus parts are hideously expensive, slow, and can't test as many devices at once which leads to major throughput and cost problems on a factory floor compared to DDR SDRAM which uses an incremental improvement to the testers already in use.

  • This would be very difficult given the current design of the PS2. The reason that RAMBUS was chosen was that it provided the required memory bandwidth (remember this thing only has to render at a fixed resolution that is known in advance) at the right space, thermal and pin count costs.

    If the PS2 had been manufactured using SDRAM, it would have required a 6-layer board instead of the 4-layer one that it actually uses, and the board would have had to be bigger to accommodate the traces. For Sony, the increased cost of the memory was irrelevant compared to the savings in other areas. I think that RAMBUS will not die, but it will find a niche in things like the PS2. It is doomed to forever be a low-volume product. It is the wrong technology for high performance architectures at the present moment.

    It will be interesting to see how the Alpha 21364 performs; putting the controller on the chip might sort out some of the latency issues, but something will still need to be done about the heat (perhaps a shrink).

    -dp
  • CPU speeds accelerated faster than anyone planned (a year ago, 600 MHz was state of the art!)

    A little while ago, I did a quick and dirty calculation using Moore's law vs. my first computer, an 8088 @ 4.77MHz. I didn't look at any computers that I purchased since 1984, only the original. I did account for the speed benefits from improvements in 286, 386, 486, PI, and PII systems, not just raw MHz.

    The results? Damn close to what I'm using now.

    So, if I can predict 15 years of CPU speeds by using Moore's law and an old 8088 as input data, why can't manufacturers?

    BTW: In about 5 years -- 2004 -- the same calculations show that I'll be using the equivalent of a 2.8 GHz PII system...and that's behind the curve. Top of the line systems will be about as fast as a PII running at 7.5 GHz.
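    A minimal Python sketch of the compounding arithmetic behind that kind of projection (assuming the commonly quoted 18-month doubling period; the specific 2004 figures above are the poster's own estimates):

        # Performance multiplier after a given number of years, if
        # performance doubles every 18 months.
        def growth_factor(years, doubling_period_years=1.5):
            return 2 ** (years / doubling_period_years)

        print(growth_factor(4))   # ~6.3x over four years
        print(growth_factor(15))  # ~1000x over fifteen years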

  • Just took a look at RMBS on the Nasdaq. The stock is valued at a P/E of 452 while selling at $170. That's crazy considering what we are seeing technology-wise. Imagine the drop when/if Intel finally pulls the plug. Ouch! Doomster
  • if only you were a moderator...
    -----BEGIN GEEK CODE BLOCK-----
    v.3.12
    GCS d-(--) s+: a-- C+++$>++++$$ UL++$>++++$$ P+>++++$ L++>++++$ E--- W++$>++
  • by dirtmerchant ( 162306 ) on Wednesday April 19, 2000 @12:53AM (#1124477) Homepage
    ... then wouldn't a more clever title have been rambust? sorry, it's early and i need sleep. i realize that was unforgivably stupid.
    -----BEGIN GEEK CODE BLOCK-----
    v.3.12
    GCS d-(--) s+: a-- C+++$>++++$$ UL++$>++++$$ P+>++++$ L++>++++$ E--- W++$>++
  • ...is in chipset overhead. AFAIK, it takes more control circuitry to run DRDRAM, as well as a faster clock. Therefore, it's going to run hotter in the chipset. A big step for DDR SDRAM might be to drop the voltage (if it didn't do it already) even further. The original Pentiums, at 60 and 66 MHz, ran too hot because of their 5V design. With the Pentiums at 75 MHz and beyond, Intel dropped the voltage to 3.3V, and the trend has continued to the present day, though it's reaching the theoretical minimum that must be there to switch the gates.

    So if a pure 1.8V system came out...CPU core and I/O lines, cache, chipset, RAM, everything...it would run a lot cooler. Until everyone trades that in for faster, of course. The only thing I can see holding back this design is the bridge chips required to run the PCI slots at 3.3V. Perhaps it's time for a 1.8V/133MHz PCI bus and 128-bit AGP II?

  • 1 mbit = 1024 bit and 1 gbit = 1024 mbit = 1048576 bits

    Huh?? Unless computer math was rewritten since last I checked:

    1 Gbit = 1024 Mbit = 1048576 Kbit = 1073741824 bits

    That, in turn, is:

    134217728 Bytes = 131072 KBytes = 128 MBytes = .125 GBytes (1/8 of a GByte)
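    A quick Python illustration of where the two calculations diverge, decimal vs. binary units (using the 12800 Mbit/s figure from earlier in the thread):

        # 12800 decimal megabits per second, converted to bytes per second,
        # then expressed in decimal gigabytes and in binary (2**30-byte) gigabytes.
        bits_per_second = 12800 * 10**6
        bytes_per_second = bits_per_second / 8

        print(bytes_per_second / 10**9)  # 1.6  decimal GB/s
        print(bytes_per_second / 2**30)  # ~1.49 binary GB/s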

  • I didn't think that IBM's MCA program was a disaster.

    Absolutely right. The disaster was that IBM wanted everybody to pay retroactive royalties on the ISA stuff they had used, which prompted the creation of EISA: a faster, 32-bit, backwards-compatible ISA bus. IBM is pretty lucky it never caught on. As it is, everyone just continued using ISA, since it was the lowest common denominator, and MCA choked and died. With MCA's death came the demise of Plug&Play. (No, really: you just put the card in, swapped the reference and options (drivers) disks a couple of times, and it ran!)

  • Oops, that's right. I was speaking from a home PC point of view.

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...