Rambus Takes Another Shot At High-End Memory 213

An anonymous reader writes "Tom's Hardware is running an article about Extreme Data Rate memory (XDR DRAM for short), which was developed by Rambus and has now entered mass production at Samsung's fabs. Right now, Rambus says the memory is only for high-bandwidth multimedia applications such as Sony's Cell processor, but the company ultimately hopes to push XDR into PCs and graphics cards by 2006. Time will tell if Rambus has learned from the mistakes it made with RDRAM a few years ago."
This discussion has been archived. No new comments can be posted.

  • It will be awhile (Score:5, Interesting)

    by I_am_Rambi ( 536614 ) on Tuesday January 25, 2005 @11:43PM (#11476752) Homepage
    before AMD even thinks about accepting it. Since AMD now puts the memory controller on the chip, AMD will have to see proof that it is faster. AMD won't go for DDR2 until it gets faster; their reasoning is that DDR2 adds cost and decreases performance. Without help from AMD, Rambus might be heading down the same track.
    • so, are you talking about AMD? I wasn't clear...
    • Re:It will be awhile (Score:5, Interesting)

      by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Tuesday January 25, 2005 @11:57PM (#11476870) Homepage
      All the more reason to move to FBDIMMs. AMD would put one memory controller on their chips, and it would work with SD, DDR, DDR2, Rambus, XDR, or anything else someone wants to put on a module. Makes things easy. Because the physical interface is constant and buffered, you don't get the problem of needing a different socket for every kind of RAM out there.

      Unfortunately, no one seems to be pushing for this despite the headaches it would remove. All you'd have to do is make your memory controller able to receive data faster (like going from DDR333 to DDR400). Plus, with the memory not directly connected, memory makers would not only compete evenly (since the user wouldn't need to know the difference between DDR2 and XDR except speed and price), but they could add other things, like an extra cache level in front of the memory, just by replacing RAM. And it would mean that the computer you bought today would take the memory that was available 3 years from now. Right now SDRAM costs a FORTUNE. But if you had a computer that takes FBDIMMs, instead of paying $50 a stick for 256MB sticks, you could buy at the price of DDR today (say 512MB for $25 or whatever it is today).

      Just think, you wouldn't need to buy new types of RAM for your PC every 2 years.
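
      As a loose software analogy (nothing to do with actual FB-DIMM signalling; the class names and the Python below are purely illustrative), the "one fixed controller-facing interface, technology-specific buffer behind it" idea looks like this:

        # Illustrative sketch only: a fixed interface the memory controller
        # talks to, with technology-specific buffers hidden behind it.
        # All names here are invented for the example.

        class BufferedDimm:
            """What the memory controller sees, regardless of DRAM type."""
            def read(self, addr: int) -> int:
                raise NotImplementedError

        class Ddr2Buffer(BufferedDimm):
            def __init__(self, cells: dict):
                self.cells = cells              # stands in for the DDR2 array
            def read(self, addr: int) -> int:
                return self.cells.get(addr, 0)  # DDR2-specific timing hidden here

        class XdrBuffer(BufferedDimm):
            def __init__(self, cells: dict):
                self.cells = cells              # stands in for the XDR array
            def read(self, addr: int) -> int:
                return self.cells.get(addr, 0)  # XDR-specific timing hidden here

        def controller_read(dimm: BufferedDimm, addr: int) -> int:
            # The controller never knows (or cares) what is behind the buffer.
            return dimm.read(addr)
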

      • Re:It will be awhile (Score:5, Interesting)

        by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Wednesday January 26, 2005 @12:01AM (#11476904) Homepage
        One other thing I forgot. With FBDIMMs it would be easy to replace your DRAM with SRAM (if prices dropped enough), because the refresh circuitry is on the DIMM. That means one less thing the memory controller has to do, which means less complexity and less silicon (not that the refresh logic takes up a huge amount, but every little bit helps). When magnetic RAM comes along, you wouldn't need yet another memory controller.

        And (since I think it's serial, instead of parallel like current RAM) it would SERIOUSLY decrease the pincounts of the Opteron and northbridges. Think if you could have quad channel memory in your desktop as an option. Right now the CPU would need THOUSANDS of pins to do that. But you might be able to do it with the current 939 pins on an Opteron if you used FBDIMMs.

        Ah, dreams.
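
        Rough pin arithmetic behind that (the per-channel figures below are ballpark assumptions, not datasheet numbers):

          # Back-of-envelope signal-pin arithmetic; both per-channel figures are assumptions.
          PARALLEL_SIGNALS_PER_CHANNEL = 130   # ~64 data + strobes, address, command, clocks
          SERIAL_SIGNALS_PER_CHANNEL = 50      # ~24 differential lane pairs plus clocking

          channels = 4
          print("parallel:", channels * PARALLEL_SIGNALS_PER_CHANNEL, "signal pins")  # ~520
          print("serial:  ", channels * SERIAL_SIGNALS_PER_CHANNEL, "signal pins")    # ~200
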

        • Re:It will be awhile (Score:2, Interesting)

          by Anonymous Coward
          Uh, you realize that with FBDIMMs, there is an unavoidable latency for serializing/deserializing between the memory controller and the DIMM? And that as cache lines get longer (Pentium M has 64-byte lines, PPC970 has 128-byte lines) the serialization will increase the pressure per FBDIMM lane. Then, in the effort to reduce the lane blocking for transferring large cache line reloads, they'll want to increase the lane frequency even more, which means you're going to have to end up getting rid of your FBDIMM
          • Re:It will be awhile (Score:5, Interesting)

            by joib ( 70841 ) on Wednesday January 26, 2005 @02:33AM (#11477682)

            IMHO, FBDIMM is just Intel's hedge against RAMBUS going bust. The point of RAMBUS was to reduce pincount per chip by reducing the width of the channel between the memory controller and the chip, and to decouple the notion of a memory controller controlling specific banks of memory. FBDIMMs are to solve the same problem, except RAMBUS is shipping already.


            Reducing pincount is one important reason for FB-DIMM, but the real reason for it is to get out of the capacity/speed tradeoff game. See, many systems need lots of memory. However, with current DDR-400 or DDR2-667 you can only put two devices per channel. If you want more RAM than what fits in two devices, you have to reduce the speed. FB-DIMM gets around this problem by using point-to-point links between the devices.

            Yes, this increases latency a little bit, but there really isn't any other practical way to increase speed without reducing capacity. However, FB-DIMM compensates for the increased latency by allowing many outstanding transactions on each channel; because of this, latency under high load is actually supposed to be lower than for traditional RAM tech with the same specs.
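
            A toy model of that last point (the latencies and queue depth below are invented, just to show the shape of the argument):

              import math

              def burst_time_ns(n_requests, latency_ns, max_outstanding):
                  # All requests ready at t=0; up to max_outstanding are serviced
                  # in parallel, each taking latency_ns.  Purely illustrative.
                  return math.ceil(n_requests / max_outstanding) * latency_ns

              # Invented figures: a classic DIMM at 60 ns, one access at a time,
              # vs. an FB-DIMM-style channel at 75 ns with 4 outstanding transactions.
              print(burst_time_ns(16, 60, 1))   # 960 ns for the burst
              print(burst_time_ns(16, 75, 4))   # 300 ns for the burst
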
            • by Moraelin ( 679338 ) on Wednesday January 26, 2005 @07:03AM (#11478590) Journal
              As someone else already said, "people seem to forget what the R in RAM stands for".

              What kills RAM nowadays in common scenarios is latency. Whenever there's a cache miss, or a mis-prediction makes you flush the CPU's pipeline and start again, what causes the CPU to stall is latency. You get to wait until that request is processed by the RAM controller, is actually delivered by the RAM, and makes its way back through the RAM controller, and only then can you finally resume computing. That's latency, in a nutshell.

              And it's already _the_ problem, and it's gotten steadily worse. A modern CPU has to wait as many cycles for a word from RAM as an ancient 8086 would have if you ran it with a HDD instead of RAM. It's _that_ bad.

              That's why everyone is putting a ton of cache and/or inventing work-arounds like HyperThreading. And even those only work so far.

              And again, it's only getting worse. DDR did increase bandwidth, but did bugger-all for latency. Your average computer may well transfer two words per clock cycle with DDR, but it still has a 3-cycle CAS latency like SDR had. And DDR2 has made it even worse.

              So FBDIMM's great big advantage is that it lets you have _more_ latency? Well, gee. That's as much of a solution as a kick in the head is a cure for a headache.

              As I've said, "no, thanks." If Intel wants to go into fantasy land and add yet another abstraction layer just for the sake of extra latency, I'm starting to think Intel has plain old lost its marbles.
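
              For a rough sense of scale (both figures below are assumed, typical-for-the-era values, not measurements):

                # Rough arithmetic: core cycles lost per cache miss (assumed figures).
                cpu_clock_ghz = 3.0      # 3 GHz core
                miss_latency_ns = 70.0   # controller + DRAM + return trip, loaded system

                cycles_stalled = miss_latency_ns * cpu_clock_ghz   # ns * cycles-per-ns
                print(round(cycles_stalled), "core cycles per miss")  # ~210
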
              • >A modern CPU has to wait as many cycles for a word
                >from RAM as an ancient 8086 would have if you ran it
                >with a HDD instead of RAM.

                "ancient"? *gasp* [*insert chest-clutching sequence here*]

                I knew those new-fangled sixteen bit machines with their wait states were a bad idea. Back to the 8 bit machines! No wait states! In fact, we can pair the 6502 with the 650 and have *two* processors running all out on the same memory.

                64k should be enough for anyone. Especially if you have a second double
              • > As someone else already said, "people seem to forget what the R in RAM stands for".
                > What kills RAM nowadays in common scenarios is latency.

                No problem: to reduce latency, RAM makers should just increase the speed of light.
                Easy!
        • 939 pins???!!!???

          Wow. I had no idea. I'm kind of an old guy, but I'm used to CPUs having address and data pins, and some power and ground pins, and a very few bus control signals, and that's basically it. Well, clearly that's not the case here, even if they used a 512-bit-wide data bus, which I kind of doubt that they did (though it would be a great way to increase bandwidth).

          In fact, about 10 years ago I decided that the hot future CPUs would have bandwidth issues (right on the money), and that the w

      • But doesn't standardization decrease profits? Over the long term, that wouldn't allow for companies to force customers into buying their upgrades. Or is there some sort of deep economic thinking behind what MBCook has said that I don't understand? After all, I'm not an economist.
        • I'm an economist; maybe I can help.

          Commodification of a product reduces profit to "normal" returns. Economic profit and accounting profit aren't the same; "normal" accounting profits are "zero" economic profit.

          Being the first out with something yields a short-term economic profit. As it becomes a commodity, profits drop to normal rates.

          Look at memory prices--the first one out with a new size gets to charge a high price, which drops as others leap in.

          hawk
        • Standardization will make it easier for competitors to offer compatible product. Which could reduce profit.

          But standardization also increases the size of the market, which can increase profit. It also reduces risks.

          Are you holding off buying an HD player to see whether Blu-Ray or HD-DVD wins? Standardization would reduce the risk, to the consumer and to the producer, of investing in the unpopular choice.

          So, on the whole, standardization is frequently viewed as a good thing.
      • Re:It will be awhile (Score:5, Interesting)

        by Moraelin ( 679338 ) on Wednesday January 26, 2005 @07:19AM (#11478624) Journal
        One reason the AMD 64 works so well is precisely because they _reduced_ latency. That's basically the great advantage that the IMC (Integrated Memory Controller) offers.

        Funny abstraction layers and everything being agnostic of everything else are a nice CS theoretician's fantasy. In a CS-theory utopia everything should be abstracted, or better yet virtualized. Any actual hardware or other implementation details should be buried 6 ft deep, under layer after layer of abstraction, or better yet emulation.

        The problem is that reality doesn't work that way. Every such abstraction layer, such as buffering and translating some generic RAM interface, costs time. Every single detail you stay agnostic about runs the risk of doing something extremely stupid and slow. (E.g., from another domain: I've seen entirely too many program implementations that, in the quest to abstract away and ignore the database, end up with a flurry of connections just to save one stupid record.) Performance problems, here we come.

        The AMD 64 runs fast precisely because it has one _less_ level of abstraction and virtualization. Precisely because their CPU does _not_ play agnostic and let the north-bridge handle the actual RAM details. No, they know all about RAM, and they use it better that way.

        So adding an abstraction layer right back (even if it's one that moves the north-bridge onto the RAM stick) would solve... what? Shave some 10% off the performance? No, thanks.

        You also mention SRAM. Well, the only advantage of SRAM is that it's faster than DRAM. Adding an extra couple of cycles of latency to it would be just a bloody stupid way to get DRAM performance out of expensive SRAM. Over-priced, under-performing solutions, here we come.

        Wouldn't it be easier to just stick to DRAM _without_ extra abstraction layers to start with? You know, instead of then having to pay a mint for SRAM just to get back to where you started?

        Not meant as a flame. Just a quick reflection on how the real world is that-a-way, and utopias with a dozen abstraction layers are in the exact opposite direction.
        • Latency (Score:4, Interesting)

          by dpilot ( 134227 ) on Wednesday January 26, 2005 @09:27AM (#11479090) Homepage Journal
          There's another aspect of latency here that's being ignored. Here and elsewhere in this thread folks are talking about circuitry issues, like the memory controller, the DRAM itself, DDR, etc. Those are all valid, but there's one more thing being neglected - wires, drivers, and receivers. Simply by putting the DRAM somewhere away from the CPU/northbridge, up in a DIMM socket, you take a big hit in latency. Even getting zero-access-time DRAM wouldn't speed things up that much, because of the purely physical delays.

          Oh, I agree with your abstraction comment.

          Putting faster parts onto an FBDIMM just won't do that much, because the physical distance is still the same. I did an extensive study of this back prior to 1990 and found these results, and the consolidation of L2 and even the northbridge onto the CPU shows that it's still valid today. Main memory is going to be slow. Main memory is always going to be slow, because that's a side effect of being "big". Main memory is always going to be "big" as long as the appetite for bits exceeds what can fit onto one chip. Learn to live with it.

          Incidentally, DRAM latency grows beyond its minimum the moment you multiplex row and column addresses. There is a Trcd(max) spec where access is purely row-limited, but in practice that's just about impossible - access is almost always limited by column access. Trade speed for pins.

          Beyond that, even SDR traded off latency for bandwidth, compared to EDO. (I've designed both.) I don't think DDR is that bad a deal compared with SDR, though I haven't actually done a DDR design myself. At the very least, DDR offers the half-cycle latency options, and the DDR designs have been architected to scale far higher in frequency than SDR ever did.
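
          To put rough numbers on where a random access goes (the timings below are assumed DDR400-class values, not from any particular datasheet):

            # Rough random-access breakdown for a DDR400-class part (assumed timings).
            clock_ns = 5.0              # 200 MHz command clock
            tRCD_ns = 3 * clock_ns      # row activate to column command
            CL_ns = 3 * clock_ns        # column command to first data
            burst_ns = 4 * clock_ns     # 8 data beats at two beats per clock

            print("on-chip access:", tRCD_ns + CL_ns + burst_ns, "ns")   # 50 ns
            # Board traces, drivers/receivers and the controller itself can easily
            # double that by the time the data reaches the CPU.
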
          • DRAM slowness doesn't bother me as much as HDD slowness.

            Decades later and we still have access times in the order of 10ms... That really really sucks.

            DRAM isn't that bad because the SRAM caches work pretty well in many cases - e.g. you have a loop zooming along at near CPU speed (in SRAM), reading and writing processed data out at DRAM speed. That's fine because the loop often isn't fast enough to move data in and out at SRAM speed.

            You are more likely to hit the HDD speed limits first (or run out of DRAM
            • DRAM slowness doesn't bother me as much as HDD slowness.

              Decades later and we still have access times in the order of 10ms... That really really sucks.


              First, if DRAM were far faster, computers would be far faster. Memory performance affects CPU performance far more than disk performance does. The slowness of disks can be made up for using DRAM cache, just like SRAM cache makes up for the slowness of DRAM.

              Second, modern hard drives are down to about 8.5ms, which is a lot better than the 28ms I remember
  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Tuesday January 25, 2005 @11:45PM (#11476765) Journal
    SRAM is much faster, closer to the core of the CPU, and plentiful (if the chip manufacturers wanted it to be).

    Who needs a gig of RAM when you can have a gig of cache?

    If they need swap space, they can always write back out directly to a disk-based swap file.
    • by Anonymous Coward
      Cuz it's 6-9x more transistors per bit, so your gig o' SRAM costs at least 6-9x more than your gig o' SDRAM.
    • ...SRAM is much more expensive to produce? It also takes more power and generates more heat.

      That, and the marginal benefit of cache goes DOWN as the size of the cache goes up; past a MB or two the added benefit is small. Also, as the # of address lines goes up, the access gets slower. And finally, a bigger bottleneck is that "external memory" is external.

      So unless you want to pay for a cpu with a GB of onboard "memory" in the form of SRAM.... the benefits won't be that high.

      Tom
      • That and the benefits of cache go DOWN as the size of the cache goes up. Past a MB or two the benefits would be lowered.

        To clarify: the incremental benefit from cache decreases as cache size increases. Performance with a large cache is not worse than performance with a small cache, but the law of diminishing returns starts to kick in over a few MB. A computer with 4GB of SRAM running at the same speed as current cache memory (assuming no memory controller bottleneck) and no cache would be faster than on

        • "on the former every memory access would take around 2 cycles"

          See, that's wrong. On the machine with a small cache and large DRAM, every cache hit is 2 cycles. Even the 512KB L2s that AMD/Intel use are upwards of 13 cycles to access [or more]. It takes time for the address bits to settle, and the larger the cache, the longer it takes for the signal to settle (more distance).

          With 4GB of cache, EVERY cache hit would have a larger access time, say [for the sake of conversation] 30 cycles.

          So let's compare acces
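
          The comparison this sub-thread is circling around is basically average memory access time; with the (assumed) numbers used above it comes out like this:

            # Average memory access time (AMAT), using assumed figures from this thread.
            def amat(hit_cycles, miss_rate, miss_penalty_cycles):
                return hit_cycles + miss_rate * miss_penalty_cycles

            small_cache_plus_dram = amat(hit_cycles=13, miss_rate=0.03, miss_penalty_cycles=200)
            giant_sram_no_cache = amat(hit_cycles=30, miss_rate=0.0, miss_penalty_cycles=0)

            print(small_cache_plus_dram)  # 19 cycles on average
            print(giant_sram_no_cache)    # 30 cycles, every single access
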
    • by be-fan ( 61476 ) on Tuesday January 25, 2005 @11:54PM (#11476846)
      Because SRAM takes up 6 transistors per bit, while DRAM takes up 1 transistor per bit. The biggest mainstream CPUs run about ~150M transistors, and that's only enough (if everything were cache) for about 3MB.
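
      The arithmetic behind that 3MB figure, for anyone checking:

        # ~150M transistors, all spent on 6T SRAM cells
        transistors = 150_000_000
        transistors_per_bit = 6
        megabytes = transistors / transistors_per_bit / 8 / 2**20
        print(round(megabytes, 1), "MB")   # ~3.0 MB
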
      • by ottffssent ( 18387 ) on Wednesday January 26, 2005 @01:10AM (#11477292)
        On the other hand, I can buy quality 1GB DIMMs for $250. Divide by 4 (rough guess. SRAM at 6T should be 6x the price, but DIMMs have caps too. 4x the manufacturing costs seems reasonable, assuming the infrastructure were in place), and you've got 256M SRAM modules for $250. Obviously that's a bit on the spendy side for large capacity RAM, but clearly there's a market for faster DIMMs. Unfortunately, DRAM access time, at about 5ns, isn't the major component of memory latency, which even on the best systems runs 10x that. The market won't bear 4x the price for a 10% increase in speed.

        This means that for SRAM to be useful, it has to be paired with a lower-latency interconnect. Some apps would benefit tremendously from 128M of what would amount to an L3 cache, even to the point that the $400 or so extra it would cost might be worth it. It's clear however that the market doesn't consider that a worthwhile expenditure.

        Although newer system architectures such as AMD's Opteron platform are moving to more closely-attached RAM, the engineering and manufacturing challenges involved in attaching memory as tightly as it is to a GPU have so far proven more expensive than the payoffs warrant. With improvements in manufacturing and interconnect technology, I'm sure we'll see ever-tighter CPU-memory integration. I doubt however the technology will move to SRAM or an SRAM-equivalent simply because the performance/heat trade-off isn't favorable. Saving a few ns of latency on the memory chips is peanuts compared to the 10s of ns of latency in the connection to the CPU, which is probably a much more tractable problem.
        • by TheLink ( 130905 )
          You already have tight CPU-SRAM memory integration. It's called CPU cache.

          As for 128MB level 3 cache, some servers do have that.

          If it's worth it, they often do put the stuff in.

          It's just like battery backed RAM HDDs. If DRAM was cheap enough more of us would be using that instead of klunky spinning discs. As it is, it's cheaper for most people to buy GBs of RAM and use that as disk cache AND space to _execute_ programs, rather than buy the niche RAM-HDD product and not be able to execute programs directl
    • Because SRAM designed to run at full chip speed is Fucking Expensive(tm).

      Why do you think there is only a MB or two at the most?
    • by Stevyn ( 691306 ) on Tuesday January 25, 2005 @11:59PM (#11476887)
      Interesting?

      This is like saying why paint your walls with off-white stuff when you can coat them in a layer of gold that resists tarnish?

      Well, for one thing, it's greatly more expensive.
    • 1-bit SRAM cell = 6 transistors
      1-bit DRAM cell = 1 transistor and a capacitance (not necessarily a physical capacitor, just something that acts like one)

  • Never mind (Score:5, Interesting)

    by LittleLebowskiUrbanA ( 619114 ) on Tuesday January 25, 2005 @11:45PM (#11476767) Homepage Journal
    if they plan on charging exorbitant prices for their memory again. I inherited a network full of fairly fast (2GHz) Dell boxes using RAMBUS. Sure is fun spending about $300 for a 512MB upgrade. Of course you can only install this crap in pairs, so there go your slots... Junk. Rather buy a cheap new box than a memory upgrade using this overpriced crap.
  • Good marketing sense (Score:5, Interesting)

    by gbulmash ( 688770 ) * <semi_famous@yah o o . c om> on Tuesday January 25, 2005 @11:46PM (#11476778) Homepage Journal
    Smart plan not to try to make it main RAM. By going after multimedia applications like HDTV, video games, etc. they're targeting a market historically willing to pay a premium to get the best performance. I'll be really interested to see the graphics cards based on it and how they compare with the alternatives.
    • By going after multimedia applications like HDTV, video games, etc. they're targeting a market historically willing to pay a premium to get the best performance.
      Is this really true? I think you're smoking crack... HDTV and game consoles are consumer electronics, which is a market almost entirely driven by price. I don't know about HDTV, but I can't think of any way the popular consoles could be construed as including "premium" parts...
      • HDTV and game consoles are consumer electronics, which is a market almost entirely driven by price.

        Maybe the consoles, but those are usually sold at a loss to get people to buy games. When it comes to HDTV... I don't know about you, but I don't see someone shelling out $7,000 as being price sensitive when a larger screen DLP projection TV goes for thousands less.

        And one of the applications they were talking about was high-end video cards. A high end consumer video card costs more than a 250gb SATA ha

        • Maybe the consoles, but those are usually sold at a loss to get people to buy games. When it comes to HDTV... I don't know about you, but I don't see someone shelling out $7,000 as being price sensitive when a larger screen DLP projection TV goes for thousands less.

          Consoles are driven by available games and have a perceived maximum price (probably around $300). They are not, for the most part, sold at a loss. That would be illegal for Sony to do. Also, I saw a nice HDTV LCD for $4k over at Best Buy last

    • Also, many of these applications are well suited to streaming data in and out of memory. RDRAM was / is known for high sustained data throughput but less than stellar random access. That makes it well suited for video memory but less than optimal for main system memory on processors unless the processor is designed to burst blocks of memory in and out of cache.
  • also at extremetech (Score:3, Informative)

    by Anonymous Coward on Tuesday January 25, 2005 @11:47PM (#11476789)
    I don't visit Tom's as a matter of principle - it's my feeling that Tom's reviews favor his biggest advertisers, not the best technology. ExtremeTech covers the same topic here: http://www.extremetech.com/article2/0,1558,1188770,00.asp
  • by tibike77 ( 611880 ) <.moc.oohay. .ta. .zemagekibit.> on Tuesday January 25, 2005 @11:47PM (#11476790) Journal
    Well, looks like they haven't learned much from their old mistakes, but they are trying to avoid the consequences... smart move targeting heavy-bandwidth apps for now.

    In the long run, if they can't significantly drop manufacturing prices to (let's say) 150% or even 200% of "regular" (by that date) RAM, the speed boost a computer with "XDR DRAM" gets compared to (again, let's say) "PC800 RDRAM" won't be significant... and I'll bet (regular) people would rather choose 8 GB of "PC800 RDRAM" over 2 GB of "XDR DRAM" any time of the day.

    Bottom line: they're either stuck with "specialty hardware" (like graphics cards or high-end servers), or they have to drop (manufacturing) prices rapidly if they want to keep selling.
  • by eobanb ( 823187 ) on Tuesday January 25, 2005 @11:47PM (#11476791) Homepage
    Apple to rebrand it as "RAM Extreme"
  • latency? (Score:5, Insightful)

    by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Tuesday January 25, 2005 @11:56PM (#11476866) Homepage
    8GB/sec is good but not if the latency is higher than DDR.

    People seem to forget that the "Random" part of RAM is kinda crucial.

    Tom
    • Re:latency? (Score:5, Informative)

      by be-fan ( 61476 ) on Wednesday January 26, 2005 @12:15AM (#11476984)
      Not necessarily. It depends on the application. In "streaming" applications (hint: 3D rendering like on a graphics card!) the latency doesn't matter nearly as much as bandwidth.
      • Yeah, using this for graphics cards seems like an obvious and good idea. It makes me wonder why they didn't do this with RDRAM. It's so much easier to get a place on a graphics card: no issues about adding memory sticks, no reason to establish an industry standard before soldering on some chips... Yes, it seems like their new ideas have a much better chance of working. (But we still hate them...)

        What I'm impressed with is that they actually kept some engineers through their whole lawsuit period, and appar

        • Yeah, using this for graphics cards seems like an obvious and good idea. It makes me wonder why they didn't do this with RDRAM.

          They did... it's called the PlayStation2 [playstation.com]. If you look closely, you'll see the main memory is Rambus RAM. Which makes sense, because the PS2 is really more of a graphics engine than a general purpose computer.

      • using it for graphics cards would be terrible if it has higher latency; graphics processing requires constant "random" access, because rendering a life-like scene requires knowing the state of everything in line of sight, as well as ambient and reflected lighting.
        • using it for graphics cards would be terrible if it has higher latency; graphics processing requires constant "random" access, because rendering a life-like scene requires knowing the state of everything in line of sight, as well as ambient and reflected lighting.

          Just like the new GeForce 6200, which utilizes PCI Express and streams its textures from main memory ;) It is quite possible to increase latency tolerance, and graphics cards happen to be one application that is RELATIVELY latency insensitive in

        • Rough visibility culling (based on line of sight) is done long before the scene hits the graphics card. By the time the scene is in a graphics card, the shaders are going fairly linearly through a series of vertex buffers, making highly localized accesses to a few textures in the process. There are very few ambient lights in most games (8 is a common maximum), so their positions are usually stored as arguments to a pixel shader or as part of the OpenGL state. Either way, they are not randomly accessed from
      • "In "streaming" applications (hint: 3D rendering like on a graphics card!) the latency doesn't matter nearly as much as bandwidth."

        That used to be true: graphics cards had fixed functionality and long pipelines, so if you knew that you'd need pixel (51, 96) of a texture two hundred steps down the pipeline you'd just ask the cache to fetch it in plenty of time.

        Today, though, as they become more and more programmable, they're starting to see latency problems similar to CPUs. It's hard to predict what data a
        • It's fairly easy to predict what texture a shader program will need. Shaders, even fairly complex ones, only draw from a few source textures, and even then, in a highly localized manner. Remember, nearly all textures accessed by a shader still contain data in the spatial domain. A small cache on the GPU should still do a good job of hiding memory latency for the shader.
    • People seem to forget that the "Random" part of RAM is kinda crucial.

      Well by gosh, you're right! I totally forgot about that whole "Random" thing.

      Unfortunately when I went to jog my recollection a bit and write a program that writes to "Random" places in memory, I got all manner of interesting screens. Did you know Windows has a "Fuschia Screen of Death?" I didn't.

  • by onyxruby ( 118189 ) <onyxruby&comcast,net> on Tuesday January 25, 2005 @11:57PM (#11476873)
    Rambus seems to forget their attempt to shanghai the entire memory business through fraud a few years ago. Perhaps they should be reminded that the IT community has not. They should sell their IP and dissolve themselves to avoid losing their stockholders any more money.

    I have adamantly refused to purchase any system that would use their memory for years, and more to the point, I have made that decision for others who depend on me making that decision. That's a lot of computers over the years we're talking about. I am also far from alone.

    • "They should sell their IP and disolve themselves to avoid losing their stockholders any more money."

      All they have is IP. They don't make anything.

      If they were to sell their IP, chances are it would end up in some larger mega-IP house, which would ram the stick up even further.
  • I was just as mad as everyone else at Rambus' outlaw marketing tactics. But then I discovered, much to my dismay, that even the fastest currently available DDR RAM results in a ~20% speed penalty versus two-year-old RDRAM on the rendering application I use most. I would LOVE to be able to buy more RDRAM, even at a premium price.
    • by forkazoo ( 138186 ) <<wrosecrans> <at> <gmail.com>> on Wednesday January 26, 2005 @12:46AM (#11477165) Homepage
      That would be an unusual special case. First off, most (non-realtime) 3D rendering isn't terribly bandwidth or latency sensitive. Assuming the CPU is fast enough that it isn't the main bottleneck, such apps will tend to be more sensitive to latency than to bandwidth. When tracing a ray, for example, one may need to access data from all over memory to do hit-testing, but not need very much information in total. So, the relatively poor latency characteristics of RDRAM don't really suggest any particular fantasticness for 3D rendering. And, considering that current single channel DDR400 has as much bandwidth as dual channel RDRAM did... Well, I'm just surprised that your app would see such a benefit. I'd suspect that there were other differences that caused such a difference in your benchmarks. Do you have any more specific information, such as what app you use, what sort of scene it was, and what the test systems were?

      If you were dealing with slightly different steppings of the same CPU (I assume a P4?), it would be possible that you had two CPUs of the same clock speed, but the newer stepping was less efficient per clock. The P4s have been tweaked over time to be less and less efficient per clock, in order to facilitate higher clock speeds. RDRAM was popular with the very first generation of P4s, so it'd be logical that the benchmark you saw may have been on a newer core. That shouldn't explain a 20% speed difference, but it's an example of a small thing that may have contributed to making the memory system appear to be the determining factor in performance.
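
      The bandwidth comparison above works out as follows (peak theoretical figures):

        # Peak-bandwidth arithmetic for the comparison above.
        ddr400_single = 64 / 8 * 400e6           # one 64-bit channel at 400 MT/s
        rdram_pc800_dual = 2 * (16 / 8 * 800e6)  # two 16-bit channels at 800 MT/s
        print(ddr400_single / 1e9, "GB/s")       # 3.2
        print(rdram_pc800_dual / 1e9, "GB/s")    # 3.2
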
      • by captaineo ( 87164 ) on Wednesday January 26, 2005 @01:39AM (#11477454)
        The test case was intensive ray tracing with Pixar's RenderMan on two systems:

        3.06 GHz Pentium 4, 512KB cache, 533MHz FSB, RDRAM
        3.00 GHz Pentium 4, 1MB cache, 800MHz FSB, DDR400 RAM

        The DDR system is only 86% as fast as the RDRAM system (the RDRAM system is 16% faster). This is despite the DDR system having been purchased almost two years later, and having more cache!

        The DDR system does pull ahead for compositing tasks (by quite a bit - in some cases it's twice as fast). I assume this is due to the larger cache.

        But ray tracing takes about 90% of my total render times, so it's far more important to optimize. I am disappointed that I can't buy hardware today with the same RAM performance as I got two years ago.
        • by Anonymous Coward
          3.06 GHz Pentium 4, 512KB cache, 533MHz FSB, RDRAM
          3.00 GHz Pentium 4, 1MB cache, 800MHz FSB, DDR400 RAM


          You're probably comparing a Prescott to a Northwood. They're fundamentally different processors -- way more than a remap from 130nm to 90nm, but share enough I guess for Intel to continue branding it Pentium 4. For example, Prescott has longer L1 latency than Northwood, twice as long L2 latency than Northwood, and longer mispredict penalty (11 more stages). All those latencies add up to not-as-good pe
          • Yes, that was exactly the suspicion I had in my post. But would it actually show a 16% jump in performance? I don't think it should be that big, unless you have a pretty pathological test case (which the poster may...). Also, I suppose his newer system is only using single-channel memory, which would potentially cause some of the performance hit.

            Both systems have the same amount of RAM and OS, right? and we are talking about the same version of the software?
        • Why don't you use equivalent processors when doing this kind of comparison? Even though the second CPU has 1 MB of cache, it's a Prescott core and can often be slower than the older Northwood at the same clock speed because of its much deeper pipeline.
        • The numbers might not lie, but you do (probably unintentionally in this case).

          The test case was intensive ray tracing with Pixar's RenderMan on two systems:

          That statement is impossible because Pixar's RenderMan (prman) is not a ray tracer.
          It uses coordinate space transforms and shaders to render, much like a modern 3D video card would (albeit prman only does this after dicing the models to be rendered into millions of quarter-pixel-sized micropolygons, and allows arbitrarily complex shaders of many type

          • Traditionally, this was the case, but current versions of prman do support ray tracing. Apparently, they added it around the time Exluna was digging into their market share, in order to be competitive. Before that happened, you had to use BMRT as a rayserver for prman. Since the guy who wrote BMRT went off to Exluna, it obviously wasn't a good marketing move to suggest using his old software to do raytracing with prman, rather than just buying his newer, better software that didn't need prman... :)
  • Time Will Tell? (Score:5, Informative)

    by cacepi ( 100373 ) on Wednesday January 26, 2005 @12:36AM (#11477098)
    Time will tell if Rambus has learned from the mistakes it made with RDRAM a few years ago.

    Well, Rambus has expanded their latest lawsuit blitz to include DDR2 patent claims [infoworld.com], so do you think they've learned?
  • by digitalgimpus ( 468277 ) on Wednesday January 26, 2005 @12:42AM (#11477135) Homepage
    1. Fast RAM is still expensive.

    2. RAM changes too quickly. I buy RAM for one computer; it's only good for that computer. No portability.

    I get a hard drive, I can put that in my new system. I get a new mouse, can use that on my new system. Display? Yep. Graphics card? Most likely.

    RAM? Not likely.

    IMHO they need to standardize RAM like AGP or PCI-X. That way users feel more comfortable investing in it... you can upgrade and keep your RAM.
    • > IMHO they need to standardize RAM like AGP or PCI-X.
      > That way users feel more comfortable investing in it...
      > you can upgrade and keep your RAM.

      You may be interested in FB-DIMMs [theinquirer.net], if they ever come out. Basically a standard (buffered) interface to all RAM you might want to put on there. Just make a new buffer chip and you're set.

    • Funny that PCI-X and AGP are on their way out...
      • Actually, PCI Express is supposed to be on its way IN. AGP is going, though.

        Mycroft
        • by TheLink ( 130905 )
          PCI-express != PCI-X
    • The new memory standards are there to take advantage of the latest-performing memory.

      A problem with your idea is that it is like wanting to keep your CPU but upgrade your computer. You don't want a slow CPU? Why would you want slow memory? Memory you bought two years ago is only going to hobble a brand-new chip.
  • memory bus-like technology. It's not about memory. It's about the wires that connect to memory. All they do is multiplex the lines. Nothing new here.

    I would hope some tech company pushes optical-fiber interconnects!

    Wooooops. Maybe I will start one.
  • Aside from products and technology, has everyone forgotten the kind of business that Rambus ran?

    They pulled a fast one on the industry and then tried suing everyone in an effort to bully companies into licensing agreements.

    They are a VERY shady company. Very unscrupulous and litigious. I would never deal with them knowing their past.
  • Rambus Shambus! (Score:2, Insightful)

    by mjh49746 ( 807327 )
    Not on my PC! If I wanted to get ripped off in the CPU price/performance ratio, then I would've bought a Pentium 4, and if I want to get ripped off in the memory price/performance ratio, then I'll consider Rambus. I'll hedge my bets on DDR2 as it matures and put my chips on AMD.


    Wasn't Rambus run out of PCs due to their crooked practices anyway? What makes them think people will forget? Didn't think I was going to hear that name again. (shakes head in grief)

  • TFA has short memory (Score:4, Informative)

    by arekusu ( 159916 ) on Wednesday January 26, 2005 @02:18AM (#11477609) Homepage
    "The introduction of XDR however is reminiscent of RDRAM around 2000/2001. The technology provided significantly more speed than DDR and was promoted by industry heavyweights such as Samsung and Intel."

    Actually, RDRAM was introduced around 1995, and was used by industry heavyweights such as SGI and Nintendo.
  • by Anonymous Coward on Wednesday January 26, 2005 @03:07AM (#11477812)
    Alright. Extreme Data Rate? C'mon, this is RAM we're talking about here, not a goddamned razor.

    May as well call it Extreme Data Rate 3D Titanium Mach 5 Turbo 2k5 Deluxe Edition, or some such...
  • This is probably one of the last chances to get XDR DRAM included in the on-die memory controllers of AMD and Intel systems for the next 5-10 years.

    After AMD's success, Intel will follow suit, and with both major players doing things the same way, the market will become quite stable.

    However, if XDR DRAM can impress before Intel gets the memory controller on die, then it may be included and have 10 blissful years of monopoly. Intel made that mistake before (in my mind it plays out like a Microsoft tie-in dea
  • by Bob9113 ( 14996 ) on Wednesday January 26, 2005 @09:58AM (#11479290) Homepage
    We are a big section of the opinion makers in computer hardware. We have the ability to affect the public opinion on XDR. To a large extent we were the ones most adversely affected by the last round, and we are the ones who can shift public opinion now.

    This should be like a Usenet death penalty. The free market is there to reward companies that serve their customers and punish those that do not. It is a good system, but it tends to have a short attention span. Tell your friends. Tell your purchasing department. Keep Rambus from coming back from the dead and send a message to other companies that think about abusing submarine patents. It's the same thing as harsh criminal sentencing, except that the free market has a far better track record of responding to exemplary punishment (that is to say: if you support harsh criminal sentencing, you should support this on the same ideological grounds, and if you don't support harsh criminal sentencing because it doesn't work, you should still support this because it does).
