Why Dr. Tom Dislikes Rambus, Inc.

homerj79 writes: "The good Dr. Thomas Pabst has posted his feelings towards Rambus, Inc. and why he, and his site, are so critical of the company. Here's a bit from the article I found interesting: 'When Intel 'decided' to go for Rambus technology some three years ago, it wasn't out of pure believe into technology and certainly not just 'for the good of its customers', but simply because they got an offer they couldn't refuse. Back then Rambus authorized a contingency warrant for 1 million shares of its stock to Intel, exercisable at only $10 a share, in case Chipzilla ships at least 20% of its chipsets with RDRAM-support in back-to-back quarters. As of today Intel could make some nifty 158 million dollars once it fulfills the goal.' It's a good read for people thinking about investing in RMBS. Something seemed fishy over at Rambus, and now I know what it is."
  • I can think of a couple of reasons why Intel is pushing rambus:

    * Intel has a long tradition of moving the industry to places that benefit Intel while harming the consumer. How about Slot 1/2? It drove up the cost of producing machines, with little benefit to the industry. It did, however, throw up a new barrier to AMD and the other cloners. Now that that advantage is gone, Intel has returned to sockets.

    * Although $158 million is chump change to Intel as a corporation, who knows how many Rambus options found their way to the officers of the company? Imagine being in a situation where you can throw up barriers to your competition while lining your own pockets. Even though it seems to be backfiring, there are probably many folks at Intel who will get very rich if they succeed at cramming this down the throats of the many clueless buyers out there.

    Intel knows that bringing out a straightforward successor to the BX chipset would benefit the consumer the most, but so what? They still control the industry. Itanium is another attempt at moving the industry somewhere else, but it will ultimately fail.

  • by DrSkwid ( 118965 ) on Sunday May 28, 2000 @11:36AM (#1041629) Journal
    Once upon a time Intel tried to innovate.

    The Pentium series may have started life as a traffic-light controller, but Grove and Moore demanded a lot from their engineering team and got it.

    Their relationship with M$ did get pretty cosy, but it was never the perfect marriage that the word Wintel would suggest (see Inside Intel here [webreviews.com] - http://www.webreviews.com/9711/inside_intel.html).

    However, as the company grew they seem to have inevitably lost touch with their engineering roots. Pressure from other manufacturers has always hurt them badly.

    The Register [theregister.co.uk] has a good take on it here [theregister.co.uk] - http://www.theregister.co.uk/000525-000009.html

    Rambus is a brain-dead attempt at fencing people into non-commodity memory. It's especially ironic since Intel has been burned by memory production once already (when that market went commodity). I'm sure everybody (except Rambus Inc.) is pleased that it looks like it's heading towards a spectacular failure, as it would drive the price of memory up for no good engineering reason.

    It's an expensive foray for Intel and co., and probably one that we'll end up paying for one way or another.

    I hope they learn a good lesson and go back to chasing MHz.

    .oO0Oo.
  • This conspiracy theory is hard to believe. $158 million may seem like a lot to most of us (i.e., if you are not Bill Gates), but considering that Intel's annual profit is measured in billions, it is really a drop in the bucket. The more likely scenario is that Intel is using the proprietary Rambus technology in an effort to dictate the motherboard market. Since Intel holds Rambus stock and obviously has great influence over Rambus's operation, if Rambus technology becomes predominant they would be in a position to set the rules of the game in the motherboard industry. Eventually, they would squeeze out the other chipset makers, in particular VIA. However, it looks like this game plan is not working out that well (at least for now).
  • Intel understands memory latency. Five years ago, back when they had control, they started talking about a new architecture. In the first draft spec for a 64-bit VLIW chip, they included special instructions for minimizing memory latency. Merced includes an instruction to prefetch a memory page based on an instruction that has yet to be executed. One step beyond branch prediction, they wanted memory prediction. Their goal was to blow away the competition by making memory latency a thing of the past and making bandwidth the issue. Everybody would have to rethink design, and Intel would be ahead.

    In this spirit, Intel backed Rambus, knowing full well that it would be slower to start. The idea was to use their industry control to fill the market with Rambus. When Merced comes out, the base will already be laid for high-bandwidth, high-latency memory. With a chip where latency is no longer an issue, Intel would be king.

    I doubt it will work. Rambus has had technical problems in addition to performance problems. Intel is no longer unquestioned king, so they cannot force the market like they originally planned. When Merced comes out, and Rambus is not ready, they will fall behind.
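    For readers who haven't seen it, here is roughly what software-directed prefetching looks like in ordinary C today ( a minimal sketch using the GCC/Clang builtin as a stand-in; Merced's own speculative-load instructions are a separate and richer mechanism, and the array and prefetch distance here are invented purely for illustration ):

        #include <stddef.h>

        /* Sum an array while hinting the cache about data needed soon.
           __builtin_prefetch is a GCC/Clang extension; the prefetch
           distance of 16 elements is an arbitrary illustrative choice. */
        long sum_with_prefetch(const long *data, size_t n)
        {
            long total = 0;
            for (size_t i = 0; i < n; i++) {
                if (i + 16 < n)
                    __builtin_prefetch(&data[i + 16], 0 /* read */, 1 /* low temporal locality */);
                total += data[i];
            }
            return total;
        }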
  • I don't play 3D games that much anymore, so I can't for the life of me think of any way Rambus will negatively affect my life.
  • So are you saying I'm a problem? I usually moderate well. I somehow made a simple mistake in a moderation and wanted to post an A.C. reply to the improperly moderated post explaining that it was wrong, hoping that someone would mod it back down. And then by not posting A.C. (in a hurry and didn't check the box) it cleared out all my other moderations. All I was trying to do was the right thing, and it backfired on me. I do not approve of articles that are meant to be slanderous in any way, especially those that are just plain Troll or Flamebait. I try to post quality to this board and moderate as such. So, I'm not sure what was meant by the comment, but for the record, I'm not the problem. I just made a stupid mistake.
  • I don't think that Intel is trying to compete with FireWire by releasing USB2. I think Intel realises that USB2 and FireWire are complementary technologies.

    ie. Do I want a FireWire keyboard? Do I want a FireWire camcorder on the same bus as my mouse?

    -
    Ekapshi
  • Exactly. Intel isn't going to get stock at $10/share and then just dump it for a profit -- they'll hold it. Then they'll buy some more, etc. This isn't an uncommon practice in other industries like telecom, where an equipment supplier will sweeten a 'strategic alliance' with a carrier by offering to sell stock at a discount in exchange for a volume commitment by the telco. This kind of thing makes for a good long-term business relationship -- once the big company owns stock in its smaller supplier, there is an incentive to continue doing business to ensure the continued health of their investment. This often leads to the supplier being bought out eventually.
  • Intel, Microsoft, Compaq and other heavyweights make many strategic investments in companies developing promising technology. Since they're in business, they expect to eventually make money on their investments, which is normal for public companies. At the time of Intel's investment, Rambus stock was trading much closer to $10/share than it is today, and Rambus had no customers in the CPU space, so it was a much riskier deal. Rambus' current price (and Intel's current paper profit) is due more to "irrational exuberance" in the stock market than to any old-school valuation.

    The slimy business practice of gaining competitive advantage by investing in partners is common in high-tech, but it is far more common (in fact, normal) in Germany; I'm sure that Dr. Tom could also list the firms from his homeland about which he has similar complaints, starting with all the banks, then working through the heavy industries on his way to the media firms.

    Rambus will win because it produces higher bandwidth at lower cost (power, pincount, etc.) than SDRAM, or it will fail because the RF engineering needed for 800+MHz traces can't be done economically by the time SDRAM busses can deliver the same data rates. The investments made in Rambus by its semiconductor partners will have little say in the long run, though they may have funded the firm long enough to give the new idea a reasonable chance in the marketplace.

    elbee
  • but their management team has shown themselves to be creative and willing to put their balls on the chopping block

    I'm not sure where you've got this idea. RAMBUS has only shown creativity in the bigger and bigger lies it can manufacture to suggest that RAMBUS is somehow faster, cheaper or in any way better than DDR SDRAM. The management team at RAMBUS is worse than the PR team at Microsoft, in my opinion, for its endless stream of FUD, misinformation and blatant lies.

    think about why there is not a HARDWARE equivalent of open source software

    If there is an open-source solution in memory, then it is most certainly not RAMBUS but DDR SDRAM. The DDR spec is openly available for anyone with the manufacturing capability to use, without the crippling licensing fees charged by RAMBUS. The fact that it is smaller, cooler, cheaper and faster than the equivalent RAMBUS modules, and yet RAMBUS and Intel are still peddling their inferior technology, smacks very much of conspiracy and not of the creativity you suggest.

    John Wiltshire

  • Touché.

  • Do you mean that this use of the semicolon is incorrect, or just that he could as well have done without it?

    As far as I can tell, this falls under clause B on this page [commnet.edu] but it's a fuzzy rule so I'm not sure.
    --

  • Although I won't go so far as to insult him personally, as I don't know him, I do agree with you on one point. Why bitchslap people for bad moderation? Isn't that what the carefully crafted metamoderation system is for? Isn't the idea of "free x! no one ruler!" what Slashdot supposedly stands for? Well, having one person bitchslap people down so that only users who purposely set their thresholds to -1 will see any bitchslapped post sort of goes against this idea, right? I don't remember the names of the users, but there have been several people bitchslapped because Rob noticed one bad moderation on Signal 11. Is one bad moderation really worth taking away all karma, setting all previous comments to -1, and setting all future comments to -1, regardless of karma? That sounds like some heavy-handed Russian czar killing anyone who dares cross him.

    So, is slashdot against censorship or not? That's essentially what bitchslapping amounts to. Few people set their thresholds to -1.

    For more information, check out http://slashdot.org/comments.pl?sid=moderation [slashdot.org]
  • With increasing bus speeds and DDR RAM, Rambus's incremental performance at huge cost is unwanted. If Intel wants to dick over its customers in order to turn a penny, let them. It's just more incentive for me to go out and make my next system an AMD box.

  • Now, I think Rambus is getting a lot of unnecessary bad press. I think Tom's reviews are a little narrow-minded, not to mention Tom has always been a little hot-headed ( I can recall an incident or two when he was in a pissing contest with some other news boards, and I recall him having to apologize for some unprofessional letters ). All in all, I like his site, but I take this sort of criticism ( also remembering a completely emotional rejection of 3Dfx ) with a grain of salt.
    Point one: Intel has had a very long-term strategy ever since the Pentium Pro. A simple fact is that the longer a CPU pipeline is, the more efficiently you can crunch numbers. The only other alternative is to make redundant CPU components and a seriously more complex internal bus ( you'd have to have more parallel ports to the register set, plus more complex scheduling ). Whereas, if you had to perform 50 independent calculations and you had 50 fully pipelined stages, you could chuck out an instruction per clock. And at 1,500MHz, that's pretty fscking sweet. This is completely the path of the Alpha chip. You get a hell of a lot more bang for your buck, since it takes a tiny bit more silicon to add an additional stage ( a buffer plus redundant operations ), while a parallel stage requires 100% redundancy of the functional unit plus additional bus interconnections, a deeper, more complex and less efficient scheduler, and a heavier load on the register set ( all of which degrade overall performance unless you can achieve decent instruction-level parallelism ).
    Unfortunately, pipelining has its own can of worms. First of all, data dependency can really kill you if the pipe is too deep. Secondly, you need higher speeds to achieve the same throughput as a wider CPU ( a 750MHz single but deep pipeline CPU should be similar to a 500MHz short double pipe ). Higher speeds mean greater dependence on memory latency. Thus a 500MHz CPU will feel memory latency effects less than a 1,000MHz CPU. So a 4-way 500MHz machine would probably overtake the single-way 1,000MHz machine simply because of memory latency ( a rough model of this tradeoff follows at the end of this comment ).
    Even though Intel has designed n-way machines, they have been focusing on widening the pipeline ( in both CPU and graphics chips.. even though they've reduced/dropped support for that market ). Wide pipes in graphics are especially useful because you are working on seriously complex operations.
    In order to alleviate this problem, Intel has made many ventures into the memory market. They started with the Pentium Pro by using some seriously fast L2 cache. Sadly this was ridiculously expensive ( over $2,000 for a 1 Meg chip ). Just before this, we were seeing motherboards with 2, 4 or even 8 Meg of L2 cache. I personally was hoping that we would eventually see 64 or 128 Meg of L2 cache and could finally replace DRAM with SRAM and eliminate most of the decades-old limitations. We would have a much simpler system, and the power, heat and silicon costs would eventually be worked away, simply because of economies of scale. Unfortunately the PPro's cache was too hot, power-hungry, expensive, and low-yield/low-volume. It was killed in favor of the P-II's significantly simpler ( and separately purchased ) L2 cache. This direction virtually killed the motherboard-based cache, and thus my hopes of a replacement for DRAM any time soon. Soon Celerons started seeing on-die implementations, and now we're seeing Coppermines taking this approach. This approach ( which we saw years ago with the Alpha's 92 or so K of on-die L2 cache ) basically eliminates the viability of 64+ Meg of memory on CPUs, at least until they reach .1-micron sizes.
    In any case, caches can only temporarily hide the latencies of extremely slow memory. Believe it or not, DRAM core speed has been stuck around 16MHz for the past 10 years with barely any fundamental advancement. Sure, there are some neat tricks like making the pipe really wide ( 256 bits read out in 4 sequential bursts ), having intelligently refreshed rows, and allowing nearly 2 simultaneously active rows for fast column access, but if you look at the timing specs of these chips, they tend to have very high latency for initial row accesses. I know that there are some premium chips with significantly better worst-case timings ( used mainly in graphics cards ), but we don't tend to find these in mainstream systems.
    Even DDR-SDRAM only finds ways to take active rows ( which I'm going to guess are more optimized and possibly smaller, to allow shorter response times ) and make them more rapidly accessible. The simplest ( and stupidest ) form of DDR-SDRAM would take the exact same amount of time to perform a column fetch, but transfer the 256-bit block twice as fast ( on both rising and falling clock edges, which seems to be all the rage nowadays ). Basically, I doubt that there is any radically new technology here that addresses the core limitation of DRAM.
    More expensive systems ( almost always non-Intel ) make use of interleaved memory, basically taking 2 or 4 parallel and independent memory pipes, each performing its own memory fetch, usually as odd/even-type interleaves. In this way you can have twice the bandwidth and half the latency on average, provided you can maintain a non-empty queue of memory requests ( which isn't hard to do at all with modern n-way out-of-order CPUs ). This works especially well when you have multiple CPUs.
    In fact, the larger the memory queue, the better the opportunity for the memory controller to schedule optimized memory accesses, much like a SCSI controller can organize disk accesses in an elevator method ( rocking the arm left and right ). If you could have 4, 8 or 16 independent memory devices, then you could truly achieve up to 16 times the bandwidth. An interesting point is that DDR-SDRAM focuses mainly on sending large chunks of contiguous memory, something that only really ever happens on cache fills, and those tend to be 256 bits wide. Rarely does the next memory access come immediately afterwards ( unless you are performing DMA operations ). In a 16-way interleaved memory structure, the bandwidth of each channel can be lower ( especially if the external IO is bandwidth-limited ) so long as you can initiate a new transaction while transferring the data of the old one. Essentially this is the art of pipelining. Pipelined L2 cache was the first memory to take this approach. SDRAM soon followed ( but not to the extent we are talking about here ).
    The situation for Intel is as follows. Higher-speed CPUs are going to be memory-starved. Very rarely do we find tightly organized CPU instructions and data sets that fully take advantage of L2/L3 caches. The standard today is to use generic, ready-made packages such as MFC, which are _heavily_ bloated. In fact, most C++ code is going to be non-spatially optimized. As DLLs grow in size, modularity generally increases, and context switching becomes more and more prevalent, caches will be less and less effective. Additionally, Intel/HP have worked hard ( in their IA64 ) to allow the queuing of memory requests in a CPU ( and in graphics chips ). Their ideal setup will be a quad-processor configuration with one or more graphics processors, all contending for memory, each device queuing up literally dozens of memory operations ( not to mention the DMA access of streaming audio, video and network traffic ). Single-channel DDR-SDRAM, or even quad, octuple, etc. memory isn't going to sustain such a load. Two independent sources will most likely be on separate rows, and thus the memory controller will have to attempt to minimize row thrashing by reordering requests. But there will still be a serious lag when these thrashes occur.
    There is no reason that I can think of that would prevent the use of interleaved DDR-SDRAM modules, although you'd need a chipset manufacturer ( other than Intel ) willing to risk making such an expensive device.
    Intel's solution is RDRAM. Here you can have multiple channels, AND, to my knowledge, you can queue multiple commands on the same channel to potentially multiple devices. Now, this is the theory. From the above, if you have a fixed bandwidth on the external IO, then you can divide the total inner bandwidth between the channels. You can do this either by lowering the frequency of the internal channels or by reducing the bus width. Lowering the frequency would slow simple commands being sent to each device ( thus increasing latency ), so it's best to reduce the bus width. This also helps in reducing the number of wires on the motherboard; 8 channels at 64 bits each take up a lot of space. Now, RAMBUS is similar in theory to PCI or SCSI: you have a master and a slave device, where commands and data are sent in packets. Theoretically, a channel could have multiple read/write(+data) packets in flight, and the RDRAM modules would process them in a pipeline. Multiple RDRAM chips ( the modules are called RIMMs ) on a channel would act like interleaving, except cheaper ( the controller only deals with 2, 4 or 8 channels, but you could have #channels x #chips/channel interleaved memory segments. You wouldn't be able to expand #channels for a given motherboard (e.g. chipset), but you could easily expand the number of chips on a channel, unlike [DDR-]SDRAM where you have one channel and one active chip on that channel ). So, no matter how fast the external bus is, you can handle multi-CPU configurations, deep out-of-order memory operations, and especially IA64's super-deep memory operations in a snap.
    I seriously believe that these were the fantasized goals of Intel when they invested in Rambus. I do not know how far along RDRAM was when Intel got involved ( namely, did they know how poor the system would be at the outset? ). Intel most assuredly prototyped their IA64 with various memory models ( including a hypothetical high-latency, infinite-concurrency memory model ) and also against current SDRAM technologies. I'm sure they learned about their serious weaknesses with SDRAM. They must have been desperate to find a new solution. RDRAM was the closest they could find, undoubtedly for the above reasons. Rambus must have foreseen some of its limitations and was thus desperate to get market share. They must have known that they weren't going to survive unless they had high volume. New memory that completely lacks compatibility is not going to survive unless it's adopted. Thus, their tactics are understandable. Intel's tactics seem fair when you look at IA64's needs. In order to ease the release, they had to get the market comfortable with the newer memory. If they could succeed in replacing SDRAM with RDRAM, then the IA64 migration would be seamless. If not, the many _many_ limitations of IA64 would become painfully apparent, and they could potentially lose everything to AMD or other post-RISC CPUs. They are betting the farm on IA64 and must do everything in their power to maintain market acceptance.
    IA64 is novel; it is bleeding-edge technology. It is intelligent engineering, since they didn't just make VLIW or RISC, but instead "engineered" a solution from what seems to be most of the current technologies. But none of this will mean jack if it doesn't actually perform faster than existing technologies. The one saving grace is that it's designed with headroom, so it has much opportunity for improvement, whereas x86 and simple RISC ( save Alpha ) really have to pull rabbits out of their asses to increase performance without doubling silicon. Alpha, for example, has compilation hints in its instructions. It survived for quite a while without out-of-order execution; only recently has it taken to this horse. Thus it still has some growth left in its current design.
    So, what of Rambus? The theory sounds great. It simply has to work, for Intel's sake. But the reality is less dreamy. For power-consumption reasons you can't seem to have many ( if even more than one ) active chips on a channel, and thus you can look at this as row thrashing taken to an extreme. I'm not even sure if you can request multiple addresses from completely independent chips on a channel. Thus at best you are getting the effect of interleaving 2 or 4 channels, but the inter-chip thrashing could more than cancel it out. Additionally, Tom has pointed out some serious bus latencies which could potentially hurt performance badly. These latencies wouldn't be so bad at all if you could have multiple chips operating concurrently ( again, I'm not sure if this is possible ), but single-chip operation seriously reduces the overall benefit of 2- or 4-channel operation.
    Now, about benchmarks. Tom has basically condemned Rambus because of marginal improvements for extreme additional expense. He's also evaluated this whole stock-sharing policy from a short-sighted greed point of view. I think I've described the necessity of Intel's involvement and the desperation of Rambus to gain acceptance. A radically new design without economies of scale is going to be expensive initially. Hell, Intel has almost always sold its state of the art at extreme prices, which [usually] eventually trickle down to the mainstream at reasonable levels. The only evil deed I see here is Intel's insistence on acceptance despite the desires of the general populace. We like the prices of PCs now. We're dealing with ever-more-expensive software ( ironically of lesser quality and usefulness ) in addition to fluctuations in RAM prices ( and in other components such as DVDs ). A mandatory doubling in system price isn't going to sit well with us, and AMD will be all the happier to take up the slack. If Intel ignores this, then they'll be sorry ( their marketing boys need to go back to economics school for some basic supply/demand and the effect of substitute goods ). Things may look bleak, but competition is doing very well ( the government even seems to be taking care of the bad software boys... well, sorta ).
    And lastly, about the benchmarks... Well, we're talking about software ( and hardware ) that barely takes advantage of RDRAM. Remember, the theoretical point of this new system was to sustain high bandwidth WHEN you have a deep memory-request queue. Many highly optimized programs are going to try to fit in cache and might even organize large data accesses in a way that does not split evenly across RDRAM channels. Thus a single application ( namely Quake ) will most likely not take great advantage of RDRAM ( though I do see improvements ). BUT, when you multitask, and more importantly when you have multiple CPUs, or even when you migrate to IA64, THEN you will find incredible benefits. That's great and all, but what if I'm looking for a gaming platform? I want top-of-the-line, but my only choice seems to be RDRAM, and that's just not cost-effective. Well, right now you seem to be SOL unless you're open to AMD. But what I see is that the new IA32 CPUs will have wider memory-fetch queues. They have slowly adopted multiple outstanding memory requests ( I believe in one of their PII lines ), and I believe they still have some room for advancement in this technology in the IA32 line. From what I've seen, IA32 still has favor with Intel ( they don't exactly want to hand IA32 to AMD on a silver platter, now do they? )
    More importantly, RDRAM is, in my unsubstantiated opinion, slated for servers which farm out many concurrent services ( web servers, for example ). I don't know if there are many good benchmarks out there for this. An Apache + DHTML test bed would probably be the best test for an RDRAM system. My guess is that it will excel over SDRAM. File servers, too, should fit nicely.
    My perspective is that RDRAM is a bad buy right now unless you absolutely need every ounce of performance that you can get. It is NOT currently for single-app configurations. This would probably rule out the Windows 98 market and much of the NT market ( minus the *cough* *cough* NT-web-server, Back Orifice market ). Certain Linux servers should work well also.
    RDRAM is here, and I think it has staying power. It _will_ get cheaper, and it will be shoved down our throats. But more importantly, it will adapt much better to future CPU architectures than existing designs. DDR-SDRAM has a chance with interleaved memory, as I said, but I'm doubting the feasibility. ( 5 years down the road, the sheer number of pins of dual DDR-SDRAM might bring the costs beyond that of 4-channel RDRAM. )

    In conclusion: Tom needs to release some of that aggression by actually playing Quake instead of benchmarking it. He needs to be a little more open-minded ( and actually attempt to argue FOR the other side once in a millennium, or at least take on their perspective ). This goes for many of you techno-journalists. You do great work, but your minds can be like long, narrow tunnels. On the outside, capitalists seem cold, greedy and generally evil. And in many instances they are ( blindly following economic models and acting out Darwinistic barbarism on their competition, fed by the stock-driven need for unrelenting market growth ). But sometimes evil alliances ( such as WinTel, RamTel, MSac, etc. ) are driven by the necessity of survival, and you have to be open-minded enough to see their benefits over personal disgust.

    -Michael
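    A rough back-of-the-envelope model of the clock-versus-memory-latency point above ( the miss rate, base CPI and 100 ns latency are invented purely for illustration, not taken from any benchmark or from Tom's numbers ):

        #include <stdio.h>

        /* Toy model: a fixed memory latency (in ns) turns into more stall
           cycles as the clock rises, so doubling the clock buys far less
           than 2x once memory stalls dominate.  All numbers are made up. */
        static double mips(double clock_mhz, double base_cpi,
                           double misses_per_instr, double mem_latency_ns)
        {
            double stall_cycles = misses_per_instr * mem_latency_ns * clock_mhz / 1000.0;
            return clock_mhz / (base_cpi + stall_cycles);  /* million instructions/s */
        }

        int main(void)
        {
            const double miss = 0.02, lat = 100.0;   /* 2% misses, 100 ns to DRAM */
            printf(" 500 MHz core: %6.1f MIPS\n", mips( 500.0, 1.0, miss, lat));
            printf("1000 MHz core: %6.1f MIPS\n", mips(1000.0, 1.0, miss, lat));
            return 0;
        }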
  • I can only imagine that it had a good deal to do with consumer stupidity and a great deal to do with movie-studio deals to put their movies on the inferior videotape format, due to marketing and good ole $US.

    Beta (Sony) lost out in the VCR market for the same reasons that Apple lost out in the PC market. Neither of them would license their technology. Neither company was content with owning the standard; they wanted to own the entire market. The developers of VHS (Panasonic and JVC, I believe) licensed the standard to anyone who wanted it. The result? VHS was cheaper, and Beta lost out. Sony did eventually license Beta, but it was much too late.

    Cheers,
    Bun
  • by Chas ( 5144 ) on Sunday May 28, 2000 @09:45PM (#1041644) Homepage Journal

    Your Beta analogy is more apt than you know.

    The reason Beta failed was that Sony tried to push Beta forward alone. They didn't want to share the VCR market with anyone and wanted to control the standards.

    Unfortunately, everybody jumped onto the VHS bandwagon (and basically told Sony where to stick their licensing fees) and the inferior VCR format prevailed.

    Right now, while RDRAM is a very forward-thinking step, its usefulness compared to a well-tuned SDRAM system is next to negligible.

    RAMBUS and Intel keep spouting off about how future generations of RDRAM will be more powerful. Well, that's in the future. Right now, RDRAM is an expensive, proprietary, unnecessary boondoggle.

    Some day we WILL have to go to a serial memory device like RAMBUS. But that day hasn't come yet.


    Chas - The one, the only.
    THANK GOD!!!

  • Well, OK, what I probably meant was that a proprietary memory design may be bad for us all price-wise. Intel has already had negative experiences trying to compete with the Far East to shift memory. I think the consortium is trying to box people in, and I just don't like that feeling.

    All institutions eventually fall, but while they exist they can cause as much harm as good. But heck - what technology doesn't? (in a broad sense, so no nit-picking "what about X or Y" comments !-)
    .oO0Oo.
  • I wasn't entirely serious, but ...

    The Alpha isn't priced for domestic use, and the G4 -- dunno, they don't sell them at my local supplier.

    A non x86 AMD chip that ran Linux might be a nice product and earn AMD some respect from techies.

    They've obviously got some skilled people.

    You think a bit of pride might make them do such a thing for the cavalier hell of it!
    .oO0Oo.
  • What do you know, an article with actual facts in it. I have been wondering when people were going to catch on about the pincount aspect of this.

    I am an ASIC (chipset) designer by trade and the only reason we ever looked at Rambus parts was because a high-pincount package (anything fancier than intel's 256/272-pin BGA or whatever the current BGA-du-jour is) costs a lot. Trying to get a ton of memory bandwidth using SDR or DDR SDRAM means using lots of pins, which in turn burn a lot of power and require a lot of additional power and ground pins to support them.

    We ended up using SDR SDRAM for our parts, but that decision was made almost 24 months ago, and we were worried about constrained availability of C-RDRAM or D-RDRAM. Guess we were right about that one.

    RDRAM is not the antichrist, everyone. It's just ridiculously expensive at the moment. So are 1GHz pentiums and SRAM-based disk drives, but no one complains about those facts. The price _will_ come down. Will DDR or QDR SDRAM already own the market by that point? Who knows. Just don't count out RDRAM, since the pincount requirements are a very important part of optimizing overall system costs.
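    A rough back-of-the-envelope look at the pincount argument ( the per-pin transfer rates below are ballpark figures for circa-2000 parts, not exact datasheet specs ):

        #include <stdio.h>

        /* Data pins needed to hit a target peak bandwidth, given each
           interface's per-pin transfer rate.  Ballpark figures only. */
        static double pins_needed(double target_mbytes_per_s, double mtransfers_per_pin)
        {
            /* each transfer moves 1 bit per data pin; divide by 8 for bytes */
            return target_mbytes_per_s * 8.0 / mtransfers_per_pin;
        }

        int main(void)
        {
            const double target = 1600.0;   /* 1.6 GB/s, one RDRAM channel's peak */
            printf("PC100 SDRAM (100 MT/s/pin): %.0f data pins\n", pins_needed(target, 100.0));
            printf("DDR-266     (266 MT/s/pin): %.0f data pins\n", pins_needed(target, 266.0));
            printf("RDRAM       (800 MT/s/pin): %.0f data pins\n", pins_needed(target, 800.0));
            return 0;
        }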
  • Obviously not. So, to sum up: RAMBUS hits a maximum of 1.6 GB per second across a 400MHz bus (or what is being called 800MHz RDRAM). DDR-SDRAM can hit 2.1 GB per second RIGHT NOW on a 133MHz FSB. Some of the upcoming SDRAM technologies will hit over 6 GB per second.

    The problem isn't even the damn bandwidth, but the latency. RAMBUS RAM is slower than all crap to get started, and that absolutely kills it. Right now, the fastest RAMBUS RAM is actually SLOWER than what a 133MHz FSB will get you out of SDRAM, and as long as RAMBUS keeps its high-burst-rate, high-latency design, it will continue to get worse.

    Why don't you go look it up? Just plop "RAMBUS" in his search engine.
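    For what it's worth, the arithmetic behind those peak-bandwidth figures looks like this ( widths and clocks are the commonly quoted numbers, so treat the results as theoretical peaks, not measurements ):

        #include <stdio.h>

        /* Peak bandwidth = bus width in bits / 8 * transfers per second. */
        static double peak_gb_per_s(double bus_bits, double mtransfers_per_s)
        {
            return bus_bits / 8.0 * mtransfers_per_s / 1000.0;   /* GB/s */
        }

        int main(void)
        {
            /* PC800 RDRAM: 16-bit channel, 400 MHz double-pumped = 800 MT/s */
            printf("PC800 RDRAM : %.1f GB/s\n", peak_gb_per_s(16.0, 800.0));
            /* DDR-266: 64-bit bus, 133 MHz double-pumped = 266 MT/s */
            printf("DDR-266     : %.1f GB/s\n", peak_gb_per_s(64.0, 266.0));
            /* PC133 SDRAM: 64-bit bus, 133 MT/s */
            printf("PC133 SDRAM : %.1f GB/s\n", peak_gb_per_s(64.0, 133.0));
            return 0;
        }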

  • So what if Linux runs on a trash-bucket PC? That doesn't mean you put critical stuff on that box.

    If you throw absolute shit at the kernel, it might keep running, but it'll probably crash. Try running a VAX (cluster) with flaky CPU(s) and see how far you get (clue: not very).

  • hehe

    so many people, so little brain

    duh
    .oO0Oo.
  • The only 820 motherboards that have the bug [intel.com] are the ones with SDRAM, not the ones with RDRAM.
    Ironically, they are willing to trade you the buggy SDRAM one for an RDRAM one with 128MB of RAM.
    ... Maybe this wasn't a bug after all ;)
  • I don't care if he is a "doctor"; he needs to take a remedial English class.

    Do you care that he's German? And he speaks English, German and geek better than you. Actually, I think his English is very good: the subtle nuance that he's a little bitchy comes through loud and clear. I think he and his site do a terrific job, though.

  • I read the intel/rambus/warrants thing when it was posted, and it just didn't ring right to me. Here's why. The warrants issued have a value of about $160 million. While this is not chump change, it doesn't mean ALL that much to Intel. Even if the value doubles to about $300M, it still isn't enough to sway Intel from a technical path that they see as 'right'. If you search back through the news, early in 1999 Intel spent a LOT of money greasing the palms of RAM makers to kickstart RDRAM production. These 'gifts' came to a LOT more than $160M; I seem to recall them giving one manufacturer about $500M to get started (I am too damn lazy and tired to look up the links right now). They spent WAY more to jumpstart RDRAM production than they will EVER get from the value of the Rambus warrants. In my mind this discounts the money theory.

    This leaves us with the question of why Intel is proceeding along such a STUPID path. They could switch over to PC133 at any time. If VIA can make a chipset that does it, Intel can do it in half the time, and better (no matter what you say about Intel, they have a LOT of talented engineers. Look at the BX chipset. Nothing comes close. If they do the same thing with PC133 memory, and all the bells and whistles, look out world, and VIA). The only question is whether management will let them do it, and why not?

    I only have one theory here: control. Intel has long been, like MS, a company that controls standards. If you control standards, you can shape the market and reap HUGE profits by being first and best at everything. Intel made the PC66 standard and was, for a long time, the best vendor of PC66-based chipsets. They bobbled with PC100, and other vendors took over the technology lead. Intel quickly caught up, but WAS behind for a while. They are not in the ballpark now with PC133, and it is hurting them. They went from nearly 100% of the chipset market a few months ago to about 60% now. That has to hurt. And until they get their production act together (September-ish?), the situation will not get much better. Couple this with the fact that they were collectively bitch-slapped by the DRAM vendors recently, and the future doesn't look too good for them. One thing that will get them back into the lead quickly is owning the standards. They can then produce the best chipsets first, and set direction. $160M isn't enough for them to stick with a losing technology. Controlling the marketplace is. THAT is why I think Intel is sticking with Rambus.
  • All we need now is someone with a PhD in computer science reviewing medical treatments.

  • I know it's late in the day for this, but here's another - new - Tom Article [tomshardware.com]. Guess what, he still doesn't like RAMBUS and gives more benchmarks to back his arguments up. In typical Tom Tirade fashion.:)

    IMHO, as per

    J:)
  • USB 1: 1.5Mbps now
    USB 2: 400Mbps (not here yet)

    You would put them both on the same bus? That's insane.

    IEEE1394 (FireWire): 100-400Mbps now.

    Apple changed the licensing from per port years ago.

    I can already buy FireWire stuff. All the digital video cameras come with FireWire so why USB 2? It isn't needed.
  • Well, if people "clearly" were able to see AMD's processors as "superior", then they would buy them, no?

    I've always seen AMD's pursuit of the x86 market as a personal crusade of Jerry Sanders.

    I think that it might be a good idea for AMD to make a Pentium-beater with an INCOMPATIBLE instruction set. (Maybe they already are -- do tell.)

    Making a super chip and porting something like Linux, or *BSD or BeOS or anOtherOS to it might be a winner.

    Go on Jerry - roll the dice
    .oO0Oo.
  • rambus is suck!

    I KISS YOU!

    (that'll cost me a couple karma points!)
  • I don't, but it's a story of a better technology losing the market. Why? I can only imagine that it had a good deal to do with consumer stupidity and a great deal to do with movie-studio deals to put their movies on the inferior videotape format, due to marketing and good ole $US.

    Does RAMbus really suck? I have no clue, but their management team has shown themselves to be creative and willing to put their balls on the chopping block. $158 million may not mean much to Intel, but I'll bet the people at Rambus have more than that on the line -- and it DOES mean something to them.

    Now, I'm sure we all would agree that buying market share is not a healthy capitalistic practice, but do you think Intel would be wasting their time for $158 million on a technology that was anything less than adequate? I wouldn't think so.

    Everyone here loves the newest/fastest/bestest stuff, but in the real world we rarely get it in our hardware -- think about why there is not a HARDWARE equivalent of open source software... it's called factories -- bring on the nanobots!

    __________________________
  • Athlon is not "clearly superior" to the Pentium III. Let's see -

    (a) Athlon uses double the power of the Pentium III, and runs twice as hot

    (b) No support for MP (still!)

    (c) PC Magazine recently ran an article which said a 1 GHz Athlon is approximately as fast as the 866 MHz Pentium III. The 1 GHz Pentium III absolutely slaughters the 1 GHz Athlon in most benchmarks (especially iSPEC).

    Unfortunately, mass-public perception is that Athlon is superior. Intel has to work on marketing.

  • Tom and his staff are actually relatively competent at reviewing things on a technical basis. But when they try to do these business assessments - oh, man - the results are just appalling. This article brings no new information to the Intel/Rambus relationship (is $158 million really important enough for Intel to risk its business on?). Anybody else remember the article a couple of months ago where Van Smith indicated that Intel would exit the microprocessor business imminently? Come on, Tom, stick to technical articles. We'll read about the business side from the experts.

  • The real question to ask is how much RAMBUS stock do the corporate officers of INTEL own?
  • I'm sure everybody (except Rambus Inc.) is pleased that it looks like it's heading towards a spectacular failure as it would drive the price of memory up for no good engineering reason.

    There are many, many good engineering reasons to switch to Rambus. It has a higher maximum bandwidth and uses fewer pins. RDRAM isn't as cheap as SDRAM, but guess what - that doesn't suddenly eliminate the benefits of RDRAM.

  • > I agree. Rambus is so overrated. I will never buy Rambus. Save my money and wait for DDR-SDRAM.
    Answer me this: will you be purchasing a PlayStation 2? If you do, note that Sony is using RDRAM in the PS2.

  • I don't care if he is a "doctor"; he needs to take a remedial English class.

    Yes, well can you REALLY justify the use of that semi-colon?
  • I can count the number of times Linux has crashed on me in the past five years on one hand with fingers to spare. What the hell's your problem?

    Or are you nothing more than a common troll? Oh, never mind.

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • by Chalst ( 57653 ) on Sunday May 28, 2000 @05:26PM (#1041668) Homepage Journal
    No. Revenue is the correct comparison: if RDRAM is as bad as Tom's says it is, then it affects revenue on all the lines that are supposed to use Rambus.

    I agree with Tom's that RDRAM isn't right for commodity PCs, but I don't buy the conspiracy theory (beyond the fact that Intel would love to lock PC manufacturers into a proprietary technology). Rambus is an ambitious technology with lots of potential, and Intel backed it because they believed it would perform better than it did. They might be forced to change their mind, but we will just have to wait and see; a surprise isn't impossible.

  • Some kind of mutant marmoset ?
  • by deaddeng ( 63515 ) on Sunday May 28, 2000 @05:41PM (#1041670) Homepage
    There is a war between Intel and AMD. It is for the future of not only the desktop PC architecture, but also the server architecture and the soon-to-be-booming gaming console/internet appliance architecture. The basis for this war is about as complex as the alliance structure that resulted in WWII. The catalytic event that launched this conflict was the antitrust case (and victory) against Microsoft.

    Microsoft had effectively controlled the architecture by controlling the OS environment. This will soon be over. The next big thing will be embedded OS's in gaming consoles. Intel and AMD are vying to dominate that market.

    The stuff you see on Tom's Hardware and Anandtech is a distraction. Those are feints and skirmishes aimed at press ink and enthusiast mindshare. No one ever said that the world is fair or that the best technology has to win. Rambus IS the best technology, and the only DRAM technology that can scale right now to keep up with Moore's law. DDR is a legacy band-aid.

    The real war is being fought between AMD and Intel among the DRAM manufacturers and silicon foundries of Asia -- Korea, Taiwan and Japan. The game is to get AMD and Intel to pay for DRAM conversions and partnerships. DRAM manufacturing has been a VERY marginal-profit business for the past decade -- look at the consolidation that has taken place in Japan and Korea. The DDR vs. RDRAM war gives the industry a chance to make a huge amount of money. They are all holding their production plans hostage to the highest bidder -- AMD vs. Intel.

    This is why the X-Box victory for Intel was such a big deal. It was the opening salvo in the war. Personally, I believe that the X-Box may never be built. But the announcement of Intel's (and Nvidia's) victory has implications for the DRAM wars -- it showed that Intel was willing to build the CPUs for the X-Box for free, or at cost. Why? To deny the market to AMD, of course, but even more importantly: to ensure that the next generation of Win32-based games for PCs and consoles would use Intel's SSE extensions and architecture enhancements, not AMD's 3DNow. Intel could do this because THEY ARE HUGE -- they have the fab space to make at-cost Coppermine chips. It gives Intel a production base through 2004 for .18-micron-process Coppermine cores while other plants are converted to .13-micron + copper Willamette and McKinley cores. AMD does not have the fab capacity to do this while maximizing profits. Its fab capacity is better used for Athlon/T-Bird/Duron cores and flash memory.

    Taiwan has positioned its quasi-government-owned semiconductor plants to play the crucial part in the next phase of the war. You may notice that Samsung, Micron, Hyundai, NEC and the other DRAMurai constantly issue conflicting statements about their production plans for DDR vs. RDRAM. This is not just bad reporting. This is a strategy: they are asking Intel and AMD, "How bad do you want it?" "How much are you willing to pay?"

    The main pressure has to be on the stronger contestant: Intel. If they pressured AMD too much, they would lose leverage over Intel's wallet. They are using upstart AMD as a stalking horse to get Intel to pay for the conversion to RDRAM production and guarantee profits. Very nice profits from producing RDRAM.

    The thing is, consortiums and cartels are weak things. Intel is constantly probing the fissures in these relationships. One weak link is Hyundai -- it desperately needs cash, and Intel is dangling $200 million for RDRAM production. But the weakest link is Taiwan. Taiwan's companies (Mosel Vitelic excepted) are not part of the seven DRAMurai. None of Taiwan's main semiconductor companies design DRAM. These companies are also the tightest-knit of any of the major Asian companies. Samsung and Hyundai compete fiercely. NEC, Toshiba, Hitachi, and Fujitsu compete fiercely. And Taiwan holds a unique position in the semiconductor world: 80% of the contract foundry/fab capacity in the world is on Taiwan. When VIA -- a fabless design shop -- needs to build its chipsets, it turns to TSMC, UMC and Winbond, Taiwan's home-grown, government-sponsored foundries. When Nvidia or 3DFX need a place to make their graphics chips, they turn to Taiwan. When one of the DRAM manufacturers needs quick capacity, they turn to Taiwan. These are state-of-the-art foundries, using .13-micron processes and copper interconnects if required.

    Below the Taiwan government, there is a huge conglomerate called Formosa Plastics Group. Its founder is probably the least-known and wealthiest billionaire in Asia. Under the FPG umbrella are subsidiaries like VIA and TSMC, and also "strategic partners" like FIC -- interlocking boards, cross-investment, patent sharing, the works. The Taiwan group is just waiting for Intel to pull out its wallet, IMHO. VIA would love to settle the Intel patent infringement suit and ITC complaint. It desperately needs a partnership with Chipzilla for its own (formerly Cyrix) CPU plans to succeed. So, the news [that VIA is working on an RDRAM chipset] needs to be read in this light -- it is NOT yet a victory by Intel. It is a probe, a signal by VIA that it is ready to talk.

    VIA does NOT need a Rambus license to design and build a RDRAM chipset. The license needs to be held by the FOUNDRY. TSMC, UMC, and Winbond ARE ALREADY RAMBUS PARTNERS. The foundry PAYS the ROYALTY. It's all there at http://www.rambus.com.

    So the war is far from over, but I think that Intel is very close to playing the Taiwan option. That is the whole point of the lawsuit against VIA: not to break them, but to leverage them against AMD. VIA had assumed a KEY position as AMD's partner. AMD NEEDED VIA to build the chipsets for the Athlon and Thunderbird/Duron, and to build the DDR-SDRAM chipsets as well. THIS IS NOW IN DOUBT: Ace's Hardware had a story a few days ago about the falling-out between AMD and VIA over the KX133 chipset's incompatibility with the Thunderbird and Duron CPUs. AMD now says that the first DDR-SDRAM chipset will NOT be from VIA, but from ALi. Acer Aladdin (ALi) is one of the few big Taiwan companies that is not connected with FPG. This is a desperation play by AMD. ALi is not even in VIA's league.

    DDR-SDRAM's share of the PC main-memory market will be virtually zero this year and the first half of next year. If you look beyond the BS, look at the KX133 chipset for the Athlon. It came out in January. It is now June. You still can't get one from any of the major vendors like Gateway or Compaq; they are still using motherboards with the obsolescent AMD750 chipset (no AGP 4X, no PC133 DRAM, incompatible with GeForce cards, crappy HDD controllers). The telltale is to go to Gateway or Compaq or any of the others and look at the system specs: if they say AGP 2X or PC100 SDRAM, it's the old AMD750 chipset. That's SIX MONTHS.

    Realistically, that means that the first volume shipments of ANY DDR-SDRAM computers won't be before March 2001. IMHO, June 2001 is more likely. This assumes that they work; I'm getting suspicious because the DDR-SDRAM meetings are not already demonstrating production chipsets. IF DDR-SDRAM WAS A SLAM DUNK EASY THING, SOMEBODY WOULD HAVE ALREADY DONE IT. You would have seen a high-end workstation company like SGI, SUN, DEC/ALPHA/COMPAQ, INTERGRAPH, or SOMEBODY do it by now. This is not the slam dunk they want you to think it is.

    Assuming DDR-SDRAM can be produced for volume system sales, it should be usable in any application that today uses SDRAM -- obviously video cards, but also other applications. I still think it is the last trick they are going to pull out of SDRAM; you will probably see some systems produced, and then they are done.

    When Willamette is introduced, I think it will answer a lot of questions. We will see what the best semiconductor design company on the planet (Intel) can do with a from-the-ground-up platform intended to take full advantage of RDRAM's unmatched bandwidth. If Willamette delivers, I think that the DRAM companies will produce RDRAM in volume, but it is going to cost Intel dearly for the missteps of the past year. The DRAM industry is not going to risk another i820 fiasco -- Intel is going to have to write them an insurance policy.

    Sorry this is so long. I'll just add:

    Tom Pabst IS SUCK!
  • Let's put it this way. This "weekend" I've already had to go in to work five times to reboot a crashed Linux machine. And there's still one more day to go... sigh... If only they'd upgrade them to VMS; management loves the buzzwords, though, so Linux it is.

  • The advantage of fewer pins is killed by needing much higher frequency to maintain bandwidth.
  • You misunderstand. The way SDRAM works, you don't NEED 800Mhz traces to achieve greater bandwidth.

    DDR SDRAM in the 133-150MHz range supplies more bandwidth than 800MHz RDRAM. It also does it with lower latency.

    Also, right now, the memory really isn't the problem. Intel's GTL+ interface is. Basically it tops out around 800MB/s.

    Maybe some time in the future, when motherboard trace counts get unconscionably high, a solution like RDRAM will be more attractive. Right now, its performance is lower, and it costs more, than commodity SDRAM ( rough numbers in the sketch below ).


    Chas - The one, the only.
    THANK GOD!!!
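    A crude illustration of why latency, not peak bandwidth, dominates ordinary cache-line fetches ( the latency and bandwidth figures below are hypothetical, chosen only to show the shape of the tradeoff, not measurements of any chipset ):

        #include <stdio.h>

        /* Time to fetch one 64-byte cache line:
               t = initial latency + line size / peak bandwidth
           1 GB/s equals 1 byte/ns, so the division below is already in ns. */
        static double fetch_ns(double latency_ns, double peak_gbytes_per_s)
        {
            const double line_bytes = 64.0;
            return latency_ns + line_bytes / peak_gbytes_per_s;
        }

        int main(void)
        {
            /* hypothetical parts: lower-latency SDRAM-ish vs higher-latency RDRAM-ish */
            printf("60 ns latency, 1.1 GB/s peak: %.0f ns per line\n", fetch_ns(60.0, 1.1));
            printf("90 ns latency, 1.6 GB/s peak: %.0f ns per line\n", fetch_ns(90.0, 1.6));
            return 0;
        }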

  • Or do you hate reading technical articles intentionally split across multiple pages to increase banner views?
    ___
  • No, not really. But I did notice they put a little disclaimer at the top saying the use of the FreeBSD daemon was as a nice devil figure, not a way to connect FreeBSD and Rambus.
  • by Anonymous Coward on Sunday May 28, 2000 @11:05AM (#1041677)
    It seemed very clear at the start, but got more vague towards the end, with 3000 warrants being issued here, there and everywhere to whichever chip manufacturer produced RAMBUS chipsets the fastest.

    Is this not a form of cartel? Is this not a dedicated attempt to replace a hardware system with a large installed user base ( SDRAM ) with one that is technically inferior -- or at best similar -- and costs more?

    I just bought a new motherboard from ASUS and had a devil's job getting a PIII board that still supported SDRAM ( the SC2000 ); there was little choice at all. ASUS may have other boards listed on their site, but the vendors can't buy them. Only the RAMBUS ones.

    I'm not trading in my large investment in RAM (384M) only 6-9 months after buying it! Looking at the article, you shouldn't either. Unless you enjoy lining Intel's pockets.
  • It's like the electric companies buying up all the commuter transit companies. They make the electricity that the city (and the commuter) pays for by using the trains...

    Intel isn't just making money on their stock (who doesn't); they're also serving to drive SDRAM out of business by using their i820 and related chipsets - which just happen to have bugs in them that make them perform slower when they are using SDRAM... Coincidence?

  • Sorry for the above length. High points:

    -Current IA32 CPUs and single-tasking/single-threaded software ( like Quake ) do not present too many opportunities for multiple concurrent memory accesses.

    -Deeper pipelining and ever more advanced add-ons to IA32 in addition to AGP and faster DMA devices ( such as gigabit-ether and RAID drives ) would provide greater concurrent load for a memory device.

    -Multi-threaded/multi-process services such as file serving, web serving, etc. on a multi-CPU system can present a high main-memory load ( defeating virtually any caching system ) and create a greater need for intelligent mem-access management ( as with SCSI elevator optimizations for disk access ). Being able to service more than one mem-request simultaneously is more valuable than servicing a single mem-request more quickly. You can double or quadruple throughput easily by going to an n-way memory system, rather than by improving mem-latency by 50% each generation ( see the sketch at the end of this comment ).

    -Memory blahs: DRAM is based on leaky capacitors ( pseudo-batteries ) which must be recharged, and precharged, in a way that causes serious performance lag, especially when you change which row is being accessed. Thus the "rated" speed of all DRAM chips is misleading if you do not understand this: 200MHz DDR-SDRAM is its burst speed. Dozens of clock cycles are consumed when non-optimal adjacent memory accesses occur. This is a fundamental flaw of DRAM and is present in most architectures ( RDRAM included ). Thus, speeding up the interface alone can never fully resolve this problem.

    -[DDR-]SDRAM is designed for high-speed dumping of closely spaced regions of memory ( within a row ) in a serial fashion. Higher bandwidth allows faster flushing of internally cached hits, but there is still a severe latency between accesses. An ideal ( though costly ) advancement might be to produce multiple interleaved "channels" of [DDR-]SDRAM in order to handle multiple concurrent memory accesses. I have seen no indication of this direction from motherboard chipset manufacturers, and thus doubt its feasibility. This solution is typically found in high-end workstations ( see SGI's NT visual workstations or many RISC servers ).

    -RDRAM is designed with the idea of multiple channels from the beginning. Sadly, its radically different architecture means an extreme introductory price which would only decrease with higher production volumes. RDRAM did not turn out to be as glorious as one would have hoped. BUT, most benchmarks I have seen deal with non-server apps in single-CPU environments. Thus the scores were only marginally better.

    -Intel's incentive. Tom leads us to believe that Intel is trying to make a quick buck on RDRAM. I have suggested that Intel absolutely needs an RDRAM-type solution ( if not a normal interleaved solution ). Pipelining ( in both CPU and GPU ) allows the masking of memory loads ( somewhat ), and thus Intel is migrating towards higher-latency-tolerant CPUs that are capable of an ever-increasing number of outstanding mem-requests ( which is facilitated by RDRAM and its like ). Ultimately, Intel will move to IA64, which is completely designed around massive pre-queueing of memory accesses. Without massively parallel memory interfaces, IA64 will be even more memory-starved than Alphas ( due to larger instructions ( 40 + overhead vs 32 bits/instruction ) and the heavy usage of speculative mem-loads ). I believe that IA64 will be significantly slower than IA32 unless a more advanced memory structure can be used. Additionally, I believe that IA64 will be far better suited to RDRAM than IA32 is ( due to the above ).
    Intel NEEDS RDRAM ( or something like it ) to succeed for fear of IA64 flopping. Introducing IA32 to i820 and RDRAM is supposed to ease the market into acceptance. I doubt they care about lowering the price for IA64 ( since it'll be astronomical in and of itself ), but you need to at least encourage chip manufacturers to get the bugs out early.
    Rambus, on the other hand is just trying to keep their business going.. Thus they're having to give incentives out here and there ( that's just standard business ).

    I think Tom is being a little emotional in saying Rambus / Intel are evil. Rambus needs customers, and Intel needs a better memory architecture.

    No, Rambus is not a cost-effective solution for single-CPU IA32 systems. It definitely is not worth the price for single-tasking systems ( such as for gaming ), though it might still work well in peripheral-contention situations ( 2xNIC + SCSI + RAID + CPU ). I would be curious to see RDRAM's benchmarks in IA32 multi-CPU configurations with multi-threaded / multi-process apps, of course.

    I strongly believe that RDRAM is a good match for IA64 and possibly 4-CPU configurations ( where memory costs are not going to make up too much of the overall package ).

    Last and strongest point: Tom and many other techno-journalists, though very valuable in their insight and general contributions, are often seriously single-minded and emotional. I believe they should spend a little more time being objective and trying to analyze the _why's_ of corporate America, looking at things from its perspective from time to time ( and actually commenting on it ). You make a much stronger voice when you show an understanding of the situation, rather than preach to the choir and make ASSumptions.

    -Michael
  • ME TOO

    I guess he doesn't know that the daemon image is copyrighted and he isn't allowed to misuse the daemon that way without permission from Marshall McKusick. And he spells "daemon" wrong...
    Mr. Pabst should have done his homework better --- not a good sign for a journalist, I must say.
  • This is just like Intel to force someone's inferior technology with their name. Another reason to buy AMD. ;)
    It's pretty scary how many people won't buy the clearly superior Athlon because it's not an Intel... that kind of small-mindedness sickens me. Just like the people who 'don't trust' open source software because it wasn't hammered out of some conglomerate corporation. I must agree with Tom on this one. Rambus saw itself on the way out, so it offered itself to the major computer players and managed to make a huge amount of money in the process.
  • This is similar to the tactic IBM used when they chose the 8088 processor for the IBM PC. As I recall, though, IBM bought a bundle of Intel stock after they announced their decision.

    Many believed the principal reason IBM chose the Intel processor was because they could leverage a huge profit from the stock ploy, whereas if they chose the Motorola 68000, nada, because Motorola was a much larger company.

    Why didn't IBM choose Zilog, which had a much larger share of the PC processor market at that time? That may have been a technology decision, because the Z80, Z8000 and Z80000 (did I remember the right numbers of zeros?) were mutually incompatible (far more so than the 8008/8080/8085/80186/80188/8086/8088 line), not a good portent for the future.

    Incentives like this may be "good business", but they seldom benefit the consumer, as when a vendor has to "buy" a market for an inferior product, such as RAMBUS.

    Octalman
  • IF DDR-SDRAM WAS A SLAM DUNK EASY THING, SOMEBODY WOULD HAVE ALREADY DONE IT.

    You mean, like GeForce cards [asus.com]?

    (what's the difference between SDRAM and SGRAM anyways?)


    Your Working Boy,
  • by sometwo ( 53041 ) on Sunday May 28, 2000 @11:10AM (#1041684)
    Here's another article that dislikes RDRAM: http://www.mackido.com/Hardware/rdram.html [mackido.com]
    First of all, a little professionalism would be appreciated.

    Secondly, the fact still remains that there are heavy initial access delay times ( 7 cycles in PC66 SDRAM alone ). This delay is only marginally avoided by interleaving / pipelining ( unless you can produce 4 to 8 fully pipelined stages ). This is still a downside or flaw of DRAM. To my knowledge SRAM does not require independent row and column charges ( though the addressing logic may require some twiddling ).

    Either way, I doubt that your nit-pick makes any difference to my main point: that you can achieve greater performance in a concurrent-access memory subsystem by going n-way ( at the cost of redundancy and expense ).
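    If it helps, here is a rough illustration of why that initial delay matters so much ( the 7-cycle figure is the PC66 number above; the 1-cycle burst reads are an assumption for illustration ):

        INITIAL_CYCLES = 7   # cycles before the first word of an access comes back
        BURST_CYCLES = 1     # cycles per subsequent word within the open row (assumed)

        def cycles_per_word(words):
            return (INITIAL_CYCLES + (words - 1) * BURST_CYCLES) / words

        for n in (1, 4, 8):
            print(f"{n}-word access: {cycles_per_word(n):.2f} cycles/word")
        # 1 word: 7.00, 4 words: 2.50, 8 words: 1.75; the startup delay dominates
        # unless you can keep long bursts, or several pipelined accesses, in flight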

    -Michael
    Uhh, correct me if I'm wrong, but if you have N64 games, don't you already have an N64? If not, then you must not need one, so I don't see why backwards compatibility is such a big deal. The only benefit is a little space saving (and a little bit better graphics on some games - yippee). But the risk of being saddled with poor design decisions (every system has some) and old technologies makes backwards compatibility a very bad goal for a console. Okay, if you can get it in a system incidentally, why not, but I don't think it is a great feature. And it does help ease the initial size of a system's game library, but no one is going to buy a PlayStation 2 to play PlayStation games.

    Besides, the Dreamcast can play [slashdot.org] PlayStation games. Do you really think the Dolphin won't? Now playing Dreamcast games, that would be cool. A system that could play the games of another contemporary system would rock!
  • "rambus is suck" ?

    greetings, anandtech'er!
  • That ad was not "paid" for. It was a certificate type thing. But it certainly did cause a fuss.

    JOhn
  • Note to Rambus Marketing: For $158 Million I will write a very positive article about your organization and products.

  • Tom received quite a bit of hate after the "Voodoo is dead" thing. But look...a year later he was right.
  • by Kalak451 ( 54994 ) on Sunday May 28, 2000 @12:21PM (#1041691)
    But it's not just $158 million, it's a big chunk of Rambus. And if it becomes the dominant form of memory in the world, then the stock price will just keep going up, and the dividends from that million shares of stock will start adding up. Also, Intel will have some say in what Rambus will be doing and will be able to force it into helping them stay on top. Also, Intel doesn't pay Rambus royalties; AMD would if they made a chipset that used Rambus. Ya can't beat that.
    Help, /. is being invaded by mentally unstable marmots intent on flinging their entrails at people!
    Ok, not quite, but I have noticed a few trolls around today, and almost a complete lack of good posting.
    Yes, I know, by posting this I'm submitting to the marmot syndrome everyone else seems to have today..
  • I think Tom was polite. This is your e-mail to him which is entirely pointless and clueless as well. Have a day.
    Just because it's not illegal, and pretty much not unethical, doesn't mean it doesn't suck. And believe me, if Intel had known that Rambus was going to cause these kinds of problems 3 years ago, they wouldn't have done it; but now they have made an investment and are going to try and make some cash off of it.
  • by swinge ( 176850 ) on Sunday May 28, 2000 @12:27PM (#1041695)
    Intel's income in 1999 was $8 billion; that's the important number, not revenue. The $158 million must be compared with the $8 billion, and against that it is much more significant (not to mention, Intel has to "work" to increase the $8 billion, but Rambus does the work to increase the $158 million, so it's likely to be worth more).

    Furthermore, executives get compensated largely through options. Options have a strike price, the price that you pay for the real shares if and when you exercise the option. Your income when you do this is the difference between the share price and the strike price, but the strike price must be, more or less, the price of the stock when the options are granted. So executive compensation is not based on overall profits, but on growth of profits, and here is where the $158 million looms quite large. It is a very significant number to Intel's executives. In the long run... wait, today's Intel executives care less about the long run than Intel's shareholders do.

  • Browse at '1'. All the trolls just disappear.

    /peter
  • One would think that after Tom posted "Voodoo is dead forever" last year next to a paid Nvidia ad on his website, while extolling how his website is "unbiased" nobody would pay attention to this loudmouth hypocrite anymore.

    I like Tom as much as the next guy (which is not much), but Tom is right. Voodoo is dead. They seem to be about one step behind NVIDIA since the Voodoo3. Although, they are better in supporting non-win platforms, NVIDIA hardware is better.

  • Was anyone besides me bothered by Tom's use of the FreeBSD daemon mascot in an article that has absolutely nothing to do with FreeBSD?

    Am I wrong, or doesn't Intel own the USB spec, whereas Apple owns the FireWire name, and the standard isn't owned by anybody? Therefore it'd be in Intel's interest to push their product over the "standard".

    A few years ago, Intel was just a bit player in the chipset market. They captured 100% share by patenting Slot 1, and finally had to license it to competitors a few years later in order to head off an antitrust investigation. With their moves into networking products, graphics chipsets, and licensing a patented memory system, it seems pretty obvious that, aside from the hard drive, Intel wants to own every piece of silicon inside every computer...
    Anandtech has a very different ( and in my opinion saner ) take on Rambus [anandtech.com]. Part 1 [anandtech.com] looks at the reasoning behind Rambus, while part 2 [anandtech.com] gives current performance of RDRAM vs. SDRAM and looks at where things are expected to be a few years down the road.
    I wouldn't think that Intel would be at all interested in selling off the shares it would get from Rambus in the event that it fulfilled its end of the deal... Instead, they'd exercise their warrants and own a minority stake in Rambus... People would buy Intel CPUs, with Intel chipsets, Intel graphics adapters, using Intel memory, with Intel networking products, in their ideal world... Even if their products aren't in fact the *best* on the market, they're making a very concerted effort to own every piece of your computer.
  • It was my understanding that beta was visually superior, but had a few annoyances, such as a single movie requiring more than one tape and I thought the format was very proprietary and thus very costly.

    Rambus certainly sounds bad and I trust tomshardware way more than Intel or Rambus.

    Nano bots would rock. I may never shower again.

    Do you think a company like Intel would want to tell its shareholders that it didn't get $160 million in easy extra profit because it was too lazy to try to promote a new memory standard?

    Do you think Mr Manager X at Intel wouldn't like to earn an extra big bonus for grabbing this opportunity which paid off so well?



    ---
  • Although, they are better in supporting non-win platforms, NVIDIA hardware is better.

    True, 3dfx has OSS Linux drivers, but they are massively inferior to the Windows drivers. The binary-only NVIDIA drivers, on the other hand, are usually within 2% of the speed of the Windows drivers. Remind me again... who has better support?
  • You must have a hefty wallet, then. :-)
    -aardvarko
    webmaster at aardvarko dot com
  • by hernick ( 63550 ) on Sunday May 28, 2000 @03:07PM (#1041706)
    It's an option to purchase 1 million Rambus shares at $10 each.

    Tom's numbers come from the fact that right now, Rambus shares are priced at $168. That means Intel can exercise those options and purchase $168 million worth of shares for only $10 million.

    Now, back in March, Rambus shares were worth $471 at one point. That's $461 million of free money for Intel, had they exercised their options back then.

    Intel is probably hoping that they can drive Rambus to such high prices again, and therefore make huge amounts of money.
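    For anyone checking the math, here is the arithmetic behind those figures as a quick sketch ( the share count and prices are the ones quoted above, not anything I've verified independently ):

        shares = 1_000_000   # Rambus shares covered by the warrant
        strike = 10          # exercise price per share, in dollars

        for market in (168, 471):   # roughly today's price vs. the March peak
            gain = shares * (market - strike)
            print(f"at ${market}/share: ~${gain / 1e6:.0f} million gain on exercise")
        # -> ~$158 million now, ~$461 million had they exercised at the peak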
    No, you probably don't want a FireWire keyboard or mouse (or joystick, etc). It's gross overkill for those applications.

    However, ask yourself the question: "Do I want a USB2 camcorder/disk drive?" These are the applications that Intel seems to be targeting with its marketing materials for USB2, which is a direct swipe at FireWire's market segment.
    --
    Will Dyson
  • nope, just means I have no reason to upgrade...
    Intel is doing what IBM tried to do a few years ago. IBM made a new bus and made it hard for anyone else to use. The industry said goodbye, and we have cheaper computers to show for it. As soon as Rambus chips get cheap we will see Rambus used on low-end machines. 16-bit-wide RAM is only there to make a cheaper motherboard, and why would anyone make three RAM slots a standard for high-end servers? Intel is making business foot the bill. Business doesn't know the difference, as they run software from Microsoft that takes a 20 MB/sec RAID system and puts 2 MB/sec out the network. Maybe someone should tell AMD to talk to Sun and get a real memory subsystem. As for USB, why don't we make keyboards and mice and camcorders use Ethernet and forget about USB, FireWire, Fibre Channel, and every other expensive standard? What happened to the good old days when I could buy a computer for $100 and it was a new computer?
    Intel had nothing to lose, just another DRAM company at their right hand. But I still can't believe how a big, "reliable" company like Intel would do that. But who cares, the Athlon is stronger than the Coppermine ;)
    This is a good article. They eventually come down against using RDRAM for most purposes at the moment, but explain why they think it is a long-term winner.

    Briefly: bandwidth does matter some now, and will matter more. There is a real, detectable gain from moving the RAM clock on any current system from 66 to 100 to 133 MHz, even if you fiddle with the settings to cancel out the latency gain. Forthcoming peripheral technologies (AGP 4x, 66MHz PCI buses, ATA-100) will all place more demand on memory bandwidth. Now, SDRAM has a fairly clear route to about 2.1 or maybe 2.6 GB/s of bandwidth (DDR at up to 166MHz) but no obvious path beyond there.

    DRDRAM can eventually go up to about 3.2 GB/s per bus (800MHz DDR x 16 bits), and does 1.6 GB/s per bus now. This is broadly comparable with SDRAM. On the other hand, fitting two or more SDRAM buses onto a motherboard seems pretty hopeless for general purposes; the track count would be very silly indeed. Intel's 840 chipset already supports 2 RDRAM buses, and in terms of track count it would not be inconceivable to get 4 onto a motherboard. This is the key advantage of RDRAM: lower track counts, allowing multiple buses. I have yet to hear how any SDRAM solution can beat this in the medium term. But that is for the medium term; in the short term, RDRAM is only a good deal if you don't need too much of it and you really need peak bandwidth.
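    A quick sanity check on those peak-bandwidth figures ( peak numbers only, and these are the scenarios described above rather than anything sustained ):

        def peak_gb_s(clock_mhz, bus_bits, double_pumped=True):
            # peak bytes per second = transfers per second * bytes per transfer
            transfers = clock_mhz * 1e6 * (2 if double_pumped else 1)
            return transfers * bus_bits / 8 / 1e9

        print(peak_gb_s(133, 64))   # DDR SDRAM, 64-bit bus             -> ~2.1 GB/s
        print(peak_gb_s(166, 64))   # DDR SDRAM at 166MHz               -> ~2.6-2.7 GB/s
        print(peak_gb_s(400, 16))   # PC800 RDRAM (400MHz, DDR), 16-bit -> 1.6 GB/s
        print(peak_gb_s(800, 16))   # the eventual 800MHz DDR channel   -> 3.2 GB/s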

  • Personal crusade or not, AMD is currently starting to make Intel sweat.

    I think it would be a BAD idea for AMD to make a processor incompatible with x86. Let's see... we have the Alpha and G4 and those already have TREMENDOUS support from Joe Sixpack. (Though, admittedly, the Alpha was never meant to compete with the Pentium.)

    Ironic that you should bring this up, actually. Rumour has it that Intel's upcoming 64-bit chip is NOT going to be x86 compatible, but AMD's is. If anyone has the links to prove/disprove that, please post 'em.
    Do you really think a company like Intel, which does over $35 billion a year in sales, would make a decision with such a huge, long-term impact (i.e. going with Rambus) for a measly $160 million in stock options??? I don't think so. It would take a lot more than that to bribe Intel.
  • Haha, very sad, yet very true.
  • Before somebody marks me down as "redundant", Tom (if you don't know already) has been known to take "corporate bribes" to write positive articles.

    (Read: nVidia)
  • by Raleel ( 30913 ) on Sunday May 28, 2000 @11:15AM (#1041716)
    Actually, my reason was the lack of multiprocessor support. I am irritated at this whole Rambus business because I just put together company systems with Rambus... a whole product line of them. I knew it was expensive. I knew the tactics used (at least in some respects). But the fact of the matter was it did offer an increase in performance on the applications we are using (scientific computing). I also know that 6 months down the road when DDR SDRAM is out, I will probably be dropping the Rambus thing... but of course, Intel will be putting out IA-64 around then and I'll have to evaluate it... and it'll use Rambus or some similar expensive tech.
    The fee is 25c per machine, it's not a per-port fee. And USB2 is completely unsuitable for applications where FireWire is used (such as digital video), because you can't guarantee bandwidth. Also, USB works with hubs, whereas with FireWire every device can be a master/slave (i.e. your camcorder can transfer data directly to an external HD without the data passing through your computer).

    --
  • by duffbeer703 ( 177751 ) on Sunday May 28, 2000 @11:17AM (#1041718)
    To quote Tom:

    Why do I really dislike Rambus? Well, a company that uses questionable benchmarks to prove that their product is better than others, a company that tries to wipe away benchmark data that show the opposite as 'inappropriate' and a company that is trying to buy its forced entry into the market with those kind of tactics is a company that I will never ever trust.


    One would think that after Tom posted "Voodoo is dead forever" last year next to a paid Nvidia ad on his website, while extolling how his website is "unbiased" nobody would pay attention to this loudmouth hypocrite anymore.

    As for the $160 million deal, I do not see what is wrong with it. Companies give incentives to each other all the time, and holding equity in other firms is a great way to cement relationships between companies. Intel takes in $37.4 BILLION in sales every year, and owns major stakes in dozens of high-tech companies.

    I do not see what is predatory or wrong with this.

  • Well, if people "clearly" were able to see AMD's processors as "superior", then they would buy them, no? In fact, there are 2 reasons why people go out and buy Intel: a) Excellent grip on publicity and brand-name penetration, b) people are NOT technically literate. What does that mean? What's clearly something to you, is most likely muddy and obtrusificated to non-techies. You want other people to buy AMD? Then explain to them why they should. I'll bet you have a reasonably hard time... Not everyone's a chip-wiz ;)

    Chris
  • Alpha has had this probably from the beginning (at least since '94, when I saw this in an Alpha ISA document). I also recall seeing prefetch instructions on at least one VLIW design. Do (some) L2 caches enable the CPU to issue line/block prefetch commands?
  • I'm no fan of Intel, but Tom's accusations seem like a stretch to me.

    In 1999, Intel made $29 billion in revenue [intc.com]. It doesn't seem reasonable that Intel would gamble such a large part of its reputation on a shoddy product to get a piddly $158 million [tomshardware.com] (well, I guess that's piddly). They probably spent more than that on advertising and marketing in 1999.

  • Pabst is a doctor of medicine, see page

    http://www.tomshardware.com/site/personal.html

    While I don't agree with the angst in your statements, he does seem to have it "in" for Intel/3dfx and others that don't give him exclusive access to new boards for testing.

    just a thought.

    MRo
    Yes, that's right. If you log onto the www.insanehardware.com website in Australia, you'll see that Intel has made available to them Intel i820 motherboards for sale to the public (for A$350, about US$200, including Australia-wide delivery), and each board comes with a free 128MB Rambus module. It seems strange that they'd do this, considering that recently I noticed 128MB Rambus modules selling for A$1000 just by themselves. Now I know why such a 'good' deal is on offer.
  • by be-fan ( 61476 ) on Sunday May 28, 2000 @11:23AM (#1041724)
    Yep, you heard right. RAMBUS RAM is crappy. Sure, it runs at 800 MHz, but at what cost? It only has a 16-bit data bus, which greatly affects latency. As Tom has pointed out before on TomsHardware.com, high-powered 3D apps (read: games) use comparatively little bandwidth, but need low latency. Even one of the most demanding of these apps (read: Quake) uses only a few hundred MB/s of bandwidth. As such, DDR-SDRAM is a much better choice, because
    A) It provides much lower latency.
    B) It is much cheaper.
    C) It has just as much bandwidth.
    There is a reason the latest GeForce cards aren't using RDRAM (aside from the cost). DDR-SDRAM is a much better memory technology. The only real reason that RDRAM has made it even this far is that Intel wants a little piece of the memory game.
  • Uhh, correct me if I'm wrong, but if you have N64 games, don't you already have an N64?

    I, like many others, have limited space under the TV. It is crammed full of things: a DVD player, AC-3/DTS decoder, satellite receiver, VCR, and game machine. I, like many others, have a significant other who likes a nice-looking room more than a nifty new game machine. I like having a (happy) significant other much more than having a new game machine, not to mention an OLD game machine.

    So I got rid of the PlayStation (really, I put it in a box in the basement) when I got my Dreamcast.

    I'll bet a lot of people with N64s would like to have a Dolphin and not have to box up their N64. Of course, I would like to have a PSX2 and not have to box up my Dreamcast too. We don't all get what we want.

    Plus, I figure it is a minor but positive selling point to have an established set of games, even if many aren't so hot, and even if none push the machine.

    And back to the main point: RAMBUS is a good memory technology for systems that can tolerate fairly high latency and need a whole lot of bandwidth for cheap. The "for cheap" part is important; if you have to pay too much for RAMBUS, you can buy more common RAM technologies and use a wider bus (note, I'm assuming you are designing a system; this won't work if you went out and bought a motherboard, since buying 4 SDRAM sticks won't magically give you 4x the bandwidth if the motherboard and chipset aren't designed to work that way).

    The high-end technical workstation market has little need of RAMBUS; it can design 512-bit-wide paths to memory. Sure, they cost a ton of money, but that's a big part of why the $7000 Alpha motherboards cost $7000 (and the whole reason you need to buy RAM in 4-stick lots, er, maybe 8). Except for the low-end Alpha motherboards, which cost $2000 or $3000 because they don't sell enough to earn back the NRE at $120 like PC motherboards.

    Note that I say little need, not no need. It looks like the Alpha 21364 has RAMBUS controllers on chip, which Compaq claims reduces latency significantly. Hopefully they are right, because I like the Alpha, and it would be unpleasant if it started sucking.

    RAMBUS isn't a very good technology for PC main memory because bandwidth to main memory isn't as big an issue as the latency. It might be good memory for a 3D board's texture storage, since bandwidth matters a lot there. Then again, it may not be, since new 3D boards store vertex info there, which may be far more latency-sensitive.

    But RAMBUS makes a decent memory for a game machine designed 5 years ago (the N64). There was no SDRAM to challenge RDRAM's bandwidth. In fact, RDRAM had competitive latency back then, even. The pseudo-caching effect of RDRAM's sense banks let a no-L2-cache RDRAM system compete well with a mid-size (for the year) L2-cache system with EDO DRAMs. Great choice.

    Beats me why the Dolphin is slated to use it. Maybe they didn't believe in SDRAM? Maybe they know something about RDRAM we don't. Maybe they are on crack. Chances are we'll find out in less than a year.

    As far as I can tell, cheap DDR SDRAM leaves RDRAM as a solution in search of a problem, but that doesn't mean RDRAM was always a steaming pile. In 1992 (which is when I recall the first RDRAM demo systems) it was really promising, and well ahead of the pack. It just hasn't been getting faster quickly enough, and now the world of cheap RAM has caught up.

  • Bus width has nothing to do with latency - think about it.

    You are right that a narrow bus width doesn't cause high latency in and of itself, and right that it isn't the source of RDRAM's latency. But it isn't true that bus width has nothing to do with latency.

    If you have an otherwise identical 16-bit and 32-bit bus, and you want to fetch a 128-bit cache line from 32-bit address X, the 16-bit bus has to take two clocks to transmit address X and 8 more to transmit the data, or 10 clocks. The 32-bit bus needs only one cycle to transmit the address and 4 more for the data, a total of 5 clocks, half the time of the 16-bit bus. As you said, "Latency is the amount of time it takes from the issue of a memory read until it answers." (I would count the time to issue the read as well.)
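    A minimal illustration of that count, using the same simplified bus model as above ( one address phase plus the data phase, nothing else ):

        def clocks_to_fetch(bus_bits, addr_bits=32, line_bits=128):
            # clocks to send the address plus clocks to return the cache line
            return addr_bits // bus_bits + line_bits // bus_bits

        print(clocks_to_fetch(16))   # 2 address clocks + 8 data clocks = 10
        print(clocks_to_fetch(32))   # 1 address clock  + 4 data clocks = 5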

    The latency in RAMBUS (which I'll freely admit I don't know the numbers for) probably has a lot more to do with all of the transactions and shit that RAMBUS does.

    RDRAM transaction overhead is reasonably competitive with EDO RAM RAS and CAS timings (RDRAM takes 4 clocks at 400MHz to 800MHz to start a transaction; EDO takes two, but at a much lower speed; SDRAM I think also only takes two, but with DDR SDRAM it could be 2 clocks at 266MHz, which is almost as fast as 600MHz RDRAM).

    The thing that kills RDRAM seems to be the time it takes the motherboard chipset to set up the transaction, the low number of banks the motherboard chipset keeps active (each RDRAM can usefully service transactions from 2 sense banks), and, most important of all, the time the RDRAM itself takes to light up a sense bank.

    With 7-1-1-1 PC100 SDRAM it takes 7 clocks at 100MHz (70ns?) to read the first 64 bits back from the SDRAM. Always. No matter which 64 bits you want. Then one clock (10ns) for each of the following 3 64-bit words (I think bursts can be longer, but I don't know of any PC chipset that does long bursts).

    With 800MHz RDRAM it takes 4 800MHz cycles (~6ns? about 10x PC100) to start the transaction. Then, if the data is in one of the RDRAM's two sense amps, the data comes back 16 bits per cycle (a little under 4ns per 64-bit word, about 2x the speed of PC100). However, if the data is not in one of the sense amps, the data that is in the amp will be written back (if it has been modified), and then that bank will be lit up. This can take a whole lot longer than PC100's 70ns. A whole whole lot longer. Like 400ns. If you have lots of hits in the same sense amp (like you would on a very small-cache, or no-cache, machine) you get lots more speed from RDRAM. If you get few hits on the sense amp, you suck. Hard.

    Regrettably, that 400ns number is old (as are my SDRAM timings). I don't know if the newer RDRAM has improved things, or has more sense amps, or a hidden write-back. I know that DDR SDRAM comes much closer to RDRAM's peak timings, and that DDR SDRAM can get those numbers consistently, while RDRAM needs a "good" access pattern, otherwise its throughput falls to a few percent of peak.
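    To put those numbers together, here is a back-of-the-envelope comparison of the time to fetch a 4x64-bit line ( every figure is the old, rough estimate from this post, not a datasheet value ):

        SDRAM_LINE_NS = 70 + 3 * 10    # 7-1-1-1 PC100: ~100ns per line, every time

        RDRAM_SETUP_NS = 6             # ~4 cycles at 800MHz to start a transaction
        RDRAM_WORD_NS = 4              # ~4ns per 64-bit word from a lit sense amp
        RDRAM_MISS_NS = 400            # rough penalty when the row isn't lit

        def rdram_line_ns(hit):
            base = RDRAM_SETUP_NS + 4 * RDRAM_WORD_NS
            return base if hit else base + RDRAM_MISS_NS

        for hit_rate in (1.0, 0.9, 0.8, 0.5):
            avg = hit_rate * rdram_line_ns(True) + (1 - hit_rate) * rdram_line_ns(False)
            print(f"sense-amp hit rate {hit_rate:.0%}: RDRAM ~{avg:.0f}ns vs SDRAM {SDRAM_LINE_NS}ns")

    With these ( admittedly stale ) numbers, RDRAM only beats PC100 when roughly 80% or more of line fetches hit an already-open sense bank, which is exactly the "good access pattern" point.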

    The nice thing is that you can overlap (well, I hope) them to hide some of it.

    I dunno if you can overlap transactions to the same RDRAM chip (I think you can), but you can overlap them between RDRAMs. Of course, that requires either a CPU that can do multiple outstanding memory references, or multiple CPUs (or other sources of memory traffic). You could always interleave, but that seems kind of pointless given the high bandwidth once the sense amp is lit!

    That doesn't mean that RAMBUS doesn't suck, but make sure you don't bash it for the wrong reasons.

    Total agreement. RDRAM is cool, but it just doesn't solve a problem most people have.

  • I guess PR doesn't count for much these days. Intel didn't care about the performance hits they would take using crappy Rambus? I mean, I would think they would get more in the long run if they were able to, say, beat AMD. Then again, $158 Million is a whole hell of a lot of money. Maybe they didn't know Rambus sucked so bad? In any case, I'm sure they had teams of analysts debating which would be best in the long run, and IANA Economist.

    But in this light, I don't think it's so surprising why Intel is pushing Rambus so hard. It's an all-or-nothing proposition -- if they back out now they have the worst of both worlds. They don't get the money from Rambus but DO get all the egg on their face of supporting a crap technology. It would be kind of funny if Rambus was Intel's downfall, but I somehow doubt that will happen.
