Why Dr. Tom Dislikes Rambus, Inc.
homerj79 writes: "The good Dr. Thomas Pabst has posted his feelings towards Rambus, Inc. and why he, and his site, are so critical of the company. Here's a bit from the article I found interesting: 'When Intel 'decided' to go for Rambus technology some three years ago, it wasn't out of pure belief in the technology and certainly not just 'for the good of its customers', but simply because they got an offer they couldn't refuse. Back then Rambus authorized a contingency warrant for 1 million shares of its stock to Intel, exercisable at only $10 a share, in case Chipzilla ships at least 20% of its chipsets with RDRAM support in back-to-back quarters. As of today Intel could make some nifty 158 million dollars once it fulfills the goal.' It's a good read for people thinking about investing in RMBS. Something seemed fishy over at Rambus, and now I know what it is."
A simpler answer: (un)aligned interests (Score:1)
* Intel has a long tradition of moving the industry to a place where it benefits Intel while harming the consumer. How about Slot 1/2? It drove up the cost of producing machines, with scant benefit to the industry. It did, however, throw up a new barrier to AMD and other cloners. Now that that advantage is gone, Intel has returned to sockets.
* Although $158 mm is chump change to Intel as a corp, who knows how many Rambus options found their way to the officers of the company? Imagine being in a situation where you can throw up barriers to your competition, while lining your own pockets. Even though it seems to be backfiring, there are probably many folks at Intel that will get very rich if they succeed at cramming this down the throats of the many clueless out there.
Intel knows that bringing out a successor to the BX chipset would benefit the consumer the most, but so what? They still control the industry. Itanium is another attempt at moving the industry somewhere else, but it will ultimately fail.
When engineering is left behind (Score:3)
The Pentium series may have started life as a traffic light controller, but Grove and Moore demanded a lot from their engineering team and got it.
Their relationship with M$ did get pretty cosy, but it was never the perfect marriage that the word Wintel would suggest (see Inside Intel: http://www.webreviews.com/9711/inside_intel.html [webreviews.com]).
However, as the company grew they seem to have inevitably lost touch with their engineering roots. Pressure from other manufacturers has always hurt them badly.
The Register [theregister.co.uk] has a good take on it here: http://www.theregister.co.uk/000525-000009.html
Rambus is a brain-dead attempt at fencing people into non-commodity memory. Especially ironic as Intel has been burned by memory production once already (when the market went commodity). I'm sure everybody (except Rambus Inc.) is pleased that it looks like it's heading towards a spectacular failure, since it would drive the price of memory up for no good engineering reason.
It's an expensive foray for Intel and co., and probably one that we'll end up paying for one way or another.
I hope they learn a good lesson and go back to chasing MHz.
Hard To Believe (Score:1)
Intel's secret: RAMBUS and MERCED (Score:1)
In this spirit, Intel backed Rambus, knowing full well that it would be slower to start. The idea is to use their industry control to fill the market with Rambus. When Merced comes out, the base is already laid for high bandwidth high latency memory. With a chip where latency is no longer an issue, Intel would be king.
I doubt it will work. Rambus has had technical problems in addition to performance problems. Intel is no longer unquestioned king, so they cannot force the market like they originally planned. When Merced comes out, and Rambus is not ready, they will fall behind.
hey, I just realized something (Score:1)
Re:offtopic post (Score:1)
Re:Intel and USB 2 vs FireWire (Score:1)
ie. Do I want a FireWire keyboard? Do I want a FireWire camcorder on the same bus as my mouse?
-
Ekapshi
Re:$158-Million Dollar Conspiracy (Score:1)
Intel's money is normal, and no guarantee (Score:1)
Hardware equivalent of open source? (Score:1)
but their management team has shown themselves to be creative and willing to put their balls on the chopping block
I'm not sure where you've got this idea. RAMBUS has only shown creativity in the bigger and bigger lies they can manufacture to show that somehow RAMBUS is faster, cheaper or anything remotely better than DDR-SDRAM. The management team at RAMBUS is worse than the PR team at Microsoft, in my opinion, for their endless stream of FUD, misinformation and blatant lies.
think about why there is not a HARDWARE equivalent of open source software
If there is an open source solution in memory then it is most certainly not RAMBUS, but DDR-SDRAM. The DDR spec was created openly for anyone with the manufacturing capabilities to use, without the crippling licensing fees charged by RAMBUS. The fact that it is smaller, cooler, cheaper and faster than the equivalent RAMBUS modules, and yet somehow RAMBUS and Intel are still peddling their inferior technology, smacks very much of conspiracy and not of the creativity you suggest.
John Wiltshire
Teehee. (Score:1)
Re:Is it just me? (Score:1)
As far as I can tell, this falls under clause B on this page [commnet.edu] but it's a fuzzy rule so I'm not sure.
--
Re:offtopic post (Score:1)
So, is slashdot against censorship or not? That's essentially what bitchslapping amounts to. Few people set their thresholds to -1.
For more information, check out http://slashdot.org/comments.pl?sid= moderation [slashdot.org]
What's the point? (Score:1)
Purpose of Rambus (Score:1)
Point one. Intel has had a very long-term strategy ever since the Pentium Pro. The simple fact is that the deeper a CPU pipeline is, the higher you can push the clock and the more efficiently you can crunch numbers. The only other alternative is to make redundant CPU components and a seriously more complex internal bus (you'd have to have more parallel ports to the register set, plus more complex scheduling). Whereas, if you had to perform 50 independent calculations, and you had 50 fully pipelined stages, you could chuck out an instruction per clock. And at 1,500MHz, that's pretty fscking sweet. This is completely the path of the Alpha chip. You get a hell of a lot more bang for your buck, since it takes a tiny bit more silicon to add an additional stage (buffer plus redundant operations), while a parallel stage requires 100% redundancy of the functional unit plus additional bus interconnections, a deeper, more complex and less efficient scheduler, and a heavier load on the register set (all of which degrade overall performance unless you can achieve decent instruction-level parallelism).
Unfortunately, pipelining has its own can of worms. First of all, data dependency can really kill you if the pipe is too deep. Secondly, you need higher speeds to achieve the same throughput as a wider CPU (a 750MHz single but deep pipeline CPU should be similar to a 500MHz short double pipe). Higher speeds mean greater sensitivity to memory latency. Thus a 500MHz CPU will feel memory latency effects less than a 1,000MHz CPU does. So if you had a 4-way 500MHz machine, it would probably overtake the single-way 1,000MHz machine simply because of memory latency.
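To put rough numbers on that last claim, here is a toy calculation (mine, not the original poster's; the 70ns memory latency, 1% miss rate and peak IPC figures are illustrative assumptions):

    # Rough model: a deep-pipeline high-clock CPU vs a wider, lower-clock one,
    # when both stall on the same fixed-time memory. Numbers are illustrative.
    MEM_LATENCY_NS = 70.0   # assumed time for a cache-missing memory access
    MISS_RATE = 0.01        # assumed fraction of instructions that miss cache

    def instr_per_second(clock_hz, ipc_peak):
        """Instructions/second when every miss stalls the pipe for MEM_LATENCY_NS."""
        cycle_ns = 1e9 / clock_hz
        stall_cycles = MISS_RATE * (MEM_LATENCY_NS / cycle_ns)
        cpi = 1.0 / ipc_peak + stall_cycles
        return clock_hz / cpi

    deep_fast = instr_per_second(1_000e6, ipc_peak=1.0)  # 1,000MHz, deep single pipe
    wide_slow = instr_per_second(500e6, ipc_peak=2.0)    # 500MHz, short double pipe
    print(f"1,000MHz deep pipe  : {deep_fast / 1e6:6.0f} MIPS")
    print(f"  500MHz double pipe: {wide_slow / 1e6:6.0f} MIPS")
    # The fixed 70ns miss costs 70 cycles at 1,000MHz but only 35 at 500MHz, so
    # with these (made-up) assumptions the two come out roughly even despite the
    # clock difference - which is the point about faster clocks feeling latency harder.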
Even though Intel has designed n-way machines, they have been focusing on widening the pipeline (in both CPU and graphics chips... even though they've reduced/dropped support for that market). Wide pipes in graphics are especially useful because you are working on seriously complex operations.
In order to alleviate this problem, Intel has made many ventures into the memory market. They started with their Pentium Pro by using some seriously fast L2 cache. Sadly this was ridiculously expensive (over $2,000 for a 1Meg chip). Just before this, we were seeing motherboards with 2, 4 or even 8 Meg of L2 cache. I personally was hoping that we would eventually see 64 or 128Meg of L2 cache and could finally replace DRAM with SRAM and eliminate most of the decades-old limitations. We would have a much simpler system, and the power, heat and silicon costs would eventually be worked away, simply because of economies of scale. Unfortunately the PPro's cache was too hot, power-hungry, expensive, and low yield / volume. It was killed in favor of the P-II's significantly simpler (and separately purchased) L2 cache. This direction virtually killed the motherboard-based cache, and thus my hopes of a replacement of DRAM any time soon. Soon Celerons started seeing on-die implementations, and now we're seeing Coppermines taking this approach. This approach (which we saw years ago with the Alpha's 92 or so K of L2 cache on-die) basically eliminates the viability of 64+ Meg of memory on CPUs, at least until they reach
In any case, caches can only temporarily hide the latencies of extremely slow memory. Believe it or not, DRAM core speed has been stuck at around 16MHz for the past 10 years with barely any fundamental advancement. Sure, there are some neat tricks like making the pipe really wide (256 bits read out in 4 sequential bursts), having intelligently refreshed rows, and allowing nearly 2 simultaneously active rows for fast column access, but if you look at the timing specs of these chips, they tend to have very high latency for initial row accesses. I know that there are some premium chips with significantly better worst-case timings (namely those used in graphics cards), but we don't tend to find these in mainstream systems.
Even DDR-SDRAM only finds ways to take active rows (which I'm going to guess are more optimized, and possibly smaller, to allow shorter response times) and make them more rapidly accessible. The simplest (and stupidest) form of DDR-SDRAM would take the exact same amount of time to perform a column fetch, but transfer the 256-bit block twice as fast (on both rising and falling clock edges, which seems to be all the rage nowadays). Basically, I am doubting that there is any radically new technology here that addresses the core limitation of DRAM.
More expensive systems (almost always non-Intel) make use of interleaved memory, basically taking 2 or 4 parallel and independent memory pipes, each performing its own memory fetch. Usually these are odd/even type interleaves. In this way you can have twice the bandwidth and half the latency on average, if you can maintain a non-empty queue of memory requests (which isn't hard to do at all with modern n-way out-of-order CPUs). This works especially well when you have multiple CPUs.
In fact, the larger the memory queue, the better the opportunity for the memory controller to perform optimized memory access schedules, much like a SCSI controller can organize disk accesses in an elevator method (rocking the arm left and right). If you were able to have 4, 8 or 16 independent memory devices, then you would be able to truly achieve up to 16 times the bandwidth. An interesting point is that DDR-SDRAM focuses mainly on sending large chunks of contiguous memory, something that only really ever happens on cache fills, and those tend to be 256 bits wide. Rarely does the next memory access come immediately afterwards (unless you are performing DMA operations). In a 16-way interleaved memory structure, the bandwidth of each channel can be slower (especially if the external IO is bandwidth-limited) so long as you can initiate a new transaction while transferring the data of the old one. Essentially this is the art of pipelining. Pipelined L2 cache was the first memory to take this approach. SDRAM soon followed (but not to the extent we are talking about here).
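As a rough illustration of the interleaving and elevator-scheduling ideas above, here is a toy sketch (the channel count, row size, and addresses are made-up values; a real controller is far more involved):

    # Toy sketch: spread requests across independent memory channels
    # (interleaving) and reorder each channel's queue so accesses to the same
    # row get serviced together, elevator-style.
    NUM_CHANNELS = 4
    ROW_BYTES = 2048  # assumed DRAM row (page) size

    def channel_of(addr):
        # Simple interleave: consecutive rows land on different channels.
        return (addr // ROW_BYTES) % NUM_CHANNELS

    def schedule(requests):
        """Group requests per channel, then sort each queue by row to cut row thrash."""
        queues = {c: [] for c in range(NUM_CHANNELS)}
        for addr in requests:
            queues[channel_of(addr)].append(addr)
        for c in queues:
            queues[c].sort(key=lambda a: a // ROW_BYTES)  # elevator-ish ordering
        return queues

    pending = [0x1F400, 0x0800, 0x1F410, 0x4000, 0x0810, 0x6000]
    for chan, q in schedule(pending).items():
        print(f"channel {chan}: {[hex(a) for a in q]}")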
The situation for Intel is as follows. Higher-speed CPUs are going to be memory starved. Very rarely do we find tightly organized CPU instructions and data sets that fully take advantage of L2/L3 caches. The standard today is to use generic, ready-made packages such as MFC, which is _heavily_ bloated. In fact, most C++ code is going to be non-spatially optimized. As DLLs grow in size, modularity generally increases, and context switching becomes more and more prevalent, caches will be less and less effective. Additionally, Intel/HP have worked hard (in their IA64) to allow the queuing of memory requests in the CPU (and graphics chips). Their ideal setup will be a quad-processor configuration with one or more graphics processors, all contending for memory, each device queuing up literally dozens of memory operations (not to mention the DMA access of streaming audio, video, and network). Single-channel DDR-SDRAM, or even quad, octuple, etc. memory isn't going to sustain such a load. Two independent sources will most likely be on separate rows, and thus the memory controller will have to attempt to minimize row thrashing by reordering requests. But there will still be a serious lag when these thrashes occur.
There is no reason that I can think of that would prevent the use of interleaved DDR-SDRAM modules, although you'd need a chipset manufacturer (other than Intel) willing to risk making such an expensive device.
Intel's solution is RDRAM. Here you can have multiple channels, AND, to my knowledge, you can queue multiple commands on the same channel to potentially multiple devices. Now, this is the theory. From the above, if you have a fixed bandwidth on the external IO, then you can divide the total inner bandwidth between each channel. You can do this by either lowering the frequency of internal channels, or by reducing the bus width. Lowering the frequency would slow simple commands being sent to each device (thus increasing latency), so it's best to reduce the bus width. This also helps in reducing the number of wires on the motherboard; 8 channels at 64 bits each takes up a lot of space. Now RAMBUS is similar in theory to PCI or SCSI: you have a master and a slave device, where commands and data are sent in packets. Theoretically, a channel could have multiple read/write(+data) packets sent, and the RDRAM devices would process them in a pipeline. Multiple RDRAM chips on a channel (the modules are called RIMMs) would act like interleaving, except cheaper (the controller only deals with 2 or 4 or 8 channels, but you could have #channels x #chips/channel interleaved memory segments. You wouldn't be able to expand #channels for a given motherboard (e.g. chipset), but you could easily expand the number of chips on a channel, unlike [DDR-]SDRAM where you have one channel, and one active chip on that channel). So, no matter how fast the external bus is, you can handle multi-CPU configurations, deep out-of-order memory operations, and especially IA64's super-deep memory operations in a snap.
I seriously believe that these were the fantasized goals of Intel when they invested in Rambus. I do not know how far along RDRAM was when Intel got involved (namely, did they know how poor the system would be at the outset?). Intel most assuredly prototyped their IA64 with various memory models (including a hypothetical high-latency, infinite-concurrency memory model) and also against current SDRAM technologies. I'm sure that they learned of their serious weakness with SDRAM. They must have been desperate to find a new solution. RDRAM was the closest that they could find, undoubtedly for the above reasons. Rambus must have foreseen some of their limitations and were thus desperate to get market share. They must have known that they weren't going to survive unless they had high volume. New memory that completely lacks compatibility is not going to survive unless it's adopted. Thus, their tactics are understandable. Intel's tactics seem fair when you look at IA64's needs. In order to ease the release, they had to get the market comfortable with the newer memory. If they could succeed in replacing SDRAM with RDRAM, then IA64 migration would be seamless. If not, the many _many_ limitations of IA64 would become painfully apparent, and they could potentially lose everything to AMD or other post-RISC CPUs. They are betting the farm on IA64, and must do everything in their power to maintain market-share acceptance.
IA64 is novel; it is bleeding-edge technology. It is intelligent engineering, since they didn't just make VLIW, or RISC, but instead "engineered" a solution from what seems to be most of the current technologies. But none of this will mean jack if it doesn't actually perform faster than existing technologies. The one saving grace is that it's designed with headroom, so it has much opportunity for improvement, whereas x86 and simple RISC (save Alpha) really have to pull rabbits out of their asses to increase performance without doubling silicon. Alpha, for example, has compilation hints in its instructions. They've survived for quite a while without OOE; only recently have they taken to this horse. Thus they still have some growth left in their current design.
So, what of Rambus? The theory sounds great. It simply has to work, for Intel's sake. But the reality is less dreamy. For power-consumption purposes you can't seem to have too many (if even more than one) active chips on a channel, and thus you can look at this as row thrashing taken to an extreme. I'm not even sure if you can request multiple addresses from completely independent chips on a channel. Thus at best you are getting the effect of interleaving 2 or 4 channels, but the inter-chip thrashing could more than make up for it. Additionally, Tom has pointed out some serious bus latencies which could potentially hurt performance badly. These latencies wouldn't be so bad at all if you could have multiple chips operating concurrently (again, I'm not sure if this is possible), but single-chip operation seriously reduces the overall benefit of 2- or 4-channel operation.
Now, about benchmarks. Tom has basically condemned Rambus because of marginal improvements for extreme additional expense. He's also evaluated this whole stock-sharing policy from a short-sighted-greed point of view. I think I've described the necessity of Intel's involvement and the desperation of Rambus to gain acceptance. Economies of scale regarding a radically new design are going to be expensive initially. Hell, Intel has almost always sold its state of the art at extreme prices, which [usually] eventually trickle down to the mainstream at reasonable prices. The only evil deed I see here is Intel's insistence on acceptance despite the desires of the general populace. We like the prices of PCs now. We're dealing with ever-more-expensive software (ironically with lesser quality and usefulness) in addition to fluctuations in RAM prices (in addition to other components such as DVDs). A mandatory doubling in system price isn't going to sit well with us, and AMD will be all the happier to take up the slack. If Intel ignores this, then they'll be sorry (their marketing boys need to go back to economics school for some basic supply/demand and the effect of substitution goods). Things may look bleak, but competition is doing very well (government even seems to be taking care of the bad software boys... well, sorta).
And lastly, about the benchmarks. Well, we're talking about software (and hardware) that barely takes advantage of the RDRAM. Remember, the theoretical point of this new system was to sustain high bandwidth WHEN you have a deep memory-request queue. Many highly optimized programs are going to try to fit in cache and might even organize large-data accesses in such a way that they do not split evenly across RDRAM channels. Thus a single application (namely Quake) will most likely not gain too great an advantage from RDRAM (though I do see improvements). BUT, when you multi-task, and more importantly, when you have multiple CPUs, or even when you migrate to IA64, THEN you will find incredible benefits. That's great and all, but what if I'm looking for a gaming platform? I want top-of-the-line, but my only choice seems to be RDRAM, and that's just not cost-effective. Well, right now you seem to be SOL unless you're open to AMD. But what I see is that the new IA32 CPUs will have wider memory fetch queues. They have slowly adopted multiple outstanding memory requests (I believe in one of their PII lines), and I believe that they still have some room for advancement in this technology in the IA32 line. From what I've seen, IA32 still has favor with Intel (they don't exactly want to give IA32 to AMD on a silver plate, now do they?).
More importantly, RDRAM is, in my unsubstantiated opinion, slated for servers which farm out many concurrent services (web servers, for example). I don't know if there are many good benchmarks out there for this. An Apache + DHTML test bed would probably be the best test for an RDRAM system. My guess is that it will excel over SDRAM. File servers, too, should fit nicely.
My perspective is that RDRAM is a bad buy right now unless you absolutely need every ounce of performance that you can get. It is NOT currently for single-app configurations. This would probably rule out the Windows 98 market, and much of the NT market (minus the *cough* *cough* NT-web-server, Back Orifice market). Certain Linux servers should work well also.
RDRAM is here, and I think it has staying power. It _will_ get cheaper; it will be shoved down our throats. But more importantly, it will adapt much better to future CPU architectures than existing designs. DDR-SDRAM has a chance with interleaved memory, as I said, but I'm doubting the feasibility (5 years down the road, the sheer number of pins of dual DDR-SDRAM might bring the costs beyond that of 4-channel RDRAM).
In conclusion: Tom needs to release some of that aggression by actually playing Quake instead of benchmarking it. He needs to be a little more open-minded (and actually attempt to argue FOR the other side every once in a millennium, or at least take on their perspective). This goes for many of you techno-journalists. You do great work, but your minds can be like long narrow tunnels. On the outside, capitalists seem cold, greedy and generally evil. And in many instances, they are (blindly following economic models and acting out Darwinian barbarism on their competition, fed by the stock-driven need for unrelenting market growth). But sometimes evil alliances (such as WinTel, RamTel, MSac, etc.) are driven by the necessity of survival, and you have to be open-minded enough to see their benefits over personal disgust.
-Michael
Re:Remember Betamax? (Score:1)
Beta (Sony) lost out in the VCR market for the same reasons that Apple lost out in the PC market. Neither of them would license their technology. Neither company was content with owning the standard; they wanted to own the entire market. The developers of VHS (Panasonic and JVC, I believe) licensed the standard to anyone who wanted it. The result? VHS was cheaper, and Beta lost out. Sony did eventually license Beta, but it was much too late.
Cheers,
Bun
Re:Remember Betamax? (Score:3)
Your Beta analogy is more apt than you know.
The reason Beta failed was that Sony tried to push Beta forward alone. They didn't want to share the VCR market with anyone and wanted to control the standards.
Unfortunately, everybody jumped onto the VHS bandwagon (and basically told Sony where to stick their licensing fees) and the inferior VCR format prevailed.
Right now, while RDRAM is a very forward-thinking step, its usefulness compared to a well-tuned SDRAM system is next to negligible.
RAMBUS and Intel keep spouting off about how future generations of RDRAM will be more powerful. Well, that's in the future. Right now, RDRAM is an expensive, proprietary, unnecessary boondoggle.
Some day we WILL have to go to a serial memory device like RAMBUS. But that day hasn't come yet.
Chas - The one, the only.
THANK GOD!!!
Re:When engineering is left behind (Score:1)
All institutions eventually fall, but while they exist they can cause as much harm as good. But heck, what technology doesn't? (In a broad sense, so no nit-picking "what about X or Y" comments !-)
Re:Just like Intel (Score:1)
The Alpha isn't priced for domestic use.
And the G4? Dunno, they don't sell them at my local supplier.
A non-x86 AMD chip that ran Linux might be a nice product and earn AMD some respect from techies.
They've obviously got some skilled people.
You'd think a bit of pride might make them do such a thing, for the cavalier hell of it!
Re:A different take on Rambus (Score:1)
I am an ASIC (chipset) designer by trade, and the only reason we ever looked at Rambus parts was because a high-pincount package (anything fancier than Intel's 256/272-pin BGA or whatever the current BGA-du-jour is) costs a lot. Trying to get a ton of memory bandwidth using SDR or DDR SDRAM means using lots of pins, which in turn burn a lot of power and require a lot of additional power and ground pins to support them.
We ended up using SDR SDRAM for our parts, but that decision was made almost 24 months ago, and we were worried about constrained availability of C-RDRAM or D-RDRAM. Guess we were right about that one.
RDRAM is not the antichrist, everyone. It's just ridiculously expensive at the moment. So are 1GHz pentiums and SRAM-based disk drives, but no one complains about those facts. The price _will_ come down. Will DDR or QDR SDRAM already own the market by that point? Who knows. Just don't count out RDRAM, since the pincount requirements are a very important part of optimizing overall system costs.
Did you not read the articles on his site? (Score:1)
Obviously not. So, to sum up: RAMBUS hits a maximum of 1.6 GB per second across a 400MHz bus (what is being called 800MHz RDRAM). DDR-SDRAM can hit 2.1 GB per second RIGHT NOW on a 133MHz FSB. Some of the upcoming SDRAM technologies will hit over 6 GB per second.
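For what it's worth, those peak figures fall straight out of clock x width x transfers-per-clock; a quick sanity check (peak numbers only, latency is another story):

    # Quick check of the peak-bandwidth numbers quoted above.
    def peak_gb_per_s(clock_hz, bus_bits, transfers_per_clock):
        return clock_hz * transfers_per_clock * (bus_bits / 8) / 1e9

    # "800MHz" RDRAM: a 400MHz channel, 16 bits wide, data on both clock edges.
    rdram = peak_gb_per_s(400e6, 16, 2)    # -> 1.6 GB/s
    # DDR-SDRAM on a 133MHz bus, 64 bits wide, both edges.
    ddr133 = peak_gb_per_s(133e6, 64, 2)   # -> ~2.1 GB/s
    print(f"PC800 RDRAM peak: {rdram:.1f} GB/s")
    print(f"DDR-133 peak    : {ddr133:.1f} GB/s")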
The problem isn't even the damn bandwidth, but the latency. RAMBUS RAM is slower than all crap to get started, and that absolutely kills it. Right now, the fastest RAMBUS RAM is actually SLOWER than what a 133MHz FSB will get you out of SDRAM, and as long as RAMBUS sticks with high burst rate and high latency, it will continue to get worse.
Why don't you go look it up? Just plop "RAMBUS" in his search engine.
Fix your app, or fix your hardware. (Score:1)
So what if Linux runs on a trash-bucket PC; that doesn't mean you put critical stuff on that box.
If you throw absolute shit at the kernel it might keep running, but it'll probably crash. Try running a VAX (cluster) with flaky CPU(s) and see how far you get (clue: not very).
Re:Larry who? (Score:1)
so many people, so little brain
duh
Re:another article (Score:1)
Re:Considering its going to cost Intel... (Score:1)
Ironically, they are willing to trade you the buggy SDRAM one for an RDRAM one with 128MB of RAM.
Re:Ist yoost me? (Score:2)
Do you care if he's German? And he speaks English and German and geek better than you. Actually, I think his English is very good: the subtle nuance that he's a little bitchy comes through loud and clear. I think he and his site do a terrific job, though.
Why the money doesn't matter to intel (Score:3)
Re:Now he's a doctor? (Score:1)
All we need now is someone with a PhD in computer science reviewing medical treatments.
Re:another article (Score:2)
IMHO, as per
J:)
USB v2 and USB v1 on the same bus? No way. (Score:1)
USB 2: 400Mbps (not here yet)
You would put them both on the same bus? That's insane.
IEEE1394 (FireWire): 100-400Mbps now.
Apple changed the licensing from per port years ago.
I can already buy FireWire stuff. All the digital video cameras come with FireWire so why USB 2? It isn't needed.
Re:Just like Intel (Score:1)
I've always seen AMD's pursuit of the x86 market as a personal crusade of Larry Sanders.
I think that it might be a good idea for AMD to make a Pentium beater with an INCOMPATIBLE instruction set. (Maybe they already are; do tell.)
Making a super chip and porting something like Linux, or *BSD or BeOS or anOtherOS to it might be a winner.
Go on Larry - roll the dice
Re:Rambus is suck. (Score:1)
I KISS YOU!
(that'll cost me a couple of karma points!)
Remember Betamax? (Score:3)
Does RAMbus really suck? I have no clue, but their management team has shown themselves to be creative and willing to put their balls on the chopping block. $158 million may not mean much to Intel, but I'll bet the people at Rambus have more than that on the line -- and it DOES mean something to them.
Now, I'm sure we all would agree that buying market share is not a healthy capitalistic practice, but do you think Intel would be wasting their time for $158 million on a technology that was anything less than adequate? I wouldn't think so.
Everyone here loves the newest/fastest/bestest stuff, but in the real world we rarely get it in our hardware -- think about why there is not a HARDWARE equivalent of open source software... it's called factories -- bring on the nanobots!
__________________________
Re:Just like Intel (Score:2)
(a) Athlon uses double the power of the Pentium III, and runs twice as hot
(b) No support for MP (still!)
(c) PC Magazine recently ran an article which said a 1 GHz Athlon is approximately as fast as the 866 MHz Pentium III. The 1 GHz Pentium III absolutely slaughters the 1 GHz Athlon in most benchmarks (especially iSPEC).
Unfortunately, mass-public perception is that Athlon is superior. Intel has to work on marketing.
Tom needs to stick to technical articles.... (Score:1)
Re:$158-Million Dollar Conspiracy (Score:1)
Re:When engineering is left behind (Score:1)
There are many, many good engineering reasons to switch to Rambus. It has a higher maximum bandwidth and uses fewer pins. RDRAM isn't as cheap as SDRAM, but guess what - that doesn't suddenly eliminate the benefits of RDRAM.
Re:RAMBUS is crappy RAM technology. (Score:1)
Answer me this, will you be purchasing a PlayStation 2? If you do, Sony is using RDRAM in the PS2.
Re:Is it just me? (Score:1)
Yes, well can you REALLY justify the use of that semi-colon?
Asinine .sig... (Score:1)
Or are you nothing more than a common troll? Oh, never mind.
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
Re:$158-Million Dollar Conspiracy (Score:3)
say it is, then it affects revenue on all the lines that are supposed to use Rambus.
I agree with Tom's Hardware that RDRAM isn't right for commodity PCs, but I don't buy the conspiracy theory (beyond the fact that Intel would love to lock PC manufacturers into a proprietary technology). Rambus is an ambitious technology, with lots of potential, and Intel backed it because they believed it would perform better than it did. They might be forced to change their mind, but we will just have to wait and see; a surprise isn't impossible.
Marmot = ? (Score:1)
CPU wars: Big Picture Perspective (Score:4)
soon-to-be-booming gaming console/internet appliance architecture. The basis for this war is about as complex as the alliance structure that resulted in WWII. The catalytic event that launched this conflict was the Anti-trust case (and victory) against Microsoft.
Microsoft had effectively controlled the architecture by controlling the OS environment. This will soon be over. The next big thing will be embedded OS's in gaming consoles. Intel and AMD are vying to dominate that market.
The stuff you see on Tom's Hardware and Anandtech are distractions. Those are feints and skirmishes aimed at press ink and enthusiast mindshare. No one ever said that the world is fair or that the best technology has to win. Rambus IS the best technology, and the only DRAM technology that can scale right now to keep up with Moore's law. DDR is a legacy band-aid.
The real war is being fought between AMD and Intel among the DRAM manufacturers and silicon foundries of Asia -- Korea, Taiwan and Japan. The game is to get AMD and Intel to pay for DRAM conversions and partnerships. DRAM manufacturing has been a VERY marginal-profit business for the past decade -- look at the consolidation that has taken place in Japan and Korea. The DDR vs. RDRAM war gives the industry a chance to make a huge amount of money. They are all holding their capacity hostage to the highest bidder -- AMD vs. Intel.
This is why the X-Box victory for Intel was such a big deal. It was the opening salvo in the war. Personally, I believe that the X-Box may never be built. But the announcement of Intel's (and Nvidia's) victory has implications for the DRAM wars -- it showed that Intel was willing to build the CPUs for the X-Box for free, or at cost. Why? To deny the market to AMD, of course, but even more importantly: to ensure that the next generation of Win32-based games for PCs and consoles would use Intel's SSE extensions and architecture enhancements, not AMD's 3DNow!. Intel could do this because THEY ARE HUGE -- they have the fab space to make at-cost Coppermine chips. It gives Intel a production base through 2004 for
Taiwan has positioned its quasi-government-owned semiconductor plants to play the crucial part in the next phase of the war. You may notice that Samsung, Micron, Hyundai, NEC and the other DRAMurai constantly issue conflicting statements about their production plans for DDR vs. RDRAM. This is not just bad reporting. This is a strategy: they are asking Intel and AMD, "How bad do you want it?" "How much are you willing to pay?"
The main pressure has to be on the stronger contestant: Intel. If they pressured AMD too much, they would lose leverage over Intel's wallet. They are using upstart AMD as a stalking horse to get Intel to pay for the conversion to RDRAM production and guarantee profits. Very nice profits from producing RDRAM.
The thing is, consortiums and cartels are weak things. Intel is constantly probing the fissures in these relationships. One weak link is Hyundai -- it desperately needs cash, and Intel is dangling $200 million for RDRAM production. But the weakest link is Taiwan. Taiwan's companies (Mosel Vitelic excepted) are not part of the seven DRAMurai. None of Taiwan's main semiconductor companies design DRAM. These companies are also the tightest-knit of any of the major Asian companies. Samsung and Hyundai compete fiercely. NEC, Toshiba, Hitachi, and Fujitsu compete fiercely. And Taiwan holds a unique position in the semiconductor world: 80% of the contract foundry/fab capacity in the world is on Taiwan. When VIA -- a fabless design shop -- needs to build its chipsets, it turns to TSMC, UMC and Winbond, Taiwan's home-grown, government-sponsored foundries. When Nvidia or 3DFX need a place to make their graphics chips, they turn to Taiwan. When one of the DRAM manufacturers needs quick capacity, they turn to Taiwan. These are state-of-the-art foundries, using
Below the Taiwan government, there is a huge conglomerate called Formosa Plastics Group. Its founder is probably the least-known and wealthiest billionaire in Asia. Under the FPG umbrella are subsidiaries like VIA and TSMC, and also "strategic partners" like FIC -- interlocking boards, cross-investment, patent sharing, the works. The Taiwan group is just waiting for Intel to pull out its wallet, IMHO. VIA would love to settle the Intel patent-infringement suit and ITC complaint. It desperately needs a partnership with Chipzilla for its own (formerly Cyrix) CPU plans to succeed. So, the news [that VIA is working on an RDRAM chipset] needs to be read in this light -- it is NOT yet a victory by Intel. It is a probe, a signal by VIA that it is ready to talk.
VIA does NOT need a Rambus license to design and build a RDRAM chipset. The license needs to be held by the FOUNDRY. TSMC, UMC, and Winbond ARE ALREADY RAMBUS PARTNERS. The foundry PAYS the ROYALTY. It's all there at http://www.rambus.com.
So the war is far from over, but I think that Intel is very close to playing the Taiwan option. That is the whole point of the lawsuit against VIA: not to break them, but to leverage them against AMD. VIA had assumed a KEY position as AMD's partner. AMD NEEDED VIA to build the chipsets for Athlon and Thunderbird/Duron, and to build the DDR-SDRAM chipsets as well. THIS IS NOW IN DOUBT: Ace's Hardware had a story a few days ago about the fallout between AMD and VIA over the KX133 chipset's incompatibility with the Thunderbird and Duron CPUs. AMD now says that the first DDR-SDRAM chipset will NOT be from VIA, but from ALi. Acer Aladdin (ALi) is one of the few big Taiwan companies that is not connected with FPG. This is a desperation play by AMD. ALi is not even in VIA's league.
DDR-SDRAM's share of the PC main-memory market will be virtually zero this year and the first half of next year. If you look beyond the BS, look at the KX133 chipset for Athlon. It came out in January. It is now June. You still can't get one from any of the major vendors like Gateway or Compaq; they are still using motherboards with the obsolescent AMD750 chipset (no AGP 4X, no PC133 DRAM, incompatible with GeForce cards, crappy HDD controllers). The telltale is to go to Gateway or Compaq or any of the others and look at the system specs: if they say AGP 2X or PC100 SDRAM, it's the old AMD750 chipset. That's SIX MONTHS.
Realistically, that means that the first volume shipments of ANY DDR-SDRAM computers won't be before March 2001. IMHO, June 2001 is more likely. This assumes that they work. I'm getting suspicious, since the DDR-SDRAM meetings are not already demonstrating production chipsets. IF DDR-SDRAM WERE A SLAM-DUNK EASY THING, SOMEBODY WOULD HAVE ALREADY DONE IT. You would have seen a high-end workstation company like SGI, Sun, DEC/Alpha/Compaq, Intergraph, or SOMEBODY do it by now. This is not the slam dunk they want you to think it is.
Assuming DDR-SDRAM can be produced for volume system sales, it should be usable in any application that today uses SDRAM -- obviously video cards, but also other applications. I still think it is the last trick they are going to pull out of SDRAM; you will probably see some systems produced, and then they are done.
When Willamette is introduced, I think it will answer a lot of questions. We will see what the best semiconductor design company on the planet (Intel) can do with a from-the-ground-up platform intended to take full advantage of RDRAM's unmatched bandwidth. If Willamette delivers, I think that the DRAM companies will produce RDRAM in volume, but it is going to cost Intel dearly for the missteps of the past year. The DRAM industry is not going to risk another i820 fiasco -- Intel is going to have to write them an insurance policy.
Sorry this is so long. I'll just add:
Tom Pabst IS SUCK!
Re:Asinine .sig... (Score:1)
Re:A different take on Rambus (Score:3)
Anandtech forum Rambus article part 1 [gisystech.com]
Anandtech forum Rambus article part 2 [gisystech.com]
Re:When engineering is left behind (Score:1)
Re:Intel's money is normal, and no guarantee (Score:2)
You misunderstand. The way SDRAM works, you don't NEED 800MHz traces to achieve greater bandwidth.
DDR SDRAM in the 133-150MHz range supplies more bandwidth than 800MHz RDRAM. It also does it with lower latency.
Also, right now, the memory really isn't the problem. Intel's GTL+ interface is. Basically it tops out around 800MB/s.
Maybe some time in the future, when motherboard trace counts get unconscionably high, a solution like RDRAM will be more attractive. Right now, its performance is lower and its cost higher than commodity SDRAM.
Chas - The one, the only.
THANK GOD!!!
Is it just me? (Score:1)
___
Re:FreeBSD Daemon (Score:1)
Read the article it... (Score:4)
Is this not a form of cartel? Is this not a dedicated attempt to replace a hardware system with a large installed user base (SDRAM) with a technically inferior, or at best similar, system that costs more?
I just bought a new motherboard from ASUS and had a devil's job getting a PIII board that still supported SDRAM (the SC2000); there was little choice at all. ASUS may have other boards listed on their site, but the vendors can't buy them. Only the RAMBUS ones.
I'm not trading in my large investment in RAM (384M) only 6-9 months after buying it! Looking at the article, you shouldn't either. Unless you enjoy lining Intel's pockets.
Re:Does Tom have any credibility? (Score:1)
Intel isn't just making money on their stock (who doesn't), but they're also serving to drive SDRAM out of business by using their i820 and related chipsets - which just happen to have bugs in them that make them perform slower when they are using SDRAM... Coincidence?
Re:Purpose of Rambus(Summary) (Score:3)
-Current IA32 CPUs and single-tasking/single-threaded software (like Quake) do not present many opportunities for multiple concurrent memory accesses.
-Deeper pipelining and ever more advanced add-ons to IA32, in addition to AGP and faster DMA devices (such as gigabit Ethernet and RAID drives), would provide greater concurrent load for a memory device.
-Multi-threaded/multi-process services such as file serving, web serving, etc. on a multi-CPU system can provide high main-memory load (defeating virtually any caching system) and require more intelligent mem-access management (as with SCSI elevator optimizations for disk access). Being able to service more than one mem-request simultaneously is more valuable than servicing a single mem-request more quickly. You can double or quadruple throughput easily by going to an n-way memory system, instead of merely shaving mem-latency by 50% each generation.
-Memory Blahs: DRAM is based on leaky capacitors (pseudo-batteries) which must be recharged, and pre-charged, in a way that causes serious performance lag, especially when you change which row is being accessed. Thus the "rated" speed of all DRAM chips is misleading if you do not understand this: 200MHz DDR-SDRAM is its burst speed. Dozens of clock cycles are consumed when non-optimal adjacent memory accesses occur (see the numeric sketch after this list). This is a fundamental flaw with DRAM and is present in most architectures (RDRAM included). Thus, merely speeding up the burst rate can never fully resolve this problem.
-[DDR-]SDRAM is designed for high-speed dumping of closely spaced regions of memory (within a row) in a serial fashion. Higher bandwidth allows the faster flushing of internally cached hits, but there is still a severe latency between accesses. An ideal (though costly) advancement might be to produce multiple interleaved "channels" of [DDR-]SDRAM in order to handle multiple concurrent memory accesses. I have seen no indication of this direction from motherboard chipset manufacturers, and thus doubt its feasibility. This solution is typically found in high-end workstations (see SGI's Visual Workstation for NT or many RISC servers).
-RDRAM is designed with the idea of multiple channels from the beginning. Sadly, its radically different architecture means an extreme introductory price which would only decrease with higher production volumes. RDRAM did not turn out to be as glorious as one would have hoped. BUT, most benchmarks I have seen deal with non-server apps in single-CPU environments. Thus scores were only marginally better.
-Intel's incentive: Tom leads us to believe that Intel is trying to make a quick buck on RDRAM. I have suggested that Intel absolutely needs an RDRAM-type solution (if not a normal interleaved solution). Pipelining (in both CPU and GPU) allows the masking of memory loads (somewhat), and thus Intel is migrating towards higher-latency-tolerant CPUs that are capable of an ever-increasing number of outstanding mem-requests (which is facilitated by RDRAM and its like). Ultimately, Intel will move to IA64, which is completely designed around massive pre-queueing of memory accesses. Without massively parallel memory interfaces, IA64 will be even more memory starved than Alphas (due to larger instructions (40 + overhead vs 32 bits/instruction) and the heavy usage of speculative mem-loads). I believe that IA64 will be significantly slower than IA32 unless a more advanced memory structure can be used. Additionally, I believe that IA64 will be far better suited to RDRAM than IA32 is (due to the above).
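As a footnote to the "Memory Blahs" point above, here is a crude numeric sketch of why a faster burst rate alone doesn't fix DRAM; the 60ns row-miss penalty, line size, and hit rates are illustrative guesses, not datasheet values:

    # Rough model: average time to pull one 256-bit cache line, where row misses
    # cost a fixed number of nanoseconds regardless of how fast the burst is.
    ROW_MISS_NS = 60.0   # assumed activate + precharge cost on a row change
    LINE_BYTES = 32      # one 256-bit cache line

    def avg_access_ns(burst_mb_per_s, row_hit_rate):
        burst_ns = LINE_BYTES / (burst_mb_per_s / 1e3)  # time to stream the line
        return row_hit_rate * burst_ns + (1 - row_hit_rate) * (ROW_MISS_NS + burst_ns)

    for name, bw in [("PC100 SDRAM", 800), ("DDR-266", 2100)]:
        for hit in (0.9, 0.5):
            print(f"{name:12s} row-hit rate {hit:.0%}: "
                  f"{avg_access_ns(bw, hit):5.1f} ns per line")
    # The benefit of a faster burst shrinks as the row-miss rate grows, because
    # the fixed activate/precharge penalty is untouched by bus speed.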
Intel NEEDS RDRAM ( or something like it ) to succeed for fear of IA64 flopping. Introducing IA32 to i820 and RDRAM is supposed to ease the market into acceptance. I doubt they care about lowering the price for IA64 ( since it'll be astronomical in and of itself ), but you need to at least encourage chip manufacturers to get the bugs out early.
Rambus, on the other hand is just trying to keep their business going.. Thus they're having to give incentives out here and there ( that's just standard business ).
I think Tom is being a little emotional in saying Rambus / Intel are evil. Rambus needs customers, and Intel needs a better memory architecture.
No, Rambus is not a cost-effective solution for single-CPU IA32 systems. It definitely is not worth the price for single-tasking systems (such as for gaming), though it might still work well in peripheral-contention situations (2xNIC + SCSI + RAID + CPU). I would be curious to see RDRAM's benchmarks in IA32 multi-CPU configurations with multi-threaded/multi-process apps, of course.
I strongly believe that RDRAM is a good match for IA64 and possibly 4-CPU configurations (where memory costs are not going to be too large a part of the overall package).
Last and strongest point: Tom and many other techno-journalists, though very valuable in their insight and general contributions, are often seriously single-minded and emotional. I believe that they should spend a little more time being objective and trying to analyze the _why's_ of corporate America; looking at things from their perspective from time to time ( and actually comment about it ). You make a much stronger voice when you show an understanding of the situation, rather than preach to the choir and make ASSumptions.
-Michael
Re:FreeBSD Daemon (Score:1)
I guess he doesn't know that the daemon image is copyrighted and he isn't allowed to misuse the daemon that way without permission from Marshall McKusick. And he spells daemon wrong...
Mr. Pabst should have done his homework better -- not a good sign for a journalist, I must say.
Just like Intel (Score:1)
It's pretty scary how many people won't buy the clearly superior Athlon because it's not an Intel... that kind of small-mindedness sickens me. Just like the people who 'don't trust' open source software because it wasn't hammered out of some conglomerate corporation. I must agree with Tom on this one. Rambus saw themselves on the way out, so they offered themselves to the major computer players and managed to make a huge amount of money in the process.
Turnabout is (not) Fair Play! (Score:1)
Many believed the principal reason IBM chose the Intel processor was that they could leverage a huge profit from the stock ploy, whereas if they chose the Motorola 68000, nada, because Motorola was a much larger company.
Why didn't IBM choose Zilog, which had a much larger share of the PC processor market at that time? That may have been a technology decision, because the Z80, Z8000 and Z80000 (did I remember the right number of zeros?) were mutually incompatible (far more so than the 8008/8080/8085/80186/80188/8086/8088 line), not a good portent for the future.
Incentives like this may be "good business", but they seldom benefit the consumer. As when a vendor has to "buy" a market for an inferior product, such as RAMBUS.
Octalman
Re:CPU wars: Big Picture Perspective (Score:1)
You mean, like GeForce cards [asus.com]?
(what's the difference between SDRAM and SGRAM anyways?)
Your Working Boy,
another article (Score:5)
Re:DRAM isn't inherently slow you clueless twit (Score:2)
Secondly, the fact still remains that there are heavy initial access delays (7 cycles in PC66 SDRAM alone). This delay is only marginally avoided by interleaving / pipelining (unless you can produce 4 to 8 fully pipelined stages). This is still a downside or flaw of DRAM. To my knowledge SRAM does not require independent row and column charges (though the addressing logic may require some twiddling).
Either way, I doubt that your nit-pick makes any difference to my main point: that you can achieve greater performance in a concurrent-access memory subsystem by going n-way (at the cost of redundancy and expense).
-Michael
Re:RAMBUS is crappy RAM technology. (Score:1)
Besides, the Dreamcast can play [slashdot.org] PlayStation games. Do you really think the Dolphin won't? Now playing Dreamcast games, that would be cool. A system that could play the games of another contemporary system would rock!
Re:Rambus is suck. (Score:1)
greetings, anandtech'er!
Re:Does Tom have any credibility? (Score:1)
JOhn
Re:Why Dr. Tom dislikes Rambus Inc. (Score:1)
Note to Rambus Marketing: For $158 Million I will write a very positive article about your organization and products.
Re:Does Tom have any credibility? (Score:1)
Re:$158-Million Dollar Conspiracy (Score:3)
Marmot Entrails (Score:1)
Ok, not quite, but I have noticed a few trolls around today, and almost a complete lack of good posting.
Yes, I know, by posting this I'm submitting to the marmot syndrome everyone else seems to have today..
Re:Message From Tom Pabst... (Score:1)
Re:Considering its going to cost Intel... (Score:1)
Re:$158-Million Dollar Conspiracy (Score:3)
Furthermore, executives get compensated largely through options. Options have a strike price, the price that you pay for the real shares if and when you exercise the option. Your income when you do this is the difference between the share price and the exercise price, but the exercise price must be, more or less, the price of the stock when the options are granted. So executive compensation is not based on overall profits, but on growth of profits, and here is where the $158 million looms quite large. It is a very significant number to Intel's executives. In the long run... wait, today's Intel executives care less about the long run than Intel's shareholders do.
Re:Marmot Entrails (Score:1)
/peter
Re:Does Tom have any credibility? (Score:2)
I like Tom as much as the next guy (which is not much), but Tom is right. Voodoo is dead. They seem to be about one step behind NVIDIA since the Voodoo3. Although they are better at supporting non-Windows platforms, NVIDIA's hardware is better.
FreeBSD Daemon (Score:1)
Re:USB is relatively open (Score:1)
A few years ago, Intel was just a bit player in the chipset market. They captured 100% share by patenting Slot 1, and finally had to license it to competitors a few years later in order to head off an antitrust investigation. With their moves into networking products, graphics chipsets, and licensing a patented memory system, it seems pretty obvious that, aside from the hard drive, Intel wants to own every piece of silicon inside every computer...
A different take on Rambus (Score:1)
Re:Tom needs to stick to technical articles.... (Score:1)
Betamax (Score:1)
It was my understanding that Beta was visually superior, but had a few annoyances, such as a single movie requiring more than one tape, and I thought the format was very proprietary and thus very costly.
Rambus certainly sounds bad and I trust tomshardware way more than Intel or Rambus.
Nano bots would rock. I may never shower again.
Re:More thoughts... (Score:1)
Do you think Mr Manager X at Intel wouldn't like to earn an extra big bonus for grabbing this opportunity which paid off so well?
---
Re:Does Tom have any credibility? (Score:1)
True, 3DFX has OSS Linux drivers, but they are massively inferior to the Windows drivers. The binary-only NVIDIA drivers, on the other hand, are usually within 2% of the speed of the Windows drivers. Remind me again... who has better support?
Re:hey, I just realized something (Score:1)
-aardvarko
webmaster at aardvarko dot com
It's not 158 million (Score:3)
Tom's numbers come from the fact that right now, Rambus shares are priced at $168. That means that Intel can exercise those options and purchase $168 million worth of shares for only $10 million.
Now, back in March, Rambus shares were worth $471 at one point. That's $461 million of free money for Intel if they had exercised their options back then.
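Spelling out the arithmetic with the numbers quoted above (the 1-million-share warrant at a $10 strike):

    # Intel's paper gain on the Rambus warrant at a given share price.
    SHARES = 1_000_000   # warrant for 1 million Rambus shares
    STRIKE = 10          # exercise price per share, in dollars

    def intel_gain(share_price):
        return SHARES * (share_price - STRIKE)

    print(f"At $168/share: ${intel_gain(168):,}")  # -> $158,000,000
    print(f"At $471/share: ${intel_gain(471):,}")  # -> $461,000,000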
Intel is probably hoping that they can drive Rambus to such high prices again, and therefore make huge amounts of money.
Re:Intel and USB 2 vs FireWire (Score:1)
However, ask yourself the question "Do I want a USB2 camcorder/diskdrive"? These are the applications that Intel seems to be targeting with its marketing materials for USB2, which is a direct swipe at FireWire's market segment.
--
Will Dyson
Re:hey, I just realized something (Score:1)
RAMBUS = MicroChannel - Intel's PS/2 (Score:1)
Re:More thoughts... (Score:1)
Re:A different take on Rambus (Score:1)
against using RDRAM for most purposes at the moment, but explain why they think it is a long term winner.
Briefly: bandwidth does matter some now, and will matter more. There is a real, detectable gain from moving the RAM clock on any current system from 66 to 100 to 133 MHz, even if you fiddle with the settings to cancel out the latency gain. Forthcoming peripheral technologies -- AGP 4x, 66MHz PCI busses, ATA-100 -- will all place more demand on memory bandwidth. Now, SDRAM has a fairly clear route to about 2.1 or maybe 2.6 GB/s bandwidth (DDR at up to 166MHz) but no obvious path beyond there.
DRDRAM can go up to about 3.2 GB/s per bus (800MHz DDR x 16 bits) eventually, and does 1.6GB/s per bus now. This is broadly comparable with SDRAM. On the other hand, fitting two or more SDRAM busses onto a motherboard seems pretty hopeless for general purposes: the track count would be very silly indeed. Intel's 840 chipset already supports 2 RDRAM busses. In terms of track count, it would not be inconceivable to get 4 onto a motherboard. This is the key advantage of RDRAM: lower track counts, allowing multiple busses. I have yet to hear how any SDRAM solution can beat this in the medium term. But that is the medium term. In the short term, RDRAM is only a good deal if you don't need too much, and you really need peak bandwidth.
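A crude way to see the track-count argument is to count data lines per GB/s; this ignores address, control, and power pins entirely, so treat it as a rough ratio only:

    # Data lines needed per GB/s of peak bandwidth (data pins only, which is a
    # simplification - address/control/power traces are not counted).
    configs = [
        # (name, data bits per bus, peak GB/s per bus)
        ("PC800 RDRAM bus", 16, 1.6),
        ("PC133 SDRAM bus", 64, 1.064),
        ("DDR-266 SDRAM bus", 64, 2.1),
    ]
    for name, bits, gbps in configs:
        print(f"{name:18s}: {bits:3d} data lines, {gbps:.2f} GB/s, "
              f"{bits / gbps:5.1f} data lines per GB/s")
    # A narrow RDRAM channel needs far fewer data traces per GB/s, which is why
    # two (or conceivably four) channels fit on a board where two 64-bit SDRAM
    # busses would not.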
Re:Just like Intel (Score:1)
Personal crusade or not, AMD is currently starting to make Intel sweat.
I think it would be a BAD idea for AMD to make a processor incompatible with x86. Let's see... we have the Alpha and G4 and those already have TREMENDOUS support from Joe Sixpack. (Though, admittedly, the Alpha was never meant to compete with the Pentium.)
Ironic that you should bring this up, actually. Rumour has it that Intel's upcoming 64-bit chip is NOT going to be x86 compatible, but AMD's is. If anyone has the links to prove/disprove that, please post 'em.
More thoughts... (Score:2)
Re:Why Dr. Tom dislikes Rambus Inc. (Score:1)
Re:Why Dr. Tom dislikes Rambus Inc. (Score:1)
(Read: nVidia)
Re:Just like Intel (Score:3)
Re:Intel and USB 2 vs FireWire (Score:1)
--
Does Tom have any credibility? (Score:3)
One would think that after Tom posted "Voodoo is dead forever" last year next to a paid Nvidia ad on his website, while extolling how his website is "unbiased", nobody would pay attention to this loudmouth hypocrite anymore.
As for the $160 million deal, I do not see what is wrong with it. Companies give incentives to each other all the time, and holding equity in other firms is a great way to cement relationships between companies. Intel takes in $37.4 BILLION in sales every year, and owns major stakes in dozens of high-tech companies.
I do not see what is predatory or wrong with this.
Re:Just like Intel (Score:1)
Chris
Re:Intel's secret: RAMBUS and MERCED (Score:1)
$158-Million Dollar Conspiracy (Score:5)
In 1999, Intel made $29 billion in revenue [intc.com]. It doesn't seem reasonable that Intel would gamble such a large part of its reputation on a shoddy product to get a piddly $158 million [tomshardware.com] (well, I guess that is piddly). They probably spent more than that on advertising and marketing in 1999.
Re:Now he's a doctor? (Score:1)
http://www.tomshardware.com/site/personal.html
While I don't agree with the angst in your statements, he does seem to have it "in" for Intel/3dfx and others that don't give him exclusive access to new boards for testing.
just a thought.
MRo
free RDRAM with Intel boards in Oz (Score:1)
RAMBUS is crappy RAM technology. (Score:5)
A) It provides much lower latency.
B) It is much cheaper.
C) It has just as much bandwidth.
There is a reason the latest GeForce cards aren't using RDRAM. (Aside from the cost.) DDR-SDRAM is a much better memory technology. The only real reason that RDRAM has made it even this far is that Intel wants a little piece of the memory game.
Re:RAMBUS is crappy RAM technology. (Score:2)
I, like many others, have limited space under the TV. It is crammed full of things: a DVD player, AC-3/DTS decoder, satellite receiver, VCR, and game machine. I, like many others, have a significant other who likes a nice-looking room more than a nifty new game machine. I like having a (happy) significant other much more than having a new game machine, not to mention an OLD game machine.
So I got rid of the PlayStation (really I put it in a box in the basement) when I got my DreamCast.
I'll bet a lot of people with N64s would like to have a Dolphin and not have to box up their N64. Of course I would like to have a PSX2 and not have to box up my DreamCast too. We don't all get what we want.
Plus I would figure it is a minor, but positive, selling point to have an established set of games, even if many aren't so hot, and even if none push the machine.
And back to the main point: RAMBUS is good memory technology for systems that can tolerate fairly high latency and need a whole lot of bandwidth for cheap. The "for cheap" part is important; if you have to pay too much for RAMBUS, you can buy more common RAM technologies and use a wider bus (note, I'm assuming you are designing a system; this won't work if you went out and bought a motherboard, since buying 4 SDRAM sticks won't magically give you 4x the bandwidth if the motherboard and chipset aren't designed to work that way).
The high-end technical workstation market has little need of RAMBUS; it can design 512-bit-wide paths to memory. Sure they cost a ton of money, but that's a big part of why the $7000 Alpha motherboards cost $7000 (and the total reason you need to buy RAM in 4-stick lots, er, maybe 8). Except for the low-end Alpha motherboards, which cost $2000 or $3000 because they don't sell enough to earn back the NRE at $120 like PC motherboards.
Note that I say little need, not no need. It looks like the Alpha 21364 has RAMBUS controllers on chip, which Compaq claims reduces latency significantly. Hopefully they are right, because I like the Alpha, and it would be unpleasant if it started sucking.
RAMBUS isn't a very good technology for PC memory because bandwidth to main memory isn't as big an issue as the latency. It might be good memory for a 3D board's texture storage, since bandwidth matters a lot there. Then again it may not, since new 3D boards store vertex info there, which may be far more latency-sensitive.
But RAMBUS makes a decent memory for a game machine designed 5 years ago (N64). There was no SDRAM to challenge RDRAM's bandwidth. In fact RDRAM had competitive latency then, even. The pseudo-caching effects of RDRAM's sense banks let a no-L2-cache RDRAM system compete well with a mid-size (for the year) L2 cache system with EDO DRAMs. Great choice.
Beats me why the Dolphin is slated to use it. Maybe they didn't believe in SDRAM? Maybe they know something about RDRAM we don't. Maybe they are on crack. Chances are we'll find out in less than a year.
As far as I can tell, cheap DDR SDRAM leaves RDRAM as a solution in search of a problem, but that doesn't mean RDRAM was always a steaming pile. In 1992 (which is when I recall the first RDRAM demo systems) it was really promising, and well ahead of the pack. They just haven't been running fast enough, and now the world of cheap RAM has caught up.
Re:Wrong (Score:2)
You are right that a narrow bus width doesn't cause high latency in and of itself. And right that it isn't the source of RDRAMs latency. But it doesn't have nothing to do with latency.
If you have an identical 16- and 32-bit bus, and you want to fetch a 128-bit cache line from a 32-bit address X, the 16-bit bus has to take two clocks to transmit address X and 8 more to transmit the data, or 10 clocks. The 32-bit bus needs only one cycle to transmit the address and 4 more for the data, a total of 5 clocks, half the time of the 16-bit bus. As you said, "Latency is the amount of time it takes from the issue of a memory read until it answers." (I would count the time to issue the read as well.)
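The same example in throwaway code, assuming an identical clock on both busses:

    # Clocks to fetch a 128-bit cache line from a 32-bit address over busses of
    # different widths (same clock rate assumed for both).
    def clocks_to_fetch(bus_bits, addr_bits=32, line_bits=128):
        addr_clocks = -(-addr_bits // bus_bits)   # ceiling division
        data_clocks = -(-line_bits // bus_bits)
        return addr_clocks + data_clocks

    print("16-bit bus:", clocks_to_fetch(16), "clocks")  # 2 + 8 = 10
    print("32-bit bus:", clocks_to_fetch(32), "clocks")  # 1 + 4 = 5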
RDRAM transaction overhead is reasonably competitive with EDO RAM RAS and CAS timings (RDRAM takes 4 clocks at 400MHz to 800MHz to start a transaction; EDO takes two, but at a much lower speed. SDRAM I think also only takes two, but with DDR SDRAM it could be 2 clocks at 266MHz, which is almost as fast as 600MHz RDRAM).
The thing that kills RDRAM seems to be the time it takes the motherboard chipset to set up the transaction, the low number of banks the motherboard chipset keeps active (each RDRAM can usefully service transactions from 2 sense banks), and most important of all, the time the RDRAM itself takes to light up a sense bank.
With 7-1-1-1 PC100 SDRAM it takes 7 clocks at 100MHz (70ns?) to read the first 64 bits back from the SDRAM. Always. No matter what 64 bits you want. Then one clock (10ns) for each of the following 3 64-bit words (I think bursts can be longer, but I don't know of any PC chipset that does long bursts).
With 800MHz RDRAM it takes 4 800MHz cycles (~6ns? about 10x PC100) to start the transaction. Then, if the data is in one of the RDRAM's two sense amps, the data comes back 16 bits per cycle (a little under 4ns per 64-bit word - about 2x the speed of PC100). However, if the data is not in one of the sense amps, the data that is in the amp will be written back (if it has been modified), and then that bank will be lit up. This can take a whole lot longer than PC100's 70ns. A whole whole lot longer. Like 400ns. If you have lots of hits in the same sense amp (like you would on a machine with a very small cache, or none) you get lots more speed from RDRAM. If you get few hits on the sense amp, you suck. Hard.
Regrettably that 400ns number is old (as are my SDRAM timings). I don't know if the newer RDRAM has improved things, or has more sense amps, or a hidden write-back. I know that DDR SDRAM comes much closer to RDRAM's peak timings, and that DDR SDRAM can get those numbers consistently, while RDRAM needs a "good" access pattern, otherwise its throughput falls to a few percent of peak.
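Putting the (admittedly dated) rough figures from the two posts above side by side, here is the time to pull one 256-bit line; the exact numbers are the ones quoted above and are approximate at best:

    # Time to fetch one 256-bit cache line, using the rough figures quoted above.
    def pc100_line_ns():
        # 7-1-1-1 at 100MHz: 7 clocks for the first 64-bit word, 1 clock for
        # each of the next three, at 10ns per clock.
        return (7 + 1 + 1 + 1) * 10

    def rdram_line_ns(sense_amp_hit, row_miss_ns=400):
        setup = 6          # ~4 "800MHz" cycles to start the transaction
        data = 4 * 5       # 64 bits roughly every 5ns, four words
        return setup + data + (0 if sense_amp_hit else row_miss_ns)

    print("PC100 SDRAM, any access    :", pc100_line_ns(), "ns")       # 100 ns
    print("PC800 RDRAM, sense-amp hit :", rdram_line_ns(True), "ns")   #  26 ns
    print("PC800 RDRAM, sense-amp miss:", rdram_line_ns(False), "ns")  # 426 ns
    # Which is the whole argument: RDRAM flies on a "good" access pattern and
    # suffers badly on a thrashing one.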
I dunno if you can overlap transactions to the same RDRAM chip (I think you can), but you can overlap them between RDRAMs. Of course, that requires either a CPU that can do multiple outstanding memory references, or multiple CPUs (or other sources of memory traffic). You could always interleave, but that seems kind of pointless given the high bandwidth once the sense amp is lit!
Total agreement. RDRAM is cool, but it just doesn't solve a problem most people have.
I don't get it... (Score:2)
But in this light, I don't think it's so surprising why Intel is pushing Rambus so hard. It's an all-or-nothing proposition -- if they back out now they have the worst of both worlds. They don't get the money from Rambus but DO get all the egg on their face of supporting a crap technology. It would be kind of funny if Rambus was Intel's downfall, but I somehow doubt that will happen.
_____________________________________________________