Tom's Hardware Retracts P4 Endorsement

Dice writes: "More benchmarks have come in and Tom (of Tom's Hardware) is expressing doubts about the P4 in this article: 'I have to admit that I started off being a believer in Pentium 4 and I still respect Pentium 4's future potential. However, right now I am genuinely disappointed. For the time being, I wouldn't let any of my friends or family members buy a Pentium 4 system. It's simply not justifiable.'" Intel is definitely not impressing the hardware reviewers with its new chip.
  • Where have you been for the last 10 years? To make money on stocks you don't have to guess how the actual company behind the stock is performing, but rather how everyone else thinks it will do.
  • What operating systems support SMP on the AMD motherboard (I'd heard that it isn't the same MPS spec)?
  • Do Dell servers support SMP? My SMP system has more than 16 IRQs.
  • Intel's offerings can't hold a candle to Sun's. An E10000 can have 64 processors and 64 gigs of RAM (which is SDRAM, by the way). P3s max out at two processors and ~6.8 gigs of RAM (IIRC) but due to practical hardware limitations usually max out at 4 gigs. Xeons on the other hand can page as much memory as SPARC chips, but the GX chipset limits the number of processors to 8, meaning you have to build a cluster in order to get 64 chips. The E10000 scales to 64 processors with no clustering, which is an inherent speed advantage. There are workarounds for everything of course, so theoretically you CAN hack together some P3 or P4 boxes to handle huge databases and do a bunch of fancy shit, but it will still cost you a lot of money.
  • Doesn't ACPI get around this problem of IRQs? I've heard that Win2K on an ACPI systems puts all of the devices on to the same IRQ.
  • "Give the P4 time, it's not worth it to buy it right now. But as code becomes SSE2-optimized and the like, the Pentium 4 will strut its stuff, EXACTLY as the Pentium Pro behaved when it was first released."

    True, the Pentium Pro didn't impress many when it came out, and reincarnated as the Pentium II, it did beat the AMD K6.

    "This is like benchmarking an Athlon on 16-bit code on a performance-per-clock basis as a Pentium. The Pentium would waste the Athlon in 16-bit code, because the Athlon is simply not meant to run with 16-bit code very well (ala using x87 FPU on the P4. The P4 wants to use SSE2, which is superior.)"

    This is open to opinion. AMD's Athlon/Duron is much more important in the marketplace today than the K6 was when the Pentium Pro was released. There will be no rush to SSE2 until there are a lot of P4's out there on desktops. It isn't worth it to the software companies (even Microsoft) to waste the effort. Simple point of fact is that almost all CURRENT apps run better on a 1.2 GHz Athlon than they will on anything less than a 2 GHz P4.

    BTW, AMD is going to also support SSE2, I believe they have licensed it. I know it will be in the Hammer 64-bit chip, but I'd bet it will end up in future Athlons.

    Actually, it's to Intel's advantage that AMD use SSE2, as it will make it an industry standard. Intel right now isn't in a position to dictate industry standards, it lost this ability when they went to proprietary motherboards (Slot 1), then really lost it with RAMBUS. Slot 1 and RAMBUS did to Intel what Micro Channel did to IBM.


  • The whole 15 IRQ thing is annoying! Yes, yes, I know that a lot of you will flame me and say "get a SCSI system," but some people can't always afford that kind of system (but I would love one!)

    So then it seems your whole basis for complaint is that you don't make enough money to buy the things you want. Am I missing something here?
  • No, as the responder above me pointed out, the P4 IS out for sale. You're thinking of Itanium.

    -----------------------

  • Intel is doing the only thing they can do - make faster processors than their previous ones. But I think that we've finally reached a point where a fast processor doesn't mean a whole lot. You won't be able to tell the difference between an 800MHz processor and a 1.5GHz processor unless you are benchmarking. Even so, 1.5GHz is almost twice the clock speed of an 800MHz processor. You can tell the difference between a 166MHz computer and a 350MHz computer.

    The PIV is powerful alright, but I don't see the need to be at 1.5GHz right now when most people are fine under 500MHz.

    What's really sad is that a lot of people think MHz is equivalent to power, when this is far from true. A 1.2GHz Athlon could toast a 1.5GHz P4 in most things, which proves that more MHz doesn't equal more power. And a G4 at 500MHz is often comparable to a PIII at 800MHz. Intel designed the proc for higher GHz with less power. People who look for high GHz will be swayed to Intel, thinking they are getting a more powerful chip.

    I'm sick and tired of seeing 1GHz machines come with 64MB of RAM. When will people learn that processor speed doesn't equal performance?
  • Let's face it; many of Intel's 'new' chips don't make immediate sense, but who was buying the predecessor to any of the above chips once the new style had been on the market for a while?

    Well, I don't think any of the reviewers giving the P4 bad reviews are hesitating to point out that this is a design for the future, that the primary goal of the P4 is to be the start of a whole new line of performance improvements, that the P3's are pretty much at the end of the line, etc. etc. None of them are denying that the P4 has a lot of potential, just as all the chips you listed had far more potential than the chips that preceded them.

    But the purpose of a review is simply to give people an idea about whether they should splash out their cash or not. And the opinion seems to be fairly unanimous at present that your cash should be kept in your pocket, or spent on an Athlon. That's all.

    These reviewers absolutely love their benchmarks. If a 2GHz P4 is romping it in with the best benchmarks in a year's time, you can bet they'll be singing its praises, alleged pro-AMD bias or not..

  • by Graymalkin ( 13732 ) on Thursday November 23, 2000 @11:46PM (#603927)
    Fighting over the newest and most bestest Pentium based system is getting a bit old. The P4 is DIFFERENT from the P3; it isn't merely a core modification like the P3 was to the P2 and the P2 was to the PPro. The P4 is something Intel wants to promote as a real solution until they are able to roll out IA64 in the mainstream. If it costs too much or doesn't do what you want, don't buy the fucking thing. I'm not going to buy one. Let the OEMs buy a shitload of them and package them in consumer PCs.

    In terms of actual core quality, I'd vie for a Xeon over a regular P3 or P4. I get the option of lots of memory, better pipelining, and better branch prediction. Along with that I get the option of an enormous L2 cache, which is very important when you're using the same instructions over and over, say when I'm doing 3D rendering or have a web server running; it's also going to increase performance on large numbers of non-repeating instructions (i.e. a large number of individual apps running all doing their own thing, say on a large multi-user system).

    The P4 is in the same situation the K6-3 was: AMD haters hoped it would kill AMD while supporters defended it as more of a testbed than anything else. The P4 is a production test of some new approaches to things. It won't sell well in retail but I bet OEMs will eat it up because their customers will want it. The Duron many of you bought is a combination of technologies used in the K6-3 and Athlon; if the K6-3 had never been released the Duron might cost you a lot more, due to the fact that some techniques were untried, which means unrefined. In production, anything you try to do without first perfecting will cost you a lot of money.
  • The limit on the number of interrupts was solved by PCI. The PCI bus has four shared interrupt lines. We just need to get rid of all of the legacy ISA crap and build systems with USB, IEEE-1394 and PCI.
  • The Pentium 4's architecture leaves much to be desired. Its price compared to equivalent AMD chips is way too high... the price difference between an ATHLON and a P4 would be better put to use by buying more RAM, a better motherboard or video card -- you'd end up getting a lot more bang for your buck that way.


    _______________
    SitePoint.com - Resources to Build and Grow Your Site [sitepoint.com]
  • by Anonymous Coward
    I don't care who wins the fight. The bottom line is that the more competition in terms of speed, advancements and price, the better for the public. As for AMD not having enough advertisements... Just remember, for every advertisement that Intel buys air-time for, you pay for it (at least those who pay the extra dough to buy an Intel processor). Five months ago a top of the line GHz processor cost at least a thousand dollars for both Intel and AMD. Now AMD's top of the line processor barely costs half that. Not because of efforts to bring joy to the consumer, but to compete with Intel dropping their prices BIG TIME. Bottom line, I hope that they switch places in terms of all of those factors (price, speed, core advancements), because it means more of the good things (speed/adv's) and less of the bad things (price).
  • ...at least on the MPEG test highlighted in the Tom's Hardware article. My guess is that since the OS isn't optimized for multiprocessing yet, it wouldn't do terribly well (though a plug-in for multiprocessor support for this particular app/benchmark might already exist). However, once OS X is finalized, and the codecs are fully optimized for multiprocessor AND AltiVec support, I bet a dual G4 would smoke any x86 machine (or at the very least any x86 machine at the same price point). Pure speculation, but it would be an interesting comparison nonetheless...

  • by knarf ( 34928 ) on Thursday November 23, 2000 @08:09PM (#603937)
    I agree about Intel (currently) being a major GNU/Linux supporter. Sure, they are probably only thinking of their own interests, but by doing so they add weight to the position that free software can succeed in a commercial/business environment. Good for them, good for us. Everybody happy.

    but, I have to take exception to the following statement:

    The fact is that Intel is a corporation, and that corporations play hard ball business. They'll use the legal system, contracts, and whatever it takes to sell more product. Its just the nature of corporations.

    The mere fact that Corps. act as though they own the laws and can do whatever they wish does not mean I have to accept that as a 'fact of life'. Replace 'corporation' with 'mafioso', and that line suddenly looks less appealing, even though the Mafia has been (and probably still is) supporting some causes which might, by some, be seen as beneficial for society. Like ridding neighbourhoods of crime (by criminal means, but still). That does not negate the poisoning role of the Mafia (or other crime syndicates) in several public institutions.

    So, cheers to Intel for their insight that free software and business can go together. But boo to them (and all other nasty corporations) for their continued disregard of 'the intent of the law', for their heavy-handedness, their lies and their greed.

  • Well, let's look at it face on: the P4 at about 1.5 GHz easily beats the PIII at 1.0 GHz (and probably even 1.13). A big part of it is probably due to better memory performance. But while the PIII seems to be at the end of the line MHz-wise (as the 1.13 GHz "launch" showed), the P4 is just at the start of it. The P4 was designed to run at a higher speed; it makes no sense to compare it to a PIII clock by clock because the PIII can never reach those clock rates. We'll probably see 2GHz P4s in the stores within the next year (Intel announced them for Q3) competing with Athlons (Palomino) clocked at about 1.6 GHz.
    If you look at it this way, the 1.2 GHz Athlon and the 1.5 GHz P4 are just the top-of-the-line processors, so it's just right to compare their performance. But to be fair one should allow for some increased performance for the P4 with firmware updates (a few percent), and, more importantly, consider that the P4 design will probably go a longer way. So the P4 will set high standards for the next major overhaul of the Athlon core (Thoroughbred) and AMD's move to 64-bit with the ClawHammer.
    This means we'll see some serious competition between Intel and AMD next year, and that's just what benefits consumers most.
  • I think Intel is definitely aware that this cpu, in its current incarnation, is not satisfactory. But it does do one thing for them. It lets them say they have a faster clock speed than AMD. That's why they released it. They don't really care how well it works right now, they just want bragging rights. Eventually, they'll have to remedy it, but for now, I think they have accomplished their goal. To people who know better, it works against them, but to people like their stockholders and the average consumer, it lets them say, "Hey, look what we did. Look how fast our new processors are."

  • I wonder if this is entirely true. I have no knowledge of chip design and layout issues, but it seems to me that sheer interest in advanced features wouldn't push Intel to P4-like processors. It is twice the size of the P3, which means fewer chips per wafer and lower yield.

    It turns out that a few points make it more attractive to use a more complex core:

    • A huge chunk of that die space is cache.

      This means that making the processor core itself larger doesn't have as big an impact on the size of the chip as a whole as one might think.

    • Cost of the dies doesn't dominate in the endgame.

      When yields approach reasonable ranges, as they always do eventually, the cost of an individual chip drops dramatically. Most of the cost of a module is support cost for the company that produced it, as opposed to raw silicon cost. While the die cost is still significant - especially when you're still fine-tuning and have low yields - chip size isn't as big a problem as it might first appear to be.

    • Cache has saturated - to build a faster chip, you need a better architecture.

      We've reached the point where adding more cache to a chip doesn't help very much (for many applications, at least). Thus, the only way to get a performance gain over one's competitors is to produce a chip that has a higher clock rate for a given linewidth, or that can find more instructions to issue per clock, or both. To do this requires a redesign. We're nowhere near having a perfect design yet; there are always new things that can be added, especially as linewidth shrinks and transistor counts rise. So, redesign remains a useful way of improving performance.


    In summary, the cost of using more transistors actually isn't that high, and the benefits of a redesign using these transistors are potentially great. So, the new cores continue.
  • Yes, but won't it take YEARS, if ever, for this "advantage" to actually benefit the majority of apps that will be run on a P4?

    Media players (+ codecs), games, versions of directx all have very short release cycles.

    Who is going to rush out to support SSE2 instructions for a chip that isn't likely to sell very well?

    Microsoft, GNU and Borland will. They make the compilers.

    Also, to use SSE2 and the P4 to its potential, you have to upgrade EVERY SINGLE APP on your PC.

    Are you saying that "EVERY SINGLE APP" on your PC uses FPU instructions? I don't think so.

  • While this is quite true, the companies that choose to pursue their goal of making money through sleazy licensing tactics and anti-competitive business tactics are wrong to do what they do.

    I would quit a company that pulled those kinds of stunts. I very nearly did once when a company I worked for contracted out to have their stock promoted via spam.

    There are many ways of making money, but if a company chooses to make money by some means other than providing a superior product, excellent service, or some other strong benefit to their customers, they're a bad and evil company. Being a company does not automatically free them from being held to a moral standard for what they choose to do.

    The reason I gave up on Intel about 3-5 years ago is that it was becoming apparent that they were relying on marketing and sleazy business tactics over engineering excellence. Until they change, I will continue to do so.

  • It's not so much that us geeks are against Intel, it's that we're against monopolies because we know if AMD dies and there is no real competition for Intel, they'll start slacking off and charging much more.

    I think that's why we mostly hate Microsoft. I also remember having a serious dislike of Lotus, Word Perfect, and Novell in their day (not realizing the Microsoft juggernaut would eventually run them over back then...)

    Competition is good for all of us. Variety in the marketplace is also good.

    But it's a good point. How many Linux users out there use Linux on non-IA chips? No, don't answer. This is a rhetorical question, not a poll! :)

  • And, uh, P4 is still 32bit...

    Only half-way. A lot of silicon is devoted to SSE2, which, if memory serves, is 64-bit. As soon as you see how fast programs are (when they're compiled with an SSE2-aware compiler), you'll start to wonder if maybe Intel processors aren't really x86 any more ... :)

    The Pentium was a major leap forward from the 486, bringing forward major speed advancements...

    The real "speed advancement" for the Pentium was its pipeline - sure, not much - but it allowed the Pentium to increase its clock speed dramatically (4-5 times), which ended up making computers faster. The P4 is expected to end at around 7 times its current speed. Not too shabby.

    Again, the P4 and P3/Athlon are all 32bit...

    See my first argument - the P4 can no longer truly be called "32-bit". And the P4 actually runs strictly 32-bit floating-point ops slower than the PIII (significantly slower), but those nice SSE2 instructions are supposed to be absolutely blazing.

    before Intel actually brings something worthwhile out...

    What do you think that "worthwhile" processor will be? Yup, you guessed it! The P4, with a different package and maybe a slightly tweaked core. This core is expected to go up to 10GHz. You think going from 1.4 (actually, they'll be releasing a 1.2GHz version too) to 10GHz is just a "short-term filler"? What Tom was referring to as short-term was the chipset/socket combo. The P4 is going to be around for quite a while - they build their processors with longevity these days.

    Dave

    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • Well, I was wondering when the first revision of this kind of MPEG "benchmark" would come out. MPEG encoding/decoding is dominated by 2 or 3 very tight loops.

    Which CPU looks good will depend far more on which architecture you optimised for than on any inherent strengths or weaknesses. Case in point: the P4 suddenly looks "bad" when Tom switched to a stack-F.P.-based iDCT routine. Well, this frankly is mere luck. You don't need F.P. to do an accurate iDCT. If the FlaskMPEG guys had used a good MMX iDCT (it *is* possible!) instead, the P4 would have stood there like the MPEG CPU to end all others.
    Instead it's suddenly a lemon.

    Actually, I personally think Intel blew it with the decision to go to a super-long pipe. Quite a few codes *are* branchy and not all branches can be predicted. Period. The P4 will always be a brittle performer. Good on f.p. crunching with SSE and some kinds of "multimedia" stuff. A total lemon for other codes. Given the current trend to off-load a shed-load of the f.p. work to GPUs, I think Intel made a bad call...

    However they do deserve kudos for finally having the courage to side-line the horrible stack f.p. and put their effort into SSE2 instead (with far better potential). I think we'll see some really good f.p. numbers as the SSE2 compiler support cuts in.

    Andrew
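The claim above that you don't need floating point in the inner loop of an accurate iDCT can be sketched with fixed-point arithmetic: pre-scale the cosine basis into integers once, and the transform itself becomes all integer multiplies, adds and shifts. This is an illustrative 8-point 1-D sketch, not FlaskMPEG's actual MMX routine.

```python
import math

N = 8
SCALE = 1 << 12  # 12-bit fixed-point coefficient scale (illustrative)

def basis(k, n):
    # Standard DCT-II basis term, as used by a reference float iDCT
    c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    return c * math.cos((2 * n + 1) * k * math.pi / (2 * N))

# The only floating point left: building the integer table, once,
# outside the hot loop.
INT_BASIS = [[round(basis(k, n) * SCALE) for k in range(N)]
             for n in range(N)]

def idct8_int(coeffs):
    """8-point 1-D iDCT using only integer multiplies, adds and a shift."""
    return [sum(INT_BASIS[n][k] * coeffs[k] for k in range(N)) >> 12
            for n in range(N)]

def idct8_float(coeffs):
    """Float reference, for comparison."""
    return [sum(basis(k, n) * coeffs[k] for k in range(N)) for n in range(N)]
```

With a 12-bit scale the integer version tracks the float reference to within a couple of counts, which is plenty of headroom for 8-bit video samples.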
  • Since the P4 costs as much as two of the 1.2GHz Athlons wouldn't it make more sense to compare the P4 to a system with the AMD 760MP chipset and two of the DDR Athlon 1.2GHz CPUs?

    Where can I buy the 760MP chipset? Oohhhh yeah -- it won't be available until 2H01. Heck, the 760 chipset which all of the current benchmarks are against isn't even released yet.

  • Intel NEVER has enabled SMP on the first stepping released of a CPU (and AMD and Cyrix have never released a commercial SMP capable platform). The P4 will support SMP probably sooner than any new Intel processor ever. [Here's a clue: the ZDNET article which stated that SMP wouldn't be supported until 2H01 is out and out false].
  • Actually I thought it was interesting how slow the P4 was EVEN WITH these optimizations. On most of the scores it only narrowly beat the 1GHz Athlon, so would probably have narrowly lost to the 1.2GHz version.

    It really does appear that we're going to have to wait for Northwood and the new socket, and/or see whether Intel can ramp up the clock speed faster than AMD can before the P4 will offer much.
  • The reason the P4 doesn't support SMP is simple ... HEAT!

    Some clues are in order.

    The 1.5 GHz P4 puts out 30 degrees of heat. The 1.5 GHz Athlon is projected to put out 95 (!!!!) degrees of heat. Yep, that's only five degrees less than the boiling point of water.
  • You're only looking at it from an architecture perspective and not from a circuit perspective.

    One thing that every P4 reviewer (even the haters) has remarked on is that the P4 runs extremely cool, and is extremely overclockable.

    The P4 @ 1.5 GHz runs at LESS THAN ONE THIRD of the temperature of an Athlon at the same speed (30 degrees vs. 95 degrees). This says, of course, that the P4 has dealt with the heat problem already, and thus has SIGNIFICANTLY more head room to increase speed, since it puts out so much less heat.

    The Athlon is going to hit a speed bump because it puts out so much heat. You can't sell a processor which takes 95 degrees, so they won't even be able to do 1.5 GHz unless they radically modify the core (... Palomino is supposed to; we'll see).
  • It looks like you just don't get his point - that every new Intel processor family has been met with criticism. Later, the criticism usually died off.

    The P4 isn't that much faster than the PIII for many applications, but the architecture gives room to increase the frequency a lot more - which will give it enough speed to outspeed the PIII on all tasks and compete with the Athlon, something the PIII wasn't able to do anymore.

    It doesn't seem to run most common apps much faster, but those really don't matter much - most office tasks aren't even remotely CPU bound. The P4 does introduce SSE2, which has the potential to speed up many operations when utilized. And it vastly increases memory bandwidth over the PIII, which has been a chokepoint.

    Would I buy one now? No. RAMBUS memory is way overpriced, and from a company I detest. The chip will go to 2 GHz and a shrink soon, DDR-supporting chipsets will be released - at that time, it might be a good alternative.

  • Is the Pentium II a descendant of the P-Pro? I thought Intel dumped the idea of doing CISC to RISC conversions. Obviously the PII takes some of the ideas of the P-Pro, but wasn't the defining feature of the P-Pro the fact that it was a RISC chip masquerading as a CISC? And if so, is the PII like that?

    Pretty much all modern x86 chips, not just Intel's, are implemented this way. CISC instructions are very difficult to pipeline, which is why this technique was introduced (with one of the VAX chips, IIRC; neither Intel nor AMD nor NextGen invented it).
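The "RISC chip masquerading as CISC" idea above amounts to cracking complex instructions into simple micro-ops at decode time. A toy sketch under assumed names (instruction tuples are hypothetical; a real decoder is vastly more complex):

```python
def crack(instr):
    """Toy decoder: split a CISC-style instruction with a memory source
    operand into RISC-like micro-ops (a load, then a register-register
    ALU op), which are far easier to pipeline than the fused original."""
    op, dest, src = instr
    if src.startswith("["):                # memory operand, e.g. "[ebx]"
        addr = src.strip("[]")
        return [("load", "tmp0", addr),    # uop 1: fetch the value
                (op, dest, "tmp0")]        # uop 2: pure register ALU op
    return [(op, dest, src)]               # already simple: one uop
```

So a fused instruction like `add eax, [ebx]` becomes a load micro-op followed by a register-register add, each of which fits a simple pipeline stage.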
  • Tom's ego compares favourably to the heat sink on a P4, the power supply for an Athlon, the Mac LCD cinema display, the Razor Boomslang mouse, old HP laser printers, and IBM's "wing o' death" keyboard.

    Which is to say that it's fucking enormous.

    Tom, my man, I think you need to chill out. Start popping some Paxil. Quit taking yourself so fucking seriously. You're just a little shit in a big pond. You don't make or break the hardware world.


    --
  • You are neglecting the fact that yield is dependent on die size. Therefore your "eventually" will depend on die size, and you can easily run into the problem that your design is outdated (meaning you can only ask a low price for your chips) by the time your yield is acceptable.

    This is true; however, the limit is still pretty high, because part of the development of a new linewidth is tweaking of the process until adequate reliability for large dies is obtained. The customers want to be able to produce large, fancy chips, so the fab houses tailor their processes accordingly.

    You can also get a bit of leeway by using more conservative design rules when laying out your chip, though this usually has the side effect of making the chip slower.

    Note: All of this just applies to the size of the core, not the entire die. Most of the die is cache, which is very easy to build in a fault-tolerant manner (more rows are included than are needed, and during initial testing, faulty rows are permanently isolated from the circuit).

    There is a considerable amount of research on building chips that can function despite design or fabrication faults, which should make the problem much less severe in the future (yet another thing that I have to study as a degree project). This is needed not because of the yields, but because designing a billion-transistor chip without making critical mistakes may not be practical.
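The spare-row scheme described above can be modeled in a few lines. This is a toy sketch with hypothetical names, not any vendor's actual repair logic: rows found faulty at test time are remapped onto spares once, and all later accesses go through the remap table.

```python
class RepairableCache:
    """Toy model of cache row-sparing: rows found faulty at wafer test
    are permanently remapped onto spare rows, so a die with a few cache
    defects still yields as a working part."""

    def __init__(self, rows, spares, faulty_rows):
        if len(faulty_rows) > spares:
            raise ValueError("more defects than spare rows: die is scrap")
        # Burn in the remap table once, at test time (models fuse blowing).
        self.remap = {bad: rows + i
                      for i, bad in enumerate(sorted(faulty_rows))}
        self.cells = {}

    def _physical(self, row):
        # Faulty logical rows are silently redirected to spares.
        return self.remap.get(row, row)

    def write(self, row, data):
        self.cells[self._physical(row)] = data

    def read(self, row):
        return self.cells.get(self._physical(row))
```

From the outside the repaired part is indistinguishable from a defect-free one, which is why large caches hurt yield far less than large cores do.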
  • If I recall correctly, one of the bigger challenges Intel faces is in retaining skilled chip designers.

    I'm under the impression that their best and brightest designers have fled the company, and they're now left with newbies who have simply never worked on anything approaching the scale of these CPUs.

    It's an interesting problem, come to think of it: the only way you get that sort of expertise is to progress through the chip designs. You start off designing 8086, get involved in designing 80186, 286, 386... eventually, you're an expert at designing large CPUs, because you've been chiefly responsible for designing increasingly larger CPUs.

    When you kick the bucket, how's that runny-nosed kid fresh from tech school ever going to cope with developing the next generation CPU? Poor little bugger hasn't ever designed a CPU at all: his training was all theoretical, and perhaps a few class projects designing variations on the 555 timer.

    In all likelihood, it's going to become one helluva problem within the next ten to twenty years, as the old school designers, who cut their teeth on simpler CPUs and were key in the development of more complex CPUs, die off or retire.


    --
  • What I find interesting is that when the first versions of various Intel CPU's came out, they didn't really make much sense.

    But once Intel was able to quickly speed up the core CPU speed, then it did make sense. Remember the original Pentium 60/66 MHz CPU's? Everybody complained about the cooling requirements of those CPU's, but once Intel switched to the Socket 7 design and went from 75 MHz all the way up to 200 MHz, THEN the Pentium CPU's became very desirable. The same with the original 233/266 MHz Pentium II's; it was not an improvement over the Pentium MMX 233 MHz until Intel sped the CPU to 333 MHz and introduced the second-generation PII's that supported PC-100 DIMM's. The same also applies to the Pentium III, which started at 450-500 MHz, but didn't become desirable until Intel sped it up all the way to 600 MHz (Katmai core) and 1,000 MHz (Coppermine core). (By the way, it appears that Intel has finally licked their 1,000 MHz PIII production problem; supplies of the PIIIEB FC-PGA variants up to 1,000 MHz are fairly plentiful, if a bit expensive.)

    Of course, Intel needs to quickly ramp up new and better CPU technologies soon. The current AMD "Thunderbird" CPU's are more than a match for the PIIIEB, especially with the new DDR-SDRAM technology. With new, cooler-running Socket A Athlons coming in the early spring of 2001, AMD could crank up the speed of the CPU to as high as 1,700 MHz, which when combined with DDR-SDRAM could mean AMD can in many ways keep up with the Pentium 4, but at much lower cost. And with the Athlon likely supporting SSE2 instructions in the second half of 2001, a 1,700 MHz Athlon with DDR-SDRAM could do everything the P4 could do but possibly faster and definitely less costly, too.
  • If what you say is true, then i should be able to take an R3000 and with a sufficiently small die size and make it go 3 ghz.

    But I can't.


    Sure you can.

    Later chips just add features that the old chip didn't support, or redesign functional units to work more efficiently. Thus, as they're more useful and silicon (below a certain area threshold) is cheap, the later chips are used.

    One example: The R3000 has an in-order pipeline. The R10000 has an out-of-order pipeline. This means that the R10000 can keep on crunching in cases where the R3000 would be stalled.

    Other differences exist, but I'm having a surprising amount of difficulty finding documentation on the MIPS cores' features on the web.
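The in-order vs. out-of-order difference claimed above for the R3000/R10000 can be shown with a toy single-issue model. This is an illustrative sketch only (it ignores register renaming, so destination registers must be distinct), not a model of any real MIPS pipeline:

```python
def run(program, out_of_order):
    """Cycle count for a toy single-issue pipeline. Each instruction is
    (dest, srcs, latency). The in-order core must stall while the oldest
    instruction's sources aren't ready; the out-of-order core may issue
    any later instruction whose sources are ready."""
    ready_at = {}                # register -> cycle its value is available
    pending = list(program)
    cycle = 0
    while pending:
        issued = None
        for i, (dest, srcs, lat) in enumerate(pending):
            if all(ready_at.get(s, 0) <= cycle for s in srcs):
                issued = i
                break
            if not out_of_order:
                break            # in-order: can't look past a stalled op
        if issued is None:
            cycle += 1           # pipeline bubble
            continue
        dest, srcs, lat = pending.pop(issued)
        ready_at[dest] = cycle + lat
        cycle += 1               # one issue slot per cycle
    return cycle

# Toy program: a slow load, a dependent add, and independent work.
PROGRAM = [("r1", [], 3),       # load into r1, 3-cycle latency
           ("r2", ["r1"], 1),   # needs r1: stuck behind the load
           ("r3", [], 1)]       # independent: OoO can slip it into a bubble
```

On this program the in-order core burns stall cycles waiting on the load, while the out-of-order core fills one of those bubbles with the independent instruction and finishes a cycle earlier.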
  • increase its clock speed dramatically (4-5 times)

    Well, as much as this whole 'fiasco' reminds me of 'fiascos' past that really weren't, I have to put in my 2 cents. I have seen the future of PC's and I'm, honestly, wondering what-the-fuck.

    Has anyone heard about the heatsinks on these things? New power supply? Reminds me of a Voodoo5. In case you've never seen one, a Voodoo5 is a full length AGP card that needs to be plugged into a hard drive power source. These things are meant to be the best of the best, and instead they just run pretty fast and run hot as hell. AMD isn't a whole lot better.. The shop I work in recommends that all Athlons (.9GHz+) be equipped with a 300W power supply.

    What's going on here? Whatever happened to smaller and cooler? I thought some of the best 'geek' wants were just those... webpads, wireless, laptops, etc. These things are none of those, and also not cheap. People like the idea of cheap, small, portable and cool. (Starts thinking about Snow Crash and Diamond Age.. Mmmm...) Instead we're basically buying these things and letting Intel know that we don't actually mind if our desktop systems are getting bigger again and starting to look like older style computers. Yeah, the downward trend of software doesn't help either, I know, but what's wrong with making what we have more.... accessible?
  • The Athlon is going to hit a speed bump because it puts out so much heat. You can't sell a processor which takes 95 degrees, so they won't even be able to do 1.5 GHz unless they radically modify the core (... Palomino is supposed to; we'll see).

    Actually, linewidth shrinks can still be done. Power dissipation is proportional to the capacitance being charged and discharged, which is proportional to the area of the core (I'm ignoring the cache, which can be partitioned so as to scale without additional heat generation). For a given core layout, power dissipation at a given clock rate goes down as the square of the linewidth.

    As clock rate is also determined by capacitance, it goes up by at most the same amount, resulting in a worst-case power dissipation the same as before the shrink - with a processor that runs much faster.

    That having been said, reducing heat production will still allow you to increase the clock rate (you'd raise the clock speed, which in turn might require raising the core voltage, until the power dissipation was again at your maximum acceptable threshold).

    As for the P4... Part of the reason it runs so cool is that it has a heat sink the size of Alaska sitting on top of it. What are its actual power consumption figures? These are a better basis for comparison.
  • I don't think you understand how chips achieve low power. It has very little to do with the linewidth, and everything to do with power management.

    Um, I've spent the last 5 years learning how to build chips. While power management is important, total area - absolute area, not number of transistors - is also directly tied to power dissipation.

    The power dissipated is simply the clock speed times the square of the core voltage times capacitance that is charged or discharged per clock.

    If you optimize your chip so that only areas that are being used are clocked, you save power. This is what you were referring to.

    If you lower your core voltage, you save power.

    If you reduce the total area of the chip - by applying a linewidth shrink, for instance - you reduce the total capacitance (by a factor of 2, usually), and save power.

    Thus, a linewidth shrink would most certainly allow an Athlon to run faster for the same power dissipation.

    Also, saying that "the P4 should dissipate more power because it's bigger" isn't strictly true - all of this applies to the size of the CORE, not the size of the CACHE. The cache can be optimized to dissipate pretty much the same amount of power no matter what its size, as in any given clock, you're only accessing one or two rows of it. It's the core that's changing state all the time.

    The relative sizes of the Athlon _core_ vs. the P4 _core_ are what would be important for your argument.
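    The scaling argument above can be sketched numerically. This is a toy model (all figures hypothetical), assuming the simple dynamic-power relation P = f × V² × C and that a linewidth shrink halves the switched capacitance while, at best, the clock doubles:

    ```python
    # Toy model of dynamic CMOS power: P = f * V^2 * C
    # (switched capacitance C scales with core area; a linewidth
    # shrink roughly halves that area, and hence C.)

    def dynamic_power(freq_hz, vcore, cap_farads):
        """Dynamic power dissipation in watts."""
        return freq_hz * vcore**2 * cap_farads

    # Hypothetical pre-shrink chip: 1.2 GHz, 1.75 V core, 15 nF switched per clock
    before = dynamic_power(1.2e9, 1.75, 15e-9)

    # After a shrink: capacitance halves; run the clock 2x faster and the
    # dissipation is back where it started -- the worst case described above.
    after = dynamic_power(2.4e9, 1.75, 7.5e-9)

    print(before, after)  # identical dissipation, double the clock
    ```

    Dropping the core voltage on the shrunk part would reduce power further still, since it enters as the square.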

  • True, yes, but I'm morally against overclocking. :P One of the reasons I bought the Athlon was for its exceptional floating point performance, however.
  • No it doesn't in 2.2, but does in 2.4, so it will probably be in the next round of distro releases.
  • by jeffsenter ( 95083 ) on Thursday November 23, 2000 @04:49PM (#604012) Homepage
    Since the P4 costs as much as two of the 1.2GHz Athlons wouldn't it make more sense to compare the P4 to a system with the AMD 760MP chipset and two of the DDR Athlon 1.2GHz CPUs?
    Has anyone seen such a comparison?
  • by Animats ( 122034 ) on Thursday November 23, 2000 @09:56PM (#604016) Homepage
    This whole approach to CPU evaluation seems off.
    • Tom's Hardware thinks the main application of FPUs is recompressing illegal copies of DVDs? (Could be worse. Apple's big PR benchmark used to be Gaussian blur in Photoshop.) Actually, I think the big application of FPUs in the next few years is going to be running the AI and physics engine in games. Most of the graphics has already moved to the graphics board.
    • Using AMD's 3DNow! instructions in a compressor would probably be a win. Those allow you to split the FPU in half and do two 32-bit operations simultaneously. But the codec doesn't support this, which adds a pro-Intel bias.
    • The whole Pentium 4 thing is based on the concept that customers will go for a higher clock rate, even though the P4 gets less done per clock than a P3. One reviewer commented that Intel will probably have to keep high-end P3s off the market in mid-2001 to prevent them from making the P4 look stupid.
  • by Flavio ( 12072 ) on Thursday November 23, 2000 @04:52PM (#604017)
    I have this feeling everyone's against Intel and is making sure this viewpoint gets through.

    People openly say "I want Intel to crash and burn" ALL THE TIME, even though this isn't Tom Pabst's or /.'s point of view.

    Most of you who curse Intel are hypocrites. You'll be buying Intel processors if they come back and saying "Intel rules".

    You seem to disregard that Intel has developed good products and technologies despite its failures. Please don't send me Intel's top 10 (or 50, or 100) mistakes list. I'm well aware of those.

    If you think Intel charges way too much for their processors (and they do), fine, just don't buy them. You shouldn't run around screaming antipropaganda simply because they're on top.

    A similar phenomenon has happened to 3dfx and Netscape. They too have made bad, wrong decisions. But haven't they also broken huge amounts of ground?

    So lay off Intel a bit. Cut them a little slack. Do you think RAMBUS's debacle and design difficulties all over are done on purpose?

    Here's an analogy for you guys (you say if it's bad or good). Most of you (the ones from the US, at least) sometimes agree with Jon Katz about the geek kids who are made fun of by the jocks.

    Intel started playing really bad American football for some reason and is like a geek kid. Don't kick them when they're down, at least not that hard. You have your reasons, I know, but enough is enough.

    Flavio
    P.S.: I don't work for Intel, never have and don't expect to. I wouldn't mind to, though.
  • Well, some of Intel's innovations were not so hot (eg. MMX). The P4
    looks to be a big step towards a different model of processor (deep
    pipelining, sophisticated branch prediction) which whilst I agree in
    the long run is probably right, in the short run it might be a long
    time before it becomes an improvement on current technology. One
    could say the same about Rambus...

    And if we are in the business of backing predictions about which
    will be the best architecture in the long term despite less than
    stellar short term performance, why should we believe the P4
    architecture is better than that of rival VLIW architectures (eg. the
    Crusoe)? Following Intel's lead has been the right thing to do whilst
    Moore's law held, but now it rather looks broken...

  • I agree that Intel supports Open Source (well, they support Linux, not the *BSDs - but that's another issue)...

    However, just because they support Linux doesn't mean I cannot criticize them. It is my full right to scream and shout to everyone that this (P4) chip sucks real bad currently and I wouldn't buy it (especially with their price tag right now)

    People are saying that with optimizations for this specific processor, it will kick AMD's butt. We'll see - just a small reminder - Intel did that trick with the first Pentium 166 with MMX. Since then, not many software packages have used it (besides all the DVD playing/video capturing programs)..
  • by at-b ( 31918 ) on Thursday November 23, 2000 @04:54PM (#604024) Homepage
    The P4 isn't a chip for you and me. Wanna know why?

    * In almost all kinds of applications, it is slower than an Athlon T-Bird 1.2GHz, and that's from a P4 1.4GHz. Even overclocked to 1.7GHz, it's still slower.

    * Almost all applications - meaning pretty much everything involving a floating point unit, including CAD, raw calculations, Office apps, and Unreal Tournament - are slower than on the lower-clocked and cheaper Athlon. Oh, and I forgot: It is atrociously slow compiling anything with gcc.

    * The much slower P3s actually beat it in speed at many real-life applications.

    * Tom's review compares it encoding a long DivX movie in high quality with a 1.2GHz Athlon. The P4 needs twice as long in some tests.

    * You can get a 1Ghz Athlon for less than $300 in some places, with Athlon prices dropping weekly. A 1.4Ghz P4 will cost around $1000. Prices won't be dropping anytime soon.

    * The P4 needs a new socket, doesn't always play nice with all types of memory, its socket is of course incompatible with everything, it needs gigantic coolers which NECESSITATE new cases, where old cases are simply too narrow. That's right, many old cases (ATX format) simply won't take a P4+cooler.

    * The P4 will not come with a multi-CPU chipset anytime soon. In fact, the P4 right now and in the next few months will definitely be a no-MP tool. MP Athlons are just around the corner, and so is the 266MHz FSB Athlon chipset for use with superfast DDR memory. Rambus, anyone?

    And if you read the reviews, the only thing it's actually faster than the Athlon at is Quake3. Seeing how many buying decisions are made by completely irrelevant Q3 scores, this may be a very bad thing.

    And yes, the incessant pro-AMD propaganda isn't good, but have a look at face intel [faceintel.com] to see why intel really isn't a good company. Maybe that will explain some of the hostility.

    Alex T-B
    St Andrews
  • by macpeep ( 36699 ) on Thursday November 23, 2000 @10:26PM (#604025)
    If you think Intel is going down, why don't you short some Intel stock? There are billions to be made and for shorting stock, you don't even need money! Coming to think of it, you'd think a lot of Slashdot readers should be shorting Microsoft stock. Talk about an opportunity to make money there!!

    That is.. if you truly believe they are going down. Kinda makes you wonder...
  • I think you're way off base here.

    The K6-III was the exact same as a K6-2, except it had integrated L2 cache (256kb) running at chip speed. This was evolutionary, but a major gain for the K6 series which was mainly bottlenecked by the slow L2 cache. A K6-2 400 and a K6-III 400 in the exact same motherboard, with the exact same Voodoo3, shows how much of a difference this makes. The K6-2 400 does between 12 and 26 fps. The K6-III does between 18 and 33 fps. Everything else was the same, except for the processor.

    Even underclocked, the K6-III did more per clock than the K6-2. The K6 series processor had an admittedly over-engineered branch predictor, so it had to constantly be getting instructions to live up to its full potential. The L2 cache on the K6-III let it live up to this potential, and the K6-III 400 actually outperforms the P2-450 on many operations (except FPU).

    The K6-III was not a testbed. It was an evolutionary step in the series, and allowed people to get even better performance out of relatively inexpensive processors. The only problem was ramping up the speed of the chip, which proved too much for AMD as they refocused their efforts on K7 development.

    The Athlons were released with separate L2 cache at first, but the new T-birds and Durons both have integrated L2 cache. Yes, AMD probably learned a few lessons on the K6-III about it, but they also applied it elsewhere. The K6-2+ for laptops is a K6-2 with 128kb of L2 cache in the CPU core. The T-bird/Duron core is the exact same, except for flaws in the L2 cache which leave the Duron with 64kb of working L2 cache, and the T-bird with 128kb of working L2 cache.
    --
  • That was an incredibly acidic review.

    Most of it is really true; people can claim Tom's is pro-AMD all they want but...

    I haven't seen a single glowing review ANYWHERE. I mean come on, not everyone can be Intel haters, unless there's a reason for that HMMM

    I mean come on, the whole P4 deal leaves a bitter taste in my mouth

    I can go get a 900MHz Thunderbird processor and make a whole system that outperforms a P4 for about the price of a P4, what gives?

    I do know that Tom's is a little pro-AMD, but he's not a stupid person so there is prolly some justification for his bias; AMD actually DOES have good chips now...

    Oh, but people aren't allowed to be biased. Formulate your own opinions, but I feel safe in trusting all the reviews that the P4 basically stinks.. anyways

    ... it's a marketing game now.. and Intel will prolly do *OK*

    I hope AMD crushes them just for the shit they are putting out

    Jeremy

  • Actually, they usually buy BOTH cigarettes and lottery tickets.

    Lighten up folks, the tag line was a joke. Its statistics. Get it?
  • What do you expect? That Intel is going to be perfect? It is a huge company, and in any big company there's not only focus on delivering a good product, but there are paychecks, deadlines, taxes, legal issues, unsatisfied customers, lawsuits... Do you expect them to disregard all that and just try to make your life easier by doing miracles and pleasing everyone?

    It is brutal to compare a company like Intel with a Mafioso clan. It just shows that you are a geek who has this gut feeling against anything that's stronger than him, and you can't accept that. What the hell is up with that? Grow up. It is a real world; someday you will find a job in the IT industry and will understand that the main motivation of any company is still to make money, and not to deliver the best product ever. Get real. And as for mafioso, rent (if you don't own) Godfather I and II. See the difference. Or even better, go to Russia, you'll enjoy it there.


  • I think that MPG4 compressors are the best benchmark tools out there, because the MPG4 algorithm is very FPU intensive (DCT and similar algorithms involved), it also uses integer arithmetic (used for scaling) and requires high chipset bandwidth (memory/disc/DVD) because of the size of the source file/s. Also it has to decompress the MPG2 source, which is said to require at least a 400MHz CPU to be real-time. As you can see, the process could stress any processor out there (no matter the speed), and it gives overall performance results.

    As a last note, Flask could be used to recompress MPG1 and MPG2 files that are *legal* (anybody out there have a capture card?). Also, what if he owns DVD discs? It's legal to recompress them...

    PS => Sorry 'bout my poor English.
  • Yes, actually I *do* think the RAMBUS fiasco was intentional... Well, the first part of it anyways.

    They [Intel] wanted to get more control over the industry. They do this by developing a technology and then licensing its use (witness MMX, SSE, etc). They knew that PCs needed more memory bandwidth, and they sought a solution, that much is reasonable. But then they ended up choosing a solution that would lead everyone to buying a new type of ram, which their strategic partner (who Intel owns a bit of) held the patents on...

    They tried to force everyone to use a new technology because they got kickbacks from the sales of that technology, not because it was better. They did this by linking their product (CPUs and chipsets) to those of another company (RAMBUS's RDRAM). In many markets this is illegal; it'd be like GM putting a special tire-detection chip on their cars which wouldn't let the car work without GM-approved tires (which would of course cost three times as much.)

    This little scheme backfired because AMD and VIA were here to give consumers enough choice.

    So, Intel didn't choose to be screwed over in the RAMBUS deal but they deserve it because they intended to screw customers over, locking everyone into a patent-enforced monopoly.

    Maybe all companies would do this... maybe AMD will try. But I'll be against any company who does, no need to reward that sort of behavior.
  • by Len ( 89493 ) on Thursday November 23, 2000 @05:02PM (#604044)
    Since the P4 is slower than one Athlon in most benchmarks, pitting it against a pair of Athla is just mean.
    --
  • All i have to say is...Give Intel Time. (The first new core design since the P Pro). Good luck to AMD.
  • Yeah, it's tough to play by the rules. That would be a good excuse for me to cheat on my taxes, steal whenever I could, sabotage any competition, etc. I mean, it's tough out there, so why should I be expected to follow all the rules?

    I do expect that mistakes will happen, what I don't accept is when companies intentionally break the law because they can tie the case up for years if anyone sues them, or when a company uses strong-arm tactics against those who are unable to fight back.

    FYI: Nobody compared Intel to the Mafia, the comparison was between their actions.

    If Intel does whatever 'need to be done' to dominate, by any barely legal means, does that really seem like an unfair comparison? People seem to cut companies *way* too much slack when they're trying to make money. I think it's some little Ayn Rand thing, but it's insane. Nobody cuts a thief any slack when he breaks into your home and steals something, even if he's doing it for a profit. How is it different when a company with an army of lawyers traps you in some barely (if at all) legal trap of theirs and threatens to break you through legal fees? It's basically a protection racket.

    I'm all for people trying to make a profit, when they follow the same rules everyone else has to follow.
  • Pathetic. Are you twelve? Your rationalizations seem to be from someone of that age.

    Just because someone else does something doesn't mean that you should too.

    Companies are as companies do, if they play dirty, they are dirty.
  • After seeing all the reviews I'm starting to wonder whether Intel's real reason for bringing the P4 out isn't because it's more efficient, or because they've improved the architecture in any way or made any improvement over the PIII.

    Perhaps their only problem was that the PIII kept breaking when they pushed it up to ridiculously high clock speeds. So they moved things about a bit (no idea how, maybe increased the separation distance between the etched components), and possibly made it less efficient, but in the future able to be pushed up to 2, 3, 4GHz or whatever.

    If that's right, then it's so not The Way Forward. There's a point when channeling most of your R&D into pushing the same old architecture faster and faster will provide fewer and fewer gains compared to actually designing a better one. I mean, how long is it since the first 64-bit processor now? It made me wonder how important it is for M$ that PCs stay on the 32-bit architecture for as long as possible, because of course it's much more painful for Windows users to take full advantage of 64-bit processors (they'd need to get a whole new precompiled OS distribution) than Linux users and the like who could recompile everything overnight.

    I don't know. Maybe I've been reading too many conspiracy theories... :)

    Dave
  • Found this in my cache:
    http://www.tomshardware.com/cpu/00q4/001120/index.html

    Record Reader Numbers Ask For Record Responsibility

    On the very Monday of Pentium 4's release the web servers of Tom's
    Hardware Guide were under extreme load, making it difficult for many
    readers to download the pages of the Pentium 4 article. I would like to
    apologize for those inconveniences and also thank you for your faith in
    Tom's Hardware, as we scored a new record of 1.413 million pages that
    day, although slashdot intentionally refrained from giving our Pentium 4
    article the recognition it deserved, despite it being amongst many other
    remarkable things the only Pentium 4 review with Linux benchmarking
    scores. While I daresay that slashdot has most certainly some
    politically sinister things going on behind its usually reputable
    facade, I would also like to express my awareness of carrying a huge
    responsibility towards all those hundreds of thousand faithful readers
    who rely on the conclusions of my articles. Unfortunately new results
    out of the still ongoing Pentium 4 evaluation have urged me now to
    change my stance on how I see Pentium 4 and I want to get the word out
    without any hesitation.

  • That's what I did, except I used the extra dough to buy nice SCSI card and two 10krpm drives. I haven't looked back since.
  • I havent seen a single glowing review ANYWHERE
    Have a look at this one [realworldtech.com]. These guys are really onto it. Note that they are saying that the P4 has great potential, not that it's a fantastic processor for anyone now. Truth is I wouldn't be surprised to see most review sites using many SSE2-optimized benchmarks in a year's time, and then the P4 will be looking damn good. Still won't be any good for me with my legacy software though.
  • by smoon ( 16873 ) on Thursday November 23, 2000 @05:19PM (#604073) Homepage
    This reminds me of...

    The 386 (faster than a 286, but oh so expensive, and no one uses 32 bit apps yet anyway)

    The 486 (who needs a math co-processor? Geez it's expensive)

    The Pentium (Gosh 486's are available with the same or higher clock speed)

    The Pentium Pro (16 bit apps actually run _slower_)

    The Pentium II (oh, bummer, L2 cache is at half-speed, PPro is so much better...)

    Let's face it: many of Intel's 'new' chips don't make immediate sense, but who was buying the predecessor to any of the above chips once the new style had been in the market for a while?

    Personally, I'm looking forward to the new AMD chips. As always, more bang for the buck than Intel.
  • Yup...

    Let's get serious now however. We have learned that Pentium 4 has got a rather exciting and interesting brand new design that comes with a whole lot of potential. However, the benchmark results might seem a bit sobering to the majority of you. Whatever Pentium 4 is right now, it is certainly not the greatest and best performing processor in the world. It's not a bad performer as well though.

    I don't know about the sucking up to AMD thing though, they're pro-competition. Intel has a lousy track record.

    In the day of the Celeron 300a, Tom's Hardware was all over the fact that it was the best bang for the buck. If they're pro-AMD it is because AMD is in the lead right now.

  • by jonfromspace ( 179394 ) <jonwilkins@nosPam.gmail.com> on Thursday November 23, 2000 @05:25PM (#604082)
    ...to count the dimpled chads the first time, and during the recount he found some extra CPU cycles designated for the Athlon.
  • Despite my various comments here defending Intel, I agree wholeheartedly with your conclusions, especially about the "little or no sense to buy" part.

    I'd be more harsh - buying a P4 right now is a serious waste of money. By the time a majority of applications have been re-compiled to take advantage of P4's features(thereby making the P4 a decent buy if you're looking for performance), a new P4-based platform(much improved over the current offerings) will be available.

    Dave
    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • by Vryl ( 31994 ) on Thursday November 23, 2000 @05:27PM (#604086) Journal
    That when the optimising compilers come out supporting SSE2 that the P4 will kick arse?

    As others have pointed out, these new chips are *usually* slower on 'legacy' code (cf original 5volt pentiums etc).

    All this is a bit preliminary, ie P4 is certainly expensive and not that great Right Now, but almost certainly will rock when there is the code to support it (as there certainly will be).

    I am a big AMD supporter, I think that they have done extremely well. But now, they are under the pump, just as they had Intel under the pump for the last 18 months.

    All is well, just don't buy a P4 right now. Maybe the vapourware dual Athlon chipsets and motherboard will come out in time to toast the P4, but otherwise Athlon has some serious competition.

  • It isn't Intel marketing fooling anybody. It's the fact that for some applications you really need more processors and a larger cache. Sun sells SPARC boxes with enormous cache sizes for a reason. If you're running a high-powered web server with an efficiently compiled binary you can fit most if not all of the most frequently used instructions in the cache, making CPU-to-memory retrieval unimportant and making the full-chip-speed cache a big advantage. Big fast cache == benefit in performance but hindrance in cost.
  • Maybe the downfall of the P4 will convince Intel to rethink their plans, maybe not... either way both Intel and AMD are still rehashing the x86.

    Mostly because x86 is still very popular and no one wants to toss away their software. I'm not an expert on microprocessor architecture, but I'm happy with x86 itself as long as the prices are low, and the performance is good. (Note: I'm an Athlon user, so a bit of subjectiveness is ahead.)

    I agree with you in that I'm somewhat surprised that Intel is still following their old, pathetic, marketing-driven roadmap. You'd think in the year that the Athlon has been out and cutting into Intel's profit margin, that they would have at least been preparing themselves in some way to top AMD technologically.

    One thing I like about AMD is that they went ahead and developed a whole new processor core and dubbed it the Athlon. Intel, OTOH, has been using their old 686 core since the days of the first PentiumPro, and it shows. I don't *know* whether or not the Pentium4 still uses the PPro core, but I suspect it does based on the sheer lunatic cooling and power requirements.

    I didn't expect AMD would keep their title of Most Powerful x86 Chip on the Planet(tm) for this long.

  • The infinite liability is relative to the current price of the stock. If the stock is worth $10, and I short, the most I'm going to make on that stock is $10. On the other hand, if it goes to $1k, then I'm going to lose $990 in the hopes of making that $10.

    Given that the stock market is designed to encourage a general rise in price, that's quite a steep (theoretical) bet to make.
    `ø,,ø`ø,,ø!
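    The asymmetry described above can be put in a quick sketch (hypothetical prices; a short sold at $10 can gain at most $10 per share but can lose without bound):

    ```python
    def short_pnl(sell_price, buyback_price):
        """Per-share profit/loss on a short sale: sell now, buy back later."""
        return sell_price - buyback_price

    # Best case: the stock goes to zero -- gain is capped at the sale price.
    print(short_pnl(10, 0))     # 10

    # Bad case from the comment above: the stock runs to $1000.
    print(short_pnl(10, 1000))  # -990
    ```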

  • by HiyaPower ( 131263 ) on Thursday November 23, 2000 @05:33PM (#604094)
    According to Intel's plans, the revision of the P4 chip will not take the same socket as the P4 (picking up the worst of the AMD technique...) Thus being at the leading edge of this will be a dead end. But it plays Quake3 well...

    When I threw this same story in the hopper this morning, I did it with something along the lines of "Happy Thanksgiving and the P4 is a turkey". With Grove gone, these guys don't have a clue anymore...

  • What do you expect?

    Simple, only that Intel does its best to reach the top by any legal and ethical means. Like many companies do. Nothing else.

    Grow up. It is a real world, someday you will find a job in an IT industry and will understand, that still the main motivation of any company is to make money, and not to deliver the best product ever. Get real.

    I am 35, and have had many jobs in both ICT and the media. In those jobs, I've seen a lot of companies I like, and some I like less or indeed dislike. So I think I am real enough...

    And as for mafioso, rent (if you don't own) a Godfather I and II. See the difference. Or even better, go to Russia, you'll enjoy it there.

    I think most people understood that I did not compare Intel with the Mafia. You obviously did not. I referred to the fact that good or beneficial acts are no excuse for unethical or illegal behaviour.

  • I thought it was logical, since cache is the only difference I can think of between the T-bird and the Duron. If a proc comes off the line, but the L2 cache has an error, AMD may be able to save it by doing some creative wiring. I do doubt that AMD does this, since mass marketing has led to the end of hotfixes to make "mostly OK" products into "OK" products (go look into any older machines for hotfix wires).

    As for the cache size being off.. whoops, I don't own a T-bird. I do own a K6-III and a K6-2, so I know the specifics of those chips from dealing with them daily (cat /proc/cpuinfo :))

    "Had me half convinced.." .. sigh.. must everyone think that every other post on Slashdot written with any form of coherency is a fscking troll? I guess that's why I live more on Kuro5hin [kuro5hin.org].
    --
  • Yeah, my K7 info is kinda spotty. I think I actually read about "Recycled" T-bird -> Duron on /. somewhere (gee, that should learn me to not believe all that I read ;)).

    I think the most AMD got from the K6-III was just how hard it could be to get a good yield on those chips (something Intel learned with the PentiumPro, which suffered similar problems). The K6-III comes in 3-4 forms. The most common are the 2.2v 4x100 model, and the 2.4v 4.5x100 model (I have one of each at my house right now.. the 450 is not mine, it's just here because it happens to be here).

    They had to increase the voltage on the early 450 models, as they just didn't work at 2.2v stably. This led to interesting heat problems, too, as they ran up to 5 degrees hotter than a K6-2 (especially with dnetc). My K6-III 400 runs at 48 to 53 degrees (right after replacing the CPU fan). The K6-III 450 is between 52 and 58. It can easily go over 60 degrees :-\

    Since the K6 series had a large (8192-entry) BTU and a complex algorithm, I'm guessing it tended to starve for instructions because it hit the end of an execution pipe, rather than need to reload its execution pipelines because it mispredicted a branch. This is why the K6-2+ can probably get away with 128kb of cache -- because the "end of pipeline" stalls occur far less often than the "mispredicted branch" stalls would happen (if the BTU was not as overpowered). AMD probably learned some of the magic of making the L2 cache size such that the reduced latency balanced with the rest of the chip.

    The K7 has a different BTU which has a lower accuracy rate (90% I think). This is offset by the faster CPU < -- > northbridge speed (200, and more recently, 266MHz using DDR signalling). Going from the original K7's half-to-a-third-speed-but-large L2 cache to the fairly small on-chip stuff seemed to me a good move. The Duron with 64kb is not enough, IMO, especially without DDR RAM support. Yes, you can combine with the L1 cache, but when it mispredicts a branch, all of the pipeline has to be flushed and reloaded (arg). Since their instruction decoding mechanism is enough to decode a lot of CISC into mu-ops and fill the pipeline, it's mainly a problem of memory speed. That's why the DDR 760 is so freakin' cool and kicks everything's ass in benchmarks :)


    --
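    The trade-off discussed in this thread between branch-prediction accuracy and pipeline depth can be modeled crudely. This is a toy formula with hypothetical numbers, assuming every mispredicted branch costs a full pipeline flush (penalty roughly equal to pipeline depth):

    ```python
    # Toy model: effective cycles-per-instruction (CPI) with branch
    # misprediction stalls. A flush refills the whole pipe, so the
    # penalty is taken to be the pipeline depth.

    def effective_cpi(base_cpi, branch_freq, accuracy, pipeline_depth):
        """Average CPI once misprediction stalls are counted."""
        mispredict_rate = branch_freq * (1.0 - accuracy)
        return base_cpi + mispredict_rate * pipeline_depth

    # Hypothetical workload: 20% branches, ~90% prediction accuracy.
    shallow = effective_cpi(1.0, 0.20, 0.90, 10)  # 10-stage pipe
    deep = effective_cpi(1.0, 0.20, 0.90, 20)     # P4-style 20-stage pipe

    print(shallow, deep)  # the deeper pipe pays twice as much per miss
    ```

    Which is why a deep pipeline only pays off if the clock-speed gain outruns the extra stall cycles, or the predictor gets correspondingly better.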
  • Yup, same thing here. Used a P3 700 at work, and my Duron 700 at home seems just as fast. Probably isn't, exactly, but the benchmarks would be damn close. And the little Duron 700 costs about $80 these days. Can't beat it.

    The P4, even if it was a bit faster than a 1.2 GHz Athlon wouldn't be worth it for the price difference. But apparently it isn't as fast.
  • So because one company (or person) does something, everyone should?

    I'm not saying that everything *is* roses. I'm just saying that anyone who does something despicable is despicable, despite their excuse that everyone else is doing it.

    I refuse to support a despicable action. I may have to support a company that has been despicable, but I won't support one of those decisions directly.
  • The P4 does not use the 686 core, but a completely new core. And from what I'm seeing so far, other than the deeper pipeline (20 stages) which enables much higher clock speeds, the "Willamette" core is inferior to the 686 core in absolute clock-for-clock speed and, more importantly, the FPU.

    Raw MHz and the new SSE2 (MMX 3?) instructions are all this thing is offering right now.
  • >Maybe I live in a cave, but I personally have never seen an AMD advertisment on TV or in the trade mags.
    You probably lived in a cave a year or two ago, when they launched some SERIOUSLY popular ads about their brand name, and nothing else.

    I have mirrored some of the funnier ones at http://iamsure.psychasia.com/html/funny.html

    >Besides, someone needs to take a shot at those blue Intel whatever they ares.... everytime I see those ads I want to puke
    Those 'blue Intel whatever' are The Blue Man Group, a world-famous performance group on par with Stomp!. The Intel ads capture a glimpse of their humor and style, but don't really do it justice.

  • Well, I don't know that I agree with you entirely. Intel is going to need a LOT of time to even catch AMD, let alone actually ship faster processors! I am not even going to discuss price.

    AMD has proven, with the Athlon/Duron kick to Intel's nuts, that there is no reason to believe that the "Next Generation" AMD offering won't be every bit as superior to the P4 as their current line is to the PIII/Celeron.

    Don't count AMD out... remember, they have not even pushed the envelope. Tom was crankin' that P4 to 1.7GHz! Gimmie a break! I'll take a next-generation AMD over the P4 any day, and I am an Intel FAN! AMD has the momentum right now, and while Intel will gain it back eventually, it won't happen soon. Neither Intel nor AMD is going anywhere but faster; we might as well sit back and dream of 1000fps Q3.
  • This is probably one of the most insightful things I've heard all day. The irony of the situation is that Intel is one of the most powerful open source supporters around. Go ahead, do a search on their site for Linux, you'll see what I mean.

    They have, and continue to provide, the most excellent documentation imaginable for their new product lines, including specifications for some of how their chips work internally, but especially for embedded systems designers.

    Look at their 87c51 documentation, and see if you can't get a feel for how good they are at it. Look how they've been helping to get Linux running on the IA-64 arch. The fact is that Intel is a corporation, and that corporations play hardball business. They'll use the legal system, contracts, and whatever it takes to sell more product. It's just the nature of corporations. Now, that said, I'm still a die-hard AMD user currently, and am not likely to change. (At least in the desktop area.) And if AMD were in Intel's shoes right now, they would be playing all the same tricks.

  • Another reviewer (local to us Aussies) basically comes up with the same opinion. The review has some nice pictures, a good discussion of the pros (negligible) and cons (many) of RDRAM, why Rambus is evil, and a lot of links for further info as to why.

    http://www.dansdata.com/p4.htm

  • by Tridus ( 79566 ) on Thursday November 23, 2000 @05:52PM (#604126) Homepage
    The P4 is actually a new core, the first new core since the PPro. That's the reason why it's slower right now, really; Intel always has this problem with its new cores at first. The Pentium was slower at first, and so was the PPro.

    Much like those, give the P4 some time. As the clockrates go up and SSE2 enabled software comes out, it will start to look better.

    Actually, The Register did some SSE2 enabled benchmarks, and the P4 was rather impressive in them.

    http://www.theregister.co.uk/content/3/14922.html [theregister.co.uk]
  • Some unprofessional retorts, if I may:

    The 386 (faster than a 286, but oh so expensive, and no one uses 32 bit apps yet anyway)

    But the less expensive (by $700) AMD Athlon is also faster... (the price also doesn't include the new motherboard, case, power supply and fans you have to buy!) And, uh, P4 is still 32bit...

    The 486 (who needs a math co-processor? Geez it's expensive)

    Same argument as above...

    The Pentium (Gosh 486's are available with the same or higher clock speed)

    The Pentium was a major leap forward from the 486, bringing forward major speed advancements... Sure, the P4 is technically nicer, (but for now, at least) they are slower and more expensive.

    The Pentium Pro (16 bit apps actually run _slower_)

    Again, the P4 and P3/Athlon are all 32bit... so I'm not sure what you're getting at... The Athlon runs 32bit programs faster than the P4 does...

    The Pentium II (oh, bummer, L2 cache is at half-speed, PPro is so much better...)

    A quote from Tom's "informants":

    In some ways, the 1st generation P4 is a bit like the Pentium Pro in Socket8, which enjoyed a rather short life before getting replaced by PII/Slot1. By the time Northwood/Brookdale is launched, Willamette/i850 will be completely phased out.

    Basically, the PPro was simply a test run, since Intel isn't stupid - they know when their next chips are coming out way before we do... The P4 is nothing but a place-holder (or "short-term filter", as Tom calls it) before Intel actually brings something worthwhile out...

  • by mikethegeek ( 257172 ) <blair AT NOwcmifm DOT comSPAM> on Thursday November 23, 2000 @05:52PM (#604128) Homepage
    I think Dr. Tom is biased against BAD HARDWARE, not necessarily Intel. You should have read him some time ago, he was always endorsing the P3/Celeron, until AMD just simply came out with a better product.

    And any objective reviewer would have to conclude, that unless you need SMP, AMD's top processor is better than Intel's top processor. And Intel's top processor IS NOT the P4, it's the 1 GHz P3...

    I hadn't owned an AMD based machine (since my original `286, circa 1990) until I recently replaced my P3 with a Duron 700. And I'm very happy with it and plan to replace my Duron with a 1-1.2 GHz Thunderbird Athlon. Not a P4.

    Keep in mind, these benchmarks are RAMBUS based P4's going up against PC133 SDRAM Athlons... And the Athlons win. When the DDR based Athlon system is available, the gap will WIDEN.
  • by dbarclay10 ( 70443 ) on Thursday November 23, 2000 @05:54PM (#604130)
    First, a disclaimer: I never intend on buying this revision of the P4 - too expensive, not enough performance, etc., etc.. My next computer is probably going to be a dual-processor T-Bird/Duron.

    Now, on to the meat. :)

    -It's slower clock for clock than a P3 or an Athlon... In fact, a 1.2 GHz Athlon is probably a bit faster than the 1.4 GHz P4.

    Read the article. A 1.2 GHz Athlon IS faster than a 1.4GHz P4.

    -You are stuck with RAMBUS and the buggy Intel RAMBUS chipsets.

    So far, Intel's RAMBUS chipsets are only buggy when you use SDRAM with them.

    after all, hasn't every Intel chip since the 8088 outperformed the previous generation at the same clock?

    Quite the opposite - in most cases, Intel's newest CPU architecture doesn't perform as well as what it replaces - at least for a while, until the compilers have been modified.

    and probably can end up at higher clock rates than the Thunderbird Athlon

    Intel engineers, and most people who understand these things, feel that the P4 should be able to ramp up to somewhere between 7 and 10 GHz, given appropriate die shrinks.

    But the P4 is like a school bus racing against a Porsche, it's got to have a much bigger engine running at a much higher RPM to equal the speed.

    You've got it backwards - in this analogy, the P4 is the small engine that can rev to extremely high RPMs (incidentally, Formula-1 racers can only go so fast because the engines can take extremely high RPMs - for a given size of engine, the only way to get more horsepower [at a certain point] is to increase RPMs). The T-Bird/Durons are the ones that have engine size (high IPC), but can't rev high. P4's have small engines (low IPC), but can rev to extremely high RPMs.

    the P4 can't do SMP yet, and likely won't be able to before the Thunderbird Athlon (and the upcoming new core) can.

    The P4 won't be able to do SMP (as far as Intel is concerned) until it has been switched to the next socket format.

    The P4, like the Celeron, would have to run considerably FASTER than 900 Mhz to equal a 900 Mhz Athlon.

    That's exactly the point. Intel will be able to get massive MHz out of this core, and it'll leave Athlons in the dust (unfortunately). Athlons will only be able to clock so high, and then that's it. They won't get any faster while staying an Athlon. The P4 will increase its clock rate very quickly - and with that, you'll get a lot of performance. Sure, on a clock-for-clock basis, the Athlon can do more, but if the P4 has twice, even three times the clock rate, it'll be faster. That's what Intel plans on doing.

    Dave

    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • Well, the idea of high clock speeds isn't just marketing. It's a technological decision. Intel started designing this processor when AMD was still selling to third-rate, third-tier OEMs. They didn't make this processor just to counter AMD - it's a logical succession to the PPro core.

    Incidentally, the P4 is most *definitely* not based on the PPro, although it does use some of the concepts (deep pipeline, on-die L2 cache, etc., etc.).

    Dave
    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • How likely is it that Intel will have used Quake 3 as a benchmark internally, and actually optimised their design for it?

    Think about it, it's not completely unlikely, as the percentage of people and websites out there that rate Quake 3 as the most important benchmark is very high (I was one of them until the P4 came out), so it would make more commercial sense than optimizing for SPEC as they traditionally do.

    Most reviews explain the "Quake 3 discrepancy" by its large hunger for memory bandwidth, which certainly is true. However, within the large range of benchmarks I saw (for a wide selection see the one on Ace's, for example) there were plenty more memory-bound benchmarks, all of which the Athlon did equally well or better on.

    Also, Quake 3 is VERY hungry for float performance, which the P4 apparently hasn't got. Does Quake 3 use SSE2, either itself or in the nVidia drivers, which would allow them to mask this? Maybe it would be fun to test Quake 3 on a P4 with a video card that doesn't have T&L (to reduce the floating point load) and has no SSE2 optimisations (Voodoo5 maybe?).
  • I know this will be considered flamebait, but someone has to say it. Tom pretty much hates intel. I wouldn't go to Toms to hear about how good the newest Intel Chipset is, ever.

    I would take his 'reversal' with a grain of salt. I'm not trying to say anything mean, just don't take it at face value. Would you trust a Microsoft report on the bad side of open source?

  • The company that people were expecting to dominate the market (well, they have been...) finally screwed something up under the legitimate pressure of AMD. A two-chip world, what a wonderful place!!
  • A quote from Tom's "informants": In some ways, the 1st generation P4 is a bit like the Pentium Pro in Socket8, which enjoyed a rather short life before getting replaced by PII/Slot1. By the time Northwood/Brookdale is launched, Willamette/i850 will be completely phased out. Basically, the PPro was simply a test run, since Intel isn't stupid - they know when they're next chips are coming out way before we do... The P4 is nothing but a place-holder (or short-term filter" as Tom calls it) before Intel actually brings something worthwhile out..

    That is not quite correct. Intel was certain that the PPro was the way ahead. In many ways it was. The PPro had a fast architecture with a big full-speed cache. Unlike the later PII, the PPro could be used in big SMP machines with at least 1024 processors. (Sequent made such a beast.) The PPro was going to be Intel's next big chip after the Pentium.

    Unfortunately there were two drawbacks to the PPro. There were poor yields due to the huge on-die cache (512k or 1024k), which drove prices up. More importantly, 16-bit code ran much slower on the PPro compared to an equally clocked Pentium. Microsoft had harsh words for Intel because of this. Microsoft was not even close to getting rid of all the 16-bit code in Win95, and Win95 wasn't even out. A lot of bad press was generated and people were told not to buy the PPro by the trade rags. This, more than anything, forced Intel back to the design room to hack together a chip that ran 16-bit code better than the PPro. The press was so bad over the PPro that Intel made a lot of marketing noise to distance the PII from the PPro. While Intel was designing the PII they came out with the Pentium MMX to satisfy consumers and keep Cyrix and others from eating Intel's lunch. The mess over the PPro really pushed back Intel's roadmap.

    Intel made a mistake with the PPro. They had a vision of the future (all 32-bit code), but the market wasn't ready for it. I think that they have made the same mistake with the P4. Intel wants to move on, but the market is demanding backwards compatibility. It is too bad, really. I think that Intel gets overly criticized for keeping the i386 alive well past its prime. Intel is not blameless, but the fault mostly lies with a market demanding that they be able to run their 16-bit 8088 apps on the latest Intel chips.

  • The Pentium Pro DID ramp to higher clocks - that's what the PII/Celeron/PIII are based on (with modification, of course).

    Now, the chips might have seemed faster than their predecessors, but I'm betting you only bought them after a lot of code had been re-compiled. The Pentium Pro was slower, clock-for-clock, than the Pentium. It ran 16-bit code REALLY slowly compared to Pentiums (which is why PPros never really entered the home-PC market).

    Anyways, it all depends, there are a lot of factors. But the P4 is most definitely, without a doubt, not the first Intel architecture that is "slower" than the one it replaces.

    Dave
    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • I know I am not the only one who can see that when all your applications are being run on a remote server you do not need a powerful processor. You just need a good marketing campaign. A year from now the P4 will be smoking along at 2-3 GHz while the better processor, the Athlon (Palomino version), will be at 2 GHz. Intel will say "We have the fastest processor and the best for Microsoft .NET". Running applications from a server optimally would require huge bandwidth but only a poor FPU. That describes the P4 perfectly, I believe. Read Red Herring's take on P2P and Intel. Welcome to "Wintel" reconsolidating their shared monopoly.
  • That's really what we're waiting for, right? I mean, right now a 450 MHz Macintosh probably beats the x86 processors across the board on MPEG-4 compression. Based on Intel's stated goal of making the P4 a powerful chip for media processing in particular, I don't think this will be a long wait.
  • by My Third Account ( 78496 ) on Thursday November 23, 2000 @07:02PM (#604153)
    The FPU in the P4 is there for x86 compatibility. Intel is betting that software developers will use some of the P4's 144 new instructions to accomplish floating point operations. The new instructions, if used properly, could realize significant speed increases.

    Further, as was mentioned earlier, Intel has always released new generations of CPUs that didn't exactly take the benchmark world by storm. Wait 'till software emerges that takes advantage of what the P4 has to offer. Then you can try to complain.

    Didn't anyone read what Paul DeMone [realworldtech.com] had to say? Or Ace's [aceshardware.com] review of the P4?
  • "Pentium Pro. I've seen Pentium Pro 200 servers and was impressed. Nice chip, too bad the design didn't ramp to higher speeds."

    'The Pentium Pro DID ramp to higher clocks - that's what the PII/Celeron/PIII are based on(with modification, of course).'


    I think the point was that the physical configuration was vastly different. Because cache was quickly becoming an important thing, Intel wanted to add cache, but adding cache to the die at the time was prohibitively expensive, and it wasn't until relatively recently that it was done, starting with the Celeron 'A', I believe.

    The multi-chip module apparently failed in terms of economics, as Intel switched to the cartridge design to place multiple ICs on a printed circuit board. Once IC technology caught up so that the entire cache could be dumped along side the core on the die, the cartridge style ceased to be of use to them.

    I believe Apple was the pioneer of the "consumer-level" CPU card idea, and even they had abandoned it with the G3 and G4 models. I heard they originally did it to make upgrades easier, a system with a 601 could be upgraded to a 604e and beyond, but apparently no longer.

  • by delmoi ( 26744 )
    Um, IRQs are not related to the CPU architecture, but rather to the original design of the IBM board on the XT. Everything in a "PC" expects things that way, so trying to change it would break every card that wants to use an IRQ.

    And what does SCSI have to do with it?
  • That's exactly the point. Intel will be able to get massive MHz out of this core, and it'll leave Athlons in the dust (unfortunately). Athlons will only be able to clock so high, and then that's it. They won't get any faster while staying an Athlon.

    While the P4 will always be able to run at a higher clock at a given linewidth than an Athlon, this differential won't just keep growing - it levels off at a fixed ratio and stays there, regardless of how many linewidth shrinks each architecture gets.

    This is just a reflection of how pipelining works. To do, say, a floating-point multiply, you have to run inputs through a block of logic. This logic block has a certain internal delay, that won't change. A pipelined processor breaks this block into smaller blocks, but the _total_ delay remains the same (actually, it gets worse, due to pipeline register overhead). Sell a case of Coke as four 6-packs or 24 individual cans, you still have the same amount of Coke.

    The relative clock ratio between the machines is simply the ratio of the durations of the individual pipeline stages in each architecture (actually of the longest pipeline stage in each architecture). This ratio doesn't change as the design shrinks; the P4 still has j stages, and the Athlon still has k stages, and a multiply still takes a fixed amount of time to perform at that linewidth.

    Now, how many stages you should have is an interesting optimization problem, but it's not relevant to this discussion (scaling of each of these two existing cores).

    Linewidth shrinks can be applied to any processor, and will speed up each processor by the same factor (more or less). Differences are the result of different low-level implementations (static vs. dynamic logic, one-sided vs. differential signalling, manufacturing process differences, etc); not the result of large-scale architecture.

    So, I have a lot of trouble with these claims that one design can scale farther than another; as far as I can tell, there's no reason why both couldn't scale as far as they wanted to.

    The reason why we don't just keep shrinking the same design forever is that as more transistors become available, you have space to implement more complicated structures. Things like schedulers and branch- and data-predictors can work in any of a large number of ways, and some of the more interesting versions take huge numbers of transistors. Similarly, adding SIMD support only became practical recently. This is why designs change - not because of some magical clock speed beyond which the old ones can't run.

    I have a BASc in Computer Engineering and am working on my Masters, so I've been studying this a fair bit :).
  • by delmoi ( 26744 )
    because 450 MHz is so much faster than 1.2 GHz...
  • "The FPU in the P4 is there for x86 compatability. Intel is betting that software developers will use some of the P4's 144 new instructions to accomplish floating point operations. The new instructions, if used properly, could realize significant speed increases."

    Yes, but won't it take YEARS, if ever, for this "advantage" to actually benefit the majority of apps that will be run on a P4?

    Who is going to rush out to support SSE2 instructions for a chip that isn't likely to sell very well?

    Also, to use SSE2 and the P4 to its potential, you have to upgrade EVERY SINGLE APP on your PC. How likely is that to happen? Even assuming they are available, which they aren't?

    Intel made a HUGE mistake in putting a wussy FPU on this chip. If the P4 even had the P3's FPU it would have been even with the Athlon.

  • by Christopher Thomas ( 11717 ) on Thursday November 23, 2000 @07:25PM (#604168)
    Now, the chips might have seemed faster than their predecessors, but I'm betting you only bought them after a lot of code had been re-compiled. The Pentium Pro was slower, clock-for-clock, then the Pentium. It ran 16-bit code REALLY slowly, compared to Pentiums(which is why PPros never really entered the home-PC market).

    Actually, for the older Intel chips (8086 to 80486), he's definitely correct. I had an old x86 assembly manual at one point that listed instruction latencies in clock cycles for each of these processors - the latencies on the old chips weren't funny.

    As far as I can tell, what happened was the newer chips had more transistors to play with, and so could implement more efficient (but bigger) functional units for operations. This would be especially noticeable for things like multiplication (software vs. shift-and-add vs. other methods).

    As for the late end of that spectrum... The 486, if I recall correctly, was the first x86 chip to support pipelining. This made a *huge* difference, and was probably one of the driving factors behind the huge increase in clock speed that occurred with that chip (though linewidth shrinks would have helped). Effective latency for a lot of instructions went down to 1 clock, and clock period went down to the delay of one pipeline stage.

    The Pentium introduced superscaling - a badly implemented dual-issue pipe, but a dual-issue pipe nonetheless. Code with instructions in the right order would execute up to twice as fast as in an equivalently-clocked 486 (though ordering it was difficult and annoying, due to many restrictions).

    The Pentium Pro had a much better superscaling architecture - four pipelines, for handling different types of micro-op, and far fewer restrictions on what could be issued at the same time. It also broke CISC instructions into RISC-ian primitives for execution, which made superscaling much easier. While there wouldn't be a quantum leap in performance over the Pentium, it should have gotten much closer to the theoretical factor-of-two-over-486 than the Pentium did on most code (even optimized code).

    The Pentium MMX and the Pentium Pro were two divergent forks off of the Pentium - not successors to each other. The Pentium MMX used the old Pentium core with a larger cache and SIMD integer instructions. Not terribly noteworthy (though the cache definitely helped).

    The Pentium II was basically a P-Pro with the SIMD instructions and a larger cache. More of an incremental polishing over the P-Pro than anything else.

    Ditto the Pentium III, though the SIMD instructions it added were actually extremely useful (sped up graphics drivers that used software transformation by about 25% on average, if I remember Tom's benchmarks).

    So, I can't find fault with any of the changes Intel made. The 32 vs. 16-bit thing was IMO a _good_ tradeoff, as it didn't sacrifice new performance for legacy support.

    No idea how good the P4 architecture is; I only have detailed specs on the ones listed above. The trace cache is a good idea, though.
