AMD's New Venice Core Shows Overclocking Potential

Vigile writes "It looks like the new Venice core processors from AMD are going to offer more than just 90nm technology through the entire lineup. According to this article on PC Perspective, the core offers a lot of headroom for future processors: the author was able to overclock their 2.0 GHz sample to 2.8 GHz! I think I hear an FX-61 calling my name!"
  • unlocking? (Score:3, Interesting)

    by thundercatslair ( 809424 ) on Thursday April 07, 2005 @10:05PM (#12172157)
    Will it be easy to unlock these, though? If there's potential to destroy it, I wouldn't risk it.
    • Re:unlocking? (Score:5, Informative)

      by bersl2 ( 689221 ) on Thursday April 07, 2005 @10:08PM (#12172173) Journal
      Multipliers on AMD processors are unlocked in the downward direction.
      • Re:unlocking? (Score:3, Informative)

        by eRacer1 ( 762024 )
        Multipliers on AMD processors are unlocked in the downward direction.

        Athlon 64 processors are unlocked in the downward direction. Athlon 64 FX processors are unlocked in both directions.
    • only downwards (Score:5, Informative)

      by doormat ( 63648 ) on Thursday April 07, 2005 @10:24PM (#12172267) Homepage Journal
      AMD chips have multipliers unlocked downwards. That means if it's got a 10x or 12x multiplier, you can choose 8, 9, 10, up to the default number. It works well; even if you don't want to OC, you can turn down the multiplier and crank up the FSB for better performance. (See the sketch below.)
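
      A minimal sketch of that arithmetic in Python (the base-clock and multiplier values below are illustrative, not from the article):

        # Core clock = base clock x multiplier. Athlon 64 multipliers are only
        # unlocked downward, so overclocking means raising the base clock and,
        # if stability demands it, dropping the multiplier a notch.
        def core_clock_mhz(base_mhz, multiplier):
            return base_mhz * multiplier

        print(core_clock_mhz(200, 10))  # 2000 MHz: stock settings
        print(core_clock_mhz(280, 10))  # 2800 MHz: base clock raised 40%
        print(core_clock_mhz(311, 9))   # 2799 MHz: multiplier dropped, base raised further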
  • by essreenim ( 647659 ) on Thursday April 07, 2005 @10:06PM (#12172159)
    ..with water brought to you directly from the highly polluted canals of Venice. Sniff, ahhhhh, I love the smell of sewage in my PC..

  • Intel-Rating? (Score:2, Interesting)

    We know that clock for clock, AMDs are faster than Intels. So what does 2.8 Ghz in AMD mean in terms of Intel performance?
    • by ergo98 ( 9391 ) on Thursday April 07, 2005 @10:10PM (#12172190) Homepage Journal
      So what does 2.8 Ghz in AMD mean in terms of Intel performance?

      Duh...

      2.8Ghz -> 9081 AMD Cybermarks -> 84.7 ISO 9011:2005 quartets -> 1.7E10 Intel TruePerfs.

      I think that was fairly obvious.
    • Re:Intel-Rating? (Score:3, Insightful)

      by SunFan ( 845761 )
      So what does 2.8 Ghz in AMD mean in terms of Intel performance?

      Zero, because you'd be running an AMD chip!

      Given how well Athlon 64/Opteron have been doing in benchmarks, power consumption, and pricing, there really is little to no reason to buy a 64-bit chip from Intel. It's sad, but it's true.

      • Hypothetical:

        Let's say a brotha is about to finish grad school, and will therefore have an unprecedented amount of free time in which to game (after years of living with a Celeron 500).

        He looks around and notices he can purchase a nifty Dell with a 19 inch flat screen and a nice graphics card for $700. Then he notices that a comparable machine from a vendor that sells Athlon 64's is typically double the price!

        Is there anyone out there selling AMD gaming rigs that are "affordable?"

        • Re:Intel-Rating? (Score:3, Informative)

          by JDevers ( 83155 )
          Which hypothetical Dell are you referring to? The closest I've seen on their site comes with a 17 in FP and starts at $999 (Dimension 8400), but when you add the decent card and upgrade it to a 19 in FP it is $1298.

          You could always build your own; then you know what is going into it and know where you skimped and where you spent. You could easily build a kick-ass system for $1000 (obviously not top end; after all, the graphics card alone would be $500 if you went that route...).
        • Re:Intel-Rating? (Score:2, Interesting)

          by TheKidWho ( 705796 )
          You don't want a Dell for gaming, especially not the cheap ones. My friend, without asking me for advice first, bought a $1200 Dell system 2 months ago with a 15" flat panel... and it came with ONBOARD VIDEO. He couldn't even play 3-year-old games nicely on it; they ran at like 12 fps... You DON'T want it. Just build a comp yourself; my 3-year-old comp plays everything nicely and it cost around $1000 to build.
        • He looks around and notices he can purchase a nifty Dell with a 19 inch flat screen and a nice graphics card for $700. Then he notices that a comparable machine from a vendor that sells Athlon 64's is typically double the price!

          I suspect that the two machines are not actually comparable. But without specifics, I can't say. Care to provide details? I didn't see any decent system for $700 on dell's site, personally. And for gaming, you really don't want LCD anyway.

          • Re:Intel-Rating? (Score:4, Informative)

            by jm92956n ( 758515 ) on Thursday April 07, 2005 @11:18PM (#12172560) Journal
            Link [dell.com]

            System includes:

            • 3 GHz Intel Pentium 4
            • 512 MB RAM
            • 80 GB 7200 RPM HD
            • CD-RW
            • 19 inch UltraSharp digital LCD
            Total price: $658, free shipping included. Add in an extra $200 for a PCI-Express video card and, at $858, it's comparatively inexpensive. It's not an excellent machine, I understand that; and while I'm willing to pay a premium for a better machine, I don't expect the premium to be more than 10-20 percent.
            • That's without the $300 video card needed to play today's games. You might as well spend that money on something else.

            • Few problems with this:

              1) Pentium 4 is a quick way to get yourself laughed at in serious gaming circles.
              2) RAM... this box has 1/2 the *minimum* RAM a new gaming rig should have. Dell clearly sells incomplete systems.
              3) 80GB is tiny by today's standards
              4) LCD != gaming display. At least not one that Dell would provide. A CRT would be cheaper and offer better image quality for the purposes of gaming.

              Unfortunately, with your expectations of "premium", I wouldn't expect a "premium" gaming experience either. Having
        • Well, there's this eMachines package [bestbuy.com]. $880 - $330 rebates. Drop in the PCIe graphics card of your choice (I like GeForce 6600GT's). That package has a 17" CRT and printer instead of the 19" panel, but the PC is superior to what Dell's offering. You can buy the PC without the monitor and printer bundled of course.
        • We need to define "game" and "affordable" better.

          If "game" means solitaire and minesweeper, then "Dell" means "computer" and we leave it at that.

          Let's assume you mean the HL2/DooM3 generation of games.

          The quality of your rig is, to a large extent, proportionate to the amount you spend on it. This in turn, for the discerning gamer, yields much better enjoyment through smooth gameplay with crisper images, immersive sound and higher resolutions. Gaming at this level is very much like audiophilia (all the
    • It means you need an Intel P4 processor clocked somewhere between 4.2 and 5.4 GHz, depending on the applications you like to use.

      Or you need a Pentium M "centrino" style processor running at the same clock or better, again depending on the application.

  • nt (Score:5, Funny)

    by Anonymous Coward on Thursday April 07, 2005 @10:06PM (#12172161)
    I think I hear an FX-61 calling my name!

    Sorry, actually, that's my Intel chip. Noisy bugger.
  • One has to wonder how overclocking by about 40% does not introduce heat issues, at least not without elaborate cooling mechanisms like water cooling, etc.

    • by bersl2 ( 689221 )
      Oh, not really. I've heard of a few people even getting to 3GHz with Winchester (the previous core) on air.
    • Pretty simple.. (Score:5, Interesting)

      by cbreaker ( 561297 ) on Thursday April 07, 2005 @10:51PM (#12172409) Journal
      There are plenty of explanations.

      Here's some:

      A) The chip is designed to run very cool. Overclocking it makes it hot, but it still runs fine. Just very hot.

      B) The chip is designed to be run at higher speeds, and the initial offering is clocked down. This gives AMD a few speed bumps to release before more core/retooling work is needed.

      C) The cooler that comes with the CPU is very good.
  • uh (Score:5, Insightful)

    by eobanb ( 823187 ) on Thursday April 07, 2005 @10:07PM (#12172169) Homepage
    What real good does overclocking 2 to 2.8 really do? These cores keep getting faster and faster, yet the increase in number of floating-point operations per second achieved isn't really that spectacular. How about a more intelligent (parallel) architecture to begin with?
    • Re:uh (Score:5, Funny)

      by Anonymous Coward on Thursday April 07, 2005 @10:10PM (#12172187)
      Warning Independent Thought Detected.

      The white van has been dispatched.

      You will be taken to the Marketing 101 Re-education center.

    • by be-fan ( 61476 )
      The K8 architecture is already quite parallel, with 3 FPUs. You get much above that, and you have to use some sophisticated compilers to take advantage of the extra parallelism (as Itanium showed).
      • by kc8apf ( 89233 )
        Actually, Itanium's problem is that the parallelism has to be explicitly determined by the compiler. Most processors do dynamic dispatching, meaning they figure out what instructions can be run in parallel as it is running code. Itanium was made so that each "instruction" was really multiple instructions that could be run in parallel. This put the burden on the compiler which had never been tasked with this before (at least not at that level).
    • by MOBE2001 ( 263700 ) on Thursday April 07, 2005 @10:22PM (#12172259) Homepage Journal
      How about a more intelligent (parallel) architecture to begin with?

      Unless you have a way around the von Neumann bottleneck, what intelligent architecture are you thinking about? Adding multiple cores will eventually hit a wall because of memory bus contention. The only solution I see is for someone to create a memory architecture that permits unlimited simultaneous memory access. At that point, fast processors will not matter much. Just have a bunch of cheap processors share a single huge memory space.
      • by hyc ( 241590 ) on Thursday April 07, 2005 @10:37PM (#12172332) Homepage Journal
        re: unlimited simultaneous memory access - it's called a crossbar switch, and a lot of parallel supercomputers use them. They are fairly expensive, in real $$ and in terms of board space, etc...

        The HyperTransport that AMD uses is not a bad interconnect in the meantime, for people on smaller budgets...
        • re: unlimited simultaneous memory access - it's called a crossbar switch.

          Yes, but I was thinking of a new and more practical architecture, something revolutionary and cheap, maybe a new optical memory. This should be the holy grail of computing research, IMO.
          • something revolutionary and cheap, maybe a new optical memory

            Revolutionary and cheap... you don't ask for much, do you? Optical is coming slowly, but I'm not convinced it's ever going to replace electric current/voltage-based computing, at least not for general computing. The problem is shrinking optical paths: you need a waveguide for an optical path, while for electric current all you need is a string of closely spaced ionized atoms. Theoretically you could get a wire down to a couple of atoms thick with electric current.

            Moreover, photons are only slightly faster than electric current. Electrons move between 0.6 and 0.9 times the speed of light. What photons are really good at is traveling long distances without dispersing as heat. Electrons move only a couple of atoms before bouncing into something. But you can do lots of really useful things with electrons that you can't do with photons... Having photons mimic the functionality of electrons might not be doable at the same scale (meaning by the time you get 30 million photonic transistors on a die, you could probably get a billion electric transistors).

            Quantum computing has the same density dilemma as photonic computing. But at least quantum computing does more than electric or photonic switching, so it doesn't need as many functional units. Don't expect to see an Intel Q4 any time soon.

            As for a more practical architecture: if practical and economical are what you want, then Pentium 3s on a flat-bus multi-CPU architecture are where it's at. Lots of cheap cores on as simple an architecture as you can get.

            The problem of course is in the mathematical algorithms that we use to do real work. Most steps of computational algorithms are inherently dependent on the results of previous steps, and are thus not parallelizable. Single-threaded CPUs have gotten VERY good at parallelizing individual instructions. The compilers aren't well suited to helping the CPU out, so things like the Itanium were supposed to exploit such parallelism. But the loss of backward compatibility (and the Itanium's focus on floating point) spelled the death knell for that architecture.

            IBM, Intel, and AMD are all pushing multi-threaded execution... basically giving up on figuring out how to make a particular algorithm faster. They're pretending that a CPU which works well on a high-end server with lots of independent jobs (web pages, database transactions, IO requests, etc.) can be sold to a market which is trying to scroll the mouse wheel on an Excel spreadsheet with a thousand rows. Spreadsheet navigation is extremely sequential. A dual-core CPU will be noticeable, since there are periodic background tasks which often "get in the way" of your foreground task. But a 3rd/4th CPU is not likely to be useful at all to non-workstation end-users. (My workstation generally has 8 visible applications, all actively running.)

            Personally, I think the answer is taking a step back from MHz and pipelining. Go back to a 3-, 4- or 5-stage pipeline with MASSIVE read-ahead decompilation of instructions (similar to Transmeta). Get lots of high-speed cache on board with as little latency as possible (current large-cache architectures have HUGE latencies). By lowering the CPU MHz, you reduce the latency to the all-important main memory. Advance the state of the art in power consumption (I've read of several very novel approaches, including decreasing power to the point of statistically acceptable and correctable errors occurring in the computation). Perhaps put a second core on the CPU, but don't just lay down two identical masks. Make use of the fact that a CPU has hot and cold regions: rewire both devices so they're really one big device with two functional CPUs.

            Develop better heat-dissipation techniques. They've been very creative over the years: flipping the chip so the silicon directly presses against the heat sink, for example. They've introduced lower-resistance copper as the main wire interconnect, which was a major materials-science challenge. Newer exotic materials may provide better heat conductivity and voltage regulation. The cooler you run a CPU, the more heat it can dissipate, the more power you can shove into it, and the more work you can ask it to do.

            -Cheers
            • by Anonymous Coward on Thursday April 07, 2005 @11:51PM (#12172741)
              Electrons move between 0.6 and 0.9 times the speed of light.

              That's a pretty fundamental error for someone acting like an expert to make, don't you think? At 0.9c, we don't call them "electrons," we call them "seriously badass beta rays."

              It's not the electrons that propagate the signal, it's the potential difference the electrons are at. I have no idea what voltage you'd need to get electrons to be travelling at 0.9c, but I'd put it well into the MeV range.
            • Electrons move between 0.6 and 0.9 times the speed of light.

              You're talking about the electric current. The actual electrons don't move much; we're talking millimetres per second (and in the opposite direction).
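
              For scale, a textbook drift-velocity estimate backs this up (values assumed for illustration: 1 A through a 1 mm^2 copper wire, carrier density n = 8.5 x 10^28 per m^3):

              \[ v_d = \frac{I}{nAq} = \frac{1\,\mathrm{A}}{(8.5\times10^{28}\,\mathrm{m^{-3}})(10^{-6}\,\mathrm{m^{2}})(1.6\times10^{-19}\,\mathrm{C})} \approx 7\times10^{-5}\,\mathrm{m/s} \]

              That is well under a millimetre per second; what propagates at near light speed is the field, not the electrons themselves.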
        • I am not trying to flamebait, but this is an uninformed post trying to sound informed by throwing some keywords around.

          re: unlimited simultaneous memory access - it's called a crossbar switch, and a lot of parallel supercomputers use them.

          A crossbar switch lets N clients access N memories at the same time. That's hardly unlimited simultaneous access. And even plain 2-channel desktop computers will let 2 requests onto each channel simultaneously, and 4-channel graphics cards have crossbar switches letting

      • IIRC, the way Niagara addresses this is by having multiple memory controllers and tons of bandwidth.
      • Unless you have a way around the von Neumann bottleneck, what intelligent architecture are you thinking about?

        The Sony PS2 and PS3.

        Post von Neumann is already here.

        For that matter, GPUs are already far from von Neumann architectures.

      • >> Unless you have a way around the von Neumann bottleneck,
        >> what intelligent architecture are you thinking about

        I believe we're going to see Itanium re-emerge in some shape or form when Moore's law levels off. Gigahertz are fun, but at some point you're gonna have to find a way to run things in parallel effectively. And that's exactly what the Explicitly Parallel Instruction Computing (EPIC) architecture was designed for.
      • by Perdo ( 151843 ) on Friday April 08, 2005 @03:17AM (#12173736) Homepage Journal
        With AMD's HyperTransport and integrated northbridge, every processor you add adds another memory bus. It's called NUMA, for non-uniform memory access, supported in Server 2003, XP Pro since SP2, and Linux since 2.4, perhaps earlier.

        NUMA was first used by SGI with their late 90s MIPS machines.

        Intel uses a shared bus, with exactly the limitations you describe, except with their Itanium in 8-way+ configurations.
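
        A toy model of the scaling difference (the bandwidth figure is assumed for illustration, not a spec):

          # Toy model: shared-bus vs. NUMA aggregate memory bandwidth.
          def shared_bus_bw_gbps(sockets, bus_gbps=6.4):
              return bus_gbps  # every CPU contends for the same bus

          def numa_bw_gbps(sockets, per_socket_gbps=6.4):
              return sockets * per_socket_gbps  # each socket adds a memory bus

          for s in (1, 2, 4, 8):
              print(s, "sockets:", shared_bus_bw_gbps(s), "vs", numa_bw_gbps(s), "GB/s")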

    • Re:uh (Score:3, Informative)

      by timeOday ( 582209 )
      What real good does overclocking 2 to 2.8 really do?
      Uh, it speeds up the FLOPS by exactly that amount. There is no "MHz myth" in this case - for a given processor, if you double the external clock and leave the multiplier the same, it will run twice as fast.
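
      A back-of-the-envelope check in Python (the per-cycle FP rate is an assumed illustrative figure, not a measured one):

        # Peak FLOPS = clock rate x FP operations retired per cycle,
        # so for a fixed design it scales linearly with clock.
        def peak_gflops(clock_ghz, flops_per_cycle=2.0):  # assumed rate
            return clock_ghz * flops_per_cycle

        print(peak_gflops(2.0))  # 4.0 GFLOPS at stock
        print(peak_gflops(2.8))  # 5.6 GFLOPS overclocked: exactly 1.4x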
  • I don't mean to be flip, but if I can't judge the power of a processor by a simple metric like "megahertz" or nowadays "gigahertz", how can I know which processor is best suited to me? I've got a 2.8GHz P4 machine sitting next to me. How is that not better than the 2.0GHz AMD "Venice" processor that's only clocking in at 2.0GHz?

    If CPU speed is irrelevant to processor power, then why do we keep talking about it?
    • If CPU speed is irrelevant to processor power, then why do we keep talking about it?

      It's not irrelevant if you don't make stupid architectural changes specifically designed to raise the clock speed, like Intel did with Prescott. It's not everything, but it's still something.
    • Duh! (Score:5, Insightful)

      by bstadil ( 7110 ) on Thursday April 07, 2005 @10:13PM (#12172206) Homepage
      Within the same architecture, clock speed is almost directly linear with performance, i.e. 2.8 GHz is 40% faster than 2.0 GHz.

      Or were you just trolling for Intel?

      • Re:Duh! (Score:3, Insightful)

        by grmoc ( 57943 )
        That is assuming you're compute-bound, instead of memory-bandwidth, harddrive-bandwidth, or some other kind of IO bound.

        This may not be the case for many applications out there in the wild these days, so the performance gain is likely to be less than linear for those applications.
        • Re:Duh! (Score:4, Insightful)

          by maraist ( 68387 ) * <michael,maraistNO&SPAMgmail,n0spam,com> on Thursday April 07, 2005 @11:01PM (#12172471) Homepage
          That is assuming you're compute-bound, instead of memory-bandwidth, harddrive-bandwidth, or some other kind of IO bound

          Hard-disk bound is hardly ever a factor for system upgrades. If you're HD-bound, it's unmistakable, and you usually are doing something worth the money of upgrading the disk system. 3D graphics-card bottlenecks, on the other hand, are real and subtle.

          As for memory bound, I'm not aware of any benchmark (other than synthetic memory-testers) that didn't improve semi-linearly merely because of being memory bound. Increasing CPU speed these days generally means increasing the cache-speed which implies speeding up critical memory paths.
          • I've seen CPU speed increases actually -decrease- overall memory bandwidth (due to bus speed mismatch).

            Overall the trend is that it increases, as you mention, but it is not always linear, and it is certainly not monotonically increasing.

            Increasing the cache speed will increase the speed at which bits are fetched out of the L1 and L2, and maybe if you're lucky, even the L3 cache, but it doesn't generally speed up memory. Sometimes, in fact, you have to decrease the speed at which memory operates in order
      • Performance is how much work you do each cycle times how many cycles you run per second. If you assume the amount of work done each cycle is constant and equal across manufacturers, then your statement works.

        If you do less each cycle, you'll reach higher clock speeds, but your performance isn't "higher". (AMD is known to do "more work" each cycle.)

        Hence there have been propositions for a system to indicate performance not based on clock speed (either from Intel itself; AMD has their "2400+" etc. naming, and I thought there had been a proposition for

    • by JoeShmoe950 ( 605274 ) <CrazyNorman@gmail.com> on Thursday April 07, 2005 @10:38PM (#12172339) Homepage
      Gigahertz are a fairly useless comparison between different chip types. A 2.0GHz AMD64 might run circles around your 2.8GHz P4, while a 1.5GHz Pentium M could go faster than an Athlon XP 1800+ without worries. Architectures make this happen. If a 2.0GHz AMD64 can go the same speed as a 2.8GHz P4, obviously the 2.0GHz AMD64 is running more instructions per megahertz; each one counts for more. Thus, a 0.8GHz increase is a huge increase in speed. Imagine running a 2.0GHz P4. Not very fun, eh? Now, the difference between a 2.0GHz P4 and a 2.8GHz P4 is smaller than the difference between a 2.0GHz AMD64 and a 2.8GHz version of the same exact chip. That is a huge speed increase!
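
      A minimal sketch of that "instructions per clock" point (the IPC numbers are invented for illustration; only the ratios matter):

        # Rough performance ~ IPC x clock. A lower-clocked chip with higher
        # IPC can beat a higher-clocked one with lower IPC.
        chips = {
            "P4 at 2.8GHz (lower IPC)":    {"ipc": 1.0, "ghz": 2.8},
            "A64 at 2.0GHz (higher IPC)":  {"ipc": 1.5, "ghz": 2.0},
            "A64 at 2.8GHz (overclocked)": {"ipc": 1.5, "ghz": 2.8},
        }
        for name, c in chips.items():
            print(name, "->", round(c["ipc"] * c["ghz"], 2))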
    • by Anonymous Coward
      One big reason is the difference in FSB. Yours is probably what...800MHz max? Intel's fastest FSB is 1066 MHz while AMD's fastest is 2.0 GHz....huge difference there! Even if you had identical core processors *say P4 Prescotts* and they were both at 2.0 GHz but one had a 533MHz FSB and the other had a 1066MHz FSB the one with the 1066MHz FSB would be MUCH faster since the whole system could transfer data among its components faster. That's why when overclocking it's normally better to drop the multiplie
    • Who ever said judging the performance of many different cpus just by looking at the "megahertz" was good enough?

      You want to know which cpu is faster than what? read reviews. Easiest and best way. Forget mhz, hell, even forget technical data if you don't feel like understanding it. Simply check out a few reviews on one product, take note of the benchmark results that interest you (such as gaming or compiling) and then see if the results from the different reviewers make any sense. If they look similar, then
    • by ArbitraryConstant ( 763964 ) on Thursday April 07, 2005 @11:37PM (#12172672) Homepage
      Intel and AMD chips have completely different designs. In general, Intel chips are designed to blast through simple code very quickly (as Intel thought that's all chips would be doing by now), and AMD chips are designed to be able to handle branches and conditional code better. Also, current AMD chips have a memory controller on the chip itself rather than on a helper chip on the motherboard, which makes their memory access faster.

      Before Intel hit the GHz wall, the strategy was actually working out pretty well. They were at a bit of a disadvantage in some areas, but for the most part the clock speeds were so high it didn't matter.

      With the new Prescott core in Intel chips, they increased the penalty for branching in anticipation of still higher clock speeds. Those speeds never came, so they're at a disadvantage now.

      At more or less the same time, AMD upgraded the memory interface of their chips, which improves performance in most areas in addition to helping them catch up with media stuff. At the same time they kept and in some cases improved their performance on branchy code. They avoided the gHz wall by improving performance without pumping clock speed.

      I think Intel assumed Itanium would take over in areas that needed branchy code back when they committed to the Pentium 4 design in the '90s. It arrived very late, and it turns out regular desktop users still need to deal with branchy code.
    • If CPU speed is irrelevant to processor power, then why do we keep talking about it?

      Because the marketing droids need a metric to convince the masses to buy the latest shiny thing. If people used standardized benchmarks on processors (say, SPEC's CINT2000 [spec.org] or CFP2000 [spec.org]), then it would be too easy to see the benefits and shortcomings. However, the masses don't really care about any of that, they just want what the TV commercials say is "better."

      Just as a would-be car enthusiast thinks that the most impor
  • Hmm. (Score:4, Interesting)

    by iostream_dot_h ( 824999 ) on Thursday April 07, 2005 @10:09PM (#12172178)
    An 800MHz overclock on stock cooling is absolutely incredible... But it kind of makes me wonder why AMD doesn't make the default core speed on the proc higher.
    • Umm, so you buy the expensive, already-overclocked processors? Also has to do with market segments.

    • Re:Hmm. (Score:5, Funny)

      by eobanb ( 823187 ) on Thursday April 07, 2005 @10:40PM (#12172345) Homepage
      Because then we'd complain about how we can't overclock it. It's not about technology anymore, it's about psychology.
    • One possible reason is to give themselves room to move in the market. They can now, if this article reflects the general nature of these CPUs, ramp their clock speeds in a hurry if they need to. In the meantime, they can keep slowly upping the speeds and keep the highest-speed parts priced to the max.
    • Re:Hmm. (Score:4, Informative)

      by Anonymous Coward on Friday April 08, 2005 @12:15AM (#12172876)
      This is partially a manufacturing issue.

      Since all the chips in a given line use the same core, they all have the same speed paths, i.e. some signals take longer to get from A to B than others because of more logic, longer distance, etc. The difference comes in during manufacturing. These companies are good at making transistors, but they don't get them perfect every time. When a chip is designed, they look for a theoretical maximum/minimum speed. If a chip doesn't meet the minimum speed at production it is scrapped; this is relatively rare considering the complexity.

      On the other end you have chips striving to make the maximum speeds. If every chip off the wafer could be rated at the maximum speed, that would be quite a feat, but it doesn't work that way. After the chips are made, they perform speed tests on them and "bin" the chips.
      Chips get placed in lower bins for one of two reasons:
      (1) some of the transistors weren't quite up to par during testing/"binning", ran a little slower, and would become unstable in the higher speed ranges;
      (2) they have to drop a chip into a lower bin for market segments, i.e. this speed is popular and we're out of them... take the next speed up and drop them into this slot.

      That's why sometimes overclocking works, and sometimes it doesn't. It's more likely to work on second-gen chips, as the manufacturer works out glitches in the manufacturing process and more chips are "artificially" lowered in clock speed. That's also why there's a risk in overclocking: if you have a chip that made it into the lower bins because of a manufacturing inconsistency, the chip will be unstable at higher speeds, generally only resulting in calculation glitches, but possibly physical damage, depending on the problem.

      -Anonymous Computer Engineer
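
      A sketch of that binning logic (the speed grades and test results are hypothetical):

        SPEED_BINS_MHZ = [2800, 2600, 2400, 2200, 2000]  # hypothetical grades
        MINIMUM_MHZ = 2000

        def bin_chip(max_stable_mhz, demand_shift=0):
            """Place a tested chip in the fastest bin it passes; optionally
            down-bin it further when a slower, popular grade is sold out."""
            if max_stable_mhz < MINIMUM_MHZ:
                return None  # below the design minimum: scrapped
            passing = [b for b in SPEED_BINS_MHZ if b <= max_stable_mhz]
            return passing[min(demand_shift, len(passing) - 1)]

        print(bin_chip(2750))                  # 2600: its honest speed grade
        print(bin_chip(2750, demand_shift=2))  # 2200: down-binned for demand, free OC headroom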
    • Marketing...

      Let me explain. They release a chip they know they can clock higher on a whim. Then when Intel comes out with a faster chip, they don't have to do anything fancy. They have room to grow built into their current core. All they have to do is certify the next chips off the assembly line at the higher clock, and throw the ones that don't pass back down to the lower clock, and all is well.

      It is a technique where you milk the consumers for all they are worth, THEN drop the prices later when you are
    • But it kind of makes me wonder why AMD doesn't make the default core speed on the proc higher.

      Has anyone here ever considered that AMD might actually be listening to what people have been saying over and over, about wanting a processor that runs cooler, rather than something that squeezes out the maximum MHz and runs very hot?

      It would make sense for them to run their chips at a lower power point, where they're pretty efficient, rather than cranking them up to the maximum they can handle.

  • by BiggestPOS ( 139071 ) * on Thursday April 07, 2005 @10:09PM (#12172182) Homepage
    I love the overclock I've got on my 2600+ XP-M running at 12.5 * 200 with nothing but a nice heatsink and fan.

    The Barton core is awesome, and AMD is just refining their game here, working with the same basic silicon for the A64 and the XP. Intel's brains are divided up among way too many incompatible, irrelevant architectures.

    Just my 2 cents.

  • What's the point of higher core clock if you are unable to ... feed it? (to the tone of Agent Smith and NEO)

    Seriously, with storage stuck at 7200 RPM or 10000 RPM, a higher core clock is rather moot...
    • by Anonymous Coward
      Parent doesn't really seem to know what he's talking about (perhaps he glanced at an architecture book once). The memory hierarchy of almost all modern processors ensures that only a very tiny portion of instructions generate real disk accesses. Relatively few apps are really affected by storage speed; look at some gaming/application benchmarks for a 10k RPM disk vs. a 7.2k RPM disk with the same buffer size.
      • Thanks for shedding some light on his ignorance. Memory speeds have also fallen further behind clock speeds over the years: several years ago, memory was at 100 MHz and CPU speed was at 500 MHz; now the gap is over 10x. However, they've been beefing up the cache to hedge the performance hit. So they're going up the memory hierarchy, while the grandparent sits at the bottom reading Dvorak, wondering why "System Idle" is eating all his CPU cycles.
    • Yeah, your puny 7200 RPM IDE/SATA drives are a limitation. That is why people who actually need the IO power use hardware-RAIDed SCSI or fibre channel disks. Let's see: fibre channel drives are 15k RPM standard with about a 200MB/s data rate, and that is single-drive performance. Any decent hardware RAID will net you 500-600MB/s in a RAID 5 config. High-performance setups can net you over 1GB/s from disk. Most current RAM only gives you 3.2GB/s in a single channel config, and 6.4GB/s in a dual channel configurat
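
      The rough arithmetic behind those array numbers (per-drive rate assumed to be the 200MB/s figure above; a simplified model of streaming reads):

        # Rough model: large RAID 5 reads approach (N - 1) x per-drive rate,
        # since one drive's worth of each stripe holds parity.
        def raid5_read_mbps(n_drives, per_drive_mbps=200):
            return (n_drives - 1) * per_drive_mbps

        print(raid5_read_mbps(4))  # 600 MB/s from a modest 4-drive array
        print(raid5_read_mbps(7))  # 1200 MB/s: past the 1GB/s mark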
    • *Seriously, with storage stuck in 7200 RPM or 10000 RPM, higher core clock is rather mood...*

      With 1.5GB of RAM... who the fuck cares? Are you constantly swapping or something, so you care?
  • This is so déjà vu.

    Now it's AMD's turn to pull an Even Steven on Intel with cool-running CPUs that also O/C high. That SOI sure does wonders, ever since they started using it on the first A64s.

    Most people don't run around overclocking their CPUs, but it is a great market to target (oh, I'm da rappa!) because Intel had great cores to O/C from the first Northwoods until the first Prescott, the bacon-cooker.
    • Forgot to mention a great article on the new core, its new features, and benchmarks from Xbit Labs [xbitlabs.com]
    • The Northwood was a good chip, and IMHO the only one in the P4 series that was really worth its money.

      But the 90nm Athlons are even better, with real-life power consumption (3500+ at default clock speed) below 40 watts according to some magazines. Performance is by now also better than the Northwood ever was.

      I wonder if Intel will ever officially market the Pentium M as a desktop processor (I know that some vendors are pushing it in barebones by now).
      It is the only chip from Intel that could compete with
  • by Enrique1218 ( 603187 ) on Thursday April 07, 2005 @10:43PM (#12172364) Journal
    I remember when there was an actual megahertz race between AMD and Intel. Now it appears as though everyone is out of breath. I can't believe we are still talking about 2.0 GHz AMD processors. Are they ever going to break 3 GHz? Intel seems to be no better off. How long has it been since the first 3 GHz chip was released, with no 4 GHz chips in sight? As a Mac user, I can only revel in the fact that physics has caught up with everyone and I no longer have to spout off about the megahertz myth in defence of my platform.
    • by mp3phish ( 747341 ) on Friday April 08, 2005 @12:21AM (#12172907)
      They are hitting the limits of the physical world with their current known solutions. Until there are more breakthroughs and improvements in chip fabrication, you won't see many 4GHz chips any time soon.

      Just as an example: for Intel to be able to get to 3.8GHz, they had to decrease their chip performance significantly. So now the IPC (instructions per clock) is lower on the 3.8GHz chips than on previous P4 chips. Every time they bump up the GHz they have to extend the pipeline, which lowers IPC.

      So you have this race between the physicists who are in charge of coming up with innovative ways to overcome physical limitations in chip fabrication, and you have the engineers redesigning their chip to work around these limitations. It is an uphill battle both ways and they have finally hit the ceiling where it is significantly detrimental to cost/performance at anything higher (for now).
    • AMD Athlon64 FX-55 overclocked to 4GHz [xtremesystems.org].

      A new FX release and we'll probably see some overclockers running stable 4GHz systems.

      3GHz... the FX will hit that for sure. They're at 2.8GHz now and a new model is on the way IIRC

    • by Perdo ( 151843 ) on Friday April 08, 2005 @04:00AM (#12173883) Homepage Journal
      Physics has caught up with no one. Transistors are still getting smaller, but heat is on the rise, as any 2.5 GHz water-cooled G5 owner knows.

      Think of it this way: work costs watts.

      No matter if you do a given amount of work using a narrow speed-racer architecture like the P4 or PPC970, or a wide architecture like the G4, Athlon 64/Opteron, Itanium or Pentium M, the work done costs watts, and generally the speed-racer designs end up paying more watts for the same work.

      The real current limitation is architecture complexity, where no one has a big enough brain to fit more than 150 million or so useful, non-cache transistors into their heads to debug the chip when there is a problem. Bob Colwell, former chief architect for Intel for the Pentium Pro/II/III/4, has spoken and written at length about this.

      When he left Intel, there were perhaps 2 people left that could debug the Pentium 4.

      Tejas was cancelled for this reason, as it was an even more complex version of the P4, certainly with AMD64 instructions included, possibly some EPIC (Itanium) compatibility, and a sort of SSE4, called at the time TNI or Tejas New Instructions, that was supposed to be the final step in bringing complete vector processing to the x86 world, which Apple of course calls AltiVec.

      This complexity limit has caused architecture advancement to virtually stagnate, while Moore's law marches on: 200 million transistors last year, 400 million in 2005, a billion in 2007. What to do with the transistors? Add more cores, since individual cores cannot get any more complex, and cache has a limited effect after 1MB, as Itanium and the G4 show. Cache is a poor substitute for a good memory bus, and after 2MB it's all crutches to keep poor architectures competitive with the better architectures out there.

      Why the stagnation at 3 GHz, or more specifically 3.06? Because that is all the Northwood architecture could do, and Prescott, its replacement, was starting to hit that complexity limit and was delayed 8 months.

      When Prescott arrived, it was hot, almost 175W per cm^2. It was not the process (90nm) that caused the heat, because Dothan (the Pentium M in Centrino) was only 27 watts on the same process, and no one could figure out why Prescott ran so hot. So Intel got stuck, ramping clock speed only 533 MHz in two and a half years, after doubling it from 1.5 to 3.06 GHz in the previous 2 years.

      AMD changed horses from the 2.0 GHz Athlon to the 2.0 GHz Athlon 64 and jumped 25% to 100% in performance, depending on the benchmark, mostly due to the integrated memory controller, not its 64-bitness. It would take a 3.2 to 4 GHz Athlon to match a 2.8 GHz Athlon 64, and a 4.2 to 5.4 GHz Pentium 4 to match it.

      There is a performance race on, and a marketing bullshit race for clock speed, which may or may not mean a processor performs better.

      Sounds like you have only been following the marketing bullshit race..

      But then, you are an Apple owner.

      • They knew why it was so hot...

        They had to leave the germanium from the silicon-stretching (strained silicon) process on the die, since removing it would require the use of a patented IBM process, which BTW AMD has a license to. This causes a great deal more current leakage than the IBM or AMD chips have at 90nm. That is why the power consumption of the IBM and AMD chips went down at 90nm, while Intel's original 90nm chips got hotter.

        This is of course a simplification... But it's 3am...
  • Karaman (Score:5, Interesting)

    by Karaman ( 873136 ) on Thursday April 07, 2005 @10:55PM (#12172437)
    I think of AMD64 more as a consumer than a flame-seeker. Is it the most powerful? NO CLUE. Is it stable? YES. Is it cooler? YES. Is it affordable? YES. Is it for a PC? YES. Why should I buy anything that is more advertised but actually too expensive? I don't buy it. Others buy it. But not me! I like my AMD :) I RTFA and I am going to say it once: overclocking capability does not just mean speed; it means stability under extreme circumstances, and therefore guaranteed stability under normal circumstances!

"Aww, if you make me cry anymore, you'll fog up my helmet." -- "Visionaries" cartoon

Working...