
AMD's Piledriver To Hit 4GHz+ With Resonant Clock Mesh

MojoKid writes about some interesting news from AMD. From the article: "Advanced Micro Devices plans to use resonant clock mesh (PDF) technology developed by Cyclos Semiconductor to push its Piledriver processor architecture to 4GHz and beyond, the company announced at the International Solid-State Circuits Conference (ISSCC) in San Francisco. Cyclos is the only supplier of resonant clock mesh IP, which AMD has licensed and implemented in its x86 Piledriver core for Opteron server processors and Accelerated Processing Units. Resonant clock mesh technology will not only lead to higher-clocked processors, but also significant power savings. According to Cyclos, the new technology is capable of reducing power consumption by 10 percent or bumping up clock speeds by 10 percent without altering the TDP." Unfortunately, aside from a fuzzy whitepaper, actual technical details are all behind IEEE and other paywalls with useless abstracts.
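For scale, here is a minimal sketch (my own illustration with made-up numbers, not AMD or Cyclos data) of why a 10 percent saving on clock-distribution power can alternatively buy roughly 10 percent more frequency at the same TDP: dynamic switching power is roughly linear in frequency, P ≈ α·C·V²·f.

```python
# Dynamic power of a CMOS clock network scales roughly as P = a*C*V^2*f.
# All figures below are illustrative assumptions, not AMD/Cyclos data.

def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    """Classic switching-power model: P = a * C * V^2 * f."""
    return activity * c_farads * v_volts**2 * f_hz

# Hypothetical clock network: 1 nF switched capacitance, 1.2 V, 4 GHz.
baseline = dynamic_power(c_farads=1e-9, v_volts=1.2, f_hz=4.0e9)

# Option A: the resonant mesh recycles ~10% of clock power at the same clock.
saved = 0.90 * baseline

# Option B: spend the savings on frequency instead -- since P is linear in f,
# the same power budget supports roughly 1/0.9, i.e. ~11%, higher clock.
f_boosted = 4.0e9 * (baseline / saved)

print(f"baseline clock power: {baseline:.2f} W")
print(f"with resonant mesh:   {saved:.2f} W")
print(f"iso-power frequency:  {f_boosted / 1e9:.2f} GHz")
```

This is only the first-order tradeoff; real designs also juggle voltage and leakage.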
  • Re:vaporware (Score:5, Informative)

    by Anonymous Coward on Monday February 27, 2012 @08:04PM (#39179955)

This is an ad. What is a "resonant clock mesh"? That sounds really cool. So I started RTFA (I know, sorry). You don't have to chastise me too much, because I stopped reading soon, right after:

    An average Google search is reported to
    require ~ 0.3 watts, about the same amount of power that it takes for a 100 watt light
    bulb to be lit for 10 seconds.

    Which was obviously not written by anybody who has any clue what they are talking about.

  • Re:vaporware (Score:2, Informative)

    by the linux geek ( 799780 ) on Monday February 27, 2012 @08:40PM (#39180331)
    Bulldozer - their current architecture - was really bad. Slow, mediocre price/performance ratio, and power-hungry. It remains to be seen if Piledriver can make it all better.
  • Re:vaporware (Score:5, Informative)

    by ifiwereasculptor ( 1870574 ) on Monday February 27, 2012 @08:48PM (#39180413)

Well, here's AMD in a nutshell:

    Brazos, the ultra low power processor, is a success.

Llano, the A series, is actually a very solid product. For the cost of an i3, you get a quad core that is about 1/4 slower overall, but whose integrated graphics are about 3 times faster. It's actually selling very well.

    Bulldozer is a disaster unless all you do is video encoding.

Now, here's the puzzling part: they want to use Bulldozer, the failure, as the new core for the A series, the success. I hope they find a way to fix it; otherwise my next rig will have an Intel chip for the first time in ten years.

  • Re:vaporware (Score:4, Informative)

    by hawguy ( 1600213 ) on Monday February 27, 2012 @08:55PM (#39180495)

This is an ad. What is a "resonant clock mesh"? That sounds really cool. So I started RTFA (I know, sorry). You don't have to chastise me too much, because I stopped reading soon, right after:

    An average Google search is reported to
    require ~ 0.3 watts, about the same amount of power that it takes for a 100 watt light
    bulb to be lit for 10 seconds.

    Which was obviously not written by anybody who has any clue what they are talking about.

I think it was a typo (or an edit by someone who doesn't know what they are talking about). They should have said 0.3 watt-hours (and should have said "energy" instead of "power").

    Google says they use 0.0003 kWh of energy per search [].

A 100 W bulb uses 0.1 kWh in an hour, 0.0000278 kWh in a second, or 0.000278 kWh (0.278 Wh) in 10 seconds.

Therefore, a 100W bulb running for 10 seconds uses about the same amount of energy as an average Google search. That is a lot higher than I thought it would be; since I use 20 W CFLs, each Google search is the equivalent of about 50 seconds of light. Just while typing this reply, I did enough Google searches to light up my room for about 15 minutes.
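The unit conversions above can be checked in a few lines (using the figures from the comment; nothing else is assumed):

```python
# Verify the comparison: 0.3 Wh per Google search vs. a 100 W bulb for 10 s.
search_wh = 0.0003 * 1000             # 0.0003 kWh per search -> 0.3 Wh
bulb_wh = 100 * (10 / 3600)           # 100 W for 10 s -> ~0.278 Wh
cfl_seconds = search_wh / 20 * 3600   # same energy through a 20 W CFL

print(f"search: {search_wh:.3f} Wh, bulb for 10 s: {bulb_wh:.3f} Wh")
print(f"a 20 W CFL runs {cfl_seconds:.0f} s on one search's energy")
```

The CFL figure comes out to 54 seconds, so "about 50 seconds of light" per search checks out.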

  • resonate clock mesh (Score:5, Informative)

    by slew ( 2918 ) on Monday February 27, 2012 @09:21PM (#39180763)

Quick background: clocks on most generic chips today are structured as trees. As you can imagine, the fan-out of the clock tree is pretty large and thus requires clock buffer/driver circuits, which need to be balanced so that the clock signal reaches the leaves at about the same time (in a typical design where you don't use a lot of physical-design tricks). To ease balancing the propagation delay, the clock tree often physically looks like a fractalized "H" (imagine the root clock driving from the center of the crossbar out toward the leaves at the corners of the "H", with the wire lengths of the clock tree segments kept equal; then the corners of the big H drive the centers of smaller "H"s, and so on). Of course, at the leaves there can be some residual imbalance due to small manufacturing variations and wire loading, which has to be accounted for when closing timing for the chip (to avoid short paths), and ultimately these imbalances limit the upper frequencies achievable by the chip.
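The balanced H-tree idea is easy to see in a toy model (illustrative geometry only, not any real floorplan): each recursion level adds the same wire length to every branch, so every root-to-leaf path is identical by construction, which is what keeps skew low.

```python
# Recursive H-tree: each level is an "H" half the size of its parent.
# The root drives the center; leaves sit at the corners. Because every
# level adds the same wire length to every branch, all root-to-leaf
# wire lengths come out identical.

def h_tree_leaves(x, y, half, depth, path_len=0.0):
    """Return (leaf_position, root_to_leaf_wire_length) pairs."""
    if depth == 0:
        return [((x, y), path_len)]
    leaves = []
    # Travel half the crossbar, then half a vertical, to reach each corner.
    step = half + half
    for dx in (-half, half):
        for dy in (-half, half):
            leaves += h_tree_leaves(x + dx, y + dy, half / 2,
                                    depth - 1, path_len + step)
    return leaves

leaves = h_tree_leaves(0.0, 0.0, 8.0, depth=3)
lengths = {round(length, 6) for _, length in leaves}
print(f"{len(leaves)} leaves, distinct path lengths: {lengths}")
```

With depth 3 this fans out to 64 leaves, all at exactly the same wire distance from the root; the residual skew on a real chip comes from the manufacturing and loading variations mentioned above, not from the geometry.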

Additional background: In any electrical circuit, there are some so-called resonant frequencies because of the distributed (or lumped) inductances and capacitances in the network. That is, some frequencies experience a lot less energy loss than average (for the car-analogy buffs: you can get your car to "bounce" quite easily if you bounce it at its resonant frequency).

The basic idea of the Cyclos technology is to "short-circuit" the middle of the clock tree on the chip with a mesh, to make sure the whole middle of the clock tree is coordinated to the same clock (as opposed to a typical H-tree clock, where jitter builds up at every stage from the root). That way you avoid some of the imbalances that limit the upper frequencies achievable by the chip. The reason I put "short-circuit" in quotes is that it really isn't a short circuit. If you just arbitrarily put a mesh in the middle of a clock tree, although it would tend to get the clocks aligned, it would present a very large capacitive and inductive load to drive and would likely increase power greatly. **Except** if that mesh is designed so that it resonates at the frequency you are going to drive the clock at, then you can get the benefit of jitter reduction without the power cost. Since you get to pick the physical design parameters of the mesh (wire width, length, grid spacing, and external tank-circuit inductance) and the target frequency, theoretically you can design that mesh to be resonant (well, that remains to be seen).
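As a rough illustration of picking those parameters (the numbers here are my own assumptions, not Cyclos figures): for an LC tank, resonance sits at f = 1/(2π√(LC)), so given an estimate of the mesh capacitance you can solve for the tank inductance that makes the mesh resonate at the target clock.

```python
import math

def tank_inductance(f_target_hz, c_mesh_farads):
    """Solve f = 1 / (2*pi*sqrt(L*C)) for the inductance L."""
    return 1.0 / ((2 * math.pi * f_target_hz) ** 2 * c_mesh_farads)

# Hypothetical mesh capacitance of 1 nF and a 4 GHz target clock frequency.
L = tank_inductance(4.0e9, 1e-9)
print(f"required tank inductance: {L * 1e12:.3f} pH")
```

The required inductance comes out to a couple of picohenries for these made-up values; the hard part in practice is hitting that value across process variation while the mesh capacitance is itself only an estimate.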

The reason this idea hasn't been used to date is that it's a hard problem to create a mesh with the proper parameters, and the processor then really has to run at that frequency all the time (well, you can do clock-cycle eating to approximate lower frequencies). Designers have gotten better at these things now, and the area budgets for this sort of structure have gotten into the affordable range as transistors have shrunk.

FWIW, in a pipeline design (like a CPU), it's sometimes advantageous to have a clock-follows-signal clocking topology, or even an async strategy, instead of a clock tree. But of course there is a complication if there is a loop or cycle in the pipeline (often at, say, a register file or a bypass path), so that trick is limited in applicability, whereas the mesh idea is a more general solution to clock-network jitter problems.

    Here's a white paper that describes this idea... []

  • Re:That's nice (Score:4, Informative)

    by Shark ( 78448 ) on Monday February 27, 2012 @09:24PM (#39180803)

He's not talking about running the G-code, he's talking about generating it from a model. Most CAM software is very CPU-intensive during toolpath generation.

  • Re:vaporware (Score:5, Informative)

    by Nursie ( 632944 ) on Monday February 27, 2012 @10:46PM (#39181459)

    Err, there was a time 8-12 years ago when AMD *did* snatch the performance crown.

Around the time of the Athlon 64's appearance, when Socket 939 came along, they were actually both faster and cheaper than Intel. Nothing Intel had could match the FX range on the desktop, and nothing Intel was doing in the server room could match Opteron at the time. Intel was struggling with its NetBurst architecture (IIRC), which had high clock speeds and performed slightly better under some loads (video encoding, IIRC) but markedly worse for pretty much everything else.

It didn't last long; Intel took back the performance crown, and after a few years made serious inroads into the budget sector as well. But for a brief, shining moment (around the time the FX-55 and FX-57 were released), AMD held the crown.

  • Re:vaporware (Score:4, Informative)

    by level_headed_midwest ( 888889 ) on Monday February 27, 2012 @11:56PM (#39181891)

AMD *does* push out affordable 4-socket Opteron setups: the Opteron 6000 series CPUs. They are selling for a whole lot less now than in the K8 days. The least-expensive Opteron 6000s sell for $266 each and the most-expensive ones run around $1200-1500, compared to the K8-era 4-way-capable Opterons, which started around $800 each and went up to close to $3000. Considering a 4-way-capable Intel Xeon still costs close to $2000, goes up to near $5000, and is based on two-year-old technology, the Opterons are that great deal you were wishing for.

However, on the desktop, Intel has gotten much better with its pricing (i.e., it doesn't cripple lower-end chips as severely as it used to) and is giving AMD a real run for its money.

  • Re:vaporware (Score:4, Informative)

    by hairyfeet ( 841228 ) <bassbeast1968@gm ... minus herbivore> on Tuesday February 28, 2012 @08:00AM (#39183745) Journal

May I make a suggestion? Tiger has been selling their remaining stock of 95W Thubans for around $100 (in case you haven't heard, in a serious "WTF are they thinking?" move, AMD has killed AM3 for two sockets that have less than a year of life in them, FM1 and AM3+). Sign up for their emails; that is where they have been offering them lately. I got one, and with the money I saved I upgraded my ECS board to a nicer ASRock, and I must say I couldn't be happier. The 1035T is not only around 40% faster than my 925 Deneb, but whereas the Deneb would max out at around 139°F doing transcodes, with the Hyper N520 cooler I paired the Thuban with I'm getting a max of 114°F, and that's after seven and a half hours of slamming the CPU with VirtualDub. At idle this baby reads below room temp: no kidding, looking at Coretemp my chip shows 67°F and the room is 72°F. Frankly I've never been happier with a chip upgrade in my life, and it's just a damned shame AMD has killed AM3, but their loss is your gain if you jump on it and snatch one while they're cheap. I mean, 6 cores for $109? How can you beat that? Paired with 8GB of RAM and a CrossFire-enabled board, I figure this baby will last me until 2020 easy. What a sweet chip.

But for everyone who wants to save some money and have a nice chip: snatch one of the AM3s NOW, before the stock runs out, because when they are gone, that's it. I went ahead and built my GF a new Athlon X3 box and gave the Deneb to my youngest, and as soon as this next batch of laptops gets sold I'll be building the oldest an X3 or X4 before supplies run out. The really nice AM3 boards have never been cheaper, and paired with 4-8GB of DDR3 and a Hyper 212 or Hyper N520 they make pretty badass desktops: plenty of OCing headroom if you desire, and easy to unlock, so that X3 can easily be the cheapest quad you'll ever buy. But for me, that X6 so cheap? Hell, how could you not love getting 6 cores for $109 shipped? That's a no-brainer.

  • Re:vaporware (Score:5, Informative)

    by hairyfeet ( 841228 ) <bassbeast1968@gm ... minus herbivore> on Tuesday February 28, 2012 @08:13AM (#39183785) Journal

Actually, I'd say buying ATI was one of the smartest things they ever did. One can argue that if they had waited until the market tanked they could have gotten it cheaper, but hindsight and all that. But have you tried Bobcat? Less than 18W for a dual core with an HD 6310 GPU, and it often runs at less than 12W. Hell, AMD had to slow down their desktop production simply because they didn't have enough capacity to meet demand for the Brazos platform. If that's failure, I'll take two, please. Go to someplace like Tiger and see how many units ship with the E-350: we are talking netbooks and laptops, HTPCs and all-in-ones; the OEMs are cranking out new designs using those chips as fast as they can. I walked into my local Wally World the other day and fewer than 4 units were Intel; the rest were all AMD Fusion. And don't forget this is still running on VLIW GPUs; the next revs will replace them with vector units, which should behave like a hyper-powerful FPU when not needed for graphics.

So I'd say that while AMD has made some SERIOUS mistakes (killing the AM3 line and the Stars arch before getting the bugs in the BD/PD design fixed, or better yet replacing it for the consumer chip, and trying to push a server chip like BD/PD as a desktop chip), frankly the APUs created thanks to the merger have been one of the few smart moves they've made. With Brazos they have a unit that stomps Intel+ION while often costing less than Intel alone, and thanks to Intel shooting themselves in the face by killing the Nvidia chipsets, there won't be any new ION designs. With Brazos you have a unit that sips power, is quiet, and runs cool enough to be passively cooled, while still able to do 1080p over HDMI. If you haven't tried one, you really should; it's a sweet chip.
