
AMD Outlines Plans For Zen-Based Processors, First Due In 2016

crookedvulture writes: AMD laid out its plans for processors based on its all-new Zen microarchitecture today, promising 40% higher performance-per-clock from the x86 CPU core. Zen will use simultaneous multithreading to execute two threads per core, and it will be built using "3D" FinFETs. The first chips are due to hit high-end desktops and servers next year. In 2017, Zen will combine with integrated graphics in smaller APUs designed for desktops and notebooks. AMD also plans to produce a high-performance server APU with a "transformational memory architecture," likely similar to the on-package DRAM being developed for the company's discrete graphics processors. This chip could give AMD a credible challenger in the HPC and supercomputing markets, and it could also make its way into laptops and desktops.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I've been hobbling along with my FX-8350 and AM3+ for a while now and have been wanting to upgrade. If it lives up to the hype, unlike Bulldozer and Piledriver, then I'll definitely get one. Now if only the process actually works....

    • by kalpol ( 714519 ) on Wednesday May 06, 2015 @06:00PM (#49633573)
      Hobbling???? I just upgraded TO an 8350 from an Athlon 5200+ (which did pretty much everything I asked of it, including MythTV and watching Netflix in VirtualBox). I don't know what to do with all these cores now.
      • Re: (Score:3, Informative)

        by drinkypoo ( 153816 )

        Still trucking along on a Phenom II X6 1045T... 6x2.8 GHz or 3x3.2 GHz still seems like a lot. I can't remember the last time I was CPU-bound. I have to spend more than a hundred bucks on a GPU, I guess.

        • One of our HTPCs is rocking an Athlon64 with an All-in-Wonder 9800 AGP card. One of the last ones to support component video out. Plugged into a 27" CRT. Works great for Plex Web client (via Google Chrome).

          The main server in the house is a Phenom-II something-or-other. With a lowly, silent Nvidia 210 GPU for transcoding videos as needed via Plex Mediaserver.

          Neither system is CPU-bound, or even GPU-bound, for what they do. The lowly 1 TB SATA drives in the server (even in a 4-disk RAID10) are the bottleneck.

          • One of our HTPCs is rocking an Athlon64 with an All-in-Wonder 9800 AGP card. One of the last ones to support component video out. Plugged into a 27" CRT. Works great for Plex Web client (via Google Chrome).

            Nothing wrong with any of that... other than perhaps the 27" CRT. :) How you can stand to look at that anymore is beyond me, but to each their own...

            And that computer is fine for SDTV, but if you were running HDTV it probably would have issues.

            • Eh, it still works without any issues, it's in the bedroom, and really only used when we're too sick to walk downstairs. Why get rid of a perfectly working TV?

              Eventually, we'll replace the lowly 39" LCD downstairs with something larger, move that one into the bedroom, and move the CRT into the kids' room with the NES, SNES, and Wii. :)

              At that point, I'll have to replace the Athlon64 with something that can handle 720p or 1080p. Until then, we'll just keep on rocking.

              • Eh, it still works without any issues, it's in the bedroom, and really only used when we're too sick to walk downstairs. Why get rid of a perfectly working TV?

                Because staring at an electron gun shooting at your eyes is bad for you?

                CRTs always gave me headaches, the flicker and refresh of the screen. The move to LCDs was very welcome.

                Eventually, we'll replace the lowly 39" LCD downstairs with something larger, move that one into the bedroom, and move the CRT into the kids' room with the NES, SNES, and Wii. :)

                Oh goodie, ruin the kids' eyes! :)

                At that point, I'll have to replace the Athlon64 with something that can handle 720p or 1080p. Until then, we'll just keep on rocking.

                And that was the point, you said your old Athlon was just fine, and I was simply pointing out for everyone else who isn't running a CRT that your statement had a catch to it. :)

        • by Mal-2 ( 675116 )

          Moving up to a 1090T or 1100T Black Edition could be nice. 3.2 or 3.3 out of the box, but 3.8 or 3.9 is almost a given without worrying about voltage bumps and awesome cooling. My 1090T was throttling at 3.8 when running SuperPi on the stock cooler, but I have no such problems with a Hyper 212. I would need more cooling to run all-day-every-day at 4.0.

          Granted, 2.8 to 3.8 may not make a lot of real world difference when memory bandwidth comes into play, and as you pointed out, the GPU has a lot to do with it (wh

          • I might think about an upgrade like that when those processors are being thrown away, if I haven't upgraded before then. But right now they're still over $120 for the 1090T, and around $200 for the 1100T. The last time I spent that much I upgraded from a 720BE and doubled my cores. If I'm going to spend another hundred bucks, I need to double my performance again, which isn't really possible. It would make more sense to just buy a new MB+CPU, otherwise. The one I have now doesn't even support SLI, which I w

          • Why focus on clock speeds like that?

            The biggest advantage of newer chips is that they do the same job using less power.

            My 9-year-old fileserver pulls 450W. Replacing the motherboard with a quad-core Atom C2000 or equivalent would drop that by 300W, and pay for itself in 3-4 months just in reduced electricity charges.
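The payback arithmetic above checks out; here is a minimal sketch of it, where the 0.15 USD/kWh electricity price and ~120 USD board cost are my assumptions, not figures from the comment:

```python
# Back-of-the-envelope payback estimate for the 300 W savings above.
# Price per kWh and board cost are assumed values, not from the comment.
def payback_months(watts_saved, board_cost_usd, usd_per_kwh=0.15):
    kwh_per_month = watts_saved / 1000 * 24 * 30  # always-on server
    savings_per_month = kwh_per_month * usd_per_kwh
    return board_cost_usd / savings_per_month

months = payback_months(300, 120)  # about 3.7 months at these assumptions
```

At 300 W saved, that is 216 kWh per month, or roughly $32/month, which lands squarely in the "3-4 months" claimed.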

        • Comment removed based on user account deletion
        • by ooshna ( 1654125 )

          Still rocking my X3 720be

          • I have one of those too but I've been too lazy to badcap the system it's in. It was a really great processor too, got it to OC to 3.2 with a $20 cooler master heat pipe and arctic silver iv. I bought that originally for $100, and then upgraded to this twice-the-cores X6 1045T later, for $110 shipped or so. You just can't beat the value of these mid-range AMD chips.

        • I have the same chip and it is ridiculous at everything but gaming. Its single-core performance is laughably bad. My girlfriend's 1st-gen i5 absolutely smokes it.

          • I have the same chip and it is ridiculous at everything but gaming.

            Ridiculously good for the amount of money I have spent on it versus the amount of time I've had it, I think you mean.

            Its single-core performance is laughably bad.

            Single-core performance? What is this single-core? Nothing that uses a lot of CPU has been single-threaded for ages. Whether I'm [de]compressing an archive, transcoding video, playing a game, or surfing the web, I'm using multiple threads which can be, and are, relocated across processors.
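For what it's worth, a quick way to watch a CPU-bound job spread across every core is a process pool; this sketch uses only the standard library, and `busy_sum` is just an illustrative stand-in for compression or transcoding work:

```python
# Spread CPU-bound work across all hardware threads; a process pool
# sidesteps the GIL, so each worker can saturate its own core.
import os
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n):
    # Stand-in for real CPU-bound work (compression, transcoding, ...)
    return sum(range(n))

if __name__ == "__main__":
    workers = os.cpu_count()  # e.g. 6 on a Phenom II X6
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(busy_sum, [10**5] * workers))
    print(workers, results[0])
```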

            My girlfriend's 1st-gen i5 absolutely smokes it.

            Bullshit. We all know you don't have a girlfriend.

        • by kriston ( 7886 )

          I just retired my 1045T desktop and moved it to a VMware server. With SSDs it performs very well in this application.

          • I just retired my 1045T desktop and moved it to a VMware server. With SSDs it performs very well in this application.

            That's probably what mine will do when I finally decide to upgrade to a newer processor in another year or two. The only problem is that it's relatively power-hungry compared to the C2D that I have doing this now. Well, it's a libvirt server, but anyway

            • by kriston ( 7886 )

              Yes, it has slightly higher power consumption, but given this processor's insanely large 6MB L3 cache and 6 x 512KB L2 caches, I can accept it. It is a solid performer.

        • by armanox ( 826486 )
          The funny part is the Phenom II is a much better CPU than the FX.
          • Not so clear cut. The Phenom has a slight edge in single-core performance at the same speed, but the FX can give you more cores (8 are readily available but the Phenom topped out at 6) and is now available with higher clock speeds.

            One catch with the FX is that you need updates to the scheduling algorithm in the OS. In the case of Windows that means you have to run Windows 8 (or presumably the Windows 10 preview) because earlier Windows versions never got updated to take full advantage of the FX.

            The other ca

            • by armanox ( 826486 )
              Other catches with the FX vs the Phenom II: floating point. Since the modules share an FPU, the Phenom II X6 has more FPUs than the FX-8xxx does. And since most people are running Windows 7, not 8 or Linux, the improved processor scheduling never happens.

              Other things I noticed: since most games rely on single-threaded performance, the Phenom II has the edge there (and Intel a much larger edge). For heavy math, AVX on the FX-81xx series performs poorly compared to Intel.
              • The shared FPU on the FX series is rarely a problem. Sure, there is only one that is shared between a pair of cores - but it is a 128 bit FPU that can do two 64 bit operations simultaneously. It's not likely to be a bottleneck unless you are working with long double or binary128 data, in which case it can only do one operation at a time.

                The most common problem with poor utilization of the core pairs in the FX series is cache contention. That's where the updated process scheduler comes in. To oversimplify a
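When the OS scheduler can't be updated, one workaround for that cache contention is manual placement. This is a sketch assuming Linux and the common FX numbering where logical CPUs (0,1), (2,3), ... are the core pairs sharing a module; the helper name is mine, not from the comment:

```python
# Pin heavy processes to different Bulldozer modules so they don't
# contend for a shared L2 cache. Linux-only (sched_setaffinity).
import os

def pin_to_module(pid, module_index):
    """Pin a process to the first core of a given module."""
    # Assumed layout: logical CPUs 2k and 2k+1 share module k.
    cpu = module_index * 2
    os.sched_setaffinity(pid, {cpu})

# Pin the calling process (pid 0) to module 0; a second CPU-hungry
# process could be pinned to module 1 so the two never share a module.
pin_to_module(0, 0)
```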

      • by Zanadou ( 1043400 ) on Wednesday May 06, 2015 @06:16PM (#49633687)

        This.

        I have an FX-8350 too. Given how much the cores sit around idling at less than 5% usage, I don't think I'll need to upgrade my CPU before 2020. RAM, on the other hand...

        As many other people have noted: the CPU speed wars have been over since the Intel Wolfdale/AMD Deneb days of 2009-2010.

        • Agreed on the RAM. When DDR4 hits the mainstream product lines, that is going to be a nice bump.

        • by hsa ( 598343 )
          AMD processors multithread badly. The multithreading inside a module was terrible in Bulldozer and not much better in later generations. Put shortly: you have 4 "fast" cores, and each of those cores is paired with a "slow" core. The scheduler doesn't really know which core is fast and which is slow, and that results in poor real-life performance - just look at any decent game benchmark!

          Meanwhile, Intel is transitioning to a 14nm process, while your processor is still 32nm from 2012, and this Intel of mine is actually 22nm from the same year. These numbers might not tell you much, but the main difference to me is that my processor runs much cooler and requires less power. AMD is way behind the competition.
          • Meanwhile, Intel is transitioning to a 14nm process, while your processor is still 32nm from 2012, and this Intel of mine is actually 22nm from the same year. These numbers might not tell you much, but the main difference to me is that my processor runs much cooler and requires less power. AMD is way behind the competition.

            They tell me a lot, and you pretty much reached the same conclusions that I reached a year ago. AMD still makes good processors, but they have been behind Intel for a while now. When it came to picking a new processor for my HTPC, I picked an A5350 over an i3. It was a perfect processor for the job, too.

            Next year I'll be sniffing around to replace the FX-8150 workstation I have. Unless AMD does something drastic to get caught up, I will probably be going with an i7 system.

            I've been an AMD fan

      • by Rellon ( 28691 )

        Lol, yeah. I was running a Phenom 1100T when they first came out and upgraded to the FX-8350 when it was launched over 2 years ago. I've been considering buying the FX-9590 just to tide me over till these come out.

        To be honest, I have no real need to upgrade until I get more into 4K gaming and upgrade from the overclocked 290x that I have now.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Wait until you try to do some video encoding. The new instruction set support alone will justify the update.

      • by Anonymous Coward

        Sorry, but without ECC RAM support these chips are useless.
        AMD... please note, the Intel E3-1276v3 has ECC interface on the die (and also has GPU on die)... you need to do this, otherwise the server world is moving permanently to Intel.

        • by dshk ( 838175 ) on Thursday May 07, 2015 @06:05AM (#49636507)
          What are you talking about? All AMD server boards support ECC. In contrast to Intel, AMD always puts every feature into every processor of the same generation. AMD does not artificially dumb down even its cheapest processor. That is one of the things I like about AMD processors. I do not have to check which random feature is disabled in a particular processor. Even some desktop AMD motherboards have ECC support, like the SABERTOOTH 990FX.
        • Nearly all AMD CPUs/APUs support ECC memory; you just need the right motherboard... ASUS BIOSes have consistently supported AMD/ECC memory combinations for many years.

          Not long ago, I configured a number of Phenom II X6 1045t and FX-8320 systems with 8-16GB of ECC memory on ASUS M4A88T-M and M5A78L-M motherboards. This link indicates the Zen series [fudzilla.com] will support 4 channel ECC/DDR4.

          • by Agripa ( 139780 )

            The AMD processors which use socket AM2, AM2+, AM3, and AM3+ support ECC but the processors for the FM series of sockets including the APUs do not. The recent exceptions are the processors for the FP3 (notebooks) and AM1/FT3 (small form factor/mobile) processors which leaves ECC out for AMD's highest performance desktop Piledriver and Steamroller APUs.

            AM1 with ECC looks great for lower power network appliances, servers, and disk storage but the only thing faster with ECC is AM3+ without the APU. :/

      • I won't say hobbling. I have both the FX-8150 and the FX-8350, and they are both definitely behind the curve when it comes to technology. Both of these processors were gifted to me by a friend who was sick of always being behind the curve when it comes to AMD. He hopped over to Intel with a couple of i7s last year.

        I'm convinced that unless something changes and AMD gets on the ball with this release, my current AMD systems will be the last AMD systems I own.

        • I won't say hobbling. I have both the FX-8150 and the FX-8350, and they are both definitely behind the curve when it comes to technology. Both of these processors were gifted to me by a friend who was sick of always being behind the curve when it comes to AMD. He hopped over to Intel with a couple of i7s last year.

          I'm convinced that unless something changes and AMD gets on the ball with this release, my current AMD systems will be the last AMD systems I own.

          And I am hoping that AMD's CPUs reach or exceed the performance of comparable Intel products. Intel 4790 stuff is about $200 too high per unit.

    • If it is hobbling, you probably didn't do a very good analysis of whether it matches your workload, because it is not a high-end general-purpose CPU. ;)

      Mine is freakin' awesome. It does everything I ask of it so easily that I can't even hear the fans.

    • by Agripa ( 139780 )

      One of my requirements is ECC so I have been hobbling along with a Phenom II 940 and AM2+ which was less than half the price of the Intel equivalent at the time. The current Intel alternative would be a Xeon E3-1220 v3 to Xeon E3-1276 v3 which I will consider if I have to replace it but the Intel solution is still more expensive.

  • by rubycodez ( 864176 ) on Wednesday May 06, 2015 @05:43PM (#49633445)

    14nm tech may be the end of the line for CMOS. The 10nm node that follows may not even be possible.

    • Pundits have been saying that at least since 90nm, you know. Then 65, and again at 45, 32, 22, and now at 14.

      • by rubycodez ( 864176 ) on Wednesday May 06, 2015 @06:48PM (#49633901)

        None of those other node pitches involved dimensions at which quantum-mechanical tunneling is the dominant effect, or gate thicknesses of one atom. But that's what 10nm is.

        • by ITRambo ( 1467509 ) on Wednesday May 06, 2015 @07:32PM (#49634123)
          We are a ways away from not being able to shrink dies further using known technologies. One atom, in this context, is much smaller than 10 nm. The range is 0.1 to 0.5 nm using various methods of calculating the atom's diameter. Source on atom size: http://hypertextbook.com/facts... [hypertextbook.com]
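To put a number on that scale difference: even using a mid-range atomic size, a 10 nm feature is still dozens of atoms wide. A back-of-the-envelope check, where the 0.22 nm figure is an approximate silicon covalent diameter I'm assuming, not a value from the comment:

```python
# How many atoms span a 10 nm feature, assuming ~0.22 nm per atom.
feature_nm = 10.0
atom_diameter_nm = 0.22  # approx. silicon covalent diameter (assumed)
atoms_across = feature_nm / atom_diameter_nm
print(round(atoms_across))  # roughly 45 atoms across, not 1
```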
          • by Anonymous Coward

            The problem is not building small structures. The problem is designing small structures that give you working chips. Sure, you can build a 10-atom "insulator". Unfortunately, a large number of electrons will disagree and act as if it's a conductor.

            This is made worse by the need to get all your insulators perfect, which is increasingly hard as the gate counts go up while the dimensions go down. An insulator that's 1 nm off is now a much bigger problem than it used to be.

        • by Kjella ( 173770 )

          None of those other node pitches involved dimensions at which quantum-mechanical tunneling is the dominant effect, or gate thicknesses of one atom. But that's what 10nm is.

          Not even close. They have made functional 3nm FinFET [eetimes.com] transistors at the research stage. Whether they can be produced in the billions is doubtful, as it requires every atom to be in the right place, but 10nm still has some margin of error. The end of the road is in sight, though...

        • Quantum effects really start dominating when insulator or in some cases semiconductor (which acts as insulator) layers get to about 6 atoms thick.

          Still a way to go.
      • by Mal-2 ( 675116 )

        Pundits have been saying that at least since 90nm, you know. Then 65, and again at 45, 32, 22, and now at 14.

        And those limits were dodged by some new process: SOI, copper interconnects, what have you. He's not saying we won't see 10 nm, he's just saying there's a good chance it will have to be something other than CMOS.

      • Pundits were saying that about 90nm, but usually that was because they misunderstood when the engineers (who were multiple product cycles ahead of the consumer-pundits) were speculating that they "would" "soon" be reaching these limits. ;)

        Other times it was as simplistic as, "Can we keep shrinking this forever?" "No." "How far can we go using current technology?" "Using current technology we can only go as far as we can currently go."

        • Maybe that could account for a few cases, but there were other issues [geek.com] at least some researchers didn't think would be solved so quickly either. In the link I provided (unfortunately, the original article isn't found, just the summary), a researcher in 2002 was claiming that CMOS would end up at 45 nm and halt there because of the issues with thermal noise. I also remember these types of predictions being made by researchers at Intel as well.

          Look, all I'm saying is that *every* time so far the prediction w [slate.com]

    • by Megol ( 3135005 )

      10nm is proven possible. Samsung have demonstrated some 10nm chips already.

      Transistors have been made much smaller in process development labs so we know that the scaling will continue some time into the future.

    • by HannethCom ( 585323 ) on Wednesday May 06, 2015 @07:19PM (#49634079)
      Intel develops technology which doesn't make it into their plants for 5 to 10 years. Also, they don't put things on their roadmap until they've been proven possible.
      http://www.xbitlabs.com/news/c... [xbitlabs.com]
      Intel's 2012 roadmap shows a 4nm process in 2022. Which means they have a process that has been tested to work; they are just tweaking it to reduce errors and working out the best way to outfit a plant for it. Refitting a plant also costs billions and takes time.
    • TSMC has already produced test wafers on 10nm and plans to enter volume production in 2016
      http://www.fudzilla.com/news/p... [fudzilla.com]

    • by Megol ( 3135005 )

      10nm chips have been manufactured and demonstrated by Samsung, and Intel plans on producing 10nm chips in 2016. 10nm isn't a problem; process development is close to finished. 7nm doesn't seem to present any physical problem.
      And 3nm transistors have been manufactured. Not in a commercial manufacturing process but not only can they be made - they are demonstrated to be working.

      So you are way off.

  • by hey! ( 33014 ) on Wednesday May 06, 2015 @05:51PM (#49633519) Homepage Journal

    Featuring GGL (Gateless Gate Logic).

  • So close... (Score:3, Funny)

    by lga ( 172042 ) on Wednesday May 06, 2015 @06:00PM (#49633571) Journal

    >40% higher performance-per-clock from the x86 CPU core.

    That could very slightly close the gap between AMD and Intel!

    • by Anonymous Coward

      40% higher IPC vs bulldozer is not ">40% higher performance-per-clock [sic]". this basically says it'll take them until late 2016 or early 2017 to have a part that competes with haswell... ok, great... yay, AMD! your mommy is so proud...

  • by Hrrrg ( 565259 ) on Wednesday May 06, 2015 @06:03PM (#49633605)

    Anyone care to extrapolate from current benchmarks as to how this new processor will compare to Intel's desktop offerings? I would like to see Intel have some competition there.

    • Re:Extrapolate? (Score:5, Interesting)

      by Kjella ( 173770 ) on Wednesday May 06, 2015 @07:43PM (#49634193) Homepage

      Anyone care to extrapolate from current benchmarks as to how this new processor will compare to Intel's desktop offerings? I would like to see Intel have some competition there.

      FX-8350: 2012
      "Zen": 2016

      The 40% jump is more like 0%, 0%, 0%, 40%.

      If you compare a 3770K (best of 2012) to a 4790K (best of today) you get a ~15% frequency boost and another ~10% IPC improvement. If the leaked roadmaps are to be believed, Skylake for the desktop is imminent [wccftech.com], which will bring a new 14nm process and a refined microarchitecture at the same time, since Broadwell missed its tick for the desktop, so in the same timeframe Intel will have improved 30-40% too.
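Those two Intel gains compound multiplicatively rather than add, which is how ~15% and ~10% already land in the 25-30% range before Skylake:

```python
# Frequency and IPC gains compound multiplicatively.
freq_gain = 0.15  # ~15% frequency boost, 3770K -> 4790K
ipc_gain = 0.10   # ~10% IPC improvement
combined = (1 + freq_gain) * (1 + ipc_gain) - 1
print(f"{combined:.1%}")  # prints 26.5%
```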

      Anyway, you asked about AMD and I answered with Intel, but that makes it a lot easier to get a meaningful answer without getting into the AMD vs Intel flame war. In short, even if AMD comes through on that roadmap, they're only back to 2012 levels of competitiveness, and honestly that wasn't exactly great and AMD wasn't exactly profitable. They're so far behind that you honestly couldn't expect less unless they were giving up on that market completely, which frankly I thought they had. And I wonder how credible this roadmap is; I remember an equally impressive upwards curve for Bulldozer...

      • Re: (Score:1, Troll)

        Comment removed based on user account deletion
        • Re: (Score:2, Interesting)

          by Kjella ( 173770 )

          Uhhhh...just FYI but Intel has come right out and admitted it rigged the benchmarks so you can trust them about as much as the infamous FX5900 benches with its "quack.exe" back in the day.

          Yes yes, you spam that to every thread. That's exactly why I compared Intel with Intel. Unless you think they're creating benchmarks that are increasingly inaccurate with each new generation, the point was that AMD's "jump" isn't actually more than Intel has improved through yearly releases since. Do you think the benchmarks are more "rigged" for the 4790K than the 3770K? Is the lack of new FX processors not real? By the way, even Phoronix's conclusion says:

          From the initial testing of the brand new AMD FX-8350 "Vishera", the performance was admirable, especially compared to last year's bit of a troubled start with the AMD FX Bulldozer processors.
          (...)
          In other words, the AMD FX-8350 is offered at a rather competitive value for fairly high-end desktops and workstations against Intel's latest Ivy Bridge offerings -- if you're commonly engaging in a workload where AMD CPUs do well.

          In not all of the Linux CPU benchmarks did the Piledriver-based FX-8350 do well. For some Linux programs, AMD CPUs simply don't perform well and the 2012 FX CPU was even beaten out by older Core i5 and i7 CPUs.

          I guess "bit of troubled" was the most pro-AMD way he

      • I don't know why you were modded up.
        Zen is 40% greater IPC than EXCAVATOR. The FX-8350 is a Piledriver core. Since Bulldozer there have been three cores: Piledriver, Steamroller, and Excavator (to be released this year).
        Intel has in fact dropped the ball. They have not released a new desktop CPU in 2 years.

  • by Anonymous Coward

    Finally, a processor that will meditate on existential paradoxes and encourage us to push to deeper self-understanding as it half-assedly pretends to work on solving a task.

    • You might consider the wisdom in having less attachment to what people think about the value of you or your CPU's meditations.

  • Dear AMD (Score:4, Insightful)

    by faragon ( 789704 ) on Wednesday May 06, 2015 @06:21PM (#49633731) Homepage
    Please, focus. I don't know what "the market" needs. I know what *I* need to buy from you again:
    • Netbook: 2-core CPU/APU at 2GHz with decent IPC for 30 USD
    • Laptop: 4-core CPU/APU at 2-2.5GHz with good IPC for 60-70 USD
    • Desktop: "PS4 on a chip" with twice the CPU frequency for 90-120 USD

    Note that, in comparison to ARM CPUs, x86 SoCs are *crazy* overpriced. There are superb ARM SoCs for just 20 USD. WTF are you doing selling similar consumer-grade chips for 100 USD??

    • by Anonymous Coward

      The best ARM CPUs (Apple) have close to Bay Trail Atom CPU performance in single-core mode, but Bay Trail is quad core and Apple's are dual core. Other ARM CPUs are quad core but don't have the single core performance Apple does.

      And once you get into better than Bay Trail performance (most everything x86 except cheapest AMD stuff), ARM is left in the dust.

      That and software backwards compatibility are why x86 is still a premium. But that's changing.

    • by Trogre ( 513942 )

      Hell, I'd settle for just having one FPU per core again and none of this "module" nonsense.
      AMD jumped the shark for me in 2012 when they killed off their Phenom II line.

      • AMD gets a lot of shit for this, and there's plenty to give 'em shit for... but that's not one of 'em.

        It's a shared FPU only when in 256-bit AVX mode. In normal 128-bit (2x64) SSE mode, each core in the pair gets its own FPU pipeline.

        • by Trogre ( 513942 )

          Unfortunately no matter what mode it is in, the two FPU pipelines are still sitting behind a single FPU scheduler so the result is essentially the same.

          My numerous benchmarks with bulldozer processors back this up, particularly with the likes of Matlab. Those processors are great for multi-process servers (web, email, etc) but useless for floating-point math (gaming, render farms, compute servers, etc).

    • Note that, in comparison to ARM CPUs, x86 SoCs are *crazy* overpriced. There are superb ARM SoCs for just 20 USD. WTF are you doing selling similar consumer-grade chips for 100 USD??

      ARM CPUs still do not have proper pipelining. Out-of-order execution is castrated too. That basically kills ARM CPUs in any workload with lots of math (think games and encryption).

      And how about the CPU cache? 1MB of cache for ARM is still a high-end feature, while most desktop CPUs have 4 or more MB of cache. Huge HUGE difference for the memory bandwidth.

      ARM has its niche (which, if ARM really wanted, could have been very very broad and not niche at all), but as soon as you start gaming or do any real work

  • Because it is not like there is another Xen [xenproject.org] around in the world of servers...
  • "Zen will use simultaneous multithreading to execute two threads per core"
    That is the exact polar opposite of what everyone wants. We want one thread executed on multiple cores. The single core performance of AMD 6 and 8 core CPUs is PATHETIC. Most software still runs on just one core.
  • I see lots of announcements - not just this one - shouting about their new microarchitectures, how cool they are, the amazing benefits, and so on. But documentation of exactly what the new microarchitecture is, exactly what it does, seems thin-to-non-existent. Maybe I'm not looking in the right place.

    All "big" processors nowadays have fancy pipelines, out-of-order execution, branch prediction, multiple cores, and so on. Fine. But how is Zen different from past microarchitectures? What makes it revolutiona

  • I'm currently bidding out a pretty hefty workstation (128GB RAM, RAID 5 disk array, highly parallel workload).

    From what I'm seeing, AMD is pretty competitive on price/performance. Our workload is integer-heavy, and I can get a dual 16-core 2.8GHz AMD machine for $2500 less than a dual 10-core 3.2GHz Intel machine. Even if you assume the AMD is 20-30% slower per core, it stacks up quite nicely.

    The MBs are cheaper, the processors are cheaper, and the registered DDR3 DIMMs are waaaaay cheaper than the DDR4
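The aggregate math behind that comparison works out; a quick sketch using the figures above, with an assumed 25% per-core AMD deficit (my midpoint of the 20-30% guess):

```python
# Crude aggregate-throughput comparison in "effective GHz-cores":
# sockets * cores * clock, with an assumed per-core efficiency factor.
amd_effective = 2 * 16 * 2.8 * (1 - 0.25)  # dual 16-core 2.8 GHz, 25% slower per core
intel_effective = 2 * 10 * 3.2             # dual 10-core 3.2 GHz
print(amd_effective, intel_effective)      # AMD still edges out Intel in aggregate
```

Even after handicapping every AMD core by a quarter, the extra core count keeps the AMD box slightly ahead in total throughput, before the $2500 price difference is counted.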

    • by Agripa ( 139780 )

      Back when I built my workstation I went with the AMD Phenom II 940 because the Intel solution for ECC was a lot more money which was better spent on system RAM and disk performance. The Intel processor, motherboard, and FB-DIMMs would have more than doubled the cost. It is not that bad now but an Intel system still carries a price premium.
