
Ampere Altra is the First 80-core ARM-based Server Processor (venturebeat.com)

Ampere unveiled the industry's first 80-core ARM-based 64-bit server processor today in a bid to outdo Intel and Advanced Micro Devices in datacenter chips. From a report: Ampere announced today that it has begun providing samples of the Ampere Altra processor for modern cloud and edge computing datacenters. The Ampere Altra processor runs at 210 watts and is targeted at such server applications as data analytics, artificial intelligence, database, storage, telco stacks, edge computing, web hosting, and cloud-native applications. Intel holds about 95.5% of the server chip market with its x86-based processors, and AMD has the rest. But Ampere is targeting power efficiency, high performance, and high memory capacity. Renee James, former president of Intel and CEO of Ampere, said in an interview with VentureBeat that the chip is faster than a 64-core AMD Epyc processor and Intel's 28-core high-end Xeon "Cascade Lake" chip.
  • One of the big reasons why previous ARM-based things failed is the dominance of the PC architecture in terms of software installations.

    Basically you can take almost any PC operating system/program and expect to be able to run them on any PC based hardware.

    In previous attempts on ARM that was not the case; each producer had a different infrastructure with different requirements and such.

    Of course, even if all ARM servers have the same structure it will not magically overcome the basic problem, but w

    • I'm curious about what the PassMark score is on that processor.

    • by sad_ ( 7868 ) on Thursday March 05, 2020 @07:03AM (#59798938) Homepage

      SBSA/ARM ServerReady is the standard for data center ARM servers; I wish it would just be a general standard all ARM products use:

      https://en.wikipedia.org/wiki/... [wikipedia.org]
      https://developer.arm.com/arch... [arm.com]

      ServerReady is a set of tests that include:

              Architecture Compliance Suites (ACS) for SBSA and SBBR standards
              Booting of standard Linux distros

    • by DarkOx ( 621550 )

      I think there is a temporal element here as well. A lot of workloads that were Windows/x86-only just a decade ago, with no indication of that ever changing, now run on Linux and are architecture-independent (at least at the source level).

      I would argue that it's 'different now' and there are a lot of mass-market applications that either support ARM or could easily be supported on ARM if a handful of big customers twisted a vendor's arm (pun not really intended) to do it. So that in turn might make a market for

      • by DarkOx ( 621550 )

        Replying to my own post here because I want to go on the record as NOT making a prediction.

        I am not saying this time it WILL be different or that 2020 is the year of ARM in the data-center or anything like that.

        I am saying hold on to your butts because it COULD be different. As long as the trend remains cloud, cloud, cloud, the Microsofts and Amazons of the world have an interest in more compute. They want faster, denser, cheaper (not necessarily in that order), and if someone can deliver that and it happens to

        • by rho ( 6063 )

          Just to keep options open, it's worth investigating whatever ARM-based solutions are out there if you have the means. A monoculture of CPUs is as bad as or worse than everybody standardizing on Windows.

        • In the linked article, Jeff Wittich, senior vice president of products at Ampere, said the Ampere chip is 14% better than AMD's fastest Epyc chip on power efficiency and 4% faster on raw performance. And [much better compared to Intel].

          I'd like to see some independent benchmarks for that. With a bit of cherry-picking in the tests, creating a 14% advantage doesn't seem difficult. But either way, I doubt that small advantage will outweigh the effort of porting stuff to ARM.

          • by DarkOx ( 621550 )

            But a lot of stuff does not have to be ported, or is just a recompile away. I don't know, does AWS even specify what architecture Lambdas run on?

            I have created a few things. I just pasted my Python code into the AWS console and set up the triggers. It's probably running on amd64 if I had to guess, but I don't know and I don't have reason to care. The big cloud guys are doing lots of stuff like that, and because of their scale even a small advantage might be a big deal to their bottom line. But I am not sure this
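            (For what it's worth, a Lambda handler is about as architecture-agnostic as code gets. Here's a minimal sketch, assuming nothing beyond the standard Python handler signature and the standard library, that just reports whatever CPU it lands on:)

```python
# Minimal AWS Lambda handler sketch: pure Python, nothing architecture-specific.
# The standard Python Lambda entry point is lambda_handler(event, context).
import platform

def lambda_handler(event, context):
    # platform.machine() reports e.g. "x86_64" or "aarch64" at runtime,
    # which is the only way this code would ever notice what it runs on.
    return {"statusCode": 200, "body": f"Running on {platform.machine()}"}
```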

    • Your problem is probably that you were credulous enough to listen to morons and use a proprietary compiler. If you'd used GCC, your apps would be portable.

      I do embedded programming on ARM, usually with no OS, and I don't have this problem.

      Hint: Use CMSIS or TI driverlib on Cortex class processors, linux libraries on larger CPUs.

  • Is that figure from a couple of years ago? Because I am pretty sure AMD is much higher than 5% on new shipments for servers.

    • Re: (Score:1, Informative)

      by guruevi ( 827432 )

      Not for servers; AMD is still at 5%. They are great for budget servers that aren't integrated, but Dell, Lenovo, etc. are barely carrying any. Sure, the Epyc chips are faster at integer benchmarks, which solves 'some' problems where GPU computing isn't affordable, but the Xeon bandwidth to memory and components still reigns supreme, which is reflected in the 'real world' benchmarks (eg. SPEC) where AMD still is lagging behind by a large margin.

      You want an AMD in your gaming rig if you're looking for performance,

      • by spth ( 5126797 ) on Thursday March 05, 2020 @07:35AM (#59798990)

        While I agree that AMD market share in servers is still low, the SPEC benchmarks [spec.org] seem to show AMD being ahead.

        • EPYC rocks ass in highly parallel workloads.
          We're still buying Intels though, because per-core performance matters.
          If you've got a bunch of applications with limited thread utilization running on a processor, their performance is primarily bound by the speed of a core.
          For number crunchers- there's no doubt about it- EPYC is what you want.
          But if your servers are just running a shitload of services for various network infrastructures, etc- Intel still wins. Their single-core performance isn't even in the
      • by Thumper_SVX ( 239525 ) on Thursday March 05, 2020 @08:09AM (#59799034) Homepage

        Honestly curious what you mean by "budget servers that aren't integrated"?

        Dell, Lenovo and HP are all selling servers with AMD EPYC CPUs and they seem pretty solid. Sure, there isn't the breadth of form factors (like blades) that you currently get with Intel, but for basic 1U and 2U servers they seem decently healthy, so I'm not sure what you mean by them not carrying any.

        As spth noted as well, the SPEC benchmarks of the EPYC also seem pretty healthy and at least comparable with their Intel counterparts as well as a bit cheaper.

        AMD has 8 memory channels running at 3200MT/s compared to Intel's 6 running at 2933MT/s... so yeah... the EPYC has more memory bandwidth.
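        The back-of-the-envelope math bears that out; a quick sketch, assuming the usual 8-byte-wide DDR4 channels and the channel counts/speeds above:

```python
# Peak theoretical DDR4 bandwidth per socket.
# MT/s x 8 bytes per transfer gives MB/s per channel; x channels, /1000 -> GB/s.
def peak_bandwidth_gb_s(channels, mt_per_s, bytes_per_transfer=8):
    return channels * mt_per_s * bytes_per_transfer / 1000.0

print(peak_bandwidth_gb_s(8, 3200))   # EPYC Rome:     204.8 GB/s
print(peak_bandwidth_gb_s(6, 2933))   # Cascade Lake: ~140.8 GB/s
```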

        As someone who works on the systems integrator side, I can honestly tell you I'm seeing more and more interest in AMD, especially from healthcare companies, precisely because the performance benchmarks in their applications have been showing comparable power at a lower cost compared to Intel. Additionally, where you need PCIe lanes, AMD has it sewn up with 128 lanes per socket.

        The only place I see AMD lagging so far is the 4-socket market; their architecture just isn't designed for it, and I'm not sure how they'd make it work with their inter-socket interconnect architecture. But here's the thing: demand for 4-socket and above systems has cratered in the last few years, as two-socket boxes are much more cost-efficient and performant... not to mention the chilling effect of licensing for applications on those multi-socket boxes. Yeah, we still sell 4-socket boxes, but generally most of the heavy lifting of analytics has moved from monolithic databases to things like Hadoop, where you just throw more one- and two-socket systems at the problem. That's a space where AMD really shines in my testing, particularly in terms of overall cost.

        And for the record I am not an AMD fanboy in any way... I just respect their new architectures and am glad to see some real competition for Intel in the processor space.

        • by Junta ( 36770 )

          Basically the server vendors are a bit sluggish to respond to change, and there's this perfect storm: a lot of the vendors had existing half-assed, token efforts to be AMD-compatible from when AMD was not popular, and Rome is backwards compatible with those half-assed platforms.

          So the processor is undeniably in a strong position, but vendors are slow to put out more full-fledged servers because they want to limp along on their token pre-Rome efforts, since that means fewer investment dollars. This

          • From a performance standpoint, PCIe Gen 4 support is missing.

            Ahem. Check out https://en.wikichip.org/wiki/amd/cores/rome [wikichip.org]. PCIe 4.0 is supported.

            • by Junta ( 36770 )

              I understand that was confusing. I meant that the servers a lot of the vendors are selling for Rome can't do Gen4, because they are using an existing platform that was not designed to do so. I was generally trying to talk specifically about the platforms made for AMD being lacking, rather than suggesting that AMD itself has made a lot of mistakes. AMD has been undermined by weak efforts by partners, despite their particular components being very compelling when used correctly.

              That's why I referenced an example o

          • Well, I'll use Dell as my example because it's their platforms I'm most familiar with at the moment. The R6515, R7515 and R7525 all have PCIe Gen 4 as the system board was a ground-up redesign from the Naples one (6415, 7415, 7425). Thankfully they also upped the power supplies which was my main problem with those early Naples gen systems.

            Management is through their iDRAC 9 which is actually really freaking good and identical to the Intel boards. Similarly, I can't see many glaring differences between the B

      • Re: (Score:3, Informative)

        by Khyber ( 864651 )

        "but the Xeon bandwidth to memory and components still reigns supreme which is reflected in the 'real world' benchmarks (eg. SPEC) where AMD still is lagging behind by a large margin."

        Best re-read your own goddamned data because SPEC shows AMD being AHEAD by quite a large bit.

        • by account_deleted ( 4530225 ) on Thursday March 05, 2020 @08:54AM (#59799154)
          Comment removed based on user account deletion
          • /emote with a starry-eyed, hazed, empty stare he utters, "So this isn't competing with SPARC this year?"
        • But still lagging significantly behind top end Xeons in per-core performance.
          Xeons are up to 73% faster per-core while performing real-world services tasks.
          You have to accept that there is a difference between aggregate massively parallel performance, and the stuff that server operators are actually doing.
          • by Khyber ( 864651 )

            "But still lagging significantly behind top end Xeons in per-core performance."

            So I just ran a test of an old game I was making: HUGELY unoptimized, compiled for basic x86 instructions (pre-SSE), on both a Xeon and an EPYC system, easily rentable online for a quick, cheap instance test.

            AMD won hands-down.

            Intel can't win when it isn't cheating with its own compilers.

          • by Khyber ( 864651 )

            Oh, and that game? It's a 2D version of SecondLife built using the same game engine that Space Station 13 is written in. Hugely server-heavy with all the stuff I had added in. I also ran multiple linked instances (because you can do that in this game, transferring your character from one server world to the next just like SL.)

            Perhaps you should do some coding that's server-intensive, like I have.

            • Perhaps you should do some coding that's server-intensive, like I have.

              I literally do that for a living. If you were one of my minions, and you said "some coding that's server-intensive," I would fire you for being an imbecile.
              Therefore, I'm forced to conclude that you're a liar.
              But don't take my word for it- there is plenty of documentation online about it.
              What's the point of lying to shill for a product? It's bizarre.

    • Comment removed based on user account deletion
      • by Bert64 ( 520050 )

        You can migrate between AMD/Intel CPUs with KVM:
        https://www.linux-kvm.org/page... [linux-kvm.org]

        The caveat is that you must only expose features to your guest which are supported by all the CPUs you want to migrate between, so if you add new processors to your cluster you can't make use of any of their new features until all the old nodes have been replaced.
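        As a sketch of how you would compute that common feature set with the libvirt Python bindings (the node URIs here are hypothetical, and this assumes libvirt-python is installed on a box that can reach every host):

```python
# Sketch: derive a lowest-common-denominator CPU model across cluster nodes
# with libvirt's baselineCPU(), so guests defined with it can migrate anywhere.
import libvirt
import xml.etree.ElementTree as ET

def host_cpu_xml(uri):
    """Pull the <cpu> description out of a host's capabilities XML."""
    conn = libvirt.open(uri)
    try:
        caps = ET.fromstring(conn.getCapabilities())
        return ET.tostring(caps.find("./host/cpu"), encoding="unicode")
    finally:
        conn.close()

# Hypothetical node URIs; list every host you might migrate between.
nodes = ["qemu+ssh://node1/system", "qemu+ssh://node2/system"]
cpus = [host_cpu_xml(u) for u in nodes]

conn = libvirt.open("qemu:///system")
# baselineCPU() returns a <cpu> element containing only the features common
# to all inputs; define guests with it and newer-node features stay masked
# until the old nodes are retired.
print(conn.baselineCPU(cpus, 0))
conn.close()
```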

      • You are in general correct.
        But most hypervisors have cross-CPU-vendor live migration ability using various hacks like flag masking.
        Now- the fact that it does this does not guarantee that what you just live migrated won't instantly crash and burn, but if everyone plays nice, it will work.

        We use Citrix and KVM. We have 16 front end nodes across 4 clusters. I just migrated off of our last old Opteron cluster, so I did this first hand.
    • AMD has greater than 5% of shipments more recently, but the installed base is still largely Intel and will be for several years since no one is replacing servers every single year. Up until the last few years, AMD was essentially a non-player in the server space. Their previous microarchitecture (Bulldozer) was so terrible that they didn't even bother making server products. Outside of a tiny few still running old Opteron servers, AMD had no presence until Epyc.
      • AMD has greater than 5% of shipments more recently

        In the server space? Because that doesn't fit what I'm seeing in my datacenters at all.
        There is a trickle of EPYCs, and mostly old installed Opterons that people are replacing with Xeons. Mostly Dells.

    • It's pure anecdote, but I've got 3 datacenters at WBX in Seattle, and 4 more in the region.
      We're easily above 95% Intel.
      Servers also have significant churn, and that ratio is generally being maintained.
      Getting into AMD/Intel flame wars is too fucking tired and stupid, so I won't bother offering an opinion as to why that is- but it is how it is, at least in my 7 datacenters.
  • by aaarrrgggh ( 9205 ) on Thursday March 05, 2020 @05:44AM (#59798836)

    I get the low power, crazy core counts, and all that jazz— but why is it viable today but not 8 years ago when it was first being widely discussed? The original benefit was to eliminate virtualization and multi-tasking for a specific function. With 80 cores on a chip... you are back to needing those things, and presumably just as unlikely to have the necessary isolation that Intel and AMD are struggling with.

    I also thought most AI applications primarily work with integer math, so how does a more general-purpose processor address that market well?

    • by guruevi ( 827432 )

      These aren't general purpose chips. They are purpose-built ARM chips and they solve a particular problem at low power (but not low cost). These servers cost more than their Intel or AMD counterparts, but they are cheaper to operate if you have a whole datacenter of them. You wouldn't want to use them for AI training; a GPU is a lot more cost-effective at that, but it's part of the marketing lingo today: everything remotely computational is now for 'AI', a decade ago it would've been "gene analysis", two decades

      • I’ll have to trust you on the Amazon/Facebook front; but again they were exactly the two companies that ARM server chips were targeted to ~8 years ago. Cutting through the buzzwords in the article, I see theoretical opportunities like a mass storage array, but near zero substance. I would expect a server targeted to Amazon or Facebook to look a lot different than what they are offering (beyond form factor). It is too high of a power density, too low of a storage component, and maybe marginally bett

        • I don’t care how Amazon, Google, or Facebook can use it, but I am extremely interested in the best performance of game servers. I often encounter a lot of work during peak hours when I enter Dota2 after work. Sometimes the quality of Twitch’s broadcasts also drops sharply and I’m forced to watch the dota 2 matches [hawkbets.com] results in the news. I would be very happy if this new product had a positive impact on the gaming sector.
      • Where do you get low power? 210 W of just CPU consumption is quite the power appetite. Unless you, for some reason, needed that 28-core Xeon (205 W), most servers deal with a thermal load of around 75 W.

        • by Bert64 ( 520050 )

          210 W for 80 cores vs. 205 W for just 28 cores is comparatively quite low power.
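          Dividing the quoted TDPs by core count makes the gap plain; a trivial sketch, taking the summary's numbers at face value:

```python
# Watts per core, using the TDP figures quoted in the summary.
for name, watts, cores in [("Ampere Altra", 210, 80), ("28-core Xeon", 205, 28)]:
    print(f"{name}: {watts / cores:.1f} W/core")
# ~2.6 W/core vs ~7.3 W/core -- meaningful only if per-core performance
# is in the same ballpark, which is the open question.
```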

          • Apples and oranges. Just throwing cores at the problem doesn't instantly make it faster or better, just like clock cycles. 28 cores of Xeon might still outperform 80 cores of ARM. There are plenty of applications that perform identically on 4 cores as they do on 8. Virtualization would be a likely benefit, but can a single ARM core really compare to a single Xeon core? That factor will impact how much virtualization you can really scale on it. If it takes 8 cores of ARM to do the work of a 2-core Xeon then you
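            Amdahl's law puts numbers on that: extra cores only buy back the parallel fraction of the work. A quick sketch (the parallel fractions are made-up illustrative values, not measurements):

```python
# Amdahl's law: speedup on n cores for a workload whose parallel fraction is p.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(f"p={p:.2f}: 28 cores -> {amdahl_speedup(p, 28):5.1f}x, "
          f"80 cores -> {amdahl_speedup(p, 80):5.1f}x")
# At p=0.50 the extra 52 cores are worth almost nothing; per-core speed
# dominates unless the workload is overwhelmingly parallel.
```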

          • by Junta ( 36770 )

            Note I had some experience with a 48-core ARM platform a while back. It wasn't even close to a 16-core Intel of the same vintage. It was also a power hog for the performance.

            This has been the challenge with ARM in the server space. ARM is not inherently more power-efficient at doing things, but its support for very flexible power states, the flexibility for chip vendors to embed related components into a one-stop SoC, and the multi-vendor licensing situation make for a good match for embedded due to idle p

        • by Khyber ( 864651 )

          Son, do you even Pentium 4?

    • I do suspect there's something they're not telling us. They've reduced it to two very vague metrics (power efficiency and performance). I'm somewhat sceptical of the performance claims, which are likely much more complex in reality.

      In tech there are a lot of untapped possibilities. If you take a look at a Raspberry Pi 4, it's quad-core 1.5 GHz with 4 GB. Most of the junk on the Pi is unnecessary. If you strip that away, the CPU, RAM and a few other little bits are enough that you can easily put ten
  • Better article (Score:5, Informative)

    by Misagon ( 1135 ) on Thursday March 05, 2020 @06:29AM (#59798878)

    Anandtech has a better article [anandtech.com] with more technical details, better font size, fewer popups and no infinite scrolling.

    • better font size, fewer popups and no infinite scrolling.

      Firefox has ctrl-+ to zoom, and you should be using NoScript.

    • Thanks. That article actually contains useful information for nerds. It sounds like a well-conceived product, although the proof is in the pudding.

  • Tell me again why I would go with an unproven silicon vendor who is 1.04x the performance of my volume partner? (That's their number, i.e. the best conservative case.)

    Nope, it's not going to stack up. Seriously, are you doing your own custom cores and forgot enough room for the cache? WTF?

    It's nice, but you're going to have to make them very, very cheaply and in huge volume, and if you're only putting 32MB of cache on while your competitors are using 64MB and a standard cell library, I would say game over...

    Use standard ARM cores and do some funky 100Gb switch interfaces...

    cheers

    John Jones

  • Would someone tell me how this happened? We were the fucking vanguard of server processors in this country. The Intel Xeon was the processor to use. Then the other guy came out with a server processor with 64 cores. Were we scared? Hell, no. Because we hit back with a little thing called the Turbo Boost. That's lots of cores and a Turbo mode. For speed. But you know what happened next? Shut up, I'm telling you what happened—the bastards went to 80 cores. Now we're standing around with our cocks in our
    • I tried to arrange the 80 cores in a square, but when I plugged 80 into the calculator and took the square root, I got a result with decimals.
      With one extra core they could be arranged in a neat little square.

  • How many PCIe lanes?

"The medium is the message." -- Marshall McLuhan

Working...