AMD Unveils 64-Bit ARM-Based Opteron A1100 System On Chip With Integrated 10GbE (hothardware.com) 98

MojoKid writes: AMD is adding a new family of Opterons to its enterprise processor line-up today, called the Opteron A1100 series. Unlike AMD's previous enterprise offerings, however, these new additions are packing ARM-based processor cores, not the x86 cores AMD has been producing for years. The Opteron A1100 series is designed for a variety of use cases and applications, including networking, storage, dense and power-efficient web serving, and 64-bit ARM software development. The new family was formerly codenamed "Seattle," and it represents the first 64-bit ARM Cortex-A57-based platform from AMD. AMD Opteron A1100 series chips will pack up to eight 64-bit ARM Cortex-A57 cores with up to 4MB of shared Level 2 and 8MB of shared Level 3 cache. They offer two 64-bit DDR3/DDR4 memory channels supporting speeds up to 1866 MHz with ECC and capacities up to 128GB, dual integrated 10Gb Ethernet network connections, eight lanes of PCI Express Gen 3 connectivity, and 14 SATA III ports. AMD is shipping to a number of software and hardware partners now, with development systems already available.
This discussion has been archived. No new comments can be posted.

  • You'd think they could have at least upgraded some of their x86 stock offerings to PCIe 3.0, but no, that'll have to wait...

    • "Enterprise is a much bigger market" is what I'd otherwise say, but seeing as that (and datacenters) runs on VMware, I doubt this will see much adoption. Unless, of course, the value prop is so good that open source virtualization (i.e., Xen) starts running on it.

      • These are great for ISPs, Google, and Amazon where power saving is important. Think nodes in clusters. Server 2012R2 has an arm port and has hyper-v too.

        Power costs are astronomical in these environments.

        • by subk ( 551165 )

          These are great for ISPs, Google, and Amazon where power saving is important. Think nodes in clusters. Server 2012R2 has an arm port and has hyper-v too.

          Power costs are astronomical in these environments.

          It will also be great for more-or-less embedded applications like firewalls, storage arrays, de-duplicators, media servers/transcoders, etc., where you need a healthy CPU and great Ethernet bandwidth. I think they're on the right path here in terms of an efficient platform for SoC-based appliance manufacturers, where watt-hungry dual- and quad-core Intel chips have been the de facto choice. Atom provided an option for those craving a more efficient chip, but IMHO, while it works in clusters, it never performed well.

          • Still, there is a common misconception that all servers are CPU bound.

            Not true for databases or Java servlet apps. They are latency bound: latency on networks and disk I/O as your SQL query gets processed, while the CPU sits through massive WAIT cycles in the process.

            An ARM chip will be great for lower power with an SSD and lots of bandwidth.

          • Could well be useful for memcache-type applications. Not likely to be terribly powerful, but quite possibly one of the cheaper ways to get 128GB of ECC RAM into a small box (Intel's C2000 Atom-for-server stuff currently tops out at 64GB, and their Xeons cost more).
        • Server 2012R2 has an arm port and has hyper-v too.

          Is there any proof of this outside of internal Microsoft test versions? As far as I can tell the failed tablet OS, Windows RT, is the closest.

        • Just because MS is noisy and nosey (requiring registration of their products) doesn't mean that they're the fastest growing or most popular overall. Ubuntu Linux is running on more than a billion devices already, and that's a conservative estimate. Also, the other virtualization solutions are fast, growing, and free. Think Proxmox, LXC, and more. Hey, even MS have embraced a lot of this, so it's time to rethink your assumptions, like you have with your Xen closing remark.

          Below are some Ubuntu figures, ch

      • by Anonymous Coward

        There is no Windows x86 support except via Wine, which they say is slow, so I doubt that you will see any Windows VMs running on it.

        However, there is support for Linux, Android and iOS, which will probably be the primary targets

        There are probably plenty of customers who would choose this over Xeon for datacenters which do not need to serve an MS-centric market.

        • I don't think iOS would ever run on it. Apple would shit bricks and summon a demon horde of lawyers on anybody who runs it on anything except their in-house designed ARM SoCs.

        • However, there is support for Linux, Android and iOS, which will probably be the primary targets

          There are probably plenty of customers who would choose this over Xeon for datacenters which do not need to serve an MS-centric market.

          How many people use iOS in a datacenter? How many people use iOS on third party hardware?

          On Linux you'd still get better hardware/software support on x86, and better performance per dollar or performance per watt.

          I could certainly see this maybe in embedded applications, and maybe networking appliances (like a storage server, or maybe a web server). But I think it will be a few years still before you see a big foothold of ARM based CPUs in general purpose servers.

    • by bored ( 40072 )

      As much as I think that this is probably the "best" ARM server at the moment, I 100% agree with you. It's not going to make AMD any real money, and the amount of R&D invested would have been better spent upgrading their existing product lines.

    • Comment removed based on user account deletion
    • Keep in mind that their x86 offerings are half the price of the equivalent Intel offerings; something's gotta give.
  • Only x8 PCIe? (Score:4, Insightful)

    by Joe_Dragon ( 2206452 ) on Thursday January 14, 2016 @01:00PM (#51300907)

    Only x8 PCIe?

    At least they have dual 10GigE, but come on, give at least x16 PCIe even if you need to cut down the SATA links.

    • For what purpose? These are obviously meant for the server room. What need do you have for anything beyond PCIe x8 if you don't even have a video card?

      • Some servers do have video cards. We have them in lots of our servers for GPU compute tasks.
        • So would your GPU compute servers benefit from having an ARM instead of x86 CPU?

          Clearly, there is some market where the workload favors ARM CPUs, and some market where the workload favors GPU computing (and therefore PCIe x16). Combining both in the same product only makes sense if there's some market that needs both at the same time. It's not as if AMD were discontinuing x86 Opterons, after all.

          • by Junta ( 36770 )

            Well, maybe if the ARM servers did NVLink. POWER and ARM can do NVLink CPU to GPU; Intel, however, was either not invited to the party or disinterested in working to enable it. Of course, I suspect AMD and Nvidia collaborating would be unlikely.

            Again though, there'd be no room to complain about PCIe lanes.

            • NVLink just sounds like an Nvidia copy of HyperTransport [amd.com].

              • by Junta ( 36770 )

                Roughly, it is. However, that has never been used to connect a discrete GPU to a CPU, to my knowledge. AMD did it first, Intel copied it, and Nvidia has joined the world of proprietary interconnects, but is the first to do so for the CPU-to-GPU case.

                • I think AMD uses HyperTransport in its APUs, but yeah, I've been wishing for years for an AMD FX (or Phenom II) + discrete Radeon setup that used it (maybe with the GPU in a motherboard socket instead of a daughter card, although I guess the hard part of that is the lack of standards for slotted graphics RAM).

          • Combining both in the same product only makes sense if there's some market that needs both at the same time.

            While it might seem silly to try to save ~100W in a system that contains ~1000W of GPUs, that still adds up to another whole node relatively rapidly. If you're using pretty much all GPU and basically no CPU, it might make good sense. Isn't that a fairly common case now? With the 10GbE they are attractive cluster nodes. It might not have the crazy low latency of some of the more specialized interconnects, but it's still damned fast.

        • by wbr1 ( 2538558 )
          GPUs for compute tasks do not need the bandwidth that GPUs pushing video do. Look at an old mining rig using GPUs: x16-to-x1 riser cables were used to stack compute GPUs.
          • by amorsen ( 7485 )

            Mining is very niche though. Very few workloads are so embarrassingly parallel, and it's usually worth it to use an FPGA or (ideally) an ASIC for those.

            • by wbr1 ( 2538558 )
              Still, for rendering tasks, a large part of the bus is needed for moving large textures in and out of memory. While I am not an expert in this field, I do not think many of the other current GPU compute tasks require this memory bandwidth, so as large a bus is not needed.
              • by dbIII ( 701233 )

                I do not think many of the other current GPU compute tasks require this memory bandwidth

                Chicken/egg situation - there are compute tasks not considered for GPUs due to not having the memory bandwidth.

          • by dbIII ( 701233 )
            They do if your task requires using a lot more memory than the cards have onboard. At that point as much bandwidth as you can get never seems like enough.
        • by amorsen ( 7485 )

          If you are doing GPU compute, you are unlikely to be interested in ARM. Those GPUs need to be fed with data somehow, and the ARM won't keep up.

          We are still not at the point where GPUs can just fetch a bunch of data over the network by themselves, crunch it, and send it out again.

        • Some people select their hardware for the application. This is not the server for you. Not even remotely.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I've got a hunch this will be more aimed at "lots and lots of dies in one small box" applications rather than as part of a large monolithic system. (Think virt host)

      You only need enough I/O to attach a few disks or a controller to attach to a SAN or something like infiniband and x8 is plenty. You could attach a single GPGPU as well but I don't think that would be a target application. (Does Nvidia even have Tesla support for ARM platforms?)

      • by DRJlaw ( 946416 )

        You only need enough I/O to attach a few disks or a controller to attach to a SAN or something like infiniband and x8 is plenty.

        Storage controllers tend to be x8 devices, meaning that you'd have to have 2 x8 expansion slots to enable redundant controllers in your small box.

        Rumor has it that redundant controllers is, indeed, a thing in that market.

        • by Anonymous Coward

          You'd need something like that for a dedicated storage node but I don't think that's the target application for this chip. This isn't a big monolithic server chip.

          If you look at modern server farms you have endless rows of boxes that are as simple and as cheap as possible. Usually just a power supply, board, cpu, memory, 2x ethernet, and 2 cheap commodity SATA hard drives.

          If anything on the node fails, it's taken offline. The "cloud fabric" along with object-based storage makes sure the data has already been replicated.

          • by DRJlaw ( 946416 )

            I don't disagree with what you're saying, but the type of node that you describe does not need dual integrated 10Gb Ethernet network connections and 14 SATA III ports. The dual 10Gb Ethernet ports alone indicate a combination of high I/O throughput and redundancy, not merely an application node.

          • by Holi ( 250190 )
            Actually, this sounds exactly like a storage node. Definitely not a VM host, as it does not have nearly enough cores. It sounds like a perfect low-cost NAS for a VM cluster.
        • Storage controllers tend to be x8 devices

          x8 PCIe 3.0 gives you about 7,880 MB/s. Even just throwing the data out over the two 10GigE connectors, you'll only use a third of that. I doubt that eight Cortex-A57 cores are going to find themselves data starved processing the rest.
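The bandwidth arithmetic in the comment above can be checked directly. This is just a sketch of the theoretical maxima (accounting only for 128b/130b line encoding; real protocol overhead would lower both figures somewhat):

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so usable
# per-lane throughput is 8 * (128/130) Gb/s. Multiply by lanes, divide
# by 8 bits/byte to get bytes per second.
PCIE3_GTS = 8.0        # GT/s per lane
ENCODING = 128 / 130   # 128b/130b line-encoding efficiency
LANES = 8

pcie_mbs = PCIE3_GTS * ENCODING * LANES * 1000 / 8
print(f"PCIe 3.0 x8: {pcie_mbs:.0f} MB/s")   # ~7877 MB/s, i.e. the ~7,880 figure

# Two 10GigE ports in one direction: 1 Gb/s = 125 MB/s
eth_mbs = 2 * 10 * 125
print(f"Dual 10GigE: {eth_mbs} MB/s, about {eth_mbs / pcie_mbs:.0%} of the x8 link")
```

So the dual 10GigE ports consume roughly a third of the x8 link, as the comment says, leaving the rest for storage traffic.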

  • by slaker ( 53818 ) on Thursday January 14, 2016 @01:11PM (#51301025)

    10GbE (at least over copper, which is the only way I've gotten to mess with it) kinda sucks. Cost per port is really high, and so are the power requirements per port. InfiniBand was a lot easier and cheaper for me to deal with, and having 10GbE implemented in relatively common hardware might improve its adoption.

    • by Anonymous Coward

      I desperately want an affordable home switch/router with a single 10Gb port. My server can feed around 400MB/s from its raid array but the single 1Gb ethernet port is the limiting factor.

      • by Anonymous Coward

        Unless you've got a 10GbE client, it would be far more cost effective to add a few 1Gbps ports to your server and bond the interfaces. There are lots of cheap Gbit switches capable of LAG/LACP, and any server OS should be capable.

    • That's cuz you're not doing it right. Having dual 10GbE ports on each end will let you run SMB3 Multichannel [microsoft.com] for network transfer speeds that will outpace anything but the fastest RAID arrays. We're seeing real-world file transfer speeds of over 1.3 gigabytes (not gigabits) / second over copper Ethernet. I'm seldom a Microsoft advocate but it is awesome.
      • by jon3k ( 691256 )
        There's lots of ways to aggregate multiple NICs (ie LACP). It's very easy to bond multiple 10Gb ports to increase bandwidth. And I've gotten comfortably over 1GB/s over a single 10Gb NIC.
        • SMB3 Multichannel isn't the same as link aggregation. It assigns CPU cores to process SMB transfers as they come across the wire(s), thereby handling one of the real-world bottlenecks (i.e., that the client typically chokes trying to process all of that inbound data coming off of the fast pipe).
        • LACP doesn't actually increase bandwidth. Each host pair only talks over one port; at no point will any IP connection between those hosts go faster than 10Gb.

          LACP will let multiple ports talk to multiple other machines with one IP, load splitting (it's not balancing, since it's a static mapping) across the LACP group. It's barely more useful than round-robin DNS, and you'll lose any advantage from protocols that support multiple links, like iSCSI or SMB3.
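A toy model makes the point in that comment concrete. The port names and hash inputs below are made up for illustration; real switches hash on MAC/IP/port tuples per their configured policy, but the shape of the idea is the same: each flow is statically pinned to exactly one member link.

```python
import hashlib

PORTS = ["eth0", "eth1"]  # two 10Gb members of a hypothetical LAG

def select_port(src_ip: str, dst_ip: str) -> str:
    """Statically map a flow to one member link via a hash of its endpoints."""
    digest = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).digest()
    return PORTS[digest[0] % len(PORTS)]

# The same host pair always hashes to the same port, so a single
# connection can never use more than one link's worth of bandwidth.
assert select_port("10.0.0.1", "10.0.0.2") == select_port("10.0.0.1", "10.0.0.2")

# Different host pairs *may* land on different ports -- that's load
# splitting across the group, not per-flow balancing.
for dst in ("10.0.0.2", "10.0.0.3", "10.0.0.4"):
    print(dst, "->", select_port("10.0.0.1", dst))
```

This is why protocols that open multiple parallel connections (iSCSI MPIO, SMB3 Multichannel) can beat plain LACP: they create several flows that can each land on a different member link.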

      10GbE kinda sucks. Cost per port is really high, and so are the power requirements per port.

      that's why it's built into the processor, you twit.

      • by Junta ( 36770 )

        Cost per port on the switch side and power requirements on the switch side, I presume he means. Particularly if he's talking about Cat6 rather than DAC, it is a pretty huge power hog. Note that in relatively recent developments a wave of PHYs has come about that significantly improves that, but it's still pretty big.

        Now using DACs should bring the power complaint in line with InfiniBand, though the runs can't be very long by comparison, but then again "cheap" InfiniBand cables can't be that long either (wh

        • by slaker ( 53818 )

          You are correct in your presumption. I was not aware there's been any real improvement WRT Cat6 PHYs. Thanks.

        • by afidel ( 530433 )

          To put some numbers to it:

          Cisco 3064 switches:
          3064-X: 64 ports of DAC @ 143W = 2.2W per port
          3064-T: 48 ports of 10GBaseT and 4 SR4 uplinks @ 362W = 7W per port

          Brocade 6740 switches:
          Brocade VDX 6740: 48 ports of DAC and 4 ports of 40Gb QSFP @ 110W = 2.1W per port
          Brocade VDX 6740T: 48 ports of 10GBaseT and 4 ports of 40Gb QSFP @ 460W = 8.8W per port

          24x7 operation at $0.10/kWh ~= $1/W/year, so each port of 10GBaseT costs you ~$9-13/year (two sides to the link) over DAC; that would eat up the cable savings pretty fast.
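The rule of thumb in that comment checks out arithmetically. The wattages below are the poster's quoted figures, not independently verified numbers:

```python
HOURS_PER_YEAR = 24 * 365   # 8760 hours of 24x7 operation
PRICE_PER_KWH = 0.10        # dollars per kWh

# One watt running around the clock for a year:
dollars_per_watt_year = (1 / 1000) * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"${dollars_per_watt_year:.2f} per watt-year")   # $0.88, i.e. roughly $1/W/year

# Extra draw of 10GBaseT over DAC, counting both ends of the link
# (per-port figures as quoted above for each vendor):
for name, baset_w, dac_w in [("Cisco", 7.0, 2.2), ("Brocade", 8.8, 2.1)]:
    extra = 2 * (baset_w - dac_w) * dollars_per_watt_year
    print(f"{name}: ~${extra:.0f}/year extra per 10GBaseT link")
```

That lands in the high single digits to low teens of dollars per link per year, consistent with the ~$9-13 range claimed.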

    • by hattig ( 47930 )

      The A1170 high-end Opteron has a 32W TDP and two built-in 10GigE ports, for under $150 (expected SoC price).

      So how is that expensive per port, in terms of power consumption, etc?

      • by slaker ( 53818 )

        I've read 5-7W per port. That's far from inconsequential.

      • by amorsen ( 7485 )

        Are they actually built in ports though? Do they have 10Gbase-T PHYs?

        Personally I prefer DAC, which is fairly cheap all around, but 10Gbase-T is winning in the market. Slowly. I haven't actually touched any 10Gbase-T equipment yet.

    • by jon3k ( 691256 )
      Look into 10Gb direct-attach copper SFP+. Low power and low port cost (at least compared to InfiniBand).

      10Gb port costs have come WAY down because of all the competition in the space (read: Arista) fighting with the established players (Cisco, Juniper). You can get a 48-port SFP-based Nexus 9732PQ for around $13k (with multiple quad-SFP ports). You don't need optics because you just use direct-attach cables. Dual-port 10Gb SFP-based Intel NICs are around $300 now.

      Not sure what you mean by "im
    • If you care about power per port, you don't use the Cat6 version; you use the SFP one with twinax or 10GBase-SR. Cat6 is good because you can use your existing cable plant, but you get increased latency and power use.
    • See Avago's ExpressFabric for an interesting alternative.
  • by Anonymous Coward

    Finally, an Intel Xeon killer.

  • This is a perfect way of speeding up the Cloud.
  • is that this chip is going to be about US $150.
  • AMD still around? (Score:1, Insightful)

    by Anonymous Coward

    I thought they went bankrupt years ago.

    Still not going to touch them with a bargepole :)

    • AMD is vital. All recent and semi-recent Intel CPUs include AMT which is a backdoor that can control any aspect of the running system without being detectable in any way by the operating system. It includes a completely separate sub-processor that has full control of the machine while being invisible to the main CPU.

      • by Anonymous Coward

        Blah blah blah, conspiracy theory drivel. Too bad AMD embeds an entire ARM core into its own chips to mimic the same functionality as "AMT". AMD calls it "TrustZone," but it's the exact same thing.

        Oh, and while you wet yourself like a frightened preschooler over the NSA backdoor that was supposedly implemented around the RDRAND instruction (which AMD is flat-out copying for Zen), AMD also put black-box "crypto accelerators" into these server parts, since the CPU power of an ARM chip is so anemic

  • by wbr1 ( 2538558 ) on Thursday January 14, 2016 @01:55PM (#51301395)
    Hopefully the sales of this will be good enough for them to push the Zen core chips this year. If they come out as expected, they could be great chips. I have several Bulldozer-based machines; when built they were a good balance of performance for price, but no more. They are really starting to show their age. They have always been spanked by i5-and-above Intel chips, and as new Intel parts continue to come out, prices on one- or two-generation-old Intel chips improve and the price difference ceases to be an issue.

    So, please AMD, I want to be a fan, give us a good CPU again!

  • These could be nice in appliances, i.e., routers, switches, NASes, etc. I can't see them being too useful in normal servers, since they're ARM, not Intel, and have a relatively low clock rate (1.7 to 2.0 GHz).

  • And not from SoftIron. "Available today from SoftIron" actually means available to somebody soon, maybe. But you won't find a listing anywhere and they won't even respond to queries from individuals.

  • Didn't I read about this already like 2 years ago?

    http://www.anandtech.com/show/... [anandtech.com]

  • Everything old is new again.

    It's basically a SiByte or Cavium network processor, only with ARM instead of MIPS.

    It'll probably be useful for terminating SSL connections in a box to offload the cryptography, and for offload of packet reassembly and other less interesting things from your main compute cluster, but not much else.

    The main problem is still that most ARM implementations' memory bandwidth sucks; not knowing who was on the design team for the thing, until we get real numbers out of benchmarks, it won

  • ....x86 cores with ARM cores and provide drivers for the OS so that workload can be switched seamlessly from power-saving ARM to high-performance x86, as well as run both and use x86 and ARM based apps simultaneously. Now THAT would be interesting!
