AMD Hardware

AMD Announces First ARM Processor

MojoKid writes "AMD's Andrew Feldman announced today that the company is preparing to sample its new eight-core ARM SoC (codename: Seattle). Feldman gave a keynote presentation at the fifth annual Open Compute Summit. The Open Compute Project (OCP) is Facebook's effort to decentralize and unpack the datacenter, breaking the replication of resources and the reliance on low-volume, high-margin parts that have traditionally been Intel's bread-and-butter. AMD is claiming that the eight ARM cores offer 2-4x the compute performance of the Opteron X1250, which isn't terribly surprising, considering that the X1250 is a four-core chip based on the Jaguar CPU with a relatively low clock speed of 1.1-1.9GHz. We still don't know the target clock speeds for the Seattle cores, but the embedded roadmaps AMD has released show the ARM embedded part actually targeting a higher level of CPU performance (and a higher TDP) than the Jaguar core itself."
This discussion has been archived. No new comments can be posted.

  • Despite its name (Score:5, Informative)

    by dbIII ( 701233 ) on Tuesday January 28, 2014 @11:37PM (#46097355)
    Jaguar is for tablets and seems to be designed for a price point, not speed. That's why they are comparing it with the ARM stuff and not using an Opteron 6386 as a comparison.
    • Comment removed (Score:4, Insightful)

      by account_deleted ( 4530225 ) on Tuesday January 28, 2014 @11:48PM (#46097397)
      Comment removed based on user account deletion
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Nonsense.

        Most server code can just be recompiled and it just works (see the sketch below). Even that is not required if you are running a Linux distro.

        That's before we consider that a lot of server code is Java, PHP, Python, Ruby, JavaScript, etc., which does not even need recompiling.

        I can't speak for the power budget on servers but clearly someone thinks there is a gain there.

        Besides, some competition in that space cannot be a bad thing can it?
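        A minimal sketch of that claim, assuming a typical Linux build environment (the aarch64-linux-gnu-gcc cross toolchain named below is such an assumption): strictly portable C builds unchanged for either architecture.

        /* demo.c: portable ISO C; nothing here cares whether the target is x86-64 or ARM. */
        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint64_t sum = 0;
            for (uint64_t i = 1; i <= 1000000; i++)
                sum += i;
            printf("sum = %llu, pointer size = %zu bits\n",
                   (unsigned long long)sum, sizeof(void *) * 8);
            return 0;
        }

        /* The same source builds for either target without changes, e.g.:
         *   gcc -O2 -o demo_x86 demo.c
         *   aarch64-linux-gnu-gcc -O2 -o demo_arm64 demo.c
         */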

             

        • Re: (Score:2, Insightful)

          by Anonymous Coward
          These things seem almost purpose-built for memcached servers and... well, can't think of much else. And for a memcached box, all the profit is going to the DRAM vendors. Saw somewhere else that "AMD will take a loss on every chip, but make up for it in volume..." That sounds about right for their business plan, but can they even execute on that...? Will be an interesting year for them, I suppose.
        • Re: (Score:2, Interesting)

          by bzipitidoo ( 647217 )

          I agree. Recompiling is not a big deal. C/C++ is standardized. The heavy lifting is the creation of standard libraries, and any sensible chip and system vendor will help do that because it's absolutely necessary. This is not the same thing as porting from an Oracle database to MariaDB or some other DB. That's a big job because every database has its own unique set of extensions to SQL.

          x86 never was a good architecture. It was crap when it was created back in the 1970s, crap even when compared to ot

          • by LordLimecat ( 1103839 ) on Wednesday January 29, 2014 @01:44AM (#46097781)

            It's not x86 today, which kind of makes me think you have no idea what you're talking about.

            opting for a horrible stack based approach.

            I'm not one to argue architectural advantages, but I'd point out that both of the top two CPU manufacturers chose the same instruction set. No one else has been able to catch the pair of them in about a decade.

            • by gbjbaanb ( 229885 ) on Wednesday January 29, 2014 @03:40AM (#46098109)

              It's unfortunate, but sometimes the best way to drive a screw into a piece of wood is just to keep smashing at it with bigger and bigger hammers.

              I guess this approach is what Intel and AMD have been doing with x86.

            • Re: (Score:2, Interesting)

              by TheRaven64 ( 641858 )
              That depends on how you're measuring success. If it's by fastest available CPU, by most sales, or by highest profits, then neither AMD nor Intel has been dominant since the '80s.
              • Raw performance, performance-per-watt, performance-per-core.

                You know of a chip that beats a Haswell or even a Steamroller in those departments?

          • x86 IS efficient (Score:4, Informative)

            by Crass Spektakel ( 4597 ) on Wednesday January 29, 2014 @02:41AM (#46097923) Homepage

            Actually x86 IS efficient, just for something completely different. The architecture itself is totally unimportant: deep inside it is yet another microcode translator, and it doesn't differ significantly from PPC or SPARC nowadays.

            x86's short instructions allow for highly efficient memory usage and for a much, much higher ops-per-cycle rate. This is such a big deal that ARM created a short-instruction encoding of its opcodes (Thumb) just to close the gap. But that instruction set is totally incompatible and also largely ignored.

            Short instructions do not matter much on slow architectures like today's ARM world. Those designs just want to save power, so it fits that ARM is also a heavy user of slow in-order execution.

            A nice example: incrementing a 64-bit register on x86 takes ONE byte, and recent x86 CPUs can run this operation on different registers up to 100 times PER CYCLE, with all the instructions loaded from memory into cache in THREE to EIGHT cycles. On the other hand, the same operation on ARM takes 12 bytes for a single increment, and loading a few dozen of these operations would take THOUSANDS of clock cycles.

            And now you know why high-end x86 is 20-50 times faster than ARM.

            • Re:x86 IS efficient (Score:5, Interesting)

              by luminousone11 ( 2472748 ) on Wednesday January 29, 2014 @04:53AM (#46098353)
              1 byte? You have no idea what you are talking about. AMD64 has a prefix byte before the first opcode byte, so in 64-bit mode no such instruction is smaller than 2 bytes. Also, 64-bit ARM is a new instruction set, and it does not in any way resemble 32-bit ARM. The fact is 64-bit ARM looks much more CISC'y than 32-bit ARM: it provides multiple address modes for load instructions, integrates the SIMD instructions rather than using costly co-processor extensions, has slightly variable-length instructions, and adds a dedicated stack register with access instructions. And in a huge break from prior ARM instruction sets, it drops conditional execution from the instruction set. It also manages to increase the register count from 16 to 32 to boot. ARM has a bright future. It is not forced to waste huge swaths of transistors on decoding stupidly scatter-brained instruction encodings built from 40 years of stacking shit on top of shit.
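              For what it's worth, here is a hedged way to check the encodings both posts argue about: build this for each target and inspect the bytes (e.g. gcc -c enc.c && objdump -d enc.o). GCC-style inline assembly is assumed; the byte sequences in the comments come from the respective ISA manuals.

              /* enc.c: force the exact instructions under discussion so the object
               * code can be disassembled and measured per target. */
              void increments(void)
              {
              #if defined(__x86_64__)
                  __asm__ volatile ("incq %%rax" ::: "rax");    /* 48 ff c0: 3 bytes (REX.W prefix), not 1 */
              #elif defined(__aarch64__)
                  __asm__ volatile ("add x0, x0, #1" ::: "x0"); /* 0x91000400: a fixed 4 bytes, not 12 */
              #endif
              }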
            • Re:x86 IS efficient (Score:5, Interesting)

              by TheRaven64 ( 641858 ) on Wednesday January 29, 2014 @06:47AM (#46098689) Journal

              Actually x86 IS efficient, just for something completely different. The architecture itself is totally unimportant: deep inside it is yet another microcode translator, and it doesn't differ significantly from PPC or SPARC nowadays.

              This is true, unless you care about power. The decoder in an x86 pipeline is more accurately termed a parser. The complexity of the x86 instruction set adds 1-3 pipeline stages relative to a simpler encoding. This is logic that has to be powered all of the time (except in Xeons, where they cache decoded micro-ops for tight loops and can power gate the decoder, reducing their pipeline to something more like a RISC processor, but only when running very small loops).

              x86's short instructions allow for highly efficient memory usage and for a much, much higher ops-per-cycle rate.

              It is more efficient than ARM. My tests with Thumb-2 found that IA32 and Thumb-2 code were about the same density, plus or minus 10%, with neither a clear winner (a sketch of how to reproduce that kind of comparison follows at the end of this comment). However, the Thumb-2 decoder is really trivial, whereas the IA32 decoder is horribly complex.

              This is such a big deal that ARM created a short-instruction encoding of its opcodes (Thumb) just to close the gap. But that instruction set is totally incompatible and also largely ignored.

              Thumb-2 is now the default for any ARMv7 (Cortex-A8 and newer) compiler, because it always generates denser code than ARM mode and has no disadvantages. Everything else in your post is also wrong, but others have already posted corrections there.
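              A rough sketch of how that kind of density comparison can be reproduced (not necessarily TheRaven64's methodology): compile one identical translation unit for each ISA and compare the .text sizes. The cross-toolchain name below is the common Debian package and is an assumption about the environment.

              /* density.c: any representative code works; what matters is building the
               * same source for both targets and comparing section sizes:
               *   gcc -m32 -O2 -c density.c -o density_ia32.o
               *   arm-linux-gnueabihf-gcc -mthumb -O2 -c density.c -o density_thumb2.o
               *   size density_ia32.o density_thumb2.o   (compare the "text" column)
               */
              unsigned crc8(const unsigned char *p, unsigned long n)
              {
                  unsigned crc = 0xff;
                  while (n--) {
                      crc ^= *p++;
                      for (int i = 0; i < 8; i++)
                          crc = (crc & 0x80) ? ((crc << 1) ^ 0x31) & 0xffu
                                             : (crc << 1) & 0xffu;
                  }
                  return crc;
              }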

              • by thogard ( 43403 )

                There is one disadvantage of the different ARM modes, and that is that an arbitrary program will contain all the bit patterns needed to make some useful code. This means that any reasonably large program will have enough code to support hacking techniques like Return Oriented Programming if another bug can be exploited. I would love to see some control bits that turn off the other modes.

                • Yes, it is a slight problem with ARM. It's also a big problem on x86 - there are a large number of ways of interpreting a two-instruction sequence depending on where you start within the first instruction. ROP (and BOP) benefit a lot from this, unfortunately. There isn't really a good solution.
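                  A minimal illustration of "starting within the first instruction" (the bytes come from the x86 opcode map, not from either post): the five bytes of one innocuous instruction contain a complete gadget when decoded from a different offset.

                  /* b8 3c 0f 05 c3  decoded from offset 0:  mov eax, 0xc3050f3c
                   * decoded again starting at offset 2:      0f 05   syscall
                   *                                          c3      ret
                   * A ROP chain that jumps into the middle of the mov gets a syscall
                   * gadget the compiler never emitted; a fixed-width, aligned ISA
                   * cannot be re-sliced this way. */
                  static const unsigned char overlapping[] = { 0xb8, 0x3c, 0x0f, 0x05, 0xc3 };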
              • Except that 64-bit ARM (AArch64) doesn't have Thumb. Source [realworldtech.com]. So in 64-bit mode (which is what these server processors will be running in), x86-64 again has a code density advantage over AArch64.
          • Re:Despite its name (Score:5, Interesting)

            by evilviper ( 135110 ) on Wednesday January 29, 2014 @04:26AM (#46098271) Journal

            Your criticisms are probably quite apt for a 286 processor. Some might be relevant to 686 processors too... But they make no sense in a world that has switched to x86-64.

            The proprietary processor wars are over. Alpha and VAX are dead. PA-RISC is dead. MIPS has been relegated to the low end. SPARC is slowly drowning. And even Itanium's days are severely numbered. Only POWER has kept pace, in fits and starts, and for all the loud press, ARM is only biting at x86's ankles.

            x86 has been shown able to not just keep pace but outclass every other architecture. Complain about CISC all you want, but the instruction complexity made it easy to keep bolting on more instructions, from MMX to SSE3 and everything in between. The complaints about idiosyncrasies are quite important to the 5 x86 ASM programmers out there, and to compiler writers, and to nobody else.

            I wouldn't mind a future where MIPS CPUs overtake x64, but any debate about the future of processors ended when AMD skillfully managed the 64-bit transition, and they and Intel killed off all the competition. With CPU prices falling to a pittance, and no heavy computational loads found for the average person, there's no benefit to be had, even in the wildest imagination, of switching the PC world to a different architecture, painful transition or no.

            • I'd say something completely different.

              Manufacturing technology was always the most important factor in the speed of a CPU. That meant that R&D money translated into faster chips, and all the R&D money was obviously on the market leader, first the x86 and then the amd64 architectures.

              Well, manufacturing is still extremely important, but it's taking bigger and bigger investments to deliver the same gains in CPU speed. At the same time, the x86 market is shrinking, and arm64 is exploding. Expect huge

              • R&D money translated into faster chips, and all the R&D money was obviously on the market leader, first the x86 and then the amd64 architectures.

                Except x86 wasn't the most profitable, so it didn't get all the R&D money. Entire companies were built around proprietary lock-in. Without a good CPU, customers won't buy your servers, your OSes, your other software, your support contract, etc. Those proprietary architectures absolutely got lots of R&D, as multi-billion dollar businesses were dep

                  • Well, OK, margins are always bigger for locked-in products, and at some point total profits were bigger for them too, and a more open standard normally wins over more closed ones. Still, that does not apply to the x86 vs. ARM fight; there is a small parenthesis about ARM being more open, and winning the mobile market because of it, but for servers x86 is open enough.

                  • I don't disagree, EXCEPT, if AMD disappears, x86 instantly becomes 100% Intel proprietary.

                    Now, maybe some other company could come along and use AMD's x64 instructions, plus only the Intel x86 bits that aren't under patent or some such, some of their own, and then compilers and binaries will only need minor changes... But that's a hell of a lot of work, so I wouldn't assume it'll happen.

            • The war is over, and x86 won. But it won not because it's the best, but because of economics, marketing, and the quirks of history.

              It's the same story everywhere in technology, be it instruction sets, MP3 players, or video media. The winner only needs to be good enough. And x86 has been good enough, hasn't it? Not the best, of course, and by accident some things (like SSE, as you pointed out) have even been made easier.

              And yet, the computing world would be better off if we could somehow break our bac

          • by Kjella ( 173770 )

            TL;DR, but to paraphrase Churchill: "x86 is the worst form of instruction set, except for all those other forms that have been tried." The rest are dead, Jim. The cruft has been slowly weeded out by extensions and x86-64, and compilers will avoid the poor instructions. The worst are moved to microcode and take up essentially no silicon at all; they're just there so your 8086 software will run unchanged. It's like getting your panties in a bunch over DVORAK: whether or not it's better, QWERTY is close enough t

          • I mean, the instruction set has specialized instructions for handling packed decimal! And then there's the near-worthless string search REPNE CMPSB family of instructions. The Boyer-Moore string search algorithm is much faster, and dates back to 1977. Another sad thing is that for some CPUs, the built-in DIV instruction was so slow that sometimes it was faster to do integer division with shifts and subtracts. That's a serious knock on Intel that they did such a poor job of implementing DIV. A long-time criticism of the x86 architecture has been that it has too few registers, and what it does have is much too specialized. Like, only AX and DX can be used for integer multiplication and division. And BX is for indexing, and CX is for looping (B is for base and C is for count, you know -- it's like the designers took their inspiration from Sesame Street's Cookie Monster and the Count!). This forces a lot of juggling to move data in and out of the few registers that can do the desired operation. This particular problem has been much alleviated by the addition of more registers and shadow registers, but that doesn't address the numerous other problems. Yet another feature that is obsolete is the CALL and RET and of course the PUSH and POP instructions, because once again they used a stack. Standard thinking 40 years ago.

            It was standard on the 8086 (introduced in 1978). The 80386 (1985) is a general-purpose register machine and can use a 0:32 flat memory model. And modern x64 (2003) has twice as many registers, and the ABI specifies SSE for floating point, not the 8087. Also, in 64-bit mode the segment bases and limits for code and data (i.e., any instruction which does not have a segment override prefix) are ignored.

            I.e., pretty much all the things you're complaining about have been fixed, and if you look at benchmarks x64 chips have been
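            On the string-search point in the quoted post, a hedged sketch of the Boyer-Moore idea (the simplified Horspool variant, written for illustration rather than taken from either poster): a bad-character table lets the search skip up to the whole pattern length on a mismatch, which is why it beats a byte-at-a-time REPNE CMPSB scan on typical data.

            #include <limits.h>
            #include <stddef.h>
            #include <string.h>

            /* Boyer-Moore-Horspool: offset of the first occurrence of needle in
             * haystack, or -1 if it is absent (or longer than the haystack). */
            long bmh_search(const unsigned char *hay, size_t hlen,
                            const unsigned char *needle, size_t nlen)
            {
                size_t skip[UCHAR_MAX + 1];
                if (nlen == 0)
                    return 0;
                if (nlen > hlen)
                    return -1;
                for (size_t i = 0; i <= UCHAR_MAX; i++)   /* default shift: whole pattern */
                    skip[i] = nlen;
                for (size_t i = 0; i + 1 < nlen; i++)     /* bad-character shifts */
                    skip[needle[i]] = nlen - 1 - i;
                for (size_t pos = 0; pos + nlen <= hlen;
                     pos += skip[hay[pos + nlen - 1]]) {  /* shift by the last byte examined */
                    if (memcmp(hay + pos, needle, nlen) == 0)
                        return (long)pos;
                }
                return -1;
            }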

      • by Osgeld ( 1900440 )

        It's been scaling since the mid-1980s, and a hell of a lot more gracefully than any other CPU ever made to date. And BTW, even Windows NT 4.0 supported ARM.

      • If one thinks ARM does not scale, it would be interesting if he would point out why he thinks so.
        There is no technical reason for ARM not to scale ...

      • by DrXym ( 126579 ) on Wednesday January 29, 2014 @05:12AM (#46098393)
        Most server code? Most server code is Java, Python, PHP or some other abstraction running over the hardware. Provided the runtime exists to support the abstraction, it is largely irrelevant what architecture is powering it. I expect that operations already using Linux or some other Unix variant are well positioned to jump over. Windows-based operations, not so much, though Microsoft is in the cloud computing space too, and this might motivate them to support ARM.

        Why companies might choose ARM really depends not on whether it is faster than Intel CPUs, but on whether it is fast enough for the task at hand and better in other regards such as power consumption, cooling, rack space, etc. Google, Facebook, Amazon et al. run enormous data centers with custom boot images and have teams capable of producing images for different architectures. This would seem to be the market AMD is targeting.

      • by higuita ( 129722 )

        You are also forgetting that an x86 or amd64 chip is a RISC CPU with a layer of CISC hiding the RISC. That layer takes die space, power and resources (both in design and in operation). Going to a simpler CPU design saves silicon, increasing the number of CPUs per wafer and so increasing the profit. Also, a simpler CPU saves internal resources while developing the CPU.

        So AMD building ARM CPUs is a way to reduce costs, increase potential profit per CPU and, of course, be ready for and test the market demand for ARM.

        • You are also forgetting that an x86 or amd64 chip is a RISC CPU with a layer of CISC hiding the RISC.

          You ignorants keep saying this shit, but it's not accurate at all. You have taken a small truth and inflated it into a big lie.

          The small truth is that there IS a layer that converts instructions into micro-ops, and that some instructions will in fact generate 3+ micro-ops.

          The big bullshit ignorant lie is that you then conclude that ALL instructions are converted into multiple micro-ops. That's just not the case.

          It's not "CISC on RISC" -- it's "CISC and RISC" -- The basic technique in the inevitable con

          • by higuita ( 129722 )

            Call it whatever you want... CISC AND RISC? It can also be small-CISC AND small-CISC and small-CISC AND small-CISC... but that starts to translate into a plain RISC.

            If you took out that translation layer and used all the available micro-ops directly, you would call that CPU RISC-like, not CISC-like... Even if sometimes you can only execute one operation, other times you CAN execute several, moving further away from a CISC design and closer to a RISC design.

            Anyway, that layer takes away performance, consumes reso

      • Re:Despite its name (Score:5, Interesting)

        by Alioth ( 221270 ) <no@spam> on Wednesday January 29, 2014 @08:09AM (#46098953) Journal

        ARM scales fine (in another way). Sophie Wilson (one of ARM's original developers) indeed said that ARM wouldn't be any better today than x86 in terms of power per unit of computing done. However, an advantage ARM has for parallelizable workloads is that you can get more ARM cores onto a given area of silicon. The part of an x86 that merely figures out how long the next instruction is takes the area of an entire ARM core, so if you want lots of cores this counts for something (for example, the SpiNNaker research project at Manchester University uses absurd numbers of ARM cores).

      • by AmiMoJo ( 196126 ) *

        AMD is betting that their "APU" designs, where a GPU offloads a lot of heavy lifting from the CPU, will provide good performance and power consumption. Offloading to the GPU is actually more advanced on mobile platforms than on the desktop, so it makes sense.

    • by MrEricSir ( 398214 ) on Wednesday January 29, 2014 @12:08AM (#46097451) Homepage

      Jaguar is for tablets and seems to be designed for a price point, not speed. That's why they are comparing it with the ARM stuff and not using an Opteron 6386 as a comparison.

      The question is whether Jaguar itself is really 64-bit, or if it's just the graphics processor that's 64-bit and the rest is 32-bit.

  • An FX-8150 has a SPECint_rate of 115. I've never seen an 8350, but it should be around 130-ish, just like an Opteron 6212.

  • I would have thought AMD would have a licensing clause as part of the sale of the Imageon (Adreno) to Qualcomm in case they ever decided to re-enter the market.

  • The microserver market is still less than half a percent of the server market and most of that is x86, not ARM. That's probably why Calxeda went bust.

  • They're calling it the Opteron A. Seriously, AMD? That won't be confusing, when Opteron can now mean ARM or x86_64. AMD's processor naming scheme is already confusing, and they just decided to make it more confusing. Idiots.
