AMD Hardware

AMD Announces First ARM Processor

MojoKid writes "AMD's Andrew Feldman announced today that the company is preparing to sample its new eight-core ARM SoC (codename: Seattle). Feldman gave a keynote presentation at the fifth annual Open Compute Summit. The Open Compute Project (OCP) is Facebook's effort to decentralize and unpack the datacenter, breaking the replication of resources and the low-volume, high-margin parts that have traditionally been Intel's bread-and-butter. AMD is claiming that the eight ARM cores offer 2-4x the compute performance of the Opteron X1250, which isn't terribly surprising, considering that the X1250 is a four-core chip based on the Jaguar CPU with a relatively low clock speed of 1.1-1.9GHz. We still don't know the target clock speeds for the Seattle cores, but the embedded roadmaps AMD has released show the ARM embedded part targeting a higher level of CPU performance (and a higher TDP) than the Jaguar core itself."
This discussion has been archived. No new comments can be posted.

  • Re:Despite its name (Score:2, Interesting)

    by Anonymous Coward on Wednesday January 29, 2014 @01:01AM (#46097429)

    Nonsense.

    Most server code can just be recompiled and it just works. Even that isn't required if you're running a Linux distro, since the distros already ship packages built for ARM.

    That's before we consider that a lot of server code is Java, PHP, Python, Ruby, JavaScript, etc., which doesn't need recompiling at all.

    I can't speak to the power budget on servers, but clearly someone thinks there is a gain there.

    Besides, some competition in that space cannot be a bad thing, can it?
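
    To make the "just recompile" point concrete, here is a minimal sketch (an editorial addition, not from the comment above; the file name is made up, and it assumes a Debian/Ubuntu-style AArch64 cross compiler is installed):

        /* hello.c - trivial, architecture-independent C; nothing here cares
         * whether the target is x86-64 or 64-bit ARM. Illustrative commands
         * (the cross compiler is an assumption, e.g. the gcc-aarch64-linux-gnu
         * package on Debian):
         *   gcc -O2 -o hello hello.c                            native build
         *   aarch64-linux-gnu-gcc -O2 -o hello-arm64 hello.c    ARM build
         *   file hello-arm64     -> reports an ARM aarch64 ELF executable
         */
        #include <stdio.h>

        int main(void)
        {
            printf("Hello from whichever architecture this was built for\n");
            return 0;
        }

    Interpreted and JIT-compiled languages, as noted above, skip even that step.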

         

  • Re:Despite its name (Score:2, Interesting)

    by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Wednesday January 29, 2014 @02:21AM (#46097731) Journal

    I agree. Recompiling is not a big deal. C/C++ is standardized. The heavy lifting is the creation of the standard libraries, and any sensible chip and system vendor will help do that because it's absolutely necessary. This is not the same thing as porting from an Oracle database to MariaDB or some other DB; that's a big job because every database has its own unique set of extensions to SQL.

    x86 never was a good architecture. It was crap when it was created back in the 1970s, crap even when compared to other CISC architectures of that era, and despite tremendous improvement, it's still crap today. Motorola's 68000 series was superior. Intel went with a register-based design for the integer math, which is okay, but then for no good reason whatsoever they didn't stay consistent for the floating point math, opting for a horrible stack-based approach. The reason the true underlying architecture of a modern x86 CPU is RISC is that RISC is just that much better. Yes, so much better that even after allowing for the overhead of translating x86 into RISC-like operations, it is still faster than a CPU that executes x86 operations natively. They've done an amazing job of working around and amending the shortcomings of the x86 design, but it would be better to ditch the legacy cruft and make a fresh start.

    I mean, the instruction set has specialized instructions for handling packed decimal! And then there's the near-worthless string-search REPNE CMPSB family of instructions; the Boyer-Moore string search algorithm is much faster, and dates back to 1977. Another sad thing is that on some CPUs the built-in DIV instruction was so slow that it was sometimes faster to do integer division with shifts and subtracts (a quick sketch of that trick appears below). That's a serious knock on Intel, that they did such a poor job of implementing DIV.

    A long-time criticism of the x86 architecture has been that it has too few registers, and that the ones it does have are much too specialized. Like, only AX and DX can be used for integer multiplication and division. And BX is for indexing, and CX is for looping (B is for base and C is for count, you know; it's like the designers took their inspiration from Sesame Street's Cookie Monster and the Count!). This forces a lot of juggling to move data in and out of the few registers that can do the desired operation. This particular problem has been much alleviated by the addition of more registers and register renaming, but that doesn't address the numerous other problems.

    Yet another obsolete feature is CALL and RET, and of course PUSH and POP, because once again they use a stack. That was standard thinking 40 years ago, but today we know that more flexibility is better: calls and returns can be achieved with a JMP instruction that stores a return address at a location determined through some indirection, rather than with specialized CALL and RET instructions that pig out on a precious register to hold and update a stack pointer for a call stack, and that are a pain to work around when implementing things like tail recursion.

    Finally, the support for task switching, virtual memory, and concurrency was lacking. The so-called segmented memory architecture was terrible. The first attempt at OS-level instructions, in the 80286, was so badly done that hardly anyone tried to use it. The 80386 was much better, but still lacked an atomic compare-and-exchange instruction for handling semaphores; it wasn't until the 80486 that they finally got it good enough to support a real OS. That's a big reason why PCs had such a poor reputation compared to Big Iron, and were often dismissed as toys.

    That's not to say that ARM and other architectures don't have issues. But with x86, it's like they were trying for the worst possible design they could think of.
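
    To illustrate the shift-and-subtract trick mentioned above, here is a minimal C sketch (an editorial addition, not from the poster; the function name is made up, and a non-zero divisor is assumed):

        /* Restoring division done with shifts and subtracts, the software
         * fallback the comment above says sometimes beat a slow hardware DIV.
         * Hypothetical helper name; behaviour is undefined here if divisor == 0. */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t shift_sub_div(uint32_t dividend, uint32_t divisor)
        {
            uint32_t quotient = 0;
            uint32_t remainder = 0;

            /* Bring down one bit of the dividend at a time; whenever the
             * divisor fits into the partial remainder, subtract it and set
             * the corresponding quotient bit. */
            for (int bit = 31; bit >= 0; bit--) {
                remainder = (remainder << 1) | ((dividend >> bit) & 1u);
                if (remainder >= divisor) {
                    remainder -= divisor;
                    quotient |= 1u << bit;
                }
            }
            return quotient;
        }

        int main(void)
        {
            printf("%u\n", shift_sub_div(100000u, 7u)); /* prints 14285 */
            return 0;
        }

    In practice compilers also turn division by a known constant into a multiply and a shift, which is the same idea taken further.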

  • Re:Despite its name (Score:5, Interesting)

    by evilviper ( 135110 ) on Wednesday January 29, 2014 @05:26AM (#46098271) Journal

    Your criticisms are probably quite apt for a 286 processor. Some might be relevant to 686 processors too... But they make no sense in a world that has switched to x86-64.

    The proprietary processor wars are over. Alpha and VAX are dead. PA-RISC is dead. MIPS has been relegated to the low end. SPARC is slowly drowning. And even Itanium's days are severely numbered. Only POWER has kept pace, in fits and starts, and for all the loud press, ARM is only biting at x86's ankles.

    x86 has shown it can not just keep pace with but outclass every other architecture. Complain about CISC all you want, but the instruction complexity made it easy to keep bolting on more instructions... from MMX to SSE3 and everything in between. The complaints about idiosyncrasies are quite important to the five x86 ASM programmers out there, and to compiler writers, and to nobody else.

    I wouldn't mind a future where MIPS CPUs overtake x64, but any debate about the future of processors ended when AMD skillfully managed the 64-bit transition, and they and Intel killed off all the competition. With CPU prices falling to a pittance, and no heavy computational loads found for the average person, there's no benefit to be had, even in the wildest imagination, of switching the PC world to a different architecture, painful transition or no.

  • Re:Despite its name (Score:4, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Wednesday January 29, 2014 @05:27AM (#46098275) Journal
    I have no love for Android; but there is one major difference between Intel's latest and assorted ARM:

    Has Intel managed to cram some impressive x86 punch into ever lower power envelopes? Yes, yes indeed. Are they the only game in town, period, if you want reasonably speedy x86s at low power? Yes, unfortunately so. And, to the degree that the threat from iPads and the like doesn't keep them in check, prices reflect that.

    ARM, by contrast, lacks some punch and a lot of legacy software; but approximately a zillion vendors using undistinguished foundry processes can achieve decent results at low power. Prices reflect this.

    So long as ARM remains a looming threat, Intel will price their parts such that they (by virtue of Intel's unquestioned technical prowess) are very, very compelling. If ARM shows any signs of weakness, it'll be back to the early Pentium M days, when Intel pretended that the 'Pentium 4 Mobile' was good enough and that a Pentium M deserved a massive price premium. Not fun, at all.
  • Re:x86 IS efficient (Score:5, Interesting)

    by luminousone11 ( 2472748 ) on Wednesday January 29, 2014 @05:53AM (#46098353)
    One byte? You have no idea what you are talking about. In 64-bit mode, most instructions that touch 64-bit registers need a REX prefix byte in front of the opcode, so real AMD64 code is nowhere near as compact as the one-byte-opcode story suggests. Also, 64-bit ARM is a new instruction set, and it bears little resemblance to 32-bit ARM. The fact is that 64-bit ARM looks much more CISC-y than 32-bit ARM: it provides multiple addressing modes for load instructions, integrates the SIMD instructions rather than using costly co-processor extensions, and has a dedicated stack pointer with its own access instructions. And in a huge break from prior ARM instruction sets, it drops the pervasive conditional execution of earlier ARM. It manages to increase the general-purpose register count from 16 to 31 to boot. ARM has a bright future; it is not forced to waste huge swaths of transistors on decoding a scatter-brained instruction encoding built from 40 years of stacking shit on top of shit.
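
    For anyone who wants to eyeball the encodings being argued about, one rough approach (an editorial sketch; it assumes stock gcc/binutils plus an AArch64 cross compiler, and the file name is invented) is to disassemble the same tiny function built for both targets:

        /* encodings.c - build and disassemble for each target, e.g.:
         *   gcc -O2 -c encodings.c && objdump -d encodings.o
         *   aarch64-linux-gnu-gcc -O2 -c encodings.c && objdump -d encodings.o
         * In the x86-64 listing the instruction bytes vary in length and the
         * 64-bit register operations generally carry a REX prefix byte; in the
         * AArch64 listing every instruction is exactly 4 bytes. */
        #include <stdint.h>

        uint64_t mac(uint64_t acc, uint64_t x, uint64_t y)
        {
            /* A multiply-accumulate: only a handful of instructions on either
             * target, so the whole disassembly fits on one screen. */
            return acc + x * y;
        }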
  • Re:Despite its name (Score:2, Interesting)

    by TheRaven64 ( 641858 ) on Wednesday January 29, 2014 @07:39AM (#46098665) Journal
    That depends on how you're measuring success. If it's by fastest available CPU, by most sales, or by highest profits, then neither AMD nor Intel has been dominant since the '80s.
  • Re:x86 IS efficient (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Wednesday January 29, 2014 @07:47AM (#46098689) Journal

    > Actually x86 IS efficient, just for something completely different. The architecture itself is totally unimportant, as deep inside it is yet another microcode translator and doesn't differ significantly from PPC or SPARC nowadays.

    This is true, unless you care about power. The decoder in an x86 pipeline is more accurately termed a parser. The complexity of the x86 instruction set adds 1-3 pipeline stages relative to a simpler encoding. This is logic that has to be powered all of the time (except in Xeons, where they cache decoded micro-ops for tight loops and can power-gate the decoder, reducing their pipeline to something more like a RISC processor's, but only when running very small loops).

    > x86's short instructions allow for highly efficient memory usage and for a much, much higher ops per cycle.

    It is denser than classic ARM code, yes. But my tests with Thumb-2 found that IA32 and Thumb-2 code were about the same density, plus or minus 10%, with neither a clear winner. However, the Thumb-2 decoder is really trivial, whereas the IA32 decoder is horribly complex.

    > This is such a big deal that ARM created a short-instruction version of the ARM opcodes just to close the gap. But that instruction set is totally incompatible and also totally ignored.

    Thumb-2 is now the default for any ARMv7 (Cortex-A8 and newer) compiler target, because it always generates denser code than ARM mode and has no disadvantages. Everything else in your post is also wrong, but others have already posted the corrections.
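
    For anyone who wants to repeat that kind of density comparison, a rough recipe (an editorial sketch, not TheRaven64's actual methodology; it assumes GCC cross compilers and binutils, and the file name is invented) is to compile the same purely computational function for each target with -Os and compare the text sizes reported by size:

        /* density.c - an arbitrary checksum loop with enough arithmetic,
         * branching and memory access to make the generated code non-trivial.
         * Example builds to compare (package names vary by distro):
         *   gcc -Os -m32 -c density.c                          IA32
         *   arm-linux-gnueabihf-gcc -Os -mthumb -c density.c   Thumb-2
         *   arm-linux-gnueabihf-gcc -Os -marm -c density.c     classic ARM
         * then compare the "text" column from: size density.o */
        #include <stdint.h>

        uint32_t checksum(const uint8_t *buf, uint32_t len)
        {
            uint32_t a = 1, b = 0;

            for (uint32_t i = 0; i < len; i++) {
                a = (a + buf[i]) % 65521u;   /* Adler-32-style running sums */
                b = (b + a) % 65521u;
            }
            return (b << 16) | a;
        }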

  • Re:Pretty low bar... (Score:4, Interesting)

    by gtall ( 79522 ) on Wednesday January 29, 2014 @08:12AM (#46098771)

    And the point is that this is about servers; it doesn't matter if more ARM chips are selling. You wouldn't compare a smartphone SoC with a server chip.

    I've heard arguments on both sides about server stats for ARM vs. Intel servers. Personally, I hope Intel gets kicked in the teeth, but I have yet to see a knock-down argument that ARM has what it takes to beat them. There will probably be applications where each excels.

    Making comparisons now is also somewhat pointless. What matters more are the trajectories of both architectures, and Intel could also try to pull another Itanic, only successfully this time. At that point, plotting trajectories now would be pointless, because a new Intel architecture would be on an entirely different trajectory.

  • Re:Despite its name (Score:5, Interesting)

    by Alioth ( 221270 ) <no@spam> on Wednesday January 29, 2014 @09:09AM (#46098953) Journal

    ARM scales fine (in another way). Sophie Wilson (one of the original ARM designers) has indeed said that ARM wouldn't be any better than x86 today in terms of power per unit of computing done. However, one advantage ARM has for parallelizable workloads is that you can get more ARM cores onto a given area of silicon. Just the part of an x86 that works out how long the next instruction is takes up about the area of an entire ARM core, so if you want lots of cores this counts for something (for example, the SpiNNaker research project at Manchester University uses absurd numbers of ARM cores).
