AMD Announces First ARM Processor 168
MojoKid writes "AMD's Andrew Feldman announced today that the company is preparing to sample its new eight-core ARM SoC (codename: Seattle). Feldman gave a keynote presentation at the fifth annual Open Compute Summit. The Open Compute Project (OCP) is Facebook's effort to decentralize and unpack the datacenter, breaking the replication of resources and low-volume, high-margin parts that have traditionally been Intel's bread-and-butter. AMD is claiming that the eight ARM cores offer 2-4x the compute performance of the Opteron X1250 — which isn't terribly surprising, considering that the X1250 is a four-core chip based on the Jaguar CPU, with a relatively low clock speed of 1.1–1.9GHz. We still don't know the target clock speeds for the Seattle cores, but the embedded roadmaps AMD has released show the ARM embedded part actually targeting a higher level of CPU performance (and a higher TDP) than the Jaguar core itself."
Despite its name (Score:5, Informative)
Comment removed (Score:4, Insightful)
Re: (Score:2, Interesting)
Nonsense.
Most server code can just be recompiled and it just works. Even that is not required if you are running a Linux distro.
And that's before considering that a lot of server code is Java, PHP, Python, Ruby, JavaScript, etc., which doesn't need recompiling at all.
I can't speak for the power budget on servers but clearly someone thinks there is a gain there.
Besides, some competition in that space cannot be a bad thing can it?
Re: (Score:2, Insightful)
Re: (Score:2, Interesting)
I agree. Recompiling is not a big deal. C/C++ is standardized. The heavy lifting is the creation of standard libraries, and any sensible chip and system vendor will help do that because it's absolutely necessary. This is not the same thing as porting from an Oracle database to MariaDB or some other DB. That's a big job because every database has its own unique set of extensions to SQL.
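As a quick illustration of the "just recompile" point, here is a minimal C sketch: nothing in it is architecture-specific, so the same source builds unchanged with an x86-64 or AArch64 toolchain. The __x86_64__ and __aarch64__ macros are standard GCC/Clang predefined macros and are only used here to report what the binary was built for; this is an illustrative sketch, not anything from the article.

#include <stdio.h>

int main(void)
{
    /* The only architecture-aware part is this reporting block. */
#if defined(__aarch64__)
    const char *arch = "AArch64";
#elif defined(__x86_64__)
    const char *arch = "x86-64";
#else
    const char *arch = "something else";
#endif
    /* Everything else is plain ISO C and just works after a recompile. */
    printf("Hello from %s; the source didn't change, only the compiler target did.\n", arch);
    return 0;
}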
x86 never was a good architecture. It was crap when it was created back in the 1970s, crap even when compared to ot
Re:Despite its name (Score:4, Insightful)
It's not x86 today, which kind of makes me think you have no idea what you're talking about.
opting for a horrible stack based approach.
I'm not one to argue architectural advantages, but I'd point out that both of the top two CPU manufacturers chose the same instruction set. No one else has been able to catch the pair of them in about a decade.
Re:Despite its name (Score:4, Insightful)
It's unfortunate, but sometimes the best way to drive a screw into a piece of wood is just to keep smashing at it with bigger and bigger hammers.
I guess this approach is what Intel and AMD have been doing with x86.
Re: (Score:2)
Once again, they're not doing anything with x86. They're using AMD64.
Re: (Score:2, Interesting)
Re: (Score:2)
Raw performance, performance-per-watt, performance-per-core.
You know of a chip that beats a haswell or even steamroller in those departments?
Re: (Score:2)
x86 IS efficient (Score:4, Informative)
Actually x86 IS efficient, just at something completely different. The architecture itself is totally unimportant, as deep inside it is yet another microcode translator and doesn't differ significantly from PPC or SPARC nowadays.
x86 short instructions allow for highly efficient memory usage and for much, much, much higher ops per cycle. This is such a big deal that ARM has created a short-command version of ARM opcodes just to close the gap. But that instruction set is totally incompatible and also totally ignored.
Short instructions do not matter on slow architectures like today's ARM world. Those just want to save power, and so it fits in well that ARM is also a heavy user of slow in-order execution.
A nice example: incrementing a 64-bit register in x86 takes ONE byte, and recent x86 CPUs can run this operation on different registers up to 100 times PER CYCLE, with all the instructions loaded from memory to cache in THREE to EIGHT cycles. On the other hand, the same operation on ARM takes 12 bytes for a single increment; loading a few dozen of these operations would take THOUSANDS of clock cycles.
And now you know why high-end x86 is 20-50 times faster than ARM.
Re:x86 IS efficient (Score:5, Interesting)
Re: (Score:2)
Damn, moderation mistake.
Re:x86 IS efficient (Score:5, Interesting)
Actually x86 IS efficient, just at something completely different. The architecture itself is totally unimportant, as deep inside it is yet another microcode translator and doesn't differ significantly from PPC or SPARC nowadays.
This is true, unless you care about power. The decoder in an x86 pipeline is more accurately termed a parser. The complexity of the x86 instruction set adds 1-3 pipeline stages relative to a simpler encoding. This is logic that has to be powered all of the time (except in Xeons, where they cache decoded micro-ops for tight loops and can power gate the decoder, reducing their pipeline to something more like a RISC processor, but only when running very small loops).
x86 short instructions allow for highly efficient memory usage and for much, much, much higher ops per cycle.
It is more efficient than ARM. My tests with Thumb-2 found that IA32 and Thumb-2 code were about the same density, plus or minus 10%, with neither a clear winner. However, the Thumb-2 decoder is really trivial, whereas the IA32 decoder is horribly complex.
This is such a big deal that ARM has created a short-command version of ARM opcodes just to close the gap. But that instruction set is totally incompatible and also totally ignored.
Thumb-2 is now the default for any ARMv7 (Cortex-A8 and newer) compiler, because it always generates denser code than ARM mode and has no disadvantages. Everything else in your post is also wrong, but others have already added corrections to you there.
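For anyone who wants to sanity-check the density comparison themselves, a quick sketch: compile the same C file once in ARM mode and once in Thumb-2 mode and compare the .text sizes. The -Os, -marm, and -mthumb flags are standard GCC options; the arm-linux-gnueabihf-gcc triplet and the density.c filename are just example names for whatever 32-bit ARM cross toolchain happens to be installed.

/* Hypothetical file: density.c
 *
 * Build both ways and compare section sizes, e.g.:
 *   arm-linux-gnueabihf-gcc -Os -marm   -c density.c -o arm.o
 *   arm-linux-gnueabihf-gcc -Os -mthumb -c density.c -o thumb.o
 *   size arm.o thumb.o
 *
 * The function below is just arbitrary integer code to give the
 * compiler something to chew on; any real source file works too. */
unsigned hash_ish(const unsigned char *buf, unsigned len)
{
    unsigned h = 2166136261u;            /* FNV-1a style hash constant */
    for (unsigned i = 0; i < len; i++) {
        h ^= buf[i];
        h *= 16777619u;
    }
    return h;
}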
Re: (Score:3)
There is one disadvantage of the different ARM modes, and that is that an arbitrary program will contain all the bit patterns needed to make some useful code. This means that any reasonably large program will have enough code to support hacking techniques like Return Oriented Programming if another bug can be exploited. I would love to see some control bits that turn off the other modes.
Re: (Score:2)
Re: (Score:2)
Re:Despite its name (Score:5, Interesting)
Your criticisms are probably quite apt for a 286 processor. Some might be relevant to 686 processors too... But they make no sense in a world that has switched to x86-64.
The proprietary processor wars are over. Alpha and Vax are dead. PA-RISC is dead. MIPS has been relegated to the low-end. SPARC is slowly drowning. And even Itanium's days are severely numbered. Only POWER has kept pace, in fits and starts, and for all the loud press, ARM is only biting at x86's ankles.
x86 has been shown able to not just keep pace but outclass every other architecture. Complain about CISC all you want, but the instruction complexity made it easy to keep bolting on more instructions... From MMX to SSE3 and everything in-between. The complaints about idiosyncrasies are quite important to the 5 x86 ASM programmers out there, and to compiler writers, and to nobody else.
I wouldn't mind a future where MIPS CPUs overtake x64, but any debate about the future of processors ended when AMD skillfully managed the 64-bit transition, and they and Intel killed off all the competition. With CPU prices falling to a pittance, and no heavy computational loads found for the average person, there's no benefit to be had, even in the wildest imagination, of switching the PC world to a different architecture, painful transition or no.
Re: (Score:2)
I'd say something completely different.
Manufacturing technology was always the most important factor in the speed of a CPU. That meant that R&D money translated into faster chips, and all the R&D money was obviously on the market leader, first the x86 and then the amd64 architectures.
Well, manufacturing is still extremely important, but it's taking bigger and bigger investments to deliver the same gains in CPU speed. At the same time, the x86 market is shrinking, and arm64 is exploding. Expect huge
Re: (Score:2)
Except x86 wasn't the most profitable, so it didn't get all the R&D money. Entire companies were built around proprietary lock-in. Without a good CPU, customers won't buy your servers, your OSes, your other software, your support contract, etc. Those proprietary architectures absolutely got lots of R&D, as multi-billion dollar businesses were dep
Re: (Score:2)
Well, OK, margins are always bigger for locked-in products, and at one time total profits were bigger for them too, and a more open standard normally wins over more closed ones. Still, that does not apply to the x86 vs. ARM fight; there is a small aside about ARM being more open, and winning the mobile market because of it, but for servers x86 is open enough.
Re: (Score:2)
I don't disagree, EXCEPT, if AMD disappears, x86 instantly becomes 100% Intel proprietary.
Now, maybe some other company could come along and use AMD's x64 instructions, plus only the Intel x86 bits that aren't under patent or some such, plus some of their own, and then compilers and binaries would only need minor changes... But that's a hell of a lot of work, so I wouldn't assume it'll happen.
Re: (Score:2)
The war is over, and x86 won. But it didn't win because it's the best, but because of economics, marketing, and the quirks of history.
It's the same story everywhere in technology, be it instruction sets, MP3 players, or video media. The winner only needs to be good enough. And x86 has been good enough, hasn't it? Not the best, of course, and by accident some things (like SSE, as you pointed out) have even been made easier.
And yet, the computing world would be better off if we could somehow break our bac
Re: (Score:2)
Re: (Score:2)
No, they aren't relevant anywhere that they've been deprecated in favor of better alternatives. Ranting about the 286 not having an MMU is, of course, not relevant to modern chips. The same goes for complaining that x86 doesn't have enough registers, when x64 certainly does.
Re: (Score:3)
TL;DR, but to paraphrase Churchill: "x86 is the worst form of instruction set, except for all those other forms that have been tried". The rest are dead, Jim. The cruft has been slowly weeded out by extensions and x86-64, and compilers will avoid using poor instructions. The worst are moved to microcode and take up essentially no silicon at all; they're just there so your 8086 software will run unchanged. It's like getting your panties in a bunch over DVORAK; whether or not it's better, QWERTY is close enough t
Re: (Score:2)
That quote is not appropriate; x86 has been markedly inferior to each and every architecture it displaced.
Aside from, you know, minor things like cost and performance.
Re: (Score:2)
I mean, the instruction set has specialized instructions for handling packed decimal! And then there's the near-worthless string search REPNE CMPSB family of instructions. The Boyer-Moore string search algorithm is much faster, and dates back to 1977. Another sad thing is that for some CPUs, the built-in DIV instruction was so slow that sometimes it was faster to do integer division with shifts and subtracts. That's a serious knock on Intel, that they did such a poor job of implementing DIV. A long-time criticism of the x86 architecture has been that it has too few registers, and what it does have is much too specialized. Like, only AX and DX can be used for integer multiplication and division. And BX is for indexing, and CX is for looping (B is for base and C is for count, you know -- it's like the designers took their inspiration from Sesame Street's Cookie Monster and the Count!). This forces a lot of juggling to move data in and out of the few registers that can do the desired operation. This particular problem has been much alleviated by the addition of more registers and shadow registers, but that doesn't address the numerous other problems. Yet another obsolete feature is the CALL and RET and of course the PUSH and POP instructions, because once again they use a stack. Standard thinking 40 years ago.
It was standard on the 8086 (introduced in 1978). The 80386 (1985) is a general-purpose register machine and can use a 0:32 flat memory mode. And modern x64 (2003) has twice as many registers, and the ABI specified SSE for floating point, not the 8087. Also, in 64-bit mode, segment bases and limits for code and data (i.e., any instruction which does not have a segment override prefix) are ignored.
I.e., pretty much all the things you're complaining about have been fixed, and if you look at benchmarks, x64 chips have been
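For reference, here is a minimal sketch of the Boyer-Moore-Horspool variant of the search algorithm mentioned a few comments up. On a mismatch it can skip ahead by up to the whole pattern length, which is why this family of algorithms tends to beat a byte-at-a-time REPNE-style scan on longer patterns. This is purely an illustrative implementation, not anything from the original post; bmh_search is a made-up name.

#include <limits.h>
#include <stddef.h>
#include <string.h>

/* Boyer-Moore-Horspool: return a pointer to the first occurrence of
 * needle in haystack, or NULL if it does not occur. */
const char *bmh_search(const char *haystack, size_t hlen,
                       const char *needle, size_t nlen)
{
    size_t skip[UCHAR_MAX + 1];

    if (nlen == 0)
        return haystack;          /* empty pattern matches at the start */
    if (hlen < nlen)
        return NULL;

    /* Default shift: the full pattern length. */
    for (size_t i = 0; i <= UCHAR_MAX; i++)
        skip[i] = nlen;
    /* Characters that occur in the pattern (except its last byte)
     * get smaller shifts based on their rightmost position. */
    for (size_t i = 0; i + 1 < nlen; i++)
        skip[(unsigned char)needle[i]] = nlen - 1 - i;

    for (size_t pos = 0; pos + nlen <= hlen; ) {
        if (memcmp(haystack + pos, needle, nlen) == 0)
            return haystack + pos;
        /* Shift by the skip value of the byte under the pattern's end. */
        pos += skip[(unsigned char)haystack[pos + nlen - 1]];
    }
    return NULL;
}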
Re: (Score:2)
tl;dr
Something about Kermit - I think that was a networking protocol popular in the 80s.
Re: (Score:2)
You would have to go way out of your way to get x86 binaries on an arm box. Your package manager certainly won't be giving you pre-built packages for the wrong architecture.
Re: (Score:2)
We've come full circle back to the early PC days of interpreted instead of compiled code, so that leaves a space for these things.
Re: (Score:2)
Re: (Score:2)
Most business apps are written in Java (as understood by one or two supported Java server engines, probably not the ones available on ARM unless one of them happens to be Apache Tomcat). Seriously, most of the time even upgrading the Java server to a newer subrelease will break things for any non-trivial application, which is why you have very specific requirements in the support matrix.
Re:Despite its name (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Do you have the numbers on that? I can't find them and assumed the ARMs had better power consumption but at the cost of CPU which is fine for servers.
Unless of course these ARM ones are powerful? Jaguar is the Atom competitor, and recent reviews of the fastest ARMs in the latest iPhone put them in the same league as the Pentium 4s and Athlon XPs of a decade ago, which means ARM is catching up.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It's been scaling since the mid-1980s, and a hell of a lot more gracefully than any other CPU ever made to date, and BTW even Windows NT 4.0 supported ARM
Re: (Score:3)
even Windows NT 4.0 supported ARM
Not as far as I know. Maybe you are thinking about Alpha?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I cannot find any evidence of Windows NT supporting ARM before Windows 8.
I suspect they're confusing it with MIPS.
Re: (Score:2)
If one thinks ARM does not scale, it would be interesting if he would point out why he thinks so. ...
There is no technical reason for ARM not to scale.
Re:Despite its name (Score:4, Insightful)
Why companies might choose ARM really depends not on whether it is faster than Intel CPUs, but on whether it is fast enough for the task at hand and better in other regards such as power consumption, cooling, rack space, etc. Google, Facebook, Amazon et al run enormous data centers running custom boot images and have teams capable of producing images for different architectures. This would seem to be the market that AMD is targeting.
Re: (Score:2)
You are also forgetting that an x86 or an amd64 is a RISC CPU with a layer of CISC hiding the RISC. That layer takes CPU space, power, and resources (both in design and in operation). Going to a simpler CPU design saves silicon, increasing the number of CPUs per wafer and so increasing the profit. Also, a simpler CPU saves internal resources when developing the CPU.
So AMD building ARM CPUs is a way to reduce costs, increase potential profit per CPU and, of course, be ready and test the market demand for ARM
Re: (Score:2)
You are also forgetting that an x86 or an amd64 is a RISC CPU with a layer of CISC hiding the RISC.
You ignorants keep saying this shit, but it's not accurate at all. You have taken a small truth and inflated it into a big lie.
The small truth is that there IS a layer that converts instructions into micro-ops, and that some instructions will in fact generate 3+ micro-ops.
The big bullshit ignorant lie is that you then conclude that ALL instructions are converted into multiple micro-ops. That's just not the case.
It's not "CISC on RISC" -- it's "CISC and RISC" -- the basic technique in the inevitable con
Re: (Score:2)
Call it whatever you want... CISC AND RISC? It can also be small-CISC AND small-CISC and small-CISC AND small-CISC... but that starts to translate into plain RISC.
If you took out that translation layer and used all the available micro-ops, you would call that CPU RISC-like, not CISC-like... Even if sometimes you can only execute one operation, other times you CAN execute several operations, putting it further away from a CISC design and closer to a RISC design.
Anyway, that layer takes away performance and consumes reso
Re:Despite its name (Score:5, Interesting)
ARM scales fine (in another way). Sophie Wilson (one of the ARM's original developers) indeed said that ARM wouldn't be any better today than x86 in terms of power per unit of computing done. However, an advantage ARM has for parallelizable workloads is that you can get more ARM cores onto a given area of silicon. Just the part of an x86 that works out the length of the next instruction is the size of an entire ARM core, so if you want lots of cores this will count for something (for example, the SpiNNaker research project at Manchester University uses absurd numbers of ARM cores).
Re: (Score:2)
AMD is betting that their "APU" designs, where a GPU offloads a lot of heavy lifting from the CPU, will provide good performance and power consumption. Offloading to the GPU is actually more advanced on mobile platforms than on the desktop, so it makes sense.
Re:Despite its name (Score:4, Informative)
Microsoft supports Windows, IIS, SQL Server, and Exchange on ARM. Linux and its FOSS ecosystem support ARM as well. I believe RHEL has an OpenJDK for Java apps to run on ARM servers too.
Besides a few niche apps I really do not see the application compatibility problem.
It is not like these are used to run win32 desktop apps.
Re: (Score:2)
Microsoft supports Windows (kinda), on ARM
FTFY, MS only supports a limited set of the Windows 8 client piece on ARM (specifically Modern UI based apps)
Re: (Score:2)
Who cares about the client API? We're talking about a server, which, depending on the quality of Microsoft's remote admin tools, should run entirely headless. (One would hope you don't *still* need to Remote Desktop into a server in 2014 to change a trivial setting.)
Even if Windows RT dies a miserable death in tablet-land, all the necessary plumbing to run SQL Server, Exchange, Active Directory, etc. should be present.
Re: (Score:2)
Re:Despite its name (Score:4, Interesting)
Has Intel managed to cram some impressive x86 punch into ever lower power envelopes? Yes, yes indeed. Are they the only game in town, period, if you want reasonably speedy x86s at low power? Yes, unfortunately so. And, to the degree that the threat from iPads and the like doesn't keep them in check, prices reflect that.
ARM, by contrast, lacks some punch and a lot of legacy software; but approximately a zillion vendors using undistinguished foundry processes can achieve decent results at low power. Prices reflect this.
So long as ARM remains a looming threat, Intel will price their parts such that they (by virtue of Intel's unquestioned technical prowess) are very, very, compelling. If ARM shows any signs of weakness, it'll be back to the early Pentium M days, when Intel pretended that the 'Pentium 4 Mobile' was good enough, and that a Pentium M deserved a massive price premium. Not fun, at all.
Re: (Score:2)
The Pentium M is a good example of how Intel remains dominant. It takes about 5 years to bring a CPU to market, for any vendor. You start with an approximate transistor and power budget and an estimate of what the market will want in 5-7 years. You then start work. Hopefully, the process technology gets where you need it to be and the market does what you expect. With the Pentium 4, neither happened: they were expecting to get to 10GHz with a thermal envelope of around 60W and didn't, and the market st
Re: (Score:2)
Except you are forgetting about AMD and how they have a total lock on the console space for the next 5 to 7 years which should keep plenty of money flowing towards the Jaguar.
The console market is high volume on tiny margins, so that's probably a pretty small 'plenty of money'.
Re: (Score:2)
Re: (Score:2)
Re:Despite its name (Score:5, Funny)
The question is whether Jaguar itself is really 64-bit, or if it's just the graphics processor that's 64-bit and the rest is 32-bit.
Re:Despite its name (Score:5, Informative)
Where you even came up with an idea like that is beyond me.
Obviously [wikipedia.org]. BTW, isn't tonight a school night, kid?
Re: (Score:2)
Re: (Score:2)
Depends; if it's running Windows Storage Spaces, it'll need more than the 128 GB.
Re: (Score:3)
Re: (Score:3)
"Port buffers:
1 MB plus 65 MB per port on ingress and 80 MB per port on egress for dedicated mode operation
1 MB per port plus 65 MB shared per 4-port group on ingress and 80 MB per 4-port group on egress in shared mode"
That's higher than several other ones I've seen, and even if you're buying a 32-port switch only to use 8 of the ports, you're still nearly an order
Re: (Score:3)
Depends on what exactly the "storage box" is doing with the data.
If it is doing block-level deduplication, then RAM starts to become very important, since you really want to keep the deduplication tables in RAM. The FreeNAS guys recommend 5GB of RAM per terabyte of storage for ZFS deduplication.
If it's serving up the same files repeatedly, then more RAM means a better chance that those files will be cached in memory rather than having to be read from relatively slow storage devices.
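A rough back-of-the-envelope sketch of where that rule of thumb comes from: assume one dedup-table entry per unique record and roughly 320 bytes of RAM per entry (a commonly quoted ballpark for ZFS DDT entries; the real figure depends on the pool and its dedup ratio). With 64 KiB average records this lands right around the 5 GB-per-TB guidance. The numbers below are illustrative assumptions, not measurements.

#include <stdio.h>

int main(void)
{
    const double pool_tib        = 10.0;   /* deduplicated data, in TiB (example) */
    const double avg_record_kib  = 64.0;   /* assumed average record size         */
    const double bytes_per_entry = 320.0;  /* assumed RAM per dedup-table entry   */

    /* Number of unique records = pool size in KiB / average record size in KiB. */
    double records = pool_tib * 1024.0 * 1024.0 * 1024.0 / avg_record_kib;
    double ram_gib = records * bytes_per_entry / (1024.0 * 1024.0 * 1024.0);

    printf("~%.1f GiB of RAM for the dedup table alone\n", ram_gib);
    printf("(i.e., about %.1f GiB per TiB of deduplicated data)\n", ram_gib / pool_tib);
    return 0;
}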
Re: (Score:2)
Re: (Score:2)
The more RAM, the less you use the disks, the less you wear them, and the less power you spend.
Also, you'll want services that use a lot of files to be running on the file server itself.
Not bad (Score:2)
An FX-8150 has a SPECint_rate of 115. I've never seen numbers for an 8350, but it should be around 130-ish, just like an Opteron 6212.
"it's not clear what GPU IP is used" (Score:2)
I would have thought AMD would have a licensing clause as part of the sale of the Imageon (Adreno) to Qualcomm in case they ever decided to re-enter the market.
Competing for 0.46% of server market (Score:2)
The microserver market is still less than half a percent of the server market and most of that is x86, not ARM. That's probably why Calxeda went bust.
And the name: Opteron. (Score:2)
They're calling it the Opteron A. Seriously, AMD? That won't be confusing, when Opteron can now mean ARM or x86_64. AMD's processor naming scheme is already confusing, and they just decided to make it more confusing. Idiots.
Re: (Score:2)
If Intel thought ARM was a good idea long term, they wouldn't have sold their XScale business.
They still have a full ARM license, so there is nothing stopping them from making their own.
they sold it in 2006 (Score:2)
Back then the mobile space hadn't exploded yet.
Re: (Score:2)
iPods had exploded, it was the year before the iPhone.
There's also absolutely nothing stopping them from making ARM CPUs again. They'd also be the best in the industry, because they'd be a process node smaller than the competition.
Re: (Score:2)
I think they started exploding a little later [dailymail.co.uk].
Sorry, couldn't resist.
Re:ARM processing (Score:4, Funny)
Re:ARM processing (Score:5, Funny)
I believe you've read it wrong. Basically, AMD actually traveled back in time to develop the first ARM processor.
No - it's making a food processor for cannibals. The design brief was that you should be able to process a whole arm.
Re: (Score:2)
It's a quip-- of course I know that ARM stands for Amalgamated Regional Militia [wikia.com].
Re: (Score:2)
AMD SkyNet!!!!
Re: (Score:2, Informative)
Power efficiency, which is important in datacenters. Electricity isn't free.
Re: (Score:2)
Re: (Score:2)
Are you sure this is true about ARMv8? I saw they made significant strides in power consumption, which I'm betting relates mostly to the higher frequencies.
Re: (Score:2)
A better instruction set.
Re: (Score:2, Informative)
1) cheap
2) competition
3) custom SoCs
Whether that is enough to work out remains to be seen.
Re: (Score:2)
Power efficiency and market competition.
Re: (Score:3)
Re: (Score:3)
Is this another nail in the coffin for the x86 architecture? Is it realistic to expect Windows/Mac OS X for ARM in their desktop versions in the near future? (Linux is already there). Of course x86 won't suddenly disappear, but may become "legacy". Intel should start moving on the ARM front
No - ARM is still a long way off the high-end x86 chips. At the moment they largely complement each other, with some overlap in the low-to-mid range.
Re: (Score:2)
Is it realistic to expect Windows/Mac OS X for ARM in their desktop versions in the near future?
What I find really curious is that MS did all the work needed to put a full version of Windows on ARM.
Then they turned it into a crippled piece of shit with artificial restrictions. No third-party desktop apps, no ability to join corporate domains, and third-party developers pushed hard into using the Windows Store with its Apple-like fees (AIUI there are ways to load your own Metro apps without using the store, but they aren't exactly user-friendly).
Re: (Score:2)
So that it would compete with the iPad. I mean, that crippled piece of shit (the iPad, which has these same limitations) seems pretty popular among the people who want things like slate devices in the workplace.
That's because it has an Apple logo on the front.
Apple are the Lexus of the computer market, while Microsoft are the Yugo. People only buy Windows products because they want to run Windows programs, and Windows For ReTards doesn't.
Re: (Score:2)
But much more important, what is the benefit of using ARM over x86 here?
I don't know much about servers but ARM chips are currently outselling x86 ships. It makes sense for chip manufacturers to get into the ARM market (unless you're Intel).
Re: (Score:2)
> unless you're Intel
I'm curious about the arguments for and against for Intel...
Re: (Score:2)
Re: (Score:2)
Yes, but I'm curious why it would make less (or more, even) sense for Intel to 'get into the ARM market' than any other chip manufacturer.
I can think of a reason off the top of my head - i.e., it might dilute their stance/marketing message that 'IA is best' or something like that, but I'm not sure if that is really true. In fact, I can imagine Intel saying, 'well, this isn't the first time we've made ARM' and that making people say, 'oh, right... ok then... nothing to see here'.
I'm just curious what other reasons
Re:Pretty low bar... (Score:4, Interesting)
And the point is that this is about servers; it doesn't matter if there are more ARM chips selling... you wouldn't compare a smartphone SoC with a server chip.
I've heard arguments on both sides about server stats for ARM vs. Intel servers. Personally, I hope Intel gets kicked in the teeth, but I have yet to see a knock-down argument that ARM has what it takes to beat them. There will probably be applications for both, where each excels.
Making comparisons now is also somewhat pointless. What's more important are the trajectories of both architectures, and Intel could also try to pull another Itanic, only this time be successful. At that point, attempting to plot trajectories now is pointless because a new Intel architecture is an entirely different trajectory.
Re:Pretty low bar... (Score:4, Funny)
ARM chips are currently outselling x86 ships
That doesn't surprise me at all. It might be interesting to have an ARM-powered x86 ship though.
Re: Until ARM gets PCI, ACPI, UEFI equivalents (Score:2)
Re: (Score:2)
You're not looking at an Android platform, which is for the most part what you are describing. I think you'll find for these platforms there will be standard Linux support in Debian, Red Hat, etc. As I understand it, the standard Linux kernel will run on it (I want to get my hands on one when they're available, not because I'm excited about ARM servers, but because I have high-performance embedded applications in mind).
On the desktop and workstations, to get more than basic functionality, most video car
Re: (Score:2)