AMD Licenses 64-bit Processor Design From ARM 213
angry tapir writes "AMD has announced it will sell ARM-based server processors in 2014, ending its exclusive commitment to the x86 architecture and adding a new dimension to its decades-old battle with Intel. AMD will license a 64-bit processor design from ARM and combine it with the Freedom Fabric interconnect technology it acquired when it bought SeaMicro earlier this year."
Comment removed (Score:5, Informative)
Re:Oh snap. (Score:4, Informative)
An overpriced, slow server? ARM will grow to dominate the market the same way Intel's slow and overpriced servers became commonplace.
Well we'd try something else, but it turns out monkeys with notepads and crayons are even slower (and more expensive).
Biodegradable, though.
Re:AMD might stand a chance (Score:3, Informative)
Re:AMD might stand a chance (Score:3, Informative)
AMD no longer has a fab of their own, as of two years ago(?). I believe they are currently using TSMC for most of their production.
Re:The fat lady is singing (Score:4, Informative)
Re:Intel (Score:4, Informative)
Who wants to make bets on who is going to win this race? AMD has won all of the previous ones.
I assume you are joking, right? It's not a sprint; it's a marathon. Being first to market means nothing; winning the market is what counts. And Intel is crushing the 64-bit processor market right now.
Re:The fat lady is singing (Score:2, Informative)
I completely agree (although the Cyrix guys weren't part of AMD, if I recall correctly; they're now part of Via).
I don't get why Via and AMD don't collaborate. Via seems to have decent CPUs and some pretty bright sparks in its CPU design division, but it uses fucking awful graphics chipsets. Or Via and Nvidia, for that matter.
Originally designed for mobile phones??? (Score:5, Informative)
ARM architectures are considered more energy-efficient for some workloads because they were originally designed for mobile phones and consume less power.
Fuck no. The ARM1 was released in 1987 as a coprocessor for Acorn's BBC Micro. They were designed for low power operation because the engineers were impressed with the 6502's efficiency. There weren't any significant mobile phone deployments until 18 years later in 2005.
Re:AMD might stand a chance (Score:4, Informative)
The Thubans were good, but everything based on Bulldozer just blows through power while having terrible IPC, thanks to the shared integer and floating-point units. If they were honest, the "modules" would be treated as single cores with hardware-assisted hyperthreading, because the benchmarks show that is a hell of a lot closer to what they are than true cores.
Errrm, all of the integer units are dedicated, and the shared floating-point units still give each core as many floating-point resources as the previous generation of AMD chips, even if every single core is using floating point 100% of the time. If AMD hadn't screwed up on the engineering side, it'd be a really great design.
Re:AMD might stand a chance (Score:4, Informative)
Re:Welcome to the club (Score:4, Informative)
Re:Welcome to the club (Score:5, Informative)
I am trying to grasp, somewhat desperately, the events that must have taken place inside AMD headquarters when the CPU design team said it wanted to do hyper-threading. Having seen how badly Intel got knocked around when it tried it, and given that, for the price of duplicating a fair amount of the CPU, you are still only occasionally eking out a slight performance gain (and sometimes a performance loss), their strategy doesn't make sense.
Perhaps they looked at IBM's or Sun's implementation of SMT instead. Adding a second context to the POWER series added about 10% to the die area and gave around a 50% speedup. If you have multithreaded workloads (especially on a server) then it can significantly improve throughput for two very simple reasons. The first is that when one context has a cache miss, the CPU doesn't sit idle; it can keep issuing instructions from the other context. The second is that it lowers branch misprediction penalties, because if you're issuing instructions alternately from two contexts, the instruction that a branch depends on can get a lot closer to the end of the pipeline before you need to make the prediction. This also helps with various other hazards, so you don't need as much out-of-order execution logic to get the same throughput.
Re:Welcome to the club (Score:5, Informative)
Also, there is nothing about ARM that inherently makes it more power-efficient at the same performance level than other RISC CPUs, be it SPARC, POWER, MIPS and so on.
I can think of several things. For Thumb-2, there is instruction density. MIPS16 does about as well as Thumb-1, but it is a massive pain to work with. AArch64 doesn't (yet) have a Thumb-3 encoding, but one will almost certainly appear after ARM has done a lot of profiling of the kinds of instruction that compilers like to generate.

Even in ARM mode, the big win over the other RISC architectures is that it has fairly complex addressing modes, so you can do things like structure and array offset calculations in one instruction on ARM versus 3-4 on MIPS. For AArch32, you also have predicated instructions. These make a big difference on a very low-power chip, because you don't need any branches for small conditionals. For AArch64, most of these are gone, but there is still a predicated move, which is a very powerful version of a select instruction and lets you do mostly the same things.

With AArch32 you have store- and load-multiple instructions, which basically let you do all of your register spills and reloads in a single instruction (the instruction takes a mask of the registers to save, the register to use as the base, and whether to post- or pre-increment or decrement it as two flags). With AArch64, they replaced this with a store-pair instruction, which can store two registers and has the advantage of being simpler to implement (a fixed number of cycles to execute).
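The array-offset point can be made concrete with a toy model (Python standing in for assembly; the instruction spellings in the comments are illustrative, and exact compiler output will vary). "Memory" is a byte-addressed list, and we load a 4-byte word at base + idx*4:

```python
def arm_load(mem, base, idx):
    # AArch32 can fold the scaled-index calculation into the load itself:
    #   LDR r0, [r1, r2, LSL #2]    -- one instruction
    return mem[base + (idx << 2)]

def mips_load(mem, base, idx):
    # A classic MIPS sequence needs a separate shift and add first:
    t0 = idx << 2        #   SLL  t0, t1, 2
    t0 = t0 + base       #   ADDU t0, t0, a0
    return mem[t0]       #   LW   v0, 0(t0)
```

Both compute the same address and return the same value; the point is purely the one-versus-three instruction count for a common compiler-generated pattern.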
Re:Originally designed for mobile phones??? (Score:2, Informative)
Almost. The first ARM1 was produced in 1985. It was used, in the BBC Micro coprocessor, to design the ARM2. The first ARM2 silicon was produced in 1986, and the Archimedes computers, which ran on the ARM2, were released in 1987. I've still got my A310.
But yeah, it had nothing to do with mobile phones.