AMD Licenses 64-bit Processor Design From ARM
angry tapir writes "AMD has announced it will sell ARM-based server processors in 2014, ending its exclusive commitment to the x86 architecture and adding a new dimension to its decades-old battle with Intel. AMD will license a 64-bit processor design from ARM and combine it with the Freedom Fabric interconnect technology it acquired when it bought SeaMicro earlier this year."
The fat lady is singing (Score:2, Interesting)
Comment removed (Score:5, Informative)
Re: (Score:2, Informative)
I completely agree (although the Cyrix guys weren't a part of AMD if I recall correctly, they're now Via).
I don't get why Via and AMD don't do any collaboration. Via seems to have decent CPUs and some pretty bright sparks in their CPU design division but they use fucking awful graphics chipsets. Or Via and Nvidia for that matter.
Re: (Score:2)
It's been a while, but wasn't VIA responsible for the really screwy AMD chipsets that used to make people curse under their breath?
Re:The fat lady is singing (Score:4, Insightful)
Indeed. The one order the CEO can give to save the company is this: "Magical turn-arounds for companies who have been f*cked only happen in textbooks and fairy-tales; as such, all resources for CPU design will go into creating a Phenom III with 12 cores and PCI-Express 3.0 and an Opteron design which employs liquid cooling (for the short term), as we are going to give it a major MHz boost on top of the extra cores / cache we are going to staple on."
Getting involved in the already overgrown ARM market shows nothing but lack of vision. "We're going where everyone else is going, that'll be profitable!" You are going to be *that* guy who shows up late to the party, and wonders why all the booze is gone. Seriously, how do you mismanage stuff this badly? You're a CPU company, and you come up with the brilliant plan that despite being a major competitor in the x86 market, you're going to fix things by buying an oversubscribed design for a CPU in a market that...recursion error.
Think of it as being like Ford not using its own resources to think up a new car design, but paying Honda to license it the design for the Civic. Things are either absolutely atrocious (like "AMD's stock should be worth a Haitian penny right now" bad and we just haven't been told anything), or somebody doesn't know what he's doing. Go get the old guys your predecessor fired, and bring them back for more money. Find the DEC guys, and offer stock options if you have to, to get them on board. Then follow their advice. After a year or two of punishment, AMD will be back on firm ground again.
Re: (Score:3)
Really, what was your point again?
Re: (Score:3)
Or how they collaborated with Audi? The Mazda 3, Focus, and S40 shared the same platform.
You mean Volvo. Germans don't share the good stuff. Only Swedes do that.
Re: (Score:2)
Re:The fat lady is singing (Score:4, Informative)
AMD might stand a chance (Score:2)
Re:AMD might stand a chance (Score:4, Insightful)
If AMD can push their engineering into ARM quickly, they might not only stand a chance but they might dominate fairly quickly, I'd think. They're not on par with Intel on die size, but IIRC they're pretty close - that knowledge is certainly applicable.
Remember, they've got good GPUs already. A lot of what they tried to do with the Mobility and later generations was very "ARM-like" already; it just didn't exactly work due to x86 limitations. I'd think they've got a pretty good chance overall. (If anything, it's a big market. Tegra# are really pushing NVidia along, after all...)
Re: (Score:3)
Note the article says ARM *server* processors. In that market, GPUs are totally irrelevant, power usage is secondary to performance, and price of the CPU is a distant third.
Any ARM CPU is at least an order of magnitude behind the current x86-64 server CPUs. Not to mention the additional work required to support multiple ARM CPUs on a motherboard, and even convince the major server manufacturers to build an ARM-based server in the first place. Good luck AMD, though you won't need it since even luck won't
Re: (Score:3)
Note the article says ARM *server* processors. In that market, GPUs are totally irrelevant, power usage is secondary to performance, and price of the CPU is a distant third.
That is how it has been, but I'm wondering if this is not a strategic move on their part. Perhaps they are thinking of large clusters of low-power ARM cores that kick in as the workload demands, with some kind of clever way of sharing resources (Freedom Fabric?). With the global political landscape the way it is, that could be an important point of difference.
Reducing energy consumption is now the "in thing" and will weigh more and more in purchasing decisions as financial incentives to reduce carbon emissions grow.
Re: (Score:3)
That is how it has been, but I'm wondering if this is not a strategic move on their part. Perhaps they are thinking of large clusters of low-power ARM cores that kick in as the workload demands, with some kind of clever way of sharing resources (Freedom Fabric?). ...
If that server can idle at less than a Watt and then ramp up in small increments as demand requires, that might also yield an overall advantage.
After 20 years of Wintel I finally caved to try a Mac. The new MBP Retina is insanely fast CPU-wise for t
Re: (Score:2)
You're comparing old technology to new technology and proclaiming the new technology to be better. I am shocked.
Re: (Score:3)
This is almost certainly for a SeaMicro-based architecture. The GPU might be mildly irrelevant in this market today but will continue to gain importance as more tasks transition to being executable via OpenCL & its cousins.
What you are looking at is a small box densely packed with lots of cores. Another flavor will likely come as a box with a few weak ARM CPUs used to control a large quantity of GPUs for HPC applications.
The thing that will make or break ARM in a SeaMicro style chassis is whether they
Re: (Score:3)
Note the article says ARM *server* processors. In that market, GPUs are totally irrelevant, power usage is secondary to performance
Begging the question. Is power usage actually secondary? Not for many kinds of workloads, which are storage-intensive. For SOME servers it doesn't make sense. For OTHER servers, it clearly makes sense; people are already using ARM-based servers. Perhaps you should consider a little self-education.
Re: (Score:2)
They're not on par with Intel on die size
This statement may have been true... back when they actually owned some fabs. :)
Comment removed (Score:5, Interesting)
Re: (Score:3, Informative)
Re:AMD might stand a chance (Score:5, Interesting)
Maybe the new direction is going to be heterogeneous computing. We're already seeing AMD and Intel combine x86 and a GPU on one die; maybe AMD will try to combine everything and have a couple of ARM cores for low-power tasks, a couple of Bulldozer modules for more intensive tasks, all combined with their GPU.
Re: (Score:2)
Premature Optimization in translating x86-ARM (Score:3)
I would think that taking pre-
Re: (Score:2)
How would that work in a situation where you have both x86 and ARM cores on the same system?
From what I've read, I definitely get the impression that AMD is doing some kind of modular system, whereby their APU cores can be coupled with either ARM or x86 variants. I'm not sure if that also includes ARM+x86 coupling, but that seems to be the point of "fabric" - it's a universal interconnect of some kind. However, how would that work in real life? What would be the advantage of it? ARM for seriously low power,
re: how would that work (Score:2)
1 - x86 code
2 - ARM code
Then, depending on power-resource utilization requests by the OS or directly by the user, the executing instance of the application can be migrated from one of the types of cores (eg x86 core) to one of the other types of cores (eg ARM-cores) by copying over (a) the current instance's variable values, (b) th
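For illustration only, here is a minimal sketch of that scheme in C, under the assumption that the application ships as two builds (one per ISA) that agree on a pointer-free, same-endianness state layout. Every name in it (app_state_t, app.ckpt, ./app-arm) is invented for the example:

    /* Hypothetical sketch of the migration scheme described above: the same
     * program is built once for x86 and once for ARM, sharing a plain-data
     * state layout, so "migration" is: checkpoint, then exec the sibling
     * binary, which restores the checkpoint and resumes. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <unistd.h>

    /* State must be fixed-layout, pointer-free data (and both builds must
     * agree on endianness) so either ISA can read it back. */
    typedef struct {
        uint64_t iteration;     /* where the main loop left off */
        double   accumulator;   /* the current instance's variable values */
    } app_state_t;

    static void checkpoint(const app_state_t *s, const char *path) {
        FILE *f = fopen(path, "wb");
        if (!f) { perror("fopen"); exit(1); }
        fwrite(s, sizeof *s, 1, f);
        fclose(f);
    }

    static int restore(app_state_t *s, const char *path) {
        FILE *f = fopen(path, "rb");
        if (!f) return 0;                     /* no checkpoint: fresh start */
        if (fread(s, sizeof *s, 1, f) != 1) { fclose(f); return 0; }
        fclose(f);
        return 1;
    }

    int main(void) {
        app_state_t st = {0, 0.0};
        restore(&st, "app.ckpt");             /* resume if migrated to us */

        for (; st.iteration < 1000000; st.iteration++) {
            st.accumulator += 1.0 / (double)(st.iteration + 1);

            if (st.iteration == 500000) {     /* e.g. OS asks us to move to
                                                 the low-power core type */
                st.iteration++;               /* resume *after* this step */
                checkpoint(&st, "app.ckpt");
                /* "./app-arm" is the hypothetical ARM build of this program */
                execl("./app-arm", "app-arm", (char *)NULL);
                perror("execl");              /* only reached on failure */
                return 1;
            }
        }
        printf("done: %f\n", st.accumulator);
        return 0;
    }

A real implementation would also have to carry over open file descriptors, thread contexts, and in-flight syscalls, which is where the hard part lives.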
Re: (Score:2)
Re: (Score:2)
I just find it ironic that Apple could very well be going back to RISC after not even a decade of being on x86. Even more ironic given the work Apple contributed with Acorn back in 1990, when ARM was spun out as a joint venture.
A MacBook with an ARM chip wouldn't surprise me. After all, the iPad, iPhone, and iPod all run on ARM chips already.
But then again I bought this MBP earlier this year as well as parallels and Windows 7 Pro because I do enough development work on multiple platforms that i do need to test against windows as well as
NOTironic that Apple could very well be going back (Score:4, Insightful)
And considering what they'd been doing with Pink / Taligent in keeping a parallel universe of development of their codebase always going on the x86 architecture while publicly showing only PowerPC development, they've probably got a skunkworks team somewhere that's already been running ARM-based iOS or even ARM-based OS X for a year if not for years...
sorry, I typo-d my statement (Score:2)
(poetically getting the theme-scheme a+d and c+b to complement a+b and c+d)
Re: (Score:2)
Given that Intel is trying to wind down its StrongARM line it inherited from DEC, AMD may see the ARM line as a place where it can finally be top dog
Intel isn't trying to wind this line down; they sold it outright to Marvell back in 2006. Even then, it was pretty anaemic. XScale was the P4 of the ARM world: twice the clock speed of everyone else but much lower instructions-per-clock. It's an ARMv5 implementation, which seems painfully archaic today (especially given the lack of an FPU, which even most ARMv6 implementations have).
Re: (Score:2)
StrongARM? Didn't Intel sell that to Marvell years ago?
Re: (Score:2)
Given that Intel is trying to wind down its StrongARM line it inherited from DEC, AMD may see the ARM line as a place where it can finally be top dog. It has the expertise to give Broadcom, TI and Samsung a run for their money.
Unless AMD has hired some of the bright names of ARM (and maybe they have, I've not been following such) they have essentially zero chance to come up to speed on ARM quickly enough to challenge any of the entrenched players any time soon.
Taking a really big drink from the hypothetical Kool-Aid, I could see ARM64 processors being used as x86-64 replacements in palmtops and laptops
Stop seeing that. It's not really plausible. There's no reason to do it, either, and there never will be unless ARM kicks x86's ass sometime in the nebulous future.
Re:AMD might stand a chance (Score:4, Informative)
Re: (Score:2)
I'm sure Intel could price AMD out of existence if they wanted to.
I'm not familiar with Intel's budget, but that makes me wonder whether Intel can exist at all without being able to throw around massive piles of cash.
Re: (Score:2, Interesting)
Re:AMD might stand a chance (Score:4, Informative)
The Thubans were good, but everything based on Bulldozer just blows through power while having terrible IPC, thanks to having shared integer and floating point units. If they were to be honest the "modules" would be treated as single cores with hardware assisted hyperthreading, because the benches show that is a hell of a lot closer to what they are than to true cores.
Errrm, all of the integer units are dedicated, and the shared floating point unit still gives each core as many floating-point resources as on the previous generation of AMD chips, even if every single core is using floating point 100% of the time. If AMD hadn't screwed up on the engineering side, it'd be a really great design.
Re: (Score:2)
If AMD hadn't screwed up on the engineering side, it'd be a really great design.
I thought Bulldozer's reason to exist was to enable high clock rates. That doesn't seem to have panned out. Is that what you're talking about, or something else?
Re: (Score:2)
Not without the top-level chip designers their previous CEO nuked. They may be the Chicago Bulls in name, but the player lineup has changed.
Re: (Score:3, Informative)
AMD no longer has a fab of their own; the fabs were spun off as GlobalFoundries in 2009. I believe they currently split production between GlobalFoundries (CPUs) and TSMC (GPUs).
Re: (Score:2)
Then they should take some of their idle Sales / Marketing / Business guys, have them fly over to {country}, and let them spend some time charming the other foundries into not only giving them the capacity they need, but doing so at an excellent price. At the very least, it will give them something to do.
Fingers Crossed! (Score:2)
I'm hoping AMD does something to stay relevant. If they were to leave the market (or effectively leave the market by selling super low volume), then there's nothing to keep Intel honest.
Intel (Score:2)
Re: (Score:2, Interesting)
I wouldn't be surprised at all if Intel had a team working on ARM ISA designs as a contingency plan, but I highly doubt they'd transition to ARM unless x86 was facing virtual annihilation. They're well aware that if they start releasing ARM chips, the whole industry will much more quickly transition away from x86. There's no way they would willingly destroy their extremely profitable, high-margin x86 business.
Re: (Score:3, Interesting)
Intel will be doing the same thing in 3... 2... 1... Just like missing the 64-bit era with Itanium, it is missing the mobile era with Atom.
What are you even talking about? Since when did Intel miss the "64 bit era" as you put it? Sure, Itanium was a failure and Intel sunk billions of dollars trying to make it work. However, Intel could afford that mistake and still continue chugging along. As things stand today, Intel absolutely dominates the 64 bit market. In fact, except for Intel, AMD, and the IBM Power chips, there is no other game in town as far as 64 bit is concerned, and in this market, Intel probably has 80% or 90% market share, and ha
Re: (Score:3)
Re: (Score:2)
Intel went for IA-64 and it was a complete failure. Ultimately, it was forced to adopt the AMD-64 instruction set. That's what I mean -- Intel missed the boat and the 64-bit instruction set it uses isn't even its own. Since adopting AMD-64, it's dominated the market space. If it wants to get anywhere in the mobile space, it will need to fold its current Atom strategy and go all-out ARM. Until it does that, it's Itanium all over again.
Okay, I get what you were trying to say earlier. Fair point too - because AMD64 was a vastly superior design and more importantly, a vastly more pragmatic design compared to what Intel was trying to shove down people's throats. Goes to show what hubris can do.
I'm not 100% sold on your recommendation of Intel dropping Atom and adopting ARM though. x86 is still very attractive to corporate clients and others who value legacy support and enterprise support. Business upgrade cycles are often very slow, and the
Re: (Score:2)
One of the bugbears of the ARM platform is the absence of mature, complete FOSS drivers for the embedded GPUs, e.g. PowerVR (proprietary), Mali (lima), Tegra (proprietary), Adreno (Freedreno).
I could see Intel going the other way - keeping ARM at a distance but licensing its HD Graphics GPU to SoC manufacturers at minimal cost on the condition that they use Intel's factories to fabricate them.
(Just speculating - have no idea what % of a Sandy Bridge CPU's power draw is due to the graphics core(s))
Re:Intel (Score:4, Informative)
Who wants to make bets on who is going to win this race? AMD has won all of the previous ones.
I assume you are joking, right? It's not a sprint, it's a marathon. Being first to market means nothing, it's winning the market. And Intel is crushing the 64-bit processor market right now.
Future for AMD (Score:2)
Or the past (Score:2)
Consider that they used to sell Imageons (ARM CPU + ATI GPU), which they sold off.
Let's hope this time it works out for them. Power optimization is now important, unlike in the Imageon days.
Back to Imageon? (Score:2)
Are they bringing back the Imageon line now? (ARM + ATI GPU on die?)
use cases? (Score:3)
When it comes to servers, I use comparatively few (a small lab with a few racks' worth used for research projects) at work, so I'm wondering what sort of tasks these would be useful for? It sounds like they'll run RHEL and other Linux distributions, but even after looking at the second slide in this [hothardware.com] presentation, it's unclear to me what the advantage would be for a small business, or, in my case, a small department in a larger organization.
Is this new CPU/server line intended only for the enterprise? If so, what would the "trickle down effect" be for small groups like my own? Also, why would someone want to throw out their investment in existing hardware (including whatever talent they might have at programming and maintaining said hardware) for a design that's relatively proprietary?
Regards,
Aryeh Goretsky
Re: (Score:2)
Yeah, I'm guessing this is geared more towards people who do the whole vendor support thing and have a handful of people (or at least one) dedicated to maintaining specific equipment (eg. linux servers vs. switches). Homogeneous is key, but high thread count will also push the ARM advantage here, because you could fit (say) 8 of these small systems with multiple CPUs each in a single 2U without much issue, and still leverage your SAN storage.
You'll probably see them in low-end "server" devices too, I imagine
Re: (Score:2)
For IT shops with fewer than 24 typical servers, what AMD might do in 2014 is not relevant. You would not be interested in trying this thing until it was field proven for three years. Even if it arrived on time (not AMD's strong suit) and it was nerdvana, that's 2017 before you're racking it. More likely the first version is quirky and your pilot starts two years later. But let's say 2017, for giggles. A typical 2 socket rack server can now be configured with 32 2.7GHz cores, 768 GB RAM, and terabytes
Re: (Score:2)
If AMD is really going ARM for the server market, it's desperately clutching at straws.
It has to be some other market or they are committing suicide.
Can we see (Score:5, Interesting)
I can see this being a remarkable selling point for Windows devices if both ARM and x86 code can execute on the same device without emulation.
Re: (Score:2)
Re: (Score:2)
Actually I see this eventually rendering "for X architecture..." irrelevant. The more important "for what OS" will be dealt with using VMs. The need for more cycles to pull all this off competitively will mean we finally find a use for all those "solutions looking for a problem" engineers have been dreaming up.
Re: (Score:2)
I disagree. I think that would be foolish for customers. It would be great for AMD though, because people would be upgrading left and right (or overspecifying) to make sure they don't wind up limited by one or the other.
It would make more sense for AMD to finally invent a system which can take asymmetric processors linked via HyperTransport, so that you can plug one amd64 or ARM processor and then however many amd64 or ARM processors you like after that.
Re: (Score:2)
Why not x86-64 and ARMv8 on the same *core*?
Every x86 chip on the market is some secret, internal RISC design with an x86 translator in front of it. I do not believe it would be terribly difficult to redesign the translator unit to accept ARM code as well, although getting it to perform as well as x86 does may be challenging. With a decent design and some clever firmware, you could probably make it boot as either ARM or x86 depending only on a BIOS setting, and change cores on the fly.
This is interesting (Score:5, Interesting)
x86/AMD64 is overkill for many server functions.
It will be interesting to see if chips appear optimized for different functions.
For example hardware SQL accelerators or massive I/O for file serving.
Since many hardware RAID controllers are nothing but ARM cores anyway, it would be interesting to see multiple cores, some used as RAID controllers and some more advanced cores for the OS and file serving, with a 10Gb LAN controller all on one chip.
Add power, drives, and RAM and have a killer file server.
Re: (Score:3)
It's overkill if you have precisely one hardware server per function. That's becoming increasingly rare. Nowadays, a "server" is most often a VM that doesn't need exclusive access to the physical CPU.
Re: (Score:2)
Re: (Score:2)
Really? I have an IBM Storwize and SVC that beg to differ.
ARM64 + Hypertransport = Interesting Outlook (Score:5, Interesting)
In fact AMD has an amazing technology portfolio. With its graphics chips (the ATI division), HyperTransport, and AMD64, we can expect some interesting developments.
Re:ARM64 + Hypertransport = Interesting Outlook (Score:5, Interesting)
I remember when AMD bought ATI many years ago... everybody (including us Slashdot posters) was saying what a bone-headed waste of money that was.
Now everybody's saying AMD is really fucked except for one bright spot which is its graphics division....
Re: (Score:2)
These are not contradictory statements.
AMD could be really fucked today because they put too much effort into graphics, and not enough effort into CPUs. Only time will tell if their graphics will save them. I suspect they won't unless they learn how to write drivers that work.
Originally designed for mobile phones??? (Score:5, Informative)
ARM architectures are considered more energy-efficient for some workloads because they were originally designed for mobile phones and consume less power.
Fuck no. The ARM1 was released in 1987 as a coprocessor for Acorn's BBC Micro. They were designed for low power operation because the engineers were impressed with the 6502's efficiency. There weren't any significant mobile phone deployments until about a decade later.
Re: (Score:2, Informative)
Almost. The first ARM1 was produced in 1985. This was used in a BBC Micro coprocessor to design the ARM2. The first ARM2 silicon was produced in 1986, and the Archimedes computers, which ran on the ARM2, were released in 1987. I've still got my A310.
But yeah, it had nothing to do with mobile phones.
Re:Originally designed for mobile phones??? (Score:4, Interesting)
They were designed for low power operation because the engineers were impressed with the 6502's efficiency.
Nope. They were designed for low power so that they could use cheap plastic packaging instead of expensive ceramic packaging.
Re: (Score:2)
I read an article a while ago that stated that the ARM processors were so efficient by accident. They started from scratch with the design, not having the experience of Motorola, IBM, Intel and AMD of what a fully-fledged processor requires, and so it became a very simple one. This happens to be an important element for power efficiency.
Re: (Score:3)
Much like everyone else says, they were designed more for simplicity than anything else, and extremely low power consumption was an unintended side effect. Of course they were going for low power so they could use the cheap housings as mentioned above, but the frugal amounts it did actually eat were unintentional.
There was an article on The Register some months ago on ARM development history (can't seem to find it now), and if it's to be believed they were investigating a series of mysterious crashes in the
so now we know... (Score:2)
So now we know what Jim Keller is back at AMD to do...
MIPS64 (Score:2)
Re: (Score:2)
The market chose ARM over MIPS because MIPS stalled. You can still buy a SuperH core, but why would you do that? Same for MIPS. Unless you really don't need performance and you're getting a really great deal, it's a bit difficult to fathom. All the interest is in ARM, so that's where the talent is developing. And since newer stuff tends to be built on a lower process it's not just faster, but also lower-power which is what everyone and their mom (literally) is demanding today.
MIPS didn't keep up with ARM. U
ARM will succeed for servers (Score:3)
When you're on the client side of the network, it makes no difference what's on the server side. It could be a giant room full of hamsters and abacuses. As long as the results come back fast and correct, you shouldn't care. That's the way the internet was designed. Heck, that's why it's called the Inter-Net: inter-networking between different processor platforms.
Intel is a one-trick pony. Besides the evolution of the x86, they have never fielded an architecture that had any staying power. Anyone remember the i432 or the i860? The current standard x86-64 architecture was defined by AMD, not Intel. Itanium got the "Itanic" moniker because it was accurate. The only reason that the Itanium is alive is because of a civil suit by HP.
What Intel is really, really good at is putting gates on silicon. They did not succeed on architectural grounds, but by having the best implementation of a clunky architecture. They were always able to succeed by using more gates at a lower price than the competition.
ARM is an architectural rival to x86. Intel won with the x86 because they could cram more gates onto silicon. They lose this advantage against ARM because ARM requires less silicon to do the same job. This translates to lower power usage, which is getting more and more important as time goes on. Other foundries can compete even if they are trailing Intel in process capabilities, and they want to be in this market. As does AMD.
ARM also benefits from being the dominant architecture for the smartphone/tablet sector, which means that there is a large community of developers and all the software one could ever want. An ARM-centric ecosystem exists, and it applies to servers as well as client software. Linux/GCC/MySQL are happy on ARM, so any open source server software is easily available. And Microsoft has shown they are ready to run on ARM as well. It's not a risk from a software point of view.
It's not that Intel/AMD x86 is going away, but ARM will also be a player. And we should all be glad about it, because AMD being less competitive with Intel is the road to monopoly, which means increased prices and a stagnant CPU sector.
Re: (Score:2)
I once saw a 1U rack that contained something like 16 ARM boards (the entire board, networked together with a switch, powered from individual cables, with disk interfaces over some custom central channel). It cost less, used less power, and did more in the same amount of space. It was a bit homebrew-esque (despite being a professional product), but the advantages were rife.
I was sorely tempted to use it just because, as you say, server-side doesn't matter for most things. And with that sort of basic setu
Re: (Score:2)
£500 to the first person to supply a 1U filled to the brim with Raspberry Pis (or equivalent)
I think the Parallella board [kickstarter.com] would be perfect for this, much better than the Raspberry Pi.
Re: (Score:2)
Not really.
1) Kickstarter. Sign of a project doomed to failure when it concerns hardware, really. Especially when they are talking about producing hardware boards with dozens of cores apiece, en masse, from a few hundred thousand dollars.
2) No OS support - it seems to be a number-cruncher with an ARM-controller, not a generic computer with lots of software already ported. Nobody will rewrite their software to take advantage of it unless it's MADLY to their advantage (i.e. number crunchers, not generic machines).
Re: (Score:2)
Not really.
1) Kickstarter. Sign of a project doomed to failure when it concerns hardware, really. Especially when they are talking about producing hardware boards with dozens of cores apiece, en masse, from a few hundred thousand dollars.
That's just prejudice on your part. There are many amateurs doing Kickstarter projects they don't fully understand, but there are also some professionally done ones on there. The hard task of backing something there is to find out which of these two types the creator is.
2) No OS support - it seems to be a number-cruncher with an ARM-controller, not a generic computer with lots of software already ported. Nobody will rewrite their software to take advantage of it unless it's MADLY to their advantage (i.e. number crunchers, not generic machines).
It supports OpenCL, which is the standard for this kind of thing across many device types. Of course, if you're talking about web servers and databases, you might have a problem.
Actually, I can't think of anything in a web server that could
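On the OpenCL point, here is a hedged sketch (generic host code, nothing Parallella-specific) of the standard OpenCL C API discovery loop that portable number-crunching code runs first. This is the sense in which software wouldn't need a per-device rewrite: any device with a conformant OpenCL driver simply shows up in the list and can run the same kernels.

    /* Minimal OpenCL device-discovery sketch. Build: gcc ocl.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || !nplat) {
            fprintf(stderr, "no OpenCL platforms found\n");
            return 1;
        }
        for (cl_uint p = 0; p < nplat; p++) {
            char name[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                              sizeof name, name, NULL);
            printf("platform: %s\n", name);

            cl_device_id devs[8];
            cl_uint ndev = 0;
            /* CL_DEVICE_TYPE_ALL picks up CPUs, GPUs, and accelerators */
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                               8, devs, &ndev) != CL_SUCCESS)
                continue;
            for (cl_uint d = 0; d < ndev; d++) {
                char dname[256];
                cl_uint units = 0;
                clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                                sizeof dname, dname, NULL);
                clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                                sizeof units, &units, NULL);
                printf("  device: %s (%u compute units)\n", dname, units);
            }
        }
        return 0;
    }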
Probably: they want to do both x86 and ARM? (Score:2)
Welcome to the club (Score:2, Interesting)
Welcome to the club, AMD !
Unlike the x86 community, there are so many more competitors in the ARM camp - companies such as TI and Broadcom from the USA, Samsung from Korea, Hitachi from Japan, and Allwinner from China, which produces $7 ARM-based SoCs.
AMD, you can't even compete against ONE company in the x86 arena - Intel.
Are you sure you can compete against the whole slew of them this time?
Re:Welcome to the club (Score:5, Insightful)
Your facts are off in two ways. First, going up against one big monopolistic company is a lot harder than going up against a lot of small ones. (Do you think it's easier to fight an elephant or a bunch of guys who are also fighting each other?) Second, they've managed to survive in the x86 market for 30 years. I think that counts as competing.
Re: (Score:3)
Second, they've managed to survive in the x86 market for 30 years. I think that counts as competing.
The OP has a point.
AMD has abandoned the high end CPU market to Intel. [anandtech.com]
AMD's brand new, 8-core, flagship CPU, is competing with Intel's 4-core i5 chip.
And despite being clocked higher, it loses to the i5 in almost(?) every single-core test.
I know AMD pioneered the multi-core field, but they've gotten left behind.
Re:Welcome to the club (Score:5, Insightful)
Indeed. I am trying to grasp, somewhat desperately, the events that must have taken place inside AMD headquarters when the CPU design team said they wanted to do hyper-threading. Having seen how badly Intel got knocked around when they did it, and the fact that for the price of duplicating a fair amount of the CPU, you are still only occasionally eking out a slight performance gain...and sometimes, a performance loss, their strategy doesn't make sense. What was so hard about welding two Phenom II X6's together, using the HyperTransport links already present in the CPU design, and calling it a day? Knowing full well that Intel wouldn't be able to compete with that design (they've been core-averse compared to AMD), being happy that all of the cores were full cores (who'd complain?), and that they'd be a hot item for system builders everywhere. Sure, some of the gaming websites like to barf about how single-threaded performance still matters, on some games that no one cares about (the GPU, of course, mattering a lot more than the single-threaded performance of a CPU here), but to take the advantage of having 6 full cores, and trade it in for 8 half-cores...was this some idiotic attempt at market segmentation? Did some moron in a suit have a brain fart, and think "we can't have 12-core Phenom IIIs, it will cannibalize our Opteron server sales"? Fire his ass, and cut the strings on his golden parachute on the way out.
For the life of me, I just can't fathom how they took a major market advantage, with the CPU design practically on the design table already, with a popular and critically acclaimed design, and decided that f*ck it, we're doing so well here, let's go for a lobotomy, and compete on Intel's turf with an unproven half-assed design. Let's go from a full-core design that everyone compliments, to some terrible half-core design that nearly killed Intel at some point. Seriously, who is commanding AMD such that they were in their nappies when the whole Intel hyper-threading business was going down (which every half-decent tech knows about), and how did they get boardroom approval?
The proper response, of course, was not the Business School of Failure's attempt at mandating some perverse product differentiation, which bears as much similarity to surgery as bludgeoning a person to death with a hammer, but true, non-crippling differentiation. Phenom IIIs get 12 cores, and the latest SSE instructions + something that the boys down in the instruction lab cook up; Opterons get larger caches + more cores + special server instruction sets that mean something concrete, even if it means implementing hardware Apache threads; that's on top of the SSE3 stuff and so forth. Would companies buy Opterons over Phenoms if one had hardware-accelerated support for web services over the other? I believe the survey would say hell yes.
As for the GPU stuff, the low-cost, low-power stuff is nice for chump change, but it's a fierce market with many competitors. What you want, what large companies no doubt want, is the ability to slam in GPU daughterboards, to add 10 or 20 7970 GPUs on a single board (preferably with sockets, which drives up the cost a few cents, but also taps into the smaller markets, where you may buy 4 GPUs now and 6 later), so that they can drive those large super-computing projects that already make use of these GPUs, but do so more efficiently.
As for gaming, the more stream processors, I imagine, the better. When in doubt, double them, as it will give Intel and Nvidia something to curse over.
Re:Welcome to the club (Score:4, Interesting)
Sadly the systems I work on are all Intel because we do a great deal of report and post-processing on data and that requires CPU grunt and running as much as we can in parallel. Had AMD done this they would have been under consideration. Hyper-threading makes very little if any difference to us really, it's all about getting as many full cores on as possible.
Re:Welcome to the club (Score:5, Informative)
I am trying to grasp, somewhat desperately, the events that must have taken place inside AMD headquarters when the CPU design team said they wanted to do hyper-threading. Having seen how badly Intel got knocked around when they did it, and the fact that for the price of duplicating a fair amount of the CPU, you are still only occasionally eking out a slight performance gain...and sometimes, a performance loss, their strategy doesn't make sense
Perhaps they looked at IBM's or Sun's implementations of SMT instead. Adding a second context to the POWER series added about 10% to the die area and gave around a 50% speedup. If you have multithreaded workloads (especially on a server) then it can significantly improve throughput for two very simple reasons. The first is that when one context has a cache miss, the CPU doesn't sit idle; it can let the other context keep working. The second is that it makes branch misprediction penalties lower, because if you're issuing instructions alternately from two contexts you can get the instruction that the branch depends on a lot closer to the end of the pipeline before you need to make the prediction. This also helps with various other hazards, so you don't need as much logic for out-of-order execution to get the same throughput.
Re: (Score:3)
All that is right: throughput increases, but you still reduce single-thread performance.
I'm aware that AMD replicated a fuller set of the core in their hyperthreading architecture than Intel did. But even then, hyperthreading always means higher throughput but lower per-thread performance.
And if they really did get 50% more throughput by using 10% more area, they lost the perfect opportunity to come out with an 8-core, 16-thread model. Why did they keep it at 6 cores?
Re:Welcome to the club (Score:5, Interesting)
I guess you take the words of Intel fanboys literally. No, the Bulldozer architecture is not hyper-threading. No, it does not mean only a slight performance gain, and especially not a performance loss. I recently ran 3 microbenchmarks on an Opteron 6234 (also Bulldozer-based). I measured the negative effect of sharing some circuits within a Bulldozer unit. This negative effect varies from insignificant to small (3%, 13%, 25%). I ran the same two threads on the two cores of a single Bulldozer unit vs. two cores on separate units. Intel hyper-threading brings 30% more performance - in the best case. The Bulldozer core pair brings 75% more performance - in the worst case. How can you compare them? They are not in the same league.
The funniest benchmark was the floating point one. The most frequent complaint against the Bulldozer architecture is that two cores share a single floating point unit. AMD should say it a million times: yes, they share a single floating point unit, but that is a 256-bit wide unit, which can be split into two 128-bit parts. And what is the size of the usual floating point number? Not 256 bits, not 128 bits, but only 64. In reality I measured that the two cores in a single unit process floating point instructions almost at full speed. The negative effect of circuit sharing was only 3%, barely measurable. How ironic.
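As a sketch of the kind of module-sharing microbenchmark being described (not the poster's actual code): pin two FP-heavy threads either to the two cores of one Bulldozer unit or to cores in separate units, and compare wall-clock time. The core numbering below is an assumption; check the core-to-module mapping on the actual machine (e.g. with hwloc) before trusting the numbers.

    /* Build: gcc -O2 -pthread bench.c -o bench */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 200000000UL

    static void *worker(void *arg) {
        int cpu = *(int *)arg;
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof set, &set);

        /* FP-heavy loop: exercises the unit's shared 256-bit FPU */
        volatile double acc = 0.0;
        for (unsigned long i = 1; i <= ITERS; i++)
            acc += 1.0 / (double)i;
        return NULL;
    }

    static double run_pair(int cpu_a, int cpu_b) {
        pthread_t ta, tb;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&ta, NULL, worker, &cpu_a);
        pthread_create(&tb, NULL, worker, &cpu_b);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        /* Assumed mapping: cores 0,1 share a unit; 0,2 are separate units */
        printf("same unit:      %.2fs\n", run_pair(0, 1));
        printf("separate units: %.2fs\n", run_pair(0, 2));
        return 0;
    }

If sharing costs nothing, the two times are equal; the gap between them is the penalty being measured.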
Re: (Score:3)
There has never been a chip that consumed that much power, because it would catch fire. It isn't speculation. The highest-wattage CPU made was 140W. There is a really good reason for that. You can't carry away all of the heat unless the cores are laid side by side, but then you would have a processor the size of your motherboard.
The GPU on die stuff is cute, but honestly,
Very funny - desktop isn't the high end (Score:3)
Re:Welcome to the club (Score:5, Insightful)
Your argument doesn't stack up.
First you say they're bringing an 8 core chip to compete with a 4 core chip. Fine. Then you complain the cores cannot keep up 1:1. So you're expecting AMD's chips to be twice as good as Intel's to be able to compete.
That, of course, is rigging the test, and so is dishonest.
One could also say that with single cores not much worse than the competition, but double the number of cores, and a lower price to boot, you get better value. More so if you can make good use of the doubled number of cores.
And that's before considering that single-core benchmarks are entirely unrepresentative for multi-core performance thanks to various tricks like turbo core and turbo boost — that aren't 1:1 comparable so you'd have to do full, sustained benchmarks on all cores simultaneously to find out which delivers the most sustained instructions per second.
Meaning that AMD's offering takes more marketing footwork, but technically is not all bad. Not at all.
Re: (Score:3)
But it's not a lot of small ones - it's companies like TI, Broadcom, Qualcomm, NVIDIA, Freescale, Samsung, Hitachi, and so on. Each much larger than AMD.
Also, there is nothing about ARM that inherently makes it more power-saving at the same performance level than other RISC CPUs, be it SPARC, POWER, MIPS and so on.
Re:Welcome to the club (Score:5, Informative)
Also, there is nothing about ARM that inherently makes it more power-saving at the same performance level than other RISC CPUs, be it SPARC, POWER, MIPS and so on.
I can think of several things. For Thumb-2, there is instruction density. MIPS16 does about as well as Thumb-1, but it is a massive pain to work with. AArch64 doesn't (yet) have a Thumb-3 encoding, but one will almost certainly appear after ARM has done a lot of profiling of the kinds of instructions that compilers like to generate. Even in ARM mode, the big win over the other RISC architectures is that it has fairly complex addressing modes, so you can do things like structure and array offset calculations in one instruction on ARM versus 3-4 on MIPS. For AArch32, you also have predicated instructions. These make a big difference on a very low power chip, because you don't need to have any branches for small conditionals. For AArch64, most of these are gone, but there is still a predicated move, which is a very powerful version of a select instruction and lets you do mostly the same things. With AArch32 you have store and load multiple instructions, which basically let you do all of your register spills and reloads in a single instruction (the instruction takes a mask of the registers to save, the register to use as the base, and whether to post- or pre- increment or decrement it as two flags). With AArch64, they replaced this with a store-pair instruction, which can store two registers, and has the advantage of being simpler to implement (fixed number of cycles to execute).
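To make the predication and addressing-mode points concrete (my examples, not the parent's; actual codegen depends on the compiler), both of these C functions routinely compile to branch-free ARM code at -O2:

    /* A "small conditional": AArch32 can predicate the moves (MOVGT/MOVLT
     * under the flags set by CMP), and AArch64 compiles it to CSEL, so
     * neither needs a branch. */
    int clamp(int x, int lo, int hi) {
        if (x > hi) x = hi;
        if (x < lo) x = lo;
        return x;
    }

    /* The addressing-mode point: on ARM the scaled array index folds into
     * the load itself, e.g. LDR r0, [r1, r2, LSL #2], where MIPS needs a
     * separate shift and add before its lw. */
    int element(const int *a, int i) {
        return a[i];
    }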
Re: (Score:2)
Do you think it's easier to fight an elephant or a bunch of guys who are also fighting each other
Better yet: would you rather fight one elephant-sized duck or 100 duck-sized elephants?
on a 1 billion trans die, = 500 cores? (Score:2)
If they can fit 200-500 cores per die, and then have 8 dies per server, that will kick ass.
After all, most server tasks are long in terms of seconds and don't require 2800 BogoMIPS per thread.
Re:Welcome to the club (Score:4, Informative)
Use cases: AMD leverage ARM+Radeon (Score:2)
Also, AMD can spit out some interesting use cases where it can find a nice empty niche market by leveraging their built-in GPUs.
Not simply a low-powered mobile GPU (Qualcomm's Adreno, which is AMD Radeon's cousin, or PowerVR, etc.) but something more high-powered (some of their own low-power Radeon designs):
- Coupled with a multi-core ARMv8 CPU, this can be useful for netbooks with good graphics performance (the same kind of market Nvidia is chasing with their own Tegra series).
- Some numerical loads can benefit
Re: (Score:3)
http://en.wikipedia.org/wiki/List_of_x86_manufacturers [wikipedia.org]
Now, treating Intel, the company that has bleeding-edge fabs, as simply "one competitor" is demagogy.
x86 is also so much more complex than ARM. AMD in the ARM world would be like a heavyweight boxer competing with a bunch of 60kg guys.
Re:Oh snap. (Score:4, Informative)
An over-priced, slow server? ARM will grow to dominate the market the same way Intel's slow and over-priced servers have become commonplace.
Well we'd try something else, but it turns out monkeys with notepads and crayons are even slower (and more expensive).
Biodegradable, though.