Qualcomm Debuts 10nm Server Chip To Attack Intel Server Stronghold (tomshardware.com)
An anonymous reader quotes a report from Tom's Hardware: Qualcomm and its Qualcomm Datacenter Technologies subsidiary announced today that the company has already begun sampling its first 10nm server processor. The Centriq 2400 is the second generation of Qualcomm server SoCs, but it is the first in its new family of 10nm FinFET processors. The Centriq 2400 features up to 48 custom Qualcomm ARMv8-compliant Falkor cores and comes a little over a year after Qualcomm began developing its first-generation Centriq processors. Qualcomm's introduction of a 10nm server chip while Intel is still refining its 14nm process appears to be a clear shot across Intel's bow, due not only to the smaller process but also to its sudden lead in core count: Intel's latest 14nm E7 Broadwell processors top out at 24 cores. Qualcomm isn't releasing further information, such as clock speeds or performance specifications, that would help quantify the benefit of its increased core count. The server market commands the highest margins, which is certainly attractive for the mobile-centric Qualcomm, which found its success in the relatively low-margin smartphone segment. However, Intel has a commanding lead in the data center, with more than a 99% share of the world's server sockets, and penetrating the segment requires considerable time, investment, and ecosystem development. Qualcomm unveiled at least a small portion of its development efforts by demonstrating Apache Spark and Hadoop on Linux, and Java, running on the Centriq 2400 processor. The company also notes that Falkor is SBSA compliant, which means it is compatible with any software that runs on an ARMv8-compliant server platform.
First post (Score:1, Informative)
Re: (Score:2)
In the 90s, a few companies used to make CPU-neutral motherboards. Can't someone make a server/workstation w/ those fast interconnects, and then give sub-vendors the choice of using either x64 or ARM? That way, they could configure servers depending on which CPU they want, based on price, ISA, etc.
Re: (Score:3)
The server is not much different these days. You no longer have discrete memory controllers (well, sometimes you do), or discrete north and south bridges to handle things like expansion cards, network connectivity, etc. Now a lot of that resides in the CPU. So the CPU determines what s
AMD ZEN to put the hurt on intel! (Score:2)
Even if they do it slower but with more PCI-E lanes than Intel, it's a win for the end user. A slower storage server loaded with PCI-E storage can be better than a faster Intel one. And since lower-end Intel CPUs have fewer PCI-E lanes than CPUs costing $200-$300 more that are only a little bit faster in the same socket with more PCI-E enabled, this can force Intel to give up on that idea.
Re: First post (Score:2)
It takes a LOT of cache and very clever data paths (Score:5, Interesting)
It takes a LOT of cache and very clever data paths to keep 48 cores fed with data. Intel cores typically have 2.5MB of local level 3 cache per core and multiple ring buses so cores can access the whole cache and not waste precious off-chip bandwidth reading from main memory. If this is a special-purpose chip for executing deep learning algorithms, that's one thing, but for a general-purpose server where tasks are uncorrelated, it ain't easy to prevent stalls while cores wait for data.
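To make the parent's point concrete, here is a minimal sketch in C (illustrative only; the sizes and names are assumptions, not anything from the article) contrasting a streaming scan with dependent pointer chasing. On typical hardware the chase runs several times slower, because each load must wait for the previous cache miss to resolve:

```c
/* Illustrative sketch: why uncorrelated, pointer-heavy work stalls cores.
 * The sequential scan streams through memory and prefetches well; the
 * linked chase makes every load depend on the previous one, so a cache
 * miss stalls the core for a full memory round trip. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)  /* ~4M nodes * 16 bytes = 64MB, far larger than L3 */

struct node { struct node *next; long val; };

int main(void) {
    struct node *pool = malloc(N * sizeof *pool);
    size_t *order = malloc(N * sizeof *order);
    if (!pool || !order) return 1;

    /* Shuffled visiting order defeats the hardware prefetcher. */
    for (size_t i = 0; i < N; i++) order[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    /* Link the nodes into one random cycle. */
    for (size_t i = 0; i < N; i++) {
        pool[order[i]].next = &pool[order[(i + 1) % N]];
        pool[order[i]].val = 1;
    }

    long sum = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++) sum += pool[i].val;          /* streams */
    clock_t t1 = clock();
    struct node *p = &pool[order[0]];
    for (size_t i = 0; i < N; i++) { sum += p->val; p = p->next; } /* chases */
    clock_t t2 = clock();

    printf("sequential: %.3fs  chase: %.3fs  (sum=%ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    return 0;
}
```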
Servers (Score:1)
It's aimed at servers, so it's pretty safe to say it will be running 48 Apache threads with the socket code pretty much always in cache.
Or 48 other *identical* threads servicing multiple users for the same thread type.
Re: (Score:1)
It's aimed at servers, so it's pretty safe to say it will be running 48 Apache threads with the socket code pretty much always in cache.
Or 48 other *identical* threads servicing multiple users for the same thread type.
Eh? Maybe you missed the whole IT thing that's been going on for like 40-ish years, but servers are used for a few things other than just Apache.
Re: Servers (Score:2)
It's called an "example" (Score:5, Insightful)
It's called an "example". There are millions of servers that do almost nothing but run a bunch of Apache threads, many that do nothing but smtp, many that do nothing but nosql lookups, etc. It's very common, especially for companies with thousands of servers, to have servers dedicated to a single task.
If you need 1/4 of a server, absolutely (Score:3)
Absolutely, if your WordPress blog needs about 1/4 the resources of a server, a virtual machine is a good way to do that. I offer that for our smallest customers. (We call it "Half Server", two cores and 8GB dedicated to each customer.)
If you need a cluster of 4, 40, or 400 Squid proxies, the virtualization works the other way around - a true cluster is a rack row of machines that look and act like one. Each node, each piece of hardware, is an interchangeable and disposable part o
Re: (Score:2)
I used to think like this.
Then I discovered that hypervisors offer a distinct improvement in manageability, even for single-purpose hardware. You don't need to do much to maintain the hypervisor, and it makes upgrades of the actual OS relatively trivial. And once you're down that path, chewing up a bit of idle CPU with that little task the boss wants done becomes just as easy.
For single machines, yeah. For clusters, the virt (Score:2)
For single machines, like you say, you can upgrade the bare-metal OS without disturbing the guests (hopefully). If you have a cluster of 16 Snort nodes, or 32 storage servers, you just take each offline as you upgrade it, and it rejoins the cluster when ready. It's kind of reverse virtualization - the 16 pieces of hardware are virtually one service.
Re: (Score:2)
The biggest differences, to me at least, are that:
- You can pre-bake your images, which means that the server is only down for a minute or two for the upgrade rather than having to wait through the install process.
- You don't have to fight with $RANDOM_VENDOR's dodgy implementation of out-of-band management to power-cycle the server and watch the console (or, $deity forbid, actually attach virtual media).
- If you're dealing with an impending hardware failure, migration of the host and all of its data to anot
Re: It takes a LOT of cache and very clever data p (Score:2)
Re: It takes a LOT of cache and very clever data p (Score:4, Insightful)
Look at any standard library or application framework and you will not find any cache-oblivious algorithms.
Linked lists are just traditionally implemented linked lists. Hash tables are just traditionally implemented hash tables. Trees are just traditionally implemented trees. Even sorting will be a ham-fisted quicksort.
Pretty much only assembly language programmers give a shit, mainly because they are the only ones that understand the issues. Any exceptions you find are the exceptions that prove the rule.
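For contrast, here is a rough sketch (in C, with made-up names) of the kind of cache-aware structure the standard libraries mostly don't ship: an "unrolled" linked list that packs several elements per node, so one cache-line fill serves many elements instead of one:

```c
/* Sketch of a cache-aware "unrolled" linked list: instead of one element
 * per node (one potential cache miss per element), each node packs a
 * small array, so one cache-line fill serves several elements. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 7  /* chosen so a node roughly fills a 64-byte cache line */

struct unode {
    struct unode *next;
    int count;             /* elements used in vals[] */
    long vals[CHUNK];
};

static struct unode *upush(struct unode *head, long v) {
    if (head && head->count < CHUNK) {    /* room in the current chunk */
        head->vals[head->count++] = v;
        return head;
    }
    struct unode *n = calloc(1, sizeof *n); /* start a new chunk */
    n->vals[0] = v;
    n->count = 1;
    n->next = head;
    return n;
}

static long usum(const struct unode *n) {
    long s = 0;
    for (; n; n = n->next)                /* one traversal step per CHUNK */
        for (int i = 0; i < n->count; i++)
            s += n->vals[i];
    return s;
}

int main(void) {
    struct unode *list = NULL;
    for (long i = 1; i <= 100; i++) list = upush(list, i);
    printf("sum = %ld\n", usum(list));    /* prints 5050 */
    return 0;
}
```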
Re: (Score:2)
Re: (Score:2)
You don't need it for large portions of an application's code.
We aren't talking about application code. We are talking about library code.
If you write libraries like you write applications, then you are part of the problem.
Re: (Score:2)
Linked lists are just traditionally implemented linked lists. Hash tables are just traditionally implemented hash tables
Linked lists suck for caches, but hash tables don't have to. There's a trend for libraries to provide things like hopscotch hash tables as the default hash table implementation and these are very much cache aware. The real problem is the trend towards languages that favour composition by reference rather than by inclusion, which means that you do a lot of pointer chasing, which is very bad for both caches and modern pipelines.
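A minimal sketch of that last point, in C for concreteness (the type names are made up): composition by reference costs an extra dependent load on every access, while composition by inclusion keeps the part in the same cache line as the whole:

```c
/* Sketch: composition by reference vs. by inclusion.
 * The "reference" version needs an extra dependent load (a pointer
 * chase) per field access; the "inclusion" version keeps the data in
 * the same cache line as its parent object. */
#include <stdio.h>

struct point { double x, y; };

/* Composition by reference: the parent holds a pointer to the part. */
struct shape_ref  { struct point *origin; int id; };

/* Composition by inclusion: the part is embedded in the parent. */
struct shape_incl { struct point origin;  int id; };

double x_ref(const struct shape_ref *s)   { return s->origin->x; } /* 2 loads */
double x_incl(const struct shape_incl *s) { return s->origin.x;  } /* 1 load  */

int main(void) {
    struct shape_incl a = { { 1.0, 2.0 }, 7 };
    struct shape_ref  b = { &a.origin, 8 };
    printf("incl: %.1f  ref: %.1f\n", x_incl(&a), x_ref(&b));
    return 0;
}
```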
Re: (Score:2)
Not true. Most of the stuff that programs do is totally dependent on the speed of a) a database, b) an online web service, or c) a file system.
In those cases caches are definitely used, a lot. And you get 95% of your speed gains from there.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
ARMing servers. (Score:2)
It would be interesting since AMD cancelled their ARM efforts in the server space.
Re: (Score:2)
What x86 failures? I've been using AMD CPUs for years now. I'm typing this on one.
He probably just meant Zen, Jaguar, Bulldozer, K10, K8, K7, etc. Those failures.
Re: (Score:2)
The Athlon series was pretty successful for a while, when AMD had the ex-DEC Alpha guys w/ them: that was the first time they successfully made a CPU that challenged the Pentium. Same when AMD came out w/ AMD64. Intel turned things around after they dropped their plan to move everything to the Itanic and took advantage of the multi-core architecture.
Also, AMD had a unique market opportunity to build up a good manufacturing base w/ quality fabs for their CPUs, but didn't. Intel gave top priority to the
Re: (Score:2)
Apparently people have forgotten SeaMicro and AMD's microserver push. The comparison between AMD and Qualcomm has nothing to do with fabs and more to do with x86 dominance in the server market as mentioned in the article vs ARM in the same.
Re: (Score:3)
AMD had a unique market opportunity to build up a good manufacturing base w/ quality fabs for their CPUs, but didn't. Intel gave top priority to their fabs, and are the standard
AMD spun off their fabs for precisely this reason. Building fabs is insanely expensive and the only way to do it is to amortise the cost over a lot of chips. Even at its peak, Intel was producing 4-5 times as many CPUs as AMD and had a load of lower-end products (e.g. network interfaces) that they'd start using the fabs for once they were a generation old. There was absolutely no way for AMD to compete head to head with Intel in fab technology, because they couldn't get the economies of scale.
This does; ho
Re: (Score:2)
Until the K7, AMD was always in catch-up mode vs Intel, and usually targeted the low end of the market. But the TAM for AMD was the TAM for Intel: if you sold an AMD system to someone, there wasn't much it couldn't do. Problem was that AMD had trouble making the volumes needed in the market. If they had focused more on their fabs, that wouldn't have been the story. In fact, AMD was ahead of Intel when the latter had the Pentium 4, and also when Intel was waffling b/w the Pentium line and the Itanium.
Re: (Score:2)
AMD increased their production quite a lot, first with the switch from Austin to Dresden and then when they doubled Dresden's size. It takes a couple of years to build a modern manufacturing plant and it takes quite a lot of capital. AMD could have done it much faster had Intel not interfered with their anti-competitive practices with the Japanese manufacturers, among others; remember that AMD had to basically give away chips to Compaq back then so they would even accept them, because of Intel's monopolist pra
Re: (Score:2)
The first K6 chips were killer too, really outgunned Intel there: twice the performance for half the price sort of good. Sold like hotcakes.
Re: (Score:1)
Re: (Score:2)
Intel would have been the next Motorola had it not been for those anti-trust abuses.
Re: (Score:3)
It would be interesting since AMD cancelled their ARM efforts in the server space.
Really we should use AMD's market success as some sort of indicator? What about their 80x86 failures? Is this the market?
How's that? AMD had failures in the 80x86 market? Well, depends on what you call a failure. If we just look at the CPU market...
With a few exceptions, I find the AMD x86 processor family a pretty good value for the money. (Your mileage may vary.) Yeah, they tend to run hotter and take more power than the Intel offerings, but they perform well enough in most environments to be viable. They may not do as well in the mobile and server spaces (due to power consumption being higher at the same processing ca
Re: (Score:1)
It's a failure for their investors, but I'm not an investor, so why should I care?
Re: (Score:3)
Intel in the previous century needed AMD for Intel's own survival for several reasons:
In the 80s and 90s, when Intel was considered a small player in computation, many contracts called for a s
Re: (Score:2)
I never called AMD a success. I intended to show AMD was at times kept alive by Intel needing its existence. Why cower behind an AC? I stand by my historical view.
Re: (Score:3)
Anti-trust.
Re: ARMing servers. (Score:2)
AMD's 64-bit system did not fail like the Itanic (Score:2)
AMD's 64-bit system did not fail like the Itanic
Re: (Score:2)
They may not do as well in the mobile and Server spaces
When you put it like that, it doesn't sound too bad. Until you realise that mobile is the largest overall market segment, and that mobile and server are both the highest-margin and fastest-growing segments of the market.
Re: (Score:2)
Perhaps, but they still sell into these markets. I have a number of laptops sporting AMD processors that work fine. What I'm saying is that it's not their best market, not that they've failed there.
Re: (Score:2)
For a damn good reason, too. Every single attempt at an ARM server has failed; there have been roughly (IIRC) four server ARM chips that made it to sampling, and all bit the dust shortly afterward when the performance was shown to be abdominal. Personally, I believed AMD dropping the effort was a clear indicator that even with all their experience they couldn't build something that would beat x86.
The problem isn't the instruction set, it's all the stuff bolted onto it to try to keep cores fed. Multi-threading isn't
Re:ARMing servers. (Score:4, Funny)
Well, at least performance wasn't thoracic.
Re: (Score:2)
Gotta love autocorrect. :( Was rather funny though.
Intel 10nm != Other Foundry 10nm (Score:2)
Since node geometry now has more to do with marketing than it does with feature size, it's no longer a meaningful comparison. Intel's 14nm node is generally superior to TSMC's 10nm node (where the Centriq will most likely be fabbed).
Re: (Score:2)
Intel's 14nm node is generally superior to TSMC's 10nm node (where the Centriq will most likely be fabbed).
Can you quantify that? I generally assign a 30% "marketing penalty" to TSMC. By that rule of thumb, TSMC's 10nm node is a bit better than Intel's 14nm, other things being equal, which is of course a gross simplification. IMHO, the reality is, Intel's traditional process advantage is no more. It may even be turning into a process handicap as Intel persists in going it alone in a shrinking market while others are pooling resources in growing markets.
Cache dear chap Cache (Score:2)
Intel have known it for some time and spent a lot of time refining the cache down to the geometry...
What they do not specify is the cache size or any benchmarks... personally, I would like nothing more than to see a mix of architectures with a standard board interface layout...
john
Re: (Score:1)
The table here: https://www.semiwiki.com/forum... [semiwiki.com]
gives a breakdown of the different foundries' nodes.
As you can see, TSMC's 10nm is about 15% denser than Intel's 14nm; however, density isn't the only factor. Performance-wise, I would say Intel's 14nm is going to be better for a server chip, because it's specifically tuned for high-performance computing, while TSMC's nodes are tuned for low-power mobile SoCs.
Re:Intel 10nm != Other Foundry 10nm (Score:5, Interesting)
I don't know much about foundries, but I remember TSMC had some problems getting to this node, as does everybody. What I do know is that fabs are all TSMC does. Intel is a bigger beast that does fabbing, software, motherboards, chip design, etc.
It is this, but I don't think it's in the way you think.
Intel's problem is that it cannot sell FAB time because they are vertically integrated. Intel builds a FAB and runs its next-gen chips off of it for a few years; then they are stuck looking for something to do with the FAB when it is no longer current-gen. The problem is specifically this: Intel is competing with just about every FAB on the planet in this older-gen market (unlike with their desktop chips), so margins are thin even on much older FABs that are good enough to satisfy the bulk of the market's needs for all these secondary sub-products (drive controllers, etc...)
There are 3 kinds of semiconductor fabricators:
1) Vertically integrated like Intel. Only they can use their FABs.
2) Integrated device manufacturers like Samsung. They can sell FAB time to other companies so long as there isn't a conflict of interest.
3) Pure-play like TSMC. They only sell FAB time.
TSMC's revenue is now approaching Intel's, and unlike Intel they can keep all their FABs busy making money, so the outlook for Intel is grim without a serious restructuring, which they are doing (see recent massive layoffs, and bullshit marketing about their new "cloud strategy")
I've posted more than once about this on slashdot, and each time I end with the same recommendation: Sell your Intel stock.
Re: (Score:3)
Basically, Intel, and by extension x86, won in large part by exploiting a FAB advantage. That FAB advantage is over, and the chip architectures that managed to survive have an opportunity to come back from life support. So the likes of POWER, SPARC, MIPS, and ARM now have a chance to compete on a level technological playing field with x86.
Couple that with the increasing use of open source, which also negates the value of x86 instruction-set lock-in, and these are interesting times indeed.
Re: (Score:2)
Intels problem is that it cannot sell FAB time because they are vertically integrated
This is true. Intel will fab chips for other people, but they've had very few customers because everyone knows that the priority customer at Intel fabs is Intel and if yields are lower than expected it won't be Intel chips that get delayed.
Intel builds a FAB and runs its next gen chips off of it for a few years, then they are stuck looking for something to do with the FAB when it is no longer current-gen
This is simply not true. Slashdot likes to think of Intel as a CPU vendor, but that's actually quite a small part of their business. They make a lot of other kinds of chips, and a great many of these don't require the latest and greatest fab technology. This has alway
Re: (Score:2)
This is simply not true. Slashdot likes to think of Intel as a CPU vendor, but that's actually quite a small part of their business. They make a lot of other kinds of chips, and a great many of these don't require the latest and greatest fab technology. This has always been a big part of their advantage over AMD: they have products that will use the fab for 10+ years, so they can amortise the construction costs over that long a period.
They don't keep those fabs busy. That's the flaw and the end of your argument. Getting 20% usage of your production capacity while your competition gets 100% of theirs means you have massive overhead in comparison.
Intel is in serious trouble, and a decade from now you won't be here telling us how you got it wrong; you will just be fanboying something else.
Qualcomm doesn't make chips (Score:2)
Not really seeing how this threatens Intel outside of the whole ARM vs x86 thing. My understanding is most server farms are connected to dedicated nuclear power plants anyway, so power consumption isn't an issue. Heat dissipation? Yeah, that might be an issue.
Qualcomm doesn't make monopolies. (Score:2)
It could potentially end up freeing the server space from a monopoly. You know? The thing Slashdot's always railing against.
Re: (Score:3)
You're entirely right that the memory subsystem is 90% of the battle for most server workloads once you exceed ten cores.
For integer workloads with unreasonable parallelism and unreasonable cache locality (that Intel's AVX doesn't already handle almost ideally), I'm sure this design will smoke Intel on the thermal management envelope, a nice niche to gain Qualcomm some traction in the server mix, but hardly a shot heard around the world.
And Qualcomm better be good, because Intel will soon respond with Omni-
Re: (Score:2)
"smoke" is perhaps the wrong word here...but ok I see your point
Re: (Score:2)
My understanding is most server farms are connected to dedicated nuclear power plants anyway, so power consumption isn't an issue. Heat dissipation? Yeah, that might be an issue.
With recent news that Google is shooting for 100% renewable energy for its datacentres (and many others will follow suit), I'm not quite so sure that's true any more.
More cores fed by that $12 billion power plant (Score:2)
Data center power is expensive. Mostly because it's reliable and redundant. And yes, every watt used is a watt of heat that has to be removed by the cooling system.
Suppose it was literally true that a data center was powered by a dedicated nuclear power plant. It costs about $12 billion to build a power plant. How many cores would you like to be able to power from your $12 billion investment? If I operated a big DC, I'd rather power a million low-power CPUs from my X gigawatts of power than only be able
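A back-of-envelope sketch of that trade-off, in C (all figures here are assumptions for illustration, not numbers from the thread):

```c
/* Back-of-envelope sketch: given a fixed power budget, lower watts per
 * core means more cores deployed. All figures below are assumptions. */
#include <stdio.h>

int main(void) {
    double budget_w   = 1.0e9;  /* assume a 1 GW plant feeds the DC      */
    double pue        = 2.0;    /* ~$1 of cooling per $1 of compute      */
    double it_power_w = budget_w / pue;

    double watts_per_core_big   = 6.0;  /* assumed high-power core draw  */
    double watts_per_core_small = 2.0;  /* assumed low-power core draw   */

    printf("high-power cores: %.0f million\n",
           it_power_w / watts_per_core_big / 1e6);   /* ~83 million  */
    printf("low-power cores:  %.0f million\n",
           it_power_w / watts_per_core_small / 1e6); /* ~250 million */
    return 0;
}
```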
Re: (Score:2)
Qualcomm designs chips, from my experience based on ARM not x86, and outsources the actual making of chips to other companies (TSMC, Samsung, whomever).
Almost certainly not Samsung, as Samsung isn't pure-play, it's an IDM. Maybe Qualcomm can use them for some things, but probably not ARM SoCs.
Re: (Score:2)
My understanding is most server farms are connected to dedicated nuclear power plants anyway, so power consumption isn't an issue. Heat dissipation? Yeah, that might be an issue.
Heat and power are the same issue. Conservation of energy means that power in is power out, and the power out is heat that needs to be dissipated. A rule of thumb for data centres is that for every dollar you pay in electricity for the computers, you need to pay another dollar in electricity for cooling. If you want high density, then you hit hard limits in the amount of heat that you can physically extract (faster fans hit diminishing returns quickly). This is why AMD's presence in the server room went
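A quick sketch of that rule of thumb (the server draw and electricity price below are assumed figures, purely for illustration):

```c
/* Sketch of the rule of thumb above: every watt of compute implies
 * roughly another watt of cooling (PUE ~ 2.0). Figures are assumed. */
#include <stdio.h>

int main(void) {
    double server_w = 400.0;          /* assumed draw of one server      */
    double pue      = 2.0;            /* compute + matching cooling load */
    double usd_kwh  = 0.10;           /* assumed electricity price       */
    double hours_yr = 24.0 * 365.0;

    double kwh_yr = server_w * pue * hours_yr / 1000.0;
    printf("one server: %.0f kWh/yr, about $%.0f/yr in electricity\n",
           kwh_yr, kwh_yr * usd_kwh); /* ~7008 kWh/yr, ~$701/yr */
    return 0;
}
```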
Not really the whole story... (Score:2)
Intel starts up 10nm factory [wccftech.com]
Re: (Score:2)
Intel is only "refining" the 14nm design through the natural course of their "tick-tock" process (which has now added a third "tock", which seems likely to be due to lack of real competition).
No, it's because their 10nm test yields aren't even close to economical.
Intel has blown their advantage. I'm sure someone will reply with "but it's not real 10nm" while being completely ignorant that not only is Intel not doing "real 14nm," but they are the ones who invented lying about feature size.
Re: (Score:2)
Only none of what you said is true [semiwiki.com].
They haven't blown their advantage, though it's certainly shrunk, and they will continue to hold it through the "10nm" node, where Intel's process is actually below the standard 10nm and TSMC et al.'s most certainly is not.
Apple's A10 is nearly 2x faster per core than Qual (Score:2)
Re: (Score:3)
I searched on Google. Found this in under two seconds. Took me more than that to write this reply.
http://www.theverge.com/2016/9... [theverge.com]
Data Sheet? (Score:2)
(I suspect the answer is "no.")
Narrow (Score:2)
Re: (Score:3)
The current trend in labeling since transistors started on their vertical adventures is to extrapolate an "equivalent" feature size based on overall transistor density. These TSMC-made chips have about a 30% higher density than their "14nm" chips, just as Intel's "10nm" chips will have about a 30% higher density than their "14nm" chips.
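A sketch of that extrapolation, assuming density scales as the inverse square of feature size (a simplification): a ~30% density gain over "14nm" works out to roughly a "12nm-equivalent" label, well short of a true full-node 10nm:

```c
/* Sketch of the "equivalent feature size" extrapolation described above,
 * under the assumption density ~ 1/(feature size)^2. A true 14nm -> 10nm
 * shrink would need roughly 2x density, not 1.3x. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double base_nm       = 14.0;
    double density_ratio = 1.3;  /* ~30% denser, per the parent post */
    double equiv_nm      = base_nm / sqrt(density_ratio);
    printf("equivalent node: %.1f nm\n", equiv_nm);  /* ~12.3 nm */
    return 0;
}
```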
Re: (Score:2)
Re: (Score:2)
Therefore a 14 layered 14nm chip would be labeled as 1nm?
Give or take... but they are probably still 15 years away from actual 14nm feature sizes. If current features were simply reduced by a factor of 15 across the board, the smallest feature size would still be about 3nm.
Re: (Score:2)
The current trend in labeling since transistors started on their vertical adventures is to extrapolate an "equivalent" feature size based on overall transistor density.
And when did it start? I mean what was the last non-extrapolated feature size?
But Heat Undermines the Value (Score:2)
As more than half the cores have to remain idle most of the time to keep it from overheating.
Next Next Gen Firewalls (Score:1)
Next Next Gen Shaders (Score:4, Interesting)
Or use a GPU (http://shader.kaist.edu/packetshader/)