Intel Hardware

Intel Announces Atom S1200 SoC For High Density Servers

MojoKid writes "Intel has been promising it for months, and now the company has officially announced the Intel Atom S1200 SoC. The ultra-low-power chip is designed for the datacenter and provides a high-density solution intended to lower TCO and improve scalability. The 64-bit, dual-core (four total threads with Hyper-Threading technology) Atom S1200 underpins the third generation of Intel's commercial microservers and features a mere 6W TDP that allows a density of over 1,000 nodes per rack. The chip also includes ECC and supports Intel Virtualization technology. Intel saw a need for a processor that can handle many simultaneous lightweight workloads, such as dedicated web hosting for sites that individually have minimal requirements, basic L2 switching, and low-end storage needs. Intel did not divulge pricing, but regardless, this device will provide direct competition for AMD's SeaMicro server platform." Amazing that it supports ECC since Intel seems committed to making you pay through the nose for stuff like that.
  • by Anonymous Coward

    How can lots of slow processors be better than a few fast ones with virtualization on top?

    • by TechyImmigrant ( 175943 ) on Wednesday December 12, 2012 @02:01PM (#42262949) Homepage Journal

      >How can lots of slow processors be better than a few fast ones with virtualization on top?

      More physical contexts => less context switch overhead => can handle multiple simultaneous sessions more efficiently provided that those sessions are not individually compute or memory intensive.

      • by godrik ( 1287354 ) on Wednesday December 12, 2012 @02:05PM (#42262997)

        Well, that's the difference between scale-up and scale-out in parallel computing. Throughput typically comes from many simple processing units; low latency typically comes from fewer, highly specialized ones.

        If it is throughput you care about, simple is the way to go.

    • by Anonymous Coward

      If you are a large corporation with a WAN that handles email, databases, file retention/storage, project management, etc., BUT does not do any rendering or heavy numerical computation, then this is the ideal server. Very little wasted CPU computing potential.

    • How can lots of slow processors be better than a few fast ones with virtualization on top?

      A few points:

      1. Most hyperscale server applications are memory- and/or I/O-bound, not CPU-bound ("memory bound" here meaning frequent memory accesses, not bound by memory size)

      2. Typical applications are search, web serving and data mining. Anything that requires Apache or Hadoop where the processing is highly parallel (and memory or I/O bound...)

      3. For those types of workloads, there are frequent idle periods for any individual CPU, so individual CPUs can often enter a low power state while only the

  • how much is it? (Score:4, Informative)

    by alen ( 225700 ) on Wednesday December 12, 2012 @01:53PM (#42262849)

    one of the reasons no one uses Intel in mobile is the cost.

  • by Aphrika ( 756248 ) on Wednesday December 12, 2012 @01:59PM (#42262919)
    Quite a few scientific customers will require that, and for performance-per-watt computing, it's likely that this chip will find its way into those applications.

    However, I am amazed that they are using the Atom branding for what is essentially a very different underlying chip. The initial range of Atoms were lacklustre enough that the name seems somewhat tarnished now. Dumping that brand into the server arena may cause some people to have reservations, regardless of how good the underlying technology is.
    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday December 12, 2012 @02:05PM (#42263005) Journal

      I'm sure that Intel's Xeon team, and their margins, are 100% totally delighted with this chip, have greatest confidence in its success, and wish it only the best in the future...

    • by Kjella ( 173770 )

      However, I am amazed that they are using the Atom branding for what is essentially a very different underlying chip.

      Why so surprised? Intel is selling "Pentiums" now that have nothing whatsoever to do with any Pentium architecture; they're only watered-down versions of Intel Core processors. Same with the Celerons: it's more a price segment than an actual technology.

      The initial range of Atoms were lacklustre enough that the name seems somewhat tarnished now.

      The initial range of Atoms sold really well; it was only after AMD started making decent APUs and the tablet market stole the whole show that they disappeared into obscurity. Maybe to people watching the battle of AMD vs Intel they're a bit lackluster, but I think to

      • Why so surprised, Intels are selling "Pentiums" now that have nothing to do whatsoever with any Pentium architecture

        By definition, if they're calling it "Pentium", it's "a" Pentium architecture. What you say might have had more weight if the original Pentium architectures were all the same. However, to the best of my knowledge, the original Pentium (P5) was an extension of the 486 architecture, whereas the Pentium Pro and Pentium II (P6) were sort-of-RISC non-x86 cores wrapped in a translation layer; that is, drastically different.

        However, I do agree that bringing back the Pentium name after ditching it was a confusing

      • ...but I think to most it was just about having a computer for light work at all.

        I've asked around a bit (hell, I ask all kinds of strange questions!). Most people aware enough to know what their processor is, but not technical enough to know what that means, won't ever touch it again.

        But then, you've just excluded everybody that'll buy a server in "people watching the battle of AMD vs Intel".

    • It's a low-power x86 compatible from Intel. Why not apply the Atom label?

      Personally I think it's sad these parts aren't available for desktop applications. I wouldn't mind a server-grade (ECC support, virtualization, 64 bit), low power x86 CPU, and I'm sure I'm not the only one. If some company had the guts to put this CPU on a Mini-ITX board or a small all-in-one PC, no doubt it would sell.

    • by gman003 ( 1693318 ) on Wednesday December 12, 2012 @02:44PM (#42263469)

      They're using the Atom branding because it is an Atom processor underneath. The Atoms and the Core/Xeon/Pentium/Celeron lines have completely different underlying microarchitectures. In particular, the Atom uarch ("Sodaville" in the current generation) has really poor floating-point and SIMD performance, so you can forget about scientific computing on this.

      More to the point, the "Atom" brand implies "cheap, low-power device". The same thing "ARM" implies, and as this processor is mainly there to seize control of a niche ARM was trying to grab, it makes sense to use a similar brand name.

      • by elwinc ( 663074 )
        Good points. Another thing that makes the S1200 similar to Atom & different from Core is the S1200 doesn't do out-of-order execution. Core chips have something like a 50 instruction re-order buffer, and that helps Core execute an average of 1.5 instructions per clock per thread (at the cost of greatly increased complexity). Atom on the other hand, so far, does no re-ordering, which makes it much simpler and a bit slower.
  • by Anonymous Coward

    This is all Internet Speculation, but:

    This chip won't be sold to end-users. This will only be available in pre-configured high-density systems. You will still pay through the nose.

    [citation needed]

    • Given that the press shots for the part show a damn lot of teeny BGA balls on the bottom, I'd hope that it isn't an end user part...

      The question is whether it will(as some Atoms in the past have) show up fairly cheaply in the nicer Prosumer/SMB NASes and assorted 1U/shallow server barebones kits, or whether this will be a "Well, the totally proprietary cardcage is $25,000, I'll throw in a license for our Enterprise Backplane Management Console for just 3k more, cause I like you, and cards are 6k a pop..." t

      • Given that the press shots for the part show a damn lot of teeny BGA balls on the bottom, I'd hope that it isn't an end user part...

        Existing Intel Atom chips are also BGA-soldered, but you can purchase motherboards with the chip already included for DIY systems. The same is true of AMD's E-series. The question is whether any of Intel's customers will want to supply S1200-series boards to end users, or if they prefer to reserve them for charging out the nose in prebuilt systems.

          • If someone wants to make a low cost, low wattage server board with all the servery goodness (ECC, failover, VT, etc.) afforded by the S1200, I'm pretty sure Intel would be happy to sell them the chips.

  • by Revotron ( 1115029 ) on Wednesday December 12, 2012 @02:02PM (#42262965)
    At first glance I read the title as "Intel Announces Atom $1200 SoC For High Density Servers".

    My first thought: "$1200 for an underpowered Intel server chip? Sounds about right."
    • My first thought: "$1200 for an underpowered Intel server chip? Sounds about right."

      AMD cpus really are dead, market place confirms.

  • Good old Slashdot (Score:5, Insightful)

    by kiwimate ( 458274 ) on Wednesday December 12, 2012 @02:04PM (#42262987) Journal

    Oh the irony...

    • Listed as being from the "race to the bottom" department.
    • Person responsible: "Unknown Lamer"
    • Sole "editorial" contribution (and I use that word loosely): a silly and irrelevant snarky comment.

      Amazing that it supports ECC since Intel seems committed to making you pay through the nose for stuff like that.

    Damn, but Slashdot is a sad place these days.

    • by Trepidity ( 597 )

      He's a fan of AMD [slashdot.org] perhaps?

    • Listed as being from the "race to the bottom" department.

      The departments have always been jokey.

      Person responsible: "Unknown Lamer"

      Slashdot has always been driven by user submissions. Given your UID you have been here even longer than me, which probably means at least 10 years, so I'm surprised this comes as a shock to you.

      Sole "editorial" contribution (and I use that word loosely): a silly and irrelevant snarky comment.

      Actually, it's neither silly nor irrelevant.

      It is quite significant that the

      • Sole "editorial" contribution (and I use that word loosely): a silly and irrelevant snarky comment.

        Actually, it's neither silly nor irrelevant.

        It is quite significant that the Atom CPUs support ECC memory, and Intel do make you pay a lot for it. AMD supports ECC memory on mid-range desktop CPUs and above, whereas for Intel, you have to fork out for the Xeon brand and pay a very hefty premium.

        Damn, but Slashdot is a sad place these days.

        Then leave and demand your money back.

        Man, you are clutching at straws, just like the OP did with his snarky comment about ECC. The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs?? By what stretch of the imagination is ECC not relevant to a server CPU? In fact, it would have been noteworthy if Intel had cut corners and just rebranded their mobile Atom CPU and not even added ECC support.

        And Newegg sells 8GB ECC RAM for 52 bucks vs 40 bucks for non-ECC RAM. Even if you put asid

        • Man, you are clutching at straws,

          How so?

          The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs?

          You do realise that nested replies are replied to parent posts, not the original story, right?

          I claim that Intel do charge a hefty premium for ECC, which is why the comment is relevant. AMD do not, as can be seen from cheap midrange desktop CPUs supporting ECC. In other words, you can use cheap AMD CPUs for server grade tasks. Because AMD don't ch

          • Man, you are clutching at straws,

            How so?

            The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs?

            You do realise that nested replies are replied to parent posts, not the original story, right?

            I claim that Intel do charge a hefty premium for ECC, which is why the comment is relevant. AMD do not, as can be seen from cheap midrange desktop CPUs supporting ECC. In other words, you can use cheap AMD CPUs for server grade tasks. Because AMD don't charge a premium for ECC and Intel do. Because for Intel, you need to fork out for a low performing Xeon which will be more expensive than an equivalent AMD desktop processor by a long way. And you can use the AMD desktop processors for servers. Because they support ECC, cheaply, unlike Intel ones, which don't. Got it yet?

            Even if you put aside the fact that this is supposed to be server RAM, an extra 12 bucks sounds a "hefty premium" to you?

            I don't believe you. Why don't you paste a link? Oh look, now that you've pasted it, go back and read it really carefully. Go on, read it again. But carefully this time. You will see that, surprise, it is NOT Intel you're buying the RAM from; in fact, Intel don't even sell RAM.

            You at least expect a certain standard when it comes to snarkiness

            As requested, I've upped the level of snarkiness.

            I can't make head or tail of what you are trying to say.

            For the record, I'm not trying to be snarky *at* you or asking you to be - my comment was about the OP's comment being lame - which it was.

            Yes, I agree with what you are saying about AMD, and definitely, AMD offers and has always offered better value for money than Intel. That is indeed their USP and how they compete. And it is a good thing for average customers like you and me.

            My point was that this is a dedicated server CPU so ECC is to be expected.

            • I can't make head or tail of what you are trying to say.

              So it would seem. We are talking at crossed purposes entirely.

              For the record, I'm not trying to be snarky *at* you or asking you to be

              Oh OK. I'll dial it back a bit then :)

              ECC RAM is cheap. Intel processors supporting it generally are not. You can make a cheap server out of AMD desktop processors because they support ECC. The same cannot be said of Intel: Intel charge a big premium for processors supporting ECC.

        • The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs??

          Because AMD desktops come with the functionality, but lots of Intel servers don't.

    • by Anonymous Coward

      Okay smartass, what other cheap/low end Intel CPUs support ECC RAM?
      And no, an "i3" that's more expensive than an E3 Xeon and needs a C20x chipset doesn't count.

      • by fa2k ( 881632 )

        The new E3 Xeon "V2" processors seem to be just a bit more expensive than the equivalent i7 processors. These are all expensive parts, but there isn't a huge premium for the Xeon. There isn't much to choose from in the mobo department, though there is an Asus board that seems decent.

  • by Anonymous Coward

    A beowulf cluster of these!

    • And for the first time in 20 years of slashdot, a beowulf cluster joke was actually appropriate.

  • High density. (Score:5, Interesting)

    by serviscope_minor ( 664417 ) on Wednesday December 12, 2012 @02:13PM (#42263079) Journal

    So, it's high density and supports 1000 nodes per rack, or 2000 cores per rack, since it's dual core. At 6W TDP, that's 6kW.

    Sounds great, except...

    You can cram 64 piledriver cores into 1U, and they have a 140W TDP for the hottest.

    So, crunching some numbers (a typical rack is 45U high).

    You would need 31 Opteron servers to have as many cores. That gives... uh what? 4400W.

    Hmm

    So, if you buy cheapie quad socket piledriver machines, you can fit your 2000 cores into a mere 32U, and draw 2/3 of the power. Of course comparing cores discounts the quality of the cores. While AMD is known for a MOAR COAREZZZZZ1!1!!one! approach, the piledriver cores are considerably faster than Atom ones clock for clock. Generally hard to find benchmarks, but the AMD processors usually lie between the i3 and i5 in terms of single threaded performance and the i3 and i5 trounce the Atom.

    This is one of the very strange things.

    People keep banging on about high density servers, but even the most cursory check from a standard online price quoter almost always shows that not only are the quad Opteron machines denser, they are usually cheaper too. They also have the advantage that they offer a larger unified system image making them more flexible too.

    About the only thing that's comparable in terms of price, performance and density seems to be those Intel machines where you can cram 4 dual socket machines into 2U. The quad socket Intel boxes are more expensive.

    So, what gives?

    Can anyone enlighten me?

    What's the appeal?

    • Oops!

      Out by a factor of 4 on the Opterons.

      2000 Opteron cores would cost you 17,000W, not 6000.

      Still, given that 2000 Opteron cores will be much faster than 2000 Atom cores, it's going to be much closer.

      The Opterons are still denser, however, and almost certainly competitive on power.
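
      For anyone who wants to redo the arithmetic, here is a minimal sketch (Python) of the corrected comparison, using only the figures quoted in this thread: 1,000 nodes at 6W TDP for the Atom rack, and quad-socket, 16-core, 140W Piledriver boxes at 64 cores per 1U. These are the posters' own assumptions, not verified vendor specs.

        import math

        # Atom S1200 rack: 1000 nodes, 2 cores and 6 W TDP per node
        atom_nodes, atom_cores_per_node, atom_tdp_w = 1000, 2, 6
        atom_cores = atom_nodes * atom_cores_per_node              # 2000 cores
        atom_power_w = atom_nodes * atom_tdp_w                     # 6000 W

        # Opteron rack: quad-socket 1U boxes, 16 Piledriver cores and 140 W TDP per socket
        sockets_per_u, cores_per_socket, socket_tdp_w = 4, 16, 140
        cores_per_u = sockets_per_u * cores_per_socket             # 64 cores per 1U
        servers = math.ceil(atom_cores / cores_per_u)              # 32 boxes (the parent rounded to 31)
        opteron_power_w = servers * sockets_per_u * socket_tdp_w   # 17920 W

        print(atom_cores, atom_power_w, servers, opteron_power_w)

      Running it gives roughly 6 kW for the Atom rack and roughly 18 kW for the Opteron boxes, close to the ~17,000W correction above (the parent rounded to 31 boxes).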

      • The use case for these isn't compute-intensive.

        Imagine running static-content webservers on these. Your main bottlenecks are going to be disk and network (and maybe memory), not CPU. Or maybe running an NFS share, or anything else where the spinning disc is the biggest obstacle.

        Also, do some idle-power comparisons between the Atom and the Opteron*. Maybe they use the same power under peak load, but what happens when half your processors are idling? I would imagine the Atoms do much better about dropping to

        • Imagine running static-content webservers on these. Your main bottlenecks are going to be disk and network (and maybe memory), not CPU.

          In that case, you'd presumably go for the lowest end Opteron processors which only draw 85W or so, giving you the same kind of thing for less power.

          Though interestingly, if IO is really a problem, then they could offer a solution quite easily: the Opteron processors connect to both the chipset and each other using HT. You could put two 6xxx Opterons in one box and use the fo

        • I don't get that. If the task is not compute-intensive, why do you want so many cores?

          You solve disk throughput by offloading the disks to specialized servers (a SAN), and you solve memory throughput by having more servers... And then, you can only increase density and memory throughput at the same time if you go with a custom server design, and fewer cores here equals less power and thus more density.

    • by bored ( 40072 )

      You can cram 64 piledriver cores into 1U, and they have a 140W TDP for the hottest.

      I don't really think this chip is aimed at AMD; it's aimed at ARM (and friends). The ARM guys have been making a lot of noise lately about how ARM is perfect for the datacenter, and this chip is just Intel pointing out that if you want a whole bunch of "low" power and crappy performance CPUs, they can provide them too.

      Even the name is an indication of that: Atoms are CPUs aimed at the ARM market, Xeons are CPUs aimed at the s

      • I mean, why do people think they want ARM servers or these funky "high density" ones which are anything but?

        I guess the absolute minimum power draw is lower, but if you've got a rack full of 45 machines, you're probably expecting a utilisation of greater than 2%.

        The Supermicro machines (Intel and AMD based) do excellently on price, power draw, throughput and density. All the new ones seem to be more expensive, less dense and more of a pain in the ass.

    • by pavon ( 30274 )

      Because those 31 64-core piledriver machines won't be able to push the same amount of IO as 1000 2-core Atom machines.
      These things aren't for compute intensive tasks. Intel's own advertising comparing them to Xeons shows the Atoms having twice the performance-per-watt for scale-out tasks, but half the performance-per-watt for compute intensive tasks. It is about providing another option to better match the processor to the task. And it is here today, while 64-bit ARM is still a year in the future.

      • Because those 31 64-core piledriver machines won't be able to push the same amount of IO as 1000 2-core Atom machines.

        How so?

        In your 1U, you get 4 processors, 64 cores, and 4 PCI Express 2.0 x16 slots, giving 32 GB/s per U, or about 1 TB/s for the rack of 31 machines. You'll also get a bunch (12?) of SATA ports for your troubles and a couple of gig-E ones too, if you care for such things.

        Remember, Opteron processors are popular for supercomputers which rely on very high speed, very low latency interconn
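
        As a quick sanity check of those bandwidth figures, here is a short sketch using the usual rough rate of about 500 MB/s per PCIe 2.0 lane per direction (an approximation; real throughput is somewhat lower after protocol overhead):

          # Rough check of the "32 GB/s per U" figure above.
          gb_per_lane = 0.5                 # GB/s, one direction, PCIe 2.0 (approximate)
          slots, lanes_per_slot = 4, 16     # four x16 slots per 1U box
          machines = 31                     # boxes per rack, per the grandparent's count

          per_u_gb_s = slots * lanes_per_slot * gb_per_lane   # 32 GB/s per 1U
          rack_tb_s = per_u_gb_s * machines / 1000            # ~0.99 TB/s across the rack
          print(per_u_gb_s, rack_tb_s)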

    • by Kjella ( 173770 )

      Generally hard to find benchmarks, but the AMD processors usually lie between the i3 and i5 in terms of single threaded performance and the i3 and i5 trounce the Atom.

      I guess it must be hard, with the blindfold on and all. Here [anandtech.com] is a list, for example, where the FX-8350 is even beaten by the Phenom II x6 and performs worse than the Intel Pentium G840 in single threaded performance. Anyway, comparing 6W/2 = 3W and 140W/16 = 8.75W, those Piledriver cores had better do much more than one Atom core. Intel is again trying to create a two-front war against AMD: should they go lower to match the Atoms, higher to match the Xeons, or spread themselves too thin doing both? Worst thin
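
      The per-core numbers quoted there work out as follows, a trivial check using only the TDP figures from this thread:

        # Per-core TDP comparison from the figures quoted above.
        atom_w_per_core = 6 / 2               # 3.0 W per Atom S1200 core
        piledriver_w_per_core = 140 / 16      # 8.75 W per Piledriver core
        ratio = piledriver_w_per_core / atom_w_per_core   # ~2.9x more power per core
        print(atom_w_per_core, piledriver_w_per_core, ratio)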

  • by Larry_Dillon ( 20347 ) <dillon.larryNO@SPAMgmail.com> on Wednesday December 12, 2012 @02:41PM (#42263421) Homepage

    I'm using an AMD E-350 as a home server on Fedora 17. It's not a gaming rig but it has plenty of power for DHCP/DNS/file server duties and can run a Windows 7 VM via KVM. CPUs are so fast these days that even a low-end/low-power offering is fast enough for many jobs. I'm glad to see Intel offering 8GB of RAM on Atom, as the older systems could only support 4GB. That's what pushed me to the AMD Bobcat/Zacate platform.

    I figure it's saving me about half of the electricity versus running an older Intel PC as a server. Plus the Asus E35M1-M has decent onboard video, USB3 and plenty of SATA3 ports.

  • How do they compare to 64-bit ARM chips in price, performance and power usage?
  • There are a number of vendors providing high density E5 Xeons that probably beat this thing on both performance and density. Supermicro's dual twin puts 4 E5-2600s in a single U, which works out to 1344 cores in 42U.

    It's quite possible that the E5 even beats it on benchmark units/watt as well, given that the Xeons probably get 5x-10x the performance per core.

  • by JDG1980 ( 2438906 ) on Wednesday December 12, 2012 @04:12PM (#42264631)

    Original poster: "Amazing that it supports ECC since Intel seems committed to making you pay through the nose for stuff like that."

    This article [anandtech.com] gives some insight into why Intel is doing this. Basically, ARM has been making noises for some time about getting into the server market. Intel is very concerned about this, because ARM is used to lower margins and willing to license their designs widely, and could easily undercut Intel on price. They see the writing on the wall. Sure, they would like to keep ECC and other server-type goodies as premium features, but that's no longer a realistic option. Either they have to offer something cheaper, or customers who want low-cost, high-reliability server hardware will jump ship as soon as they can. This is the market niche the Atom S1200 is designed to fill. Intel gets to tout its advantage of backwards compatibility while being able to dramatically undercut other server-grade hardware on price. With this, ARM is going to have a much harder time convincing data centers to switch.

    By the way, if all you care about is ECC, you don't have to buy an expensive CPU from Intel to get that (though you do need a C-series chipset rather than the consumer-grade stuff). Many of Intel's Ivy Bridge Pentium and Core i3 processors now support ECC, though this has not been widely publicized. For example, this i3-3220 [newegg.com] is only $119.99 at Newegg and according to Intel's official site [intel.com] it supports ECC.

  • Low PCIe lane count. It should have at least 16 lanes so you can have an x8 RAID card and room for, say, 10Gb Ethernet / fiber cards / other IO cards.

    • Each CPU supports 8 lanes of PCIe 2.0 (4GB/s) meaning it can flush and fill its 8GB (max) of main memory from an IO device every 2 seconds, if you actually had that much IO to pump.

      These things are meant to live 1000 to a rack, which is ~24 CPUs per 1U. Give each motherboard a pair of 1Gbit/s Ethernet pipes, and I'm sure it's sufficient for the scaleout they expect.

      These are not intended to build your normal 4U server chassis with 40 PCIe lanes.
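
      Here is the same kind of back-of-the-envelope check for the per-node numbers above, again assuming roughly 500 MB/s per PCIe 2.0 lane per direction and a 42U rack for the 1,000-node figure:

        # Sanity check of the per-node numbers quoted above.
        lanes, gb_per_lane = 8, 0.5       # PCIe 2.0, approximate per-lane rate
        io_gb_s = lanes * gb_per_lane     # 4 GB/s of PCIe bandwidth per CPU
        mem_gb = 8                        # max memory per node
        fill_time_s = mem_gb / io_gb_s    # 2 s to fill (or drain) all of it

        nodes_per_u = 1000 / 42           # ~24 nodes per 1U, assuming a 42U rack
        print(io_gb_s, fill_time_s, round(nodes_per_u, 1))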

  • There is one thing I'm still not clear about.

    Wikipedia says that TDP is “thermal design power”. I thought it was Thermally Dissipated Power, but I obviously was wrong. Anyway, Intel used to publish TDP numbers where “T” was equivalent to “Typical”, while AMD's “T” was equivalent to “Top” (in the sense of maximum). Has this changed? Are the S1200's 6W a Typical or a Top value?

    • TDP is the maximum amount of power the thing should ever draw. So if your TDP is 85W, it could be anywhere between 0 and 85W depending on whether it's powered on and what the workload is. I have a Sandy Bridge 35W TDP i3 that runs on ~12W most of the time.
      • by tzot ( 834456 )

        So your reply is “Yes, this has changed, and 6W is the maximum power that the S1200 SoC should draw.”

        Thank you.

        • Hmm, someone posted a link in response to me indicating that Intel says that TDP and the current draw are different. I know that TDP is used as an indication of how much cooling you need. I also know that you will not see 100% of the watts being converted to heat, so I could be wrong. I posted that based on my own testing with a watt meter when trying to build a very low power box. I did various tests of idle and max usage consumption and never saw the watts go above the TDP. For my 35W TDP processor I beli
      • by tzot ( 834456 )

        Intel still declare that their TDP is *not* maximum draw in their Measuring Processor Power: TDP vs ACP [intel.com] paper, so I am not sure whether you answered out of personal experience/knowledge or based on plain theory.

        • Well, I know the TDP has to do with the heat released by the processor. But I can only tell you that my watt meter suggests that the TDP is ~ what kind of load I see on the meter. Of course there are other peripherals drawing power as well, and the motherboard uses some itself. Also, you will not see 100% of the watts being converted to heat, so I suppose it's possible that the TDP would be somewhat lower than the actual draw. But I was specifically trying to create a low W system and had the meter hooked
