


Samsung May Start Making ARM Server Chips

angry tapir writes "Samsung's recent licensing of 64-bit processor designs from ARM suggests that the chip maker may expand from smartphones and tablets into the server market, analysts believe. Samsung last week licensed ARM's first 64-bit Cortex-A57 and Cortex-A53 processors, a sign the chip maker is preparing the groundwork to develop 64-bit chips for low-power servers, analysts said. The faster 64-bit processors will appear in servers, high-end smartphones and tablets, and offer better performance-per-watt than ARM's current 32-bit processors, which haven't been able to expand beyond embedded and mobile devices. The first servers with 64-bit ARM processors are expected to become available in 2014."
  • Re:Remember now (Score:5, Informative)

    by White Flame ( 1074973 ) on Thursday November 08, 2012 @04:18AM (#41916421)

    Recent 32-bit ARMs support LPAE, so you can have over 4GB of RAM no problem. Each app still runs in a 32-bit address space, which would probably still work fine for a mobile environment.
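
    A quick sketch of the per-process limit described above: LPAE widens the *physical* address bus (to 40 bits), but each process's *virtual* address space is still bounded by the pointer width, so on a 32-bit build a single app tops out at 4 GiB no matter how much RAM the kernel can see.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* On a 32-bit ARM build, pointers are 4 bytes, so one process
         * can address at most 2^32 bytes (4 GiB) of virtual memory.
         * LPAE lets the kernel manage more than 4 GiB of physical RAM,
         * but it does not widen these per-process pointers. */
        printf("pointer size: %zu bytes\n", sizeof(void *));
        printf("max virtual address: %ju\n", (uintmax_t)UINTPTR_MAX);
        return 0;
    }
    ```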

  • by cstdenis ( 1118589 ) on Thursday November 08, 2012 @04:45AM (#41916497)

    * DNS servers (if you aren't virtualizing stuff)
    * email servers (if your spam scanning is external)
    * some database servers (generally I/O bound, not CPU bound, though it of course depends on the nature of the queries)
    * simple web hosting (stuff like a CDN serving static files needs almost no CPU)
    * monitoring servers
    * Camera/surveillance servers (video processing is mostly done by dedicated chips on capture cards)

    Really, most servers are not CPU bound these days and would probably benefit more from many low-clocked cores than from a few high-clocked ones. There are exceptions, of course; that is why we have supercomputers at the other extreme.

  • by Anonymous Coward on Thursday November 08, 2012 @05:45AM (#41916735)

    There's no technical reason why an ARM chip of comparable performance to x86 could not be made. There's also no reason to believe such a beast would use significantly less power than an x86 chip, either. In order for the entire exercise to make any sense, they would have to target a niche between current ARM and x86. If they can keep the design sufficiently simple, they should at least be able to beat current ARM designs in performance and x86 in price. It is not clear that they can beat future Intel CPUs on power usage, especially since Intel's manufacturing process leads the industry by a significant margin.

  • Re:Remember now (Score:3, Informative)

    by Anonymous Coward on Thursday November 08, 2012 @06:08AM (#41916833)

    Umm, you're still processing a single item of data per cycle, but it's now 64 bits long instead of 32. Performance increases if you can process those 64 bits as a vector of 2x32/4x16/8x8-bit values, which you could already do with NEON on ARM or MMX/SSE on x86.

    Extra addressable space doesn't matter for most tasks, though some, like a DBMS, do benefit from it.

    The biggest performance increase from the new 64-bit architectures, for both x86_64 and ARM64, is the bigger register set: you don't need as many memory accesses, because you can keep most local variables in registers, or you can unroll loops further, loading a bigger batch of data into more registers than before.

    On the cons side, you get higher pressure on memory bandwidth and caches, and lower instruction-stream density.

    All in all, you get somewhat higher memory requirements, almost unchanged performance on most tasks, and a nice performance increase on some, like compilers, VMs, databases and media processing.
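
    The register-pressure point above can be sketched in plain C (the function name `sum_unrolled` is illustrative). With the 16+ general-purpose registers of AArch64 or x86_64, the four partial sums below can all live in registers, so the compiler doesn't have to spill them to the stack the way it might on a register-starved 32-bit ISA:

    ```c
    #include <stddef.h>

    /* Sum an array with a 4-way unrolled loop. Four independent
     * accumulators break the dependency chain and can each stay in
     * a register on a 64-bit target with a large register file. */
    long sum_unrolled(const int *a, size_t n) {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {   /* four elements per iteration */
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; i++)             /* leftover tail */
            s0 += a[i];
        return s0 + s1 + s2 + s3;
    }
    ```

    Whether this wins in practice depends on the compiler and target; the point is only that more architectural registers give the optimizer room to keep such unrolled state off the stack.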

  • by Anonymous Coward on Thursday November 08, 2012 @08:47AM (#41917397)

    Pedantic: yes, the GP probably meant word size, but a byte is not by definition 8 bits. Of course you're unlikely to encounter different byte sizes these days, but still. See http://en.wikipedia.org/wiki/Byte [wikipedia.org]

  • by ledow ( 319597 ) on Thursday November 08, 2012 @10:28AM (#41918031) Homepage

    Because "we require" rarely means "we won't touch you with a bargepole unless you have". It's there to weed out the chaff who think they're not good enough or important enough to apply.

    I've applied for numerous jobs that "required" things like MCSEs, A+, and first-class degrees, and I clearly stated that I didn't have them, but that what I did have was X amount of experience doing Y.

    The bright employers (i.e. the only type you *want* to work for anyway) pick it up and say "Oh, right, he's probably spent so long DOING the job, he never got around to paying the certification tax on a bit of paper to say he could do it." or "He was out earning a wage in this sector while our own guys were still in university playing with microcontrollers". The bad ones, of course, shove it off and it gets lost in the HR department because it "doesn't meet criteria".

    I've also advised people to ignore this sort of thing in the past, so long as you *CAN* put forward a reasonable case of being suitable for the job anyway. It's never perfect (there is no magic way to get a job), but it's helped a lot of them to get positions they didn't think they were good enough for. How many of the industry's founding fathers and visionaries had PhDs or Master's degrees? Nowhere near all, and they still got there.

    Don't blatantly ignore high requirements; just substitute what you have instead (and, if you like, explain it in your covering letter: "Although I notice that the job requirements include X, I feel that my extensive experience in position Y performing task Z should be sufficient to prove that I'm capable of performing to the standards required") if you think you have a shot at doing the JOB.

    Application processes are mainly about weeding out the vast number of applicants, but secondarily they are about YOU weeding out the vast number of jobs available. If your employer can't see that you can do the job just because you lack certain desired letters after your name, you probably don't want to work there anyway (and they probably will ignore your application, but the chances that they veto you for future posts because of your politely worded ambition are vanishingly small... and again, those are people you just don't want to work for anyway).

    That may be *why* they bothered emailing everyone: they aren't just interested in PhDs, they want a high standard of applicant. One who has those qualifications, or one who has the skills and knows how to get through a job application process by playing to them.

    The worst that happens is they say no and keep your information on file for future reference. The chances it will prejudice any future applications (a concern I've heard from the people I've given personal advice to) are basically zero. Do you really think HR departments keep years and years' worth of applications, when they are already TRYING to narrow thousands down to just a few candidates, and somehow check them against every post? No.

    And, you never know, they might just say "Well, actually, you're not right for this particular position, but we are just about to advertise for X as well, and that looks more suited to you."

    In job-hunting, there's nothing wrong with being ambitious, so long as you're honest. And even if they offer it to you and you don't like the idea of working in a crowd full of bitter PhDs, or it's not better than your current job, again: you can say "no" just as easily as they can.
