Intel Hardware

Intel's Knights Landing — 72 Cores, 3 Teraflops

New submitter asliarun writes "David Kanter of Realworldtech recently posted his take on Intel's upcoming Knights Landing chip. The technical specs are massive, showing Intel's new-found focus on throughput processing (and possibly graphics). 72 Silvermont cores with beefy FP and vector units, mesh fabric with tile based architecture, DDR4 support with a 384-bit memory controller, QPI connectivity instead of PCIe, and 16GB on-package eDRAM (yes, 16GB). All this should ensure throughput of 3 teraflop/s double precision. Many of the architectural elements would also be the same as Intel's future CPU chips — so this is also a peek into Intel's vision of the future. Will Intel use this as a platform to compete with nVidia and AMD/ATI on graphics? Or will this be another Larrabee? Or just an exotic HPC product like Knights Corner?"
This discussion has been archived. No new comments can be posted.


  • by Frosty Piss ( 770223 ) * on Saturday January 04, 2014 @07:20PM (#45867507)

    Because you can never have too many cores that you aren't using most of the time.

    Ask the NSA, they might have a (SECRET) opinion on that.

  • by Anonymous Coward on Saturday January 04, 2014 @07:23PM (#45867523)

    Yes, it's too hard. The future is in concurrency. The actor model will probably take off since it's easy to pick up and use.
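A minimal sketch of the actor model the comment above refers to — one thread per actor draining a private mailbox queue. The names here (`Actor`, `Counter`) are illustrative only, standing in for a real actor framework:

```python
# Actor-model sketch: each actor owns a thread and a mailbox queue;
# state is touched only by the actor's own thread, so no locks are needed.
import threading
import queue

class Actor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill shuts the actor down
                break
            self.receive(msg)

class Counter(Actor):
    def __init__(self):
        self.count = 0               # set state before the thread starts
        super().__init__()

    def receive(self, msg):
        self.count += msg

c = Counter()
for _ in range(10):
    c.send(1)
c.send(None)                          # stop the actor
c.thread.join()
print(c.count)                        # 10
```

Because the mailbox serializes all access to `count`, many senders can fire messages concurrently without data races — which is why the model scales naturally to many cores.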

  • by icebike ( 68054 ) on Saturday January 04, 2014 @07:35PM (#45867587)

    Because you can never have too many cores that you aren't using most of the time.

    How about more speed? Or is that too hard?

    Pretty sure it wasn't meant for you (or me).

  • by H0p313ss ( 811249 ) on Saturday January 04, 2014 @07:49PM (#45867649)

    Because you can never have too many cores that you aren't using most of the time.

    How about more speed? Or is that too hard?

    Pretty sure it wasn't meant for you (or me).

    However, for servers, including hypervisors, it would be very interesting. There are lots of client/server products that scale better with more cores.

  • Unobtainium (Score:3, Insightful)

    by Anonymous Coward on Saturday January 04, 2014 @08:12PM (#45867745)

    This is another one of those Intel things made from the most rare element in the universe: unobtainium. You can't get it here. You can't get it there either. At one point I would have argued otherwise, but no. CUDA cores I can get. This crap I can't get.

    It's just like the Cell Broadband Engine. Remember that? If you bought a PS3, then it had a (slightly crippled) one of those in it. Except that it had no branch prediction. And one of the main cores was disabled. And you couldn't do anything with the integrated graphics. And if you wanted to actually use the co-processor functions, you had to rewrite your applications. And you needed to let IBM drill into your teeth and then do a rectal probe before you could get any of the software to make it work. And it only had 256MB of RAM. And you couldn't upgrade or expand that.

    With Intel's new wonder, we get the promise of 72 cores. If you have a dual-Xeon system. And give Intel a million dollars. And you sign a bunch of papers letting them hook up the high-voltage rectal probes. Or you could buy a Kepler NVIDIA card which you can install into the system you already own, and it costs about the same as a half-decent monitor. And NVIDIA's software is publicly downloadable. So is this useful to me or 99.999% of the people on /.? No. It's news for nerds, but only four guys can afford it: Bill G., Mark Z., Larry P. and Sergey B.

  • by tepples ( 727027 ) <tepples.gmail@com> on Saturday January 04, 2014 @08:44PM (#45867869) Homepage Journal

    I think you'd be surprised how many real-world, day-to-day tasks can be and are parallelized: [...] searching

    I thought searching a large collection of documents was disk-bound, and traversing an index was an inherently serial process. Or what parallel data structure for searching did I miss?

    rendering web pages

    I don't see how rendering a web page can be fully parallelized. Decoding images, yes. Compositing, yes. Parsing and reflow, no. The size of one box affects every box below it, especially when float: is involved. And JavaScript is still single-threaded unless a script is 1. being served from a web server (Chrome doesn't support Web Workers in file:// for security reasons), 2. running in a browser other than IE on XP, IE on Vista, or Android Browser <= 4.3 (which don't support Web Workers at all), and 3. not accessing the DOM.
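A toy illustration of the split described above: image decoding is an independent per-image map, while block layout is a serial fold in which each box's position depends on the box before it. All the data and names here are made up:

```python
# Rendering asymmetry sketch: decode in parallel, lay out serially.
from concurrent.futures import ThreadPoolExecutor

images = [b"...", b"...", b"..."]   # stand-ins for encoded image blobs
boxes = [20, 35, 15, 50]            # content heights of stacked boxes, in px

def decode(blob):
    return len(blob)                # placeholder for real image decoding

with ThreadPoolExecutor() as pool:
    decoded = list(pool.map(decode, images))   # embarrassingly parallel

# Layout: each box's y-offset depends on the cumulative height of
# everything above it -> an inherently ordered loop.
y = 0
offsets = []
for h in boxes:
    offsets.append(y)
    y += h

print(offsets)                      # [0, 20, 55, 70]
```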

    compiling

    True, each translation unit can be compiled in parallel if you choose not to enable whole-program optimization. But I don't see how whole-program optimization can be done in parallel.
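The per-translation-unit parallelism mentioned above can be sketched with Python's own bytecode compiler standing in for a C compiler — each unit compiles independently, and only the final link step would be serial. File names and contents here are invented for illustration:

```python
# Parallel "compilation": each source file is an independent unit,
# so a thread pool can compile all of them at once.
import os
import tempfile
import py_compile
from concurrent.futures import ThreadPoolExecutor

# Create a few independent source files ("translation units").
tmp = tempfile.mkdtemp()
units = []
for i in range(4):
    path = os.path.join(tmp, f"unit{i}.py")
    with open(path, "w") as f:
        f.write(f"VALUE = {i}\n")
    units.append(path)

with ThreadPoolExecutor() as pool:
    # py_compile.compile returns the path of the compiled file on success.
    outputs = list(pool.map(py_compile.compile, units))

print(all(o and os.path.exists(o) for o in outputs))   # True
```

This is the same model as `make -j N`: independent compile jobs fan out across cores, which is exactly the kind of throughput workload a many-core chip is built for — until a whole-program optimization pass forces everything back into one job.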
