Intel Hardware

Intel Details Eight-Core Poulson Itanium Processor

MojoKid writes "Intel has unveiled details of their new Itanium 9500 family, codenamed Poulson, and the new CPU appears to be the most significant refresh Intel has ever done to the Itanium architecture. Moving from 65nm to 32nm technology substantially reduces power consumption and increases clock speeds, but Intel has also overhauled virtually every aspect of the CPU. Poulson can issue 11 instructions per cycle compared to the previous-generation Itanium's six. It adds execution units and rebalances them to favor server workloads over HPC and workstation capabilities. Its multi-threading has been overhauled, and it uses faster QPI links between processor sockets. The L3 cache design has also changed: previous Itanium 9300 processors had a dedicated L3 cache for each core, whereas Poulson has a unified L3 attached to all of its cores by a common ring bus. All told, the new architecture is claimed to offer more than twice the performance of the previous-generation Itanium."
This discussion has been archived. No new comments can be posted.

  • Why? (Score:4, Interesting)

    by PCK ( 4192 ) on Friday November 09, 2012 @12:06PM (#41933201) Homepage

    I was under the impression that Itanium was all but dead. I'm guessing Intel must be contract-bound to bring out new versions.

    • by Anonymous Coward

      If that were the case, why bother making performance improvements inside the core? Why not just move it to 32nm and double or triple the number of cores per socket?

      Though I agree, this was likely a significant loss on Intel's books.

    • Re:Why? (Score:5, Funny)

      by Guignol ( 159087 ) on Friday November 09, 2012 @12:26PM (#41933395)
      I understand
      In death, an agent of project Itanium has a name
      His name is Robert Poulson
      • by Anonymous Coward

        I came for this joke. /. does not disappoint.

    • Yeah, I'm sure there was a big argument, with Oracle threatening to sue, when Intel said they were dropping the Itanium architecture several months ago.

    • Comment removed based on user account deletion
      • The recommended ASP is ~$4000/tray. Anyone know how many Itaniums there are in a tray? Multiply the unit price by 200k, and you'll get the cash that Intel would be making on those.

        But honestly, there are some markets Intel should attack w/ this CPU. For starters, supercomputers. The platform from Cray discussed yesterday - that one looks just perfect for a whole bunch of these. There are quite a few supercomputer projects in a number of countries, and Intel should target the Itanium at all of them.

        • by turgid ( 580780 )

          But honestly, there are some markets Intel should attack w/ this CPU. For starters, supercomputers. The platform from Cray discussed yesterday - that one looks just perfect for a whole bunch of these. There are quite a few supercomputer projects in a number of countries, and Intel should target the Itanium at all of them. That alone would have a bunch of them flying off the shelves.

          Er, no. Itanic is just an over-grown, over-engineered DSP. The GPUs that they use as co-processors in supercomputers these d

      • by g00ey ( 1494205 )
        Who is to judge whether developing and marketing the Itanium is worthwhile other than Intel themselves? Perhaps the development and marketing of these chips will give them valuable information that is useful for the development of future generation processors.

        The EPIC architecture (which is looked upon as a continuation of the development of the VLIW architecture) is significantly different from other, more widespread architectures, and perhaps the performance issues are there because people have not yet f
        • RISC was actually the optimal CPU architecture. CISC had a lot of things, such as variable instruction lengths, different addressing modes and so on, that complicated the hardware. RISC simplified some of that by reducing the number of instructions that were needed, since all the programming was done in higher-level languages like C, but still kept techniques like branch prediction, speculative execution and register renaming in the CPU itself. As a result, RISC never had problems maintaining compatibil

          • by g00ey ( 1494205 )
            Perhaps breaking compatibility between CPU generations is not a weakness of the VLIW/EPIC architecture per se, but rather a weakness in how people look at software and software distribution. First of all, why should software be distributed as pre-compiled binaries? A much better way would be to distribute the sources while maintaining a compiler/installation environment that automatically handles the software. This environment would then automatically optimize the software for the specific computer system
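            A minimal, hypothetical Python sketch of the source-distribution idea in the comment above: the installer compiles the shipped sources on the target machine with CPU-specific optimization (roughly what Gentoo-style systems do). The file name app.c, the output name, and the choice of gcc with -march=native are illustrative assumptions, not anything stated in the thread.

                import shutil
                import subprocess

                def build_for_this_machine(src="app.c", out="app"):
                    """Compile distributed source with optimizations tuned to the local CPU.

                    -march=native asks gcc to target whatever ISA extensions this host
                    supports, which is the per-machine optimization step described above.
                    """
                    cc = shutil.which("gcc") or shutil.which("cc")
                    if cc is None:
                        raise RuntimeError("no C compiler found on this machine")
                    subprocess.run([cc, "-O2", "-march=native", "-o", out, src], check=True)

                if __name__ == "__main__":
                    build_for_this_machine()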
            • Another approach would be to add an abstraction layer between the hardware and software very much like what is done with virtualization, Java, ZFS, LVM, DirectX, Crossbow et al. That would make the software more independent of the underlying hardware...

              Isn't that basically how CISC works nowadays?

              • by g00ey ( 1494205 )
                The problem many CISC CPUs (such as x86-based CPUs) are facing today is that they are encumbered by legacy instruction sets in order to maintain backwards compatibility. I understand that there is an abstraction layer in many x86 CPUs that emulates some of these legacy instructions at the hardware level. The downside is that the die space required for this circuitry could be used for something else that would improve performance instead of maintaining this backwards compatibility.

                As an abs
            • Comment removed based on user account deletion
              • by g00ey ( 1494205 )
                Nowhere did I state that it should be free. If you remove the "F" from the FOSS you are mentioning, then you are talking my language. While I'm a proponent of FOSS, I don't think all open source software necessarily should be free. If people are concerned about this openness, then perhaps some kind of encryption, or another way to obfuscate the source code so that it would be understandable only to the compiler, would be in order. However, in the end I don't think people would want to obfuscate the code before dis
    • by rot26 ( 240034 )
      Yep.

      It's GOOD to be an important supplier to a black project with a black budget. *cough* NSA *cough*

      There will be trainloads sent to Bluffdale, Utah, in boxes labeled as containing Donny and Marie CDs. I imagine much if not most of the development was done by SAIC contractors with TS clearances. There will no doubt be a few thousand crippled versions marketed through the normal channels.
  • by gtirloni ( 1531285 ) on Friday November 09, 2012 @12:15PM (#41933283)
    The next upgrade will surely make things fly!
  • by davecb ( 6526 ) <davecb@spamcop.net> on Friday November 09, 2012 @12:15PM (#41933287) Homepage Journal

    My leaky/biased memory says these machines were a speed disappointment. Is this doubling going to make them faster or slower than an x86?

    --dave

    • At least according to Wikipedia, Itanium's performance was disappointing compared to other RISC architectures ten years ago:

      https://en.wikipedia.org/wiki/Itanium#Itanium_.28Merced.29:_2001 [wikipedia.org]

      One of the traps Intel tends to fall into, at least according to someone I know who worked there during the Itanium "hype days," is that the architecture team does not communicate with the compiler team. Both Itanium and x86 fall into this trap, although x86 is far more illustrative of the problem (most compil
      • Re: (Score:3, Interesting)

        by Anonymous Coward

        Perhaps Intel fell into the trap of not communicating with the compiler team, but HP certainly did not.

        The development of the EPIC concept at HP started as early as 1992, as the intended successor to the HP-PA architecture. Look up, e.g., the many joint research papers of CPU-architecture and compiler engineers for PlayDoh (or see e.g. http://www.hpl.hp.com/techreports/93/HPL-93-80.html for an intro to PlayDoh).

        The compiler technology to do well for EPIC architectures was mostly available by the time IA64 launche

        • by fatphil ( 181876 )
          I understand that for "compatibility" they squeezed a little 386-compatible core in the corner of the chip. It was also my understanding that some people benchmarked the chip by feeding it x86 code, and saw a 10-year-old core struggle with the load. This was not good publicity for their enterprise flagship.

          Might all be urban legend, or misremembered, or I might be on drugs.
        • The "typical hacker" didn't have access to IA64 and the major companies supporting IA64 didn't invest in Linux-for-IA64

          This has nothing to do with availability, or even cost per unit. These machines were plagued by the same problem the Alpha servers had... they weighed in excess of 200 lbs. Even as a hobby, for a free machine, I'm not paying shipping on that bastard
          • Just as the AlphaServer had desktop equivalents, so did the Itanium - though those were discontinued.

        • In the 90s, HP had acquired two VLIW companies - Multiflow and Cydrome - and already had a lead in VLIW compiler technology. Once they made the alliance w/ Intel, they had the grand vision of replacing both the x86 and PA-RISC lines w/ the successor, Merced. As I pointed out above, leading RISC CPUs were already adopting MIMD techniques intrinsic to VLIW, while moving the dynamic analysis from the CPU to the compiler didn't do much for the CPU real estate, since they weren't using much to begin w/.

          Yeah, Intel

    • by Anonymous Coward

      The current Xeon E5-2670 (8-core, 2.6GHz, 2012) can do roughly 4x the performance of the previous Itanium 9350 chip (4-core, 1.73GHz, 2010), according to spec.org CPU2006 benchmark results. I think Itaniums do slightly better with FP; the Xeons win with INT.

      But that's per chip, and the Itanium systems go up to 8 chips, so a single 8-socket Itanium system was getting roughly the same performance (in 2010) as a 2-socket E5-2670 system in 2012. I don't think the Xeons go up to 8 sockets.
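      A minimal sketch of the per-socket arithmetic in the comment above, assuming the poster's rough figures (one Xeon E5-2670 at about 4x one Itanium 9350 per chip) and naively linear scaling with socket count; these are the poster's estimates, not official SPEC results.

          # Relative per-chip performance, per the comment: Itanium 9350 = 1.0, E5-2670 ~= 4x.
          itanium_per_chip = 1.0
          xeon_per_chip = 4.0

          # Scale by the socket counts mentioned above, assuming (naively) linear scaling.
          itanium_8_socket = 8 * itanium_per_chip   # 8-socket Itanium system (2010)
          xeon_2_socket = 2 * xeon_per_chip         # 2-socket E5-2670 system (2012)

          print(itanium_8_socket, xeon_2_socket)    # 8.0 8.0 -> roughly the same, as stated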

      • I don't think the Xeons go up to 8 sockets.

        Intel do have Xeon processors that support 8-socket systems, and AFAICT at least HP and Supermicro make 8-socket Xeon solutions (I think HP sell them as fully-built servers while Supermicro sell them as a "barebones" system to which you add processors and drives yourself).

        However, AFAICT the processors that support 8-socket setups are both underwhelming (high core counts but low clock speeds, and still on Nehalem technology) and expensive compared to those for 2-socket systems.

    • My leaky/biased memory says these machines were a speed disappointment. Is this doubling going to make them faster or slower than an x86?

      --dave

      The big issue, IIRC, is that Itanium was dead slow at x86 emulation in the first few rounds. Intel's idea initially was to emulate x86 in software so that Intel wouldn't lose the x86 market and could switch everyone over. They later went back, removed the software emulation, and put x86 hardware on the die to do the work in order to make it faster.

      In native mode, I've never heard a complaint about Itanium and speed - only about its x86 support mode.

      • by davecb ( 6526 )

        I looked at some older TPC results, and saw the previous Itanium delivering 4/7 the speed of the T5440, one of Sun's oldest threads-not-clock-speed boxes. Compared to IBM Power 7, Itanium delivered 4/10, so the doubling should bring it up to 80% of the IBM.

        Not to be sneezed at! Nevertheless, not competitive with Power, Fujitsu (Sun) M series or even the new Sun T4 boxes.

        --dave
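        A quick sketch of the ratios above, assuming the claimed doubling is taken as exactly 2x and that the poster's TPC-derived fractions hold; this is just the arithmetic behind the "80% of the IBM" remark, not new benchmark data.

            from fractions import Fraction

            itanium_vs_t5440 = Fraction(4, 7)    # previous Itanium vs Sun T5440, per the comment
            itanium_vs_power7 = Fraction(4, 10)  # previous Itanium vs IBM POWER7, per the comment

            doubling = 2  # "more than twice the performance" taken here as exactly 2x

            print(float(doubling * itanium_vs_power7))  # 0.8  -> the "80% of the IBM" figure
            print(float(doubling * itanium_vs_t5440))   # ~1.14 -> would edge past the T5440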

          • I looked at some older TPC results, and saw the previous Itanium delivering 4/7 the speed of the T5440, one of Sun's oldest threads-not-clock-speed boxes. Compared to IBM Power 7, Itanium delivered 4/10, so the doubling should bring it up to 80% of the IBM.

          Not to be sneezed at! Nevertheless, not competitive with Power, Fujitsu (Sun) M series or even the new Sun T4 boxes.

          One question this begs: were those TPC tests on Itanium run with software optimized well enough for Itanium? Or was there a bottleneck other than the processor?

          One of the early issues with Itanium was that it was hard for the optimizers to get right. I think they solved that, but I don't know when.

          And of course it is hard to make apples-to-apples comparisons between architectures unless you have a reference system where the only thing you change is the processor, and verify that the code running on top of

          • by davecb ( 6526 )
            They're TPC results, so they are from the vendors, and optimized up the gazoo (:-)) --dave
  • by Anonymous Coward

    From Intel's view as an innovation company, it kind of makes sense to try out new stuff on a platform that will not matter that much.
    And since they know HP will buy them, Intel knows they will be field-tested.

  • Thank you HP? (Score:4, Interesting)

    by jandrese ( 485 ) <kensama@vt.edu> on Friday November 09, 2012 @12:36PM (#41933515) Homepage Journal
    I guess all of that money that HP has been dumping into Itanium development is finally paying off. Everybody else assumed Intel was just going to discontinue the product for obvious reasons, but here they are releasing a major upgrade to the core architecture. It still makes me wonder what HP sees in Itanium that makes them so gung ho about it though. Is it the vendor lock-in? Is this upgrade enough to finally push Itanium past x86-based processors in some performance metric?
    • by Desler ( 1608317 )

      It's because they spent a shit ton of money porting software to it. They don't want to have to incur that cost again to port away.

      • Still, who is going to buy it now?

        Remember the Alpha? Slashdot ran on Alphas for 5 years. They had a new version out and it didn't matter. HP wanted Itanium and purposely made sure people wouldn't buy the Alpha, crippling that product line in favor of the inferior Itanium. Makes you wonder why they bought it?

        After Windows 2000 dropped support in RC 3, it didn't matter. Who in their right mind would invest in a dead platform?

        This new chip could be 20x faster than a Xeon and use 1/10 of the power! No one wants to invest i

        • OpenVMS and NonStop effectively only run on Alpha and a surprising number of companies have mission-critical software that works on one of these two platforms.
          • I wonder what management is going to do, or is already doing? I expect they are already underway replacing them. I doubt HP is porting them to x86 or ARM, as it may be too late for those that are retiring these in favor of Win32 or Linux equivalents of the applications that do the same tasks. It is not like you can get an emulator for these, but these are systems I would not want to invest a penny into anymore, as it would be a penny lost 3 years down the road when Intel stops production and I can no longer even get moth

            • If you think some brand-name beige Linux box is going to replace a NonStop system, do yourself a favour and come out of mom's basement.

              Nonstop actually means what they say.

              No. Stops.
              Period.

          • NonStop on MIPS, not Alpha
            • Uh, I meant Itanium. Freud got me again - I'm still bitter about it killing a superior architecture through employing better sales drones.
      • Hell, on the OpenVMS side, it wouldn't shock me a bit to find out that they don't even HAVE a team any more that's capable of porting it to other architectures. They likely say they do, to fulfill government contracts that specify that OpenVMS can't be orphaned, but I wonder what the reality is.

    • It still makes me wonder what HP sees in Itanium that makes them so gung ho about it though

      The same thing Apple saw in MIPS
    • by linatux ( 63153 )

      If HP ditch Itanium, they effectively ditch HP-UX. They could port (and may already have ported) HP-UX to x86, but why would anyone pay top dollar for HP-UX on x86 - they would just use Linux instead. Without HP-UX, they don't have a tier-1 platform & will be drowned by Red Hat & SuSE.

      Meanwhile, Intel is busy building the RAS features of Itanium into x86 - as these get implemented in Linux, HP-UX will become irrelevant anyway.

      IBM & Power have a little more headroom - it will be interesting to see how long that lasts.

  • We already switched. ... OK, a former customer I worked with already switched.

    Thank you Oracle for convincing us that it is dead.

    No one will touch it with a 10-foot pole. I hope HP wins the lawsuit against them and that Intel also sues Oracle for damages. When Oracle violated that contract, it caused a lot of hurt for those who had invested so much in Itanium.

    Now it doesn't matter as no one will touch it.

  • by eap ( 91469 ) on Friday November 09, 2012 @01:23PM (#41934003) Journal

    From TFA:

    Poulson can issue 11 instructions per cycle compared to Tukwila's six.

    These go to eleven.

  • by vinn ( 4370 ) on Friday November 09, 2012 @01:56PM (#41934397) Homepage Journal

    You can still buy Itanium chips? Holy crap. Are they found on the same aisle of the department store as the iceboxes and cotton gins?

  • This is just Intel putting on a show of competing with themselves so that they don't get accused of monopolistic behavior... :p

  • This is an announcement for a 32nm Itanium. Intel has been shipping 22nm x86 since spring.

    • Intel has been shipping 22nm x86 since spring.

      It seems that in the Intel x86 world, the higher you move up the product line, the older the technology gets.

      Intel's x86 processors right now are best grouped by the sockets they use. There are basically three "current" sockets (that is, not yet replaced by a newer socket).

      LGA1155 is the mainstream desktop and low-end single-socket server socket. This is the only socket for which 22nm parts are currently available.
      LGA1356 is intended for low-end dual-socket systems, but I get the impression it didn't really catch

    • Probably want to fill up fab utilization.
  • If it hadn't been for AMD's 64-bit extensions, we'd all be running Itanium servers right now. AMD forced Intel to release a 64-bit x86. If AMD hadn't, all of the effort that is being put into Intel's current 64-bit chips would have gone into Itanium, and it would be a very strong platform. The alternative, PAE, sucked.

    • by Desler ( 1608317 )

      Well, as long as you ignore that all the legacy x86 software still running today wasn't going to be ported to Itanium. People would have just stuck with x86 rather than spending billions on porting and re-buying working software.

    • It was easy to criticise Itanium at the time, in comparison to Alpha, or PowerPC too. If we'd somehow all been forced to rewrite all of our legacy x86 code, either of these would have been a better choice. In fact, emulating x86 on PowerPC is a lot easier than on Itanium, so it would have been a more natural path if Intel had managed to kill x86. Lucky for them, they failed...
  • https://www.youtube.com/watch?v=HLgQMtquS6Y
  • I remember that in the "Hyperion" space opera, there is a "Poulsen" anti-aging treatment. It has only one drawback: repeated applications make the beneficiary's face glow ever bluer. I wonder about these ones...
  • Sounds like the Itanic all right.

    They had one at LinuxExpo once, back in the day, allegedly running DeadRat, but we couldn't see it because it had overheated and they took it away.

  • its name was Itanic Poulson... its name was...
