Intel Unveils Meteor Lake Architecture (windowscentral.com)

Intel has taken the wraps off its forthcoming next-gen Meteor Lake processors following its successful 12th (Alder Lake) and 13th Gen (Raptor Lake) processors and their hybrid E- and P-core design. WindowsCentral: The first chip built on the Intel 4 process node with Foveros 3D packaging, Meteor Lake is what Intel calls its "biggest architectural shift in 40 years," one that will "lay the foundation for innovations for the PC," as noted by Tim Wilson, VP, Design and Engineering Group and GM, SoC Design at Intel. The move to Intel 4 is part of Intel's long-term goal of "5 nodes in 4 years." Previous naming would suggest the chips would be called Intel 14th Gen, but Intel is moving away from its older naming scheme, and some reports have suggested Meteor Lake may mark a reboot of the generation numbering. Current rumors suggest Intel 14th Gen will simply be a refresh of Raptor Lake, although Meteor Lake may play a part in that lineup for laptops.

Meteor Lake processors are expected to ship in late 2023 or early 2024 in new laptops with thinner and lighter designs, better cooling, and much better battery life. The significant change in Meteor Lake is what Intel calls disaggregation: breaking the core components of the SoC down into separate 'tiles.' Meteor Lake features four tiles (a rough data-model sketch follows the list):
Compute Tile: New E-core and P-core microarchitecture, built on Intel 4 process technology
SoC Tile: Low power island E-cores, NPU, Wi-Fi 6E/7, native HDMI 2.1 and 8K HDR AV1 support
Graphics Tile: Integrated Intel Arc architecture
IO Tile: Thunderbolt 4 (and presumably Thunderbolt 5) and PCIe Gen5
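
As a reading aid, here is a minimal, purely illustrative Python sketch that models the disaggregated layout above as plain data. The tile names and components come straight from the list; the data structure itself, and the "unknown" node labels for everything except the Compute tile, are assumptions for illustration, not anything Intel publishes.

from dataclasses import dataclass

@dataclass(frozen=True)
class Tile:
    """One physical die ("tile") stacked onto the base die via Foveros packaging."""
    name: str
    process: str        # manufacturing node; only the Compute tile's node is stated above
    components: tuple

# Illustrative model of Meteor Lake's four-tile split, per the list above.
METEOR_LAKE = (
    Tile("Compute",  "Intel 4", ("P-cores", "E-cores")),
    Tile("SoC",      "unknown", ("Low-power island E-cores", "NPU",
                                 "Wi-Fi 6E/7", "HDMI 2.1", "8K HDR AV1 support")),
    Tile("Graphics", "unknown", ("Integrated Intel Arc graphics",)),
    Tile("IO",       "unknown", ("Thunderbolt 4", "PCIe Gen5")),
)

for tile in METEOR_LAKE:
    print(f"{tile.name} tile ({tile.process}): {', '.join(tile.components)}")

The point of splitting the SoC this way is that each tile can be built on whichever process suits it best, which is the same argument made about chiplets and MCMs in the comments below.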

This discussion has been archived. No new comments can be posted.

  • "Meteor Lake processors are expected to ship in late 2023 or early 2024 in new laptops with thinner and lighter designs, better cooling, and much better battery life"

    Do they mean that regardless of the chip's features, laptops must have better cooling and battery life? Or do they mean that this new architecture will facilitate thinner laptops that don't need robust levels of cooling seen in current designs, leading to better battery life? I assume it's the latter, but it's actually hard to tell from t
    • Re: (Score:1, Informative)

      by Anonymous Coward

      You're the type of person who holds up those damn boring company meetings because you have to ask these questions at the end. You really can't figure out that statement? It's a fucking marketing promo.

      • You really can't figure out that statement?

        To be fair, nobody's expected to figure out that statement. The actual nugget of true information is only hinted at with marketing speak. It may as well have been written with ChatGPT if it's going to be verbose and not actually mean anything.

        • by Askmum ( 1038780 )
          Well, it's either "the biggest architectural shift in 40 years" or "simply a refresh of Raptor Lake".
          What would it be?
    • Since they don't have a lot of Meteor Lake manufacturing capacity, I wouldn't be surprised if only Evo-certified laptops get them, which at least requires battery-life testing.

    • by SirSlud ( 67381 )

      I know asking questions is designed to make you feel smart, but instead it actually makes you sound pretty stupid.

  • by TheRealMindChild ( 743925 ) on Tuesday September 19, 2023 @12:19PM (#63860682) Homepage Journal

    You guys remember Itanium?

    • by Junta ( 36770 ) on Tuesday September 19, 2023 @12:28PM (#63860712)

      Not to mention 40 years covers x86 going 32 bit, and 64 bit. Or going from external PCIe and memory controllers to integrated ones, and from single core to multiple cores....

      Yeah, going for 40 years in the hyperbole is a bit much.

      • by linzeal ( 197905 )

        The scaling of this new arch in terms of the number of onboard processors is going to be insane, though. We might see 32-core processors with 16 big boys and 16 efficiency cores become the norm in a few years. That is huge.

        Intel's new server lineup is rumored to be verging on 1000 with 500+ cores already revealed.

        • by Junta ( 36770 )

          It may be huge, but they've had lots of huge in the last 40 years, and with a subjective thing like 'huge architecture change', 40 years covers a lot of crazy ground for them.

          15 years would have been a better number. That would have got them past Nehalem, and it's probably a much more reasonable bar to claim most dramatic if Nehalem is off the table.

          I suppose we get to see whether it lives up to Intel's hype or not. They've had 10 years of not quite meeting their promises of unambiguous leadership (they've

      • Intel has been run by Marketing since the 486SX and that's been the root cause of all their failures.

        An engineering-driven company will always toast them.

        • by Junta ( 36770 )

          Sure, Itanium and Netburst represented huge missteps, and their desktop series has had another rough patch since about 2017 (AMD released Zen to a stagnant Intel portfolio, and then in 2018 AMD released on a better manufacturing process than Intel, and that gap has never been closed since), but they've enjoyed at least some leadership over the years since 486.

      • Not to mention 40 years covers x86 going 32 bit, and 64 bit.

        Well to be fair that wasn't Intel's biggest architectural shift. That was Intel's biggest architectural catch-up sprint with AMD.

    • Itanium did hang around longer than most people thought. Besides some niche clusters using it, the biggest install base had to be OpenVMS. The odd part is you'd think Itanium boxes would be dirt cheap on the secondary market, but they aren't. They were never cheap used. I always wanted one for my collection.

    • by Z80a ( 971949 )

      It was a good idea on paper.
      Increasing the number of parallel instructions the CPU can execute the "normal way" increases the size of the chip's logic by a LOT.
      That logic has to somehow find a way to make x instructions work in parallel, by all sorts of weird means like register renaming, fetching instructions further ahead, etc. I read somewhere that every extra parallel instruction doubles the size of the logic.
      The whole idea of the Itanium was to just make the compiler do this job instead of havin
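
      To make that concrete, here is a small, hypothetical Python sketch of the compiler-side approach being described: a greedy scheduler that packs independent instructions into fixed-width bundles (the VLIW/EPIC idea), so the hardware no longer needs register renaming and reordering logic to find the parallelism. The instruction format and the three-wide bundle are invented for illustration and do not reflect the real IA-64 encoding.

      # Hypothetical 3-wide "VLIW" bundler: the compiler, not the CPU, groups
      # instructions with no data dependencies so they can issue together.
      from dataclasses import dataclass

      @dataclass
      class Instr:
          op: str
          dst: str = ""            # register written ("" if none)
          srcs: tuple = ()         # registers read

      def bundle(instrs, width=3):
          bundles, current = [], []
          written, read = set(), set()    # registers touched by the current bundle
          for ins in instrs:
              raw = any(s in written for s in ins.srcs)           # read-after-write hazard
              waw_war = bool(ins.dst) and (ins.dst in written or ins.dst in read)
              if len(current) == width or raw or waw_war:
                  bundles.append(current)
                  current, written, read = [], set(), set()
              current.append(ins)
              if ins.dst:
                  written.add(ins.dst)
              read.update(ins.srcs)
          if current:
              bundles.append(current)
          return bundles

      prog = [
          Instr("load", "r1", ("a",)),
          Instr("load", "r2", ("b",)),
          Instr("add",  "r3", ("r1", "r2")),   # depends on r1/r2 -> new bundle
          Instr("mul",  "r4", ("r3", "r3")),   # depends on r3    -> new bundle
      ]
      for i, b in enumerate(bundle(prog)):
          print(f"bundle {i}: " + " | ".join(ins.op for ins in b))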

      • It was a good idea in real life too.
        Lots of CPU architectures require the instruction-generation phase of the compiler to jump through hoops for the hardware (MIPS comes to mind as the most obvious, with its branch delay slot: the instruction after a conditional branch always executes).

        The problem with Itanium was that people wanted x86_64, not a new architecture that could kind of run x86_32 code. Itanium as an architecture was just fine.
        • The problem with Itanium was that people wanted x86_64, not a new architecture that could kind of run x86_32 code. Itanium as an architecture was just fine.

          I'm going to mildly dispute that. The first big splash AMD64 made was with machines running Linux. They were a huge hit in server rooms and datacentres, and it took quite a long time for proprietary stuff on desktops to be compiled for the new arch to catch up. On Linux it was easy so the shift was fast and dramatic.

          That never happened with Itanium. It w

          • I'm going to mildly dispute that. The first big splash AMD64 made was with machines running Linux. They were a huge hit in server rooms and datacentres, and it took quite a long time for proprietary stuff on desktops to be compiled for the new arch to catch up. On Linux it was easy so the shift was fast and dramatic.

            No disagreement there at all.
            It was proprietary stuff on servers, actually. Just not Linux.
            There's a reason Itanium's hideous emulated x86_32 performance was widely panned. It certainly wasn't people using the $12k processors in their desktop machines.

            That never happened with Itanium. It was earlier at a point where Linux in the server room was less dominant, but it did last into the era of Linux dominance. Even so it was hugely expensive and with mediocre performance so it never caught on.

            It wasn't mediocre performance at all. It took x86_64 years to catch up with Itanium performance, even at its lower clocks, and at massively huge costs in CPU complexity that we pay for today with a new side-channel attack aimed at superscalar features every

            • OK So...

              I'm going off memory here, and some of it may be either distorted through time or I may have got the wrong end of the stick at the time. As in, I remember during an internship the company's first Itanium tower arriving.

              Anyway, from what I remember Itanium was always very uneven. Ignoring the x86 emulation disaster, it was fast at some things but dreadful at others, which more or less aligned with the strengths and weaknesses of VLIW. I think they were pretty popular in the HPC world, and did prett

    • Itanium was a change in ISA. Meteor Lake doesn't represent that.

    • Ha ha ha... Yes, the shift from P + E cores to *checks notes* umm, more P + E cores is the biggest architectural shift in 40 years.

      The move from 16 bit x86 to 32 bit? Nah. That was nothing. 32 bit to 64 bit wasn't Intel, so that couldn't have been the biggest architectural shift.

      I don't think Intel knows what the word "innovation" means. Literally everything mentioned in the article has been done before, mostly by other companies. Do they think "innovation" is when they finally catch up?

  • Chiplets? (Score:5, Interesting)

    by im_thatoneguy ( 819432 ) on Tuesday September 19, 2023 @12:43PM (#63860764)

    Are Meteor Lake "Tiles" just Intel branding for chiplet designs?

    • by higuita ( 129722 )

      Correct, but they can't call it that; that would look like they are following AMD... name it tiles and they are innovating, the very first to use it.

      • by kriston ( 7886 )

        That's right. They're copying AMD chiplets.

        • Re:Chiplets? (Score:4, Informative)

          by TechyImmigrant ( 175943 ) on Tuesday September 19, 2023 @02:43PM (#63861052) Homepage Journal

          Nope. MCMs (Multi Chip Modules) have been around for many decades.
          This is just an example of MCM technology that has been improved over the years.
          The chiplets allow different circuits to be on silicon processes that suit the circuits - power, RF, IO, compute, etc.
          Chiplets also lead to smaller dice so the yields will be better than with larger dice.
          This is a general trend that the whole industry is going through, because the economics of advanced silicon now make chiplets make sense for high-volume manufacture, whereas in the past MCMs were reserved for high-performance or high-density, expensive circuits. They were popular with the military, space applications, and some telecom applications.
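
          The yield point can be made concrete with the common Poisson yield model, Y = exp(-D * A), where D is the defect density and A the die area. The back-of-the-envelope Python sketch below compares one large monolithic die against the same silicon split into smaller chiplets; the defect density and areas are made-up numbers, purely for illustration.

          import math

          def poisson_yield(area_mm2, defects_per_mm2):
              """Fraction of dice with zero defects under a simple Poisson model."""
              return math.exp(-defects_per_mm2 * area_mm2)

          # Assumed, illustrative numbers -- not Intel's (or anyone's) real figures.
          D = 0.002           # defects per mm^2
          total_area = 400    # mm^2 of logic, built as one die or split into chiplets

          for n in (1, 2, 4):
              per_die = poisson_yield(total_area / n, D)
              # A package still needs n good chiplets, but defective ones are
              # discarded individually instead of scrapping one huge die.
              fabbed = total_area / per_die
              print(f"{n} die/dice of {total_area // n} mm^2 each: yield {per_die:.1%}, "
                    f"~{fabbed:.0f} mm^2 fabbed per {total_area} mm^2 of good logic")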

          • Nope. MCMs (Multi Chip Modules) have been around for many decades.

            Just because something has been around for decades doesn't mean it has been used in CPUs. It wasn't. AMD did it and called it chiplets. Intel is copying AMD chiplets.

            • Just because something has been around for decades doesn't mean it has been used in CPUs.

              The Core 2 Quad was an MCM.

            • Nope. MCMs (Multi Chip Modules) have been around for many decades.

              Just because something has been around for decades doesn't mean it has been used in CPUs. It wasn't. AMD did it and called it chiplets. Intel is copying AMD chiplets.

              Nope. This stuff (the recent chip-to-chip interconnect work) has been in development for many years in multiple places. You can go and read the papers if you like, to get a timeline, but the specifics of when products came out have little relationship with how, when, or where the technology developed. It is not a case of company A waking up one morning, reading 'company B is selling chiplets,' and deciding they needed it too.

              It may have escaped your attention that AMD doesn't manufacture chips. They outsource manufactu

    • by linzeal ( 197905 )

      Shame MISC never took off or we could see 5000 core processors or something ridiculous like that.

  • ... Lake (Score:4, Funny)

    by rossdee ( 243626 ) on Tuesday September 19, 2023 @01:22PM (#63860866)

    "next-gen Meteor Lake processors following its successful 12th (Alder Lake) and 13th Gen (Raptor Lake)"

    And I suppose next up is Kari Lake

  • by kriston ( 7886 ) on Tuesday September 19, 2023 @01:42PM (#63860914) Homepage Journal

    Is this the generation that's also copying Arm implementations, where there are low-powered and high-powered compute cores?

    • by kriston ( 7886 )

      Since I finally RTFA'd: yes, yes they are doing this.

    • by Junta ( 36770 )

      They have already been doing that for a bit, the 'p cores' and 'e cores', since 2021 with Alder Lake.

    • Is this the generation that's also copying Arm implementations, where there are low-powered and high-powered compute cores?

      No, P and E cores were copied from ARM two generations ago (12th Gen); they've been part of Intel's product portfolio for about two years now. All they've done here is change the node size and transistor pattern from "Intel 7" to "Intel 4".

      This generation is copying chiplets from AMD. Sorry I meant "innovating" chiplets from AMD.
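
      For what it's worth, the P-core/E-core split has been visible to software since Alder Lake. Here is a small, best-effort Python sketch that reads the hybrid topology on Linux; it assumes the kernel exposes the hybrid PMUs at /sys/devices/cpu_core/cpus and /sys/devices/cpu_atom/cpus (which recent kernels do on hybrid Intel parts, as far as I know -- treat the paths as an assumption). On non-hybrid machines those files simply won't exist.

      # Best-effort check of the Intel hybrid (P-core / E-core) split on Linux.
      from pathlib import Path

      def parse_cpu_list(text):
          """Expand a kernel CPU list such as '0-15,20' into [0, 1, ..., 15, 20]."""
          cpus = []
          for chunk in text.strip().split(","):
              if not chunk:
                  continue
              if "-" in chunk:
                  lo, hi = chunk.split("-")
                  cpus.extend(range(int(lo), int(hi) + 1))
              else:
                  cpus.append(int(chunk))
          return cpus

      def hybrid_topology():
          topo = {}
          for label, sysfs in (("P-cores", "/sys/devices/cpu_core/cpus"),
                               ("E-cores", "/sys/devices/cpu_atom/cpus")):
              path = Path(sysfs)
              if path.exists():                  # assumed sysfs layout, see above
                  topo[label] = parse_cpu_list(path.read_text())
          return topo

      topo = hybrid_topology()
      if not topo:
          print("No hybrid split exposed here (not a hybrid CPU, or an older kernel).")
      for label, cpus in topo.items():
          print(f"{label}: logical CPUs {cpus}")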

      • Chiplets aren't exactly new stuff; they've been around since the 70s. I'm fairly sure IBM had something in the early 2000s, and many SoCs use the design. Intel had done some engineering in the late 2000s as well, and I'm not sure if it was Intel or another producer that made FPGAs based on that. Voodoo experimented with it.

        Chiplets often seem to be a patch for a stuck engineering path, like AMD, which has had some serious issues with memory bandwidth ever since. HBM seems to be the curr

  • No performance benchmarks, and availability won't be until December for a very limited run of products.

  • Does this really matter? Do we still care about CPU cores? I just want more/better GPU cores and more GPU unified memory for my larger models.

    • Yes, many people care very much about CPU cores. Just because your use case doesn't fit it doesn't mean it doesn't exist.

  • Man, I kind of thought Intel was afraid of AMD but how large of a Freudian slip is the name "Meteor Lake"?

    • Maybe it's supposed to indicate its expected meteoric rise?
      Only meteors don't rise, they usually fall; I never got that turn of phrase.
      So maybe some other attribute of a meteor, like heating up rapidly possibly disintegrating?

      • So maybe some other attribute of a meteor, like heating up rapidly possibly disintegrating?

        Now you're gettin' it!
