
Intel Broadwell-E, Apollo Lake, and Kaby Lake Details Emerge In Leaked Roadmap

bigwophh writes: In Q4 2016, Intel will release a follow-up to its Skylake processors named Kaby Lake, marking yet another 14nm release. That's a bit odd, for a couple of reasons. The big one is that this chip may not have appeared at all had Intel's schedule stayed on track. Originally, Cannonlake was set to succeed Skylake, but Cannonlake will instead launch in 2017. That makes Kaby Lake neither a tick nor a tock in Intel's release cadence. When released, Kaby Lake will add native USB 3.1 and HDCP 2.2 support. It's uncertain whether these chips will fit into current Z170-based motherboards, but considering that there's also a brand-new chipset on the way, we're not too confident they will. However, the so-called Intel 200 series chipsets will be backwards-compatible with Skylake. It also appears that Intel will be releasing Apollo Lake as early as late spring; it will replace Braswell, the lowest-powered chips in Intel's lineup, destined for smartphones.
  • by Joe_Dragon ( 2206452 ) on Sunday November 22, 2015 @07:24PM (#50982561)

Native USB 3.1 is not that big a deal: on most boards, whether it's native or an add-on chip, it still runs over the same DMI bus.

Now Intel needs to add more PCI-e to the CPU: at least 20 lanes plus DMI, 16 for video and 4 for other stuff like TB 3.0 and PCI-e SSDs.

    • Since USB 3.1 Gen 2 is faster than a PCI Express 3.0 lane, perhaps it's better to implement it closer to the CPU and memory controller?
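
      To put rough numbers on that (an editorial back-of-the-envelope in Python, using the published line rates and encodings for each link):

          # Effective bandwidth of USB 3.1 Gen 2 vs. one PCI Express 3.0 lane.
          usb31_gen2 = 10e9 * 128 / 132  # 10 Gbit/s line rate, 128b/132b encoding
          pcie3_lane = 8e9 * 128 / 130   # 8 GT/s line rate, 128b/130b encoding

          print(f"USB 3.1 Gen 2 effective: {usb31_gen2 / 1e9:.2f} Gbit/s")  # ~9.70
          print(f"PCIe 3.0 x1 effective:   {pcie3_lane / 1e9:.2f} Gbit/s")  # ~7.88

          # A Gen 2 port can indeed outrun a single PCIe 3.0 lane, so a x1
          # attachment would bottleneck it.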

      • by blankinthefill ( 665181 ) <blachanc.gmail@com> on Sunday November 22, 2015 @09:19PM (#50983003) Journal

        One of the issues that I've been running into for a long while, and expect to run into even more with the expansion of M.2 and related slots, has been the serious lack of PCI-E lanes that Intel supports. It's very easy, running SLI and one or two other things that use PCI-E, to run out of PCI-E lanes on today's boards, especially if you're a power user. And with new expansion slots for SSDs and other applications starting to enter the market, using multiple PCI-E lanes (up to 4 for a single M.2 slot), it's going to be even easier to suck all those lanes up and still need more. Honestly, for some power users, Intel could probably double the number of PCI-E lanes natively supported and still not provide enough.


        • by AHuxley ( 892839 )
          +1. Re: "it's going to be even easier to suck all those lanes up and still need more"
          The news about lanes, GPUs, M.2 needs, and today's lack of lanes is getting interesting. Once a user starts to add up the lane options and the ability to run M.2, GPU, and USB as expected, it becomes an issue at the consumer, entry level.
          Let's hope the lane count is much better and actually not an issue next gen.
        • There are quite a few PCI-E lanes:

          16 directly from the CPU
          20 from the chipset via the DMI link (prior to the Z170, it was 8 PCI-e 2.0 lanes). The new chipset for these new CPUs ups that to 24 lanes.
          That's a total of 40 PCI-E 3.0 lanes.

          • by sexconker ( 1179573 ) on Monday November 23, 2015 @01:19AM (#50983597)

            2 video cards will take 32 of them, a high-end SSD will take up 4, and if you've got a wireless card, a sound card, or some other shit, you're eating a couple more. And then you've got all the legacy SATA ports and whatnot that may eat up some of those lanes opportunistically.

            40 is by no means future-proof. I'd like to see 48 or 64 for a pro/enthusiast rig.
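
            Summing just the demands named in this thread against the 40 lanes discussed above (a quick editorial tally in Python; the device list is illustrative, and the chipset's share all sits behind a x4 DMI uplink):

                # Tally illustrative lane demands against 16 CPU + 24 chipset lanes.
                available = 16 + 24

                demands = {
                    "GPU #1 (x16)": 16,
                    "GPU #2 (x16)": 16,
                    "M.2 NVMe SSD (x4)": 4,
                    "wireless card (x1)": 1,
                    "sound card (x1)": 1,
                }

                used = sum(demands.values())
                print(f"requested: {used} / {available} lanes")  # 38 / 40
                print(f"headroom:  {available - used} lanes")    # 2, before SATA/NIC/USB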

            • The GP does have a point, though. The number of PCI-E lanes is already being actively addressed by Intel. 40 may not be much, but it's a step up from the status quo.

          • The DMI link from the CPU is only PCI-e x4.

          • I may be confused, but the older Haswell line had 40 PCI-e lanes directly off the CPU and either 4 or 8 off the chipset. The newer architecture drops the CPU to having only 16 lanes directly off of it, and the chipset now has up to 24. 40+8 in the old and 16+24 in the new is a downgrade, right?

            Most motherboards have a SATA controller, a USB 3.1 controller, a network card (where are the 10Gb network ports???), and sound. Then drop in a couple of video cards (32 lanes) and an M.2 SSD (4 lanes) and you are eithe

      • by Agripa ( 139780 )

        With an external MAC and PHY, there is no reason for a USB 3.1 Gen 2 controller to use just one PCI Express 3.0 lane. PCIe is convenient that way.

    • by WarJolt ( 990309 )

      There are a few markets where fitting extra USB chips on a board is actually a big deal.
      Power consumption might also be improved.
      Also, an external chip adds cost, and often the pennies matter.

  • by Anonymous Coward

    when does Intel "Cornf Lake" come along?

  • 6~10 cores, some with 15~25MB of L3 cache.
    A generation of experts will have to work to ensure that math, science, and games software can actually be spread over that many cores.
    • Do problems really have to scale up to consume the available compute power?

      Big CPU suckers like Monte Carlo and HiDef video processing are near trivial to parallelize, while most "normal" compute tasks are sub-millisecond on a single 2GHz thread, especially with FPU and other specialized instructions.

      Granted, as camera prices fall, I want to have real-time intelligent video processing on an array of 20 cameras, but can you spot the parallel opportunity there?
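
      As a concrete illustration of the "near trivial to parallelize" case, here is a minimal Python sketch of an embarrassingly parallel Monte Carlo estimate of pi - the workers never communicate, so scaling is close to linear in core count:

          # Embarrassingly parallel Monte Carlo: estimate pi in independent workers.
          import random
          from multiprocessing import Pool

          def mc_pi(n_samples: int) -> float:
              rng = random.Random()
              hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                         for _ in range(n_samples))
              return 4.0 * hits / n_samples

          if __name__ == "__main__":
              n_workers, n = 8, 1_000_000
              with Pool(n_workers) as pool:
                  estimates = pool.map(mc_pi, [n] * n_workers)
              # Workers share nothing; averaging their results is the only sync.
              print(sum(estimates) / n_workers)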

      • Big CPU suckers like Monte Carlo ... are near trivial to parallelize

        MCMC isn't. The first MC part of MCMC means each calculation depends on the previous one - more or less the definition of not parallelisable. Of course, you can run several chains in parallel, which is fine, but they still have to burn in. If the burn-in is a significant part of the time the computation takes, then parallelisation doesn't buy you all that much. I've seen problems where the burn-in is, amazingly, the main cost.

        If it's hard to estima
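
        To make the burn-in point concrete (an editorial sketch; B, N, and C are hypothetical workload parameters): running C independent chains still pays the burn-in cost B once per chain, so the speedup is (B + N) / (B + N/C), which collapses toward 1 as burn-in dominates.

            # Speedup from C independent MCMC chains when each must burn in for B
            # steps before its N/C share of useful samples counts.
            def chain_speedup(B: int, N: int, C: int) -> float:
                serial = B + N        # one chain: burn-in, then N useful samples
                parallel = B + N / C  # C chains: each burns in separately
                return serial / parallel

            print(chain_speedup(B=1_000, N=100_000, C=10))   # ~9.2x: cheap burn-in
            print(chain_speedup(B=50_000, N=100_000, C=10))  # ~2.5x: burn-in dominates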

        • Most Monte Carlo simulations I've seen do benefit from multiple runs to improve accuracy - not to insult a very important area of computational methods, but the whole idea of MC simulation seems an extravagant use of compute resources just to get a statistical prediction of an unknown quantity. In nuclear medicine, OK, fine, you are actually simulating physical particles that have reliable, statistically modeled behaviors, but Black-Scholes pricing? That's sociology, and I have a hard time believing that the market

    • by gweihir ( 88907 ) on Sunday November 22, 2015 @10:39PM (#50983265)

      That is unlikely to happen. Parallelizing most things is orders of magnitude more complex than writing them single-task, and for quite a few things it is either impossible or gives poor results.

  • by tkrotchko ( 124118 ) on Sunday November 22, 2015 @07:50PM (#50982667) Homepage

    14nm for these chips puts us close to the end of currently deployed technologies for transistor densities.

    "The path beyond 14nm is treacherous, and by no means a sure thing, but with roadmaps from Intel and Applied Materials both hinting that 5nm is being research, we remain hopeful. Perhaps the better question to ask, though, is whether itâ(TM)s worth scaling to such tiny geometries. With each step down, the process becomes ever more complex, and thus more expensive and more likely to be plagued by low yields. There may be better gains to be had from moving sideways, to materials and architectures that can operate at faster frequencies and with more parallelism, rather than brute-forcing the continuation of Mooreâ(TM)s law."

    http://www.extremetech.com/com... [extremetech.com]

    • by JoeMerchant ( 803320 ) on Sunday November 22, 2015 @09:44PM (#50983081)

      We've been moving sideways for 10 years. In the 20 years before that, clock speeds were doubling every year or two. For the last 10, we've moved from a norm of single cores to a norm of 4 (or 2 + "Hyperthreads"), from rotating hard drives to SSDs, and to specialized architectures that support HD video, but clock speed has been basically stagnant while the processors get fatter and more parallel, and not just in core count.

      10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time); they've been slow to deliver on that in terms of core count, but are making progress on other fronts - especially helping single cores perform faster without a faster clock.

      • We've been moving sideways for 10 years. In the 20 years before that, clock speeds were doubling every year or two. For the last 10, we've moved from a norm of single cores to a norm of 4 (or 2 + "Hyperthreads"), from rotating hard drives to SSDs, and to specialized architectures that support HD video, but clock speed has been basically stagnant while the processors get fatter and more parallel, and not just in core count.

        We hit a wall on MOSFET clock speeds way before we expected. It turns out that power consumption grows quadratically, not linearly, with clock speed. Once you get over 4GHz or so, it becomes a substantial problem, and getting over 5GHz is a real ordeal. There are ideas for non-FET transistors, but so far none has worked out.

        10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time); they've been slow to deliver on that in terms of core count, but are making progress on other fronts - especially helping single cores perform faster without a faster clock.

        Well, Intel was right. They just aren't CPUs, but GPUs. Even a bottom-end GPU will have 80 cores, the price/performance is pretty good all the way up to 1500 cores, and if you really want, you can get
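
        For a rough sense of why clock scaling hit that wall (an editorial sketch of the first-order CMOS dynamic-power relation P ~ C*V^2*f; the voltage figures are illustrative):

            # First-order CMOS dynamic power: P ~ C_eff * V^2 * f. Voltage must
            # rise with clock to keep transistors switching reliably, so power
            # grows much faster than the clock itself.
            def dynamic_power(c_eff: float, v: float, f: float) -> float:
                return c_eff * v**2 * f

            base = dynamic_power(c_eff=1e-9, v=1.00, f=4e9)  # 4 GHz at 1.00 V
            fast = dynamic_power(c_eff=1e-9, v=1.25, f=5e9)  # 5 GHz at ~1.25 V
            print(f"{fast / base:.2f}x power for 1.25x clock")  # ~1.95x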

      • by Kjella ( 173770 ) on Monday November 23, 2015 @12:39AM (#50983531) Homepage

        The real problem is that we're mostly redistributing the watts.

        4 core @ 4GHz (i7-4790K) = 91W, 4*4/91 = 0.175 GHz/W
        4 core @ 3.2GHz (i7-4790S) = 65W, 4*3.2/65 = 0.197 GHz/W
        4 core @ 2.2GHz (i7-4790T) = 35W, 4*2.2/35 = 0.251 GHz/W

        So from top to bottom we're seeing 40% better perf/W with perfect linear scaling. Neat, but not exactly revolutionary when you subtract overhead. We've already got so much scale-out capability that power is clearly the limiting factor:

        8 core @ 4GHz (doesn't exist) = ~185W
        8 core @ 3.2GHz (1680v3) = 140W
        8 core @ 2.2GHz (2618Lv3) = 75W
        16 core @ 4GHz (doesn't exist) = ~370W
        16 core @ 3.2GHz (doesn't exist) = ~280W
        16 core @ 2.2GHz (E7-8860v3) = 165W

        We can't go faster or wider unless we find a way to do it more efficiently; either that, or we need extremely beefy PSUs and water cooling.
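
        Kjella's GHz/W arithmetic is easy to reproduce; a short Python check of the numbers above, including the linear power-scaling assumption behind the "doesn't exist" rows:

            # Reproduce the perf/W figures and extrapolate the hypothetical parts,
            # assuming power scales linearly with core count at a fixed clock.
            parts = {
                "i7-4790K (4c @ 4.0 GHz)": (4, 4.0, 91),
                "i7-4790S (4c @ 3.2 GHz)": (4, 3.2, 65),
                "i7-4790T (4c @ 2.2 GHz)": (4, 2.2, 35),
            }
            for name, (cores, ghz, watts) in parts.items():
                print(f"{name}: {cores * ghz / watts:.3f} GHz/W")

            # 2x and 4x the 4790K's 91 W gives ~182 W and ~364 W, matching the
            # ~185 W / ~370 W guesses for the nonexistent 8- and 16-core parts.
            print(2 * 91, 4 * 91)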

        • We can't go faster or wider unless we find a way to do it more efficiently

          Isn't that exactly what Intel has been doing for the past decade anyway?
          1 core @ 3GHz (Pentium 4) = 89W, 1*3/89 = 0.034 GHz/W
          4 core @ 2.4GHz (Core 2 Quad Q6600) = 105W, 4*2.4/105 = 0.091 GHz/W

          (Both previous processors to my current i7)

        • Agreed - cooling is the issue, and moving to smaller feature sizes (22nm, 14nm, 5!?!nm) is improving thermal efficiency, while simultaneously shrinking packages, making things like the Cedar Trail Compute Stick a possibility. People who really need 1000 core machines are getting them today, smaller, cheaper, and lower power than ever - if there were a market, you could shoehorn about 50 of your 4GHz cores into a "Full Size Tower" case that wasn't at all unusual (size-wise) 20 years ago - dissipating ~1000W

        • by Agripa ( 139780 )

          It is actually worse or better than that depending on your viewpoint.

          Over the last several generations the limit has been power density. If you make a plot of total power versus chip area going back through at least the beginning of the Core2 line of processors, the power density is roughly constant. In addition, total chip area has decreased because process density has increased faster than area needed to implement the processor. The result is that power has decreased roughly following the decreasing ch

      • 10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time)

        I think the 80-core processor Intel was developing at the time eventually turned into the Knights Corner [wikipedia.org] aka Xeon Phi chip. Originally Intel developed this tech for the Larrabee project [wikipedia.org], which was intended to be a discrete GPU built out of a huge number of x86 cores. The thought was that if you threw enough x86 cores at the problem, even software rendering on all those cores would be fast. As projects like llvmpipe [mesa3d.org] and OpenSWR [freedesktop.org] have shown, given a huge number of x86 cores this isn't as crazy of an idea as it

      • by Alomex ( 148003 ) on Monday November 23, 2015 @08:11AM (#50984465) Homepage

        10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time),

        An Intel higher-up told me a while back that they could ship them today if they wanted. The problem is that users in the field report having a hard time using more than 6 cores outside of host virtualization. Since then, Intel has been dedicating the extra real estate to more cache, which programs can easily take advantage of, and less to cores, which no one quite knows how to use beyond 6 to 8.

        • All depends on the app. In 2008 I was doing some signal processing work that would have easily parallelized out to 22 cores, and probably gotten partial benefit up to 80+ cores - it's the nature of the source data (22 time-series signals going through similar processing chains; the chains themselves might not get use out of more than 4-8 cores, but there are 22 of these things, so....)

          Lots of video processing work can be trivially split up by frame, so if you don't mind a couple seconds of processing delay, you can gr
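
          A minimal sketch of that per-channel split (hypothetical names; the per-channel chain here is just a stand-in): parallelize across the 22 independent signals rather than inside any one chain.

              # Process 22 independent channels in parallel; each chain is serial
              # internally, but channels never depend on each other.
              from concurrent.futures import ProcessPoolExecutor

              def process_channel(samples: list[float]) -> float:
                  # Stand-in for a real filter/FFT chain: mean signal power.
                  return sum(x * x for x in samples) / len(samples)

              if __name__ == "__main__":
                  channels = [[float(i % 7) for i in range(10_000)] for _ in range(22)]
                  with ProcessPoolExecutor() as pool:
                      results = list(pool.map(process_channel, channels))
                  print(results[:3])  # one independent result per channel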

          • by Alomex ( 148003 )

            ...and all of those are ideal for a GPU, not extra cores, which brings us back to the Intel problem. Either it is embarrassingly parallelizable (hence GPU), or you have a hard time using more than a handful of cores via multi-threading (hence 6-8 cores in most upper-end, non-virtualization CPUs).

            • Back in 2008, CUDA and friends were too bleeding edge for the applications I was working on, plus - a standard desktop PC had acceptable performance, so why kill yourself with exotica? Since then, I haven't had any applications where CUDA would have been practical, well, o.k., I did work with a group that did video processing who _should_ have been using CUDA, but they were having enough trouble keeping their stuff stable on ordinary servers.

              And, that 22 signal application, probably would be a major pain t

    • I'm just hoping we make it to 20GHz before the party ends. Imagine all the processing you can do between frames at that speed!
      Of course, faster would be better, but I don't have much hope.
      • by Anonymous Coward
        Modern transistors can run at 100GHz, but synchronizing all of the transistors is hard, and we're left with 3.5GHz CPUs. Async CPUs could help a bit with this by reducing the amount of synchronization required.
        • All CPUs have both synchronous and asynchronous circuitry, so it makes no sense to say " Async CPUs could help a bit with this by reducing the amount of synchronization required."
      • by gweihir ( 88907 )

        There is a reason 5GHz seems to be a hard "wall" and commercial viability starts to end around 4GHz. It is interconnect. That is unlikely to go away anytime soon, if ever.

        • It is interconnect.

          What does that mean? I'm not familiar with this.

          • by gweihir ( 88907 )

            Chips basically have components (transistors, diodes, capacitors, resistors, and recently inductors) and interconnect ('wires').

            Interconnect has been the primary speed-limiter for about 20 years. At 5GHz or so, it starts to become exceptionally difficult to get signals from one component to the next, and in particular distributing clocks becomes a limiting issue as clocks need long wires in order to reach everything. Making transistors smaller helps a bit because the wires get shorter and signal-strength (v

            • But that effect is limited and seems to have mostly reached its end.

              Does this mean that as features get smaller, the interconnects have not?

              • by gweihir ( 88907 )

                Interconnect gets smaller if you reduce speed as well when you reduce size. If you keep speed constant, interconnect stays the same size and it will consume the same amount of power. Well, roughly. The problem is that at these speeds you are dealing with RF laws, not ordinary electric ones and RF laws are pretty bizarre.

                • by slew ( 2918 )

                  Interconnect gets smaller if you reduce speed as well when you reduce size. If you keep speed constant, interconnect stays the same size and it will consume the same amount of power. Well, roughly. The problem is that at these speeds you are dealing with RF laws, not ordinary electric ones and RF laws are pretty bizarre.

                  The problem can easily be described to first order "electrically". No bizarre RF laws necessary.

                  Interconnect is dominated by "resistive" issues (a good approximation of RF impedance) and capacitive coupling (a good approximation of RF field effects)... Since the interconnect is relatively getting thinner and longer, the resistance of that wire is going up (R ~ L/w/h), and it capacitively couples more with nearby lines (Cild = W*L/X or Cimd = H*L/Ls), which makes it take longer to move charge to and from the gat
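
                  Putting slew's first-order scaling into numbers (an editorial sketch; the geometry values are illustrative, not a real process node):

                      # Elmore-style wire delay ~ 0.38 * R * C, with R ~ L/(w*h) and
                      # sidewall coupling C ~ h*L / spacing, per the comment above.
                      rho = 1.7e-8           # copper resistivity, ohm*m
                      eps = 3.9 * 8.85e-12   # SiO2 permittivity, F/m

                      def wire_delay(L, w, h, spacing):
                          R = rho * L / (w * h)        # thinner wire -> more ohms
                          C = eps * (h * L) / spacing  # closer neighbor -> more coupling
                          return 0.38 * R * C          # distributed-RC prefactor

                      print(wire_delay(1e-3, 100e-9, 200e-9, 100e-9))  # ~22 ps
                      print(wire_delay(1e-3, 50e-9, 100e-9, 50e-9))    # ~89 ps: 4x worse
                      # Halving w, h, and spacing at constant length quadruples R while
                      # C stays put, so the same 1 mm of wire gets ~4x slower.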

                  • by gweihir ( 88907 )

                    You know, it can possibly be described by fairies and dragons as well. That would just be a fantasy as much as your "description" is.

    • by AHuxley ( 892839 )
      Just add more cores and threads to the consumer end :)
    • As a materials scientist, I think they've squeezed the last bit of potential out of silicon. Well, they could perhaps go for isotopically pure silicon, but the gain would be relatively modest for a high price. III-V semiconductors such as GaAs, InGaAs, etc. are expensive mostly because it's hard to grow large crystals, but it is worth it due to the far higher mobilities of electrons in them.

      • by Anonymous Coward

        As a materials scientist, I think they've squeezed the last bit of potential out of silicon. Well, they could perhaps go for isotopically pure silicon, but the gain would be relatively modest for a high price. III-V semiconductors such as GaAs, InGaAs, etc. are expensive mostly because it's hard to grow large crystals, but it is worth it due to the far higher mobilities of electrons in them.

        Mobile electrons help, but that's not the limiting factor these days; it's leakage. The problem with leakage is that small feature sizes mean lots of leakage, and small feature sizes are needed to cram billions of transistors into an economical die size.

        Having big, fast transistors won't really save the industry; we've been relying on more transistors for the same $$$ to drive the industry forward, and we got a free ride on performance increases per transistor for a while, and more mobile electrons will help with

    • ...or... quantum computing?! ... or... Getting software developers to cut the bloat-ware BS and get serious about EFFICIENT, bug-free, secure apps, that allow faster performance?!
  • Just needs to last 5 more years. Hopefully a real reason to upgrade will happen around 2020.
  • Tick, you're alive; tock, it was nice knowing you.
