Intel Broadwell-E, Apollo Lake, and Kaby Lake Details Emerge In Leaked Roadmap
bigwophh writes: In Q4 2016, Intel will release a follow-up to its Skylake processors named Kaby Lake, which will mark yet another 14nm release. That's a bit odd, for a couple of reasons. The big one is the fact that this chip may not have appeared had Intel's schedule kept on track: originally, Cannonlake was set to succeed Skylake, but Cannonlake will instead launch in 2017. That makes Kaby Lake neither a tick nor a tock in Intel's release cadence. When released, Kaby Lake will add native USB 3.1 and HDCP 2.2 support. It's uncertain whether these chips will fit into current Z170-based motherboards, but considering that there's also a brand-new chipset on the way, we're not too confident of it. However, the so-called Intel 200 series chipsets will be backwards-compatible with Skylake. It also appears that Intel will be releasing Apollo Lake as early as late spring, which will replace Braswell, the lowest-powered chips in Intel's lineup, destined for smartphones.
native USB 3.1 is not that big of a thing (Score:3)
native USB 3.1 is not that big of a thing, as on most boards, be it native or via an add-on chip, it still goes over the same DMI bus.
Now Intel needs to add more PCI-e to the CPU. At least 20 lanes + DMI: 16 for video and 4 for other stuff like TB 3.0 PCI-e SSDs.
Re:Nor is HDCP 2.2 (Score:5, Insightful)
No. People want to play media. They have no desire whatsoever to have it "protected" against them.
Re: (Score:2, Insightful)
No. People want to play media. They have no desire whatsoever to have it "protected" against them.
People also would rather not pay for their media, so if they have to choose between protected content and no content at all (because the content providers think that it is not economically viable enough for them to release it DRM-free) then the consumer will choose the former option. And if the protection is implemented well so that it doesn't adversely affect the consumer then they probably wouldn't give a damn.
Re:Nor is HDCP 2.2 (Score:4, Insightful)
No. People want to play media. They have no desire whatsoever to have it "protected" against them.
People also would rather not pay for their media, so if they have to choose between protected content and no content at all (because the content providers think that it is not economically viable enough for them to release it DRM-free) then the consumer will choose the former option. And if the protection is implemented well so that it doesn't adversely affect the consumer then they probably wouldn't give a damn.
I think you confused "not economically viable" with "profit maximizing". You think that famous artists, movie stars and authors that make tens of millions of dollars would say "Nah, I'd rather go work at McDonald's" if you cut their wage in half? And I'm sure you noticed how the music industry imploded after iTunes gave up the DRM. Oh wait, it didn't. And there's a whole lot of countries I'd live in if North Korea was the other option; we don't have to allow unreasonable terms if we don't want to. Just because it would be economically profitable to weld the hood of the car shut and control how you drive it after it's been sold doesn't make it right. The doomsday scenarios are false. We could easily drop the DRM-protection, ban DRM and go back to plain old copyright infringement without the world coming to an end.
Re: (Score:1)
I think you confused "not economically viable" with "profit maximizing".
I'm not confusing any terms, because it is not my decision to make. It is the publishers who make that decision.
And I'm sure you noticed how the music industry imploded after iTunes gave up the DRM. Oh wait, it didn't.
I also noticed that for the majority of people, the removal of DRM made little to no difference at all. That is because they made the protection as unobtrusive as possible. Yes, the protection did prevent you from moving your digital files around, but it didn't stop playback on Apple devices or burning the tracks to an audio CD (up to 7 times).
We could easily drop the DRM-protection, ban DRM and go back to plain old copyright infringement without the world coming to an end.
We are not talking about the world coming to an end
Re: (Score:2)
I'm not confusing any terms, because it is not my decision to make. It is the publishers who make that decision.
The choice is between money and more money, not between making money and losing money.
I also noticed that for the majority of people, the removal of DRM made little to no difference at all. That is because they made the protection as unobtrusive as possible. Yes, the protection did prevent you from moving your digital files around, but it didn't stop playback on Apple devices or burning the tracks to an audio CD (up to 7 times).
But it made it impossible (or at least extremely inconvenient) to move away from an Apple device. The market effects were obvious: they were a huge part of the iPod's success and cost consumers millions through lack of competition. The consumer might not have really understood, but they knew it worked on Apple and didn't work anywhere else.
But we are in the minority. The majority of people in the world either don't notice DRM or they are accepting of it.
They don't notice it because what millions and millions of people download has had the DRM removed.
And DRM could stay as it is and the world won't come to an end.
True, but you
Re: (Score:2)
I also noticed that for the majority of people, the removal of DRM made little to no difference at all. That is because they made the protection as unobtrusive as possible.
No, for the majority of people it made no difference at all because at the first error they just jumped online and pirated the content. Seriously, I see luddites every month who can't figure out why they can't move their e-book or song onto their media player, or don't understand why their "ultraviolet bundled download" doesn't seem to work on their device; they simply fire up their malware-infested copy of uTorrent and obtain another copy of something they've already paid for.
We are not talking about the world coming to an end; we are talking about whether consumers are willing to accept DRM-encumbered media.
By and large they're not. You just think
Re: (Score:2)
"No for the majority of people it made no difference at all because at the first error they just jumped online and pirated the content. Serious, I see luddites every month who can't figure out why they can't move their e-book, or song onto their media player"
Luddite is not the right term for these people. They are simply consumers who expect a consistent interface and know that there is no technological problem with implementing the operations they seek. They just resent that lawyers won't let them do it.
Re: (Score:2)
The choice is between money and more money, not between making money and losing money.
Irrelevant to this discussion.
But it made it impossible (or at least extremely inconvenient) to move away from an Apple device. The market effects were obvious: they were a huge part of the iPod's success and cost consumers millions through lack of competition. The consumer might not have really understood, but they knew it worked on Apple and didn't work anywhere else.
That is correct, but as you say the consumer didn't understand and in most cases didn't care because they simply didn't ever try to move away from the Appleverse.
They don't notice it because what millions and millions of people download has had the DRM removed.
No, they didn't notice the removal of DRM because they dutifully installed iTunes and never tried anything that would trigger the rights management. People were far more likely to want to write their music to CD format than to copy it to a non-Apple brand of player, and that was still supported.
True, but you were the one claiming that publishers wouldn't publish without DRM.
No, I never claimed that. It on
Re: (Score:2)
No, for the majority of people it made no difference at all because at the first error they just jumped online and pirated the content.
No they didn't. The majority of people in the world do not pirate stuff. They do not have torrent software loaded. If they did then all forms of DRM would have died out years ago. DRM works because, as you said:
the majority of consumers do the "approved thing" with their copy
You never hear about the people who come up against the limits of DRM and simply accept it because they don't jump online to complain. If you only see the people who complain (which by definition you do) then you are seeing a skewed picture of the situation.
So I stand by my original statement that fo
Re: (Score:1)
People also would rather not pay for their media, so if they have to choose between protected content and no content at all (because the content providers think that it is not economically viable enough for them to release it DRM-free) then the consumer will choose the former option.
Except that's crap. Just about everything is freely available on the Pirate Bay. Everything is still released on DVD, which, while it technically has DRM, is so thoroughly hacked that it may as well not have any.
And guess what? People s
Re: (Score:1)
Except that's crap. Just about everything is freely available on the Pirate Bay. Everything is still released on DVD, which, while it technically has DRM, is so thoroughly hacked that it may as well not have any.
The people who download stuff from the pirate bay are not consumers, they are pirates. The argument about DRM does not apply to them because they don't ever use DRM-encumbered media. DRM is not designed to stop the people who identify themselves as pirates; it is to prevent those people who would balk at being called that (and wouldn't dream of loading torrent software) but who see no problems with copying an album or a movie to give to a friend. Morals are not absolute. While there is no difference in down
Re: (Score:3)
Since USB 3.1 Gen 2 is faster than a PCI Express 3.0 lane, perhaps it's better to implement it closer to the CPU and memory controller?
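For reference, a quick back-of-the-envelope check of the parent's claim using the published line encodings (a sketch in Python; payload rates shown are before protocol overhead):

# USB 3.1 Gen 2 vs. a single PCI Express 3.0 lane, after line encoding
usb31_gen2 = 10e9 * 128 / 132  # 10 Gb/s raw, 128b/132b -> ~9.70 Gb/s payload
pcie30_x1 = 8e9 * 128 / 130    # 8 GT/s raw, 128b/130b  -> ~7.88 Gb/s payload
print(usb31_gen2 > pcie30_x1)  # True: a single lane can't carry full Gen 2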
Re:native USB 3.1 is not that big of a thing (Score:5, Insightful)
One of the issues that I've been running into for a long while, and expect to be running into even more with the expansion of the M.2 and related slots, has been the serious lack of PCI-E lanes that Intel supports. It's very easy, running SLI and one of two other things that use PCI-E, to run out of PCI-E lanes on today's boards, especially if you're a power user. And with new expansion slots for SSDs and other applications starting to enter the market, using multiple PCI-E lanes (up to 4 for a single M.2 slot), it's going to be even easier to suck all those lanes up and still need more. Honestly, for some power users, Intel could probably double the number of PCI-E lanes natively supported, and still not provide enough.
Re: (Score:2)
*One or two. Maybe I should start using that preview instead of ignoring it and going straight to posting...
Re: (Score:2)
The news about lanes, GPUs, and M.2 needs versus today's lack of lanes is getting interesting. Once a user starts to add up the lane options and the ability to run the M.2, GPU, and USB as expected, it becomes an issue at the consumer, entry level.
Let's hope the lane count is much better and actually not an issue next gen.
Re: (Score:3)
There are quite a few PCI-E lanes:
16 directly from the CPU
20 from the chipset via the DMI link on the Z170 (prior chipsets had 8 PCI-E 2.0 lanes). The new chipset for these new CPUs ups that to 24 lanes.
That's a total of 40 PCI-E 3.0 lanes.
Re:native USB 3.1 is not that big of a thing (Score:4, Insightful)
Two video cards will take 32 of them, a high-end SSD will take up 4, and if you've got a wireless card, a sound card, or some other shit, you're eating a couple more. And then you've got all the legacy SATA ports and whatnot that may eat up some of those lanes opportunistically.
40 is by no means future-proof. I'd like to see 48 or 64 for a pro/enthusiast rig,
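To put rough numbers on that, here's a quick tally in the spirit of the parent's example against the 16 CPU + 24 chipset lanes mentioned upthread (a sketch; the per-device lane counts are illustrative assumptions):

# Hypothetical enthusiast build vs. a 40-lane budget (16 CPU + 24 chipset)
budget = 16 + 24
devices = {
    "two GPUs (x16 each)": 32,
    "M.2 NVMe SSD": 4,
    "USB 3.1 controller": 1,
    "wireless card": 1,
    "sound card": 1,
    "legacy SATA / other I/O": 2,  # grabbed opportunistically, per the parent
}
print(sum(devices.values()), "of", budget, "lanes")  # 41 of 40: oversubscribed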
Re: (Score:2)
The GP does have a point though. The number of PCI-E lanes is being actively addressed by Intel already. 40 may not be much, but it's a step up from the status quo.
Re: (Score:3)
The DMI link from the CPU is only PCI-e x4
Re: (Score:2)
I may be confused, but the older Haswell line had 40 PCI-E lanes directly off the CPU and either 4 or 8 off the chipset. The newer architecture drops the CPU to having only 16 lanes directly off of it, and the chipset now has up to 24. 40+8 in the old and 16+24 in the new is a downgrade, right?
With most motherboards having a SATA controller, USB 3.1 controller, network card (where are the 10Gb network ports???), and sound, then drop in a couple of video cards (32 lanes) and an M.2 SSD (4 lanes) and you are eithe
Re: (Score:2)
With an external MAC and PHY, there is no reason for a USB 3.1 Gen 2 controller to use just one PCI Express 3.0 lane. PCIe is convenient that way.
Re: (Score:2)
There are a few markets where fitting extra USB chips on a board is actually a big deal.
Power consumption might also be improved.
Also, an external chip adds cost, and many times pennies matter.
Upcoming names (Score:2, Funny)
when does Intel "Cornf Lake" come along?
3+GHz speeds, extra cores, more lanes. (Score:2)
A generation of experts will have to work to ensure that computer math, science, and games can be spread across the many cores.
Mozilla Foundation now gets money from Microsoft. (Score:2, Interesting)
Thunderbird and SeaMonkey Composer GUIs: Damaged, apparently deliberately. Every time you do a file save, the newer versions of both ask for a
Re: (Score:3)
Do problems really have to scale up to consume the available compute power?
Big CPU suckers like Monte Carlo and HiDef video processing are near trivial to parallelize, while most "normal" compute tasks are sub-millisecond on a single 2GHz thread, especially with FPU and other specialized instructions.
Granted, as camera prices fall, I want to have real-time intelligent video processing on an array of 20 cameras, but can you spot the parallel opportunity there?
Re: (Score:2)
Big CPU suckers like Monte Carlo ... are near trivial to parallelize
MCMC isn't. The first MC part of MCMC means each calculation depends on the previous one, which is more or less the definition of not parallelisable. Of course, you can run several chains in parallel, which is fine, but they still have to burn in. If the burn-in is a significant part of the time the computation takes, then parallelisation doesn't buy you all that much. I've seen problems where the burn-in is, amazingly, the main cost.
If it's hard to estima
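To illustrate the burn-in point, a minimal sketch (a toy Metropolis sampler in Python; all names and parameters are made up for illustration). Within a chain, each step depends on the previous state, so the chain itself is serial; chains parallelise fine, but each one still pays its own burn-in:

import math
import random
from concurrent.futures import ProcessPoolExecutor

def run_chain(seed, burn_in=10_000, keep=1_000):
    # Toy Metropolis random walk targeting a standard normal. Each
    # proposal depends on the current state, so this loop is inherently
    # sequential -- the non-parallelisable Markov-chain part.
    rng = random.Random(seed)
    x, samples = 0.0, []
    for i in range(burn_in + keep):
        proposal = x + rng.gauss(0.0, 1.0)
        if rng.random() < math.exp((x * x - proposal * proposal) / 2):
            x = proposal
        if i >= burn_in:  # discard the burn-in prefix
            samples.append(x)
    return samples

if __name__ == "__main__":
    # Four chains run in parallel, but all four burn in independently,
    # so parallelism buys little when burn-in dominates the runtime.
    with ProcessPoolExecutor() as pool:
        chains = list(pool.map(run_chain, range(4)))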
Re: (Score:2)
Most Monte Carlos I've seen do benefit from multiple runs to improve accuracy - not to insult a very important area of computational methods, but the whole idea of MC simulation seems an extravagant use of compute resources just to get a statistical prediction of an unknown quantity. In nuclear medicine, OK, fine, you are actually simulating physical particles that have reliable statistically modeled behaviors, but Black-Scholes pricing? That's sociology, and I have a hard time believing that the market
Re:3+GHz speeds, extra cores, more lanes. (Score:4, Informative)
That is unlikely to happen. Parallelizing most things is orders of magnitude more complex than writing them single-task, and for quite a few things it is either impossible or gives poor results.
Re: (Score:2)
Why do idiots like you feel they can blatantly ignore some 30 years of CS research into the subject?
Re: (Score:1)
Parallelising tasks is not hard? Here's a test I came up with to test human parallelism: try reading four books at once. I mean lay them out in front of you and read all four simultaneously. It's impossible; humans basically can't parallel-process. What we can do semi-well is multi-task, which is the brain switching between one task and another, and even then, if the tasks require intellectual effort, we tend to get muddled and do things wrong.
Re: (Score:2)
There are _very_ few workloads that scale close to linearly. Most workloads scale massively worse or not at all. Have a look at the relevant research some time. Parallel algorithms and multi-CPU systems are not a new thing. At all.
We're almost at the end with current tech (Score:5, Informative)
14nm for these chips puts us close to the end of currently deployed technologies for transistor densities.
"The path beyond 14nm is treacherous, and by no means a sure thing, but with roadmaps from Intel and Applied Materials both hinting that 5nm is being research, we remain hopeful. Perhaps the better question to ask, though, is whether itâ(TM)s worth scaling to such tiny geometries. With each step down, the process becomes ever more complex, and thus more expensive and more likely to be plagued by low yields. There may be better gains to be had from moving sideways, to materials and architectures that can operate at faster frequencies and with more parallelism, rather than brute-forcing the continuation of Mooreâ(TM)s law."
http://www.extremetech.com/com... [extremetech.com]
Re:We're almost at the end with current tech (Score:4, Funny)
That's like walking over an unfinished bridge.
No problem. If you close your eyes the quantum bridge will be both finished and unfinished.
Re: (Score:2)
I don't have mod points today, but this is a funny joke. Well played.
Re: (Score:2)
Can they even do stuff like addition, multiplication and conditional branching?
Re:We're almost at the end with current tech (Score:5, Informative)
We've been moving sideways for 10 years. In the 20 years before that, clock speeds were doubling every year or two. For the last 10, we've moved from a norm of single cores to a norm of 4 (or 2 + "Hyperthreads"), rotating hard drives to SSD, and specialized architectures to support HD video, but clock speed has been basically stagnant while the processors are getting fatter, more parallel, and not just in core count.
10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time), they've been slow to deliver on that in terms of core count, but are making progress on other fronts - especially helping single cores perform faster without a faster clock.
Re: (Score:2)
We've been moving sideways for 10 years. In the 20 years before that, clock speeds were doubling every year or two. For the last 10, we've moved from a norm of single cores to a norm of 4 (or 2 + "Hyperthreads"), rotating hard drives to SSD, and specialized architectures to support HD video, but clock speed has been basically stagnant while the processors are getting fatter, more parallel, and not just in core count.
We hit a wall on MOSFET clock speeds way before we expected. It turns out that power consumption scales quadratically, not linearly, with clock speed. Once you get over 4GHz or so, it becomes a substantial problem, and getting over 5GHz is a real ordeal. There are ideas for non-FET transistors, but so far none has worked out.
10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time), they've been slow to deliver on that in terms of core count, but are making progress on other fronts - especially helping single cores perform faster without a faster clock.
Well, Intel was right. They just aren't CPUs, but GPUs. Even a bottom-end GPU will have 80 cores, the price/performance is pretty good all the way up to 1500 cores, and if you really want, you can get
Re: (Score:1)
With modern chips, if you went say 4-bit and cut every corner you could, I would think a single chip with 10 million plus cores would be possible. Not useful, but possible.
Re:We're almost at the end with current tech (Score:5, Interesting)
The real problem is that we're mostly redistributing the watts.
4 core @ 4GHz (i7-4790K) = 91W, 4*4/91 = 0.175 GHz/W
4 core @ 3.2GHz (i7-4790S) = 65W, 4*3.2/65 = 0.197 GHz/W
4 core @ 2.2GHz (i7-4790T) = 35W, 4*2.2/35 = 0.251 GHz/W
So from top to bottom we're seeing about 40% better perf/W with perfect linear scaling. Neat, but not exactly revolutionary when you subtract overhead. We've already got so much scale-out capability that power is clearly the limiting factor:
8 core @ 4GHz (doesn't exist) = ~185W
8 core @ 3.2GHz (1680v3) = 140W
8 core @ 2.2GHz (2618Lv3) = 75W
16 core @ 4GHz (doesn't exist) = ~370W
16 core @ 3.2GHz (doesn't exist) = ~280W
16 core @ 2.2GHz (E7-8860v3) = 165W
We can't go faster or wider unless we find a way to do it more efficiently; either that, or we need extremely beefy PSUs and water cooling.
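Recomputing the aggregate GHz-per-watt figures above as a quick sanity check (Python; cores, clocks, and TDPs as listed in the parent):

parts = [("i7-4790K", 4, 4.0, 91), ("i7-4790S", 4, 3.2, 65), ("i7-4790T", 4, 2.2, 35)]
for name, cores, ghz, watts in parts:
    print(f"{name}: {cores * ghz / watts:.3f} GHz/W")
# 0.176 -> 0.197 -> 0.251, i.e. roughly 40% better perf/W from the 91W
# part down to the 35W part, assuming perfect linear scaling with clock.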
Re: (Score:2)
We can't go faster or wider unless we find a way to do it more efficiently
Isn't that exactly what Intel has been doing for the past decade anyway?
1 core @ 3GHz (Pentium 4) = 89W 1*3/89 = 0.033GHz/W
4 core @ 2.4GHz (Core 2 Quad Q6600) = 105W 4*2.4/105 = 0.091GHz/W
(Both previous processors to my current i7)
Re: (Score:2)
Agreed - cooling is the issue, and moving to smaller feature sizes (22nm, 14nm, 5!?!nm) is improving thermal efficiency, while simultaneously shrinking packages, making things like the Cedar Trail Compute Stick a possibility. People who really need 1000 core machines are getting them today, smaller, cheaper, and lower power than ever - if there were a market, you could shoehorn about 50 of your 4GHz cores into a "Full Size Tower" case that wasn't at all unusual (size-wise) 20 years ago - dissipating ~1000W
Re: (Score:2)
It is actually worse or better than that depending on your viewpoint.
Over the last several generations the limit has been power density. If you make a plot of total power versus chip area going back through at least the beginning of the Core2 line of processors, the power density is roughly constant. In addition, total chip area has decreased because process density has increased faster than area needed to implement the processor. The result is that power has decreased roughly following the decreasing ch
Re: (Score:3)
10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time)
I think the 80 core processor Intel was developing at the time eventually turned in to the Knights Corner [wikipedia.org] aka Xeon Phi chip. Originally Intel developed this tech for the Larrabee project [wikipedia.org], which was intended to be a discrete GPU built out of a huge number of X86 cores. The thought was if you threw enough X86 cores at the problem, even software rendering on all those cores would be fast. As projects like llvmpipe [mesa3d.org] and OpenSWR [freedesktop.org] have shown, given a huge number of X86 cores this isn't as crazy of an idea as it
Re:We're almost at the end with current tech (Score:4, Interesting)
10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time),
An Intel higher-up told me a while back that they could ship them today if they wanted. The problem is that users in the field report having a hard time using more than 6 cores outside host virtualization. Since then, Intel has been dedicating the extra real estate to more cache, which programs can easily take advantage of, and less to cores, which no one quite knows how to use beyond 6 to 8 cores.
Re: (Score:2)
All depends on the app. In 2008 I was doing some signal processing work that would have easily parallelized out to 22 cores, and probably get partial benefit up to 80+ cores - nature of the source data (22 time series signals going through similar processing chains, the chains themselves might not get use out of more than 4-8 cores, but there are 22 of these things, so....)
Lots of video processing work can be trivially split up by frame, so if you don't mind a couple seconds of processing delay, you can gr
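A minimal sketch of that frame-level split (Python; process_frame is a hypothetical stand-in for whatever per-frame work is being done):

from concurrent.futures import ProcessPoolExecutor

def process_frame(frame):
    # Placeholder per-frame work: filter, encode, analyze, etc.
    return frame

def process_video(frames, workers=8):
    # Frames are independent, so they fan out across cores trivially;
    # the cost is the couple seconds of batching delay mentioned above.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))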
Re: (Score:2)
...and all of those are ideal for a GPU, not extra cores, which brings us back to the Intel problem. Either it is embarrassingly parallelizable (hence GPU) or you have a hard time using more than a handful of cores via multi-threading (hence 6-8 cores in most upper-end, non-virtualization CPUs).
Re: (Score:2)
Back in 2008, CUDA and friends were too bleeding edge for the applications I was working on, plus - a standard desktop PC had acceptable performance, so why kill yourself with exotica? Since then, I haven't had any applications where CUDA would have been practical, well, o.k., I did work with a group that did video processing who _should_ have been using CUDA, but they were having enough trouble keeping their stuff stable on ordinary servers.
And, that 22 signal application, probably would be a major pain t
Re: (Score:2)
Of course, faster would be better, but I don't have much hope.
Re: (Score:2)
There is a reason 5GHz seems to be a hard "wall" and around 4GHz commercial viability starts to end. It is interconnect. That is unlikely to go away anytime soon, if ever.
Re: (Score:2)
It is interconnect.
What does that mean? I'm not familiar with this.
Re: (Score:3)
Chips basically have components (transistors, diodes, capacitors, resistors, and recently inductors) and interconnect ('wires').
Interconnect has been the primary speed-limiter for about 20 years. At 5GHz or so, it starts to become exceptionally difficult to get signals from one component to the next, and in particular distributing clocks becomes a limiting issue as clocks need long wires in order to reach everything. Making transistors smaller helps a bit because the wires get shorter and signal-strength (v
Re: (Score:2)
But that effect is limited and seems to have mostly reached its end.
Does this mean that as features get smaller, the interconnects have not?
Re: (Score:2)
Interconnect gets smaller if you reduce speed as well when you reduce size. If you keep speed constant, interconnect stays the same size and it will consume the same amount of power. Well, roughly. The problem is that at these speeds you are dealing with RF laws, not ordinary electric ones and RF laws are pretty bizarre.
Re: (Score:3)
Interconnect gets smaller if you reduce speed as well when you reduce size. If you keep speed constant, interconnect stays the same size and it will consume the same amount of power. Well, roughly. The problem is that at these speeds you are dealing with RF laws, not ordinary electric ones and RF laws are pretty bizarre.
The problem can easily be described to first order "electrically". No bizarre RF laws necessary.
Interconnect is dominated by a "resistive" issue (a good approximation of RF impedance) and capacitive coupling (a good approximation of RF field effects). Since the interconnect is getting relatively thinner and longer, the resistance of that wire is going up (R ~ L/w/h) and it capacitively couples more with nearby lines (Cild = W*L/X or Cimd = H*L/Ls), which makes it take longer to move charge to and from the gat
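Plugging the parent's first-order formulas into a toy calculation (Python; the unit dimensions and the 0.7x shrink factor are illustrative assumptions):

def rc_delay(L, W, H, X):
    # R ~ L/(W*H), C ~ W*L/X  =>  delay ~ R*C ~ L**2 / (H*X)
    return (L / (W * H)) * (W * L / X)

base = rc_delay(L=1.0, W=1.0, H=1.0, X=1.0)
shrunk = rc_delay(L=1.0, W=0.7, H=0.7, X=0.7)  # thinner wire, same length
print(shrunk / base)  # ~2.04: shrinking only the cross-section makes delay worse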
Re: (Score:2)
You know, it can possibly be described by fairies and dragons as well. That would just be a fantasy as much as your "description" is.
III-V semiconductors (Score:2)
As a materials scientist, I think they squeezed the last bit of potential out of silicon. Well, they could perhaps go for isotopically pure silicon, but the gain would be relatively modest for a high price. III-V semiconductors such as GaAs, InGaAs etc. are expensive mostly because it's hard to grow large crystals, but it is worth it due to the far higher mobilities of electrons in them.
Re: (Score:1)
As a materials scientist, I think they squeezed the last bit of potential out of silicon. Well, they could perhaps go for isotopically pure silicon, but the gain would be relatively modest for a high price. III-V semiconductors such as GaAs, InGaAs etc. are expensive mostly because it's hard to grow large crystals, but it is worth it due to the far higher mobilities of electrons in them.
Mobile electrons help, but that's not the limiting factor these days, it's leakage. The problem with leakage is that small feature sizes mean lots of leakage and small feature sizes are needed to cram billions of transistors into an economical die size.
Having big-fast transistors won't really save the industry, we've been relying on more transistors for the same $$$ to drive the industry forward and got a free ride on performance increases per transistor for a while and more mobile electrons will help with
Looks Like My i7-920 @ 3.8GHz (Score:2)
Re: (Score:2)
Do you change thermal paste every two years or so?
Re: (Score:2)
And that keeps you happy? Sounds extremely unreliable...
Don't get me wrong, I'm running a 2500k @ 4.5GHz myself, but I've never had a crash since I set the system up a few years ago.
Tick? Tock? (Score:1)
Re: (Score:1)
That's to keep vampires away. He has a non-stop line of people kissing his butt, and he doesn't trust the TSA (tushie security agency) to keep the vampires off his ass.