Intel Hardware Technology

Leaked Benchmarks Suggest Intel Will Drop Hyperthreading From Core i7 Chips (arstechnica.com) 199

According to leaked benchmarks found in the SiSoft Sandra database, there is an Intel Core i7-9700K processor that doesn't appear to have hyperthreading available. "This increases the core count from the current six cores in the 8th generation Coffee Lake parts to eight cores, but, even though it's an i7 chip, it doesn't appear to have hyperthreading available," reports Ars Technica. "It's base clock speed is 3.6GHz, peak turbo is 4.9GHz, and it has 12MB cache. The price is expected to be around the same $350 level as the current top-end i7s." From the report: For the chip that will sit above the i7-9700K in the product lineup, Intel is extending the use of its i9 branding, initially reserved for the X-series High-End Desktop Platform. The i9-9900K will be an eight-core, 16-thread processor. This bumps the cache up to 16MB and the peak turbo up to 5GHz -- and the price up to an expected $450. Below the i7s will be i5s with six cores and six threads and below them, i3s with four cores and four threads. Even without hyperthreading, the new i7s should be faster than old i7s. A part with eight cores is going to be faster than the four-core/eight-thread chips of a couple of generations ago and should in general also be faster than the six-core/12-thread 8th generation chips. Peak clock speeds are pushed slightly higher than they were for the 8th generation chips, too.
  • by InvalidsYnc ( 1984088 ) on Thursday July 26, 2018 @07:11PM (#57016038)

    I've always seen them as "pseudo" CPUs, and not been all that happy with them overall. Yeah, some workloads benefit from it just fine, while others get tanked, and you'll never know because it just looks like those CPUs are flying along (according to task mangler or whatever).

    Anyway, glad to see that there will be some parts out there that people can choose to buy that don't have it.

    • What are those workloads that actually benefit from thinking the CPU has more cores than it actually has?
      • by Anonymous Coward on Thursday July 26, 2018 @07:30PM (#57016146)
        Workloads that cause a lot of switching between processes/jobs. We have many systems that benefit hugely from hyperthreading, others that get bugger all, and still more where we explicitly disable it because it slows down the system.
      • by Anonymous Coward on Thursday July 26, 2018 @07:30PM (#57016152)

        Integer-heavy workloads, sometimes branch-heavy code. A physical core really can do two things at once, but only for some things. Much like how you can pet two cats in half the time with your two hands, but can only drink water at a rate limited by your mouth.

        The big things are these:
        1- It is rare to find a workload that works better with HT off
        2- It is common to find a workload that gets some speedup
        3- HT may have some vague security issues, based on recent actions from BSD etc. Maybe.

        Anyway, those three things should mostly determine whether you turn HT on or off on your box, and really just the first two unless you are Server Guy.
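
        To make point 2 concrete, here is a minimal sketch of the kind of mixed workload where two threads lean on different execution units and can share a core productively. It is purely illustrative - the constants and loop counts are made up for the example, and whether there is any speedup at all depends entirely on the machine:

        ```cpp
        // Purely illustrative sketch: two threads with very different instruction
        // mixes -- roughly the pairing where SMT tends to help, because they compete
        // for different execution units on a shared physical core.
        // Build: g++ -O2 -pthread
        #include <cstdint>
        #include <cstdio>
        #include <thread>

        int main() {
            volatile uint64_t int_result = 0;
            volatile double fp_result = 0.0;

            // Integer/bit-twiddling heavy thread.
            std::thread t1([&] {
                uint64_t x = 0x9e3779b97f4a7c15ULL;
                for (uint64_t i = 0; i < 100000000ULL; ++i)
                    x = (x ^ (x >> 13)) * 0xff51afd7ed558ccdULL;
                int_result = x;
            });

            // Floating-point heavy thread.
            std::thread t2([&] {
                double y = 1.0;
                for (uint64_t i = 0; i < 100000000ULL; ++i)
                    y = y * 1.0000001 + 0.5;
                fp_result = y;
            });

            t1.join();
            t2.join();
            std::printf("%llu %f\n",
                        (unsigned long long)int_result, (double)fp_result);
            return 0;
        }
        ```

        Time it with the two threads forced onto the same physical core and then onto separate cores; the gap (or lack of one) tells you whether your own workload looks more like case 1 or case 2.
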

        • by Anonymous Coward on Thursday July 26, 2018 @07:44PM (#57016248)

          I welcome these simultaneous-cat-petting analogies.

          • by burhop ( 2883223 )

            I welcome these simultaneous-cat-petting analogies.

            You are going to love the future. Quantum computing is based on having a herd of Schrödinger's cats... more or less.

        • by AmiMoJo ( 196126 )

          Which brings us to the real question: why is Intel removing hyperthreading capability?

          There are some security issues. Maybe it can't overcome them, but AMD has its own version (simultaneous multithreading) that so far hasn't been affected by these problems.

          The other issue is complexity. Maybe they want to simplify the CPU and simply have more cores.

          • It's Intel, I can't give them the benefit of the doubt. So I would bet it has to do with getting people to shell out i9 money who don't really need i9 computing power. This may also backfire on them and hand a large chunk of the market to AMD.

      • HT can reduce context switching overheads and it can make use of more execution units at once.

      • by gman003 ( 1693318 ) on Thursday July 26, 2018 @08:16PM (#57016414)

        A lot of them, actually.

        A modern "core" has several "execution units". Unlike very early x86, where it was divided between ALU and FPU, these are divided more finely and evenly - one might do integer math, vector shifts, and branches, while another might do integer math, vector logic, and data stores. There's usually redundancy on common instruction types (eg. Haswell has three that can do address stores, but only one can do divides).

        In a single thread, this is used for superscalar execution. If you have code something like "a = b / c; d = e * f;", both instructions can be run in parallel since neither depends on the other. This also hides the cost of x86's more complicated addressing modes - computing the address gets dispatched to an execution unit just like a normal multiply/add, and the result just gets sent to the store unit.
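
        As a compilable sketch of that point (illustrative only, not a rigorous benchmark, and the function names are just made up for the example): a loop whose operations form one long dependency chain versus the same work split across independent accumulators that a superscalar core can overlap:

        ```cpp
        // Illustrative only: a single dependency chain serializes the core, while
        // independent operations can be issued to different execution units in the
        // same cycle.
        #include <cstddef>
        #include <cstdint>

        // Every step depends on the previous one -- one long chain.
        uint64_t dependent_chain(const uint64_t* v, std::size_t n) {
            uint64_t acc = 1;
            for (std::size_t i = 0; i < n; ++i)
                acc = acc * v[i] + 1;
            return acc;
        }

        // Four independent chains -- the core can overlap them.
        uint64_t independent_chains(const uint64_t* v, std::size_t n) {
            uint64_t a = 1, b = 1, c = 1, d = 1;
            std::size_t i = 0;
            for (; i + 4 <= n; i += 4) {
                a = a * v[i]     + 1;
                b = b * v[i + 1] + 1;
                c = c * v[i + 2] + 1;
                d = d * v[i + 3] + 1;
            }
            for (; i < n; ++i)
                a = a * v[i] + 1;
            return a ^ b ^ c ^ d;
        }
        ```
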

        But sometimes a thread has lots of dependencies, or does mainly a single type of operation. Maybe it's crunching through a bunch of multiply-adds. Rather than let the remainder of the core sit idle, you can run another thread, or even another process, on it. If this second one mainly hits a different EU - say, it's doing a lot of shifting and bit-twiddling - you can get a 100% speedup.

        You rarely get so much of a boost in practice. A worker-thread type of program, splitting a parallel task across cores, will generally be using the same execution units in each thread. And SMT doesn't help if you're bottlenecked on something besides execution - well-optimized code, as often as not, is limited by memory throughput rather than execution.

        The other boost comes from covering memory latency. If one thread hits a load that isn't in L1 cache, it will stall while the load is served. If it's in L2 cache, that's not too long - a dozen cycles or so. If you're going out to main memory, you're looking at a few hundred, maybe a few thousand cycles of NOPs - so why not switch to another thread that has all its data in L1 cache already? Modern x86 processors have pretty low memory latency compared to other architectures, so two threads is generally the most you'd find useful for this, but other systems with harsher memory latency will go even wider - the latter-day SPARCs do eight threads per core, and some parts of a GPU will operate in the hundreds. This is why some non-superscalar architectures will still have multiple threads per core - it's only ever actually running one instruction, but it will rarely be running zero.
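
        If you want to see how your own box pairs logical CPUs onto physical cores, a minimal sketch (Linux-specific; it assumes the usual sysfs topology files are present and simply prints nothing elsewhere) looks something like this:

        ```cpp
        // Linux-specific sketch: print which logical CPUs are SMT/Hyperthreading
        // siblings, i.e. which ones share a physical core.
        #include <fstream>
        #include <iostream>
        #include <string>
        #include <thread>

        int main() {
            unsigned n = std::thread::hardware_concurrency();  // logical CPUs (0 if unknown)
            for (unsigned cpu = 0; cpu < n; ++cpu) {
                std::ifstream f("/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                                "/topology/thread_siblings_list");
                std::string siblings;
                if (f && std::getline(f, siblings))
                    std::cout << "cpu" << cpu << " shares a core with: " << siblings << "\n";
            }
            return 0;
        }
        ```
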

        • More threads on a core also burn more cache. And if both threads need to access RAM then you get even more delay.

          Personally I see hyperthreading as a poor man's multicore solution.

          Your mileage may vary.

          • More execution threads may consume more cache, but CPUs have cache coherence algorithms and can share when accessing the same physical addresses. This reduces the impact when executing the same process in two threads.

            SMT (and multicore processing in general) would benefit greatly from coalescing L1 caches (including the TLB) where the running processes are assigned the same page table. L2, of course, uses physical addresses.

      • by AHuxley ( 892839 )
        When someone pays a really, really smart person to work on the difficult math of video and photography software.
        Do that to some really great software standard with a supportive OS and a new CPU can really offer more.
      • by Miamicanes ( 730264 ) on Friday July 27, 2018 @12:35AM (#57017318)

        It made a big difference on WinXP with single-core CPUs because XP had lots of performance chokepoints that were limited to a single thread per "CPU".

        The name resolver (which handled not only DNS lookups, but drive-path resolution for Explorer as well) is a noteworthy example. If the browser triggered a "bad" DNS lookup, it would hang Explorer (including the Start menu) until the DNS lookup timed out (30-90 seconds later, IIRC).

        Hyperthreading mitigated 99% of that, because even if one name resolver thread got hung up, the other could keep chugging along.

        As of Win10, most of those chokepoints are gone, and HT is useful mainly with virtual machines (by simplifying program logic since each virtual core gets its own set of registers). The catch is, recently-documented security vulnerabilities suggest it can be used to leak info between VMs... a minor issue for someone using a VM to run Linux under Windows for convenience, but a potentially HUGE issue for commercial hosting services w/multiple unrelated customers.

        In any case, HT is a huge benefit with one single-core CPU, but offers little if you have 8 cores to begin with.

        • WinXP is perfectly capable of scheduling another thread to run when one gets hung up. It didn't need a virtual core for that.
      • by mikael ( 484 )

        Anything computationally intensive that could be parallelized: numerical simulations, ray-tracing, 3D games, even an HTML GET request. The idea of hyperthreading was that if one thread got blocked with a pending IO operation (like fetching data due to a cache miss), the other thread could continue working. But they both shared the same cache memory, and that led to all sorts of problems when you tried to write multi-threaded code accessing the same memory space. All data had to be cache-line aligned and ev

        • Anything computationally intensive that could be parallelized

          Anything that's computationally intensive like that will start by asking "How many cores do I have?" and then break up the workload accordingly. If it got back 16, when really there were 8 cores, it would break up the work into 16 pieces instead of 8. Hyperthreading can squeeze out some performance if two threads are using completely different instructions, but are using the same cache lines. That's the opposite of what will happen here. The parallelized numerical solutions will be executing the same instru
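
          For what it's worth, the usual portable query really does report logical processors rather than physical cores, so a naive worker pool will oversubscribe exactly as described. A minimal sketch (nothing vendor-specific assumed):

          ```cpp
          // Sketch: the standard query counts *logical* processors, so with
          // Hyperthreading enabled it reports threads, not physical cores.
          #include <iostream>
          #include <thread>

          int main() {
              unsigned logical = std::thread::hardware_concurrency();  // may be 0 if unknown
              std::cout << "logical processors reported: " << logical << "\n";
              // A worker pool that splits work into `logical` pieces will create twice
              // as many workers as there are physical cores on an SMT machine.
              return 0;
          }
          ```
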

      • Web servers, or rather front-end web proxies, tend to be highly threaded and thus benefit from technologies like hyperthreading.

        • Okay, but if all of the threads are running at the same time, wouldn't that just make for more false cache sharing? And if the threads aren't running at the same time, the OS's thread scheduler can take care of that.
      • If you're bound by memory I/O or limited by cache then something like hyperthreading won't improve throughput and will probably make it worse. But even in that case you can have better fairness between threads and less bursty behavior at high load when doing hyperthreading.

        Even in an optimal case total throughput doesn't improve very much with hyperthreading. So results are disappointing if your only performance metric is how many operations you can perform per second or how much memory bandwidth you can blas

    • by Tough Love ( 215404 ) on Thursday July 26, 2018 @08:12PM (#57016402)

      So what you're saying is, you have no idea what hyperthreading is but you don't like it because it does not benefit all workloads? I hope you can see what's wrong with that argument. If not, I will try to help.

      Hyperthreading (a simplified form of SMT [wikipedia.org]) is about putting CPU cores to work that would otherwise stall, e.g., waiting on a memory load. If your workload has everything in L1 cache then SMT won't do anything and just wastes transistors, which could have been better used to provide more cores or more cache memory. Most loads do hit enough memory that SMT is a win for parallel latency; however, for those that don't, SMT can actually slow things down as multiple threads compete for limited superscalar resources such as register files. If there aren't too many parallel threads then it would be better to give each its own core.

      So here is the thing: to make best use of SMT you need to know something about how SMT works, and something about your workload. Then you can potentially use CPU affinity to tune your application. If you are ignorant about either of these things, or you can't justify the time to do the necessary tuning, or you don't have the necessary access, then sometimes, yes, SMT is just going to bite. But it wins on average, which is why all modern general purpose processors implement it, with the notable exception of ARM.
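
      As a rough illustration of the affinity tuning mentioned above (Linux/glibc-specific; the CPU number is a made-up example, and which logical CPUs are SMT siblings varies by machine, so check the sysfs topology first), pinning a thread to one logical CPU looks something like this:

      ```cpp
      // Sketch of pinning the calling thread to one logical CPU so it does not
      // share a physical core with another hot thread. Linux/glibc only.
      #ifndef _GNU_SOURCE
      #define _GNU_SOURCE  // pthread_setaffinity_np is a GNU extension
      #endif
      #include <pthread.h>
      #include <sched.h>
      #include <cstdio>

      static bool pin_current_thread_to(int cpu) {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(cpu, &set);
          // pthread_setaffinity_np returns 0 on success, an errno-style code on failure.
          return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
      }

      int main() {
          if (pin_current_thread_to(2))  // example CPU number only
              std::puts("pinned to logical cpu 2");
          else
              std::puts("could not set affinity");
          return 0;
      }
      ```
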

      Throughput is not the entire story about SMT; there is also power efficiency. Additional logic is required to fetch two independent instruction streams in parallel, keep them independent, and manage the additional cache complexity. This does not slow things down but eats power, which is why ARM so far does not implement it, and Intel does not use it for Atom. That sacrifices parallel throughput, and if you have any Atom devices, you will be painfully aware that they suck compared to their Core-arch cousins. Intel had to about-face on this when AMD dropped Ryzen on them, so now some low-end Intel parts also have hyperthreading.

      Fast forward to today's rumor. It should be clear that not having Hyperthreading/SMT hurts performance on average, but it can improve power efficiency. Simple conclusion: these i7s are aimed at the notebook/ultrabook/chromebook market, where battery life and weight are more important than performance. You don't want these for your server, desktop, or fat-ass laptop.

      • Simpler conclusion: a brand spanking new attack reported recently here on Slashdot takes advantage of weaknesses in hyperthreading, and Intel has no idea how to make it secure, so they are removing it instead.

      • So from this discussion it's interesting that hyperthreading has been stuck on 2 threads per core for so long. Why not 16 today?

        And what would be more power efficient - more hyperthreading or more cores?

        • by tlhIngan ( 30335 )

          So from this discussion it's interesting that hyperthreading has been stuck on 2 threads per core for so long. Why not 16 today?

          Because of diminishing returns. It's called "multithreading" and not "multiprocessing" because even though it shows up as a separate CPU core, its shared resources could benefit threads of the same process far more than disparate threads, especially with respect to cache utilization.

          It comes about because a modern CPU is super-scalar. That is, there are multiple execution uni

        • There is such a thing as MIPS MT [wikipedia.org] supporting 4 SMT threads. While 2 thread SMT is clearly worth the extra chip real estate on average, it is not clear that the additional complexity to support 4 is worth it, and nobody else followed MIPS in that direction.

    • You can buy a $99 processor with decent single-thread performance and not have to worry about games that require 4 threads.
  • spectre mitigation? (Score:2, Interesting)

    by Anonymous Coward

    spectre-related attacks rely on predictive execution in hyperthreading.

    could this be a mitigation, while providing improved performance so the new part still exceeds the outgoing part?

    • I would think this is far too soon in the development cycle to have a product with that fundamental a change ready to go out the door.
      • I'm no CPU expert, but it's my understanding that to increase yields, Intel already disables features or entire cores that don't work right on a particular chip. If HT isn't all that useful on the new chip, and it reduces Spectre-type risks, would it not be rather easy to disable it? It can already be disabled in the BIOS, right, so disabling it doesn't require any changes to the silicon.

    • by viperidaenz ( 2515578 ) on Thursday July 26, 2018 @07:43PM (#57016240)

      Spectre-related attacks rely on speculative execution; hyperthreading is not speculative execution.
      There are plenty of CPUs vulnerable to Spectre attacks that have no hyperthreading capability. Turning it off on your Intel CPU doesn't mitigate it either.

      • Not my field, but as far as I know speculative execution is the first step; the second step is a side-channel attack to get the results of the speculative execution from the cache. And hyperthreading seems perfect for side-channel attacks.
      • Yeah maybe this is more about side channel attacks such as this one:
        https://www.theregister.co.uk/... [theregister.co.uk]
    • spectre-related attacks rely on predictive execution in hyperthreading. could this be a mitigation, while providing improved performance so the new part still exceeds the outgoing part?

      A thought that occurred to me also, but I don't think so. Intel isn't going to drop Hyperthreading on all its parts, or AMD will happily kick their tail with its superior SMT implementation. This is about low-power parts aimed at the ultraportable market; see my longer post in this thread.

      • Intel isn't going to drop Hyperthreading on all its parts, or AMD will happily kick their tail with its superior SMT implementation.

        Doubtful. Increasing cores experiences diminishing returns. The difference between 8 cores and 8 cores + HT may be quite minimal except in very rare instances.

        • Intel isn't going to drop Hyperthreading on all its parts, or AMD will happily kick their tail with its superior SMT implementation.

          Doubtful. Increasing cores experiences diminishing returns. The difference between 8 cores and 8 cores + HT may be quite minimal except in very rare instances.

          Those rare instances tend to be the ones you care about, like compiling or video encoding. If you don't care about performance then by all means accept a wimpy processor; it's right for you.

          • Intel isn't going to drop Hyperthreading on all its parts, or AMD will happily kick their tail with its superior SMT implementation.

            Doubtful. Increasing cores experiences diminishing returns. The difference between 8 cores and 8 cores + HT may be quite minimal except in very rare instances.

            Those rare instances tend to be the ones you care about, like compiling or video encoding. If you don't care about performance then by all means accept a wimpy processor; it's right for you.

            Actually compiling would not be one of those rare instances. I/O and RAM are also huge factors. And again, **diminishing returns**: 8 cores vs 8 cores + HT will probably be less helpful than 2 vs 4 cores or 4 vs 8 cores. 8 cores without hyperthreading is not wimpy.

            • Increasing cores experiences diminishing returns. The difference between 8 cores and 8 cores + HT may be quite minimal except in very rare instances.

              Those rare instances tend to be the ones you care about, like compiling or video encoding. If you don't care about performance then by all means accept a wimpy processor; it's right for you.

              Actually compiling would not be one of those rare instances. I/O and RAM are also huge factors.

              You have it exactly backwards. The more RAM accesses you have, the more SMT helps. And I/O has almost nothing to do with SMT because the OS takes care of that... a thread waiting for I/O is not running, or in other words, not in any CPU core, so not affected by SMT.

              And again, **diminishing returns**: 8 cores vs 8 cores + HT will probably be less helpful than 2 vs 4 cores or 4 vs 8 cores. 8 cores without hyperthreading is not wimpy.

              You are talking nonsense. I get my fastest compiles with 8x2 SMT cores, exactly what my Ryzen part gives me. I just don't care about your other comparisons; they look like a smokescreen to me. A decent compiler (gcc, llvm) will use as many cores

              • Actually compiling would not be one of those rare instances. I/O and RAM are also huge factors.

                You have it exactly backwards. The more RAM accesses you have, the more SMT helps.

                You misunderstand; it's the amount of RAM that can greatly affect compilation speed.

                ... And I/O has almost nothing to do with SMT because the OS takes care of that... a thread waiting for I/O is not running, or in other words, not in any CPU core, so not affected by SMT.

                You misunderstand; it's the amount of I/O that can greatly affect compilation speed.

                And again, **diminishing returns**: 8 cores vs 8 cores + HT will probably be less helpful than 2 vs 4 cores or 4 vs 8 cores. 8 cores without hyperthreading is not wimpy.

                You are talking nonsense. I get my fastest compiles with 8x2 SMT cores, exactly what my Ryzen part gives me. I just don't care about your other comparisons; they look like a smokescreen to me. A decent compiler (gcc, llvm) will use as many cores as you have. You just tell it how many threads you want and away it goes. Compiling is embarrassingly parallel; it approaches the best case for SMT. Linking was a serializing bottleneck historically, but there is no good reason for it, and it is now largely fixed. Your evident confusion coupled with confident pronouncements that are flat wrong is a little concerning. Is this somehow normal for the circles you move in?

                Again, you evade the point, **diminishing returns**, substitute a different goal, "faster", and offer an apples-and-oranges comparison, Intel vs AMD.

                Merely saying it's faster with HT than without is not a meaningful counterargument. If you want a meaningful counterargument, you might compare an Intel 8 core with hyperthreading enabled

          • Intel isn't going to drop Hyperthreading on all its parts, or AMD will happily kick their tail with its superior SMT implementation.

            Doubtful. Increasing cores experiences diminishing returns. The difference between 8 cores and 8 cores + HT may be quite minimal except in very rare instances.

            Those rare instances tend to be the ones you care about, like compiling or video encoding. If you don't care about performance then by all means accept a wimpy processor; it's right for you.

            No, only people with giant bushy neckbeards care about those things these days; for everybody else it is fast enough. Video encoding doesn't need to be fast for normal use cases. And slow compilation isn't really improved by a faster CPU; everybody else who has to compile it will still complain about your sucky code and insufficient build system, if it is actually slow in a use case where that matters.

            • The performance processor market is clearly not for you. Those of us who do care about performance, because time equals money, will continue to invest in high-performance equipment. A rubbernecking bystander such as you should be perfectly fine with an Atom; it will suit you perfectly.

              • by Khyber ( 864651 )

                "For those of us who do care about performance because time equals money will continue to invest in high performance equipment."

                WRONG. Those who care about performance learn to code at the bare metal level and OPTIMIZE. Meanwhile, you keep using shitty bloated high-level languages that need so much beefy hardware because y'all don't know how to code for shit.

                • Those who care about performance learn to code at the bare metal level and OPTIMIZE.

                  Whoosh. Those of us who code professionally want to do it on fast hardware so we don't waste our own time.

            • It'll be the gaming crowd with $1,000 CPUs they're going to replace in 4 months with another $1,000 CPU talking about how many AIs and individual bullets they can have on-screen at any given time without lag.
      • by Khyber ( 864651 )

        "This is about low power parts aimed at the ultraportable market,"

        Actually, no. Increasing physical core count means increasing actual power consumption. This would be a shitty move to make towards the portable market, versus just having HT on a lower-core-count CPU where you can keep power consumption down.

    • by Targon ( 17348 )

      To address the Spectre and Meltdown problems, Intel needs a more significant change to the core design, and that won't happen until 2019 or 2020. These chips won't have those fixes.

  • by Gravis Zero ( 934156 ) on Thursday July 26, 2018 @07:14PM (#57016054)

    Will these still need to have Meltdown and Spectre patches? If Intel are just pooping out new chips with no fix for the root cause then it's kind of a moot point to talk about its speed.

    • Will these still need to have Meltdown and Spectre patches?

      Spectre, Meltdown and other speculative execution exploits can work more rapidly with Hyperthreading/SMT but they also work perfectly well with single threaded cores. Cache timings are the leakage point.

    • then it's kind of a moot point to talk about its speed

      Why would it be moot?

      a) Talking about speed when the software fixes for the problems have a speed cost is entirely relevant.
      b) Talking about a security problem that doesn't affect the vast majority of people should have no impact on speed.

      To be honest the presence or absence of Spectre / Meltdown patches will have no bearing on my purchasing decision going forward. I do not provide computers to 3rd parties to analyse my system and execute code as they please. I do not need to segregate systems via virtualisat

  • by Joe_Dragon ( 2206452 ) on Thursday July 26, 2018 @07:14PM (#57016056)

    Can we get more PCIe lanes on the desktop?

    Not pay more or get less.

    AMD FOR THE WIN!!!!!!!!!!

    • by Targon ( 17348 )

      In some cases, the PCI Express lanes are a function of the chipset, not the CPU. AMD is closer to the system-on-a-chip approach by having dedicated PCI Express lanes right on the CPU itself. The only downside to that is that socket AM4 places limits due to the number of pins, so more PCI Express lanes and more memory channels would actually require a new socket. On the positive side, Zen2 cores will be the 2019 generation, and in 2020, even though AM4 will still be used, we may see an

    • I'm hoping for a high-clock 2 CCX threadripper part on Zen2.

  • by jader3rd ( 2222716 ) on Thursday July 26, 2018 @07:21PM (#57016084)
    I know we have to disable CPU-level Hyperthreading anyway. Too many false cache swaps when it's on.
    • Re: (Score:3, Informative)

      by Anonymous Coward
      Anyone who cares tests to see if they are in one of the fringe cases that get worse performance. It's far more likely to either have no effect on performance or give a slight increase.
    • No, anyone who doesn't have a clue disables it anyway.

      Anyone who cares assesses their workloads against the benefits. Nearly all people who care find they are far better off leaving it on.

      • No, anyone who doesn't have a clue disables it anyway.

        Anyone who cares assesses their workloads against the benefits. Nearly all people who care find they are far better off leaving it on.

        How many times do you go "I wonder if turning off hyperthreading will help", and then it does, every time, before it just becomes an SOP?

        • How many times do you go "I wonder if turning off hyperthreading will help", and then it does

          Very close to zero for most computing loads.

      • Not really; most real-world workloads don't have enough instruction-level parallelism to make use of current Intel cores as it is. If the workload isn't strictly single-threaded, and the scheduler is halfway decent, hyperthreading is usually beneficial.

        • Not really; most real-world workloads don't have enough instruction-level parallelism to make use of current Intel cores as it is.

          Except for the types of workloads people would normally buy i7s for.

          • Like? I know there are a few graphics and video applications that can bring the ILP needed to saturate an Intel core, though hyperthreading tends to have less than a 1% penalty there. https://www.phoronix.com/scan.... [phoronix.com]. The other kind of workload that doesn't benefit is one that can only utilize a few threads, in which case an i3 or i5 with 4 cores and an equal clock would perform just as well on the workload. The reason I bought the Skylake i7 over the i5 was precisely hyperthreading, to speed up compiling and the vi

  • by slack_justyb ( 862874 ) on Thursday July 26, 2018 @07:30PM (#57016154)

    So to recap Intel's 8th gen: the i7 had the most cores at six, with HT enabled. The i5 was just like the i7 but with HT turned off. The i3 had HT in gen 7, so it was two cores/four threads; in the 8th gen they gave it two more cores and turned off HT. So: i7=6/12, i5=6/6, i3=4/4. The i9 in gen 8 was really weird. The clock would scale down the more cores you used, which was very odd, and the 18-core version was roughly the price of a used car. The performance per dollar with the i9 was incredibly low. A 3.4GHz i7 would give you a better CPU mark / $ by almost 200%, not to mention that an AMD six-core FX-6300 would give you better CPU mark/$ by almost 800%. So clearly the i9 wasn't going to win any awards with price-sensitive consumers.

    So all that said, and this is my opinion so it's literally worth whatever value you choose to give it, I think Intel is going to reposition the lineup to disable HT on all "consumer" processors and keep HT and "pro" features just in the i9. I personally think it's a backhand to Intel consumers, but I'm an AMD fanboy, so full disclosure there. But yeah, I think the i3, i5, and i7 are all going to eventually be labelled as the "cheapy", "actual desktop", and "gamer" CPUs in that order, and the i9 is going to be viewed as "workstation", and thus the i9 isn't going to focus on price/performance balance. So, i3 will be 4/4, i5 will be 6/6, and i7 will be 8/8, with the i9 being whatever crazy numbers they throw at the chips, hopefully without any of that weird core/HT/GHz scaling stuff.

    That's just my hot take on this, open to hear what others think.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      "Workstation" absolutely requires ECC support. So, no, i9 ain't going workstation anytime soon, that's for the gimped try-out low-end Xeons (the Xeon E3 in the old nomeclarture, maybe they're called Xeon "Tin" nowadays? because Bronze is the low-end Xeon E5)

      • Why would a workstation absolutely require ECC support? And before you answer, have a think about the benefits of ECC vs the risk you're actually mitigating working on a workstation. Not a bank, not some critical file server, but a workstation.

    • by AmiMoJo ( 196126 )

      Just buy Ryzen or Threadripper.

      Cheaper, better, no performance crippling security flaws, and AMD CPU sockets last much longer than Intel ones so you will probably be able to upgrade in a few years without replacing mobo/RAM as well.

      Threadripper in particular gives you loads of PCIe lanes which means better future proofing too (lots of bandwidth available for that USB 4 and 9000Gb LAN card you will want in 2023). With TR or Ryzen Pro you also get stuff like hardware RAM encryption so the dream of a physicall

    • If Ryzen 2 is as good as AMD says it will be (projected to match Icelake-level IPC), then not having HT on the flagship lines will backfire on Intel. AMD has consistently been great about not segregating features (even if motherboard vendors don't always enable them).

      Maybe Intel wants to simplify cores to improve yields, and having dies without HT at all lets them do that, but it's going to be at the expense of raw performance, right when the competition is closer than it's been in a long time, might

  • It's? (Score:5, Insightful)

    by DontBeAMoran ( 4843879 ) on Thursday July 26, 2018 @09:18PM (#57016686)

    It's base clock speed is 3.6GHz

    Surely you mean "Its base clock speed is 3.6GHz" and not "It is base clock speed is 3.6GHz".

    • by AmiMoJo ( 196126 )

      Thanks, it's been too long since Slashdot had a good grammar nazi post. Some days it feels like no-one cares about apostrophes any more.

  • From Intel, it has been confusing for a long time. Is a CPU 2 core/2 thread, 2 core/4 thread, 4 core/4 thread, 4 core/8 thread, 6 core/6 thread, 6 core/12 thread, or now, 8 core/8 thread, or 8 core/16 thread? The name alone does not really tell you much, so doing a lookup online is needed. To make it worse, the U series of chips has tended to be dual-core, even if it is branded as an i7.

    In general, we have seen the i3 line cover the 2 core and up to the 4 core/4 thread mark(as of the 8th generation i3

"If it ain't broke, don't fix it." - Bert Lantz

Working...