Upgrades AMD Hardware

Intel Removes "Free" Overclocking From Standard Haswell CPUs 339

crookedvulture writes "With its Sandy Bridge and Ivy Bridge processors, Intel allowed standard Core i5 and i7 CPUs to be overclocked by up to 400MHz using Turbo multipliers. Reaching for higher speeds required pricier K-series chips, but everyone got access to a little "free" clock headroom. Haswell isn't quite so accommodating. Intel has disabled limited multiplier control for non-K CPUs, effectively limiting overclocking to the Core i7-4770K and i5-4670K. Those chips cost $20-30 more than their standard counterparts, and surprisingly, they're missing a few features. The K-series parts lack the support for transactional memory extensions and VT-d device virtualization included with standard Haswell CPUs. PC enthusiasts now have to choose between overclocking and support for certain features even when purchasing premium Intel processors. AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by KZigurs ( 638781 ) on Thursday June 13, 2013 @02:19PM (#43998605)

    AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs.

    It is also significantly slower buck for buck in real life workloads.

    • by Squiddie ( 1942230 ) on Thursday June 13, 2013 @02:27PM (#43998715)
      I try to practice the good enough philosophy, and AMD is good enough. I don't get the whole Intel/AMD fanboyism. I certainly would feel cheated if I just had to have Intel, though.
      • Re: (Score:2, Insightful)

        by Lumpy ( 12016 )

        Mostly a bunch of whiny babies that actually do not do anything with their computers.

        Real computer users want cores, lots of cores...

        • Re: (Score:3, Insightful)

          by Anonymous Coward
          Who modded this insightful? "Real computer users want lots of cores?" Is that the only thing useful in processors, now? Apparently anyone whose workload isn't able to be easily split up over several cores isn't a "real computer user." Imagine that.
        • No, they don't (Score:4, Informative)

          by Sycraft-fu ( 314770 ) on Thursday June 13, 2013 @05:14PM (#44001011)

          More cores are useful if, and only if, you have software threaded out enough to use them. Some workloads are, many are not. This "OMG moar cores lol" attitude is silly, and to me reeks of fanboyism. "My chosen holy grail platform does this, therefore everyone should want it!"

          Also, more cores aren't necessarily useful if things overall are too much slower. For example, you'd expect a Phenom II X6 1100T to be faster than a 2600 at x264 encoding. I mean, it is all kinds of multi-threaded, and the 1100T has 50% more cores. Maybe the FX-8350 too. While it isn't 6-core, it does have 4 modules with 8 integer cores, so 8 threads.

          Well, the reality is that they are not (http://www.anandtech.com/bench/CPU/27). The 1100T and FX-8350 are behind pretty much all modern Intel CPUs. An i5-2400 beats them out. Despite the core advantage, the speed disadvantage per core is too much.

          But go ahead and keep telling yourself that you are the only TRUE kind of computer user because you care more about cores than actual performance.

          • The motherboard is also a consideration; the CPU is not very useful without it.
            When I built my current primary system, the Intel motherboards cost way too much for significantly fewer capabilities. I was able to get a motherboard that had the features I wanted for less than the closest I could get from Intel, and put the savings toward a better CPU.
            If money is no object, the feature set you need is available from Intel, and you need the highest-end per-core performance, then sure.
    • by Rockoon ( 1252108 ) on Thursday June 13, 2013 @02:50PM (#43999025)

      It is also significantly slower buck for buck in real life workloads.

      Buck for buck? Are you on crack?

      AMD wins the price/performance comparison. Intel wins the peak performance comparison.

      Looks to me like you are practicing the big lie for your masters at Intel.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Intel also wins performance per watt.

    • by Mashiki ( 184564 )

      It is also significantly slower buck for buck in real life workloads.

      Yeah... well, no, [cpubenchmark.net] you might want to look up the price/core cost of AMD vs Intel; then you'll quickly see AMD tromps all over it. And really, with the Vishera cores you're seeing a negligible loss in real-world performance. The only place where Intel beats AMD in cost-per-core is with the celery (Celeron) line.

  • That's because they are not number one. Like Avis, they have to try harder.

  • Obvious sales pitch is obvious:

    AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs."

    Feature #1, TSX: http://en.wikipedia.org/wiki/Transactional_Synchronization_Extensions [wikipedia.org] I'd imagine nobody codes for this.

    Feature #2: http://software.intel.com/en-us/articles/intel-virtualization-technology-for-directed-io-vt-d-enhancing-intel-platforms-for-efficient-virtualization-of-io-devices [intel.com]

    It can still do virtualization just fine: http://forums.anandtech.com/archive/index.php/t-2133898.html [anandtech.com]

    Not an Intel fanboy or anything, but they're not as arrogant as people are making them out to be.

    • I'd imagine nobody codes for processor features that are limited to a particular brand or model lineup...

      • by Zan Lynx ( 87672 )

        From what I've read in the last few months, the Linux kernel and glibc will both be adding transaction lock support. The performance benefits are pretty nice even when limited to backwards compatibility with existing lock methods.

        Also, libraries like Intel's (of course) TBB will add support.

        But all of that will be done with feature detection and fall back to using existing code.

        It's like saying that nobody codes for MMX, SSE, Altivec or 3DNow. Or that nobody uses a particular Nvidia OpenGL extension only av

    • Re:Sales Pitch (Score:5, Informative)

      by TopSpin ( 753 ) on Thursday June 13, 2013 @02:57PM (#43999107) Journal

      I'd imagine nobody codes for this. [TSX]

      That is going to be an important feature when programmers eventually leverage it. Hardware-assisted optimistic locking can make concurrency easier, safer and more efficient, as the CPU takes care of coherency problems usually left to the programmer and CAS instructions. Imagine being able to give each of thousands or millions of actors in a simulation their own independent execution context (instruction pointer, stack, etc.), all safely sharing state and interacting with each other using simple, bug-free logic, as opposed to explicit and error-prone locking and synchronization. This has been done with software transactional memory, but it frequently fails to scale due to lock contention. Hardware-based TM can prevent that contention by avoiding lock writes.

      It is extremely cool that Intel is implementing this on x86.

  • by Anonymous Coward on Thursday June 13, 2013 @02:22PM (#43998655)

    Is there anyone besides a small group of people who benefit from higher clock rates? Most people I know would pick battery life over performance on mobile devices. Desktops have been "powerful enough" for at least the past 5 years. Is it just about bragging rights at this point?

    • I know it does for photo processing. I have a laptop with a dual core i5 (something like 2.9GHz), and when I come home with a card full of RAW images it takes an hour at least to render them to jpeg in lightroom. RawTherapee is also somewhat slow. Faster storage would help somewhat (I really need to find the right size Torx screwdriver so I can put my SSD in this laptop), but it is still rather CPU-bound.

    • Construction, architecture, engineering and manufacturing companies all use 3D-heavy workstations. Faster memory, faster hard drives, and faster CPU clock speeds make a large difference in employee downtime when designing new products. So no, desktops are not powerful enough.
    • by BLKMGK ( 34057 ) <{morejunk4me} {at} {hotmail.com}> on Thursday June 13, 2013 @02:53PM (#43999057) Homepage Journal

      Add to the list rendering and those of us who compress and process video - of which I am one. Faster clock speeds can save me HOURS of time and are why I run an overclocked Sandy i7 at over 4GHz. It runs for hours at a time fully slammed with no problems.

      So yeah, there are use cases for this outside of your sphere of knowledge.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        As someone who writes the software you're probably using for your video compression:

        Fuck you, Fuck you, Fuck you!

        I have wasted more of my life on idiotic bullshit bug reports from people with clocked-to-hell hardware. A one-in-ten-thousand failure rate times hundreds of thousands of OCed users = big waste of my @#$@ time. There is a reason processor vendors sell parts clocked at the speeds they do.

        • by Mycroft_VIII ( 572950 ) on Thursday June 13, 2013 @08:55PM (#44002869) Journal
          Yes, people who OC should NOT send in bug reports except to the processor manufacturer, and should give detailed reports of their OC in that case.
              I've seen lots of weird bugs vanish when even "factory overclocked" parts are put back at stock settings.
              If I were you I'd post a no-bug-reports-if-you-OC-anything policy.
              And I'd go through your bug reports and label anything from an OC'er as "bug possibly OC failure, will not investigate, closed".
                It's like someone who hot-rods his car screaming at Shell about their gas because his car only gets 10 mpg.

          Mycroft
    • by slaker ( 53818 )

      If you have a compute task that's not bound by I/O or RAM such as media transcoding, a faster CPU can be quite helpful. My time to reencode a BD dropped by almost 30% in a move from Lynnfield to Ivy Bridge versions of i7; that's not insignificant for a process that still takes hours. Putting aside my dubious need, we're not that far from consumer 4k video and the increased demands that will bring.

    • by armanox ( 826486 )

      Some remarks:

      1 - If you're buying an i5 or i7, chances are you're using more than the average user (especially if you're going with an i7).
      2 - The processors in question are desktop processors, not the mobile ones.

    • That's what I wonder as well. For all CPU intensive workloads, wouldn't the extra cores do it? Also, if certain applications require faster cores, wouldn't it be better if they were multi-threaded more?

      As for the engineering & video processing apps, seems to me like they could make use of something like the Itanium

  • by Anonymous Coward

    Now that AMD is no longer a threat to them, they can go back to their old tricks again.

  • Shouldn't the unlocked multiplier version be a premium product? This is an unnecessary step backwards. I think most people who are interested in a K-series would be more willing to pay a premium. Who in their right mind would EVER give up VT-d for an unlocked multiplier? Maybe they just want to kill the tradition once and for all.

  • by apexdawn ( 915478 ) on Thursday June 13, 2013 @02:25PM (#43998697)

    Well, "free" clock headroom aside, Intel removing features from the K-series parts (VT-d, etc.) has been going on since Sandy Bridge, I believe. Basically, if you want the best of both worlds you will want to invest in an Extreme Edition processor. As a quick search on ark will show, the 3770K does not have VT-d while the 3930K does.

    -Reed

    • Best of all worlds is the socket 2011 platform - 40 PCIe lanes on-die vs. 16 PCIe lanes on everything else except the even older socket 1366 platform.

      I was looking into upgrading my system when the Haswell CPUs came out, and I was disappointed. Then I ordered a socket 2011 motherboard with 4 full-length PCIe slots and quad-channel DDR3. It ended up being about $100 more than a comparable Haswell Z87 chipset build, with a faster (MHz) cpu.

      I got the (sandybridge-E) core i7 3820 quad core for $249, which

  • by Joe_Dragon ( 2206452 ) on Thursday June 13, 2013 @02:26PM (#43998711)

    This is why AMD cannot die. Just think of what Intel will do without AMD in the market.

  • Meh. (Score:5, Insightful)

    by nitzmahone ( 164842 ) on Thursday June 13, 2013 @02:28PM (#43998727)

    I've never found overclocking to be worth the trouble. Anytime there's a stability issue with an overclocked PC, there's always that nagging doubt that all my troubleshooting is for naught, because it was a fluke bit fail due to the overclocking. Life's too short - skip the anxiety and run your processor at its rated speed.

    • My thoughts as well. I kind of wonder how many people out there are still overclocking. It's so rare that anything I do is CPU bound anymore. Maybe I'm getting old because I just want things to work.

      • by Xenx ( 2211586 )
        The biggest reasons would be encode/decode, gaming, and enthusiasts. Each has their reasons, and at least two of them have actual use for higher clock speeds. Why pay $1000 for a CPU when I can pay $250 and overclock? The only time I've ever had a problem was physical, and my own fault. Sometimes you have to settle for a little less clock speed, but you can test and maintain relative stability.
        • by s.petry ( 762400 )

          <shrug> I never pay that much for a CPU, since I have had exceptional experiences with AMD CPUs. In my experience, they have always outperformed Intel's processors, and generally cost half as much. I could overclock them if I wanted, and back in the Athlon 800-ish series I did.

          • Re:Meh. (Score:5, Informative)

            by girlintraining ( 1395911 ) on Thursday June 13, 2013 @04:12PM (#44000169)

            In my experiences, they have always outperformed Intel's processors, and generally cost half as much.

            That hasn't been the case for several generations of processor design, unfortunately. The top end of the AMD processor line can't compete with Intel on performance. That's why they've gotten so cheap -- so OEMs build systems on them. The 'Intel Tax' puts a lot of their mid-range and above stuff out of reach of the average consumer, and generally you're only finding them in laptops now because of the superior power usage and thermals...

            If you want per-unit performance today, you buy Intel. If you want commodity, you buy AMD.

      • by lgw ( 121541 )

        I do video transcoding that doesn't know how to use the GPU yet, so I overclock at home on my server. My gaming box has "everything overclocked" just because it was a fun project.

    • So troubleshoot at stock speeds, then switch back to your overclock when you've solved the problem. That also has the positive effect of actually showing you whether it's your overclock that's the issue.

    • by BLKMGK ( 34057 )

      Then you aren't doing it right. If you set up an overclocked machine correctly and don't try to push it right to the bleeding edge, you get plenty of bang for the buck. My current Sandy machine is pushed to 4.5GHz and I save a great deal of time processing video as a result - it's an i7 3770K. It processes video for hours on end with no issues and reboots only for updates occasionally. Cooling is your biggest issue; water works best, and don't push a ton of voltage through it. Start with the basics and work u

    • Life's too short- skip the anxiety and run your processor at it's rated speed.

      With liquid cooling, your processor can run significantly above its rated speed because most failures are based on thermal overload. The core in your "slower" processor is the same as a "faster" one, but it failed qualification at some point, and it's not due to a physical defect per se but because thermal tolerances are so tight that there may be a circuit cluster that becomes unstable due to parasitics; usually it's highly localized heating. Liquid cooling can bring not just that component, but all the ot

    • Overclocking stopped having a real impact once clock speeds took a back seat to cores. I guess it's still fun for certain people to see how much they can squeeze out, but real-world performance just doesn't seem to justify the trouble.

  • those are enterprise features? why would you OC a chip in something that brings you revenue and risk a problem?

    • I think this is precisely the point. This is a business decision to prevent people from buying cheap unlocked desktop CPUs with VT-d, overclocking them, and, say, using them to run their dev/test QA VM environments - hell, even production environments if you're really pinching pennies. If you want to get really "out there", it's possible that there was pressure from hypervisor vendors for Intel to lock this down so that they didn't have to support the random failures that can occur with overclocking.

      Intel (

  • If Intel is ever allowed to become a monopoly again, it will produce extremely pricey and extremely limited processors. Everybody should love AMD, because it is the only thing stopping Intel from selling shit wrapped in golden paper for thousands of $$.
    • You think AMD is any threat to Intel? They stopped having any real competitive pressure on Intel years ago.

  • = short life span for your CPU.. so, I'm not too worried..
  • by girlintraining ( 1395911 ) on Thursday June 13, 2013 @02:50PM (#43999021)

    The K-series parts lack the support for transactional memory extensions and VT-d device virtualization

    Yeah, well, fun fact... a lot of enthusiasts like myself like things like VMWare, which depend on this kind of thing. Deleting those features from the unlocked line means I just won't buy them... one of the big drivers for overclocking is to run virtualization. You might think it's "just gamers" doing this, but a lot of us do network and system administration and deployment and like the ability of having a "lab in a box" offered by current processors. You take that away and you're going to find your bottom line hurting, possibly more than a little.

    I don't know which of your marketing assclowns came up with this idea as a revenue-generating measure, but it's going to backfire in their face, and I hope when it does you fire their ass, apologize, and never try this again. You're only succeeding in driving us towards commodity hardware like AMD's offerings... All they need to capitalize on the market you've just shit on is to offer mainboards with multiple sockets for their CPUs, and make the mainboards cheap and the core system very energy efficient... and not only will the enthusiasts ditch you, but so will the data centers...

    You're opening a can of worms here. Bad plan, darlings.

    • by BLKMGK ( 34057 )

      This can of worms has been opened awhile, you've obviously not tried to build a K based machine running virtualization. See my post below...

      • This can of worms has been opened awhile, you've obviously not tried to build a K based machine running virtualization. See my post below...

        Got one right now, actually; it's an i5-3570K. To the best of my knowledge, no features are disabled compared to other models based on this core. But vmware needs VT-d to function, and if they kill this feature off, it won't work. So, no, it hasn't been opened for "awhile"; this is something that's started rolling out in the last year.

        • by BLKMGK ( 34057 )

          Go look up the spec sheets for Sandy CPUs. Or better yet Google 3570K and VT-d. Surprise! I found out the hard way myself when I built an ESX server and couldn't install, I found the feature greyed out in the BIOS. A quick Google on that model and I realized I'd been had too.

          http://ark.intel.com/products/65520 [intel.com]

          http://www.tomshardware.com/forum/356118-28-purchased-3570k-virtualization [tomshardware.com]

          • Go look up the spec sheets for Sandy CPUs. Or better yet Google 3570K and VT-d. Surprise!

            Sorry, my bad. I confused VT-d with VT-x. Yes, you're correct -- it won't run an ESX server, but I use Workstation, so it's been fine for me. That sucks though -- I know a lot of people who build dedicated lab machines on a rack; I don't have the funds to lay out on something that complex, nor the space where I live right now, but I can see how that would screw you over... especially when VMWare's hardware requirements [vmware.com] white sheet doesn't specifically list it either. :(

            This kind of cpu fragmentation I think

        • But vmware needs VT-d to function, and if they kill this feature off, it won't work.

          Bullshit. Even ESX/ESXi can work just fine without VT-d. The only thing you lose is I/O pass-through. Cut out the hyperbole. The fact that you can explicitly disable VT-d in VMWare's settings disproves your ridiculous claims.

        • Re: (Score:2, Insightful)

          by armanox ( 826486 )

          No it doesn't. Look up the difference between VT and VT-d. The i5-3570K does not have VT-d (I was aware of that when I bought mine). This feature is only used by Xen and HyperV (I can't speak for ESX) for very specific functions.

          Comparison for you (scroll down so you can see VT-d, VPro, and Trusted Execution):

          Sandy Bridge:
          i5-2500K: http://ark.intel.com/products/52210 [intel.com]
          i5-2500: http://ark.intel.com/products/52209 [intel.com]

          Ivy Bridge:
          i5-3570K: http://ark.intel.com/products/65520 [intel.com]
          i5-3570: http://ark.intel.com/products/6 [intel.com]

    • The K-series parts lack the support for transactional memory extensions and VT-d device virtualization

      Yeah, well, fun fact... a lot of enthusiasts like myself like things like VMWare, which depend on this kind of thing. Deleting those features from the unlocked line means I just won't buy them... one of the big drivers for overclocking is to run virtualization.

      None of the K processors have ever had VT-d. Also, VMWare ESXi is about the only virtualization product which uses VT-d (direct hardware access f

    • Overclocking is risky. The clock rate is what it is because that's what the chip is reliable at. Increasing that is increasing risk, and maybe that's OK if you're a gamer and don't mind breaking things, but if you're dependent upon overclocking for important business reasons then you're better off just getting a faster CPU in the first place.

  • by BLKMGK ( 34057 ) <{morejunk4me} {at} {hotmail.com}> on Thursday June 13, 2013 @02:56PM (#43999087) Homepage Journal

    Current K-rated CPUs lose this and possibly some other features. I didn't pay attention to this and found out the hard way when I couldn't run an overclocked ESXi Sandy machine. Pissed is an understatement! There's no good reason to do this other than to screw with the marketplace.

    I've switched to a XEON CPU of Ivy heritage and GL finding a board for one of those that runs ESX-i and can be overclocked. Nearly every machine I own is overclocked and has been for many years and it pisses me off to get jerked around like this by Intel.

    • There's no good reason to do this other than to screw with the marketplace.

      Maybe. Another possibility is that those features are heavily timing dependent and the OC chips caused more problems than they solved.

      • by BLKMGK ( 34057 )

        Current CPUs that aren't K-rated can be overclocked, though not to the same degree. I've never heard of issues from folks overclocking those and running them in virtual environments. Somehow I doubt that this is for our protection, but they certainly haven't said one way or the other. If I could overclock my damned XEON I'd sure do it.

    • > There's no good reason to do this other than to screw with the marketplace.

      This is what happens when there is less competition. We need AMD or some other company to scare Intel into competing on quality rather than artificial scarcity.

  • I actually think this makes sense from a business perspective since the virtualization features would be targeted towards their Xeon line vs. the home PC market. As for overclocking, I do it moderately on both Intel and AMD systems but this lock on the Haswell reminds me of the same debates around Sandy Bridge and Ivy Bridge and ... back to when they started locking the clocks on the Pentium IIs. The advantages of overclocking don't just go against getting the most speed out of the hardware, they also all

  • by TheSkepticalOptimist ( 898384 ) on Thursday June 13, 2013 @03:23PM (#43999449)

    Yes, I remember the good ol' days when you could get a $100 CPU and make it work like an $800 one. I remember in particular the days of buying a cheap Celeron and having it perform like a much more expensive Pentium II or even P3.

    And I also remember days of headaches with stability issues, overheating, and other stupid problems, all to squeeze a few extra FPS out of Doom.

    Nobody overclocks anymore, and if they do, it's like getting a trophy for trolling a blog. It's completely unnecessary and doesn't really offer anything except a feel-good pat on thine own back when you see your completely arbitrary and virtual benchmark numbers rise while you ruin your CPU.

    What needs the extra performance these days? You need to Tweet faster? Like on Facebook faster? Browse a website fractions of a millisecond faster?

    Games used to drive overclocking, but GPUs are where game performance lies these days. Sure, maybe overclocking your CPU by 50% might offer 1% more FPS, but who the fuck really cares? Nobody with a life, that is.

    Intel realizes that the enthusiast market for PCs has nosedived, and it's obviously cheaper to produce CPUs where you don't have to worry about the kind of performance tolerances that are required for overclocking.

    And I don't think "enterprise" level developers are buying cheap computers and then overclocking to get better VM performance. I mean really? If you consider yourself an "enterprise" developer then get the "enterprise" to buy you a decent workstation or VM server. I don't think your "enterprise" wants you to spend days trying to optimize performance on your workstation, I'd fire anybody that wastes any amount of time in a BIOS.

    I would say Intel should focus on offering one "enthusiast" level CPU that is completely unlocked for overclocking. I mean, if people want to burn out their CPUs repeatedly, it's more money from a market segment that is drying up, but I think in general Intel or any CPU company should not have to worry about providing overclockable CPUs across their product line.

    The bottom line is that benchmarks aside, if you ever looked at your Task Manager you'd probably realize that your CPU is idling at 1% usage 99% of the time, so you want to make the System Idle task run faster? I don't get it anymore.

    • Re: (Score:3, Interesting)

      by jason777 ( 557591 )
      What's with this attitude in this article? I overclocked my 3GHz i7-950 to 4.2GHz with a better cooler and a couple nights of testing. Now, two and a half years later, the machine still performs very well. And I do a lot of development, video editing, and audio recording. I have not once had an overtemp, blue screen, crash, freeze, nothing.
    • by Rockoon ( 1252108 ) on Thursday June 13, 2013 @04:13PM (#44000183)
      You bring up the past a lot, pointing out that the enthusiast/etc market is much smaller than it used to be...

      ...but then you bring an argument from the past, that of burning out CPU's, and try to use that as some sort of point.

      Pick a decade and stick to it, rather than picking and choosing facts. People don't burn out their CPUs anymore when overclocking, and that's been true for an entire fucking decade now. Seems to me that you never overclocked anything, ever, and are using lots and lots of excuses now to rationalize your irrational fear of it ("idle task"... really? Fucking retard..)
    • by gl4ss ( 559668 )

      ..they didn't notice the market nosediving.

      back in the day they didn't sell overclocking-friendly chips separately. They realized there's a market and started selling to them; that's why you have these chips on the market. For their marketing dept it would be problematic if they all had the same features and happened to reliably overclock 10-20%, because that would make people ask wtf they're paying for if they're buying premium non-K chips..

    • by Holi ( 250190 )

      Is it really necessary to say no one needs something just because you don't? Sorry, but responses like yours are useless, as they are more insult than info. Next time try to leave your attitude out of your responses and maybe you'll get some good karma for once.

    • The big issue nowadays is how much RAM you can install on your system. If you can install 16 to 32 GB of RAM to run under Windows 7 Professional, you can work on VERY large media files with nary a slowdown issue on most Intel Core i5 and i7 CPU's.

  • The real reason for this change is fab yields.

    It's how they do all the other processors as well:

    o manufacture
    o test
    o blow fuses as needed for failed tests
    o bin the part as an xxxyyyzzz part

    One of the reasons Apple machines tend to be more expensive is that they pay a premium for higher-performance "speed burst" parts relative to other laptop vendors, so the chips that rate out as supporting a higher speed-burst clock go into the Apple bin.

    Similarly, RAM chips get binned as well; those that bin out as supporting withi
