Intel Removes "Free" Overclocking From Standard Haswell CPUs 339
crookedvulture writes "With its Sandy Bridge and Ivy Bridge processors, Intel allowed standard Core i5 and i7 CPUs to be overclocked by up to 400MHz using Turbo multipliers. Reaching for higher speeds required pricier K-series chips, but everyone got access to a little "free" clock headroom. Haswell isn't quite so accommodating. Intel has disabled limited multiplier control for non-K CPUs, effectively limiting overclocking to the Core i7-4770K and i5-4670K. Those chips cost $20-30 more than their standard counterparts, and surprisingly, they're missing a few features. The K-series parts lack the support for transactional memory extensions and VT-d device virtualization included with standard Haswell CPUs. PC enthusiasts now have to choose between overclocking and support for certain features even when purchasing premium Intel processors. AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs."
Nice biased wording there (Score:5, Insightful)
AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs.
It is also significantly slower buck for buck in real life workloads.
Re:Nice biased wording there (Score:5, Insightful)
Re: (Score:2, Insightful)
Mostly a bunch of whiny babies that actually do not do anything with their computers.
Real computer users want cores, lots of cores...
Re: (Score:3, Insightful)
No, they don't (Score:4, Informative)
More cores are useful if, and only if, you have software threaded out enough to use it. Some workloads are, many are not. This "OMG moar cores lol," attitude is silly, and to me reeks of fanboyism. "My chosen holy grail platform does this, therefore everyone should want it!"
Also, more cores aren't necessarily useful if everything overall is too much slower. For example, you'd expect a Phenom II X6 1100T to be faster than an i7-2600 at x264 encoding. I mean, it is all kinds of multi-threaded, and the 1100T has 50% more cores. Maybe the FX-8350 too; while it isn't a 6-core, its 4 modules give it 8 threads.
Well, the reality is that they are not (http://www.anandtech.com/bench/CPU/27). The 1100T and FX-8350 are behind pretty much all modern Intel CPUs. An i5-2400 beats them out. Despite the core advantage, the speed disadvantage per core is too much.
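The tradeoff being argued here (more, slower cores vs. fewer, faster ones) is basically Amdahl's law. A minimal sketch, with made-up per-core speeds and parallel fractions purely for illustration, not real benchmark numbers:

```python
# Amdahl's law: effective throughput from n cores when only a
# fraction p of the work is parallelizable. Illustrative only.

def speedup(p, n_cores, per_core_speed=1.0):
    """Relative throughput: per-core speed scaled by Amdahl's law."""
    return per_core_speed / ((1 - p) + p / n_cores)

# A hypothetical 6-core at 0.7x per-core speed vs. a 4-core at 1.0x,
# on a workload that is 80% parallel:
six_core = speedup(0.8, 6, per_core_speed=0.7)
four_core = speedup(0.8, 4, per_core_speed=1.0)
print(six_core < four_core)  # True: the faster cores win here
```

With these assumed numbers the 4-core lands around 2.5x and the 6-core around 2.1x, which is the shape of the benchmark result the comment describes.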
But go ahead and keep telling yourself that you are the only TRUE kind of computer user because you care more about cores than actual performance.
Re: (Score:3)
When I built my current primary system, the Intel motherboards cost way too much for significantly fewer capabilities. I was able to get a motherboard with the features I wanted for less than the closest Intel option, and put the savings toward a better CPU.
If money is no object, and the feature set you need is available in Intel and you need the highest end per core performance, then sure
Re:Nice biased wording there (Score:5, Insightful)
Re: (Score:3)
Re:Nice biased wording there (Score:5, Insightful)
I practice the "time is valuable" philosophy. I don't want to wait on my computer any longer than absolutely necessary.
People who really think their time is valuable don't overclock. It's a hobby that tries to squeeze the most out of a given $ of hardware. But after you factor in the amount of time you spend messing around with the thing trying to eke out that additional performance, and add in the lost work time caused by unexpected crashes and instability, you're better off just buying the most expensive hardware you can, and replacing it when something better comes along.
That said, the people who do that need to be grateful to the overclocking crowd. There needs to be bleeding edge people finding out what works and what doesn't, such as the great work they've done with cooling technology. The best of what the overclockers are doing today turns into tomorrow's high end mainstream.
Re: (Score:3)
both are incorrect.
intel chips tend to be slightly faster, but much more expensive.
Compare Intel's latest i7-4770K to AMD's FX-8350, for example.
The Intel chip is roughly 10% faster overall than the AMD; being generous, you can say it's up to maybe 15% faster. The i7-4770K at $384 costs 70% more than the FX-8350 at $225.
(prices in AUD because that's where i am)
The i7-4770K also has yet another new socket (LGA 1150 rather than 1155 or 2011), so it's not just a simple CPU upgrade if you had an existing system.
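The price-premium arithmetic above checks out; a quick sketch using the prices quoted in the comment (AUD) and the commenter's own generous 15% performance figure:

```python
# Quick check of the quoted numbers: a ~10-15% performance edge
# against a ~70% price premium (AUD prices from the comment).
intel_price, amd_price = 384, 225
premium = (intel_price - amd_price) / amd_price
print(round(premium * 100))  # 71 (percent more expensive)

# Performance per dollar, assuming Intel is 15% faster (generous case):
perf_per_dollar_intel = 1.15 / intel_price
perf_per_dollar_amd = 1.00 / amd_price
print(perf_per_dollar_amd > perf_per_dollar_intel)  # True
```

The "15% faster" input is the comment's estimate, not a measured number; the point is only that a 15% edge doesn't offset a 71% premium on a per-dollar basis.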
Re:Nice biased wording there (Score:5, Informative)
It is also significantly slower buck for buck in real life workloads.
Buck for buck? Are you on crack?
AMD wins the price/performance comparison. Intel wins the peak performance comparison.
Looks to me like you are practicing the big lie for your masters at Intel.
Re: (Score:2, Insightful)
Intel also wins watts/performance.
Re: (Score:3)
It is also significantly slower buck for buck in real life workloads.
Yeah... well, no. [cpubenchmark.net] You might want to look up the cost per core for AMD vs. Intel; you'll quickly see AMD tromps all over it. And really, with the Vishera cores you're seeing a negligible loss in real-world performance. The only place where Intel beats AMD in cost per core is with the celery (Celeron) line.
Re:Nice biased wording there (Score:4, Informative)
They do have VT-d, but I believe transactional memory is Haswell-only for the moment. I have read nothing on whether AMD will implement such extensions (I could be wrong on this).
-Reed
Re: (Score:2, Insightful)
"Yeah, I use machines with SSD for those so IO isn't an issue."
Yes it is... an SSD is an absolute DOG for extended writes. If you are ripping to an SSD, you are being brain dead.
Lies (Score:5, Informative)
AMD has superior FP capabilities. In both CAD and CAE benchmarks, honors always go to AMD for the math. But what really hit me as a big-ole liar fanboi comment was the one about CAD rendering. The majority of that is not handled by your CPU, but by your GPU. The portion of the rendering that is CPU-related still benefits from AMD chips, which have the memory controller at the front end of the chip, compared to Intel, which has the memory pipeline as far back as possible in order to claim "we have more MHz than AMD".
Video compression really depends on who's chip the code has been modified for (if any). As with CAE math, native chip math functions are much faster on AMD.
I run annual benchmarks inside companies for Intel vs. AMD and have for over a decade. These benchmarks show real world performance of Unigraphics, CATIA, HyperMesh, MSC Patran, Ansys, and Muses. CATIA and Ansys are always the worst on AMD, as they have both been assimilated by DirectX over OpenGL with no option to force OpenGL. They still however slightly favor AMD over Intel.
I don't rely on Tom's hardware or someone else for opinion, since Tom's showed us long ago that you can't trust "independent" benchmarks for much. I have read benchmark reports from others that indicate the opposite, but have yet to have anyone recreate their results for me. I use real decks and models from real products, I don't use code exercising a subset of CPU instructions as fast as it possibly can.
Re: (Score:3)
Intel beat AMD at floating-point long ago.
The only thing AMD is beating Intel at is interconnect technology, and there are rumors that Intel might take the lead on that in the near future as well.
AMD offers those things (Score:2, Insightful)
That's because they are not number one. Like Avis, they have to try harder.
Sales Pitch (Score:2)
Obvious sales pitch is obvious:
AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs."
Feature #1, TSX: http://en.wikipedia.org/wiki/Transactional_Synchronization_Extensions [wikipedia.org]. I'd imagine nobody codes for this.
Feature #2 : http://software.intel.com/en-us/articles/intel-virtualization-technology-for-directed-io-vt-d-enhancing-intel-platforms-for-efficient-virtualization-of-io-devices [intel.com]
It can still do virtualization just fine: http://forums.anandtech.com/archive/index.php/t-2133898.html [anandtech.com]
Not an Intel fanboy or anything, but they're not as arrogant as people are making them out to be.
Re: (Score:2)
I'd imagine nobody codes for processor features that are limited to a particular brand or model lineup...
Re: (Score:2)
From what I've read in the last few months, the Linux kernel and glibc will both be adding transaction lock support. The performance benefits are pretty nice even when limited to backwards compatibility with existing lock methods.
Also, libraries like Intel's (of course) TBB will add support.
But all of that will be done with feature detection and fall back to using existing code.
It's like saying that nobody codes for MMX, SSE, Altivec or 3DNow. Or that nobody uses a particular Nvidia OpenGL extension only av
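The detect-and-fall-back pattern described above can be sketched as follows. On Linux the feature shows up as the `rtm` flag in /proc/cpuinfo; parsing that file here is a simplified illustration of the idea only (glibc and TBB actually use the CPUID instruction, not this file):

```python
# Sketch of runtime feature detection with a graceful fallback, in the
# spirit of glibc's lock elision. Reading cpuinfo-style text is an
# illustrative stand-in for a real CPUID check.

def has_flag(cpuinfo_text, flag):
    """Return True if any 'flags' line lists the given CPU feature."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and flag in line.split():
            return True
    return False

def pick_lock_impl(cpuinfo_text):
    # Use hardware transactional memory if available, else plain locks.
    return "rtm-elision" if has_flag(cpuinfo_text, "rtm") else "mutex"

sample = "flags\t\t: fpu sse sse2 rtm hle avx"
print(pick_lock_impl(sample))                     # rtm-elision
print(pick_lock_impl("flags\t\t: fpu sse sse2"))  # mutex
```

This is why "nobody codes for it" misses the mark: code ships with both paths and existing hardware just takes the fallback.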
Re:Sales Pitch (Score:5, Informative)
I'd imagine nobody codes for this. [TSX]
That is going to be an important feature when programmers eventually leverage it. Hardware assisted optimistic locking can make concurrency easier, safer and more efficient as the CPU takes care of coherency problems usually left to the programmer and CAS instructions. Imagine being able to give each of thousands or millions of actors in a simulation their own independent execution context (instruction pointer, stack, etc.,) all safely sharing state and interacting with each other using simple, bug free logic, as opposed to explicit and error prone locking and synchronization. This has been done with software transactional memory but it frequently fails to scale due to lock contention. Hardware based TM can prevent that contention by avoiding lock writes.
It is extremely cool that Intel is implementing this on x86.
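The optimistic pattern described above can be emulated in software with a compare-and-swap retry loop. This sketch fakes the CAS primitive with a lock (Python has no true hardware CAS), so it only illustrates the read/compute/commit/retry control flow, not the performance win of real hardware TM:

```python
import threading

class AtomicInt:
    """Toy CAS cell; the lock stands in for a hardware CAS instruction."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: if value == expected, store new and report success.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def optimistic_add(cell, delta):
    # Optimistic pattern: read, compute, try to commit, retry on conflict.
    while True:
        old = cell.load()
        if cell.compare_and_swap(old, old + delta):
            return

counter = AtomicInt(0)
threads = [threading.Thread(target=optimistic_add, args=(counter, 1))
           for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.load())  # 100
```

Hardware TM generalizes this from one word to an arbitrary set of reads and writes committed atomically, which is what makes the "millions of actors" scenario plausible.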
Re: (Score:2)
My main concern was whether it can run VMWare Workstation acceptably, and it can. Any larger VM scenarios instantly create a disk IO bottleneck on any desktop PC.
Which brings me back to my OP: Intel removed important-sounding features that are actually useless, and correctly so. They should be commended for taking the initiative; instead, everything I've read puts it in a negative, anti-consumer light.
Bringing me to conclude, if you don't know wtf you're talking about, stop posting news stories about it.
Re: (Score:2)
My main concern was whether it can run VMWare Workstation acceptably, and it can. Any larger VM scenarios instantly create a disk IO bottleneck on any desktop PC.
Oh c'mon. Just about any desktop with sufficient RAM and a modern processor can run several virtual machines with boring 7200RPM drives. My students don't complain, even in the hour just before an assignment is due when everyone is using them.
Re: (Score:2)
To clarify, a larger VM scenario = 10+ VMs with sufficient resources running simultaneously.
Re: (Score:2, Informative)
There's a big difference between VT and VT-d. Intel is only disabling VT-d (aka Directed IO) in the processors.
It is I/O passthrough for a virtual machine (allowing a virtual machine to directly access the I/O bus instead of going through the hypervisor). Most people won't use anything like this, and it's primarily found in enterprise-class bare-metal hypervisors like VMware ESXi, so it honestly doesn't have any impact on workstations running VMware Workstation in 99.99% of situations.
From Intel:
"VT-d"
Re:Sales Pitch (Score:4, Informative)
Notice that VT-d is disabled, not VT. VT-d allows a hardware device (such as a video card) to be passed directly from the hypervisor to a virtual machine. This is only used in Hyper-V, Xen, and (I think) VMware ESX, none of which are desktop products. I use VMware Workstation and VirtualBox quite often (although I'm warming up to KVM) on both AMD and Intel, with no ill effects from either side. Please be informed about what you're saying Intel is screwing us on, and you'll see that 90% of the people that use these features aren't even affected.
Re:Sales Pitch (Score:4, Insightful)
The problem is that people buying K parts and building PCs around them are pc enthusiasts.
Is my gaming desktop going to do double duty as a production Xen server? Of -course- not. At least not at the same time.
But if I look around my home office, the cpu's that used to be in my gaming PCs ... one is in a Xen server that I'm using actively. And another is a vmware server.
But as I use both xen and vmware for work, having these 'toy' servers at home has been helpful for learning, and experimenting. I definitely want cpus that support these technologies. I expect I'll build a hyper-V unit sooner than later too.
The only question I have about Intel's move is "why": is this some sort of misguided marketing nonsense, or do these features perhaps interfere with the overclockability of the K CPUs? Maybe transactional memory and hardware virtualization don't overclock well? If that's the case... I get it.
Otherwise, I'm completely stumped as to why intel is removing it.
Re: (Score:3)
VT-d is not only for servers. I found a use for it after countless cycles of attempting to dual-boot Windows and Linux (as in, I eventually ended up using just Windows... repeat after 6 months).
Now I boot linux, do the web browsing and stuff, but when I want to play, I just start my VM and play.
Linux: i5-2500 IGP
Windows: Radeon 7950 (started with 5850)
My over 80 hours of Skyrim are Xen exclusive. DeusEx HR was maybe 20-30h native, followed by more than 50h in VM.
This is my original post (closed since then): htt [tomshardware.co.uk]
vt-d can be used in KVM and virtualbox (Score:3)
Just to add a couple options.
As far as I know it's not available in VMware Workstation.
Re: (Score:3, Interesting)
Windows 8 Client Hyper-V REQUIRES VT-d. Otherwise there is no first-party VM solution for Windows 8 and you'd have to install VirtualBox or VMware. Windows 8 doesn't default to installing into Hyper-V when the requirements are met, as the parent suggests. Hyper-V is a feature that needs to be installed on all machines. Once installed, Windows 8 boots the hypervisor first, then boots Win8 from the drive as a highly privileged VM. Performance for most things is near where it would be if the OS was on bare metal.
Does MHz matter anymore? (Score:3, Interesting)
Is there anyone besides a small group of people who benefit from higher clock rates? Most people I know would pick battery life over performance on mobile devices. Desktops have been "powerful enough" for at least the past 5 years. Is it just about bragging rights at this point?
Re: (Score:2)
I know it does for photo processing. I have a laptop with a dual core i5 (something like 2.9GHz), and when I come home with a card full of RAW images it takes an hour at least to render them to jpeg in lightroom. RawTherapee is also somewhat slow. Faster storage would help somewhat (I really need to find the right size Torx screwdriver so I can put my SSD in this laptop), but it is still rather CPU-bound.
Re: (Score:2)
Re:Does MHz matter anymore? (Score:5, Informative)
Add to the list below rendering, and those of us who compress and process video, of which I am one. Faster clock speeds can save me HOURS of time, and that is why I run an overclocked Sandy i7 at over 4GHz. It runs for hours at a time fully slammed with no problems.
So yeah, there are use cases for this outside of your sphere of knowledge.
Re: (Score:3, Insightful)
As someone who writes the software you're probably using for your video compression:
Fuck you, Fuck you, Fuck you!
I have wasted more of my life in idiotic bullshit bug reports from people with clocked to hell hardware. A one in ten thousand failure rate times hundreds of thousands of OCed users = big waste of my @#$@ time. There is a reason processor vendors sell parts clocked at the speeds they do.
Re:Does MHz matter anymore? (Score:4, Interesting)
I've seen lots of weird bugs vanish when even "factory overclocked" parts are put back at stock settings.
If I were you, I'd post a "no bug reports if you OC anything" policy.
And I'd go through your bug reports and label anything from an OC'er as "possible OC failure, will not investigate, closed".
It's like someone who hot-rods his car screaming at Shell about their gas because the car only gets 10mpg.
Mycroft
Re: (Score:2)
If you have a compute task that's not bound by I/O or RAM such as media transcoding, a faster CPU can be quite helpful. My time to reencode a BD dropped by almost 30% in a move from Lynnfield to Ivy Bridge versions of i7; that's not insignificant for a process that still takes hours. Putting aside my dubious need, we're not that far from consumer 4k video and the increased demands that will bring.
Re: (Score:2)
Some remarks:
1 - If you're buying an i5 or i7, chances are you're using it more than the average user does (especially if you're going with an i7).
2 - The processors in question are desktop processors, not the mobile ones.
Re: (Score:2)
That's what I wonder as well. For all CPU intensive workloads, wouldn't the extra cores do it? Also, if certain applications require faster cores, wouldn't it be better if they were multi-threaded more?
As for the engineering & video processing apps, seems to me like they could make use of something like the Itanium
Re: (Score:2)
Can't say I'm surprised (Score:2, Interesting)
Now that AMD is no longer a threat to them, they can go back to their old tricks again.
That is dumb (Score:2)
Shouldn't the unlocked multiplier version be a primium product? This is annecessary step backwards. I think most people who are interested in a K-series would be more willing to pay a premium. Who in their right mind would EVER give up VT-d for an unlocked multiplier? Maybe they just want to kill the tradition once and for all.
Re:That is dumb (Score:5, Informative)
Guh. Premium, not primium! And annecessary = unnecessary. I suck.
Re:That is dumb (Score:4, Interesting)
Well, I would, for one. Unless you're using Xen or HyperV, VT-d doesn't really benefit you.
Not really a big shock (Score:3, Informative)
Well, "free" clock headroom aside, Intel removing features from the K-series parts (VT-d, etc.) has been going on since Sandy Bridge, I believe. Basically, if you want the best of both worlds you will want to invest in an Extreme Edition processor. As a quick search on ARK will show, the 3770K does not have VT-d while the 3930K does.
-Reed
Re: Best of World (Score:3)
I was looking into upgrading my system when the Haswell CPUs came out, and I was disappointed. Then I ordered a socket 2011 motherboard with 4 full-length PCIe slots and quad-channel DDR3. It ended up being about $100 more than a comparable Haswell Z87 chipset build, with a faster (MHz) cpu.
I got the (sandybridge-E) core i7 3820 quad core for $249, which
Re: (Score:2)
Re: (Score:2)
Nobody who buys a Xeon and needs it would ever overclock it. It's not worth the (minimal) risk increase.
Re: (Score:2)
Bullshit - I would! I have a Xeon now running an ESX server at home, and hell yes I would overclock it without a second thought. Good luck finding a Xeon board that supports both ESX and overclocking, though! There's nothing magical or scary about overclocking if you have half a clue and don't try to run right over the bleeding edge. I've been doing it since the 8088 days, when a damned crystal from RadioShack was required, and it's never been a problem. This is Intel screwing with the market, plain and simple.
Re: (Score:3)
Just because you buy a pro CPU for your toy doesn't mean you and your needs are suddenly relevant for the customer base of the pro CPU.
This is why AMD can not die just think of what int (Score:5, Insightful)
This is why AMD cannot die. Just think of what Intel will do without AMD in the market.
Re: (Score:2)
They will have to deal with the ARM market then?
Re: (Score:2)
Can that run today's x86 software?
Meh. (Score:5, Insightful)
I've never found overclocking to be worth the trouble. Anytime there's a stability issue with an overclocked PC, there's always that nagging doubt that all my troubleshooting is for naught because it was a fluke bit failure due to the overclocking. Life's too short: skip the anxiety and run your processor at its rated speed.
Re: (Score:3)
My thoughts as well. I kind of wonder how many people out there are still overclocking. It's so rare that anything I do is CPU-bound anymore. Maybe I'm getting old, because I just want things to work.
Re: (Score:3)
Re: (Score:3)
<shrug> I never pay that much for a CPU, since I have had exceptional experiences with AMD CPUs. In my experience, they have always outperformed Intel's processors and generally cost half as much. I could overclock them if I wanted, and back in the Athlon 800-ish series days I did.
Re:Meh. (Score:5, Informative)
In my experiences, they have always outperformed Intel's processors, and generally cost half as much.
That hasn't been the case for several generations of processor design, unfortunately. The top end of the AMD processor line can't compete with Intel on performance. That's why they've gotten so cheap -- so OEMs build systems on them. The 'Intel Tax' puts a lot of their mid-range and above stuff out of reach of the average consumer, and generally you're only finding them in laptops now because of the superior power usage and thermals...
If you want per-unit performance today, you buy Intel. If you want commodity, you buy AMD.
Re: (Score:3)
That actually depends, because the new AMD architectures share an FPU between the cores in a module. So if you have a *mixed* integer and FP load, AMD comes out on top; if it is pure integer, Intel's superior caching algorithms tend to push it in front, and for pure FP the AMD chips tend to bottleneck. Something like CAD, a mix of FP and integer, is perfect for AMD. Games, which are more FP than integer, not so much. Server work, where the cache isn't likely to give much of an advantage, AMD is again competitive.
I keep seeing this mentioned without any backing (Score:4, Interesting)
Let me make this very clear: back in the days of the Athlon versus the Pentium IV, Intel had the disadvantage because the damn thing was designed primarily for SSE2, and they had a decode imbalance in the design. The Athlon had 3 x87 FPU pipes which made it superior despite the P4's faster clock...but once developers targeted SSE2, the Pentium IV matched the Athlon in FPU, and outclassed it on ALU operations (since both chips had dual 64-bit SSE2 units).
With the introduction of the Core 2, Intel switched to a 4-wide decode and DUAL 128-bit SSE2 units, allowing 2 instruction / cycle throughput, TOASTING the Athlon 64 in all matters of performance. Almost two years later AMD countered with Barcelona, which also had dual 128-bit SSE2 units, but was castrated by their 3-wide decoder. It was a match for Core 2 at the same clocks, but they couldn't match the clocks Intel had.
With the new Core series of chips, and the reintroduction of Hyperthreading, Intel wiped the floor with AMD in anything multithreaded, and they steadily increased single-threaded performance with each new iteration. Dual AVX 256-bit units in Sandy Bridge also potentially DOUBLED Intel's FP throughput. At the same time, AMD moved away from FP performance with Bulldozer, which shared dual 128-bit AVX execution units between two cores. Even with twice the cores AMD still lagged behind in peak FPU throughput, because the shared decode units meant roughly two-wide decode when all cores were heavily-loaded.
So today AMD is not the destination for high FPU throughput, and they really have not been for a decade. I really cannot understand your claims to the contrary.
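The throughput claims above reduce to back-of-envelope peak-FLOPS arithmetic: vector lanes times FP units times clock. A sketch with illustrative figures only (real peak rates depend on FMA support, port layout, and precision, so treat these inputs as assumptions, not measurements):

```python
# Back-of-envelope peak FP throughput per core:
# (SIMD lanes) x (FP units) x (clock in GHz), one op per lane per cycle.
# Illustrative figures only; FMA and port details change real numbers.

def peak_gflops_per_core(simd_bits, fp_units, ghz, precision_bits=64):
    lanes = simd_bits // precision_bits
    return lanes * fp_units * ghz

# Hypothetical comparison in the spirit of the comment:
# 256-bit AVX with 2 FP units at 3.4 GHz vs. a shared pair of
# 128-bit units at 4.0 GHz serving a whole Bulldozer-style module.
sandy = peak_gflops_per_core(256, 2, 3.4)
bulldozer_module = peak_gflops_per_core(128, 2, 4.0)
print(sandy, bulldozer_module)  # 27.2 vs 16.0 GFLOP/s
```

Even granting the module a clock advantage, the wider vectors dominate, which is the gist of the "doubled FP throughput" argument above.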
Re: (Score:3)
First, Bulldozer was not a high-performance chip, and was never intended to be one. It was meant to be a PC-based equivalent of a Niagara, capable of massive threading. So let's compare apples to apples, shall we?
AMD still considers the Athlon to be the performance chip. Comparing apples to apples, maybe you are asking how a chip rated 300MHz lower can be faster? First, the length of the bus needed to get from inbound to FPU is much longer on Intel. Cache is much larger on AMD, prefetch is su
Re: (Score:3)
I do video transcoding that doesn't know how to use the GPU yet, so I overclock at home on my server. My gaming box has "everything overclocked" just because it was a fun project.
Re: (Score:2)
So troubleshoot at stock speeds, then switch back to your overclock when you've solved the problem. That also has the positive effect of actually showing you whether it's your overclock that's the issue.
Re: (Score:2)
Then you aren't doing it right. If you set up an overclocked machine correctly and don't try to push it right to the bleeding edge, you get plenty of bang for the buck. My current machine is pushed to 4.5GHz, and I save a great deal of time processing video as a result; it's an i7 3770K. It processes video for hours on end with no issues and reboots only for updates occasionally. Cooling is your biggest issue: water works best, and don't push a ton of voltage through it. Start with the basics and work u
Re: (Score:2)
Life's too short- skip the anxiety and run your processor at it's rated speed.
With liquid cooling, your processor can run significantly above its rated speed, because most failures are based on thermal overload. The core in your "slower" processor is the same as a "faster" one, but it failed qualification at some point, not due to a physical defect per se but because thermal tolerances are so tight that there may be a circuit cluster that becomes unstable due to parasitics; usually it's highly localized heating. Liquid cooling can bring not just that component, but all the ot
Re: (Score:2)
Overclocking stopped having a real impact once clock speeds took a back seat to cores. I guess it's still fun for certain people to see how much they can squeeze out, but real-world performance just doesn't seem to justify the trouble.
why would you OC enterprise CPU's? (Score:2)
Those are enterprise features. Why would you OC a chip in something that brings you revenue and risk a problem?
Re: (Score:2)
I think this is precisely the point. This is a business decision to prevent people from buying cheap unlocked desktop CPUs with VT-d, overclocking them, and, say, using them to run their dev/test QA VM environments, or even production environments if you're really pinching pennies. If you want to get really "out there", it's possible that there was pressure from hypervisor vendors for Intel to lock this down so that they didn't have to support the random failures that can occur with overclocking.
Intel (
This shows what will happen in a world without AMD (Score:2)
Re: (Score:2)
You think AMD is any threat to Intel? They stopped having any real competitive pressure on Intel years ago.
Re: (Score:3)
AMD didn't come out of nowhere, they were making 8088's in 1975.
Re: (Score:3)
I'm glad to hear AMD was selling 8088's before Intel developed them.
Please, pretty please let me see your 8088 based machine from 1975.
Damn am I getting that old!
Overclocking .. (Score:2)
Re: (Score:2)
Lost an Athlon last year that was being used in a NAS, due to the cooling fan locking up (it was a second-gen Athlon, before they had thermal shutdown...).
Well, you just killed it for me. (Score:5, Insightful)
The K-series parts lack the support for transactional memory extensions and VT-d device virtualization
Yeah, well, fun fact... a lot of enthusiasts like myself use things like VMware, which depends on this kind of thing. Deleting those features from the unlocked line means I just won't buy them; one of the big drivers for overclocking is to run virtualization. You might think it's "just gamers" doing this, but a lot of us do network and system administration and deployment, and we like the "lab in a box" capability offered by current processors. You take that away and you're going to find your bottom line hurting, possibly more than a little.
I don't know which of your marketing assclowns came up with this idea as a revenue-generating measure, but it's going to backfire in their face, and I hope when it does you fire their ass, apologize, and never try this again. You're only succeeding in driving us towards commodity hardware like AMD's offerings. All AMD needs to do to capitalize on the market you've just shit on is offer mainboards with multiple sockets for their CPUs and make the mainboards cheap and the core system very energy efficient... and not only will the enthusiasts ditch you, but so will the data centers...
You're opening a can of worms here. Bad plan, darlings.
Re: (Score:2)
This can of worms has been opened awhile, you've obviously not tried to build a K based machine running virtualization. See my post below...
Re: (Score:2)
This can of worms has been opened awhile, you've obviously not tried to build a K based machine running virtualization. See my post below...
Got one right now, actually; It's a i5-3570K. To the best of my knowledge, no features are disabled compared to other models based on this core. But vmware needs VT-d to function, and if they kill this feature off, it won't work. So, no, it hasn't been opened for "awhile", this is something that's started rolling out in the last year.
Re: (Score:2)
Go look up the spec sheets for Sandy CPUs. Or better yet Google 3570K and VT-d. Surprise! I found out the hard way myself when I built an ESX server and couldn't install, I found the feature greyed out in the BIOS. A quick Google on that model and I realized I'd been had too.
http://ark.intel.com/products/65520 [intel.com]
http://www.tomshardware.com/forum/356118-28-purchased-3570k-virtualization [tomshardware.com]
Re: (Score:2)
Go look up the spec sheets for Sandy CPUs. Or better yet Google 3570K and VT-d. Surprise!
Sorry, my bad. I confused VT-d with VT-x. Yes, you're correct -- it won't run an ESX server, but I use Workstation, so it's been fine for me. That sucks, though -- I know a lot of people who build dedicated lab machines on a rack; I don't have the funds to lay out on something that complex, nor the space where I live right now, but I can see how that would screw you over... especially when VMware's hardware requirements [vmware.com] white paper doesn't specifically list it either. :(
This kind of cpu fragmentation I think
Re: (Score:3)
But vmware needs VT-d to function, and if they kill this feature off, it won't work.
Bullshit. Even ESX/ESXi can work just fine without VT-d. The only thing you lose is I/O pass-through. Cut out the hyperbole. The fact that you can explicitly disable VT-d in VMWare's settings disproves your ridiculous claims.
Re: (Score:2, Insightful)
No it doesn't. Look up the difference between VT and VT-d. The i5-3570K does not have VT-d (I was aware of that when I bought mine). This feature is only used by Xen and HyperV (I can't speak for ESX) for very specific functions.
Comparison for you (scroll down so you can see VT-d, VPro, and Trusted Execution):
Sandy Bridge:
i5-2500K: http://ark.intel.com/products/52210 [intel.com]
i5-2500: http://ark.intel.com/products/52209 [intel.com]
Ivy Bridge:
i5-3570K: http://ark.intel.com/products/65520 [intel.com]
i5-3570: http://ark.intel.com/products/6 [intel.com]
Re:Well, you just killed it for me. (Score:4, Informative)
As I've pointed out before in this thread... It was a typo. Funny thing is, 'd' and 'x' are right next to each other on the keyboard, and vt-d is different than vt-x. But whatever... why read comments elsewhere in the thread?
Re: (Score:2)
None of the K processors have ever had VT-d. Also, VMWare ESXi is about the only virtualization product which uses VT-d (direct hardware access f
Re: (Score:2)
Overclocking is risky. The clock rate is what it is because that's what the chip is reliable at. Increasing it increases risk, and maybe that's OK if you're a gamer and don't mind breaking things, but if you're dependent upon overclocking for important business reasons, you're better off just getting a faster CPU in the first place.
Current K CPUs also lose VT-d (Score:4, Informative)
Current K-rated CPUs lose this and possibly some other features. I didn't pay attention to this and found out the hard way when I couldn't run ESXi on an overclocked Sandy Bridge machine. Pissed is an understatement! There's no good reason to do this other than to screw with the marketplace.
I've switched to a Xeon CPU of Ivy Bridge heritage, and good luck finding a board for one of those that runs ESXi and can be overclocked. Nearly every machine I own is overclocked and has been for many years, and it pisses me off to get jerked around like this by Intel.
Re: (Score:2)
Maybe. Another possibility is that those features are heavily timing dependent and the OC chips caused more problems than they solved.
Re: (Score:2)
Current CPUs that aren't K-rated can be overclocked, though not to the same degree. I've never heard of issues from folks overclocking those and running them in virtual environments. Somehow I doubt that this is for our protection, but they certainly haven't said one way or the other. If I could overclock my damned Xeon I'd sure do it.
Re: (Score:2)
> There's no good reason to do this other than to screw with the marketplace.
This is what happens when there is less competition. We need AMD or some other company to scare Intel into competing on quality rather than artificial scarcity.
This is just business (Score:2)
I actually think this makes sense from a business perspective since the virtualization features would be targeted towards their Xeon line vs. the home PC market. As for overclocking, I do it moderately on both Intel and AMD systems but this lock on the Haswell reminds me of the same debates around Sandy Bridge and Ivy Bridge and ... back to when they started locking the clocks on the Pentium IIs. The advantages of overclocking don't just go against getting the most speed out of the hardware, they also all
Is it necessary these days? (Score:5, Insightful)
Yes, I remember the good ol' days when you could get a $100 CPU and make it work like an $800 one. I remember in particular the days of buying a cheap Celeron and having it perform like a much more expensive Pentium II or even P3.
And I also remember days of headaches with stability issues, overheating, and other stupid problems, all to squeeze a few extra FPS out of Doom.
Nobody overclocks anymore, and if they do, it's like getting a trophy for trolling a blog. It's completely unnecessary and doesn't really offer anything except a feel-good pat on your own back when you see your completely arbitrary and virtual benchmark numbers rise while you ruin your CPU.
What needs the extra performance these days? You need to Tweet faster? Like on Facebook faster? Browse a website fractions of milliseconds faster?
Games used to drive overclocking, but GPUs are where game performance lies these days. Sure, maybe overclocking your CPU by 50% might offer 1% more FPS, but who the fuck really cares -- nobody with a life, that is.
Intel realizes that the enthusiast market for PCs has nosedived, and it's obviously cheaper to produce CPUs where you don't have to worry about the kind of performance tolerances that are required for overclocking.
And I don't think "enterprise" level developers are buying cheap computers and then overclocking to get better VM performance. I mean really? If you consider yourself an "enterprise" developer then get the "enterprise" to buy you a decent workstation or VM server. I don't think your "enterprise" wants you to spend days trying to optimize performance on your workstation, I'd fire anybody that wastes any amount of time in a BIOS.
I would say Intel should focus on offering one "enthusiast" level CPU that is completely unlocked for overclocking. I mean, if people want to burn out their CPUs repeatedly, it's more money from a market segment that is drying up, but I think in general Intel or any CPU company should not have to worry about providing overclockable CPUs across their product line.
The bottom line is that benchmarks aside, if you ever looked at your Task Manager you'd probably realize that your CPU is idling at 1% usage 99% of the time, so you want to make the System Idle task run faster? I don't get it anymore.
Re: (Score:3, Interesting)
Re:Is it necessary these days? (Score:4, Interesting)
Pick a decade and stick to it, rather than picking and choosing facts. People don't burn out their CPUs anymore when overclocking, and that's been true for an entire fucking decade now. Seems to me that you never overclocked anything, ever, and are using lots and lots of excuses now to rationalize your irrational fear of it ("idle task"
Re: (Score:2)
..they didn't notice the market nosediving.
back in the day they didn't sell overclocking-friendly chips separately. they realized there's a market and started selling to them, that's why you have these chips on the market. for their marketing dept it would be problematic if they all had the same features and happened to reliably overclock 10-20%, because that would make people ask wtf they are paying for if they're buying premium non-K chips..
Re: (Score:3)
Is it really necessary to say no one needs something just because you don't? Sorry, but responses like yours are useless as they are more insult than info. Next time try and leave your attitude out of your responses and maybe you'll get some good karma for once.
Re: (Score:3)
The big issue nowadays is how much RAM you can install on your system. If you can install 16 to 32 GB of RAM to run under Windows 7 Professional, you can work on VERY large media files with nary a slowdown issue on most Intel Core i5 and i7 CPUs.
The real reason for this change is fab yields (Score:2)
The real reason for this change is fab yields.
It's how they do all the other processors as well:
o manufacture
o test
o blow fuses as needed for failed tests
o bin the part as an xxxyyyzzz part
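The flow above amounts to a simple binning pass: test each die, fuse off whatever failed, then label it by the highest bin its stable clock qualifies for. A toy sketch (the cutoff frequencies, feature names, and labels here are invented for illustration, not Intel's actual bins):

```python
# Toy model of speed binning: the tested max stable clock decides which
# SKU a die ships as, after failed features are "fused off".
# Bin cutoffs, feature names, and labels are invented for illustration.

BINS = [(3900, "K-series"), (3500, "i7"), (3100, "i5"), (0, "i3/salvage")]

def bin_die(stable_mhz, failed_features=frozenset()):
    """Return (SKU label, surviving feature list) for one tested die."""
    enabled = {"vt-d", "tsx", "unlocked"} - set(failed_features)  # blow fuses
    for cutoff, label in BINS:
        if stable_mhz >= cutoff:
            return label, sorted(enabled)
    return "scrap", []

print(bin_die(4000))                          # fast die -> top bin
print(bin_die(3200, failed_features={"tsx"})) # slower die, TSX fused off
```

On this model, selling a K-series part without VT-d/TSX is just a deliberate choice of which fuses to blow on the fast bin, which is the commenter's point about it being market segmentation rather than necessity.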
One of the reasons Apple machines tend to be more expensive is they pay a premium for higher-performance "speed burst" parts relative to other laptop vendors, so the chips that rate out at supporting a higher speed-burst clocking go into the Apple bin.
Similarly, RAM chips get binned as well; those that bin out as supporting withi
Re: (Score:2)
Since when is transactional memory a mass consumer feature? Next to no one will notice or care.
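For context on what the dropped TSX feature actually does: the CPU runs a critical section speculatively, commits it atomically if no other core touched the same data, and aborts and retries (or falls back to a plain lock) on conflict. A rough Python analogy of that optimistic commit/abort pattern -- this mimics the semantics only, not Intel's real RTM instruction interface:

```python
import threading

# Conceptual analogy to hardware transactional memory (Intel TSX-style):
# work optimistically, commit only if nothing changed underneath you,
# retry on conflict, and fall back to a plain lock after repeated aborts.

class VersionedCell:
    def __init__(self, value=0):
        self.value = value
        self.version = 0          # bumped on every committed update
        self.lock = threading.Lock()

    def transact(self, fn, max_retries=3):
        for _ in range(max_retries):
            start_version = self.version       # like xbegin: snapshot state
            new_value = fn(self.value)         # speculative work, no lock held
            with self.lock:
                if self.version == start_version:   # no conflict: commit (xend)
                    self.value = new_value
                    self.version += 1
                    return new_value
            # version moved: someone else committed -- abort and retry
        with self.lock:                        # fallback path, like TSX lock elision
            self.value = fn(self.value)
            self.version += 1
            return self.value

cell = VersionedCell(10)
print(cell.transact(lambda v: v + 5))
```

Whether any desktop software would have used the real thing is exactly the parent's question, but that's the feature being traded away on the K parts.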
Re: (Score:2)
or someone that thinks you should be able to get overclock support and virtualization support without playing these market segmentation games.
new slogan time (Score:4, Funny)
*(But we are trying out one of those "pumps"...)