Intel Announced 8-Core CPUs And Iris Pro Graphics for Desktop Chips
MojoKid (1002251) writes "Intel used the backdrop of the Game Developers Conference in San Francisco to make a handful of interesting announcements that run the gamut from low-power technologies to ultra-high-end desktop chips. In addition to outing a number of upcoming processors—from an Anniversary Edition Pentium to a monster 8-core Haswell-E — Intel also announced a new technology dubbed Ready Mode. Intel's Ready Mode essentially allows a 4th Gen Core processor to enter a low C7 power state, while the OS and other system components remain connected and ready for action. Intel demoed the technology and, along with compatible third-party applications and utilities, showed how Ready Mode can allow a mobile device to automatically sync to a PC to download and store photos. The PC could also remain in a low power state and stream media, serve up files remotely, or receive VOIP calls. Also, in a move that's sure to get enthusiasts excited, Intel revealed details regarding Haswell-E. Similar to Ivy Bridge-E and Sandy Bridge-E, Haswell-E is the 'extreme' variant of the company's Haswell microarchitecture. Haswell-E Core i7-based processors will be outfitted with up to eight processor cores, which will remain largely unchanged from current Haswell-based chips. However, the new CPU will connect to high-speed DDR4 memory and will be paired with the upcoming Intel X99 chipset. Other details were scarce, but you can bet that Haswell-E will be Intel's fastest desktop processor to date when it arrives sometime in the second half of 2014. Intel also gave a quick nod to their upcoming 14nm Broadwell CPU architecture, a follow-on to Haswell. Broadwell will be the first Intel desktop processor to feature integrated Iris Pro Graphics and will also be compatible with Intel Series 9 chipsets."
8 cores? (Score:3, Insightful)
Re: (Score:2, Insightful)
No, they're well ahead of AMD in this regard. AMD's 8 "core" CPUs are actually 4 core CPUs that can process 2 integer instructions at the same time on one core. Much like Intel's current i7s are 4 core CPUs that can process an integer and a floating point instruction at the same time on one core. Basically, AMD is marketing hyper threading as being more cores.
Re:8 cores? (Score:4, Informative)
Re:8 cores? (Score:5, Informative)
They share everything except for the ALU, which is duplicated out.
Actually, in addition to integer execution units, they also don't share instruction decoders, L1 data caches, and integer op schedulers. ;-) They do share the L1 instruction cache, L2 cache and the FPU pipeline (which is supplemented by the GPU's ALUs anyway for FP-heavy applications, though).
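To make the sharing concrete, here's a toy throughput model (my own sketch, with made-up scaling, not AMD's numbers): each Bulldozer-style module has two integer cores but one shared FPU, so integer work gets two threads' worth of throughput per module while FP work effectively serializes.

```python
# Toy model of a Bulldozer-style module: 2 integer cores, 1 shared FPU.
# Purely illustrative -- real scaling also depends on decoders, caches, etc.
def peak_threads_worth(modules, workload):
    """How many 'full cores' worth of throughput N modules offer."""
    if workload == "int":
        return modules * 2   # both integer cores in a module run freely
    elif workload == "fp":
        return modules * 1   # threads contend for the module's single FPU
    raise ValueError(workload)

# An FX-8350-like part: 4 modules marketed as "8 cores".
print(peak_threads_worth(4, "int"))  # 8 -- behaves like 8 cores
print(peak_threads_worth(4, "fp"))   # 4 -- behaves like 4 cores + SMT
```

Which is why "8 cores or hyper-threading" depends entirely on whether your workload is integer- or FP-heavy.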
Re: (Score:2)
Re: (Score:2)
Steamroller (used only in Kaveri) now uses two decoders, along with assorted other little changes (a good review on a decent tech site will explain it).
Re: (Score:2)
Re: (Score:3)
Re:8 cores? (Score:4, Informative)
I believe cache is shared, and is believed to be one of the bottlenecks of the current AMD CPUs.
Re: (Score:2)
The data caches are not shared. Each core has a separate data cache. The decoder is the same so they share the instruction cache. But AMD's instruction caches are 64 KB while Intel uses 32 KB sized instruction caches.
Re: (Score:2)
I believe cache is shared, and is believed to be one of the bottlenecks of the current AMD CPUs.
By far not the most significant one though, in single threaded tests the i7-4770K beats the FX-8350 by 62% in Cinebench R11.5, 73% in Cinebench R10 and 47% in POV-Ray 3.7RC6 and that's when the AMD core is not competing for resources with its sibling. With turbo the picture is a bit more complex than that but 4 Intel cores already equals 6-7 AMD cores. Then you add in cache contention, shared FPU, overhead of more threads for the last 1-2 cores of difference as in the most ideal benchmarks for AMD they're r
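The "4 Intel cores equals 6-7 AMD cores" claim follows directly from the single-thread ratios quoted above; a quick check (ignoring turbo and shared-resource effects, as the parent notes):

```python
# Single-thread leads quoted above: 62% (Cinebench R11.5), 73% (R10),
# 47% (POV-Ray 3.7 RC6). Scale 4 Intel cores by each ratio to estimate
# how many AMD cores they'd match on that benchmark.
ratios = {"Cinebench R11.5": 1.62, "Cinebench R10": 1.73, "POV-Ray 3.7RC6": 1.47}
for bench, r in ratios.items():
    print(bench, "-> 4 Intel cores ~", round(4 * r, 1), "AMD cores")
```

All three land in the 5.9 to 6.9 range, hence "6-7 AMD cores".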
Re: (Score:2)
The whole point is to eliminate or at least minimize external limiters so that the comparison is done on like terms. If they were to stream from the CD-ROM, the CPUs would spend most of their cycles idle. So, typically, they rip a track to disk, run the encode several times, and average. This doesn't mislead anyone but the stupid, as it's a given that ripping takes the majority of time. What the test does suggest is that one chip is better with media transcoding than another, and that workload isn't alway
Re: (Score:2)
While core-to-core performance is important, it's equally (maybe even more) important to compare whole 2/4/6/8-core chips. We don't compare GPUs with only 1 raster unit enabled, or a CPU with X amount of cache disabled so that every CPU is equal in cache.
We buy for the sum, not for the addend.
Bull (Score:2)
No, they're well ahead of AMD in this regard. AMD's 8 "core" CPUs are actually 4 core CPUs that can process 2 integer instructions at the same time on one core. Much like Intel's current i7s are 4 core CPUs that can process an integer and a floating point instruction at the same time on one core. Basically, AMD is marketing hyper threading as being more cores.
What you describe is superscalar execution, and was the point of the original Pentium. That's instruction-level parallelism, not thread-level parallelism. Also, the Pentium Pro/Pentium II had three FPUs.
It's lame that this comment is modded insightful, you're making shit up.
AMD 16 cores, Intel 10 working as 20 frequently (Score:2)
Re: (Score:2)
Intel calls EMT64 64 bits
Intel hasn't called it EM64T in years. It's now "Intel 64".
when it is just 32 bits on each 1/2 of the clock cycle.
Please provide a reliable source for your assertion that all Intel 64 processors have 32-bit data paths internally.
Re: (Score:2)
Re: (Score:2)
It was true for the first generation or two of Intel chips that supported AMD's 64-bit extensions. It hasn't been true for quite a while though.
So that'd be the 64-bit Pentium 4s (perhaps not surprising, as it was initially a 32-bit microarchitecture, and fully widening it to do 64 bits of arithmetic at the time might've been more work than they wanted to do) and the Core 2 (more surprising, as that microarchitecture was released in 64-bit chips from Day One, but maybe the design work started with a 32-bit chip and the 64-bitness was added at the last minute).
So I can believe it for the 64-bit Pentium 4s; is there any solid information indicating
Re: (Score:2)
It was approximately 2010. I asked about EM64T while participating in a build event at an Intel convention in Chicago. They called corporate and confirmed.
So, in 2010, they'd either be Core 2 (not inconceivable, as per my other reply, if the Core 2 design started out as 32-bit and changed to 64-bit late in the game) or Nehalem (less likely, as by that time I'd expect them to have a design that started out as 64-bit, unless their design pipeline was as deep as Pentium 4's pipeline :-)).
The machine that I walked away with used a Mini-ITX board, had an I5 and HD4000 graphics. Perhaps things have changed since then.
I rather suspect they have.
Re: (Score:2)
Re: (Score:2)
The GP's confusion is probably due to the relationship between throughput and latency. Intel's designs have one cycle of latency for basic arithmetic operations (add, sub, xor etc), but they can despatch multiple operations per cycle. The Core 2 was the last chip that I looked at in detail and from memory it could execute three basic instructions per cycle with a one cycle latency. On benchmarks this looks like 1/3 cycle per 64-bit operation. The previous chip that I looked at from Intel (which was not a Co
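The latency-vs-throughput distinction is easy to model. A minimal sketch (my own toy in-order-issue model, assuming 1-cycle latency per op as in the parent's example):

```python
# Minimal issue model: each op has 1 cycle of latency, and the machine can
# start up to `width` ops per cycle. Dependent chains are bound by latency;
# independent streams are bound by issue width. (Toy model: ops issue in
# program-order slots, which is a simplification of real schedulers.)
def cycles(deps, width):
    # deps[i] = indices of earlier ops that op i must wait for
    finish = []
    for i, d in enumerate(deps):
        data_ready = max((finish[j] for j in d), default=0)
        issue_slot = i // width            # structural limit: width ops/cycle
        finish.append(max(data_ready, issue_slot) + 1)
    return max(finish)

chain = [[i - 1] if i else [] for i in range(9)]   # 9 dependent adds
indep = [[] for _ in range(9)]                     # 9 independent adds

print(cycles(chain, 3))  # 9 -- latency-bound, extra width doesn't help
print(cycles(indep, 3))  # 3 -- throughput-bound: 9 ops / 3 per cycle
```

The second case is what shows up in benchmarks as "1/3 cycle per operation" even though every individual op still takes a full cycle.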
Re: (Score:2)
There was a time (Prescott Pentium 4, or maybe all the Pentium 4 processors) when the ALU was "double pumped": it worked at twice the frequency of the rest of the system. So, it was on "half a general CPU cycle" or a "full cycle" of the ALU.
But that was quite a long time ago, and more than 3 microprocessor generations have passed since.
Re: (Score:2)
Re: (Score:2)
http://www.anandtech.com/show/... [anandtech.com]
Written by men much smarter than I
Re: (Score:2)
So much wrong in this thread... (Score:5, Insightful)
AMD's Bulldozer cores have a Clustered Integer Core [wikipedia.org] design, which has two true ALU "cores" and one shared FPU. For integer instructions this is two true cores and not "hyper-threading". For FP instructions this is "hyper-threading", and why Intel has been regularly handing AMD its arse in all benchmarks that aren't strictly ALU dependent (gaming, rendering, etc.). AMD's FPU implementation, clock for clock, is a bit weaker on most instructions as well. And yes, the FPU _is_ shared on AMD processors.
EMT64 is not "32 bits on each 1/2 of the clock cycle". That doesn't even make any sense. EMT64 is true 64-bit. x86-64 does have 32-bit addressing modes when running on non-64-bit operating systems. This is part of the x86-64 standard and hits AMD, Intel and VIA.
Hardware Queuing Support is part of the Heterogeneous System Architecture [wikipedia.org] open standard and won't even be supported in hardware until the Carrizo APU in 2015. Since this is an open standard, Intel can choose to use it.
Both architectures have shared caches.
WTF does nVidia's IEEE-754 compliance have to do with Intel vs AMD?
I'm not an Intel or AMD fanboy, I try to use the right one for the job. I prefer AMD for certain work loads like web servers, file servers, etc because they have the most integer-bang for the buck. If I'm doing anything that involves FP, I'm going to use an Intel Chip. Best graphics solution?... yeah, I'm not even going to go down that hole.
Re: (Score:2)
Hardware Queuing Support is part of the Heterogeneous System Architecture [wikipedia.org] open standard and won't even be supported in hardware until the Carrizo APU in 2015. Since this is an open standard, Intel can choose to use it.
The first is not a correction of something that was "wrong in this thread" (if I was wrong in the first place - there *is* already HW for it in Kaveri, even though the implementation may change in the future) , and the second is an opinion (I really don't think that Intel will follow suit any time soon on that).
WTF does nVidia's IEEE-754 compliance have to do with Intel vs AMD?
Well, AMD apparently takes care for the execution units to be completely interchangeable, so that code could be executed on one core or the other as necessary with identical results, which is one of the
Needs an better DMI link / more PCI-e lanes (Score:2)
The non-extreme / server ones are very limited on PCIe, and even in systems like the Mac Pro the PCIe limits / DMI hold it back.
The Mac Pro should have had 2 SSDs, but due to the limits it only has one.
Re: (Score:3)
It has a PCI Express SSD, which is many times faster as it sits directly on the PCIe bus. It is rated for over 700 MB a second.
Re: (Score:2)
The number of lanes is too low.
Re: (Score:2)
Is that really true compared to an ATA port? It handles graphics cards just fine. Perhaps someone with more knowledge can elaborate on this?
These cards are just coming out for PCs too, and man, they are expensive, but they can promise bandwidth of about 1 GB/sec. I think in a few years, when this is the norm, the mechanical disk will finally die. AHCI will seem slow in comparison.
Re: (Score:2)
The extreme platform, as you call it, has had relatively affordable quad cores (i7-3820, i7-4820K), and I guess there will be a similar quad-core Haswell-E for sale.
You can go that way if you want a workstation with crap tons of RAM, I/O and PCIe slots.
Weird Business Strategy (Score:3)
Does anyone else find it kind of weird that Intel seems to have gotten into a pattern where their supposed top of the line CPUs are perpetually a generation behind their supposed commodity CPUs in terms of technology?
server cpus are more complicated (Score:2)
The desktop/laptop processors are easy...single socket, relatively small number of cores.
It takes effort to add the bits to allow the processors to scale to 10/12 cores, huge caches, and multiple sockets. They also use more complicated memory modules, different motherboards, etc.
Also, large companies are able to get their hands on limited quantities of these cpus well before they're generally available for large-scale ordering to allow their engineers to build products on them and test how they'll behave.
Re: (Score:2)
I would argue it the other way: achieving high single-thread performance is very complicated and requires both more design work and a better understanding of the given process than is usually available for the initial launch, where they're using projected and calculated silicon characteristics. A year later they have some experience with massive volume production, know to many decimal places what their yield will be, and have had more time to do the custom circuits that high-frequency CPUs require.
Adding
Re:Weird Business Strategy (Score:4, Insightful)
Re: (Score:2)
Does anyone else find it kind of weird that Intel seems to have gotten into a pattern where their supposed top of the line CPUs are perpetually a generation behind their supposed commodity CPUs in terms of technology?
Not at all - the commodity CPU customers can do beta test for the more risk-averse enterprise server CPU customers.
Re: (Score:2)
Does anyone else find it kind of weird that Intel seems to have gotten into a pattern where their supposed top of the line CPUs are perpetually a generation behind their supposed commodity CPUs in terms of technology?
They're not really consumer CPUs, they're a spin-off of Intel's server/workstation CPUs for the enterprise. That market requires a lot of validation and is generally very conservative, preferring tried and true technology, so it's not unnatural for server chips to lag behind consumer chips by a generation, and so the "enthusiast" processors aren't ready until the Xeons are. My guess is that most of them are "damaged goods": server CPUs with ECC, QPI, vPro, TXT or other essential server features broken, but if
Re: (Score:2)
It makes sense for a couple of reasons
1: Intel desperately want to stop the portable computing market moving away from laptops and laptop-like tablets towards smartphone-like tablets. To do that they need to get the most power efficient technology possible into ultrabooks and ultrabook-like tablets.
2: Making a design work properly with 2-4 cores on one chip for laptops and mainstream desktops is a lot simpler than making it work properly with 8+ cores and inter-chip links for a server part (and the high end
Re: (Score:2)
#1. For CPU heavy loads you probably have more than one CPU per board.
#2. Most people don't use their 1U Rack-Mount Servers to play Crysis and TitanFall, they just need to handle a crap-ton of threads/ram/drives. Therefore having the latest built-in GPU features does nothing useful.
#3. Stability > Core Speed
Re: (Score:3)
The design-side motivation is to alternate architectural changes with process shrinks so that you're not trying to debug both at the same time. Prescott tried that, and look how that turned out.
The marketing motivation is that the buyer of the commodity part is more price sensitive and the buyer of the performance part is more feature sensitive. You use the shrunk process for commodity parts first due to the increased die per wafer, which give you both greater volume and lower cost per die so that you can s
Re: (Score:2)
Re: (Score:2)
Commodity CPUs can be advanced in small steps every six months, while enterprise CPUs can be advanced in large steps every few years.
Bout time... (Score:5, Funny)
Finally! I have been waiting for next gen Iris graphics [computerhistory.org] since like forever!
Re: (Score:2)
If you must have an integrated graphics solution, go with AMD. Their implementation is much superior.
Their silicon implementation might be better. Their drivers are still hot canned crap. I have a Foxconn motherboard with onboard video, and even with the latest drivers, if I actually load them (and not just VGA) then the system just crashes. So I dropped a $10 (after MIR) nVidia card in there and everything is working. Only idiots specify AMD graphics. Its only purpose in years has been to mine bitcoins, and now there are dedicated miners which are more efficient.
Re: (Score:2)
"You are full of outdated rubbish."
Please, GMABooster is already preparing to unlock the underclocked Iris GPU.
Because this is what Intel does. Use it at max power, show performance, then underclock the shit out of it and ship it off.
The 950 would have been competitive if it had shipped at its stock 400 MHz (minus not having hardware T&L) instead of the fucked 166MHz the drivers forced at low level.
How about 2 fast cores instead of 8 slow ones? (Score:2)
Re: (Score:3)
You asked for it, you got it! [anandtech.com] Though the downside is these two fast cores don't include AVX, AVX2, or a few other instruction sets.
Re: (Score:2)
Re: (Score:2)
This, but AMD with the Athlon helped a lot (at 33 MHz a year, we might have 2 GHz CPUs now); otherwise Intel wouldn't have had any incentive to push clock speeds that fast. If AMD were kicking them again, I'm pretty sure those exotic conditions wouldn't be such a barrier anymore.
Re: (Score:3)
Ok, but aside from the n% increase over the n% increase over the n% increase over the n% increase, what has Intel done for us?
Intel makes the 2 fast core processor right now, today, and it'll cost you a staggering $120 to $150. It's called the Haswell Core i3 and each of its cores is faster than any of the cores in your $5000 machine from 2007. It will run Dwarf Fortress faster than anyone would have imagined back then.
Of course there's no limit to what you'd like, but if you have a problem with the amazi
Re: (Score:3)
Silicon tops out at ~ 5 GHz.
Germanium X tops out at ~500 GHz.
The average consumer doesn't give a rats ass about GHz, which means that you will never see cheap 10 GHz CPUs anytime soon.
Hell, we're STILL waiting for Knights Corner / Landing 48+ core CPU to ship to the general public.
re: Germanium X (Score:3)
Make that 798 GHz [phys.org]
"When we tested the IHP 800 GHz transistor at room temperature during our evaluation, it operated at 417 GHz,"
http://en.wikipedia.org/wiki/S... [wikipedia.org]
What's with the past tense in the headline? (Score:2)
Intel Announced 8-Core CPUs And Iris Pro Graphics for Desktop Chips
Okay, I know that strictly speaking it did happen in the past, but that's not how headlines are usually written.
DDR4? (Score:2)
I mean, everybody is so excited about DDR4... But do people understand that instead of 8 DIMM slots we'll get only 4 (1 DIMM per channel instead of 2-3)? So while keeping costs on this side of reasonable, we're getting only half the amount of memory?
WTF?!!
Re: (Score:2)
No...not everyone. Going from DDR2 to DDR3 netted fractional gains in real world applications and indications are that the same will be true going from DDR3 to DDR4.
Also plenty of consumer level boards only have 4 DIMM slots now. Which has always been plenty for most people, ever since we moved up from DDR1 boards and their crappy 2GB limit per stick.
Re: (Score:2)
No...not everyone. Going from DDR2 to DDR3 netted fractional gains in real world applications and indications are that the same will be true going from DDR3 to DDR4.
To put a bullseye on this, it's because latencies haven't really changed. It's a rare workload that isn't either CPU limited or RAM latency limited, rather than RAM bandwidth limited. DDR4 isn't going to change that.
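A back-of-the-envelope model of why more bandwidth doesn't help latency-bound code (all numbers invented for illustration, not measurements):

```python
# Time to touch N cache lines, in nanoseconds. Streaming reads pipeline
# and are bandwidth-bound; dependent (pointer-chasing) reads each wait
# for the previous one to return and are latency-bound.
LINE = 64  # bytes per cache line

def touch_ns(lines, latency_ns, bw_bytes_per_ns, dependent):
    if dependent:
        return lines * latency_ns           # serialized round trips
    return lines * LINE / bw_bytes_per_ns   # fully pipelined streaming

n = 1_000_000
print(touch_ns(n, 70, 12.8, dependent=False))  # streaming, DDR3-ish bandwidth
print(touch_ns(n, 70, 25.6, dependent=False))  # double the bandwidth: halved
print(touch_ns(n, 70, 12.8, dependent=True))   # chasing: 70,000,000 ns
print(touch_ns(n, 70, 25.6, dependent=True))   # chasing: identical, latency rules
```

Doubling the bandwidth halves the streaming case and does nothing at all for the pointer-chasing case, which is the parent's point about DDR3 to DDR4.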
Re: (Score:2)
And this is a well-known problem, ever since the SGI O2 with its UMA had the same issue. It had top-of-the-line memory bandwidth for a small desktop workstation at the time, at 2.1 GiB/s for the system RAM, but in terms of 3D performance etc. it was outperformed by what were, on paper, inferior predecessors unless the dataset was really large (and then the CPU couldn't keep up instead...)
We already see some of the issues with the new Xbox, while the PS4 won't run into the issues quite as badly, due to going w
Re: (Score:2)
Re: (Score:2)
So while keeping costs on this side of reasonable, we're getting only half the amount of memory?
I suspect it will be a pain when the platform first comes out but in time 16GB desktop DDR4 modules will become affordable while I doubt 16GB desktop DDR3 modules ever will (if the boards even support them)
Re: (Score:2)
Fun story, AMD supports 16GB unregistered DDR3 DIMMs, but Intel CPUs don't, except the 8-core Atom and presumably Broadwell. If those 16GB DIMMs ever get affordable and readily available, it would probably be in 2015 when there are Broadwell desktops/laptops around.
Re: (Score:2)
Re: (Score:2)
16GB registered ECC DDR3 server modules are only $158 according to Newegg, but at least on the Intel side you need a server board and CPU to use them.
16GB unregistered non-ECC desktop DDR3 modules are another matter. AFAICT only one specialist manufacturer has announced that they are making them, and when I google the part number they list I don't find anywhere actually selling it. Also, from what I have read, the standard init code that Intel gives to BIOS manufacturers doesn't support 16GB modules and it is u
Re: (Score:2)
Enthusiast mobos mostly only have 4 slots anyway.
Define "Enthusiast mobos", there are plenty of LGA2011 desktop boards with 8 dimm slots.
And Intel showed a Haswell-EP system with 3 DIMM slots per channel while they keep saying it's 1 per channel; clearly we haven't gotten the full story.
That's EP, not E. It wouldn't surprise me if DDR4 desktop memory only supports 1 DIMM per channel while registered ECC DDR4 server memory supports more, just as with DDR3 the desktop stuff maxed out at two DIMMs per channel while the server stuff went up to three DIMMs per channel.
Re: (Score:2)
"Enthusiast mobos" - anything on the front page of Newegg.
Iris Pro is a white elephant (Score:4, Interesting)
The eDRAM simply makes the chip way too expensive.
If you look at the price of the Core i7-4770R: $358. It's an i7 but has only 6 MB of cache (compared to the 8 MB of the regular i7-4770). So basically, it's about the same value as an i5-4670K, which costs $243. With the price difference you could buy a Radeon R7 260X, which will trash Iris Pro in performance.
Re: (Score:3)
6 MB L3 plus 128 MB of L4 gets you a faster CPU than 8 MB of L3 alone, actually.
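A toy average-memory-access-time (AMAT) model shows how that can happen; the hit rates and latencies below are invented for illustration, not Intel's figures:

```python
# Toy AMAT model (all numbers made up for illustration): a slightly
# smaller L3 backed by a big eDRAM L4 can beat a larger L3 whose misses
# all go to DRAM, because the L4 catches most L3 misses cheaply.
def amat_ns(l3_hit, l3_ns, l4_hit, l4_ns, dram_ns):
    l3_miss = 1 - l3_hit
    return l3_hit * l3_ns + l3_miss * (l4_hit * l4_ns + (1 - l4_hit) * dram_ns)

plain_8mb = amat_ns(0.92, 10, 0.0, 0, 80)    # 8 MB L3, misses go to DRAM
edram_6mb = amat_ns(0.90, 10, 0.95, 30, 80)  # 6 MB L3 + big eDRAM L4

print(round(plain_8mb, 2))  # 15.6 ns average
print(round(edram_6mb, 2))  # 12.25 ns average
```

The 2 MB of lost L3 costs a little hit rate, but the L4 turns most 80 ns DRAM trips into 30 ns ones, so the eDRAM part comes out ahead on average.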
Re: (Score:2)
What does that have to do with it? i5-4670K and i7-4770R have same amount of L1, L2 and L3 cache
Re: (Score:2)
And instead of a power-hungry R7 260X room heater, you could get a cool nVidia GTX 750TI.
What's a "desktop"? (Score:2)
What's a "grumpyman"? (Score:4, Funny)
Really.
And the Mac Pro is now lagging again. (Score:2)
Unless they have a refresh of the Pro when the chip launches or soon after the Pro is back to being too expensive for the performance.
Re: (Score:3)
The Pro has a 10+ core option if you max it out.
"Enthusiasts" (Score:2)
I don't think the poster knows what hardware "enthusiast" really means. No one is looking forward to the "E" EXTREME series CPUs, perhaps except some people with more money than brains. Enthusiasts want CHEAP hardware that they can then fiddle with to gain big results. Haswell, while a decent CPU, only really offered better power efficiency and a few more instruction sets that might be potentially useful in a few years. Not exactly a ringing endorsement.
This sort of thing costs $1000+ dollars, so unless you are a ric
Re: (Score:2, Informative)
There's no reason for most programs to be 64-bit. Most programs don't need to address that much RAM, nor do they need the additional registers that you get with 64-bit processors.
Now for programs that use massive amounts of RAM or need the additional registers, going 64-bit makes sense, but it's silly to suggest that there's something wrong with 32-bit programs in general that would be fixed by moving to 64-bit.
Re: (Score:2)
Re: (Score:2)
But your memory usage will be lower if all your pointers are 4 bytes instead of 8.
Re: (Score:3)
Memory is fairly cheap (though if you really want lots of memory, in addition to the cost of the memory itself you have to consider the cost of the platform to accommodate that memory), but cache, particularly the lower levels of cache that are closest to the CPU, isn't so cheap. If you have a pointer-heavy workload (e.g. data structures that are mostly cross-references implemented using pointers) then you can fit a lot less of your workload in cache with 64-bit pointers.
For java (which is very pointer heavy)
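The pointer-size overhead is easy to see in practice; a small sketch (my example, assuming a typical 64-bit CPython build, where every list slot is an 8-byte pointer):

```python
import sys
from array import array

N = 1000
boxed = [0] * N               # list of N pointers to objects
packed = array("i", [0] * N)  # N raw machine ints, no per-element pointers

# On a 64-bit CPython, the list's buffer alone is ~8 KB of pointers,
# while the packed array's payload is ~4 KB of actual values.
print(sys.getsizeof(boxed))
print(sys.getsizeof(packed))
```

The same effect is why pointer-heavy data structures occupy roughly twice the cache footprint when compiled for 64-bit, and why tricks like the JVM's compressed oops exist.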
Re: (Score:3, Insightful)
I just did a ps -e | wc -l and got 245. Maybe most of my processes are only single threaded but since there's 245 of them I'm glad my processor has 8 hardware threads to handle them.
Re: (Score:2)
A dual core solves that already. That allows your most CPU hungry process to use 100% of one core (when it does) while your 244 other processes use about 10 to 20% of the other core.
Re: (Score:2)
The thing is, when you look more closely, you find that most of those processes are spending most of their time asleep. So there is little to be gained from more than 2 cores (one for the program you actually care about, one for the background crap) unless the program you actually care about can effectively spread its work across multiple threads*.
* There are a lot of processes that have multiple threads but only use one of them at a time to do significant work.
Re: (Score:2)
That's a good argument for dual-core over single-core. Buying a single-core CPU is for chumps and has been since 2007. But it is not a very good argument for staying with dual-core over moving up to quad/hex/octo core setups. That argument boils down to cost
Re: (Score:3, Insightful)
The few times I'm ever waiting on CPU, it's multi-threaded. Video transcoding, occasionally compiling. I can't remember the last time I heard of a game being CPU bound - that's always GPU-bound these days.
Re: (Score:2)
Games are often CPU bound, or rather have some significant CPU requirements; it's just that new graphics cards are always benched on fast CPUs and the "gamers" tend to keep their hardware up to date. If you put a good graphics card on an old, unspectacular CPU, your games may run like crap.
Re: (Score:2)
Anecdote time:
I used to have a dual-core Athlon 64 with a GF 9800GT, and Crysis had to be run at a really low res, with all graphics at bottom settings, and I always attributed that to the GPU... Then I upgraded to an i5-2500... And suddenly I could play Crysis at 1920x1080 and even pull up some of the settings to medium or even high... Because the CPU could suddenly keep the GPU fed...
Re: (Score:2)
There are dozens of AAA titles which are CPU bound, especially multi-player games where the CPU has to keep track of everything so that it can all happen in a deterministic order. Since that can't happen across multiple threads, your FPS gets limited by the speed of a single core.
(Planetside 2 is pr
Re: (Score:2)
> We still live in this era of Single threaded games,
That hasn't been true since the PS3 and Xbox 360 days.
Yes, a lot of (PC) indie games are single-threaded, but any game that ships on consoles is multi-threaded.
Re: (Score:2)
A point that I read somewhere is that even though they're multithreaded, they largely have "the rendering thread", "the audio thread", "the physics thread", etc.
Few games are really well multi-threaded. On the other hand, this puts a cap on runaway CPU requirements.
Re: (Score:2)
Nonetheless, benchmarks show that there are new games that will take advantage of, say, an 8-core CPU. I think it's measurable when you look at a 6-core AMD FX vs an 8-core AMD FX of the same generation. Still, a lot of people and magazines do not recommend buying anything more expensive than an unlocked quad-core Haswell i5 ($220-240). Anything faster gets too expensive.
Re: (Score:2)
The same price ranges also apply to GPUs. Any GPU in the $80-$120 range can probably handle most games at 720p, going with something in the $180-$220 range gets you a GPU that can handle almost everything at 1080p. Spending $300+ on a GPU is only needed if you are doing a trip
Re: (Score:2)
There are plenty of specialized applications that use that many cores. Media encoding comes to mind. An average desktop doesn't need these chips, but there are some users more than willing to pay a premium price for this.
Re: (Score:2)
Re: (Score:2)
Look up Iris Pro on Youtube.
Look up the price difference between a chip with Iris Pro and a similarly spec'd chip without. How does the Iris Pro compare with a $200+ stand alone GPU?
.. now you get it .. the Iris Pro is crap, not because it doesn't perform, but because it costs many times what it's actually worth.
ding ding ding
Re: (Score:2)
Re: (Score:2)
Actually I run an AMD processor. So what if it had half the FP power. Most FP intensive applications I use have GPU acceleration. Oh and yeah it was cheaper than an Intel processor with the same integer performance. Heck it was cheaper than an Intel processor with the same FP performance. That's how expensive Intel processors are these days.
If you got yourself a PS4 or an Xbox One, you are using an AMD processor.
Re: (Score:2)
But with AMD you have a higher power bill, need to buy a bigger heatsink, stay clear from lowest end motherboards. It ain't exactly cheaper.
Re: (Score:2)
I was thinking a 15 euro heatsink will do on an Intel, and a 30 euro one on AMD.
Cheapest motherboards are those around 45 euros or less. Putting a 125 watt FX on that is a very bad idea. The electric load is too big and the CPU may be throttled down. In contrast an Intel mobo will run i5/i7 fine. I agree that low end mobos have great stability otherwise, they have high volume and production is reliable.
Power use is insignificant if you don't pay for it or shut down/stand by the PC often. Else over the cours
Re: (Score:2)
They used those 8x more transistors to increase the performance per clock
I couldn't find any current quad-core Haswell CPUs with a 2.4G clock like the Q6600, but an i5 4430 is twice as fast, despite having less cache.
Its multi-threaded performance is on par with a 2-core G1820 Celeron. The Celeron is much better at single-threaded performance and uses half the power, despite having integrated graphics in there too.
They also moved the memory controller in to the CPU. That takes up space.
Re: (Score:2)
I'll wait for the mac nano.
They could halve the price if they abandoned Intel for their own A7 chip. i.e. iPad internals with 8GB RAM running OS X.
Re: (Score:2)
Our 4770Ks will still have much better performance per dollar than these E-chips.
I'm not happy to find that we got gimped out of VT-d by buying the current top chip, however. Being able to (possibly) run Windows only in a VM for gaming while using Linux as the host would be awesome.