Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory
crookedvulture writes: Intel has updated its high-end desktop platform with a new CPU-and-chipset combo. The Haswell-E processor has up to eight cores, 20MB of cache, and 40 lanes of PCI Express 3.0. It also sports a quad-channel memory controller primed for next-gen DDR4 modules. The companion X99 chipset adds a boatload of I/O, including 10 SATA ports, native USB 3.0 support, and provisions for M.2 and SATA Express storage devices. Thanks to the extra CPU cores, performance is much improved in multithreaded applications. Legacy comparisons, which include dozens of CPUs dating back to 2011, provide some interesting context for just how fast the new Core i7-5960X really is. Intel had to dial back the chip's clock speeds to accommodate the extra cores, though, and that concession can translate to slower gaming performance than Haswell CPUs with fewer, faster cores. Haswell-E looks like a clear win for applications that can exploit its prodigious CPU horsepower and I/O bandwidth, but it's clearly not the best CPU for everything.
Reviews also available from Hot Hardware, PC Perspective, AnandTech, Tom's Hardware, and HardOCP.
*drool* (Score:5, Funny)
*drool*
'nuff said.
I'm still clunking along on a P4 3.8 GHz. I'd love a new box that fast!
Re: (Score:2)
Why spend $2000 to update from a P4 though? For $350 or $400, a system can show your P4 to be a waste of electricity.
Re: (Score:3)
You're not the only one. I'm chugging along with a C2D from 2008. I can get at least another three years out of this, if not more. Speed brings nothing to the table in personal computing anymore (outside of gaming, and I'm not and never have been a gamer).
Re:*drool* (Score:5, Informative)
Re: (Score:2)
Awful and mediocre programmers (the majority) are trying their hardest to make their software as inefficient as possible so as to completely or mostly eliminate any advantages we get from the latest and greatest technologies.
Man, I'd say we're leaving behind the point where bad programmers can slow these machines down, and we're not looking back. The downside to this is that it's going to fully encourage those bad programmers to continue their bad practices since "their program runs great!" (because of the hardware, not their good coding skillz)!
Re: (Score:2)
All too often the programmers aren't given time to optimize the code. Middle management weenies hear the "we got it working" part of the demonstration, and don't care that it's running too damn slow. They have shit to sell right now!
And yes, too many coders have no idea how to optimize.
Re: (Score:2)
It's not even about optimizing the code; it's about making choices that from the beginning cannot result in faster code. People like to focus on the overhead of JITs, GCs, hidden object copies, etc. in many "modern" languages, but frankly, while they have an effect, the mindset they bring is a worse problem.
Modern machines can lose a lot of performance with poor memory placement/allocation in a NUMA configuration, doing cache line ping-ponging, and on and on. Things that are simply not controllable if your langu
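To illustrate the cache line ping-ponging mentioned above, here is a minimal C++ sketch of false sharing under stated assumptions (the 64-byte cache line and the iteration counts are illustrative, not measured on any particular chip): two threads hammer adjacent counters, and the only difference between the two structs is whether the counters share a cache line.

    // Minimal sketch of cache-line "ping-ponging" (false sharing): two threads
    // increment adjacent counters. When the counters share a cache line, the line
    // bounces between cores; padding them onto separate lines avoids that.
    // The 64-byte line size is a typical x86 value and is assumed here.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    struct Shared {                         // both counters in one cache line
        std::atomic<long> a{0};
        std::atomic<long> b{0};
    };

    struct Padded {                         // counters pushed onto separate lines
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };

    template <typename T>
    double run(T& c) {
        auto start = std::chrono::steady_clock::now();
        std::thread t1([&] { for (long i = 0; i < 50000000; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
        std::thread t2([&] { for (long i = 0; i < 50000000; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
        t1.join(); t2.join();
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
    }

    int main() {
        Shared s;
        Padded p;
        std::printf("same cache line:      %.2f s\n", run(s));
        std::printf("separate cache lines: %.2f s\n", run(p));
    }

On typical multi-core hardware the padded version finishes several times faster even though both versions do exactly the same arithmetic; whether that layout choice is even controllable is the parent's point about language mindset.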
Re: (Score:3)
Speed brings nothing to the table in personal computing anymore (outside of gaming, and I'm not and never have been a gamer).
There are LOTS of applications outside of gaming where more speed is appreciated. Especially if you're a professional. (Of course, it's arguable you didn't mean that when you said "personal" computing, but I'm not working in an office, and my work machine is my "personal" machine.)
I was chugging along with a c2d for a long time too. But there came a time when it was long past due for replacement.
Re: (Score:2)
There are LOTS of applications outside of gaming where more speed is appreciated.
But a lot of those applications are also runnable in networked clusters. I stopped compiling code on my desktop probably 15 years ago and haven't looked back. Buying a single machine with 32 cores and a super-fast RAID shared between a dozen or so developers both improves individual compile times and saves a bunch of money over buying a bunch of faster desktop machines for everyone. Edit the code locally, save to a network shar
Re:*drool* (Score:4, Insightful)
There was a time, back in the '90s (rapid progression of 286/386/486/Pentium), where you needed to upgrade your computer every 2-3 yrs or you couldn't even run the latest software... and I'm not talking hard-core games... even simple stuff like word processing or the newest version of Windows.
Seems like now you can get by with 5-6 yr cycles, especially with the introduction of an SSD and more RAM.
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
When I had a Dual Processor Power Mac I could turn the heaters in my house down a couple of notches as the G5 would act like a space heater. Heck that was its nickname in a number of forums.
I decided enough was enough when the temperature in front of the computer in summer was rivaling sitting out on the bitumen on the road. Almost immediately after turning the G5 off permanently, my power bills went down $70 per quarter.
Re:*drool* -- FPGA development (Score:2)
I do a lot of FPGA programming, and it takes me 15-20 minutes to synthesize a design on a modern fast computer. As more of the part is being used, synthesis takes more and more time, as the chip becomes harder to route. I'm a user that is primarily CPU bound. I hope that Intel will continue to push on raw performance; for the past few years we've only seen marginal improvements in CPU performance.
There is also the issue that FPGAs keep getting cheaper/bigger, so no matter how fast your rig, it al
Re: (Score:2)
Maybe not doubling every 18...but the general trend is there...
WiFi speeds are about 100x faster than 10 yrs ago.
Boot times are about 10-20x faster than 10 yrs ago.
Battery life is about 4-5x longer than 10 yrs ago.
Meanwhile, prices are about 1/4 what they were 10 yrs ago.
I'll take that over speed increases.
Re:*drool* (Score:5, Informative)
Single-thread performance from the Core 2 Duo of 2008 to this year's i7-4770 improved just 90%, so not even a doubling in speed.
Re: (Score:2)
The fastest i7 is only 85% faster at single threaded jobs.
The fastest i5 is only 65% faster at single threaded jobs.
The fastest i3 is only 57% faster at single threaded jobs.
The fastest AMD FX is only 26% faster at single threaded jobs.
The fastest AMD APU is only 18% faster at single threaded jobs.
The cost of the E8600 is $45. forty-five fucking dollars.
Re: (Score:2)
Yes, and for a desktop machine probably 90% of what I do is limited by single-thread performance, hence why I haven't upgraded in a while myself.
So, I do welcome faster machines, what I don't welcome is the fact that the vast majority of machines being sold today are actually _SLOWER_ than what was available a few years ago.
This happened at work: we replaced a couple of older machines that cost a fortune with a couple of newer, far less expensive ones, and the performance was actually worse.
Re: (Score:2)
"Speed brings nothing to table in personal computing anymore (outside of gaming and i'm not and have been a gamer)."
That is just so stupid. Personal computing is not just about gaming or browsing or a bit of coding.
Many non-geeks do things that require way more computer horsepower than geeks do. Like 3D, video/movie, heavy graphics editing, music and the list just goes on.
Re: (Score:2)
That's why I said personal computers, not business machines/workstations.
Re: (Score:2)
And yet again you make an erroneous assumption. Many non-geeks do those things for HOBBIES, and only have one computer (and maybe a tablet...)
Which means that they do those hobbies on their personal computer.
Re: (Score:2)
Haha, I am still using mine, though soon I will succumb to Star Citizen and an upgrade.
Re: (Score:2)
...to do what? I've long stopped caring about this stuff. It seems to solve no real problems out there.
Well, I use my CPU to transcode media files, so I might get one. But for gaming? When will CPU ever matter for gaming, unless you're running some terribly-written Java game?
Re: (Score:2)
When will CPU ever matter for gaming, unless you're running some terribly-written Java game?
When consoles stop shipping with such crappy CPUs.
Re: (Score:2)
PS4 and XBone are _already_ x86 so not sure if you are being serious, sarcastic, or cynical.
Are you referring to their abysmal 1.6 GHz clock speed [wikipedia.org]?
Re: (Score:2)
Even selecting a 4-core CPU at complete random from the $50 to $100 range on Newegg, you are still likely to get a better performing CPU. In other words, it's difficult to actually own something worse. The reason for this is that the closest desktop APU to the chip that the PS4 uses is only $49.
$49 fucking dollars.
So yes, consoles ship with "such crappy CPUs."
Re: (Score:2)
Game developer here. A lot of stuff still happens on the CPU, especially when you're talking about large-scale AAA 3D games. Note that some of these items may make use of additional GPU or specialized hardware, but that's still somewhat rare.
* Model animation is performed on the CPU. This is probably the biggest CPU hit in most AAA games today.
* Audio engines are all in software now, and they're applying a lot of real-time effects, in addition to the costs of real-time decompression and mixing overhead.
*
Re: (Score:2)
I've never seen a gaming-related benchmark where the slowest new desktop CPU you could buy wasn't "enough".
Can you give me an example of a modern game (other than minecraft) where the difference between 2x2 GHz and 8x4 GHz CPUs would be human-noticeable?
Re: (Score:2)
"When will CPU ever matter for gaming, unless your running some terribly-written Java game?"
When you have a game that does a lot of AI stuff? Sins of a Solar Empire and the Total War series both tend to hit the CPU quite hard when your fleets/armies become large....
To the point of "Don't zoom in, just let the fleet autoattack...." yet you zoom in anyway, and get 120 FPS thanks to the GPU, but no units doing anything except in a slideshow, due to the CPU being hogged... Of course, I probably shouldn't have e
Re: (Score:2)
Thanks - a real example! Wow, to me "300" does not sound like a large number for a computer. My mind boggles at how anyone could write code that bad - the AI must be written in some wildly inappropriate language? Or the developer just didn't care about perf and never got a bug assigned as they didn't QA at that scale? Nah, he got the bug and the game shipped with it, of course.
Re: (Score:2)
When I think "next gen games" I think games written for mobile platforms that look like flash games and would have run on my C64, What specifically did you have in mind?
Re: (Score:2)
Working on my pet project. Having Eclipse start in under 10 minutes. Being able to run *all* my code manufacturing jobs all at once, instead of having to run three at a time on my laptop (the longest job takes 20 hours to run.)
Believe me, I could use the CPU power. I'm not an "average" user, just a broke one. :P
Re: (Score:2)
BTW, that 20 hour job is running on a Core i7 mobile/laptop chip, not my P4. I shudder to think how long the P4 would take...
Re:*drool* (Score:4, Interesting)
Price (Score:2)
Re:Price (Score:5, Interesting)
Though the lower-end model is only $300 for a 6-core 12-thread!
http://www.microcenter.com/pro... [microcenter.com]
Nice (Score:2)
I recently (less than a year ago) bought an i7, four-core, eight-thread machine. I use it a lot for chess analysis, and it is amazing how quickly it can get to a 24-ply-deep analysis. Even with a slightly slower clock, 8 cores would be so much quicker.
Re: (Score:2)
I have a similar spec in a laptop workstation where I run cloud software in VMware Workstation. For most people, 24GB of RAM and a quad-core i7 is not going to make their word processing or browsing any better, so for most people tablets are more convenient and useful.
Cores matter in virtualization of course, but at the moment, the slowest component is the hard disk.
***Big intake of breath*** (Score:2)
But does it run Linux?
Re: (Score:3)
But does it run Linux?
No, but it runs NetBSD!
Re: (Score:2)
I'm almost going to buy one. But only if I can get better performance that makes the extra money worth it.
At the moment - naaaah!
Re: (Score:2)
No, but it runs MenuetOS which is what you should be learning because ASM is god and anything else plain out fucking sucks.
just wait (Score:5, Interesting)
until next year. 14nm shrink should be a huge boost in both efficiency and performance.
The X99 is an "enthusiast" platform and has pricing along those lines.
DDR4 is also extremely new. Expect it to get faster/better timing specs as time progresses.
That's a pretty silly statement (Score:2)
In computer technology, there is ALWAYS something new next year. Yes, there'll be a 14nm shrink next year (or maybe later this year)... but then just a year away will be a technology update, a new core design that is more capable, and of course they'll have more experience on the 14nm process and it'll be better... however only like a year after that 10nm will be online and that'll be more efficient.
And so on and so forth.
With computers, you buy what you need when you need it. Playing the "Oh something bett
Re: (Score:3)
DDR4 is also extremely new. Expect it to get faster/better timing specs as time progresses.
this.
DDR4 is like $350 for 4x4GB. Too expensive still. This time next year we should see prices closer to what we are paying for DDR3 today.
DDR4 is "extremely new" as in 2011 [wikipedia.org]. For me, the only real improvement seems to be in power consumption.
Since regular SDRAM, each DDR generation has doubled throughput, but latencies have only improved very slowly. So in many cases the doubled data rate is just a marketing gimmick. This might explain why each DDR generation has been relatively slow to enter mass market. For example, in late 2008 I was speccing a work laptop, and it had this new and shiny DDR3 memory, with all the issues such as price and
5820K is an extremely nice part (Score:5, Interesting)
The 5820K is packing 6 cores and an unlocked multiplier for less than $400. If you don't absolutely need the full 8-core 5960X, then the 5820K is going to be a very powerful part at a reasonable price for the level of performance it delivers.
Re: 5820K is an extremely nice part (Score:3)
Yes, but X99 and DDR4 blow any chance of doing Haswell-E on a budget. I need a new PC and am considering either the 4790K or the 5960X; the former is fine now, while the latter is going all out on new tech, which I hope will last longer. Eight cores crush the mainstream chip in multithreading. Eight RAM slots in case I want to double up, of a type that will be around for a long time and improve a lot. Plenty of PCIe lanes. Slightly weak single-threaded performance at stock but considerable overclocking potential. With 10% performance
Re: (Score:2)
In my experience, I'm seldom if ever CPU-capped, and if I am, what I'm doing is the sort of thing that 10% won't make a difference on. My advice: save your money. Buy all the RAM you'll want now (16GB; 32GB would be extravagant) before it becomes expensive.
The few extra months you buy with the 5960X isn't going to make a difference in the long run.
Of course, I don't know your particular application, nor do I know your particular financial situation. YMMV.
Re: (Score:3)
I was just looking at that one a few hours ago (need to replace my desktop ... Mozilla apps are pigs with high core-affinity).
I decided against it because it has many fewer of the new instructions [wikipedia.org] than the 4790K [amazon.com], a slower clock, and almost double the TDP (and I prefer quiet/low power).
Obviously for highly parallel tasks that can fit nicely in the 5820K's bigger cache, it will win handily. I'd love to see an ffmpeg coding shoot-out, but I'm concerned that the 5820K's disabled PCIe lanes might hamper other sy
Re: (Score:2)
You mean like the TSX-NI [wikipedia.org] instructions that can increase performance of certain highly threaded workloads up to 40%?
Oh, so sorry. Intel spectacularly bit the big one on that rollout. Completely busted. So bad that the latest microcode update completely disables those instructions.
Obligatory (Score:2)
But will it run Crysis?
Re: (Score:2)
No, you'll have to wait for the upgrade with the industrial cooling unit bolted on. Then you might get it usable.
YMMV
Re: (Score:2)
It runs Crysis flawlessly while I'm banging your mother, sister, and father.
Your brother is too busy with my dog for me to do anything with him.
And the Mac Pro is now obsolete or soon will be. (Score:2)
That is the problem with Apple's obsession with small and sexy. Of course if Apple updates the Pro this year all is good but given their history I would not bet on it.
Re: (Score:2)
That's just nonsense. Just because there are taller buildings doesn't make the Empire State Building any smaller.
Re: And the Mac Pro is now obsolete or soon will b (Score:2)
Come on. The Mac Pro requires you to spend big bucks. It's not too much to ask that Apple follow Intel's roadmap with the Mac Pro.
Re: (Score:2)
Elephant in the room (Score:4, Informative)
No one is talking about the elephant in the room: RAM prices are so high that you'd have to spend $700 to hit 64GB RAM (the max the board supports). That is just outrageous.
These prices are going to lead to a severe drop in demand.
Re:Elephant in the room (Score:5)
Re: (Score:2)
Usually people who need more than 16 gigs are requiring this for work-related reasons, where the $700 takes a different perspective.
$700 may not be much compared to labour, but it's still real money someone in the economy is going to pay.
The "room" is where you have other computer hardware and electronics, and that keeps getting cheaper and faster all the time. That's why the price fixing of RAM is so obvious. I remember paying less per GB in the DDR2 days.
Also, memory is supposed to be this relatively dumb part of machinery. As I'm speccing out a new home machine, I notice that the mobo will cost less than 8 GB of DDR3, which is t
Re: (Score:2)
I suspect you're right about price fixing. However, the fact that someone in the economy has to pay a large sum of real money is irrelevant in determining cost-benefit.
Yes, it's real money. But so are labor costs. And, in theory, those labor costs represent [a portion of] the real value that person is adding to the economy. So anything that makes the employee able to add value more efficiently is overall good.
In general, an employee would not be earning $200/hr on a $7,000 workstation if they weren't ad
Re: (Score:3)
OMG, 64 GB of RAM for only $700. That is simply amazing, how cheap it is.
Re:Elephant in the room (Score:5, Funny)
Two years ago it was half that price.
Electronics prices are supposed to drop over time. When you compare current prices to 5 years ago there isn't much of a difference.
As wikipedia likes to say (Score:2)
(citation needed)
I have never seen RAM as cheap as it is now. When you can buy a 16GB ECC DIMM for less than $200, it is rather wonderful. Our researchers that use big amounts of memory are extremely happy with how much memory they can stuff in desktops and servers for a reasonable price.
Now I'll admit, I don't have a chart of RAM prices, so I suppose I could be wrong, but then I've worked in IT for the last, oh, 20ish years on a continuous basis and spec'ing and buying hardware is a fairly common part of m
Re: (Score:2)
See http://www.extremetech.com/com... [extremetech.com] and https://pcpartpicker.com/trend... [pcpartpicker.com]
Re: (Score:2)
"RAM prices are so high that you'd have to spend $700 to hit 64GB RAM (the max the board supports). That is just outrageous."
Except I'm finding 8GB RAM sticks of DDR3 for 40 bucks, so you obviously have no fucking clue how to price-shop.
Oh, you use /g/ recommended sourcing. No fucking wonder, you retard. Go to pricewatch.com.
You're probably the same idiot that shills for logicalincrements.
Re: (Score:2)
Oh, you use /g/ recommended sourcing. No fucking wonder, you retard. Go to pricewatch.com.
Why, so they can buy some shitty off-brand memory with an ass warranty?
Re: (Score:2)
Shitty? I've got a 486DX4 laptop using memory I got from pricewatch.com in 1996.
And it's still working.
All of my memory bought from Pricewatch has worked fine and is still working to this day.
Re: (Score:2)
Re: (Score:2)
Actually, the prices I was quoting were for DDR3, not DDR4 so the problem is actually worse than you think.
Re: (Score:2)
You might want to tone down your insults. Furthermore, I took a look at http://www.pricewatch.com/syst... [pricewatch.com] and prices are roughly double what you claimed. Next time please provide concrete links instead of insulting for the sake of insulting.
When will the newer gen of intel chips go DDR4? (Score:2)
When will the newer gen of Intel chips go DDR4? Also, what about AMD?
Broadwell is not going to have DDR4.
How is that surprising? (Score:2)
Have you looked at RAM prices? 32GB of DDR3 RAM is about $300-400 for a 4x8GB set, depending on speed and company. So $600-800 for 64GB. OK, well, how about server memory, since you can get servers with 6TB of RAM if you like (really, check HP or Dell). For a 16GB DIMM, which is the largest you can get before the price per GB skyrockets, it is about $160-200. So $640-800 for 64GB.
So hmmm, looks like DDR4 is right in line with what other RAM costs, plus a bit of a premium since it is brand new tech. What a shock! Who wo
Re: (Score:2)
Your lawn is too new; I paid close to $200/MB.
The I-Word (Score:2)
For "how many lanes of NSA Bulldozer 7.0" (is that the latest?) we'll just have to wait for the next Vanunu or Snowden.
The 3 year cycle... (Score:2)
Re: (Score:2)
Please check out Star Citizen. If it's to your taste... you will upgrade.
Re: (Score:2)
I dunno. It just feels like the rate of change has slowed down a lot over the last decade. Or maybe I'm just getting old.
10+ years ago, performance was more than doubling every two years through a combination of higher clocks, die shrinks, extra transistors, fundamental breakthroughs in logic circuit designs, etc. Right now, mainstream CPUs are only ~60% faster than mainstream CPUs from four years ago because clocks are stuck near the 4GHz mark, die shrinks are much slower in coming, nearly all fundamental breakthroughs have been discovered, and modern hardware is already more powerful than what most people can be bot
Re: (Score:2)
Maybe the fact that most kids these days only play on consoles?
I think you're right about this one.
Back 'in the day' the game software got more sophisticated almost with each game released. Home consoles were fairly primitive. For the game or pro computer consumer there was a constant pressure to build a machine that could keep up with the latest software.
I think there are several factors which contributed to the stagnation of the hardware upgrade cycle.
1. More game developers now are using middleware to develop their product, instead of writing fresh and improved code
Boring...oddly (Score:3)
Interesting how little benefit they get, essentially.
The X99 mobo and platform is nice, I like a lot of what they're doing there, and all of the system components matter a lot to user experience. But unless you have a very specific requirement, any user would be just as well served with a quad core as with an octa core, if not better served with the Devil's Canyon quad core given the single-threaded performance. That's probably a bad place for Intel to be positioning these, as the target audience for these processors is looking for blazing fast and lots of cores, and it only delivers one of the two.
I think if I was buying a system this week or next (which... I am) I'd be a bit disappointed that I can't put a Devil's Canyon quad core on an X99 mobo and then upgrade the CPU later if they manage to refresh the E series into something more attractive.
Also why can't the DMI link be better in other cpu (Score:2)
Also, why can't the DMI link be better in other CPUs?
Why do you now have to get a 6-core Haswell-E to get more than 16 PCIe 3.0 lanes + x4 PCIe 2.0 (DMI)?
Most people may only need one video card, but with PCIe SSDs coming out, more PCIe is needed.
"...can translate to slower gaming performance" (Score:2)
for non-multithreaded games?
Re: (Score:2)
Yes. Whenever people talk about "games" you know that's really just a secret code word for Dwarf Fortress.
Only 40 lanes (Score:2)
So I'm only going to have the ability to use two GPUs in SLI?
Nope. Fuck that.
Re: (Score:2)
Re:DDR2/3/4 (Score:5, Informative)
Re:DDR2/3/4 (Score:5, Informative)
CAS latency hasn't been measured directly in nanoseconds for some time now. It is now measured in clock cycles. The shorter your clock cycles (the higher your frequency) the shorter in absolute time your CAS latency is for the same number. CAS 10 at 2133 is about the same as CAS 5 on 1066.
CAS latency on Wikipedia [wikipedia.org]
Memory timing on Hardware Secrets [hardwaresecrets.com]
FAQ on RAM timings from Kingston [kingston.com]
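To make the cycles-versus-nanoseconds point concrete, here is a small sketch (the specific module speeds are illustrative examples only): because DDR transfers twice per I/O clock, the clock in MHz is half the MT/s rating, and absolute CAS latency in nanoseconds is CL divided by that clock.

    // Sketch: convert a DDR CAS latency (in clock cycles) into nanoseconds.
    // The I/O clock runs at half the DDR transfer rate (MT/s), so
    // absolute latency (ns) = CL * 1000 / (MT_per_s / 2).
    // The modules below are illustrative examples, not recommendations.
    #include <cstdio>

    double cas_ns(int cl, int mt_per_s) {
        double clock_mhz = mt_per_s / 2.0;  // DDR: two transfers per clock
        return cl * 1000.0 / clock_mhz;     // CL * cycle time in ns
    }

    int main() {
        std::printf("DDR2-1066 CL5 : %.1f ns\n", cas_ns(5, 1066));   // ~9.4 ns
        std::printf("DDR3-2133 CL10: %.1f ns\n", cas_ns(10, 2133));  // ~9.4 ns
        std::printf("DDR4-2133 CL15: %.1f ns\n", cas_ns(15, 2133));  // ~14.1 ns
    }

Which is exactly the parent's example: CAS 10 at 2133 and CAS 5 at 1066 both land at roughly 9.4 ns of absolute latency.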
Re:DDR2/3/4 (Score:4, Interesting)
Just to put "some time now" the time frame into perspective, the last mainstream PC memory form-factor to use asynchronous DRAM was 72 pin SIMMs.
When PCs went from 72 pin SIMMs to the first 168 pin DIMMs, in the mid-1990s, the interface changed to (non-DDR) synchronous clocking.
Re:Broadwell (Score:5, Informative)
If you can wait, then you should always wait for new tech.
Re: (Score:2)
But if I'm trying to game on an old i5-750, wouldn't this be a good time to upgrade to one of the cheaper 4-core Haswells that are running 3.8mhz instead of 2.7? Maybe a Haswell i5 (I guess, I'd need a new mobo then, right?) And the latest PCI-E for a new graphics card.
I don't like to buy the newest and best, but when the second newest becomes cheap. I've got a really nice case, but I'm not sure if I could put a new processor into my old motherboard or if it would even be worth it.
I'd like to do somethin
Re: (Score:2)
"4-core Haswells that are running 3.8mhz"
Only 3.8 MHz? Wow, that's fucking slow. Glad I went AMD instead of intel!
Image processing (Score:5, Interesting)
I use -- and write -- image processing software. Correct use of multiple cores results in *significant* increases in performance, far more than single digits. I have a dual 4-core, 3 GHz mac pro, and I can control the threading of my algorithms on a per-core basis, and every core adds more speed when the algorithms are designed such that a region stays with one core and so remains in-cache for the duration of the hard work.
The key there is to keep main memory from becoming the bottleneck, which it immediately will do if you just sweep along through your data top to bottom (presuming your data is bigger than the cache, which is typically the case with DSLRs today). Now, if they ever get main memory to us that runs as fast as the actual CPU, that'll be a different matter, but we're not even close at this point in time.
So it really depends on what you're doing, and how *well* you're doing it. Understanding the limitations of memory and cache is critical to effective use of multicore resources. You're not going to find a lot of code that does that sort of thing outside of very large data processing, and many individuals don't do that kind of data processing at all, or only do it so rarely that speed is not the key issue, only results matter. But there are certainly common use cases where keeping a machine for ten years would use up valuable time in an unacceptable manner. As a user, I am constantly editing my own images with global effects, and so multiple fast cores make a real difference for me. A single core machine is crippled by comparison.
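As a rough illustration of the "region stays with one core" approach described above, here is a minimal C++ sketch under stated assumptions (the image size, single 8-bit channel, and trivial brightness tweak are placeholders, not the poster's actual algorithms): the image is split into contiguous horizontal bands, and each thread sweeps only its own band, so the band stays resident in that core's cache for the duration of the pass.

    // Sketch of cache-friendly multicore image processing: split the image
    // into contiguous horizontal bands, one thread per band, so each core's
    // working set stays in its own cache. The "processing" is a placeholder
    // brightness adjustment; a real filter would go in process_band().
    #include <algorithm>
    #include <cstdint>
    #include <thread>
    #include <vector>

    void process_band(std::vector<uint8_t>& pixels, size_t begin, size_t end) {
        for (size_t i = begin; i < end; ++i)
            pixels[i] = static_cast<uint8_t>(std::min(255, pixels[i] + 10));
    }

    int main() {
        const size_t width = 6000, height = 4000;      // ~24 MP, one 8-bit channel
        std::vector<uint8_t> pixels(width * height, 128);

        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        size_t rows_per_band = (height + n - 1) / n;

        std::vector<std::thread> workers;
        for (unsigned t = 0; t < n; ++t) {
            size_t first = t * rows_per_band * width;
            size_t last  = std::min(pixels.size(), (t + 1) * rows_per_band * width);
            if (first >= last) break;
            workers.emplace_back(process_band, std::ref(pixels), first, last);
        }
        for (auto& w : workers) w.join();
    }

The important property is that no two threads touch the same band, so there is no sharing, no locking, and no cache-line contention; memory bandwidth, not synchronization, becomes the limit.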
Change is coming... (Score:2)
Re: (Score:2)
"If it's for gaming, then the CPU isn't really a bottleneck like it's made out to be and there is no gain in going i7 over an i5 unless you are going to be streaming."
Man, you're so full of shit I can smell you through the internet. First, the i7 has more PCI-E lanes, which translates over to "I can drop in more GPUs if desired."
Streaming shit is all dependent upon framebuffer access nowadays - GPU, not CPU.
It's like people don't understand how hardware acceleration works.
Re: (Score:2)
They buy TWO Intels.
Re: (Score:2)
Overclocked POWER chips in liquid nitrogen.
Re: (Score:2)
Can anyone please tell us why is there a need to slow down the CPU speed in order to put in more cores?
Thermals. More CPUs generate more heat; more heat with the same thermal envelope means you can't run each CPU as fast. Of course, in a different environment (say, with a liquid nitrogen cooling rig vs. an air-cooled rig), you could probably clock those CPUs higher.
Just because you can put in more CPUs doesn't mean you should. It used to be that the limiting engineering factors were area vs. chip yield. Nowadays thermals are arguably the most important consideration, because often you are limited both thermally (an
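For a rough sense of the trade-off, here is a back-of-the-envelope sketch: dynamic power scales roughly with cores × frequency × voltage², and voltage has to rise with frequency, so a fixed power budget forces the clock down as cores are added. All of the constants below are toy values for illustration, not Intel's numbers.

    // Back-of-the-envelope sketch: dynamic power ~ cores * f * V^2, with V
    // rising roughly linearly with f. Holding the power budget fixed, more
    // cores means a lower sustainable clock. All constants are toy values.
    #include <cstdio>

    double rel_power(int cores, double ghz) {
        double v = 0.7 + 0.1 * ghz;           // toy voltage/frequency curve
        return cores * ghz * v * v;
    }

    int main() {
        double budget = rel_power(4, 4.0);    // take a 4-core at 4.0 GHz as the budget
        int core_counts[] = {4, 6, 8};
        for (int cores : core_counts) {
            double ghz = 4.0;
            while (ghz > 1.0 && rel_power(cores, ghz) > budget) ghz -= 0.05;
            std::printf("%d cores fit the same budget at about %.2f GHz\n", cores, ghz);
        }
    }

With these toy numbers, 8 cores land somewhere around 2.6 GHz for the same budget as 4 cores at 4.0 GHz, which is the same direction as the clock-speed concession mentioned in the summary.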
Re: (Score:2)
> Failure to full exploit SMP in 2014 is a fine reason to avoid a given game as far as I'm concerned.
That's a crappy reason. You'll miss out on Path of Exile, Minecraft, and Terraria, all of which are excellent games.
Re: (Score:2)
https://mojang.com/2014/08/minecraft-1-8-pre-release-the-bountiful-update/ [mojang.com]
These changes consist of both new features, and large game structure changes such as replacing the hard-coded "block renderer" with a system that is able to read block shapes from data files, or performance enhancements such as multi-threading the client-side chunk rendering. We hope you will enjoy it!
Re: (Score:2)
Considering Minecraft has been out for 3 years and they have sold 54+ million copies it is about time they addressed some of the performance issues!