Haswell Integrated Graphics Promise 2-3X Performance Boost
crookedvulture writes "Intel has revealed fresh details about the integrated graphics in upcoming Haswell processors. The fastest variants of the built-in GPU will be known as Iris and Iris Pro graphics, with the latter boasting embedded DRAM. Unlike Ivy Bridge, which reserves its fastest GPU implementations for mobile parts, the Haswell family will include R-series desktop chips with the full-fat GPU. These processors are likely bound for all-in-one systems, and they'll purportedly offer close to three times the graphics performance of their predecessors. Intel says notebook users can look forward to a smaller 2X boost, while 15-17W ultrabook CPUs will see an increase closer to 1.5X. Haswell's integrated graphics has other perks besides raw performance, including faster Quick Sync video transcoding, MJPEG acceleration, and support for 4K resolutions. The new IGP will also support DirectX 11.1, OpenGL 4.0, and OpenCL 1.2." Note: Same story, different words, at Extreme Tech and Hot Hardware.
IRIS you say? (Score:1)
http://en.wikipedia.org/wiki/SGI_IRIS
Re: (Score:2)
My first thought as well.
Best thing about this (Score:5, Informative)
Re: (Score:1)
Tell that to Poulsbo chipset buyers...
The promised Gallium3D driver [phoronix.com] was never delivered, even though it was apparently nearly ready for release, and Intel just kicked users back and forth between its desktop and automotive teams, releasing some crappy binary drivers to keep them quiet.
Re: (Score:3)
I suspect a lot of the problem there has to do with Imagination Technologies (creator of the PowerVR GPU core in the Poulsbo parts) and how much Imagination Technologies were willing to let Intel release (either as binary drivers or as source code)
Re: (Score:1)
I suspect a lot of the problem there has to do with Imagination Technologies (creator of the PowerVR GPU core in the Poulsbo parts) and how much Imagination Technologies were willing to let Intel release (either as binary drivers or as source code)
And you're probably right with regard to not releasing an open source driver for the Poulsbo chipset. But Intel treated their customers with the utmost disrespect, only pretending to work on a usable driver until it was discontinued and abandoned. No decent closed source driver was released after the first iteration, before the promise of the Gallium3D driver. Users were lied to, and pushed from one team to the other looking for drivers. That wasn't Imagination Tech - that was Intel stalling a
Re: (Score:1)
Intel still supports and releases binary drivers for the Poulsbo platform.
http://www.intel.com/content/www/us/en/intelligent-systems/intel-embedded-media-and-graphics-driver/emgd-for-intel-atom-systems.html [intel.com]
It has never been a one-shot or abandoned platform. However, you need to understand that this device was specifically designed for embedded applications and not general purpose computing products.
It's unfortunate in my mind that several manufacturers released it as a consumer product.
Re:Best thing about this (Score:4, Informative)
Not an Intel chip; the GPU core came from PowerVR.
Anyone who bought one of those was simply a fool. Everyone gets to be a fool now and then, so don't feel too bad if it bit you.
Re: (Score:1)
Anyone who bought one of those was simply a fool. Everyone gets to be a fool now and then, so don't feel too bad if it bit you.
Intel fooled me once. They won't fool me twice, as I won't buy their chipsets again. My netbooks are AMD APUs; my tablets and smartphones are ARM. Good riddance!
Re: (Score:2)
When AMD makes a decent open source driver I will use their graphics cards. So far I use Intel GPUs and Nvidia ones. The latter will be gone from my home once Intel gets good enough. At that point I will likely stop buying AMD CPUs.
AMD's Open Source Linux Driver Trounces NVIDIA's (Score:2)
When AMD makes a decent open source driver I will use their graphics cards. So far I use intel GPUs and Nvidia ones.
You're in luck. AMD's open source driver trounces NVIDIA's [slashdot.org].
Re: (Score:2)
So you are using open Intel drivers, which are more or less on par with AMD's (AMD usually lags Intel by a few months; since they share most of the Mesa code, it's easier to catch up than to build from the ground up), with AMD cards being more powerful and therefore faster.
I play Linux games (via Humble Bundle, Steam and Desura) with the AMD open drivers, so I can tell you that they work. If you check Phoronix benchmarks, AMD's closed and open drivers have an average performance difference of 20%, which isn't perfect, but not
Re: (Score:2)
Bullshit if it's the PowerVR crap - no open source drivers at all. Check it out before you make a blanket statement, but I will agree that for their HD series (which Iris is simply an improvement of) they do offer a compelling reason to consider if going the open source route.
A major feature is still missing in action (Score:1)
How long until we finally have Intel and other CPU vendors create a unified memory model now that we have a GPU on die? I mean if anything I'd think the point would be to have your on-die GPU integrate with a discrete card so that both low and high-end setups gain something from this. PS4 will have a unified memory model; how long until the rest of us do on our desktops?
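For what it's worth, the nearest thing to that unified model reachable today on an on-die GPU is the zero-copy path in OpenCL. The sketch below is only an illustration under that assumption: CL_MEM_USE_HOST_PTR asks the driver to use the host allocation in place, and integrated GPUs that share system RAM usually honor it, but the spec doesn't guarantee the copy is avoided, and none of this is any vendor's documented roadmap.

/* Hedged sketch: approximate "unified memory" on an integrated GPU by having
   OpenCL work directly on a host allocation instead of copying it over a bus.
   Whether the copy is really avoided depends on the driver. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no GPU device found\n");
        return 1;
    }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* Host buffer the GPU is asked to use in place (no explicit transfer). */
    size_t n = 1 << 20;
    float *host = malloc(n * sizeof *host);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                n * sizeof *host, host, &err);

    /* Mapping hands the CPU a pointer to the same storage the GPU sees. */
    float *mapped = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_WRITE, 0,
                                       n * sizeof *mapped, 0, NULL, NULL, &err);
    mapped[0] = 42.0f;                 /* CPU writes; a kernel could read next */
    clEnqueueUnmapMemObject(q, buf, mapped, 0, NULL, NULL);
    clFinish(q);

    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    free(host);
    return 0;
}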
Re:Worst thing about this (Score:5, Insightful)
Think of these chips with integrated graphics like hybrid cars. You're not gonna go down to the drag strip with them, or haul a camper, or pick up the 10-kid carpool group. But for the vast majority of trips you'll get to the same destination in basically the same amount of time, with less noise and higher efficiency.
Re: (Score:1)
Except your analogy is total nonsense. Modern operating systems do in fact benefit from having a decent discrete GPU. Performance improves in ways you might not even expect.
Then there are the things that you would expect to be better with a good GPU. All of these things improve with a GPU that doesn't suck. You don't have to be some obsessive gamer to be in a position to see the benefit either.
Historically, Intel GPUs have sucked so bad that no one wanted to support them at all.
Or to put it in automotive te
Re:Worst thing about this (Score:5, Insightful)
So unexpected that you can't even name one!
Intel GPUs are fine for 99% of use. Heck, most games run fine on them. Sure, the latest Call of Honor: Medal of Duty will not run on Ultra, but most games will be fine on medium and low settings.
Re: (Score:2)
No benefits to a discrete GPU?
Ripping/Transcoding (thank you CUDA and OpenCL)
Running games at resolutions and detail levels that look better than Doom
Image processing (thank you CUDA and OpenCL)
Video decoding/decompression (thank you CUDA and OpenCL)
Intel is doing remarkably well with the HD4000, and OpenCL performs pretty well on it, but that won't come close to matching the 48 to 3072 GPU cores present in modern discrete video cards.
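One hedged way to put numbers on that gap, assuming an OpenCL runtime is installed: enumerate every device and print its compute-unit count. This is the stock OpenCL 1.x query API, nothing vendor-specific; note that a "compute unit" groups many of the "cores" that marketing slides count, so the output won't line up with 48-3072 directly and should be read as a rough comparison, not a core count.

/* List every OpenCL device with its compute-unit count. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; ++p) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < ndev; ++d) {
            char name[256] = {0};
            cl_uint cus = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof cus, &cus, NULL);
            printf("%s: %u compute units\n", name, cus);
        }
    }
    return 0;
}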
Re: (Score:3)
IIRC, transcoding/decoding with Intel's Quick Sync is actually VERY competitive with a discrete GPU. And all of those CUDA/OpenCL tasks are hardly representative of the average user.
As parent said, for the 99% use case, Intel integrated are sufficient.
Re: (Score:2)
They probably use SIMD instructions (a.k.a. SSE, the successor to MMX) for parallelization. Honestly, I'd rather write to a GPU using a common language, but parallel technology does exist on the CPU. Inflexible, single-purpose parallel instructions, but ones that would work for that task.
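For the curious, this is roughly what that CPU-side SIMD looks like in practice: a hedged C sketch that adds two float arrays four lanes at a time with SSE intrinsics. The array contents and sizes are made up for the example.

/* Add two float arrays four elements per instruction using SSE. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void) {
    /* 16-byte alignment keeps the aligned load/store forms legal. */
    _Alignas(16) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    _Alignas(16) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    _Alignas(16) float c[8];

    for (int i = 0; i < 8; i += 4) {
        __m128 va = _mm_load_ps(&a[i]);            /* load 4 floats */
        __m128 vb = _mm_load_ps(&b[i]);
        _mm_store_ps(&c[i], _mm_add_ps(va, vb));   /* 4 additions at once */
    }

    for (int i = 0; i < 8; ++i)
        printf("%.1f ", c[i]);
    printf("\n");
    return 0;
}

Encoders tend to lean on hand-tuned routines like this (or wider AVX versions) rather than the GPU, which is part of why a faster CPU still matters for transcoding.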
The average user (Score:3, Interesting)
And all of those CUDA / OpenCL tasks are hardly representative of the average user.
There's a meme lately in Slashdot comment sections that everything must be made for "the average user" without any room to grow. I see it popping up whenever anybody mentions limits of a product sold to the public, especially artificial market-segmenting limits. Where did it come from?
Re: (Score:2, Insightful)
And there is a trend of you posting stupid shit like this. Why would Intel want to release an expensive, fast, power hungry integrated GPU as part of their mainstream offerings? There are already vendors that sell expensive, fast, power hungry GPUs. Buy one of those.
Re:The average user (Score:5, Insightful)
The discussion was explicitly on whether what Intel is doing here is useful. For the majority of the market, the answer is yes.
There are areas where OpenCL and CUDA are useful. That is in no way relevant to the discussion. No one is saying that powerful hardware isn't useful; we're saying that most users have no need for it and that integrated graphics are actually useful to actual people.
Re: (Score:3)
Re: (Score:2)
A small, efficient, powerful-enough integrated solution is the best market for the vast majority of users since they still can opt to add a dedicated GPU if needed.
In the case of a desktop, I agree with you. But in the case of a laptop, people have to plan several years in advance because you can't replace the GPU without replacing everything else.
Re: (Score:2)
Do you really see a market for a portable computer with a discrete graphics card several years from now? I expect the future will be tablets-with-keyboards distinct from non-portable gaming systems with better graphics.
Re: The average user (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
But when you take other criteria into account, like the volume that discrete hardware takes up in a case, the energy consumption, and consequently the need for a cooling system, then integrated solutions become a lot more attractive.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Run compute intensive tasks remotely (Score:3)
thank you CUDA and OpenCL
OpenCL-heavy tasks can be done on a compute server at home or in a data center, and you can SSH (or VNC or RDP or whatever) to use an application on a compute server from your laptop. The only real use case I see for carrying an OpenCL powerhouse with you, apart from running shaders in a high-detail 3D game, is for editing huge images or high-definition video in a vehicle or some other place with no Wi-Fi. One workaround is to downscale the video to low definition (e.g. 320x180), edit the low-definition video while away from the net, and then export the edit decision list (EDL) back to the compute server to render the result in high definition. I used to do that with AviSynth.
Re: (Score:1)
thank you CUDA and OpenCL
OpenCL-heavy tasks can be done on a compute server at home or in a data center, and you can SSH (or VNC or RDP or whatever) to use an application on a compute server from your laptop. The only real use case I see for carrying an OpenCL powerhouse with you, apart from running shaders in a high-detail 3D game, is for editing huge images or high-definition video in a vehicle or some other place with no Wi-Fi. One workaround is to downscale the video to low definition (e.g. 320x180), edit the low-definition video while away from the net, and then export the edit decision list (EDL) back to the compute server to render the result in high definition. I used to do that with AviSynth.
Running games at resolutions and detail levels that look better than Doom
Games are the other reason for carrying a beefy GPU with you. But Skyrim looks better than Doom, Doom II, and Doom 3, and Skyrim runs playably on the HD 4000 at 720p medium [anandtech.com].
There's a world of difference between having compute power on your machine and on a machine you have remote access to.
And there's a world of difference between running a game at 20-30 fps on medium at a sub-native resolution and running it as intended.
Re: (Score:2)
There's a world of difference between having compute power on your machine and on a machine you have remote access to.
What's the practical effect of this "world of difference"? I need some ammo against the oft-repeated argument that "Apple's App Store restrictions are irrelevant because the iPad can run SSH and VNC".
And there's a world of difference between running a game at 20-30 fps
The article I linked says 46 fps.
on medium
Does the PS3 version even go higher than medium?
at a sub-native resolution
The article I linked says 1366x768. (I rounded it to 720p for the reader's convenience.) How is this "sub-native" on a laptop with a 1366x768 panel?
Re: (Score:2, Interesting)
I do HPC engineering for a living, and I really don't see the point in private discrete GPUs anymore. We've added 8000 Teslas to our cluster, and I've come to prefer using them over CPUs simply for performance reasons. But likewise I've come to prefer IGPs to discrete cards for private use in the last 2-3 years. There is no game that isn't playable on an Ivy Bridge IGP (the last I ran was Skyrim at 1920x1080 with settings midway between average and max, at 40-50 FPS), the power usage is lower (in general but a
Re: (Score:2)
Perhaps this will convince NVIDIA not to underclock their GPUs. Now that the baseline is much higher, they will have to deliver awesome performance to be relevant in the notebook scene.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2, Informative)
Performance improves in ways you might not even expect.
I guess it improves it in such an unexpected way I didn't even notice. In the two desktop computers I have in my household, I originally built them using integrated video and then upgraded to discrete cards a couple of months later, when I had some more time to pick out something and some spare cash. Neither my wife nor I noticed any difference in normal desktop performance for office-related software or web browsing. The only difference seen was in video games. My work-issued laptop has integrated GPU,
Re:Worst thing about this (Score:4, Informative)
Actually, video editing doesn't really benefit from a discrete GPU, since the damn encoding support is still crap. Most of the various software I've looked at still gets more bang from a better CPU than from GPU encoding, and if you're in the industry like ILM/Pixar, then you ain't using GPU encoding anyhow - it's mainly dedicated ASICs and such. Someone doing it as a hobby is buying a video card specifically supported by their software, so it makes no god damn difference to the 99.99 percent of folks out there that onboard graphics suck.
In my case, as a small business owner, I've been planning a new build for the 4th quarter (part of my 4-year replacement cycle) based on a Xeon E3 1275 with onboard graphics, because the system's purpose doesn't need much in the way of a GPU. It's a development system (builds and database work), so why waste money? Hell, all of my employees' systems use onboard graphics just to save a few bucks that's better spent on more RAM or a slightly better CPU. As with anyone, trade-offs are required when building/speccing our systems, and as a business we tend to go with the cheapest configurations we can get. Keep in mind that the cheapest configuration does not mean the cheapest parts. We learned a long time ago that spending a bit more for quality hardware resulted in less downtime.
Re: (Score:3)
When you're doing any sort of compositing or image processing, you need a
Room to grow is another factor (Score:2)
In the two desktop computers I have in my household, I originally built them using integrated video cards and then upgraded to discrete cards [...] The only difference seen was in video games. My work issued laptop has integrated GPU, and I'm not sure what performance could be improved on it
That's because your work laptop's work load probably doesn't have any 3D. If your job description included CAD or other 3D modeling, you might notice more of a difference with a beefier GPU.
not every one does high end gaming, video editing
Room to grow is another factor. Consider someone who buys a computer and then decides he wants to do non-Flash gaming or edit 1080p video from his smartphone's camera. Suddenly his laptop has become obsolete. Is he necessarily going to have the money to buy a whole new laptop?
Re: (Score:3)
Except your analogy is total nonsense. Modern operating systems do in fact benefit from having a decent discrete GPU. Performance improves in ways you might not even expect.
Nope. They benefit from having 3D acceleration, yes, but the integrated graphics on modern Intel chips is a discrete core. It just happens to be on the same die as the CPU. My laptop's CPU is clocked at 1.2GHz dual core, with two extra cores for the video clocked at 300-500MHz. Those cores are dedicated to the video only, and it's *plenty* fast enough for normal use on the operating system, with all of the blingy effects. Switching to a discrete graphics card won't make any difference at all, because the vi
Re: (Score:2)
Re: (Score:2)
Discrete graphics still significantly outrun Intel's offerings. We get a 150% performance increase when a 900% performance increase is warranted to compete with current cards (GeForce 680MX). Guess I'll never get integrated graphics if I can avoid it.
Intel isn't competing with discrete graphics solutions, though.
And yes, the speed increase isn't spectacular when compared to the other options in the marketplace, but they're not exactly "alternatives", and they're certainly not "competing".
Re: (Score:2)
You mean a $300 part that is a CPU and GPU combined has a slower GPU component than a $500 dedicated GPU? Shocking. Utterly shocking.
Re: (Score:2)
Only they are failing to compete with $20 cards as well, and their best offering is smoked by SoCs (sorry, lingo - System on a Chip) like the AMD A10 in the graphics department.
Re: (Score:2)
The Iris Pro has similar performance to a GeForce GTX 650. Please let me know where you can get a GTX 650 for under $20.
You'd have to go back more than one generation to find an Intel iGPU that is slower than a $20 discrete card.
Re: (Score:2)
Only they are failing to compete with $20 cards as well, and their best offering is smoked by SoCs (sorry, lingo - System on a Chip) like the AMD A10 in the graphics department.
The AMD A10 is not a "SoC" any more than Intel's offerings are. Both AMD and Intel are currently offering CPUs with integrated GPUs – it's just that the marketing is slightly different, and that each company emphasizes its own strength (AMD has better GPUs, Intel has better CPUs).
For these to reasonably be considered a "SoC",
Re: (Score:2)
Actually, the whole north bridge is on the chip, and has been since LGA1156 for Intel. For AMD, they only moved it on-die with Socket FM2 (the most recent APUs).
The only thing that's external is the "platform host controller", which is a renamed south bridge. Haswell is integrating chunks of the south bridge onto the CPU too.
Re: (Score:2)
The fastest $25 graphics card on Newegg is a Radeon HD 5450 [notebookcheck.net], which even the current HD4000 [notebookcheck.net] beats.
If we allow up to $50, we get a GT 620 [notebookcheck.net], which still gets narrowly beaten.
You need to get to $75 cards to find ones that beat the current HD4000 and are about level with the GT3e. The current AMD A10 graphics are about 40% faster than the HD4000, and hence will be about 42% slower than the GT3e (assuming the GT3e lands around 2.4x the HD4000: 1.4/2.4 ≈ 0.58).
So no, we're not seeing any kind of curb stomping here.
Re: (Score:2)
The 680 isn't mainstream, by any means. Haswell brings the higher-end iGPU up to the performance levels of a GeForce GTX 650, which definitely is more mainstream.
supports Display Port 1.2 (Score:2, Interesting)
the Hot Hardware link confirms DisplayPort 1.2, which is the only thing I /really/ care about. The others are nice, but 4K out of the laptop means my next mid-range laptop can be my primary desk machine as well. This should push along the display manufacturers after their decade of stalling (perhaps soon we'll see screens in the 20-24" range with more resolution than our 5" displays have).
No, probably not (Score:2)
Making a 4k display isn't as simple as manufacturers just wanting it bad enough. I know people like to look at little phone screens and say "If these can be high rez, why can't big displays be higher rez!" but all that shows is a lack of understanding of the situation.
Transistors cost money and pixels require them. How many pixels a display has is not a small part of its cost. So you can't just say "Let's have 4k or 8k desktop displays, shouldn't be hard!" because it in fact is.
That isn't to say we won't, it i
Size of market (Score:2)
The technology is out there; witness the few dirt-cheap Korean dumb high-res screens.
They only cost a little more than similar lower-res, HD screens from the same featureless no-name brands.
What is lacking is a huge market, so that economies of scale kick in and producing über-high-resolution screens becomes worthwhile.
Currently the biggest chunk of all flat panels produced ends up in TV screens. It makes more sense economically for the manufacturer to just put the same panels into computer screens than to market a special different type o
We're closer than it seems (Score:3)
http://www.engadget.com/2013/04/12/seiki-50-inch-4k-1300/ [engadget.com]
$1300 for a 4k display. Granted, it's locked to 30Hz, but for most of us 60Hz will be as fast as we need to go (though we'll need more for that god-awful 3D crap they keep trying to push). 4k at 50" is very close to the 2560x1600 30" monitor I have in pixel size, which is fine enough for me at my working distance.
We stalled at 1920x1080 because everyone moved to TV production. Now that 4k/8k has broken free, we can get over that hump. Not saying there aren't
Re: (Score:2)
Re: (Score:2)
1080i60 is really only 30 frames per second. The fact that the TV industry actually managed to dupe the FCC into allowing interlaced standards into HD is, perhaps, one of the biggest snow jobs ever.
The part about 240Hz is actually my point - people will gravitate towards the shiny, even if it gets them nothing - as long as it's out there. I just want higher res to be produced for less than a king's ransom so I can use it for computing. I deal with large architectural CAD files (and photography for fun, and
Re: (Score:3)
Seiki has a 50" TV with a 3840x2160 resolution, available right now for $1499 [tigerdirect.com]. So I don't buy the argument that it's somehow technologically prohibitive. Why can this crappy company no one has ever heard of bring out a 4K TV under $1500, but no one else can make a 4K monitor in an even smaller size (32" or so) without charging over $5500? (and that's for Sharp's offering, which is the next least expensive – most 4K monitors cost as much as a new car). As far as I can tell, it's not technological barri
Re: No, probably not (Score:2)
Re: (Score:3)
Know what else supports 4K? HDMI.
And I'm fairly sure that if you're willing to pay $10K for a 4K screen (the current cheapest on the market - some Chinese knockoff), display manufacturers are willing to give you the 4K display you want.
Re: (Score:3)
And I'm fairly sure that if you're willing to pay $10K for a 4K screen (the current cheapest on the market - some Chinese knockoff), display manufacturers are willing to give you the 4K display you want.
Try less than $5K for the Sharp PN-K321 [compsource.com] professional monitor.
Or you could pay $1K and get high res 27-30" screens as well just as you always could. You won't be able to find a 4K screen for $100 until 4KTVs are down in the under-$1000 range like HDTVs are now.
Is $1300 [shopnbc.com] close enough?
The only reason display resolutions "stalled" was because everyone was attracted to cheap cheap cheap - cheap laptops, cheap LCD monitors, etc.
LCDs have been growing in size and dropping steadily in price; I picked up a 60" LCD at about half of what a 42" LCD cost me five years earlier. That's double the screen real estate ((60/42)^2 ≈ 2) for considerably less, while resolution has stayed ridiculously expensive. 4K - as opposed to 2560x1600/1440, which never saw any adoption outside a few niche computer monitors - has the potential to be the new HDTV. Right now you're paying early adopter
Re: (Score:2)
Know what else supports 4K? HDMI.
The current HDMI revision only supports 4K at frame rates of 30 fps or below, so it's not really suitable for anything except watching film-sourced content. Supposedly HDMI 1.5 might support 4K@60Hz, but this is not confirmed. You need DisplayPort to do it now.
Leaked months ago (Score:1)
Last November it was revealed that Intel's Haswell processors would have GT1, GT2 and GT3 graphics. The only difference is that Intel has lifted the muzzle of a press embargo on Haswell to push more Ivy Bridge (and yes, even Sandy Bridge) units out the door to clear out backlogs.
It's been known since last year that the release date for Haswell is June 2nd, but nobody is allowed to report on that for fear of losing Intel advertising dollars.
Now Intel users can play 10-year-old games :D (Score:2)
Re: (Score:1)
Re: (Score:2)
Honestly, unless you are playing Crysis they are getting pretty good. I played Portal 2 on an IGP. That is two years old, but the laptop I played on was already a year old at that point.
Re: (Score:2)
Portal 2 runs on an engine that's effectively 12 years old by now, with just some updates. It's far more CPU-dependent than more modern engines, for example.
Same thing with Left 4 Dead 2, the benchmark of which Valve rigged by using a 1½ year newer update for the Linux version than what's available for the Windows version, an update that actually shifts more stuff to the GPU for example.
Re: (Score:2)
Honestly, unless you are playing Crysis they are getting pretty good. I played Portal 2 on an IGP. That is two years old, but the laptop I played on was already a year old at that point.
New Intel IGPs do handle Crysis [youtube.com] with fluid framerates, even with the quality settings turned up high.
Re: (Score:2)
Re: (Score:3)
yep
i love buying games the day of release with all the DRM, bugs, always connected to internet issues where they can't support all the players, etc
i'd rather buy a game a year or two after release after its on sale at least 50% off
You haven't tried, have you? (Score:3)
New Intel GPUs are surprisingly competent. No, they don't stand up to higher end discrete solutions but you can game on them no problem. You have to turn down the details and rez a bit in some of the more intense ones, but you can play pretty much any game on it. (http://www.notebookcheck.net/Intel-HD-Graphics-4000-Benchmarked.73567.0.html). For desktops I always recommend dropping $100ish to get a reasonable dedicated card but for laptops, gaming on an integrated chip is realistic if your expectations are
Re: (Score:2)
Re: (Score:2)
In other words, precisely what I said. Yes, you have to back off on rez and settings. Guess what? That's fine, and expected for something as low power profile as an integrated GPU. Fact remains you can game on it just fine, even new games. Not everyone needs everything cranked, not everyone wants to spend that kind of money. Intel GPUs are extremely competent these days. They are low end and will always be because they get a fraction of the 30-40ish watts of power a laptop CPU/GPU combo can have rather than
Scaling back has a limit (Score:2)
Yes, you have to back off on rez and settings. Guess what? That's fine, and expected for something as low power profile as an integrated GPU. Fact remains you can game on it just fine, even new games
One could game just fine on an original PlayStation or a Nintendo DS, and new DS games were still coming out until perhaps a few months ago. It's just that the settings have to be scaled so far back that things look like paper models [wikipedia.org] of what they're supposed to be. The DS in particular has a limit of 6000 vertices (about 1500-2000 polygons) per scene unless a game enables the multipass mode that dramatically lowers frame rate and texture resolution. Fortunately, the HD 4000 in Ivy Bridge runs games with det
Re: (Score:2, Informative)
I put together a tiny mini-ITX system with an Ivy Bridge i3-3225. The case is super tiny and does not have space for a dual-slot video card. Even low-to-mid grade video cards are dual slot nowadays, and I didn't have any on hand that would fit.
I shrugged, and decided to give it a go with the integrated HD4000. The motherboard had really good provision for using the integrated graphics anyway: dual HDMI+DVI and even WiDi support via a built-in Intel wireless N card. (WiDi only works via the integrated GPU, so
Re: (Score:2)
No, they don't stand up to higher end discrete solutions
They don't stand up to AMD's integrated graphics either.
Re: (Score:2)
Actually, given the benchmarks, the GT3e should be about 20% faster than the A10 5800k's graphics chip.
Re: (Score:1)
The high-end part runs Dirt 3. Intel showed a demo running as fast as a GT 650M: http://www.anandtech.com/show/6600/intel-haswell-gt3e-gpu-performance-compared-to-nvidias-geforce-gt-650m [anandtech.com]
Re: (Score:1)
Be sure to tell yourself that nonsense as you drop another $500 on a landfill-bound graphics card to play Xbox360 ports.
The other option being dropping $500 to $2000 on a landfill-bound laptop to play Xbox360 ports at worse settings and frame rates? Or dropping $1000 for the high-end, landfill-bound desktop CPU from Intel (since that's the one with the "high-end" integrated GPU) to do the same?
GPU with integrated 128-core CPU (Score:2)
Wouldn't these kinds of things be more accurately described as GPUs with integrated CPUs?
It's been 10 years since Intel started panicking when they realized a Pentium core could be tucked into a tiny corner of a GPU, as far as transistor count went.
Amazing! (Score:2, Insightful)
Wow so rather than the 11 FPS you were getting, you might be able to get 22 FPS!
You can almost play a video game at those speeds! Well done!
Re: (Score:2)
Actually, if you look at some benchmarks [anandtech.com], you'll see that the current (Ivy Bridge) chips play current games at 30-40fps (even Crysis on pretty high detail settings). So you're looking at 60-100fps with Haswell's GT3e.
Re: (Score:2)
Not sure I would trust those benchmarks.
First: Metro is all on lowest settings and scores between 11-25FPS
Second: Crysis - I am not sure how "performance" is higher than "mainstream"... unless "performance" is just a nicer way to say lowest quality, highest performance.
Third: The resolutions they are talking about are 1366x768 which might have been relevant 10 years ago, but are hardly what I would call modern.
So if you are saying that these integrated solutions will barely play modern games if at all on their lo
Re: (Score:2)
First: Metro is all on lowest settings and scores between 11-25FPS
Okay, so one single game shows poorish performance, and will show entirely adequate performance on Haswell GT3e.
Second: Crysis - I am not sure how "performance" is higher than "mainstream"... unless "performance" is just a nicer way to say lowest quality, highest performance.
In other words, it manages 60fps already on lowest settings, and will do 120-150fps on Haswell, and manages 30-40fps on decent settings, and will manage 60-80 on Haswell.
Third: The resolutions they are talking about are 1366x768 which might have been relevant 10 years ago, but are hardly what I would call modern.
Actually, 1366x768 is the single most common resolution out there these days on new machines. This is more true on laptops, which these IGPs are aimed at, where it ships on 95% of machines. Sure, us geeks are likely to want shin
Re: (Score:2)
I suppose. I wasn't really thinking of laptops. But you are right: on laptops, integrated video is more common, and dedicated graphics are horribly expensive for even a middling card. Which is why I would not really consider gaming on a laptop.
So I suppose, as a target, it will make some difference for those who wish to play some games on their laptop. At any rate it will raise the bar as to which games are possible vs. impossible to play on a regular laptop that doesn't cost 2 grand.
Sigh... And on the general computing side... (Score:3)
Haswell parts are expected to be 10-15% faster than Ivy Bridge, which was itself barely any faster than Sandy Bridge.
Anyone remember the days when computing performance doubled or even tripled between generations?
I have a desktop PC with a Sandy Bridge i5-2500K running at a consistent 4.5GHz (on air). At this rate, it could be another couple of generations before Intel has anything worthwhile as an upgrade... I suspect that discrete-GPU-buying home PC enthusiasts are going to continue to be completely ignored going forward while Intel focuses on chips for tablets and ultrabooks.
Re: (Score:2)
Re: (Score:2)
Intel has publicly stated many, many times that they are on a "tick-tock" development cycle, where they don't expect people to buy new every release. Haswell is a "tock", which introduces a new microarchitecture with new features and performance improvements on the existing process; next year's "tick" will bring a die shrink and even more power savings, with a slight bump in performance and mostly the same features.
This is why most Sandy Bridge main boards just needed a firmware update to support Ivy Bridge - they were mostly the same
Re: (Score:2)
Ok, but Haswell delivers virtually nothing computing performance-wise over a reasonably overclocked Sandy Bridge which is a full tick-tock cycle earlier.
Re: (Score:2)
Modern CPUs are so fast that the bottleneck is feeding them, not processing the data. The bus between memory and the CPU can move around 24GB/s, but most computers only come with 2, 4, or sometimes 8GB. And then there's the issue of reading from the drive. The only way you're going to consistently peg an i5 for more than a few seconds (let alone an i7) is crunching terabytes of scientific data. Otherwise it's a software issue, like Kerbal Space Program only using 50% of one core instead of 80%
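If you want to see that feeding problem for yourself, a crude, hedged sketch is to time a large memcpy and divide bytes moved by elapsed time. The 256 MB buffers, ten passes, and single-threaded copy below are arbitrary choices, so treat the result as a rough lower bound rather than the bus's rated bandwidth.

/* Estimate effective memory-copy bandwidth with a timed memcpy loop. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    size_t bytes = 256u * 1024 * 1024;      /* 256 MB per buffer */
    char *src = malloc(bytes), *dst = malloc(bytes);
    if (!src || !dst) return 1;
    memset(src, 1, bytes);                  /* touch pages so they're resident */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 10; ++i)
        memcpy(dst, src, bytes);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* memcpy both reads and writes every byte, so count 2x the size per pass. */
    double gb = 10.0 * 2.0 * bytes / 1e9;
    printf("~%.1f GB/s effective copy bandwidth\n", gb / secs);

    free(src);
    free(dst);
    return 0;
}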
Re: (Score:2)
Haswell parts are expected to be 10-15% faster than Ivy Bridge, which was itself barely any faster than Sandy Bridge. (...) I suspect that discrete-GPU-buying home PC enthusiasts are going to continue to be completely ignored going forward while Intel continues to focus on chips for tablets and ultrabooks.
It's taken Intel from 2004 to now to go from the 3.8 GHz Pentium 4 to the 3.5-3.9 GHz i7-4770K; where do you want them to go? While they're still making moderate IPC improvements, there's just no easy way to scale to 5 GHz and beyond, and scaling with more cores has rather tapped out; interest in their six-core LGA2011 platform is minimal. Of course price is a big part of it, but also that people don't have much to put the two extra cores to work with. Heck, if all you're doing is gaming many would suggest the
a bit late? (Score:3)
..but what does that come out to? (Score:1)
End result, I'm still going to get people complaining to me that The Sims runs slow. Only difference is it'll be stop-motion instead of slideshow.
Ars Technica in-depth article (Score:2)
Ars Technica has a more in-depth look [including architectural details] at:
http://arstechnica.com/gadgets/2013/05/a-look-at-haswell/ [arstechnica.com]