NVIDIA Announces GeForce GTX 1060, Fierce Competition For the Radeon RX 480 (hothardware.com) 144
Reader MojoKid writes: In May, NVIDIA released the GeForce GTX 1080. The company followed up on that beastly chip in June with the slightly cut-down GeForce GTX 1070, and that trickle-down effect is now reaching the mainstream market with the arrival of the GeForce GTX 1060. The GeForce GTX 1060 can be seen as a direct response to the AMD Radeon RX 480, which offers a ton of performance at the $200 price point. While still built on a 16nm FinFET process, the GP106 core in the GTX 1060 features 1280 CUDA cores, exactly half the count of the GTX 1080. The base clock for the GPU is 1506MHz, while the boost clock is 1708MHz (NVIDIA is quick to point out, however, that the GPU core can easily be overclocked to 2GHz+). The GTX 1060 features a 192-bit memory bus and comes with 6GB of GDDR5 memory running at 8Gbps. The card has a single 6-pin power connector and a 120W TDP. NVIDIA claims that the GTX 1060 is on average 15 percent faster than its closest competitor, the Radeon RX 480. The NVIDIA GeForce GTX 1060 will be available starting July 19th from a wide variety of third-party partners, including ASUS, EVGA, Gigabyte, MSI, and Zotac, with a starting price of $249. The NVIDIA-built GeForce GTX 1060 Founders Edition will be available for $299.
Current gen vs last gen (Score:2)
Re: (Score:2)
In the past, for moderately priced gaming PCs, I've always gone for the mid-tier option ($200ish) from the previous generation, but this sounds like a pretty good deal. Is this going to be the new gold standard for the mid-price range?
I had a very similar strategy in the past, although I'm curious why you go for previous-generation cards. I have generally stuck with the $200ish option of the current generation, such as the 1060. I've never seen the appeal of going with cards from 1-2 years ago, since they are generally outperformed by new cards at the same price point.
Re:Current gen vs last gen (Score:5, Insightful)
I think you misunderstood the poster: they buy at the mid-tier price point two years after release, and by then it's no longer the release price.
That is what I thought he meant, but I don't think the logic holds up. After two years, the new $200 cards tend to beat the previous-generation cards that have dropped in price to the same price point.
Take this example, where the 1060 will be priced at about $200. Let's say that the GTX 970 soon drops to the $200 price point (it's around $280 now). Based on the 1080 & 1070, the 1060 will likely have a PassMark score of around 10850 (scores [videocardbenchmark.net]). Since the 970 has a score of 8658, there doesn't seem to be any logic in going with the last generation. Based on my possibly incorrect memory, this is usually if not always the case.
Re: (Score:2)
According to the benchmark site you linked, it looks like the 580 had a little more than 6 times the processing power of the 460, but the 460 was less than half the price and I'm still able to play all the games I'd like, though perhaps not with the highest settings, s
Re: Current gen vs last gen (Score:1)
Re: (Score:2)
20 dollars can buy you a replacement fan, or even a used 460.
Re: (Score:2)
That is what I thought he meant, but I don't think the logic holds up. After two years, the new $200 cards tend to beat the previous-generation cards that have dropped in price to the same price point.
He's saying he buys the mid-tier option from the previous generation, not the high-tier option from the previous generation which is now at the current generation mid-tier price. He's saying that when the next generation of cards comes out, he'll consider the GTX 1060 (which, presumably, will be priced even lower than it was when it was new).
Re: (Score:2)
I was thinking of doing the same thing this year. Wait for the 10xxs to come out and then wait a little longer for
Re: (Score:2)
I think the 1060 is one of the more expensive x60-range cards so far. While the performance increase this generation is pretty decent, the combination of a long time between generations and increased prices means that you could've bought a 970 at launch for maybe just $50 more two years ago and enjoyed 1060-level performance this whole time. Unless you can find a 970 well below $200 in a fire sale somewhere, I'd definitely go with the 1060 (or the RX480 if you're so inclined).
Re: (Score:2)
you could've bought a 970 at launch for maybe just $50 more two years ago and enjoyed 1060-level performance this whole time
The 970 was released at $329, over $100 more than the 1060. And based on how the 1070 performs with 75% of the 1080's cores, the 1060 should perform about 25% better than the 970. You should be comparing the 1060 to the 960, which was at the same price point, but in that case the 1060 will likely be about 80% faster. But he would have had to wait about 18 more months.
From what I can tell the 10xx line is the most impressive GPU upgrade in a long time. The new cards simply crush the last generation at the sa
Re: (Score:2)
What's really impressive is that in addition to the raw performance, the Pascal series is even more power efficient than Maxwell. Polaris seems to have caught up with Maxwell in terms of power efficiency, but Pascal is quite a bit ahead.
Re: (Score:1)
The 970 was released at $329, over $100 more than the 1060.
How do you figure that?
$329 - $249 = $80
Re: (Score:2)
Maybe he likes to leave Fry's employees a big tip?
Re: (Score:2)
$329 - $249 = $80
Re: (Score:2)
Re: (Score:2)
Yeah, who knows what the prices of the cards were when you were choosing, although the GTX 560 did have the same $199 initial price as the GTX 460 (source [wikipedia.org]). The 460 came out in July 2010, though, while the 560 came out in May 2011, so you could certainly have been building that computer before the 560 came out. The 570 & 580 came out in Nov/Dec 2010, so those were probably the cards you had available to choose from at the time ($349/$499 respectively).
Although this does go to show that as soon as the newe
Re: (Score:2)
So yeah, based on what you just said, I guess making my purchasing decision before the 560 came out was a mistake. I think I hadn't realized at the time that the "xx" portion of the number had a similar meaning and release schedule between generations.
Re: (Score:1)
Re: (Score:1)
The 480 seems like a better deal than paying 25% more money for 15% more performance.
Re: (Score:2)
The 480 seems like a better deal than paying 25% more money for 15% more performance.
He did say he bought the 460, not the 480. And if he was comparing the 460 to the 570, which is most likely, it would have been 75% more money for 66% more performance. If the 560 had already been released, it would have been more like 10% more money for 18% more performance.
My bet was that he built the computer in early 2011 and was choosing the 460 over the 570. Based on price per performance they were about the same, but not everyone can pay close to $400 for a video card.
Re: (Score:2)
You're confusing Nvidia's older generations with AMD's newer generations.
The AMD RX 460 and RX 470 haven't launched yet. The RX 480 launched recently. The Nvidia GTX 400 series launched ages ago.
The RX 480 has MSRPs of 200 (4 GB) and 240 (8 GB). Currently, all 4 GB cards are actually 8 GB cards, and you can unlock the extra memory by flashing the BIOS.
The 1060 will have MSRPs of 250 and 300. Both have 6 GB of RAM. The 300 version is the "Founder's Edition". Both allegedly launch on the 19th and Nvidia claims they expect th
Re: (Score:2)
Actually I thought he was replying to another post earlier in the thread, when the original author said his current card is a GTX 460. It was still my mistake though, and I am certain you are correct that the AC was referring to the RX 480.
Although the GTX 1060 will likely beat out the RX 480 in performance by about 40% based on videocardbenchmark.net figures, so it's hard to consider that the best option right now. I mean, it certainly is the better option today because the GTX 1060 is not released yet, but
Re: (Score:2)
Is this going to be the new gold standard for the mid-price range?
Dunno. 15% more throughput (according to nVidia) for 25% more money? Looks like AMD still wins the dollars per throughput equation.
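The dollars-per-throughput point can be sketched as a quick calculation, taking NVIDIA's "+15% on average" claim at face value and using the $200 / $249 MSRPs from the summary (these are claimed, not measured, numbers):

```python
# Throughput per dollar for the RX 480 (4 GB) vs the GTX 1060, with the
# RX 480's throughput normalized to 1.0 and NVIDIA's claimed +15% applied.
rx480_price, gtx1060_price = 200, 249
rx480_perf = 1.0
gtx1060_perf = rx480_perf * 1.15  # NVIDIA's claimed average advantage

rx480_value = rx480_perf / rx480_price
gtx1060_value = gtx1060_perf / gtx1060_price

print(f"RX 480:   {rx480_value * 1000:.2f} throughput per $1000")
print(f"GTX 1060: {gtx1060_value * 1000:.2f} throughput per $1000")
```

Even accepting NVIDIA's figure, the 1060 ends up delivering roughly 8% less throughput per dollar, which matches the parent's conclusion.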
Re: (Score:2)
Re: (Score:2)
The gain in power efficiency in this generation is a strong reason to go with the new card rather than one from the previous generation. The GTX 1060 will about match the current discounted price of the GTX 970, but it will consume significantly less power (and probably make less noise) doing it.
It's a bit more expensive than the current street price of the GTX 960 (around $200), so some market for those may remain until the GTX 1050 is ready. I think we're also likely to see some street price cuts on the 9
It will only be competition if you can find it in (Score:5, Informative)
The GTX 1080 and 1070 have been consistently out of stock.
Ditto the 480 (Score:1)
Most of the stores around here are similarly out of stock on the RX 480.
Re: (Score:2)
A quick check of Newegg shows 4 different models of the 1070 in stock, and 2 different models of the 1080.
In all fairness those are not the versions of the 1070 you want to buy. None of the versions you want will be back in stock until July 16th, if Amazon's dates are correct. Still not that bad of a wait.
Re: (Score:2)
+1
Walked into the Denver Microcenter to pick up a Raspberry Pi for a project and figured I'd see if 1080s were in stock; they had FEs as well as the ASUS Strix that I had been hoping to get. They had about 5-10 of each at the time, and this was a week ago.
Niiiiice (Score:2, Funny)
TuxRacer is going to SCREEEEEEAM on this card sliding down those mountains!
Enough horsepower to run an Oculus Rift well? (Score:1)
Re:Enough horsepower to run an Oculus Rift well? (Score:4, Informative)
"Enough horsepower to run an Oculus Rift well?"
Quoting from The Rift’s Recommended Spec [oculus.com]:
"For the full Rift experience, we recommend the following system: NVIDIA GTX 970 / AMD 290 equivalent or greater"
Ars Technica [arstechnica.com] writes "Faster than a GTX 980".
PCworld [pcworld.com] even uses the title "GTX 1060 is a $250 GTX 980 killer".
So, yes, it's easily enough to use the Rift.
And the HTC Vive for that matter.
PCIe? (Score:1)
Re: (Score:2)
This is nVidia, not AMD.
Re: (Score:2)
Low end motherboards are often more reliable than high end ones. Much higher volumes and no funny hardware.
Re: (Score:2)
So you can read the future? Where are your benchmarks? We want to see them too!
At least wait for the benchmarks to show up before trying to look like an nvidia zealot.
Re: (Score:2)
Re: (Score:2)
The problem with leaked scores is trust... it is not the first time someone has faked leaks (even as a joke), and those quickly spread as real benchmarks.
Re: Nope (Score:2)
Re: (Score:1)
Found the Nvidia fanboi.
If you're going to be a coward, at least don't be an idiot.
Re: Nope (Score:2)
Which is recommended for Linux gaming? (Score:1)
Which of these are recommended for Linux gaming? I like to buy AMD when I can, but in the past, the Radeon drivers were hell to deal with, compared to NVIDIA.
Re: (Score:3)
I just switched to Linux about six months ago with my R9 280 and it was relatively painless.
It wasn't really any different from installing drivers on Windows, aside from the couple of prerequisite files I needed to check for in the terminal.
Re: (Score:1)
To answer your question, though: I've heard through the grapevine that Nvidia drivers are easier to deal with on Linux, though I have not experienced it firsthand. I do know for sure that they are updated more frequently than AMD's, which currently list 12/18/2015 as the last update, though they work without a hitch.
Re: (Score:3)
To answer your question, though: I've heard through the grapevine that Nvidia drivers are easier to deal with on Linux, though I have not experienced it firsthand.
The AMD proprietary drivers (fglrx/catalyst) are roughly equivalent to nVidia in terms of stability and upgrade hassle, while the AMD open source driver (Radeon/AMDGPU) is the least hassle experience. Maybe Valve knows this for sure, but my impression is that most Linux AMD users stick with the default open source driver these days because the performance gap has closed up to the point that convenience outweighs it. Personally, I have had zero issues with the AMD open source drivers for several years on a v
Re:Which is recommended for Linux gaming? (Score:4, Interesting)
Which of these are recommended for Linux gaming? I like to buy AMD when I can, but in the past, the Radeon drivers were hell to deal with, compared to NVIDIA.
That's basically still how it is. The Linux driver performance is substantially worse than the Windows driver. You should stick with nVidia for gaming on Linux. I am only using ye olde Asus GTS 450 OC on my Linux box, but it works a treat. I am a cheap bastard so my Windows box only has a 1GB Zotac 750Ti, which is currently out for RMA.
Re: (Score:2)
That's basically still how it is. The Linux driver performance is substantially worse than the Windows driver.
True mainly when the Linux port is actually a Direct3D wrapper. For a decent native port there are already cases where Linux soundly beats Windows, for example, here [steamcommunity.com] where Dota 2 on Linux with OpenGL 3.3 beats Windows DX11 by a wide margin. Not just that, but if you follow the Dota 2 scene, you know that Windows network lag and game crashes are painful and regular. About time to enjoy some of that buttery smooth Linux network experience and rock solid stability, don't you think? Especially when serious mone [prizetrac.kr]
Re: (Score:2)
This is exactly why I went with nVidia when upgrading this generation; I've had a lot of issues with my AMD card on Linux.
I have a dual boot, so I mainly game in Windows, but only because I lack the option to run those games in Linux.
Those I can run in Linux, I do. But the Radeon drivers are really giving me problems, and I've tried a few.
Re: (Score:2)
Now I'm left to wonder if the infamous Radeon mouse cursor corruption bug also exists on Linux. That damn thing has been around for more than a decade and I'm not buying another Radeon unless it gets fixed.
Re: (Score:2)
Now I'm left to wonder if the infamous Radeon mouse cursor corruption bug also exists on Linux.
Fixed last year [amd.com]. Catalyst bug, I have been running the open source driver for the last few years so I never saw it.
Re: (Score:2)
Re: (Score:2)
It's not really any trouble unless you love to play some AAA franchises.
FWIW, Steam works very well on Linux and most of the games I play are native.
Re: (Score:2)
It's not really any trouble unless you love to play some AAA franchises.
So you mean, like the bulk of the market? This situation may improve as we move towards Vulkan but it's still real, and still annoying.
Re: (Score:2)
So you mean, like the bulk of the market?
You're right, it doesn't apply to me, but I know a lot of gamers that it would apply to.
We're starting to see a shift now, and some of those large studios are developing for Linux, but they're still too few in number, and that doesn't even count the complications added by coding for OpenGL instead of DirectX.
Re:Which is recommended for Linux gaming? (Score:4, Interesting)
Valve: OpenGL is faster than DirectX — even on Windows (20% faster) [extremetech.com]
Bringing Unreal Engine 4 to OpenGL [nvidia.com]
The only reason developers should consider DirectX at this point is if they need to run on an XBONE.
Re: (Score:2)
Hell, even Doom 1 and 2 and Quake 1, 2, and 3 are not sold on Steam for Linux, although these games pioneered Linux gaming.
It also sucks how gaming requires hundreds of dollars in upgrades that are of no use for almost everything else (browsing, documents, watching video, etc.). Why spend big bucks on a gaming Linux desktop? You could either get a Windows desktop instead (same hardware with Windows installed) or a console. The Windows desktop not only has more games, it makes it trivially easy to run Quake 1, 2, 3, D
GloFo 14nm vs TSMC 16nm (Score:3)
Does anybody have any idea whether GloFo's 14nm FinFET has some sort of disadvantage vs TSMC's 16nm? Otherwise it looks quite bad for the AMD engineers when they have to use more power than the much faster GTX 1070 and also max out at around 1.3GHz while nVidia pulls 2GHz...
And this is from a longtime AMD/ATI fan, mainly because I credit AMD/ATI with keeping Intel & nVidia coming up with new stuff at decent price points through competition, so it saddens me to see them lagging behind these last few years...
Re: (Score:1)
The difference in perf/w is actually pretty minor if you look outside games; the RX 480 delivers 5161 GFLOPS and the 1070 does 5783 (Both at base clock), and their memory bandwidth is the same. It's a real shame that they haven't been able to tap all that power in games, but they're still fairly competitive as far as GPGPU goes.
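Those GFLOPS figures follow from the standard peak-FP32 formula (shader cores × clock × 2, since one fused multiply-add counts as two floating-point operations per cycle); a quick sketch reproducing the parent's numbers from the publicly listed core counts and base clocks:

```python
# Peak FP32 throughput: each shader core retires one fused multiply-add
# (2 floating-point ops) per cycle.
def peak_gflops(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz * 2

rx480 = peak_gflops(2304, 1.120)    # RX 480: 2304 cores at 1120 MHz base
gtx1070 = peak_gflops(1920, 1.506)  # GTX 1070: 1920 cores at 1506 MHz base

print(f"RX 480:   {rx480:.0f} GFLOPS")   # ~5161
print(f"GTX 1070: {gtx1070:.0f} GFLOPS") # ~5783
```

This is a theoretical peak, of course; as the parent notes, how much of it a game actually taps is a separate question.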
Re: (Score:1)
but they're still fairly competitive as far as GPGPU goes.
That will only be true once nVidia releases CUDA 8, so that existing CUDA applications can be run at all and any kind of "fair" comparison can be made. Not that it matters much, as the GPGPU market is purposefully segmented by nVidia to maximize profits, and comparisons are meaningful only within an ecosystem for non-custom software. Portable performance still means only portability between GPU generations.
Re:GloFo 14nm vs TSMC 16nm (Score:4, Interesting)
Anybody has any idea whether GloFo's 14nm FinFET has some sort of disadvantage vs TSMC's 16nm? Otherwise it looks quite bad for the AMD engineers when they have to use more power than a much faster GTX 1070...
AMD's product is released and independently tested while nVidia's is only announced, so take those claims with a grain of salt. I believe the technical term for the situation is "FUD". Even if you accept nVidia's claims at face value, the 480 still comes out as great value and is shipping now. I guess the market agrees because the initial production run seems to be mostly sold out.
Re: (Score:2)
The question was about the already-released GTX 1070, a larger, faster chip that nevertheless draws less power and clocks 50% higher than the RX 480.
Ah. I seriously doubt that the measured power consumption of the GTX 1070 really stays below 150 watts running at top clock speed. Oddly enough, I see the reviewers just accepting nVidia's claims without reporting actual measurements, in contrast to the current tempest in a teapot over the RX 480.
If they are on an equivalent process, it would mean an engineering issue. Or is GloFo's 14nm node not as good as the TSMC 16nm?
It's GloFo+Samsung, by the way. Speculation about the relative merits of the processes is just speculation until we see a lot better, trustworthy real life measurements. What we know for sure is that the two produc
Re: (Score:2)
Ah. I seriously doubt that the measured power consumption of the GTX 1070 really stays below 150 watts running at top clock speed. Oddly enough, I see the reviewers just accepting nVidia's claims without reporting actual measurements, in contrast to the current tempest in a teapot over the RX 480.
Not sure why you say that.
The numbers. The RX 480 is 5.7b transistors at 14nm; the GTX 1070 is 7.2b at 16nm. Both engineering teams are top in their field, and I doubt anybody seriously dropped the ball on architecture. And so far there are no reports of glaring deficiencies in the 14nm process. The nVidia part is clocked 35% higher. So: more transistors, higher clock speed, larger process, but the same or less power consumption? Doesn't add up, not even close. This leaves measurement error as the likely explanation. BTW, I'm a big fan of bo
Re: (Score:2)
Actually, here's another theory: they were running the reference cards over-volted to be conservative, given that the yield ramp on the new process node likely created a shortage of parts that run reliably at lower voltage. Now, with a little more time to establish the safe operating limits, they sent out their update to reduce the operating voltage. Whatever, it's a niggle, the big news about this part is the price.
Re: (Score:2)
The numbers. RX 480 is 5.7b transistors at 14nm, GTX 1070 is 7.2b at 16nm. Both engineering teams are top in their field, I doubt anybody seriously dropped the ball on architecture. And so far there are no reports of glaring deficiencies of the 14nm process. The nVidia part is clocked 35% higher. So: more transistors, higher clock speed, larger process, but the same or less power consumption? Doesn't add up, not even close. This leaves measurement error or as the likely explanation.
There is no measurement error. This has already been proven by various publications; some measure power on the rails, others at the socket, and you can't mess up that kind of measurement, it is very simple. The RX 480, with fewer transistors, a lower clock speed, and less performance for most things, uses the same power as the GTX 1070, and I am asking why. I guess only insiders would really know, but my one guess was some sort of issue with the 14nm process, hence my question.
Re: (Score:2)
The RX 480, with fewer transistors, a lower clock speed, and less performance for most things, uses the same power as the GTX 1070, and I am asking why. I guess only insiders would really know, but my one guess was some sort of issue with the 14nm process, hence my question.
Given that AMD's recent fix mainly lowered the operating voltage, it would seem that the parts were running over-volted, perhaps a conservative strategy adopted for yield reasons on the relatively new process node. So, indirectly it would be an issue with the process, the same as any new process: the yield curve. Otherwise, Samsung has been shipping the 14nm Snapdragon 820 in the S7 for some time, it seems the process node is pretty solid. There is some chatter out there that there is room to optimize the 1
It's actually not very good competition (Score:3, Insightful)
The GeForce GTX 1060 can be seen as a direct response to the AMD Radeon RX 480, which offers a ton of performance at the $200 price point.
I disagree strongly. I'm in the market for a new video card to replace my 750 Ti, which is currently out for RMA. It only has 1GB on it, and that's starting to be a problem for me, so I'm looking at moving up to a whole 2GB or so. I've been a fairly loyal nVidia customer basically all along; after the PowerVR and the Voodoo and Voodoo 2, I owned the TNT, and the TNT2, and went on to own every other generation of GeForce from the 2 up until now. (I skipped the original; I had a Permedia 2 AGP 8MB then, which was just slightly slower but had much better image quality.) Every so often I tried an ATI card, and the results were always disastrous. Twiddled DnA drivers made ATI cards more or less usable in the bad, sad early days of Catalyst, but they were always a bigger PITA than nVidia.
On the other hand, many people say that AMD has come a long way with the drivers, and the hardware actually seems to have offered competitive performance for some time now. In practice, the RX480 has not caused anyone any problems yet, aside from some texture flashing in water in Crysis 3 when used in a Crossfire configuration (watched the video this morning.) PCI-SIG members say that the RX480 isn't going to burn out anyone's motherboard traces or their power supply any more than any other common GPU, many of which play fast and loose with the standards. Meanwhile the 1060 doesn't support SLI, costs 20% more, and offers maybe 15% better performance. I don't see that as a credible competitor. I can buy one RX480 now (or perhaps in a few more days when the release of the 1060 knocks the price down slightly) and then pick up another one later if I want to do 4k or I find that I just need more grunt to run some game. I can't do that with the 1060.
I'm still leery of buying an AMD card, and will probably wait for partner RX480s to come out before I consider it seriously. The stock GPU cooler on the RX480 is a bit garbage, and I don't want to go to water cooling; I already have a massive air cooler in my system and plan to stick with it. But since, Crysis 3 in 4k aside, there seems to be no actual problem so far with even dual RX480s in Crossfire, and overclocked they seem to be a credible option, I'm thinking of cuddling one of them up to my FX-8350/990FX-Gaming system real soon now, with plans for another one at a later date when they're even cheaper.
Given my history with AMD/ATI graphics, which is unfortunate, I'm still leery of this plan and might just buy one fat GPU up front, but I really don't need that much GPU right now and I don't particularly want to pay for it. But the 1060 is not even in the running if it doesn't include SLI.
Re: (Score:2)
Re: (Score:2)
Meanwhile the 1060 doesn't support SLI, costs 20% more, and offers maybe 15% better performance. I don't see that as a credible competitor. I can buy one RX480 now (or perhaps in a few more days when the release of the 1060 knocks the price down slightly) and then pick up another one later if I want to do 4k or I find that I just need more grunt to run some game. I can't do that with the 1060.
Two cards in SLI do not provide 100% more performance than one card; on average maybe 80% more, depending on game and resolution. A GTX 1060 has half the cores of a GTX 1080, at a little less than half the price. I see that you want the option to upgrade, but if you eventually want 2x1060 in SLI just buy a 1080 for the same number of cores. It's 20% more expensive ($600 vs $250 x2) but 20% more performant than the same number of cores in SLI would be.
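As a rough sanity check of that comparison (core counts from the summary; the ~80% SLI scaling figure is the parent's own assumption; clock and memory differences are ignored, so treat this as a sketch, not a benchmark):

```python
# Effective throughput of 2x GTX 1060 in SLI vs one GTX 1080, counting
# cores only and applying the parent's ~80% scaling for the second card.
cores_1060, cores_1080 = 1280, 2560
sli_scaling = 1.8  # two cards deliver ~1.8x the throughput of one

effective_sli_cores = cores_1060 * sli_scaling  # 2304 "effective" cores
advantage = cores_1080 / effective_sli_cores - 1

print(f"GTX 1080 advantage over 2x1060 SLI: ~{advantage:.0%}")
```

Under this simplified model the single 1080 comes out roughly 11% ahead for 20% more money; the real gap depends on clocks and on how well a particular game scales in SLI.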
Re:It's actually not very good competition (Score:4, Informative)
Two cards in SLI do not provide 100% more performance than one card; on average maybe 80% more, depending on game and resolution.
Looks like it's closer to 90% for the RX480s.
I see that you want the option to upgrade, but if you eventually want 2x1060 in SLI just buy a 1080 for the same number of cores. It's 20% more expensive
This is why I'm looking at an AMD card again... because nVidia's answer is always "spend more money"
Re: (Score:2)
Dual GPU means more latency and twice the bullshit.
4K is also a crapton of pixels you might as well avoid, unless you're a developer with 20 xterms open or a spreadsheet professional who uses spreadsheet software with a UI that scales arbitrarily.
If 1080p is okay for work, I believe a 1080p 144Hz would maximize the gaming value, if good antialiasing can be used too.
Re: (Score:2)
Well, guess what? I just heard back on the RMA of my 750 Ti 1GB. Zotac doesn't have any of those lying around, so they're going to send me a GTX 950 AMP! 2GB, which is a pre-overclocked dual-fan 950... which does support SLI. So it looks like I am going SLI, but I'm staying nVidia.
Dual GPU does mean more latency, but not twice the latency or anything silly like that, and I'm not using any annoying input devices which will compound the problem. It does mean more bullshit, but I don't know about twice. My PS
Re: It's actually not very good competition (Score:2)
You missed out on the 9800 line (Score:2)
Holding out for fanless (Score:4, Interesting)
Hopefully soon they'll follow up with an even lower power 1050 card.
I always buy the very best fanless card for my Linux (no games) desktop. When a better one comes along, I buy it.
Re: (Score:2)
Is there any reason for you not to use Intel's integrated graphics?
Re: (Score:2)
If all you do is standard office tasks, probably not. If you do graphics-intensive applications (image editing, video editing, CAD, etc.) or applications that can benefit from offloading tasks to the GPU (via whatever technology/cores/execution units/stream processors the cards offer), then possibly/probably, but the improvement may not be worth the extra cost, power, and/or noise that comes with a discrete GPU.
Re: (Score:2)
Blender works reasonably well for me on Intel HD, but I'm not a very sophisticated Blender user.
(PS - my salary is tied to how many GPUs people buy)
Re: (Score:2)
If you do graphics-intensive applications (image editing, video editing, CAD, etc.) or applications that can benefit from offloading tasks to the GPU (via whatever technology/cores/execution units/stream processors the cards offer), then possibly/probably, but the improvement may not be worth the extra cost, power, and/or noise that comes with a discrete GPU.
What do you mean by offloading? In my experience, image and video editing don't use enough GPU power to justify a discrete card. The main work is done on CPUs, and some OpenGL features might be used for filtering. Unless, of course, you offload the actual work to GPUs as well.
Even so, today's Intel graphics do plenty of OpenGL (see shameless plug [youtube.com] for example). CAD will generally benefit from "real" GPUs due to better conformance/precision, not so much due to raw speed; see this Intel HD bug [github.com] for example.
Re: (Score:2)
Several of the custom boards for the 1070/1080 come with fans that turn off when not under high load.
Both the MSI Gaming and ASUS Strix have a 0dB mode when not under heavy load.
+15% faster versus +25% the price (Score:1)
The base model RX 480 is $200. The 8GB version is $230.
I'll have to wait for some real-world tests to see how well each one does.
Three monitor gaming? (Score:2)
Re: (Score:2)
No, that viewport rendering is for VR speed improvements. 16 views of the same camera angle in a single pass for multi-eye renders
The 1060 will support multiple monitors though; as you said, it has the same output support of 1 DVI / 1 HDMI / 3 DP (and that's before any AIB changes, as ASUS has a 2 HDMI / 2 DP 1080 and Gigabyte has an HDMI add-on board that disables a DP port).
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
Just don't call it a benchmark.
No one said anything about benchmarks, it's "results". Actual results are what matter, not a number that says what should theoretically happen.
At least I have a card that works AND games well on linux with open drivers.
That's fantastic, how are you enjoying playing Fallout 4 on Linux? Or, wait, you wouldn't use Steam on Linux, because it's not open and everything that isn't open has backdoors, and you're not a cuck, therefore you only and exclusively use open software. So which games do you play, exactly?
Re: (Score:2)
Re: (Score:2)
No, he's not saying he's impressed, he's saying that it doesn't matter how it happened, the nVidia card can run the games he cares about faster, and at lower power than the AMD card, so the nVidia card is the one he's going to buy.
Re: (Score:2)
I don't understand how anyone can post stuff like this on every news story, day after day, without getting sanity-snapping bored....
Or are we simply long past the "sanity-snapping" part?
Re: (Score:2)
I don't understand how anyone can post stuff like this on every news story, day after day, without getting sanity-snapping bored....
Or are we simply long past the "sanity-snapping" part?
I prefer the "APP APP LUDDITE" goofball posts to the crap spewed by that festering anal sore "APK" and his pointless bullshit about his magical hosts file.
Re: (Score:2)
Is it the same guy who kept posting about cows?
Re: (Score:1)
"You're talking to moo?"
Re: (Score:2)
I would guess that sexconker (the cow guy) is also the app guy. It's the same brand of repetitious high comedy that he seems to enjoy.
Re: (Score:2)
Re:15% performance increase (Score:4, Interesting)
The article says, "NVIDIA claims that the GTX is on average 15 percent faster than its closest competitor (i.e. the Radeon RX 480)", leaving it ambiguous as to which model they were referring to. Given the pricing (4GB 480 for $200, 8GB 480 for $240, 6GB 1060 for $250), we'd assume that the 15% increase would be over the $240 RX 480, since it's the closest competitor in terms of price, but NVIDIA may be using some coy phrasing to compare the 1060 against a fictional mid-level RX 480 that averages the capabilities of the 4GB and 8GB models.
If it really is achieving a 15% increase over the $240 RX 480, then that's substantial, especially so considering that it does so "while also being over 75 percent more power efficient [than its closest competitor]", because at that point you'd be paying just $10 for a noticeable performance boost that would pay for itself over time from power savings. They'd sweep the legs completely from underneath the high-end 480. But if it's actually just 15% faster than a fictional, mid-level model or the 4GB model, that's substantially less impressive.
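The "pay for itself over time" part can be roughly quantified from the TDPs (120W for the 1060 per the summary, 150W for the RX 480); the electricity rate below is an assumed $0.12/kWh for illustration only:

```python
# Hours of full-load use before the 1060's $10 premium over the 8GB RX 480
# is repaid by its ~30W lower TDP. The electricity rate is an assumption.
premium_usd = 250 - 240       # 6GB GTX 1060 vs 8GB RX 480
watts_saved = 150 - 120       # RX 480 TDP minus GTX 1060 TDP
usd_per_kwh = 0.12            # assumed electricity price

break_even_hours = premium_usd / (watts_saved / 1000 * usd_per_kwh)
print(f"~{break_even_hours:.0f} hours to break even")  # ~2778 hours
```

That works out to a couple of hours of gaming a day for several years, so the power-savings argument is real but slow to pay off; the performance-per-dollar difference matters much sooner.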
I'm eagerly awaiting the benchmarks.
Re: (Score:2)
Both models are indistinguishable in performance. https://www.youtube.com/watch?... [youtube.com]
TL;DR: the only game that shows a difference is one where the 480 doesn't have enough GPU power to render the higher-resolution textures that eat up more than 4GB of VRAM.