NVIDIA Launches GeForce GTX 1060 To Take On AMD's Radeon RX 480 (hothardware.com) 89
Reader MojoKid writes: NVIDIA just launched its answer to AMD's Radeon RX 480 mainstream card today, dubbed the GeForce GTX 1060. The GP106 GPU at the heart of the GeForce GTX 1060 has roughly half the resources of NVIDIA's current flagship GeForce GTX 1080. NVIDIA claims the GTX 1060 performs on par with the previous-generation high-end GeForce GTX 980, and indeed this 120W mainstream card offers an interesting mix of low power and high performance. The GeForce GTX 1060 features a somewhat smaller Pascal-derivative GPU, called the GP106. The GP106 features 10 streaming multiprocessors (SMs), each with 128 single-precision CUDA cores and eight texture units, for 1280 CUDA cores in total. The GeForce GTX 1060 also features six 32-bit memory controllers, for 192 bits in total. GeForce GTX 1060 cards with either 6GB or 3GB of GDDR5 memory will be available; in testing, the card offered performance that just misses the mark set by the pricier AMD Radeon R9 Nano but often outran the 8GB Radeon RX 480. The GeForce GTX 1060 held onto its largest leads over the Radeon RX 480 in the DirectX 11 tests, though the Radeon had a clear edge in OpenCL and managed to pull ahead in Thief and in some DirectX 12 tests (like Hitman). The GeForce GTX 1060, however, consumes significantly less power than the Radeon RX 480 and is quieter too. You may also want to read PCPerspective's take on this.
Deja vu! (Score:2, Informative)
All over again [slashdot.org].
What is this? An Alzheimer's test?
Re:Deja vu! (Score:4, Informative)
Re: (Score:1)
Would have been nice to see that specified in the summary.
By the way, it pretty much goes without mentioning that an NVIDIA card runs cooler than an AMD one :-)
Re: (Score:2)
Would have been nice to see that specified in the summary.
One title said that the card was "announced" while the other said that it had been "launched". A pretty clear distinction right from the start. Then the summary says:
What do you think that these tests are if they aren't benchmarks?
Re: (Score:1)
Ah, so then the "announcement" was really an nvidia ad? Do they really need that much front page exposure?
Re: (Score:2)
Today's launch is for reviews, with performance results, etc. That previous post was just the "announcement".
When will VideoCards peak? (Score:5, Insightful)
We had a good run from 1995-1998 with the SVGA cards that did 1024x768 with 32-bit color. Then 3D acceleration came out and buying a good video card became much more difficult.
With displays going up to 4k, we should be getting to a point where increases in resolution will not matter, and 3D performance on those displays should be quick enough.
While Moore's law is in effect, our bodies are not adapting as fast as the technology, so there should be a point where the video from a computer meets a threshold where playing the upgrade game isn't going to be important.
Much like how we don't talk much about Sound cards.
Re: (Score:3)
With displays going up to 4k, we should be getting to a point where increases in resolution will not matter, and 3D performance on those displays should be quick enough.
Quick enough for what? When we reach photorealism at dual 4k, then we can maybe talk about peaking. We're a long, long way off from that.
Re: (Score:2)
So, what, six, ten years out? Battlefield 4 isn't photorealistic but it's definitely moving in that direction with just a few tricks.
Re: (Score:1)
When we reach photorealism at dual 4k, then we can maybe talk about peaking. We're a long, long way off from that.
That, and the petabytes of storage and RAM needed to store all that for a 30 second video
Re: When will VideoCards peak? (Score:3)
When we reach photorealism at dual 4k, then we can maybe talk about peaking.
When a single mobile GPU can drive a pair of small 8k×8k 120Hz stereoscopic displays, then we can maybe talk about peaking. :)
Re: (Score:2)
Most people don't need that much power,
The question was not whether most people need that much power, the question was when can we talk about GPUs peaking, and the answer is not any time soon. Also, you're wrong about resolution. Anyone would benefit from a 4k monitor, if their apps are properly designed and they can make use of it. Smoother text is easier on the eyes.
Re: (Score:2)
Are we really a long, long way from that? Let's not forget that only two decades ago, the top videogames looked like this [wikipedia.org].
Is it really that far-fetched to think that we're only a decade or two away from photorealist, stereoscopic 4K gaming with 120fps per display?
Re: (Score:2)
The progress is definitely slowing down, however. Far Cry was released in 2004 and looked amazing at the time. Crysis came out just three years later and was clearly a whole new level. That was 2007, or almost ten years ago. It still looks very, very good by today's standards, if not quite top notch. Crysis 3 is three years old now, and while it's a moderate improvement over 1/2, not much, if anything, has surpassed it yet, certainly not to the degree that Crysis improved on Far Cry.
Hopefully it's been mostly an i
Re: (Score:2)
Likely not for awhile. And with stuff like HBM(and memory on GPU die) in the pipe, I wouldn't expect that to happen for a decade or more, especially since computer video displays are moving into 4k. The reality is, there's never enough processing, memory, or bandwidth on a video card and there's plenty of limitations on current PC's that cause issues.
But buying a videocard became difficult? Hardly. Buy a good mid-range card for $150-200 every 5 years if you're not a hardcore gamer (though lots of games o
Re: (Score:2)
Re: (Score:3)
exactly. even if we increased the processing power of graphics cards 20 fold right now, we still wouldn't even have real-time raytracing of inanimate scenes. let alone trees with moving leaves, human hair or people wearing realistically looking fabric (of fur).
Re: (Score:2)
I'm all for realistically looking anthropomorphic characters (Miqo'te), but your comment is leaning a bit too far on the furry cosplay side...
Re: (Score:2)
s/of fur/or fur
Re: (Score:2)
Re: (Score:3)
...44.1kHz 16 bit audio is relatively trivial...
So, is there an analogous specification for video cards? The 44.1kHz @ 16-bit is pretty easily justified (Nyquist–Shannon + reasonable dynamic range). Can a visual equivalent be easily justified? That is to say, at a sort of "eye-limited" (retina, in Apple lingo) resolution and field of view, how many polygons can be said to make up the human perception of reality, and what sort of graphics processing muscle would be required to drive this?
I of course have no idea, just wondering out loud. Just tr
Re: (Score:3, Interesting)
Well, let's do a bit of math then to figure it out.
According to https://www.nde-ed.org/EducationResources/CommunityCollege/PenetrantTest/Introduction/visualacuity.htm [nde-ed.org], 20/20 vision is
the ability to resolve a spatial pattern separated by a visual angle of one minute of arc
Let's take that as a given for the sake of the argument, and assume that we want just enough dpi on our screen that one pixel shows up at a visual angle of one minute of arc. So the screen can just match the resolution of the eye.
Let's also assume that the largest screen we might ever want is as wide as the viewing distance from o
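The truncated calculation above can be sketched end to end. A minimal version, taking the one-arcminute acuity figure quoted from nde-ed.org as given and assuming a hypothetical 60 cm desktop viewing distance (the distance is my assumption, not from the post):

```python
import math

def retina_ppi(viewing_distance_mm):
    """Pixels-per-inch needed so one pixel subtends one arcminute,
    the 20/20 acuity threshold quoted above."""
    one_arcmin = math.radians(1 / 60)                    # 1 arcminute in radians
    pixel_pitch_mm = viewing_distance_mm * math.tan(one_arcmin)
    return 25.4 / pixel_pitch_mm                          # mm pitch -> PPI

# At an assumed 60 cm viewing distance:
print(retina_ppi(600))   # roughly 145-146 PPI
```

So under these assumptions, an "eye-limited" desktop monitor needs only on the order of 150 PPI; held at phone distance the same math gives the familiar ~300 PPI "retina" figure.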
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Moore's law has been dead for quite a while now.
You have misunderstood what Moore's law is about. It is simply about the number of transistors doubling in integrated circuits every year (later revised to every two years). It is not about single threaded performance in CPUs.
That is why they have just been adding more cores and cache and trying to improve memory technology.
How do you think they add more cores and cache into CPUs if not by increasing the number of transistors? You have just described Moore's law in action!
Moore's law has been around for decades, which is only slightly longer than the predictions that the law is dying.
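The law as stated above (transistor count doubling every two years) is just an exponential; a tiny illustration, with an arbitrary example starting count that is not from the thread:

```python
def transistor_count(start_count, years, doubling_period_years=2):
    """Moore's law as stated above: count doubles every two years."""
    return start_count * 2 ** (years / doubling_period_years)

# A hypothetical die with 2 billion transistors, projected 10 years out:
print(transistor_count(2e9, 10))   # 2e9 * 2**5 = 64 billion
```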
Re: (Score:2)
"Moore's law has been dead for quite a while now.
You have misunderstood what Moore's law is about. It is simply about the number of transistors doubling in integrated circuits every year (later revised to every two years). It is not about single threaded performance in CPUs."
Oh boy, here we go again, another Moore's law explainer.
So, try to understand that Moore's law got well known because of all the speed that your precious transistor count brought.
Nobody cares about the transistor counts, people upgraded because
Re: (Score:2)
Re: (Score:2)
Just because you, your grandma, and CNN's tech section editor misunderstood something for a while, doesn't make it right.
Re: (Score:2)
Re: (Score:2)
The other day Tim Sweeney of Epic games said we need about 40 TFLOPS to get realistic (non-human) visuals, Current generation is 5 to 10 TFLOPS.
After that, we probably need it to run off of a AA battery.
There's still some room to advance from where we are today.
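Taking Sweeney's ~40 TFLOPS figure and the 5-10 TFLOPS of current cards at face value, and assuming throughput keeps doubling every two years (a rough extrapolation, not a guarantee), the gap closes surprisingly fast:

```python
import math

def years_to_reach(target_tflops, current_tflops, doubling_years=2):
    """Years until GPUs hit a target throughput, assuming FLOPS double
    on a fixed cadence (a rough assumption, not a guarantee)."""
    doublings = math.log2(target_tflops / current_tflops)
    return doublings * doubling_years

# Sweeney's ~40 TFLOPS target vs. the 5-10 TFLOPS cards mentioned above:
print(years_to_reach(40, 5))    # 3 doublings -> 6 years
print(years_to_reach(40, 10))   # 2 doublings -> 4 years
```

The AA-battery part is the harder ask: that is a power-efficiency target, and perf/watt has historically improved more slowly than raw throughput.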
Re: When will VideoCards peak? (Score:2)
Then that 3D acceleration came out and buying a good video card became much more difficult.
There was a sequence of "correct" 3D cards to own which more or less went from Matrox Millennium to 3Dfx to Nvidia, but if you bought wrong (Nvidia NV1, anything ATi before R300, S3, Number Nine, Voodoo4/5, etc.), you were generally not a happy camper... fortunately for me, I learned my lesson early on with "Tandy 16-color graphics" (EGA-comparable but not compatible).
Re: (Score:2)
At least your Tandy had that 3-voice synth IC, as opposed to those of us on EGA systems stuck with a crappy monophonic* speaker.
* yes, I know about digital audio via the PC speaker. But it took a lot of CPU to do that and it sounded like crap on top of a high-pitched whine.
Re: (Score:2)
VR will be pushing dual 4k @ 90+fps in less than 5 years most likely. At that point, I think we'll be close to the threshold you speak of.
Re: (Score:2)
Where do you get "2000 pixels per mm" exactly? I've got something 3D-printed right in front of me with 200 microns layers and I can only see the layers because of the light reflection.
2000 per mm means 0.0005 millimeters, which is insanely tiny as far as our eyes are concerned.
Re: (Score:1)
Where do you get "2000 pixels per mm" exactly?
Science. Any other questions?
Re: (Score:2)
We had a good run From 1995-1998 with the SVGA cards that did 1024x768 with 32bit color. Then that 3D acceleration came out and buying a good video card became much more difficult.
And for all this time, I have been hoping for a split, where the display card is decoupled from the acceleration card, and talking with an open bus standard.
And I also like to see a return to analog video output. No pixels - that's the property of the software and not the rendering medium. Higher quality analog can display higher fidelity.
Much like how we don't talk much about Sound cards.
Joe Schmoe doesn't care about sound anymore. Gone are symphonic rock through HiFi systems with discrete components an
Re: (Score:2)
Right, so, you yearn for the days of an ATI Mach64 for 2D video, a pair of 3dfx Voodoo2s in SLI, and an Aureal A3D sound card, or an SB32 with a WaveBlaster2 daughterboard.
Those were, indeed, good times, though some of it was through the rosy glasses of nostalgia.
Re: (Score:2)
And for all this time, I have been hoping for a split, where the display card is decoupled from the acceleration card, and talking with an open bus standard.
What do you think you would gain there? Not having to buy the video connectors repeatedly? They're a pretty small portion of the price.
Re: (Score:2)
What do you think you would gain there? Not having to buy the video connectors repeatedly? They're a pretty small portion of the price.
Being able to add just the (and all the) connectors you need, and be able to get higher quality DA components if you want. Be able to not pay for accelerated 3d if you don't need it. Be able to pay for better accelerated 3d if you need it. Have completely independent video cards for different functions. Be able to run a game on one display and my e-mail on another, simultaneously, because I get back the true multihead support that the young whippersnappers ripped out from Linux in the early 2000s. Ru
Re: (Score:2)
Being able to add just the (and all the) connectors you need, and be able to get higher quality DA components if you want.
Not that anyone uses analog output any more, but they tend to have a wicked high-speed RAMDAC on there for that minuscule portion of the market still using CRTs now that we have things like LCDs with adaptive sync.
Be able to not pay for accelerated 3d if you don't need it.
It's a tiny portion of the price at the low end.
Be able to pay for better accelerated 3d if you need it.
You can already do that.
Have completely independent video cards for different functions.
You can do that, too! You can even install an Nvidia card just for PhysX, and do graphics on an AMD card! Or you can use a card just for GPGPU.
Be able to run a game on one display and my e-mail on another, simultaneously, because I get back the true multihead support that the young whippersnappers ripped out from Linux in the early 2000s.
I'm able to do that on Windows, heh.
An "everything but the kitchen sink" approach is always going to be a jack of all trades, and master of none. I don't put up with it for audio, so why should I for video? Choice is good, and discrete components offer that.
Most of us are using "integrated" audio and hav
Re: (Score:2)
The idea is to have video output flow arbitrarily. Imagine adding more outputs on a card for your integrated graphics, without needing to go dual GPU ; or on the contrary, have two different GPU, one for Windows and one for Linux but just one set of outputs. So that you don't need to fiddle with a KVM switch, dual input monitors and their menus, be stuck with one or two Linux monitors next to one Windows monitor (all fixed), etc.
There are some existing technologies that do something like that but in specifi
Re: (Score:2)
And for all this time, I have been hoping for a split, where the display card is decoupled from the acceleration card, and talking with an open bus standard.
I'm not sure if this is economically feasible, but it sure is a nice idea. A lot of my GPU usage is spent on rendering and computing, not just direct display, and I hate the idea of paying extra for components I never use. OTOH, every mechanical connector comes with a lot of overhead, not to mention potential for wear and damage. The first integrated circuits were conceived to avoid solder/connector issues, not so much miniaturization.
And I also like to see a return to analog video output. No pixels - that's the property of the software and not the rendering medium. Higher quality analog can display higher fidelity.
It's a somewhat interesting idea, especially considering the audio analo
Re: (Score:2)
For that matter, make a 16/10 21" CRT with HDMI etc. inputs and I'd be very interested. With 1920x1200 85Hz for some stuff, 1440x900 120Hz for other stuff (like most casual desktop usage), 1920x1080 60Hz works if you plug in random crap, 1024x768 or 1280x1024 with black bars for the odd thing, etc.
It would be technologically trivial (except for concerns of "lost technology") and there would be something of a market already with nostalgia gamers, fringe LCD haters and old fucks, what have you. But in th
Re: (Score:2)
But in these times, you can't have anything different. Even with LCD monitors, there's more choice lately but you can't get a monitor that's 16/10 and high refresh, or 16/10 and big, or all three at once. (nor even a 27" 1080p at 144Hz)
It's silly that the HD video/movie craze forced computer users to the same widescreen format, as if computers were all about watching movies. I recently got a couple of 1280x1024s for next to nothing, as my math exhibitions work best in near-square formats. OTOH, 16:9 is nice for a stage backdrop projection.
Re: (Score:2)
Re: (Score:2)
Fidelity to what? You still have to store the visual information digitally if it's coming from a computer.
You sound like one of those vinyl purists who romanticizes the 'golden age' before digital, and forgets how crappy it actually used to sound.
You sound like a kid that has never worked with compression algorithms, vectors and interpolation.
Why do you think you can stream an MP4 and get it to look good in 1080p? It's certainly not because they stream the full digital signal, tailored to your pixel resolution.
The tyranny of pixels is falling. With high resolution displays, it becomes a problem and not a solution. Scalable graphics with physical and relative measurements is the future. Including protocols for sending this type of information to
Re: (Score:3)
As it so happens, the demands of VR headsets mean that video cards available now are nowhere NEAR adequate. As a poster south of me says, you need at LEAST dual 4k - one for each eye - and fovea tracking - and at LEAST 90 FPS. All the time. With minimal latency.
Believe it or not, but not even the most expensive GPU money can buy - heck, not even unreleased GPUs that Nvidia has in Tesla cards (they are "released" but you can't use em as a graphics card) - is anywhere close to being able to push this kind
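The arithmetic behind that claim is easy to sketch. Assuming one 4k panel per eye at 90 fps with no foveated-rendering savings (my simplification of the post above), versus an ordinary 1080p/60 desktop:

```python
def pixels_per_second(width, height, fps, displays=1):
    """Raw pixel fill demand for a given resolution, refresh rate,
    and number of panels."""
    return width * height * fps * displays

vr = pixels_per_second(3840, 2160, 90, displays=2)   # one 4k panel per eye
desktop = pixels_per_second(1920, 1080, 60)          # a single 1080p/60 screen

print(f"{vr / 1e9:.2f} Gpix/s vs {desktop / 1e9:.2f} Gpix/s")
print(f"{vr / desktop:.0f}x the desktop workload")
```

That is a 12x jump in raw fill demand before counting the latency headroom and constant frame pacing VR needs, which is why fovea tracking (rendering full detail only where the eye points) is so attractive.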
Re: (Score:2)
Sound cards, at their core, are just creating analog frequencies from a digital source. This is a well-understood, mature technology, so there's not much to do there except reducing distortion and improving SNR.
GPUs however, still have a scale issue - simplistically, the more pixels you drive, the more horsepower you need in the GPU. If we would have stayed at 1024x768 then the GPUs we have today would be massive overkill. But we didn't - a 4k display has more pixels than 10 1024x768 displays, and we're d
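The pixel-count comparison in that last paragraph checks out; a one-line sanity check:

```python
# A 4k (UHD) panel vs. ten 1024x768 panels, as claimed above:
uhd = 3840 * 2160    # 8,294,400 pixels
svga = 1024 * 768    # 786,432 pixels

print(uhd, svga * 10, uhd > svga * 10)   # 8294400 7864320 True
```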
Re: (Score:2)
Can it run Oculus Rift (Score:1)
n/t
But what about the DPC latency? (Score:5, Insightful)
It will only be competition if you can find it (Score:2)
Re: (Score:2)
I'm curious. How does VGA passthrough work with virtualization? Is the host treated as a headless server while the one guest the VGA card is assigned to gets a display? Does the VGA card get reset and handed back to the host when that particular guest is shut down?
Or do you have two video cards/monitors, one for the host and one for the guest? In that case, do you also have a separate keyboard/mouse for this guest? If not, how does the keyboard/mouse input get assigned to the guest? (Synergy?)
Re: (Score:1)
It needs 2 video cards. One for the host, one for the guest. Keyboard and mouse are usually shared, but you can do dedicated pass-through of those as well as USB peripherals if you want. I can report that the Oculus works fine in a VM with VGA passthrough. On my setup, I use both monitors for Linux and then, when I want to start up a game, I boot a VM of Windows 7 and switch one of the monitors' inputs over to it. The performance is roughly the same as native. Since I play with v-sync on and have a fairly
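For reference, a minimal QEMU invocation for this kind of two-GPU passthrough might look like the sketch below. The PCI addresses, vendor:device IDs, image name, and memory size are placeholders I made up, not taken from the post; check your own with `lspci -nn`.

```shell
# Bind the guest GPU (and its HDMI audio function) to vfio-pci at boot.
# Example IDs only -- put your card's IDs in /etc/modprobe.d/vfio.conf:
#   options vfio-pci ids=10de:1c03,10de:10f1

# 01:00.0 = guest GPU, 01:00.1 = its audio function (both assumed)
qemu-system-x86_64 \
  -enable-kvm -machine q35 -cpu host -m 8G \
  -device vfio-pci,host=01:00.0 \
  -device vfio-pci,host=01:00.1 \
  -drive file=win7.qcow2,if=virtio
```

The host keeps its own GPU; the guest drives a monitor input directly from the passed-through card, matching the monitor-input-switching workflow described above.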
Re: (Score:1)
Depending on how urgent the replacement is, waiting a few more weeks might make sense. The first manufacturer designs should soon appear, with a few improvements over the reference design.
PC Games Hardware has a short hands-on test of the XFX Radeon RX 480 ( http://www.pcgameshardware.de/AMD-Radeon-RX-4808G-Grafikkarte-264637/Videos/XFX-RX-480-Black-Edition-Test-1202095/ [pcgameshardware.de]) which shows better cooling and an 8-pin power connector that can officially handle 150W, thus solving the problem with the reference desig
The elephant i (Score:2)
Well, it doesn't look too good for AMD. Their "super efficient" RX 480 uses much more power than the 1060 and is slower. ;) And AMD's version of async compute works far better than Pascal's (see: http://wccftech.com/nvidia-gef... [wccftech.com] and http://www.eurogamer.net/artic... [eurogamer.net] )
On the bright side, the price of the 480 is only $200 (well, eventually it will be
Re: (Score:3)
The 1060 is faster than the 480 in old games. However, newer games will use technologies like asynchronous compute in DX12 and Vulkan. Ashes of the Singularity and Hitman are good examples of the former, and the Vulkan build of Doom is a good example of the latter.
The 480 is faster than the 1060 in those 3 games. Doom/Vulkan is a *lot* faster on the 480.
Re: (Score:1)
I think you'll find though that the 480 is only faster if the game makes use of async compute.
Having said that, async compute is shaping up to be a very important feature.
Doesn't sound like a great value (Score:1)