Hardware

NVIDIA Launches GeForce GTX 1060 To Take On AMD's Radeon RX 480 (hothardware.com) 89

Reader MojoKid writes: NVIDIA just launched its answer to AMD's mainstream Radeon RX 480 today, dubbed the GeForce GTX 1060. The GP106 GPU at the heart of the GeForce GTX 1060 is a smaller Pascal derivative with roughly half of the resources of NVIDIA's current flagship GeForce GTX 1080. NVIDIA claims the GTX 1060 performs on par with the previous-generation high-end GeForce GTX 980, and indeed this 120W mainstream card offers an interesting mix of low power and high performance. The GP106 features 10 streaming multiprocessors (SMs) with a total of 1,280 single-precision CUDA cores and eight texture units per SM. The GeForce GTX 1060 also features six 32-bit memory controllers, for a 192-bit memory interface in total. GeForce GTX 1060 cards with either 6GB or 3GB of GDDR5 memory will be available; in testing, the card's performance just misses the mark set by the pricier AMD Radeon R9 Nano but often outruns the 8GB Radeon RX 480. The GeForce GTX 1060 held onto its largest leads over the Radeon RX 480 in the DirectX 11 tests, though the Radeon had a clear edge in OpenCL and managed to pull ahead in Thief and in some DirectX 12 tests (like Hitman). The GeForce GTX 1060, however, consumes significantly less power than the Radeon RX 480 and is quieter too. You may also want to read PC Perspective's take on this.
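For context on the 192-bit memory interface mentioned above, peak memory bandwidth is simply bus width times effective per-pin data rate. The sketch below is a rough illustration only; the 8 Gbps effective GDDR5 rate is an assumption (the rate commonly quoted for the 6GB card), not a figure from the summary.

```python
# Rough sketch: theoretical memory bandwidth for a 192-bit GDDR5 bus.
# The 8 Gbps effective data rate is an assumed value, not taken from the summary.
bus_width_bits = 6 * 32            # six 32-bit memory controllers -> 192-bit bus
effective_rate_gbps = 8            # assumed effective GDDR5 rate per pin, in Gbps

bandwidth_gb_per_s = bus_width_bits * effective_rate_gbps / 8   # divide by 8: bits -> bytes
print(f"{bus_width_bits}-bit bus @ {effective_rate_gbps} Gbps ≈ {bandwidth_gb_per_s:.0f} GB/s")
# -> 192-bit bus @ 8 Gbps ≈ 192 GB/s
```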
This discussion has been archived. No new comments can be posted.


  • Deja vu! (Score:2, Informative)

    All over again [slashdot.org].

    What is this? An Alzheimer's test?

    • Re:Deja vu! (Score:4, Informative)

      by msmash ( 4491995 ) Works for Slashdot on Tuesday July 19, 2016 @11:24AM (#52541311)
      No, today's post sheds more light on the GPUs and how they fare against each other -- with benchmark numbers etc.
      • Would have been nice to see that specified in the summary.

        By the way, it pretty much goes without saying that an Nvidia card runs cooler than an AMD :-)

        • Would have been nice to see that specified in the summary.

          One title said that the card was "announced" while the other said that it had been "launched". A pretty clear distinction right from the start. Then the summary says:

          The GeForce GTX 1060 held onto its largest leads over the Radeon RX 480 in the DirectX 11 tests, though the Radeon had a clear edge in OpenCL and managed to pull ahead in Thief and in some DirectX 12 tests (like Hitman).

          What do you think that these tests are if they aren't benchmarks?

    • by Vigile ( 99919 ) *

      Today's launch is for reviews, with performance results, etc. That previous post was just the "announcement".

  • by jellomizer ( 103300 ) on Tuesday July 19, 2016 @11:19AM (#52541277)

    We had a good run from 1995-1998 with the SVGA cards that did 1024x768 with 32-bit color. Then 3D acceleration came out and buying a good video card became much more difficult.
    With displays going up to 4K we should be getting to a point where an increase in resolution will not matter, and 3D performance on those displays should be quick enough.
    While Moore's law is in effect, our bodies are not adapting as fast as the technology, so there should be a point where the video from a computer meets a threshold where playing the upgrade game isn't going to be important.

    Much like how we don't talk much about Sound cards.

    • With displays going up to 4K we should be getting to a point where an increase in resolution will not matter, and 3D performance on those displays should be quick enough.

      Quick enough for what? When we reach photorealism at dual 4k, then we can maybe talk about peaking. We're a long, long way off from that.

      • by Hadlock ( 143607 )

        So, what, six, ten years out? Battlefield 4 isn't photorealistic but it's definitely moving in that direction with just a few tricks.

      • When we reach photorealism at dual 4k, then we can maybe talk about peaking. We're a long, long way off from that.

        That, and the petabytes of storage and RAM needed to store all that for a 30-second video.

      • When we reach photorealism at dual 4k, then we can maybe talk about peaking.

        When a single mobile GPU can drive a pair of small 8k×8k 120Hz stereoscopic displays, then we can maybe talk about peaking. :)

      • by Yvan256 ( 722131 )

        Are we really a long, long way from that? Let's not forget that only two decades ago, the top videogames looked like this [wikipedia.org].

        Is it really that far-fetched to think that we're only a decade or two away from photorealistic, stereoscopic 4K gaming with 120fps per display?

        • The progress is definitely slowing down, however. Far Cry was released in 2004 and looked amazing at the time. Crysis came out just three years later and was clearly a whole new level. That was 2007, or almost ten years ago. It still looks very, very good by today's standards, if not quite top notch. Crysis 3 is three years old now, and while it's a moderate improvement over 1/2, not much, if anything, has surpassed it yet, certainly not to the degree that Crysis improved on Far Cry.

          Hopefully it's been mostly an i

    • by Mashiki ( 184564 )

      Likely not for a while. And with stuff like HBM (and memory on the GPU die) in the pipe, I wouldn't expect that to happen for a decade or more, especially since computer video displays are moving to 4K. The reality is, there's never enough processing, memory, or bandwidth on a video card, and there are plenty of limitations on current PCs that cause issues.

      But buying a video card became difficult? Hardly. Buy a good mid-range card for $150-200 every 5 years if you're not a hardcore gamer (though lots of games o

    • There is tons of stuff game designers can't put in a game because it is too processing-intensive... trust me, they will use up any new GPU advances.
      • Exactly. Even if we increased the processing power of graphics cards 20-fold right now, we still wouldn't have real-time raytracing of inanimate scenes, let alone trees with moving leaves, human hair, or people wearing realistic-looking fabric (or fur).

    • by tlhIngan ( 30335 )

      We had a good run from 1995-1998 with the SVGA cards that did 1024x768 with 32-bit color. Then 3D acceleration came out and buying a good video card became much more difficult.
      With displays going up to 4K we should be getting to a point where an increase in resolution will not matter, and 3D performance on those displays should be quick enough.
      While Moore's law is in effect, our bodies are not adapting as fast as the technology, so there should be a point where the video from a computer meets a threshold

      • ...44.1kHz 16 bit audio is relatively trivial...

        So, is there an analogous specification for video cards? The 44.1kHz @ 16-bit is pretty easily justified (Nyquist–Shannon + reasonable dynamic range). Can a visual equivalent be easily justified? That is to say, at a sort of "eye-limited" (retina, in Apple lingo) resolution and field of view, how many polygons can be said to make up the human perception of reality, and what sort of graphics processing muscle would be required to drive this?

        I of course have no idea, just wondering out loud. Just tr

        • Re: (Score:3, Interesting)

          by Lonewolf666 ( 259450 )

          Well, let's do a bit of math then to figure it out.

          According to https://www.nde-ed.org/EducationResources/CommunityCollege/PenetrantTest/Introduction/visualacuity.htm [nde-ed.org], 20/20 vision is

          the ability to resolve a spatial pattern separated by a visual angle of one minute of arc

          Let's take that as a given for the sake of the argument, and assume that we want just enough dpi on our screen that one pixel shows up at a visual angle of one minute of arc. So the screen can just match the resolution of the eye.

          Let's also assume that the largest screen we might ever want is as wide as the viewing distance from o
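Carrying the parent's numbers one step further (a rough sketch only: it assumes one pixel per arcminute of 20/20 acuity, a flat screen as wide as the viewing distance and centered on the viewer, and treats the audio comparison from upthread as plain quantization dynamic range):

```python
import math

# Parent's assumptions: 20/20 vision resolves one arcminute, one pixel per arcminute,
# and a flat screen as wide as the viewing distance, centered on the viewer.
half_angle_deg = math.degrees(math.atan(0.5))   # half the screen spans atan(0.5) of visual angle
fov_deg = 2 * half_angle_deg                    # ~53.1 degrees of horizontal field of view
pixels_across = fov_deg * 60                    # 60 arcminutes per degree
print(f"Horizontal FOV ≈ {fov_deg:.1f}°, ≈ {pixels_across:.0f} pixels wide at 1 px/arcmin")

# The audio yardstick from upthread, for comparison: 16-bit quantization gives
# roughly 20*log10(2^16) ≈ 96 dB of dynamic range, and 44.1 kHz covers the
# ~20 kHz audible band via Nyquist.
dynamic_range_db = 20 * math.log10(2 ** 16)
print(f"16-bit audio dynamic range ≈ {dynamic_range_db:.0f} dB")
```

By that yardstick, a screen of that size and distance saturates 20/20 acuity at a bit over 3,000 pixels across; wider fields of view or better-than-20/20 vision push the number up.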

          • Nice. I was thinking more in terms of computational power required for arbitrary photo-realistic graphics at this resolution. I'm not even sure if that's a well-posed question, though. But perhaps one could decide the minimum size of a polygon/size of textures required/etc., and come up with some heuristic argument for theoretical GPU requirements that could provide an imperceptibly high frame rate at the "44.1kHz/16bit video" resolution/bitdepth, displaying an arbitrarily complex (up to the limit of human
    • Moore's law has been dead for quite a while now. Really, digital computing is reaching a dead end in itself. If you have noticed, the single-thread performance of Intel CPUs is only around 20% better than it was 5 years ago. The days of exponential growth are well over. That is why they have just been adding more cores and cache and trying to improve memory technology. All the low-hanging fruit has been picked.
      • Moore's law has been dead for quite a while now.

        You have misunderstood what Moore's law is about. It is simply about the number of transistors doubling in integrated circuits every year (later revised to every two years). It is not about single threaded performance in CPUs.

        That is why they have just been adding more cores and cache and trying to improve memory technology.

        How do you think they add more cores and cache into CPUs if not by increasing the number of transistors? You have just described Moore's law in action!

        Moore's law has been around for decades, which is only slightly longer than the predictions that the law is dying.
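As a trivial illustration of what the law as stated actually predicts (transistor count, not clock speed or single-thread performance), here is a doubling every two years compounded forward; the starting count and year are arbitrary, chosen only for illustration:

```python
# Moore's law as stated: transistor count roughly doubles every two years.
# The starting figure and year below are arbitrary, purely for illustration.
transistors = 1_000_000_000    # hypothetical 1-billion-transistor chip
year = 2016

for _ in range(5):
    year += 2
    transistors *= 2
    print(f"{year}: ~{transistors / 1e9:.0f}B transistors")

# Note: the law says nothing about how those transistors are spent;
# more cores and bigger caches count just as much as faster single threads.
```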

        • by mestar ( 121800 )

          "Moores law has been dead for quite a while now.

          You have misunderstood what Moore's law is about. It is simply about the number of transistors doubling in integrated circuits every year (later revised to every two years). It is not about single threaded performance in CPUs."

          Oh boy, here we go again, another Moore's law explainer.

          So, try to understand that Moore's law got well known because of all the speed that your precious transistor count brought.

          Nobody cares about the transistor counts, people upgraded because

          • by Pulzar ( 81031 )

            Nobody cares about the transistor counts, people upgraded because in about a year, your computer got twice as fast. This effect was known as a "Moore's law".

            Just because you, your grandma, and CNN's tech section editor misunderstood something for a while, doesn't make it right.

        • I know what Moore's law is. It is dead. The law is double every two years PER SQ/INCH. That hasn't happened. You don't know what you are talking about. I didn't say that was the sole cause of single-thread performance not increasing much, but it is a big contributor. There are only a few ways to get better single-thread performance, and increasing transistor count is historically the biggest one. Idiot.
    • The other day Tim Sweeney of Epic Games said we need about 40 TFLOPS to get realistic (non-human) visuals; the current generation is 5 to 10 TFLOPS.
      After that, we probably need it to run off of an AA battery.

      There's still some room to advance from where we are today.
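Taking Sweeney's 40 TFLOPS target and today's 5-10 TFLOPS at face value, the gap is only two to three doublings. The sketch below assumes a doubling every two to three years, which is purely a guess about future GPU cadence, not a figure from the comment:

```python
import math

# Figures from the comment above: 40 TFLOPS target, 5-10 TFLOPS today.
# The doubling cadence is an assumption for illustration only.
target_tflops = 40.0

for current_tflops in (5.0, 10.0):
    doublings = math.log2(target_tflops / current_tflops)
    for years_per_doubling in (2, 3):
        years = doublings * years_per_doubling
        print(f"From {current_tflops:.0f} TFLOPS: {doublings:.1f} doublings "
              f"≈ {years:.0f} years at one doubling every {years_per_doubling} years")
```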

    • Then that 3D acceleration came out and buying a good video card became much more difficult.

      There was a sequence of "correct" 3D cards to own which more or less went from Matrox Millennium to 3dfx to Nvidia, but if you bought wrong (Nvidia NV1, anything ATi before R300, S3, Number Nine, Voodoo4/5, etc.), you were generally not a happy camper... fortunately for me, I learned my lesson early on with "Tandy 16-color graphics" (EGA comparable but not compatible).

      • by Yvan256 ( 722131 )

        At least your Tandy had that 3-voice synth IC, as opposed to those of us on EGA systems stuck with a crappy monophonic* speaker.

        * yes, I know about digital audio via the PC speaker. But it took a lot of CPU to do that and it sounded like crap on top of a high-pitched whine.

    • by Gordo_1 ( 256312 )

      VR will be pushing dual 4k @ 90+fps in less than 5 years most likely. At that point, I think we'll be close to the threshold you speak of.

    • by arth1 ( 260657 )

      We had a good run from 1995-1998 with the SVGA cards that did 1024x768 with 32-bit color. Then 3D acceleration came out and buying a good video card became much more difficult.

      And for all this time, I have been hoping for a split, where the display card is decoupled from the acceleration card, and talking with an open bus standard.

      And I would also like to see a return to analog video output. No pixels - that's a property of the software and not the rendering medium. Higher-quality analog can display higher fidelity.

      Much like how we don't talk much about Sound cards.

      Joe Schmoe doesn't care about sound anymore. Gone are symphonic rock through HiFi systems with discrete components an

      • Right, so, you yearn for the days of an ATI Mach64 for 2D video, a pair of 3dfx Voodoo2s in SLI, and an Aureal A3D sound card, or an SB32 with a WaveBlaster2 daughterboard.

        Those were, indeed, good times, though some of it was through the rosy glasses of nostalgia.

      • And for all this time, I have been hoping for a split, where the display card is decoupled from the acceleration card, and talking with an open bus standard.

        What do you think you would gain there? Not having to buy the video connectors repeatedly? They're a pretty small portion of the price.

        • by arth1 ( 260657 )

          What do you think you would gain there? Not having to buy the video connectors repeatedly? They're a pretty small portion of the price.

          Being able to add just the (and all the) connectors you need, and be able to get higher quality DA components if you want. Be able to not pay for accelerated 3d if you don't need it. Be able to pay for better accelerated 3d if you need it. Have completely independent video cards for different functions. Be able to run a game on one display and my e-mail on another, simultaneously, because I get back the true multihead support that the young whippersnappers ripped out from Linux in the early 2000s. Ru

          • Being able to add just the (and all the) connectors you need, and be able to get higher quality DA components if you want.

            Not that anyone uses analog output any more, but they tend to have a wicked high-speed RAMDAC on there for that minuscule portion of the market still using CRTs now that we have things like LCDs with adaptive sync.

            Be able to not pay for accelerated 3d if you don't need it.

            It's a tiny portion of the price at the low end.

            Be able to pay for better accelerated 3d if you need it.

            You can already do that.

            Have completely independent video cards for different functions.

            You can do that, too! You can even install an Nvidia card just for PhysX, and do graphics on an AMD card! Or you can use a card just for GPGPU.

            Be able to run a game on one display and my e-mail on another, simultaneously, because I get back the true multihead support that the young whippersnappers ripped out from Linux in the early 2000s.

            I'm able to do that on Windows, heh.

            An "everything but the kitchen sink" approach is always going to be a jack of all trades, and master of none. I don't put up with it for audio, so why should I for video? Choice is good, and discrete components offer that.

            Most of us are using "integrated" audio and hav

            • The idea is to have video output flow arbitrarily. Imagine adding more outputs on a card for your integrated graphics without needing to go dual-GPU; or, on the contrary, having two different GPUs, one for Windows and one for Linux, but just one set of outputs. So that you don't need to fiddle with a KVM switch, dual-input monitors and their menus, be stuck with one or two Linux monitors next to one Windows monitor (all fixed), etc.

              There are some existing technologies that do something like that but in specifi

      • And for all this time, I have been hoping for a split, where the display card is decoupled from the acceleration card, and talking with an open bus standard.

        I'm not sure if this is economically feasible, but it sure is a nice idea. A lot of my GPU usage is spent on rendering and computing, not just direct display, and I hate the idea of paying extra for components I never use. OTOH, every mechanical connector comes with a lot of overhead, not to mention potential for wear and damage. The first integrated circuits were conceived to avoid solder/connector issues, not so much miniaturization.

        And I would also like to see a return to analog video output. No pixels - that's a property of the software and not the rendering medium. Higher-quality analog can display higher fidelity.

        It's a somewhat interesting idea, especially considering the audio analo

        • For that matter, make a 16/10 21" CRT with HDMI etc. inputs and I'd be very interested. With 1920x1200 85Hz for some stuff, 1440x900 120Hz for other stuff (like most casual desktop usage), 1920x1080 60Hz works if you plug in random crap, 1024x768 or 1280x1024 with black bars for the odd thing, etc.

          It would be trivial technology-wise (except for concerns of "lost technology") and there would be something of a market already with nostalgia gamers, fringe LCD haters and old fucks, what have you. But in th

          • But in these times, you can't have anything different. Even with LCD monitors, there's more choice lately but you can't get a monitor that's 16/10 and high refresh, or 16/10 and big, or all three at once. (nor even a 27" 1080p at 144Hz)

            It's silly that the HD video/movie craze forced computer users to the same widescreen format, as if computers were all about watching movies. I recently got a couple of 1280x1024s for next to nothing, as my math exhibitions work best in near-square formats. OTOH, 16:9 is nice for a stage backdrop projection.

      • Analog video output would only help if there were also a return to analog video display devices. I don't see CRTs or any other analog display device returning in the foreseeable future. Digital output is better for controlling displays that are inherently digital, such as LCD and OLED panels and DLP projectors.
    • As it so happens, the demands of VR headsets mean that video cards available now are nowhere NEAR adequate. As a poster south of me says, you need at LEAST dual 4k - one for each eye - and fovea tracking - and at LEAST 90 FPS. All the time. With minimal latency.

      Believe it or not, not even the most expensive GPU money can buy - heck, not even the unreleased GPUs that Nvidia has in Tesla cards (they are "released" but you can't use them as graphics cards) - is anywhere close to being able to push this kind
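A back-of-the-envelope look at the raw pixel throughput being asked for here, compared with an ordinary 1080p game at 60 fps (fill numbers only; it ignores latency, foveated rendering, and the fact that per-pixel shading cost is what really hurts):

```python
# Rough pixel-throughput comparison: dual 4K panels at 90 Hz (one per eye)
# versus a single 1080p display at 60 Hz. Raw pixel counts only.
vr_pixels_per_sec = 2 * 3840 * 2160 * 90     # two 4K panels at 90 fps
desktop_pixels_per_sec = 1920 * 1080 * 60    # one 1080p panel at 60 fps

print(f"Dual 4K @ 90 Hz : {vr_pixels_per_sec / 1e9:.2f} Gpix/s")
print(f"1080p @ 60 Hz   : {desktop_pixels_per_sec / 1e6:.0f} Mpix/s")
print(f"Ratio           : {vr_pixels_per_sec / desktop_pixels_per_sec:.0f}x")
```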

    • Sound cards, at their core, are just creating analog waveforms from a digital source. This is a well-understood, mature technology, so there's not much to do there except reducing distortion and improving SNR.

      GPUs however, still have a scale issue - simplistically, the more pixels you drive, the more horsepower you need in the GPU. If we would have stayed at 1024x768 then the GPUs we have today would be massive overkill. But we didn't - a 4k display has more pixels than 10 1024x768 displays, and we're d
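A quick check of the pixel arithmetic in the claim above, using 3840x2160 for "4K" (an assumption about which 4K standard is meant):

```python
# Pixels in a 3840x2160 panel versus a 1024x768 panel.
uhd_pixels = 3840 * 2160     # 8,294,400
svga_pixels = 1024 * 768     #   786,432
print(f"4K / 1024x768 ≈ {uhd_pixels / svga_pixels:.1f}x")   # ≈ 10.5x
```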

    • Not for a while yet. Even when the video resolution has reached "good enough for everybody", there's the next push: hardware accelerated physics. At the moment physics models are necessarily crude, which is why things never quite behave the way you think they ought to. Physics is amenable to massively parallel scaling, just as graphics has been. When we can model physics at the macro-atomic level, video cards will be done.
  • by Eric Hiller ( 4651513 ) on Tuesday July 19, 2016 @11:39AM (#52541419)
    Initial benchmarks look lower than the RX 480's and the price higher [with actual retail availability no better]. When you add the DPC latency issues, I wouldn't touch it with a 10-foot pole. I actually have a GTX 1070 I'm sending back now just because I don't want to deal with possible Nvidia DPC hell.
  • The GTX 1080 and 1070 have been consistently out of stock.
  • Well, it doesn't look too good for AMD. Their "super efficient" RX 480 uses much more power than the 1060 and is slower.
    On the bright side, the price of the 480 is only $200 (well, eventually it will be ;) ) and also AMD's version of async compute works far better than Pascal's (see: http://wccftech.com/nvidia-gef... [wccftech.com] and http://www.eurogamer.net/artic... [eurogamer.net] )

    • The 1060 is faster than the 480 in old games. However, newer games will use technologies like DX12 asynchronous compute and Vulkan. Ashes of the Singularity and Hitman are good examples of the former, and the Vulkan build of Doom is a good example of the latter.

      The 480 is faster than the 1060 in those 3 games. Doom/Vulkan is a *lot* faster on the 480.

      • I think you'll find though that the 480 is only faster if the game makes use of async compute.
        Having said that, async compute is shaping up to be a very important feature.

  • Linus Tech Tips did a video review (YouTube Link [youtube.com]) and it sounds like the RX 480 is a much better value. I'm surprised, but pleased, to see AMD doing well. It would be nice if Nvidia had some competition at all price points.
