Graphics Hardware

NVIDIA Predicts 570x GPU Performance Boost

Gianna Borgnine writes "NVIDIA is predicting that GPU performance is going to increase a whopping 570-fold in the next six years. According to TG Daily, NVIDIA CEO Jen-Hsun Huang made the prediction at this year's Hot Chips symposium. Huang claimed that while the performance of GPU silicon is heading for a monumental increase in the next six years — making it 570 times faster than the products available today — CPU technology will find itself lagging behind, increasing to a mere 3 times current performance levels. 'Huang also discussed a number of "real-world" GPU applications, including energy exploration, interactive ray tracing and CGI simulations.'"

  • Re:haha yeah right (Score:4, Interesting)

    by 4D6963 ( 933028 ) on Friday August 28, 2009 @05:25PM (#29236349)

    Intel said 4 nm for 2022; that's in 13 years. What precisely allows you to doubt that claim, other than the fact that deadlines are often missed? Let me rephrase that: what makes you think it'll be reached much later than anything else?

    Also, cue a dozen-plus posts explaining to the armchair pundits how 570x is possible.

  • by javaman235 ( 461502 ) on Friday August 28, 2009 @05:36PM (#29236513)

    It's easy to get a 570x increase with parallel cores: you'll just have a GPU that is 570 times bigger, costs 570 times more, and consumes 570 times more energy. As far as any kind of real breakthrough goes, though, I'm not seeing it from the information at hand.

    There is something worth noting in all this, though: the new way of doing business is massive parallelism. We've all known this was coming for a long time, but it's officially here.

  • by BikeHelmet ( 1437881 ) on Friday August 28, 2009 @06:07PM (#29236911) Journal

    Or... not.

    Currently, CPUs and GPUs are stamped together. Basically, they take a bunch of pre-made blocks of transistors (millions of blocks, billions of transistors in a GPU), etch those into the silicon, and out comes a working GPU.

    It's easy - relatively speaking - and doesn't require a huge amount of redesign between generations. When you get a certain combination working, you improve (shrink) your nanometre process and add more blocks.

    However, compiler technology has advanced a lot recently, and with the vast amounts of processing power now available, it should be simpler to keep more complex blocks fully utilized. A vastly more complex block, with interconnects to many other blocks, could perform better at a swath of different tasks. This is evident when comparing the performance hit from anti-aliasing: previously even 2xAA had a huge performance hit, but NVIDIA altered their designs, and now multisampling AA is basically free.

    I recall seeing an article about a new kind of shadowing that was going to be used in DX11 games. The card used for the review got almost 200fps at high settings - with AA enabled that dropped to about 60fps, and with the new shadowing enabled, it dropped to about 20fps. It appears the hardware needs a redesign to be more optimized for whatever algorithm it uses!

    Two other factors you're forgetting...

    1) 3D CPU/GPU designs are slowly coming, where the transistors aren't confined to a single 2D plane; that would allow vastly denser CPUs and GPUs. If a processor had minimal leakage and low power consumption, 500x more transistors wouldn't be a stretch.

    2) Performance claims are merely claims. Intel claims a quad-core gives 4x more performance, but in many cases it's slower than a faster dual-core.

    570x faster for every game? Doubtful. 570x faster at the most advanced rendering techniques being designed today, with AA and other memory-bandwidth hammering features ramped to the max? Might be accurate. A high end GPU from 6 years ago probably won't get 1fps on a modern game, so this estimate might even be low.

    A claim of 250x the framerate in Crysis, with everything ramped to the absolute maximum, might even be accurate.

    But general performance claims are almost never true.

  • by Anonymous Coward on Friday August 28, 2009 @06:16PM (#29237031)

    I do a number of molecular dynamics simulations myself, and while computational science on GPUs has been intriguing, for my purposes it's been hampered by the lack of double precision. That may never come, since double precision isn't necessary for actual graphics, but if NVIDIA wants to market to a big cadre of computational scientists, it's what this community would need.
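
    As a purely illustrative aside (nothing here comes from the poster's code or any real MD package): the kind of kernel at issue is a pairwise interaction loop whose accumulator really wants to live in double precision. A toy sketch, assuming a one-dimensional Lennard-Jones system with invented names:

```cuda
// Toy sketch only: a 1-D Lennard-Jones energy kernel with an invented layout.
// The point is the double-precision accumulator; compile for sm_13 or later
// so the doubles are computed in hardware rather than demoted to float.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void pair_energy(const double *x, double *energy, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    double e = 0.0;                          // double-precision accumulator
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        double r  = fabs(x[i] - x[j]);       // 1-D toy system, so |dx| is the distance
        double r6 = 1.0 / (r * r * r * r * r * r);
        e += 4.0 * (r6 * r6 - r6);           // LJ with sigma = epsilon = 1
    }
    energy[i] = 0.5 * e;                     // halve to avoid double-counting pairs
}

int main()
{
    const int n = 4096;
    const size_t bytes = n * sizeof(double);

    double *h_x = new double[n], *h_e = new double[n];
    for (int i = 0; i < n; ++i) h_x[i] = 1.1 * i;   // particles on a line

    double *d_x, *d_e;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_e, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);

    pair_energy<<<(n + 255) / 256, 256>>>(d_x, d_e, n);
    cudaMemcpy(h_e, d_e, bytes, cudaMemcpyDeviceToHost);

    double total = 0.0;
    for (int i = 0; i < n; ++i) total += h_e[i];
    printf("total LJ energy: %g\n", total);

    cudaFree(d_x); cudaFree(d_e);
    delete[] h_x; delete[] h_e;
    return 0;
}
```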

  • by Entropius ( 188861 ) on Friday August 28, 2009 @06:24PM (#29237101)

    You can -- that's what people are trying now. The issue is that in order for the GPUs to communicate, they've got to go over the PCI Express bus to the motherboard, and then via whatever interconnect you use from one motherboard to another.

    I don't know all the details, but the people who have studied this say that PCI Express (or, more specifically, the PCI Express-to-InfiniBand connection) is a serious bottleneck.
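
    To make that path concrete, here is a rough sketch (buffer names and sizes invented, not taken from any real code) of how a chunk of boundary data gets from a GPU on one node to a GPU on another with CUDA plus MPI: across PCI Express into host memory, over the interconnect, and across PCI Express again on the far side.

```cuda
// Rough sketch of the multi-hop path, with invented buffer names and sizes.
// Build with nvcc plus your MPI wrapper and run with at least two ranks.
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                   // one million doubles of "halo" data
    const size_t bytes = n * sizeof(double);

    double *d_halo, *h_halo;
    cudaMalloc((void **)&d_halo, bytes);
    cudaMallocHost((void **)&h_halo, bytes); // pinned host staging buffer
    cudaMemset(d_halo, 0, bytes);

    if (rank == 0) {
        // Hop 1: GPU -> host memory, over PCI Express.
        cudaMemcpy(h_halo, d_halo, bytes, cudaMemcpyDeviceToHost);
        // Hop 2: host -> remote host, over the cluster interconnect (e.g. InfiniBand).
        MPI_Send(h_halo, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(h_halo, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        // Hop 3: host -> GPU on the receiving node, over PCI Express again.
        cudaMemcpy(d_halo, h_halo, bytes, cudaMemcpyHostToDevice);
    }

    cudaFreeHost(h_halo);
    cudaFree(d_halo);
    MPI_Finalize();
    return 0;
}
```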

  • Re:In other news... (Score:4, Interesting)

    by TeXMaster ( 593524 ) on Friday August 28, 2009 @06:24PM (#29237109)

    In other news, ATI is selling their 4870-series cards for $130 on Newegg, which are twice as fast as an Nvidia 9800GTS at the same price (at least in Left 4 Dead, Call of Duty, and any other game that matters). ATI is blowing Nvidia out of the water in terms of performance per dollar and will continue to do so through at least the middle of next year. See here:

    http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/benchmarks,62.html [tomshardware.com]

    Yeah, I'd be making outrageous statements too if I were Nvidia.

    Even when it comes to GPGPU (general-purpose computing on the GPU), ATI's hardware is much better than NVIDIA's. However, the programming interfaces for ATI suck big time, whereas NVIDIA's CUDA is much more comfortable to code for, and it has an extensive range of documentation and examples that give developers everything they need to improve their NVIDIA GPGPU programming. It also has much more aggressive marketing.

    As a sad result, NVIDIA is often the platform of choice for GPU-based HPC, despite having inferior hardware. And I doubt OpenCL is going to fix this, since it basically standardizes only the low-level API, leaving NVIDIA with its superior high-level API.
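
    For a sense of what "comfortable to code for" means in practice, here is a minimal, self-contained CUDA example -- a generic SAXPY invented for illustration, not taken from any SDK sample -- showing that the high-level API is essentially ordinary C with a kernel launch:

```cuda
// Minimal CUDA example, invented for illustration: a SAXPY kernel plus the
// handful of runtime calls needed to move data and launch it.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];              // each thread handles one element
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_x = new float[n], *h_y = new float[n];
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, d_x, d_y);

    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 5.0)\n", h_y[0]);

    cudaFree(d_x); cudaFree(d_y);
    delete[] h_x; delete[] h_y;
    return 0;
}
```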

  • % VS Times (Score:5, Interesting)

    by AmigaHeretic ( 991368 ) on Friday August 28, 2009 @06:38PM (#29237269) Journal
    I'm sure this is just another case of some moron seeing a 570% increase and going, "Wow! My next GPU will be 570 TIMES faster!"

    For the rest of us, of course, a 570% increase is 5.7x faster.

    So CPUs increasing 3x in the next six years and GPUs increasing 5.7x, that I can maybe believe.

  • Re:In other news... (Score:4, Interesting)

    by JustNiz ( 692889 ) on Friday August 28, 2009 @06:59PM (#29237509)

    >> provides features you can only appreciate on a 120hz display

    Well, that's a new one. There's not even slight technical merit to that statement, but it certainly demonstrates the amusing creativity of ATI fanbois.

    >> The 9800GT and 8800GT are the same price and the ATI card blows it out of the water

    I have no argument that you should go with ATI if you're Windows-only and looking at cheaper-end cards.

    It's totally irrelevant to me, though, as I go for the best overall performance and decent drivers, and only consider cards whose drivers work well with Linux. ATI sucks on all counts in my areas of interest.

  • by Anonymous Coward on Friday August 28, 2009 @08:08PM (#29238115)

    Have you considered breaking the workload up into single time-slices? In other words, the first batch is all lattice points of the form (x,y,z,0), the second is (x,y,z,1), etc. I may be misremembering what little I know about lattice QCD -- I worked on an MPI lattice QCD simulation briefly as a grad student about a million years ago -- but I believe that in doing so you can effectively "double buffer" it. In short, you allocate two time-slices' worth of memory on the GPU, and alternate which one is the active time-slice and which one is the previous time-slice. And since GPU-to-host memory transactions can be done asynchronously, you can hide them behind the kernel execution time.

    Apologies if this seems too obvious - but I've been working with CUDA practically since day one and I've seen quite a few people make the mistake of assuming that a 3D problem should be broken into 3D, rather than 2D, sub-problems.
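
    A loose sketch of that double-buffering idea, with made-up names and a placeholder kernel rather than anything from a real lattice code: two buffer sets alternate roles, each time-slice is staged with asynchronous copies on its own stream, and the transfers for one slice overlap the kernel working on the other. (A real lattice update would also read the neighbouring slice; that detail is omitted here.)

```cuda
// Loose sketch of double buffering with CUDA streams, using invented names
// and a placeholder per-site update. Pinned host memory is required for the
// asynchronous copies to actually overlap kernel execution.
#include <cuda_runtime.h>

__global__ void process_slice(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * in[i];               // stand-in for the real per-site update
}

int main()
{
    const int slice = 1 << 18;               // sites per time-slice (illustrative)
    const int n_t   = 32;                    // number of time-slices
    const size_t bytes = slice * sizeof(float);

    float *h_in, *h_out;
    cudaMallocHost((void **)&h_in,  n_t * bytes);   // pinned host memory
    cudaMallocHost((void **)&h_out, n_t * bytes);
    for (size_t i = 0; i < (size_t)n_t * slice; ++i) h_in[i] = 1.0f;

    float *d_in[2], *d_out[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaMalloc((void **)&d_in[b],  bytes);
        cudaMalloc((void **)&d_out[b], bytes);
        cudaStreamCreate(&stream[b]);
    }

    for (int t = 0; t < n_t; ++t) {
        int b = t % 2;                       // alternate between the two buffer sets
        // Upload slice t, process it, and copy the result back, all queued on
        // stream b; work already queued on the other stream overlaps with this.
        cudaMemcpyAsync(d_in[b], h_in + (size_t)t * slice, bytes,
                        cudaMemcpyHostToDevice, stream[b]);
        process_slice<<<(slice + 255) / 256, 256, 0, stream[b]>>>(d_in[b], d_out[b], slice);
        cudaMemcpyAsync(h_out + (size_t)t * slice, d_out[b], bytes,
                        cudaMemcpyDeviceToHost, stream[b]);
    }
    cudaDeviceSynchronize();

    for (int b = 0; b < 2; ++b) {
        cudaFree(d_in[b]); cudaFree(d_out[b]); cudaStreamDestroy(stream[b]);
    }
    cudaFreeHost(h_in); cudaFreeHost(h_out);
    return 0;
}
```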
