
Nvidia CEO "Not Afraid" of CPU-GPU Hybrids

J. Dzhugashvili writes "Is Nvidia worried about the advent of both CPUs with graphics processor cores and Larrabee, Intel's future discrete graphics processor? Judging by the tone adopted by Nvidia's CEO during a financial analyst conference yesterday, not quite. Huang believes CPU-GPU hybrids will be no different from (and just as slow as) today's integrated graphics chipsets, and he thinks people will still pay for faster Nvidia GPUs. Regarding Larrabee, Huang says Nvidia is going to 'open a can of whoop-ass' on Intel, and that Intel's strategy of reinventing the wheel by ignoring years of graphics architecture R&D is fundamentally flawed. Nvidia also has some new hotness in the pipeline, such as its APX 2500 system-on-a-chip for handhelds and a new platform for VIA processors."
  • by Anonymous Coward on Friday April 11, 2008 @04:11PM (#23040450)
    CPU and GPU integration is a quite logical progression of technology. There are things the GPU is not optimal for, and the same goes for the CPU. It seems that when combined, they prove successful.

    As a side note, maybe we'll see an Nvidia GPU-based Folding@home release some day, but at least ATI's latest GPUs have a new client to play with:
    http://folding.typepad.com/news/2008/04/gpu2-open-beta.html [typepad.com]
  • Multi Core GPUs (Score:2, Interesting)

    by alterami ( 267758 ) on Friday April 11, 2008 @04:14PM (#23040490)
    What AMD should really try to do is start combining their CPU technology and their graphics technology and make some multi-core GPUs. They might be better positioned to do this than Intel or Nvidia.
  • He should be afraid (Score:5, Interesting)

    by Yvan256 ( 722131 ) on Friday April 11, 2008 @04:22PM (#23040608) Homepage Journal
    I, for one, don't want a GPU which requires 25W+ in standby mode.

    My Mac mini has a maximum load of 110W. That's the Core 2 Duo CPU, the integrated GMA950, 3GB of RAM, a 2.5" drive and a DVD burner, not to mention FireWire 400 and four USB 2.0 ports under maximum load (the FW400 port being 8W alone).

    Granted, the GMA950 sucks compared to nVidia's current offerings, but do they have any plans for low-power GPUs? I'm pretty sure the whole company can't survive on FPS-crazed gamers' revenues alone.

    They should start thinking about asking Intel to integrate their (current) laptop GPUs into Intel CPUs.
  • by scumdamn ( 82357 ) on Friday April 11, 2008 @04:38PM (#23040850)
    Intel is and always has been CPU-centric. That's all they ever seem to focus on because it's what they do best. Nvidia is focusing 100% on GPUs because it's what they do best. AMD seems to have it right with their combination of the two (by necessity) because they're focusing on a mix between the two. I'm seriously stoked about the 780G chipset they rolled out this month because it's an integrated chipset that doesn't suck and actually speeds up an ATI video card if you add the right one. Granted, AMD isn't the fastest when it comes to either graphics or processors, but at least they have a platform with a chipset, CPU, and graphics that work together. Chipsets have needed to be a bit more powerful for a long-ass time.
  • Can of Whoop Ass?? (Score:3, Interesting)

    by TomRC ( 231027 ) on Friday April 11, 2008 @04:56PM (#23041108)
    Granted, NVidia is way out ahead in graphics performance - but generally you can tell someone is getting nervous when they start in with the belligerent bragging.

    The risk for NVidia isn't that Intel will surpass them, or even necessarily approach their best performance. The risk is that Intel might start catching up, cutting (further) into NVidia's market share.
    AMD's acquisition of ATI seems to imply that they see tight integration of graphics as at least cheaper for a given level of performance, or higher performance for a given price. Apply that same reasoning to Intel, since they certainly aren't likely to let AMD have that advantage all to themselves.

    Now try to apply that logic to NVidia - what are they going to do, merge with a distant-last-place x86 maker?
  • by caerwyn ( 38056 ) on Friday April 11, 2008 @04:59PM (#23041144)
    This is true to some extent, but raster will never completely go away - there are situations where it is entirely appropriate.

    For instance, modern GUIs often use the 3d hardware to handle window transforms, blending and placement. These are fundamentally polygonal objects for which triangle transformation and rasterization is a perfectly appropriate tool and ray tracing would be silly.

    The current polygon model will never vanish completely, even if high-end graphics eventually go to ray tracing instead.
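    To make that concrete, here is a minimal sketch (Python/NumPy; every name here is invented for illustration and this is not any real compositor's code) of a window treated as two transformed triangles, which is roughly the kind of geometry a compositing window manager hands to the rasterizer:

        import numpy as np

        def window_quad(x, y, w, h):
            """Two triangles (6 vertices) covering a window rectangle."""
            tl, tr = (x, y), (x + w, y)
            bl, br = (x, y + h), (x + w, y + h)
            return np.array([tl, tr, bl,    # first triangle
                             tr, br, bl],   # second triangle
                            dtype=np.float32)

        def transform(verts, scale=1.0, dx=0.0, dy=0.0):
            """The sort of scale/translate a compositor applies for zoom or placement effects."""
            return verts * scale + np.array([dx, dy], dtype=np.float32)

        verts = window_quad(100, 100, 640, 480)
        print(transform(verts, scale=0.5, dx=20.0, dy=20.0))

    Ray tracing that same flat, screen-aligned rectangle would mean intersecting a ray per pixel against it, which buys nothing over simply rasterizing the two triangles.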
  • Re:Let's Face It (Score:3, Interesting)

    by LurkerXXX ( 667952 ) on Friday April 11, 2008 @05:05PM (#23041214)
    Intel has open specs on their integrated video hardware, so Open Source folks can write their own stable drivers.

    ATI and Nvidia do not. I know who I'm rooting for to come up with good hardware...
  • Just like the FPU (Score:5, Interesting)

    by spitzak ( 4019 ) on Friday April 11, 2008 @05:22PM (#23041360) Homepage
    Once upon a time, floating point was done on a separate chip. You could buy a cheaper "non-professional" machine that emulated the FPU in software and ran slower. You could also upgrade your machine by adding the FPU chip.

    Such separate FPUs do not exist today.

    I think Nvidia should be worried about this.

  • The problem... (Score:3, Interesting)

    by AdamReyher ( 862525 ) * <adamNO@SPAMpylonhosting.com> on Friday April 11, 2008 @05:22PM (#23041370) Homepage
    ...is that Nvidia is saying that Intel is ignoring years of GPU development. Umm, wait. Isn't a GPU basically a mini-computer/CPU by itself that exclusively handles graphics calculations? By making this statement, I think they've forgotten who Intel is. Intel has more than enough experience in the field to go off on their own and make GPUs. Is it something to be scared of? Probably not, because as he correctly points out, a dedicated GPU will be more powerful. However, it's not something that can be ignored. We'll just have to wait and see.
  • The results are not as realistic with raster. Shadows don't look right.
    As John Carmack mentioned in a recent interview, this is in fact a bonus, for shadows as well as other things.

    The fact is that "artificial" raster shadows, lighting and reflections typically look more impressive than the "more realistic" results of ray tracing. This alone explains why raster will maintain its dominance, and why ray tracing will not catch on.
  • by wattrlz ( 1162603 ) on Friday April 11, 2008 @05:54PM (#23041654)
    There must be a conspiracy behind that. There's no way big-budget studios with seven and eight figure budgets and virtually limitless CPU cycles at their disposal could be releasing big-screen features that are regularly shown up by video games and decade-old TV movies. Maybe it has something to do with greenscreening to meld the CGI with live-action characters, perhaps it's some sort of nostalgia, or they think that the general public just isn't ready to see movie-length photo-realistic features, but there's no way digital animation hasn't progressed in the past ten or twenty years.
  • by Anonymous Coward on Friday April 11, 2008 @05:58PM (#23041684)

    Worse than that, people like me would be quite happy using our 4MB ISA graphics cards, if some sod hadn't gone and invented PCI.

    Are you sure about that?

    4MB is not enough to store 1280x1024 at 32bpp. I also believe that extra video card memory can be used in 2D to store extra bitmaps.

    ISA also has a bandwidth of under 4 MB/s, which is not enough for 320x240 16bpp 30fps video.

    If you want to talk about those old graphics cards, try turning off all 2D acceleration and see how smooth moving windows and scrolling is. That's why they did window outlines.
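    As a rough sanity check of those numbers (plain arithmetic, sketched in Python; the ISA bandwidth figure is as quoted above and isn't verified here):

        # Back-of-the-envelope numbers for the claims above.
        fb_bytes = 1280 * 1024 * 4                 # 1280x1024 framebuffer at 32bpp
        print(fb_bytes / 2**20)                    # 5.0 MiB -- does not fit in a 4MB card

        video_bytes_per_sec = 320 * 240 * 2 * 30   # 320x240, 16bpp, 30 frames per second
        print(video_bytes_per_sec / 10**6)         # ~4.6 MB/s of raw pixel data alone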

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Friday April 11, 2008 @06:45PM (#23042100) Homepage
    Having the GPU built into the CPU is primarily a cost-cutting measure. Take one low-end CPU, add one low-end GPU, and you have a single-chip solution that consumes a bit less power than separate components.

    Nobody expects the CPU+GPU to yield gaming performance worth a damn, because the two big companies that are looking into this amalgam both have underperforming graphics technology. Do they both make excellent budget solutions? Yes they certainly do, but for those who crave extreme speed, the only option is NVidia.

    That said, not everyone plays shooters. Back in my retail days, I'd say I moved 50 times more bottom-end GPUs than top-end ones. Those Radeon 9250s were $29.99 piles of alien poop, but cheap poop is enough for the average norm. The only people who spent more than $100 on a video card were teenagers and comic book guys (and of course, my awesome self).
  • by Futurepower(R) ( 558542 ) on Friday April 11, 2008 @06:51PM (#23042154) Homepage
    The problem is not that Nvidia CEO Jen-Hsun Huang made one stupid statement. The problem is that he said many foolish things, indicating that he is not a good CEO. Here are some:

    Quote from the article: "Nvidia CEO Jen-Hsun Huang was quite vocal on those fronts, arguing hybrid chips that mix microprocessor and graphics processor cores will be no different from systems that include Intel or AMD integrated graphics today."

    My opinion: There would be no need for all the talk if there were no chance of competition. Everyone knows there will be new competition from Intel's Larrabee and AMD/ATI. Everyone knows that "no different" is a lie. Lying exposes the Nvidia CEO as a weak man.

    "... he explained that Nvidia is continuously reinventing itself and that it will be two architectural refreshes beyond the current generation of chips before Larrabee launches."

    The entire issue is that Intel+Larrabee and AMD+ATI will make Nvidia irrelevant for most users. The GPU will be on the motherboard. Nvidia will sell only to gamers who are willing to pay extra, a lot extra.

    "Huang also raised the prospect of application and API-level compatibility problems with Larrabee. Intel has said Larrabee will support the DirectX 10 and OpenGL application programming interfaces just like current AMD and Nvidia GPUs, but Huang seemed dubious Intel could deliver on that front."

    Intel, in this case, is Intel and Microsoft working together. Both are poorly managed companies in many ways, but they are both managed well enough to ensure that the Microsoft product works with the Intel hardware. Sure, it is an easy guess that Microsoft will release several buggy versions, because Microsoft has a history of treating its customers as though they were beta testers, but eventually everything will work correctly.

    '[NVidia VP] Tamasi went on to shoot down Intel's emphasis on ray tracing, which the chipmaker has called "the future for games." '

    Ray tracing is certainly the future for games; there is no question about that. The question is when, because the processor power required is huge. It's my guess, but an easy guess, that Mr. Tamasi is lying; he is apparently trying to take advantage of the ignorance of financial analysts.

    "Additionally, Tamasi believes rasterization is inherently more scalable than ray tracing. He said running a ray tracer on a cell phone is "hard to conceive."

    This is apparently another attempt to confuse the financial analysts, who often have only a pretend interest in technical things. Anyone understanding the statement knows it is nonsense. No one is suggesting that there will be ray-tracing on cell phones. My opinion is that this is another lie.

    "We're gonna be highly focused on bringing a great experience to people who care about it," he explained, adding that Nvidia hardware simply isn't for everyone."

    That was a foolish thing to say. That's the whole issue! In the future, Nvidia's sales will drop because "Nvidia hardware simply isn't for everyone." Most computers will not have separate video adapters, whereas they did before. Only powerful game machines will need to buy from Nvidia.

    'Huang added, "I would build CPUs if I could change the world [in doing so]." ' Later in the article, it says, "Nvidia is readying a platform to accompany VIA's next-generation Isaiah processor, which should fight it out with Intel's Atom in the low-cost notebook and desktop arena"

    Translation: Before, every desktop computer needed a video adapter, which came from a company different than the CPU maker, a company like Nvidia. Now, the video adapters will be mostly supplied by CPU makers. In response, Nvidia will start making low-end CPUs. It is questionable whether Nvidia can compete with Intel and AMD making any kind of CPU.
  • Re:Multi Core GPUs (Score:3, Interesting)

    by et764 ( 837202 ) on Friday April 11, 2008 @07:17PM (#23042384)
    Why does your CPU need vector operations if you have a vector of CPUs?
  • by Anonymous Coward on Friday April 11, 2008 @11:12PM (#23043824)
    The amount of hackery needed to get shadows to work right and look "impressive" with rasterization approaches is too high for it to persist. The methods are hacks, it's difficult to make it work right and fast, and requires a lot of collaboration with artists to ensure that it looks good. Plus, it's not very scalable - every time you want more realism, you have to dig into shaders and figure out how to cheat and make it look realistic even though the underlying math is not. Of course, the results are worth the effort, which is why game studios spend such a disproportionate amount of time (both programmer and processor) on graphics.

    Raytracing, on the other hand, requires no hacking whatsoever to produce the same quality of results as the totally massaged rasterized scene, just a little tweaking. I've seen scenes that I could not pick out as fake rendered over ten years ago with POV-Ray; the newer versions do even better. I don't know where you've seen raytracing suck compared to rasterized images, as that doesn't match anything I've ever seen. Even if I take you at your word on that, though, it still doesn't matter. Once raytracing is able to run fast enough to get full screen resolution at 60 fps, it's going to be a lot harder to justify the immense design and effort that goes into new rasterization tricks when you can achieve the same evolution by inching up the quality knob (level of antialiasing, number of caustics rays, etc.) on the raytracer every time a new breed of processors comes out.

    Yes, rasterization might be faster for the same scene at the same level of quality. It might even continue to be faster forever, I don't know. But it's irrelevant - I'm sure a lot of people thought MIDI and MOD game music would never disappear back in the day, because there was so little space on a disk that you would never want to waste some of it on actual recorded music. The optimizations that help performance in the early years of a technology are always discarded once you reach a point where the "real thing" comes relatively cheap. Once you can simulate reality close enough that a casual observer can't tell the difference, optimizations will no longer be aimed at processor speed, but will focus on ease of creation, and raytracing has a massive edge there.

    As to why brilliant guys like Carmack see no future in raytracing? Simple - they are knee deep in the extremely difficult optimizations required to get tomorrow's results out of today's machines, and this closeness with hacking the guts of an imperfect system makes it really hard to imagine a day when that imperfect system is unnecessary. That's fine while today's results suck. But there will come a day when squeezing another factor of two out of your graphics card's polygon count won't help you because you're already close enough to reality that nobody cares anymore.

    That's when we turn to physics. And the rigid body experts will reign supreme, talking about how large scale molecular physical simulation will never overtake their methods coupled to special purpose soft-body solvers. And they'll be right for ten years, and then we'll have enough processing power that it doesn't make sense to make "stupid" simplifications like the rigid-body assumption, and eventually physics will be a solved problem as we start simulating what really happens as opposed to a high level approximation of it. God knows what we'll turn to after that...
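    To put the "no hacking" point in concrete terms: in a ray tracer, a hard shadow is literally one extra visibility test per light, with no shadow maps or stencil volumes. A toy sketch (Python, sphere-only scene; every name and number is invented purely for illustration):

        import math

        def hit_sphere(origin, direction, center, radius):
            """Distance to the nearest sphere intersection in front of the ray, or None."""
            oc = [o - c for o, c in zip(origin, center)]
            a = sum(d * d for d in direction)
            b = 2.0 * sum(o * d for o, d in zip(oc, direction))
            c = sum(o * o for o in oc) - radius * radius
            disc = b * b - 4 * a * c
            if disc < 0:
                return None
            for t in ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)):
                if t > 1e-4:               # ignore hits behind (or at) the ray origin
                    return t
            return None

        def in_shadow(point, light, spheres):
            """Shadow test: cast one ray from the shaded point toward the light."""
            to_light = [l - p for l, p in zip(light, point)]
            dist = math.sqrt(sum(d * d for d in to_light))
            direction = [d / dist for d in to_light]
            for center, radius in spheres:
                t = hit_sphere(point, direction, center, radius)
                if t is not None and t < dist:
                    return True            # something sits between the point and the light
            return False

        spheres = [((0.0, 0.0, -5.0), 1.0), ((1.2, 1.2, -3.0), 0.4)]
        light = (5.0, 5.0, 0.0)
        point = (0.0, 0.0, -4.0)           # a point on the big sphere's surface
        print(in_shadow(point, light, spheres))   # True: the small sphere blocks the light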
  • by CompMD ( 522020 ) on Saturday April 12, 2008 @03:38AM (#23044824)
    The large corporations and engineering companies that have *THOUSANDS* of high-end workstations need graphics hardware compatible with complex, specialized software. I'm talking Unigraphics, CATIA, Patran, Femap, etc. You need to use the hardware certified by the software publisher otherwise you don't get support and you can't trust the work you are doing to be correct. And the vast majority of the cards that are up to the challenge are nvidia cards.

    I have done CAD/CAM for ages, and my P3-750 with a Quadro4 700XGL isn't noticeably slower than a P4-3.4 with a Radeon X300SE running Unigraphics NX 5. I have a P3-500 with a QuadroFX-1000 card that freaking flies running CATIA V5. Again, in contrast, my 1.8GHz P4 laptop with integrated Intel graphics sucks balls running either UG or CATIA.

    Speaking for the workstation users out there, please keep making high performance GPUs, Nvidia.

