
Nvidia Firmly Denies Plans To Build a CPU 123

Barence writes "A senior vice president of Nvidia has denied rumours that the company is planning an entry into the x86 CPU market. Speaking to PC Pro, Chris Malachowsky, a co-founder and senior vice president, was unequivocal. 'That's not our business,' he insisted. 'It's not our business to build a CPU. We're a visual computing company, and I think the reason we've survived the other 35 companies who were making graphics at the start is that we've stayed focused.' He also pointed out that such a move would expose the company to fierce competition. 'Are we likely to build a CPU and take out Intel?' he asked. 'I don't think so, given their thirty-year head start and billions and billions of dollars invested in it. I think staying focused is our best strategy.' He was also dismissive of the threat from Intel's Larrabee architecture, after Nvidia's chief architect called it a 'GPU from 2006' at the weekend."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Anyone Surprised? (Score:5, Interesting)

    by Underfoot ( 1344699 ) on Wednesday August 27, 2008 @10:42AM (#24765335)

    Is anyone actually surprised that the CEO is denying this? Even if the rumors were true, letting news out to market about it would give Intel time to prepare a response (and legal action).

  • Reprogrammable GPU? (Score:5, Interesting)

    by Wills ( 242929 ) on Wednesday August 27, 2008 @10:44AM (#24765361)
    When hell freezes over, they could release a GPU where the instruction set is itself microprogrammable with open-source design, and then end users could decide whether they want to load the GPU's microcode with an x86 instruction set, a dsp set, or whatever.
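    The idea above can be sketched in a few lines: a toy interpreter whose "instruction set" is nothing more than a loadable table, mimicking the notion of flashing a GPU with different microcode. This is purely illustrative (all names are made up), not a claim about how any real GPU microcode works.

    ```python
    # Toy "microcoded" machine: the instruction set is a swappable table,
    # so the same engine can be loaded with an entirely different ISA.

    def make_machine(microcode):
        """Return an interpreter bound to a given opcode -> handler table."""
        def run(program, stack=None):
            stack = stack if stack is not None else []
            for op, arg in program:
                microcode[op](stack, arg)
            return stack
        return run

    # One possible "instruction set": a tiny stack machine.
    STACK_ISA = {
        "push": lambda s, a: s.append(a),
        "add":  lambda s, a: s.append(s.pop() + s.pop()),
        "mul":  lambda s, a: s.append(s.pop() * s.pop()),
    }

    run = make_machine(STACK_ISA)
    print(run([("push", 2), ("push", 3), ("add", None),
               ("push", 4), ("mul", None)]))  # -> [20]
    ```

    Loading a different table (a DSP-flavoured one, say) would give the same engine a different instruction set, which is the versatility the parent is describing.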
  • x86 rumors origin ? (Score:4, Interesting)

    by DrYak ( 748999 ) on Wednesday August 27, 2008 @10:59AM (#24765617) Homepage

    Currently nVidia is partnering with VIA for small form factor x86 boxes, and they have made several presentations about a combination of (VIA's) x86-64 Isaiah and (their own) embedded GeForce.
    They have touted it as the first small form factor platform able to sustain Vista in all its DX10 and full Aero glory.

    Maybe that is where some journalist got mixed up, and where all this "nVidia is preparing an x86 chip" rumor began?

  • Re:Anyone Surprised? (Score:2, Interesting)

    by Hal_Porter ( 817932 ) on Wednesday August 27, 2008 @11:08AM (#24765747)

    Is anyone actually surprised that the CEO is denying this? Even if the rumors were true, letting news out to market about it would give Intel time to prepare a response (and legal action).

    The original story came from Charlie at The Inquirer. Charlie and NVidia hate each other.

  • Re:Difficult (Score:3, Interesting)

    by Wills ( 242929 ) on Wednesday August 27, 2008 @11:19AM (#24765931)
    I was aiming for the extreme reprogrammability and versatility that an open-source microcode CPU design with SIMD, RISC and CISC sections all on a single die would offer. Sure, the trade-off is that you don't get as much capability in each subsection (compared to the capabilities of a dedicated GPU, or a dedicated modern CPU) because the sub-sections all have to fit inside the same total area of silicon. But what you get instead is an open-source microcode CPU with great versatility, without needing to go down the FPGA design route (even more versatile, but less simple to use).
  • Re:Anyone Surprised? (Score:2, Interesting)

    by morgan_greywolf ( 835522 ) on Wednesday August 27, 2008 @11:21AM (#24765949) Homepage Journal

    The original story came from Charlie at The Inquirer. Charlie and NVidia hate each other.

    Possibly related to Charlie's vast holdings of AMD stock...

  • by Anonymous Coward on Wednesday August 27, 2008 @11:22AM (#24765967)

    Rewrite the software in place to run on a different architecture (whatever their latest GPUs implement). Maybe, just maybe GPUs have evolved to a point where interpreted generic-x86 wouldn't be (completely) horrible.
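    A hypothetical register-level interpreter for a few x86-style mnemonics makes the AC's point concrete, and also shows why "interpreted generic-x86" tends to be horrible: each guest instruction turns into a table lookup plus several host operations. Mnemonics and structure here are illustrative only.

    ```python
    # Toy interpreter for a handful of x86-style instructions.
    # Every guest instruction costs a dict lookup plus function calls --
    # the overhead that makes naive interpretation so slow.

    def interp(program):
        regs = {"eax": 0, "ebx": 0, "ecx": 0}
        ops = {
            # r.get(s, s): if src is a register name, read it; else it's an immediate.
            "mov":  lambda r, d, s: r.__setitem__(d, r.get(s, s)),
            "add":  lambda r, d, s: r.__setitem__(d, r[d] + r.get(s, s)),
            "imul": lambda r, d, s: r.__setitem__(d, r[d] * r.get(s, s)),
        }
        for op, dst, src in program:
            ops[op](regs, dst, src)
        return regs

    regs = interp([("mov", "eax", 6), ("mov", "ebx", 7), ("imul", "eax", "ebx")])
    print(regs["eax"])  # -> 42
    ```

    Real-world schemes (QEMU, binary translators) avoid most of this cost by translating guest code to host code instead of interpreting it instruction by instruction.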

  • Re:From 2006 (Score:4, Interesting)

    by Lumpy ( 12016 ) on Wednesday August 27, 2008 @11:46AM (#24766339) Homepage

    The Alpha failed because the motherboards were $1300.00 and the processors were $2600.00. Nobody in their right mind bought the stuff when you could get Intel motherboards for $400 and processors for $800.00 (dual-proc boards, high-end processors).

    DEC died because they could not scale up to what the Intel side was doing: you had thousands of motherboards made per hour for Intel, with maybe 4 a day for Alpha. It's game over at that point.

    I loved the Alphas. I had a dual-Alpha motherboard running Windows NT, and it rocked as a server.

  • by alen ( 225700 ) on Wednesday August 27, 2008 @12:22PM (#24766955)

    3dfx's problem was they could never figure out how to sell their cards. They flip-flopped between selling them themselves and having others make the cards, like Nvidia does. After so many flips, no one wants anything to do with you, because it's bad for business planning.

    Nvidia has had its current selling model for 10 years, and only its partners have changed. If you want to sell video cards, you can trust that if you sell cards based on Nvidia's chips, they won't pull the rug out from under you next year and decide to sell the cards themselves.

  • by Bruce Perens ( 3872 ) * <bruce@perens.com> on Wednesday August 27, 2008 @12:41PM (#24767263) Homepage Journal
    Pixar had an OEM model too, back in its days of making hardware and software products (the Pixar image computer, Renderman, Renderman hardware acceleration) while waiting for the noncompete with Lucasfilm to run out. It's a very difficult way to run a business, because you have to pull your own market along with you, and you can't control them.

    It does look like 3DFx bought the wrong card vendor. They also spun off Quantum3D, then a card vendor, which is still operating in the simulation business.

  • Re:Only reason (Score:3, Interesting)

    by ratboy666 ( 104074 ) <{moc.liamtoh} {ta} {legiew_derf}> on Wednesday August 27, 2008 @02:07PM (#24768439) Journal

    How is a "GPU" different from a "CPU"? If you take them to be the SAME, you end up with Intel's LARRABEE. If you take them as somehow DIFFERENT, you end up with nVidia's proclamation.

    If they are considered the SAME, but with different performance tunings, other applications begin to open up.

    As an example: it is currently true that the "GPU" is given an exorbitant amount of resources to do one thing -- create visuals for games.

    And that's it. It contains a significant amount of the system memory, and processing logic, and "locks it away". Which is very good if you are selling the graphics cards, but not ideal (at all) for the customer.

    If the graphics card can be placed closer and more generally, the customer would win. EXCEPT -- for one problem (and, boy is it a doozy).

    The nVidia is programmed with a specific higher-order assembly language; we rely solely on the hardware vendor for tools. I think that this is UNIQUE in the (mass-market) processor world, and it is why Intel, with an x86-compatible GPU, is such a threat. Can anyone else produce an OpenGL shader compiler for the nVidia? Or, better yet, extend it to do NON-shader tasks? How about for the AMD? Yes, you CAN for Intel, and will, by design, be able to (I would expect, even be ENCOURAGED to) for LARRABEE.

    The idea is to extend the "NUMA" concept for memory to processors. Intel is doing it because others are already doing it: Sun, with Niagara and Niagara 2, is providing an absolutely amazing proof of concept (except with multi-core and FPU units).

    Why would you BOTHER with a specific purpose GPU, if you could have a (possibly less performant) workable solution with more cores, AND be able to use them for other tasks?

    Of course this is not particularly relevant to TODAY's applications. They are matched to current hardware. Now, I will bring up the L word: Linux. Linux scales across a much wider range of hardware (practically) and runs on everything from ARM up to Z/Series. It also supports NON-x86 ISAs. Which would mean that a non-x86 version of this idea is probably supportable. But it wouldn't run CURRENT software, and, I believe, would be a complete non-starter.

    But, take this with a grain of salt -- I am obviously not a great predictor (otherwise I would already be retired).

  • by TheLink ( 130905 ) on Wednesday August 27, 2008 @02:12PM (#24768511) Journal
    But who really wants that sort of versatility? Who wants so many different instruction sets? The compiler writers? I doubt more than a few people want that.

    Would such a GPU be faster? It might be faster for some custom cases, but is it going to be faster at popular stuff than a GPU that's been optimized for popular stuff?

    The speed nowadays is not so much because of the instruction set; it's the fancy stuff the instruction set _controls_, e.g. FPUs, out-of-order execution, trace cache, branch prediction, etc.

    Just look at the P4, Opteron and Core 2. For the same instruction set you get rather different speeds.

    Good luck allowing buffer sizes, branch prediction logic, etc. to be changed in a programmable way while having it run faster AND not screw up.

    The FPGA sort of stuff is for when you can't convince Intel, Nvidia etc to add the feature for you, because nobody else wants it but you.

    Programmers who make Crysis, and programmers who make Unreal, tend to want similar stuff fast.

    Maybe there are some custom functions, different for each popular piece of software, that need to be sped up. But I don't see why you'd necessarily require a different instruction set just to use those functions.
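    That last point is how vendors actually expose app-specific fast paths today: not a new instruction set, but a fixed entry point dispatching to a table of accelerated routines (think intrinsics or driver extensions). A minimal sketch, with entirely hypothetical names:

    ```python
    # One stable "call_ext" entry point indexes a table of vendor-provided
    # fast routines -- new capability without a new instruction set.

    EXTENSIONS = {}

    def register_ext(name):
        """Decorator the vendor would use to publish an accelerated routine."""
        def deco(fn):
            EXTENSIONS[name] = fn
            return fn
        return deco

    @register_ext("dot3")
    def dot3(a, b):
        # Stand-in for a hardware-accelerated 3-vector dot product.
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    def call_ext(name, *args):
        """The single, unchanging interface applications code against."""
        return EXTENSIONS[name](*args)

    print(call_ext("dot3", (1, 2, 3), (4, 5, 6)))  # -> 32
    ```

    The application binary never changes when new routines are added, which is exactly why a custom function rarely justifies a custom ISA.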
  • by bagofbeans ( 567926 ) on Wednesday August 27, 2008 @05:58PM (#24771069)
    I don't see an unequivocal denial in the quotes. Just an implied no, and then answering a question with a question. If I were defining products at Nvidia, I would propose an updated Via C7 (CPU+GPU) product anyway, not a simple standalone CPU.

    "That's not our business. It's not our business to build a CPU. We're a visual computing company, and I think the reason we've survived the other 35 companies who were making graphics at the start is that we've stayed focused."

    "Are we likely to build a CPU and take out Intel?"
  • Re:Just a thought... (Score:1, Interesting)

    by Anonymous Coward on Wednesday August 27, 2008 @06:38PM (#24771637)

    Some would say that the way we use devices is changing: that feature-packed cell phones, UMPCs, and specialist devices like consoles are beginning to dominate the home space. These platforms often don't use an x86 CPU. They use a RISC CPU like an ARM or a Freescale chip.
    These people are significant rivals to Intel.
    The Xbox 360 and the PS3 both have quasi-RISC CPUs designed by IBM.

    What I'm saying is that although Intel is probably now the dominant player in the x86 market, this is simply leading to a lot of players making solutions that beat them in a direction Intel has not been attempting.

    It would make sense for NVIDIA, with its history of embedded chips, to be one of these. Low-cost SoCs with CPU, GPU, and chipset all in one place for the thin-client/ultra-low-cost market, perhaps?

    Microsoft will cross-compile their OS the moment there is demand.
