Nvidia Mulls Cheap, Integrated x86 Chip

CWmike writes "Nvidia is considering developing an integrated chip based on the x86 architecture for use in devices such as netbooks and mobile Internet devices, said Michael Hara, Nvidia's vice president of investor relations, during a speech webcast from the Morgan Stanley Technology Conference this week. Nvidia has already developed an integrated chip called Tegra, which combines an ARM processor, a GeForce graphics core and other components on a single chip. The chips are aimed at small devices such as smartphones and MIDs, and will start shipping in the second half of this year. 'Tegra, by any definition, is a complete computer-on-chip, and the requirements of that market are such that you have to be very low power and very small but highly efficient,' Hara said. 'Someday, it's going to make sense to take the same approach in the x86 market as well.'"

  • For those of us who dealt with Intel's "integrated" graphics cards on laptops for the past several years now... on their behalf I just want to say PLEASE FOR THE LOVE OF ALL THINGS SHINY AND SILICON, DON'T DO IT! Anything with the word "integrated" near it makes me want to cringe... it's a post-traumatic stress response caused by watching a myriad of good video games stutter, blink, crash, and burn right in front of me. It's a black day indeed when Warcraft 3 can't run at full resolution on a laptop produced only a year ago.

    • Re: (Score:3, Informative)

      by aliquis ( 678370 )

      Heh, funny that you mention it.

      I run WC3 on my 1.5-year-old MacBook Pro and I use 1024x768, medium, high, medium, high, high, on, on, high. So medium models and textures ...

      And I agree, especially in this price range :D

      • Re: (Score:2, Informative)

        by coxymla ( 1372369 )
        Any MacBook Pro ought to be able to slaughter WC3 with everything at full settings at high resolution. Maybe not in nv9400 mode on the latest unibody models (although I wouldn't be surprised if even that chip could handle the awesome power of a four-year-old game), but still.

        Are you sure that you're running a Universal Binary version? If not, then your CPU has to emulate PPC instructions. Get the latest updates from Blizzard and your experience should be a lot better.

        • by aliquis ( 678370 )

          Well, since we talked about it I raised the res to 1440x900 all the same (medium textures and models), and it lagged like crazy even when there were only heroes + like 5 units each or so during the early rush.

          Then again, I play 3on3 RT and in OS X; I'm confident it would run smoothly in Windows. I think my 6800 LE may have run better in Windows than this 8600M GT does in OS X.

          Well, my 8600M GT doesn't come close, so no, no chance in hell the 9400M would do it.

          I don't remember where to see if it's PPC or Intel version but I'm qu

          • by slyn ( 1111419 ) <ozzietheowl@gmail.com> on Thursday March 05, 2009 @11:19PM (#27087277)

            Um, wat? I have the same model you do (it's the Santa Rosa MBP with the 8600M GT, yes?) and it has no problem running WC3 at full res with everything maxed. Anything less and your computer probably has something wrong with it.

            I can run WoW in Dalaran (for those not familiar with the game, the busiest city) on a packed server, or do a full 25-man raid with everything but view distance maxed and view distance at around 1/3 of max, and still average ~30+ FPS. If I go any higher on distance I need to lower most other settings, as I think that's when all the various armor/player/model/building/etc. textures start causing the 256 MB of graphics RAM to have to swap out and things start getting shitty. WC3 is much less graphically intense than that, even if you've got two huge armies going at it.

            Maybe an early sign of this: http://support.apple.com/kb/TS2377 [apple.com]

            I got that, but Apple fixes it for free, warranty or not, since it's Nvidia's manufacturing problem (my understanding is it's conceptually the same problem as the RRoD, only on your laptop).

    • by aliquis ( 678370 )

      Anything with the word "integrated" near it makes me want to cringe...

      But then again Apple claims the new Mac Mini with the 9400M and shared RAM can run the latest games...

      Reality distortion field [checked]

    • by emj ( 15659 ) on Thursday March 05, 2009 @07:06PM (#27085117) Journal

      My Intel 855GM handles xterms very well. Recently they have become very wobbly and slimy when I drag them around in GNOME; other than that, everything is fine with my integrated chip.

    • by nedlohs ( 1335013 ) on Thursday March 05, 2009 @07:16PM (#27085249)

      Yes, because what I want to do is slot a PCIe card into my damn cell phone.

    • by cjb658 ( 1235986 ) on Thursday March 05, 2009 @07:32PM (#27085435) Journal

      It's being designed for netbooks, which aren't typically designed for gamers.

      Fortunately, the one good thing that's come from Vista is that now almost all new computers come with decent graphics cards.

      I hated looking for new laptops that were $800 and finding out they had integrated graphics, then being forced to pay for the "premium" product tier to get discrete graphics, which included a much more expensive processor and RAM.

      With a desktop, you can just buy a $500 PC at Walmart and drop in a decent graphics card.

      • by tepples ( 727027 )

        With a desktop, you can just buy a $500 PC at Walmart and drop in a decent graphics card.

        With a desktop PC, it's also a lot harder to move the PC from your home office to the TV room when you want to play games on the 32" flat screen. But then, I guess you could buy a second desktop PC for the TV room with the money you save vs. a laptop PC.

        • Re: (Score:3, Insightful)

          Or you could get a 30" flat screen for your desktop, a nice audio system, and a comfortable chair and not have duplicate media setups.

          For bonus points, put a couch behind your chair & move the chair out of the way when you have guests.

      • Intel recently announced it was making the Atom CPU core available for SoCs made at TSMC [eetimes.eu]. NVIDIA has dealings with TSMC, since NVIDIA only does design and outsources manufacturing. So theoretically NVIDIA could just use an Atom core as a base and slap a GPU on it, much like they did with Tegra and an ARM core.
    • by Samah ( 729132 )

      Anything with the word "integrated" near it makes me want to cringe...

      So you don't want to use any silicon chip then?
      http://en.wikipedia.org/wiki/Integrated_circuit [wikipedia.org]

      • Re: (Score:3, Funny)

        by timeOday ( 582209 )
        Fine, but I refuse to use any newfangled CPU that has integrated cache memory and can't harness the power of my math coprocessor.
    • It's a black day indeed when Warcraft 3 can't run at full resolution on a laptop produced only a year ago.

      Frozen Throne runs fine on my stinking EEE PC 900HA. And it has a three-generation-old, under-clocked Intel GMA 950.

      Frozen Throne should run great on a GMA X4500. Even WoW runs OK on a GMA X4500.

    • I think graphics with an integrated motherboard will fare much better than the opposite.

    • Warcraft 3 requires something like a Voodoo 3 card. Even the slowest integrated gfx chips today outperform that by something like 5-10 times.

      In fact, you could get faster frame rates with a dual core CPU doing ALL the rendering into the frame buffer.

      There's something wrong with your laptop.

    • I have to say, WC3 runs happily on my year-old MacBook.

    • by CompMD ( 522020 )

      Not all computers are for gamers.

  • by pak9rabid ( 1011935 ) on Thursday March 05, 2009 @06:52PM (#27084933)
    Nvidia develops a very basic x86 CPU that's tightly coupled to one of their embedded GPUs and that doesn't implement any x86 technology that's still currently patent-protected. The basic x86 CPU acts as a shim for software that expects to talk to an x86 CPU and offloads as much as possible to the significantly more advanced GPU, which runs the bulk of the load. The end result? An x86-compatible embedded system that vastly outperforms anything currently on the market, without violating anyone's active x86 patents.
    • by Anonymous Coward on Thursday March 05, 2009 @07:01PM (#27085055)

      Wow, do you even know anything about x86?

      • by larry bagina ( 561269 ) on Thursday March 05, 2009 @07:10PM (#27085165) Journal
        almost as much as he knows about GPUs.
      • Re:Prediction.. (Score:5, Interesting)

        by Fulcrum of Evil ( 560260 ) on Thursday March 05, 2009 @07:14PM (#27085221)
        x86 is an instruction set and a bunch of semantics. The decoder takes about 1% of a modern CPU, and if you're able to lop this off and run it on a GPU or something for cheap, your software won't care.
        • by setagllib ( 753300 ) on Thursday March 05, 2009 @08:27PM (#27086019)

          Wow, now all we need is to connect the GPU to the FSB/QPI, make it support pagetables, interrupts, DMA, CPU-style L1/2/3 coherent cache, memory controller with synchronous fencing, legacy and long modes for pointers and instructions, etc.... and then we'll have something that can possibly emulate an x86 CPU at only 99.9% performance penalty!!

          Or, you know, not.

          • by mgblst ( 80109 )

            If we could somehow integrate the photon torpedoes to initiate a reverse tachyon beam, I think we might have a real challenger on our hands.

            Too much Star Trek?

        • Well yes, the standard practice for making a modern x86 CPU is to make a RISC core and then put a decoder in to translate x86 instructions (see the AMD K5 for a good example of this).

          However, GPUs are a little different. The programming model of a GPU is as follows (sketched in code below):
          Load in a block of code that performs some kind of mathematical operation (known as a kernel).
          Specify the block of data to run the kernel against and the block of data to put the output in.
          Run the kernel.

          For an x86 program that typically cons
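
          A minimal concrete version of that load/specify/run model, written in CUDA C++. The kernel name and the toy scaling operation are invented purely for illustration; nothing here is specific to Tegra or to any x86 plans of Nvidia's.

          // Toy CUDA example of the three steps above; all names are invented.
          #include <cstdio>
          #include <cstdlib>
          #include <cuda_runtime.h>

          // Step 1: the "kernel" -- a small block of code that performs one
          // mathematical operation across a block of data.
          __global__ void scale_kernel(const float* in, float* out, float factor, int n) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) out[i] = in[i] * factor;
          }

          int main() {
              const int n = 1 << 20;
              const size_t bytes = n * sizeof(float);

              float* h_in  = (float*)std::malloc(bytes);
              float* h_out = (float*)std::malloc(bytes);
              for (int i = 0; i < n; ++i) h_in[i] = (float)i;

              // Step 2: specify the block of data to run the kernel against,
              // and the block of data to put the output in.
              float *d_in, *d_out;
              cudaMalloc((void**)&d_in, bytes);
              cudaMalloc((void**)&d_out, bytes);
              cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

              // Step 3: run the kernel across the whole data block.
              const int threads = 256;
              const int blocks = (n + threads - 1) / threads;
              scale_kernel<<<blocks, threads>>>(d_in, d_out, 2.0f, n);

              cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
              std::printf("out[10] = %f\n", h_out[10]);

              cudaFree(d_in); cudaFree(d_out);
              std::free(h_in); std::free(h_out);
              return 0;
          }

          Compiled with nvcc, this prints out[10] = 20.000000; the point is just the three-step shape of the programming model, not the arithmetic.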
          • Given the content of TFA, Nvidia may well be building an integrated chip that's just something like a 486 on 45nm silicon. The actual shared silicon may be nothing much at all - MMU or something.
      • Re: (Score:3, Funny)

        by mgblst ( 80109 )

        Not just x86, but this guy clearly doesn't know anything about CPU/GPUs at all. Kinda like an old friend of mine, convinced he was going to design his own CPU, despite not knowing anything about computer hardware. Or even computer software. Sure could play games though, and smoke a lot of dope.

    • Or they could just talk to VIA, which already has everything in place to produce basic but unexciting x86 chips, but has long suffered from high levels of graphical suck. NVidia's contention has been that x86 performance is minimally important compared to GPU performance; but obviously you need some x86 to run Wintel stuff. Thus VIA: not all that exciting, but cheap and almost certainly open to a license, collaboration, or purchase, with x86 as an option.
      • VIA's got a pretty strong CPU; the Nano holds its own for the low-power segments.

        An nVidia-VIA partnership would have worked wonders, but nVidia went there and came back. For some reason they wanted to do ION instead. What a sick joke; and here I was hoping for a VIA Nano with a 9400 chipset.

    • You forgot the part about it somehow, some way, being litigated to death and never seeing the light of day.

  • by davidsyes ( 765062 ) on Thursday March 05, 2009 @06:52PM (#27084937) Homepage Journal

    read:

    "Nvidia NULLS Cheap, Integrated x86 Chip "

  • Netbooks? (Score:5, Funny)

    by Hognoxious ( 631665 ) on Thursday March 05, 2009 @06:59PM (#27085029) Homepage Journal
    You aren't allowed to call them netbooks, didn't you get the subpoena?
  • by jd ( 1658 ) <imipak&yahoo,com> on Thursday March 05, 2009 @07:34PM (#27085457) Homepage Journal

    Surely a better design is to produce a series of very small, highly specialized, very fast cores on a single piece of silicon, and then have a layer on top of that which makes it appear to be an x86, ARM or whatever.

    One reason for having a bunch of specialist cores is that you don't have one core per task (GPU, CPU or whatever), but rather one core per operation type (which means you can eliminate redundancy).

    Another reason is that having a bunch of mini cores should make the hardware per mini core much simpler, which should improve reliability and speed.

    Finally, such an approach means that the base layers can be the same whether the top layer is x86, ARM, PPC, Sparc or a walrus. NVidia could be free to innovate the stuff that matters, without having to care what architecture was fashionable that week for the market NVidia happens to care about.

    This is not their approach, from everything I'm seeing. They seem to want to build tightly integrated system-on-a-chip cores, rather than having a generic SoaC and an emulation layer. I would have thought this harder to architect, slower to develop and more costly to verify, but NVidia aren't idiots. They'll have looked at the options and chosen the one they're following for business and/or technical reasons they have carefully studied.

    If I were as bright as them, why is it that they have the big cash and I only get the 4-digit UID? Ergo, their reasoning is probably very sound and very rational, and if presented with my thoughts they could very likely produce an excellent counter-argument to show why their option is logically superior and will produce better returns on their investments.

    The question then changes as follows: what reasoning could they have come up with to design a SoaC unit the way they have? If it's the "best" option, although demonstrably not the only option, then what makes it the best, and what is it the best at?

    • by hawk ( 1151 ) <hawk@eyry.org> on Thursday March 05, 2009 @09:11PM (#27086419) Journal

      >Finally, such an approach means that the base layers can be the
      >same whether the top layer is x86, ARM, PPC, Sparc or a walrus.

      So much for running linux on it!


      BIOS ERROR.
      NO OPERATING SYSTEM FOUND.
      THE WALRUS HAS EATEN THE PENGUIN.

      hawk

    • Surely a better design is to produce a series of very small, highly specialized, very fast cores on a single piece of silicon, and then have a layer on top of that which makes it appear to be an x86, ARM or whatever.

      Yes, they call that a modern x86 CPU.
      They don't implement the x86 instruction set directly in hardware anymore. They just have a translation layer in hardware that takes the x86 code and runs it on another type of core (usually a RISC core).

      The internal execution core of this type of CPU [a modern x86] is actually a "machine within the machine", that functions internally as a RISC processor but externally like a CISC processor. The way this works is explained in more detail in other sections in this area, but in a nutshell, it does this by translating (on the fly, in hardware) the CISC instructions into one or more RISC instructions. It then processes these using multiple RISC execution units inside the processor core.

      http://www.tek-tips.com/faqs.cfm?fid=788 [tek-tips.com]

      Incidentally, CISC had a big advantage over RISC: each instruction typically did more, so for a given program a CISC computer will typically use less code, saving cache, memory and bandwidth. So modern x86 CPUs have the advantage of
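
      As a toy sketch of that translation idea (ordinary host-side C++, with every name invented; a real decoder does this on the fly in hardware, not in software), a single memory-operand CISC-style instruction might break down into RISC-style micro-ops like so:

      #include <cstdio>
      #include <string>
      #include <vector>

      // Invented micro-op representation, purely for illustration.
      struct MicroOp { std::string text; };

      // Expand a pretend CISC instruction "add [mem], reg" into the simple
      // load / compute / store steps a RISC-style core would actually execute.
      std::vector<MicroOp> decode_add_mem_reg(const std::string& mem, const std::string& reg) {
          return {
              { "load  tmp, [" + mem + "]" },  // fetch the memory operand
              { "add   tmp, tmp, " + reg },    // do the arithmetic in registers
              { "store [" + mem + "], tmp" },  // write the result back
          };
      }

      int main() {
          // One "CISC" instruction, three micro-ops:
          for (const MicroOp& uop : decode_add_mem_reg("counter", "eax")) {
              std::printf("  %s\n", uop.text.c_str());
          }
          return 0;
      }

      The program just prints the three micro-ops; on real hardware the equivalent expansion happens inside the decoder, invisible to software.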

  • by Phizzle ( 1109923 ) on Thursday March 05, 2009 @07:42PM (#27085563) Homepage
    A Beowu.... aww fuck it.
  • by BikeHelmet ( 1437881 ) on Thursday March 05, 2009 @07:52PM (#27085679) Journal

    They should just push ARM hard. ARM is doing great right now: companies like Texas Instruments are pushing the architecture heavily, and there's high demand.

    Linux ARM support is blasting ahead, thanks to projects like the Beagleboard.

    On top of that, a while ago Microsoft said they were developing an ARM version of Windows. Although we won't see it right away, in a couple years that'll open up even more options.

    If they push ARM hardware heavily enough, software will follow. Heck, the software is already coming along, so they just have to market the hardware properly.

    Most people won't know the difference between a Linux MID and a Windows MID. Both have "Email", "Instant Messenger", "Calendar", "Web Browser", etc., and if you need a new program you just download it... Nobody would even think of installing software off a CD, so most "Why won't this work?" scenarios won't even come up. It'll just look slightly different.

    And once a couple game devs follow - or heck, a program like Google Earth - it won't be long before oodles of software is being ported, and the ARM-x86 barrier breaks down.

    • This doesn't work for one main reason: ARM just isn't marketable. No one looking at specs is going to go for the laptop with a gig of RAM and a 600 MHz ARM CPU over one with a gig of RAM and a 1.6 GHz Intel CPU. Sure, the megahertz myth is busted, but for the average consumer, the more GHz, the more appealing the choice.
      • Don't market MHz, then. Market capabilities.

        If the ARM laptop has "plays 1080p" slapped on the front, and the Atom laptop has "plays 720p" slapped on, which do you think the consumer will buy? What if the ARM laptop has a "16 hour battery life", and the Atom laptop has a "9 hour battery life"? What if the ARM laptop costs $50 less?

        I just hope the salespeople educate them a bit on the differences. :P Linux vs Windows needs to be explained... as Dell seems to have pointed out.

    • If they push ARM hardware heavily enough, software will follow.

      (1) It doesn't matter how hard they push ARM; all the legacy x86 software won't magically work.

      (2) I don't think Nvidia has what it takes to push Microsoft hard enough to get a mainstream version of Windows or Office on ARM, and without Windows you lose a huge number of people who refuse to try something different.

      (3) Nvidia's strength - the GPU - is best used in games. Without focus there, there is no reason to go with Nvidia's solution over Intel's or AMD's or Via's. Without Windows that just isn't go

      • Nvidia's GPUs also have VDPAU [wikipedia.org], which means the GPU can do nearly all of the work of decoding video, even HD H.264. So pair it with a weak little power-efficient CPU and you have an excellent video player. Intel can't do that.

    • by Erich ( 151 ) on Thursday March 05, 2009 @09:39PM (#27086619) Homepage Journal
      Sigh.

      The reason to go with x86 is that ARM is just as shitty an architecture.

      Seven supervisor modes now? Horrible page table format? Have you seen what they are planning for 64-bit addressing?

      Even more important than the CPU architecture: the ARM buses are typically very low performance. And if most of the time is spent on memory movement, having a better bus dwarfs what's going on in the CPU.

      So, in the end, you have slow cores. Intel knows how to make x86 fast. And, as they are starting to show, they can make it low power also. ARM has yet to show a fast core. They don't use that much power, but if "netbooks" are low end laptops instead of high end cell phones, a few watts is fine.

      Oh, and did I mention that x86 cores are x86 compatible? That makes the software barrier to entry a lot lower.

      To compete with Intel, you have to be better. A lot better. For the very low end, ARM is better, because all that matters is leakage power, and after that all that matters is power for very small amounts of processing. At a higher level of performance, ARM is different, but perhaps not better. Maybe the ARM architecture has some features which make it less complex to implement than x86. But at the end of the day, if nobody is making ARM cores that spank x86 cores, x86 will win. Didn't you learn this from PowerPC? Don't you realize the same thing will probably happen to ARM, except at the extremely low end? And even there, if Intel decides to start licensing 386 synthesizable cores, how long do you think ARM7 and ARM9 will last?

      • by BikeHelmet ( 1437881 ) on Thursday March 05, 2009 @11:17PM (#27087265) Journal

        Yes, the ARM architecture is horrible and slow - but it also integrates really easily with other kinds of chips.

        How long have we had ARM SoCs with a CPU, GPU, MMU, plus a dozen other blocks all on a single chip? An ARM "CPU" (SoC) isn't just the CPU part. It also has dozens of other blocks inside it for accelerating specific types of processing, and all with remarkably low power consumption.

        ARM is less complex than x86. Both ARM and x86 are moving towards integrating more and more stuff on a single die. Which do you think will work better - the simpler architecture (though not vastly simpler) with rapidly improving speeds, or x86? ARM has more experience in this area. They'll win.

        You say to compete with Intel "you have to be better", but your opinion on what makes a CPU better is flawed. POWER6 stomped Intel for performance. Even today, for FPU stuff, it's still about 100% faster than Core i7 (per GHz - and it scales up to 5+ GHz on air), and I don't see it dominating the market at all!

        ARM will win for these reasons:
        -Lower cost.
        -Lower power consumption.
        -Much smaller size. (smaller devices appeal to many people)
        -Similar/better performance for specific tasks (like video decoding/recording).
        -Efficient software base.
        -Appealing to device manufacturers.

        Yes, x86 is compatible with everything under the sun, but everything under the sun is incredibly inefficient, and designed to run on desktop dual/quad-core systems.

        You're arguing about what the consumers want, but you're thinking like a techy. If you put an x86 program next to a well-coded ARM program, they'll both run just as responsively, and at the end of the day, to end users, responsiveness is what determines "speed".

        x86 may "spank" ARM, but consumers think Vista is "slow" because it takes 30 seconds to delete a file that took 0.5 seconds in XP, and it requires more RAM. They don't give a shit that the kernel may be 5% more efficient. :P They don't care that they have a 2.6GHz dual-core CPU rather than a 2.6GHz single-core CPU if it feels slower than before (because of flaws in the software).

        All this puts the importance on software quality rather than the hardware. But software is easy, for ARM. ARM has no super-fast desktop line that would spur the growth of inefficient crapware.

        Don't you feel lucky that we have tons of open source developers making quality software that runs on ARM devices? And piles of device manufacturers ready to push Linux/FOSS software on these devices?

        Too bad there are so few x86 device manufacturers pushing Linux/FOSS. More support and demand would really spur growth of efficient software for netbooks and the like - but we do have Dell, I guess. :P

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      On top of that, a while ago Microsoft said they were developing an ARM version of Windows.

      They already have one. It's called Windows CE.

      At my work we develop .NET on Windows CE on ARM. It works quite well, believe it or not.

  • This just in, nVidia announces world's first netbook to require not one but two separate AC adapters at all times. Other features include built-in vacuum cleaner noise generator, and thermal pubic hair remover.

  • It seems that everything is moving in the direction of operational efficiency. More instructions per cycle, lower power draw, and faster, more efficient buses among processor, memory and peripheral devices are among the important issues being focused on.

    But whatever happened to Moore's Law? Are we already outside its prediction? Has the chain been broken? I thought we would all have 5 GHz machines running ice-cold by now, but some of the latest and greatest stuff is a mere 1.6 GHz Atom-processor-based sub-no

    • Re: (Score:3, Interesting)

      by Animats ( 122034 )

      But whatever happened to Moore's Law? Are we already outside its prediction? Has the chain been broken?

      Effectively, yes. The problem is not cost per gate and wafer real estate per gate, which continue to decrease. It's heat dissipation per unit area. I've been to semiconductor talks where there are charts of increasing heat dissipation with lines marked "room temperature", "soldering iron", "nuclear reactor", and "surface of sun". The trend is clear and not encouraging.

      The effect is that comput

  • by BESTouff ( 531293 ) on Friday March 06, 2009 @04:47AM (#27089029)

    The problem is that more and more netbooks are sold with Linux, and NVIDIA driver integration in any distro is less than stellar. Contrast that with Intel hardware, where everything is well supported by all vendors.
    Unless they open their drivers, this platform will be Windows-only, so even their lower-end models will be hampered by the Windows tax.
    That won't go very far.

"Pull the trigger and you're garbage." -- Lady Blue

Working...