NVIDIA GeForce GTX TITAN Uses 7.1 Billion Transistor GK110 GPU

Vigile writes "NVIDIA's new GeForce GTX TITAN graphics card is being announced today, and it utilizes the GK110 GPU first announced in May of 2012 for the HPC and supercomputing markets. The GPU delivers 4.5 TFLOPS of compute from its 2,688 single-precision cores and 896 double-precision cores, paired with a 384-bit memory bus and 6GB of on-board memory, double the frame buffer of AMD's Radeon HD 7970. At 7.1 billion transistors and a 551 mm^2 die size, GK110 is very close to the reticle limit for current lithography technology! The GTX TITAN introduces a new GPU Boost revision based on real-time temperature monitoring, along with support for monitor refresh-rate overclocking that will entice gamers, and with a $999 price tag the card could also be one of the best GPGPU options on the market." HotHardware says the card "will easily be the most powerful single-GPU powered graphics card available when it ships, with relatively quiet operation and lower power consumption than the previous generation GeForce GTX 690 dual-GPU card."
This discussion has been archived. No new comments can be posted.

  • by i kan reed ( 749298 ) on Tuesday February 19, 2013 @10:55AM (#42945155) Homepage Journal

    All games that have the budget for graphics these days are targeted at console limitations. I can't really see any reason to spend that much on a graphics card, except if you're a game developer yourself.

    • by h4rr4r ( 612664 )

      The biggest reason to buy this class of card is always epeen. The next biggest might be that, with the PS4 and Xbox 720 on the horizon, if money is no object you will not have to upgrade for a long time.

      • by durrr ( 1316311 )

        Multi display gaming and 4K monitors right around the corner may also give it a run for its money.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Really? Like right in the summary man: for the "HPC and supercomputing markets"

      Not so you can run Quake at 500,000 fps

      • Of course, if you read beyond the summary, you'll discover references to "Crysis 3", "ultra fast small form factor gaming PCs", "display overclocking" and other gimmicks that have no place in a supercomputing environment. In fact, the only real concession to HPC is double precision performance, which I suspect is only marginally useful in games.

      • by tyrione ( 134248 )
        Still can't match the power of the AMD FirePro S10000; and AMD's strategy for HSA computing is where the industry is headed.
      • by dbIII ( 701233 )
        The thing that's been keeping me from using such things in HPC is that some algorithms that use a lot of memory can't be shoehorned onto a 1GB card without losing the advantage of running on it instead of a CPU. Bumping the memory up to 6GB makes these things a lot more useful.
    • Re: (Score:2, Troll)

      by Sockatume ( 732728 )

      In five years' time they'll be able to crank these out as integrated graphics chips for low-end Dell laptops. They might as well ship them to a handful of enthusiasts and ahead-of-the-curve game developers now.

    • I'm not 100% convinced of that. While it's only one example in a sea of thousands, it looks like CDProjekt's next iteration of The Witcher will be a real step up. Also, there are always people who want and can afford the bleeding edge, even if they don't need it.
      • by Sockatume ( 732728 ) on Tuesday February 19, 2013 @11:56AM (#42945881)

        The Next Big Thing is all-real-time lighting. Epic has been demoing a sparse voxel based technique that just eats GPU power.

        • Re:What's the point? (Score:4, Informative)

          by Luckyo ( 1726890 ) on Tuesday February 19, 2013 @12:25PM (#42946207)

          That is simply not happening this decade. The jump in required computing power is ridiculous, while the current "fake lighting" is almost good enough. At the same time, you can't really utilize the current GPU types efficiently for real time lighting because that's simply not what they're optimized for.

          • By the time 'real lighting' (whatever that is) becomes possible the current fake lighting will also be able to do far more than it does today. Bang-for-buck, the current techniques will always win because the cost of simulating 'real' is exponential.

          • by tyrione ( 134248 )

            That is simply not happening this decade. The jump in required computing power is ridiculous, while the current "fake lighting" is almost good enough. At the same time, you can't really utilize the current GPU types efficiently for real time lighting because that's simply not what they're optimized for.

            Agreed. Wake me when BTO systems from Newegg or custom systems from Apple have options to install 256 or 512 GB of DDR4 RAM, not to mention when SDRAM designers have shrunk the die size down enough for a single stick to support 64 or 128 GB of RAM. We already know motherboard manufacturers aren't going to dedicate more room for RAM slots, and the bus architectures needed for this future will have to be a whole new beast.

    • I've been looking into GPU-assisted rendering recently. Blender introduced the Cycles renderer not so long ago, and it runs on nVidia cards to accelerate ray traced rendering (apparently there were some problems with AMD). This allows for real-time previews but performance is obviously limited by the card and currently also by the memory on the card, which can limit your scene setup. There is also support for acceleration in LuxRender. This is a welcome addition to their lineup for me, since nVidia's 6xx se

      • by dwywit ( 1109409 )

        Ditto for video rendering - Premiere Pro can use CUDA cores to render most effects real-time. It makes a BIG difference to productivity not having to queue up your various colour corrections and special effects for rendering, make adjustments, lather, rinse, repeat.

    • Re:What's the point? (Score:5, Interesting)

      by fuzzyfuzzyfungus ( 1223518 ) on Tuesday February 19, 2013 @11:29AM (#42945575) Journal

      All games that have the budget for graphics these days are targeted at console limitations. I can't really see any reason to spend that much on a graphics card, except if you're a game developer yourself.

      Buying the absolute top-of-range card (or CPU) almost never makes any sense, just because such parts are always 'soak-the-enthusiasts' collector's items; but GPUs are actually one area where (while optional, because console specs haven't budged in years) you actually can get better results by throwing more power at the problem on all but the shittiest ports:

      First, resolution: 'console' means 1920x1080 maximum, possibly less. If you are in the market for a $250+ graphics card, you may also own a nicer monitor, or two or three running in whatever your vendor calls their 'unified' mode. A 2560x1440 is pretty affordable by the standards of enthusiast gear. That is substantially more pixels pushed.

      (Again, all but the shittiest ports) you usually also have the option to monkey with draw distance, anti-aliasing, and sometimes various other detail levels, particle effects, etc. Because consoles provide such a relatively low floor, even cheap PC graphics will meet minimum specs, and possibly even look good doing it; but if the game allows you to tweak things like that (even in an .ini file somewhere, just as long as it doesn't crash), you can throw serious additional power at the task of looking better.

      It is undeniable that there are some truly dire console ports out there that seem hellbent on actively failing to make use of even basic things like 'a keyboard with more than a dozen buttons'; but graphics are probably the most flexible variable. It is quite unlikely (and would require considerable developer effort) for a game that can only handle X NPCs in the same cell as the player on the PS3 to be substantially modified for the PC release that has access to four times the RAM or enough CPU cores to handle the AI scripts or something. That would require having the gameplay guys essentially design and test parallel versions of substantial portions of the gameplay assets, and potentially even require re-balancing skill trees and things between platforms.

      In the realm of pure graphics, though, only the brittlest 3D engines freak out horribly at changing viewport resolutions or draw distances, so there can be a reward for considerably greater power. (For some games, there's also the matter of mods: Skyrim, say, throws enough state around that the PS3 teeters on the brink of falling over at any moment. However, on a sufficiently punchy PC, the actual game engine doesn't start running into (more serious than usual) stability problems until you throw a substantially more cluttered gameworld at it.)

      • there's also the matter of mods: Skyrim, say, throws enough state around that the PS3 teeters on the brink of falling over at any moment. However, on a sufficiently punchy PC, the actual game engine doesn't start running into (more serious than usual) stability problems until you throw a substantially more cluttered gameworld at it.

        That's why you mod Skyrim so that bodies take longer to disappear... like, say, 30 days instead of 10, and you crank down the cell respawn time from 10/30 days to 2/3 days.

        Or you install the mod that summons maggots... hundreds of writhing maggots.....

      • Actually right now (Score:5, Informative)

        by Sycraft-fu ( 314770 ) on Tuesday February 19, 2013 @12:11PM (#42946047)

        Console rez means 1280x720, perhaps less. I know that in theory the PS3 can render at 1080, but in reality basically nothing does. All the games you see out these days are 1280x720, or sometimes even less. The consoles allow for internal resolutions of arbitrary amounts less and then upsample them, and a number of games do that.

        Frame rate is also an issue. Most console games are 30fps titles, meaning that's all they target (and sometimes they slow down below that). On a PC, of course, you can aim for 60fps (or more, if you like).

        When you combine those, you can want a lot of power. I just moved up to a 2560x1600 monitor, and my GTX 680 is now inadequate. Well ok, maybe that's not the right term, but it isn't overpowered anymore. For some games, like Rift and Skyrim, I can't crank everything up and still maintain a high framerate. I have to choose choppy display, less detail, or a lower rez. If I had the option, I'd rather not.

        • by Khyber ( 864651 )

          "I know that in theory the PS3 can render at 1080, but in reality basically nothing does."

          Mortal Kombat, Disgaea 3, Valkyria Chronicles, DBZ Budokai Tenkaichi, all of these are 1080p true-resolution games.

          • Wow, 4 games (Score:5, Interesting)

            by Sycraft-fu ( 314770 ) on Tuesday February 19, 2013 @03:43PM (#42948133)

            Seriously man, this isn't a console-fan argument, nor is it one you want to have in relation to PC hardware, because you'll lose. The point is, most games these days are targeted at 1280x720, or lower, at 30fps. The problem is that to target anything higher you trade something off. Want 60fps? Ok, less detail. Want 1080? Ok, less detail. There are only so many pixels the hardware can push. Crank up the rez, you have to sacrifice things.

            Computers can do more than that, but need more hardware to do it. The target on my system is 2560x1600 @ 60fps, with no detail loss. My 680 can't handle that all the time, that's the point.

          • Not to mention Gran Turismo 5 which does 1080p/60fps/3D

            • by Khyber ( 864651 )

              Actually, GT5 doesn't do 1080p. It does 1280x1080, and only the menus and such are actual 1920x1080.

        • Console rez means 1280x720, perhaps less. I know that in theory the PS3 can render at 1080, but in reality basically nothing does.

          Ratchet and Clank (I'm a hardcore gamer, you know) for the PS3 remastered the ones from the PS2, so they're now not only in HD but 3D as well. With my setup, 3D is always 30 fps as well. Too bad I don't care for 3D, mostly because the glasses require batteries that need changing often.

          I just moved up to a 2560x1600 monitor, and my GTX 680 is now inadequate.

          GTX 570 here, and an i7-950 CPU that only fits in X58-series motherboards. You know, the boards that were sold as SATA 3.0 capable but aren't.

      • Re: (Score:3, Interesting)

        First, resolution: 'console' means 1920x1080 maximum, possibly less. If you are in the market for a $250+ graphics card, you may also own a nicer monitor, or two or three running in whatever your vendor calls their 'unified' mode. A 2560x1440 is pretty affordable by the standards of enthusiast gear. That is substantially more pixels pushed.

        And almost all those pixels go to waste. I'm still waiting for display units that would be able to track in which direction you're actually looking and give the appropriate hints to the graphics engine. You'd save a lot of computational power by not displaying the parts of the scene falling into the peripheral vision area in full resolution. Or, alternatively, you could use that computational power to draw the parts you *are* looking at with a greater amount of detail.
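
        A rough sketch of how that gaze-based level-of-detail selection might look, with made-up gaze coordinates and falloff parameters (purely illustrative; no real eye-tracker API is assumed):

          import math

          def render_scale(px, py, gaze_x, gaze_y, fovea_radius=200.0, min_scale=0.25):
              # Resolution scale for a screen region: full detail near the gaze
              # point, falling off toward min_scale out in the periphery.
              dist = math.hypot(px - gaze_x, py - gaze_y)
              if dist <= fovea_radius:
                  return 1.0
              return max(min_scale, fovea_radius / dist)

          # Example: gaze at the centre of a 2560x1440 display
          print(render_scale(1280, 720, 1280, 720))  # 1.0  -> render at full resolution
          print(render_scale(100, 100, 1280, 720))   # 0.25 -> cheap peripheral region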

      • Buying the absolute-top-of-range card(or CPU) almost never makes any sense

        Why does it have to make sense if you've got the money?

    • by alen ( 225700 )

      It's not for games anymore

      Hedge funds use NVIDIA-branded servers with GPUs for trading. Lots of scientific uses as well for medicine, oil and gas exploration, etc. How do you think they know where to frack for natural gas or dig sideways for the hard-to-reach oil?

      • > how do you think they know where to frack for natural gas or dig sideways for the hard to reach oil?

        Rhabdomancy!

        *ducks*

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      I have no need for this therefore nobody does.

      Why do people find this argument convincing? It's just dumb.

    • There are a lot of good PC games with great graphics that completely ignore consoles. You just have to look harder. While console games have advertisements all over TV and stores, PC games have stuck to the more niche advertisements they always have used. Look at PC gaming websites. The high-end PC games target future hardware, and you won't be able to get a high frame rate on the highest settings even if you have the latest and greatest card.

      Even for ports a super high end graphics card is beneficial.
      • by Luckyo ( 1726890 )

        1080p is budget in the PC world. Cutting-edge enthusiasts are looking at 2160p (4K) and 3D monitors (which require double the frame rate). Current high end chokes on these unless run in SLI.

    • by Jaqenn ( 996058 )
      Oculus Rift is going to be asking you to push dual monitors at 60 fps with VSync enabled at whatever resolution they settle on. That's difficult for many cards today.
      • by durrr ( 1316311 )

        They use a single monitor split in half. And the resolution is quite low.
        It's unlikely that they'll suddenly opt for dual 4k monitors unless they plan to release the retail version by 2018

        • by zlives ( 2009072 )

          from my understanding, the production version will have a substantially higher res... not sure about the 4k bit though!

    • except if you're a game developer yourself.

      That would explain the problem with most games out there. They pimp all these super-bitching graphics effects but most people do not have $1000 to spend on their gfx.

    • by msauve ( 701917 ) on Tuesday February 19, 2013 @12:10PM (#42946035)
      It's for bitcoin miners, obviously.
    • I can't really see any reason to spend that much on a graphics card, except if you're a game developer yourself.

      TFLOPS are the new inches. *ducks*

    • This probably isn't targeted at any gamer you know. GPUs are now better thought of as vector/parallel processing machines. I'm sure a lot of Wall Street firms will pick these up, and the "graphics" card will never ever drive a monitor.

      The other class would be guys who need to do visualizations. We've been promised "real time raytracing" for years now. Maybe Industrial Light and Magic will pick some up.

    • by Luckyo ( 1726890 )

      3D on high resolution screens is one of the biggest reasons for this. Most of the current budget stuff does fine rendering non-3D at 1080p. 2160p at 120FPS for a 3D monitor? Even SLI setups choke.

      If you're buying a thousand USD video card, you likely have a similar monitor to use with it.
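
      Back-of-the-envelope pixel throughput, just to put numbers on that gap (assuming "4K" means 3840x2160; theoretical fill demand only):

        # pixels per second the GPU has to shade, ignoring everything else
        budget_1080p60 = 1920 * 1080 * 60     # ~124 Mpix/s
        stereo_4k_120  = 3840 * 2160 * 120    # ~995 Mpix/s (60 FPS per eye in 3D)

        print(stereo_4k_120 / budget_1080p60)  # 8.0 -> eight times the fill demand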

    • by Cinder6 ( 894572 )

      All games that have the budget for graphics these days are targeted at console limitations. I can't really see any reason to spend that much on a graphics card, except if you're a game developer yourself.

      I'll bite. I've owned both the PC and PS3 version of Skyrim. The PC version has better graphics, hands down, and that's well before you start loading up all sorts of visual enhancement mods. So no, games aren't just targeted at console limitations, at least not in the texture arena; I'm not sure if Skyrim PC uses different models with a higher poly count.

    • by Nyder ( 754090 )

      All games that have the budget for graphics these days are targeted at console limitations. I can't really see any reason to spend that much on a graphics card, except if you're a game developer yourself.

      In case you've been living under a rock for the last 5 years or so, 3D graphics cards do more than just play games; they use their FP processing power to, get ready for this, process stuff fast!

      OpenCL, CUDA, and that sort of fun and exciting stuff. You want to mine bitcoins? Graphics cards, dude! You want to fold proteins? Graphics cards, dude!

      Graphics cards aren't just about great graphics anymore. I mean, if they were, Nvidia and AMD wouldn't have much more powerful cards coming out than what is in the consoles.

  • GK110 vs. 7970 (Score:3, Interesting)

    by Anonymous Coward on Tuesday February 19, 2013 @10:57AM (#42945171)

    Hmm. $999 for 4.5 TF/s vs. $399 for 4.3 TF/s from AMD Radeon 7970. Hard to choose.

    • Re: (Score:2, Insightful)

      Hmm. $999 (2013) for 4.5 TF/s vs. $15 million (1984) for 400 MF/s from Cray-XMP. Hard to believe.
      • Are you measuring acceleration of calculations? TF already contains a time unit.

      • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday February 19, 2013 @11:30AM (#42945587) Journal

        Hmm. $999 (2013) for 4.5 TF/s vs. $15 million (1984) for 400 MF/s from Cray-XMP. Hard to believe.

        This is why I've stopped buying hardware altogether and am simply saving up for a time machine... Importing technology from the future is, by far, the most economically sensible decision one can make.

        • Why wait? Can't you put some money in a high-interest long-term account and just stick an advert in something that will get archived, say Craigslist?

        • by eth1 ( 94901 )

          Hmm. $999 (2013) for 4.5 TF/s vs. $15 million (1984) for 400 MF/s from Cray-XMP. Hard to believe.

          This is why I've stopped buying hardware altogether and am simply saving up for a time machine... Importing technology from the future is, by far, the most economically sensible decision one can make.

          You're modded funny, but you don't need a time machine. I've been going through some older games (generally much better in terms of fun/gameplay) and indies, and by "importing" this tech from the past, I'm saving a bundle. And having a lot more fun than I used to with the big AAA titles.

    • Re:GK110 vs. 7970 (Score:4, Informative)

      by bytestorm ( 1296659 ) on Tuesday February 19, 2013 @11:48AM (#42945779)
      I think this new board does ~1.3TF of double-precision (FP64), whereas the Radeon 7970 does about 947GF, which, while not double, is a significant increase (radeon 7970 src [anandtech.com], titan src [anandtech.com]). They also state the theoretical FP32 performance is 3.79 TF for the Radeon 7970, which is lower than the number you gave. Maybe yours is OC, I didn't check that.

      tl;dr version, FP64 performance is 37% better on this board.
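
      A quick sanity check of that figure, plugging in the peak numbers quoted from the AnandTech links above (theoretical peaks, not measured throughput):

        titan_fp64 = 1.31e12    # ~1.3 TFLOPS FP64 quoted for the GTX TITAN (GK110)
        r7970_fp64 = 0.947e12   # ~947 GFLOPS FP64 quoted for the Radeon 7970 (Tahiti)

        print(titan_fp64 / r7970_fp64 - 1.0)  # ~0.38, i.e. roughly 37-38% higher peak FP64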
    • by Shinobi ( 19308 )

      Nvidia: Easy to use, easy to program for, good I/O capability, good real-world performance, hence their popularity in the HPC world.

      AMD: Awesome on paper. However, crap programming interfaces, Short Bus Special design in terms of I/O, and unless something's changed during the last month, it's STILL completely fucking retarded in requiring Catalyst Control Center and X RUNNING on the machine to expose the OpenCL interface (yeah, that's a hit in the HPC world.....)

      I'm going with Nvidia or Intel, thank you very much.

  • Serious stuff (Score:2, Informative)

    by Anonymous Coward

    And here I was, thinking that TI-83 has pretty cool graphics.

  • by Anonymous Coward

    Wow. 3x as many transistors as a Core i7 3960X? I guess the days are finally here when you buy your graphics card and then figure out what kind of system to add on to it, rather than the other way around.

    • by JTsyo ( 1338447 )
      GPU to CPU is not a 1 to 1 comparison.
      • So how much cache is on this chip?

    • I wonder what kind of yields Nvidia is getting... 3 times as many transistors as one of Intel's fancy parts, and on a slightly larger process (28 vs. 22nm), that's a serious slice of die right there.

      On the plus side, I imagine that defects in many areas of the chip would only hit one of the identical stream processors, which can then just be lasered out and discounted slightly, rather than something critical to the entire chip working. That probably helps.

      • by jandrese ( 485 )
        That is in fact exactly what they do. Usually everything but the highest end part will have at least one core disabled in hardware, that being the core that failed testing.
      • That's why it has the configuration it does (if some of the numbers look strange to you, it is because there are units disabled) and why it costs so damn much. Combine a low number of dies per wafer with probably high failure rates (since TSMC's 28nm process has quite a few issues) and you get a high cost per working part, and thus a high cost to consumers.

    • by Luckyo ( 1726890 )

      GPUs are about slamming more of the small cores into the package. Processing power scales almost linearly with the number of cores because of how parallelizable graphics calculations are.

      CPUs cannot do this. They need powerful generalist cores and their support structures instead. So they can't increase the number of cores and expect a linear increase in performance, and so they don't grow as big as GPUs.
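
      That intuition is basically Amdahl's law; here is a tiny sketch with made-up serial fractions (illustrative numbers, not measurements of any real chip):

        def speedup(cores, serial_fraction):
            # Amdahl's law: the serial part of the work doesn't get faster with more cores.
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        # Graphics-style workload: almost perfectly parallel, so piling on cores keeps paying off.
        print(speedup(2688, 0.001))  # ~729x
        # Typical CPU workload: even 10% serial code caps the benefit hard.
        print(speedup(16, 0.10))     # 6.4x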

  • "will easily be the most powerful single-GPU powered graphics card available when it ships"

    Yep, for the first week or two. I'll stick with my 670 that runs BF3 at max settings with 50+ FPS. A graphics card like the Titan is as useless as Anne Frank's drumset for the typical gamer.
    • by l3v1 ( 787564 )
      "for the typical gamer"

      "targeting towards HPC (high performance computing) and the GPGPU markets"

      Nuff said.
  • by dtjohnson ( 102237 ) on Tuesday February 19, 2013 @11:20AM (#42945439)

    Software (other than games) that can actually benefit from this type of hardware is scarce and expensive. This $1000 card will probably be in the $5 bargain box at the local computer recycle shop before there is any significant software in widespread use that could put it to good use.

    • by Zocalo ( 252965 )
    • There's plenty of software in fairly widespread use already that can use this much power, although whether you class it as "significant" or not probably depends on your field. You do need to think beyond rendering pretty pictures on a screen at high framerates, at which it's obviously going to excel, though. I'm more curious how these cards will stack up for stuff like transcoding production quality video (I can flatten my current card with Sony Vegas), running the numerous @Home type distributed computing projects, and so on.
    • The software isn't coming because until recently it's been hard to program to take advantage of the hardware. When I can use Python to interact with this hardware then the software will come from people like me.
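
      For what it's worth, something like that is already possible from Python with PyCUDA. A minimal sketch, assuming PyCUDA is installed and an NVIDIA card with working drivers is present (the kernel and array size are made up for illustration):

        import numpy as np
        import pycuda.autoinit                      # creates a CUDA context on the default device
        import pycuda.gpuarray as gpuarray
        from pycuda.elementwise import ElementwiseKernel

        # A trivial elementwise kernel, out = a*x + y, run across the GPU's cores.
        saxpy = ElementwiseKernel(
            "float a, float *x, float *y, float *out",
            "out[i] = a * x[i] + y[i]",
            "saxpy")

        n = 1 << 20
        x = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))
        y = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))
        out = gpuarray.empty_like(x)

        saxpy(np.float32(2.0), x, y, out)
        print(out.get()[:5])                        # copy results back to the host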

    • Software (other than games) that can actually benefit from this type of hardware is scarce and expensive.

      I write software that can actually benefit from this type of hardware.

    • by JBMcB ( 73720 )

      Adobe Photoshop, AfterEffects and Premiere. Pretty much every modern video encoder and decoder. Pretty much every on-line computing initiative (BOINC, SETI@home, Folding@home, Bitcoin)

      Wolfram Mathematica. MATLAB/Simulink. Arcview. Maple. Pretty much all simulation/engineering/visualization software (Ansys, OrCad, NX, etc...)

      Pretty much every 3D and compositing package in existence (3ds Max, Maya, Softimage, Mud, Flame, Smoke, Media Composer, VRay, DaVinci, BorisFX, Red, Nuke, Vegas, Lightwave, Cinema4D)

      • Amen, brother. I even make a sort of game out of testing what software actually runs on CUDA cards. Last time I checked, Netbeans 7.3 RC 1 (or was it 2?) with Java 8 early-release ran flawlessly. With all the Maven projects I tend to have open simultaneously, it saves me about 2 GB of RAM. And then some Core i7 computing power, much needed elsewhere.
    • >Software (other than games) that can actually benefit from this type of hardware is scarce and expensive.

      Hence it is targeted at the HPC market. My Master's Degree, incidentally, is in HPC... we're quite used to having to deal with weird and flaky hardware and dev environments to get our code to run.

      > This $1000 card will probably be in the $5 bargain box at the local computer recycle shop before there is any significant software in widespread use that could put it to good use.

      When the Cell processor

  • by account_deleted ( 4530225 ) on Tuesday February 19, 2013 @11:31AM (#42945605)
    Comment removed based on user account deletion
    • by mc6809e ( 214243 )

      I thought that most HPC users needed double-precision maths.

      Why, then, would a card aimed at the HPC market have so many single-precision cores alongside the double-precision cores?

      I'm not sure it has separate DP cores along side SP cores.

      It's possible that the double-precision features of the card are made possible by first taking the outputs of the single-precision circuits and then building on that so that there is no separate DP core -- just extra circuitry added to the SP cores.

      • by mc6809e ( 214243 )

        Note that the number of single cores divided by double cores is exactly 3.

        2688/896 = 3

        It isn't too much of a stretch to assume that NVIDIA has figured out a way to use 3 SP cores to make a DP core.

    • Most electromagnetics applications that I have experience with don't actually need double precision, but scientists tend to use double precision anyway because they don't want to hassle with making sure that numerical issues aren't hurting them with single precision. If you have the time, you can try to characterize how much precision you need, and write your application mostly in single precision, with double precision in any critical places that require it. Often the measurement errors in the data you're working with are bigger than the single-precision rounding error anyway.

    • by enjar ( 249223 )

      This card isn't marketed at the HPC crowd. The Tesla line is the one that's marketed at HPC, and the Tesla line has the better double precision performance.

      From reading the announcement, they are using the fact that the Titan supercomputer runs nVidia GPUs and they want to pick up a little "halo effect" from that. In reality, it's kinda like the "stock" cars that run in NASCAR. The car may purport to be a Chevrolet, Ford, Dodge or Toyota, but there's no option to pick one up at your local dealer that is anything like the car on the track.

  • And I'd say it's way overpowered. Right now, I can play BF3 and Eve simultaneously with no problems. I got it for future-proofing my gaming needs. Hardware has to be ahead, though. If it wasn't, gamers would be in a constant cycle of upgrading hardware. By getting the latest/greatest, I've seen that I can go about 5 years before needing an upgrade to stay current.

    • by Luckyo ( 1726890 )

      Try running at 2160p at 120FPS for 60FPS 3D on the highest settings. It's been tried. Result: you need 2x680 in SLI not to get severe FPS drops.

  • 7.1B transistors in 551mm^2? That's atrociously low transistor density.

    Most of us probably use things that are 1/8th the size with 16B+ transistors on them. You probably know them as little 32GB+ memory cards.

    The thing is - memory devices (all memory - flash (NOR/NAND), RAM (SRAM/DRAM) etc) are the most transistor-dense things around - their sheer density makes it so that they're limited by how much silicon area they can use - if you double the silicon area, you double the storage. Moore's law helps
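
    The density gap in rough numbers, using the 7.1B / 551 mm^2 figure from the summary and the memory-card figures above (the card's die area is a guess at roughly 1/8 of GK110's, per the comment):

      gk110_transistors = 7.1e9
      gk110_area_mm2    = 551.0
      flash_transistors = 16e9          # the "16B+ transistors" memory-card figure above
      flash_area_mm2    = 551.0 / 8     # "1/8th the size", per the comment (an estimate)

      print(gk110_transistors / gk110_area_mm2)  # ~13 million transistors per mm^2
      print(flash_transistors / flash_area_mm2)  # ~230 million per mm^2, roughly 18x denser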

    • by slew ( 2918 )

      Even silicon area isn't that impressive - a good dSLR will have a camera sensor with a large silicon area. Hell, there are FPGAs with just as big silicon dies as well.

      Not all technologies are comparable...

      Canon's APS-H sensor is 550mm^2; it has ~120Mpixels, which at about 4 transistors/pixel isn't very dense (it's mostly photodiode area).

      State of the art DRAM is 4Gbit which at about 2 transistors/bit is only 8G transistors.
      Samsung's leading edge NAND flash chip is the TLC (triple level cell or 3bits/transistor) 17.2B transistor chip (about 48Gibits and generally high-density flash drives are built out of several lower-density chips and a flash drive controller chip).

      Intel's Ivy

    • by zlives ( 2009072 )

      I feel like I have learned something... what are you doing posting on Slashdot :)

    • You've made a long and complex argument that the transistor density is low by comparing it against other types of chips. The density is limited by heat dissipation. A DRAM only accesses a few bits at a time, so even accounting for refresh, something like thousands of those billions of transistors are active at the same time. In a GPU most of the transistors are for processing paths that will be used in parallel, and many separate memories that will be accessed independently. Orders of magnitude more transistors are switching at once, which is what caps the density.

  • by chrysrobyn ( 106763 ) on Tuesday February 19, 2013 @12:40PM (#42946389)

    With a make up of 7.1 billion transistors and a 551 mm^2 die size, GK110 is very close to the reticle limit for current lithography technology!

    I believe there are two modern lithography lens manufacturers, one at 32mm x 25mm and the other at 31mm x 26mm, although I'm having trouble finding publicly available information to confirm that. Either way, 800 mm^2 is the approximate upper bound of a die size, minus a bit for kerf, which can be very small. Power7 [wikipedia.org] was a bit bigger. Tukwila [wikipedia.org] was nearly 700 mm^2. Usually chips come in way under this limit and get tiled across the biggest reticle they can. A 6mm x 10mm chip might get tiled 3 across and 4 up, for example.
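
    A quick illustration of that tiling arithmetic, using the 32mm x 25mm field mentioned above and a hypothetical 10mm x 6mm die (kerf ignored for simplicity):

      reticle_w, reticle_h = 32.0, 25.0   # mm, one of the exposure field sizes mentioned above
      die_w, die_h         = 10.0, 6.0    # mm, hypothetical small chip

      across = int(reticle_w // die_w)    # 3 across
      up     = int(reticle_h // die_h)    # 4 up
      print(across, up, across * up)      # 3 4 12 dies per exposure field

      # GK110 at 551 mm^2 is roughly 23mm x 24mm, so only one die fits per field.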

  • Can someone clarify if the 4.5 TF is single or double? The hardware leads me to believe it is single.

    4.5 TF is a huge number for SP, but unless I'm doing the math wrong, their DP flops are really low, less than 2 TFLOPS. Single precision is great for games, but double precision is generally the supercomputer benchmark (LINPACK).

    Thanks in advance.
