Real-time Raytracing For PC Games Almost A Reality

Vigile writes "Real-time raytracing has often been called the pinnacle of computer rendering for games but only recently has it been getting traction in the field. A German student, and now Intel employee, has been working on raytraced versions of the Quake 3 and Quake 4 game engines for years and is now using the power of Intel's development teams to push the technology further. With antialiasing implemented and anisotropic filtering close behind, they speculate that within two years the hardware will exist on the desktop to make 'game quality' raytracing graphics a reality."
  • by ackthpt ( 218170 ) * on Friday September 21, 2007 @11:53AM (#20698391) Homepage Journal

    Or is it? Simply means games will appear more eye-candy than they currently are. Gameplay will not change. EA will continue to take last year's sports game, throw some new people into it, perhaps introduce some bug which makes it unusable, and peddle it as The New Deluxe Edition. I wonder how many geometric objects it will be able to handle (and whether it handles transparency with textures and patterns well). Having done a bit of raytracing I'm familiar with how quickly things can bog down. It'll probably be a bit clunky at first, but get much better as horsepower and the horsepower/dollar ratio improve.

    There was some game I played on an Amiga (got that? A really old computer) where I raced around in an aircar zapping stuff (some bastard borrowed the game and I've never seen it since!) Very nicely rendered graphics, beautiful even, nearly looked ray-traced. Must have been about 15 years ago.

    While I look forward to more realistic, or creative and beautiful gamescapes, do keep in mind -- we were all blown away by the first high quality animated films, now almost everything animated is rendered, raytraced, etc. and there's a lot of junk out there now. So this will be exciting for about 2 years then become "meh".

    Lastly, they've got to get the motion down. Characters in games, including sports, look so damn wooden in their movement! That's where real improvement needs doing.

    • by BillBrasky ( 610875 ) on Friday September 21, 2007 @11:58AM (#20698507)
      True, raytracing by itself will not make gameplay any better, nor animation better. However, it should make some visual effects that are hard today (shadows, reflections) simple. Hopefully, this will free up developers to work on other things instead of 'getting the shadows right'. http://en.wikipedia.org/wiki/Raytracing#Advantages_of_ray_tracing [wikipedia.org]
      • by slew ( 2918 ) on Friday September 21, 2007 @12:27PM (#20698913)

        True, raytracing by itself will not make gameplay any better, nor animation better. However, it should make some visual effects that are hard today (shadows, reflections) simple. Hopefully, this will free up developers to work on other things instead of 'getting the shadows right'.


        I'll have to disagree with that. For many people, "right"-looking shadows are the ones they see in movies and television shows. Shadows and light/dark interplay in those productions are far from natural, and even in ray-traced environments, animators laboriously juggle "fake" light sources to make the shadows look "right".

        Also, "single" bounce reflections are essentially "solved" problems with triangle rendering (environment maps), so the only real advantages of ray tracing are "multi-bounce" and "self-shadowing", which are somewhat easier to solve in a ray-traced environment than in a triangle-rendered one. Although these are sometimes interesting effects, they generally fall on the "eye-candy" side of the fence today and developers rarely spend much time on them (or so we hope, given the state of game-play and AI in today's games); they generally just implement canned solutions (e.g., some self-shadowing bump-map pixel shader technique) for certain "effects".
        • by lawpoop ( 604919 )

          I'll have to disagree with that. For many people, "right"-looking shadows are the ones they see in movies and television shows. Shadows and light/dark interplay in those productions are far from natural, and even in ray-traced environments, animators laboriously juggle "fake" light sources to make the shadows look "right".

          So then, in a ray-traced environment, couldn't developers just install virtual stage lights in the environment to re-create TV and movie lighting in the gameplay? Sort of the same way that Nintendo made the Zelda game look like it was animated?

          • Re: (Score:3, Interesting)

            by slew ( 2918 )

            So then, in a ray-traced environment, couldn't developers just install virtual stage lights in the environment to re-create TV and movie lighting in the gameplay? Sort of the same way that Nintendo made the Zelda game look like it was animated?

            In case that wasn't clear in my response, developers do use virtual stage lights to make shadows look good in ray-traced environments (just like they do it in triangle rendered environments).

            The time spent is in tweaking the location of those virtual lights to get sha

            • Re: (Score:3, Insightful)

              by lawpoop ( 604919 )
              I see what you're saying now -- technology (in this case, neither rendering nor ray-tracing) does not give us "art for free" -- you still need animators, voice actors, lighting, set-makers, etc. etc. It just gives us another venue to perform art, which still takes the same amount of time.
        • by *weasel ( 174362 ) on Friday September 21, 2007 @01:05PM (#20699539)
          The canned solutions include precalculated light maps, mostly-static light sources and level designs that are carefully constructed to limit overdraw. The push for raytracing is more about removing the drawbacks of the current 'solutions', than notably improving eye candy.

          E.g. raytracing solutions will free up developers to implement more-dynamic scenes, more-dynamic lights and level designs where buildings and cities aren't glorified mazes where 90% of the architecture is an impenetrable facade.
          (Sure, some titles feature those sorts of things now - but they're expensive tricks, with severely limited implementation)

          • by timeOday ( 582209 ) on Friday September 21, 2007 @05:38PM (#20705099)
            Could you go further into why raytracing is better for deformable terrain (including buildings etc)? I think static environments are one of the most glaring problems of simulated environments.
            • Re: (Score:3, Informative)

              by Zerth ( 26112 )
              Raytracing allows you to make models from an equation or a function, with resolution limited by processing power rather than polygon count; such models are solid on the inside and can react to physics procedurally.

              Want to blow a hole in that wall? Instead of having destructible sections pre-modeled, you just boolean-subtract the shape of the explosion (see the sketch below). Is it a brick wall, so the hole should have jaggy bits? Just run a greebling [wikipedia.org] algorithm on the edges.

              Want to have breakable glass? Instead of having to make all your windows o
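              A toy sketch of that boolean-subtraction idea using signed distance functions and sphere tracing, which is one common way to do CSG inside a raytracer. Everything here (the slab "wall", the spherical blast, all the numbers) is made up purely for illustration:

                #include <algorithm>
                #include <cmath>
                #include <cstdio>

                struct Vec3 { float x, y, z; };
                static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
                static float len(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

                // Signed distance to a solid slab (the "wall"): negative inside, positive outside.
                static float wallSDF(Vec3 p)  { return std::fabs(p.z) - 0.5f; }

                // Signed distance to the blast volume: a unit sphere centred where the shot landed.
                static float blastSDF(Vec3 p) { return len(sub(p, {0.0f, 0.0f, 0.0f})) - 1.0f; }

                // CSG difference, wall minus blast: points inside the blast are carved out.
                static float sceneSDF(Vec3 p) { return std::max(wallSDF(p), -blastSDF(p)); }

                // Sphere tracing: march along the ray by the current distance bound until the
                // surface is reached (distance ~ 0) or we give up.
                static bool trace(Vec3 o, Vec3 d, float& tHit) {
                    float t = 0.0f;
                    for (int i = 0; i < 128; ++i) {
                        Vec3 p = {o.x + d.x * t, o.y + d.y * t, o.z + d.z * t};
                        float dist = sceneSDF(p);
                        if (dist < 1e-4f) { tHit = t; return true; }
                        t += dist;
                        if (t > 100.0f) break;
                    }
                    return false;
                }

                int main() {
                    float t;
                    // A ray aimed at the middle of the wall now flies straight through the hole...
                    std::printf("centre ray: %s\n", trace({0, 0, -5}, {0, 0, 1}, t) ? "hit" : "hole");
                    // ...while a ray aimed away from the blast still hits solid wall.
                    std::printf("offset ray: %s\n", trace({3, 0, -5}, {0, 0, 1}, t) ? "hit" : "hole");
                    return 0;
                }

              The appeal is exactly what the post describes: the hole does not have to be pre-modeled, since the same subtraction works for any blast shape you can write a distance function for.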
        • Re: (Score:3, Interesting)

          by uhlume ( 597871 )
          Where's my -1, Improper/Excessive Use Of "Scare" Quotes mod?

          This isn't a semi-literate junior high textbook, you don't need to highlight the important terms for us -- we're perfectly capable of figuring those out from context, thanks.

          But yes, you're absolutely right about the necessity of lighting design to create dramatic lighting even with raytraced rendering. Most modern 3d-accelerated raster technologies are similar enough to raytracing in their effect that environmental lighting workflows shouldn't cha
      • Re: (Score:3, Insightful)

        by Artraze ( 600366 )
        Last I checked, that's the whole point of rendering engines, like Quake 3 and so many others. While they may end up needing modifications for maximum performance, I would be amazed if this didn't as well. Oh sure, maybe in 10 years when we have full hardware ray tracing and hypertransport based physics processors that will alleviate the need to spend so much time performance tuning. Until then though, this is not going to be any better than any other engine. From a developer's perspective at least. It
      • by renoX ( 11677 )
        >this will free up developers to work on other things instead of 'getting the shadows right'.

        Uhm, no, I doubt that these 'real-time' ray tracers do environment mapping, so they will give 'hard shadows' which are still incorrect, so they will still have to work on them to get them right...

        One downside of many ray tracers is that they work better on static environments, so the eye-candy has a price...
    • by Lije Baley ( 88936 ) on Friday September 21, 2007 @12:03PM (#20698563)
      It's like what I used to say about pushing higher resolutions for television: Ten minutes into a GOOD show or movie and people are no longer conscious of the fact that they are watching it on a 12-inch black and white set.
      • This may be true, but if you presented the same show side by side with one in SD on a 12" B&W TV and the other in HD on a 42" 1080p HDTV, I'm sure every one of them would rather watch the High Definition version. It's like watching a good football game on TV; given two good games to watch, if one is in HD and one is not, I'll be watching the HD broadcast.
        The relation to games is that we'd always rather play the game with more realistic looking graphics. How many times do I hear gamers say, wow the gam
    • Re: (Score:2, Interesting)

      The more the hardware can do for you, the fewer developer resources you need to spend on getting shadows and reflections to look good. Spending fewer developer resources on BS means you can spend more on things like improving gameplay. Maybe EA won't do it (they don't strike me as a very innovative company anymore), but somebody will.

    • Simply means games will appear more eye-candy than they currently are. Gameplay will not change.

      Untrue! Ray Tracing is a much more flexible method of rendering than previous engines have allowed. Many engines have claimed features like "destructible levels and terrain", but the engines were never fast enough to give both the eye candy demanded by the market and an engine capable of such free-form interaction. Ray Tracing could change all that. Programmers could no longer be limited by BSP trees, visibility trees, polygon count, and other requirements imposed on traditional engines.

      Graphics-wise, ray tracing could open new doors as well. For example, 3D adventure games haven't really taken off because it's harder to insert clues in the areas. A painting on a wall, for example, will tend to be slightly too blurry to see a clue embedded in it in a true 3D environment. Ray tracing allows for more precise rendering that would make the painting crystal clear from all perspectives and distances. Which means that the game designer could actually make it visible that the subject of the painting is pointing at a hidden door without making it so obvious that it destroys the enjoyment of the puzzle.

      What I'm getting at is that graphics improvements have been one of the factors that have allowed game creators to explore new game genres in the past. While the 3D-age has often focused on rendering quality to the point of forgetting the purpose of graphical improvements, that's not to say that a major switch in technologies couldn't bring new gaming experiences with it.
      • by Goaway ( 82658 )

        Programmers could no longer be limited by BSP trees, visibility trees, polygon count, and other requirements imposed on traditional engines.
        Indeed. They would instead be limited by the other kinds of data structures and algorithms you need to make raytracing in realtime feasible.
        • They would instead be limited by the other kinds of data structures and algorithms you need to make raytracing in realtime feasible.

          Of course. Such is the nature of the beast. The key is that it will be a different set of limitations. The key features (e.g. high detail, large number of objects, simplified lighting) are present in nearly every real-time raytracing engine I've seen. Which means that these features are something programmers will most likely be able to count on. :-)

      • Many engines have claimed features like "destructible levels and terrain", but the engines were never fast enough to give both the eye candy demanded by the market and an engine capable of such free-form interaction.
        I thought Red Faction was pretty good at that, and the system requirements were not too high.
      • by MenTaLguY ( 5483 )

        Ray Tracing could change all that. Programmers could no longer be limited by BSP trees, visibility trees, polygon count, and other requirements imposed on traditional engines.

        Aren't such issues also relevant to raytracing? Models are still going to be polygon-based, and unless you plan on doing a linear search over all the polygons in the scene for every ray emitted, you'll still need a spatial index (BSP, etc.) to speed up ray/polygon intersection tests.
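        A minimal sketch of the kind of spatial index being discussed, using a bounding volume hierarchy rather than a BSP, and spheres standing in for triangles purely to keep the intersection math short. All names, structures and numbers here are illustrative, not taken from any real engine:

          #include <algorithm>
          #include <cmath>
          #include <cstdio>
          #include <memory>
          #include <optional>
          #include <utility>
          #include <vector>

          struct Vec3 { float x, y, z; };
          static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
          static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
          static float comp(Vec3 v, int i) { return i == 0 ? v.x : i == 1 ? v.y : v.z; }

          struct Ray    { Vec3 origin, dir; };           // dir assumed normalized
          struct Sphere { Vec3 center; float radius; };  // stand-in for a triangle
          struct AABB   { Vec3 lo, hi; };

          // Nearest positive ray/sphere intersection distance, if any.
          static std::optional<float> hitPrim(const Ray& r, const Sphere& s) {
              Vec3 oc = sub(r.origin, s.center);
              float b = dot(oc, r.dir);
              float c = dot(oc, oc) - s.radius * s.radius;
              float disc = b * b - c;
              if (disc < 0.0f) return std::nullopt;
              float t = -b - std::sqrt(disc);
              if (t <= 0.0f) return std::nullopt;
              return t;
          }

          // Standard slab test; IEEE infinities handle zero direction components,
          // except the corner case where the origin lies exactly on a slab plane.
          static bool hitBox(const Ray& r, const AABB& b) {
              float tmin = 0.0f, tmax = 1e30f;
              for (int i = 0; i < 3; ++i) {
                  float inv = 1.0f / comp(r.dir, i);
                  float t0 = (comp(b.lo, i) - comp(r.origin, i)) * inv;
                  float t1 = (comp(b.hi, i) - comp(r.origin, i)) * inv;
                  if (t0 > t1) std::swap(t0, t1);
                  tmin = std::max(tmin, t0);
                  tmax = std::min(tmax, t1);
                  if (tmin > tmax) return false;
              }
              return true;
          }

          // One node of a bounding volume hierarchy: an internal node has children,
          // a leaf holds a handful of primitives.
          struct BVHNode {
              AABB bounds;
              std::unique_ptr<BVHNode> left, right;
              std::vector<Sphere> prims;                 // non-empty only at leaves
          };

          // Whole subtrees whose bounding boxes the ray misses are skipped entirely;
          // with a reasonably balanced tree that pruning is what gives the roughly
          // logarithmic per-ray cost being argued about in this thread.
          static void closestHit(const Ray& r, const BVHNode& n, std::optional<float>& best) {
              if (!hitBox(r, n.bounds)) return;
              for (const Sphere& s : n.prims) {
                  auto t = hitPrim(r, s);
                  if (t && (!best || *t < *best)) best = t;
              }
              if (n.left)  closestHit(r, *n.left,  best);
              if (n.right) closestHit(r, *n.right, best);
          }

          int main() {
              // Tiny hand-built two-leaf tree, just to exercise the traversal.
              BVHNode root;
              root.bounds = {{-2, -1, -1}, {2, 1, 6}};
              root.left  = std::make_unique<BVHNode>();
              root.left->bounds = {{-2, -1, -1}, {0, 1, 6}};
              root.left->prims  = {{{-1, 0, 5}, 0.5f}};
              root.right = std::make_unique<BVHNode>();
              root.right->bounds = {{0, -1, -1}, {2, 1, 6}};
              root.right->prims  = {{{1, 0, 3}, 0.5f}};

              std::optional<float> best;
              closestHit({{1, 0, 0}, {0, 0, 1}}, root, best);   // the left subtree is pruned
              if (best) std::printf("nearest hit at t = %.2f\n", *best);
              else      std::printf("no hit\n");
              return 0;
          }

        A real engine would use triangles (with a test like Möller-Trumbore) and build the tree over the whole scene rather than by hand, but the traversal, and the reason some spatial index is unavoidable, looks the same.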

    • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Friday September 21, 2007 @12:33PM (#20698979)
      Or is it? Simply means games will appear more eye-candy than they currently are. Gameplay will not change. EA will continue to take last year's sports game, throw some new people into it, perhaps introduce some bug which makes it unusable, and peddle it as The New Deluxe Edition. I wonder how many geometric objects it will be able to handle (and whether it handles transparency with textures and patterns well). Having done a bit of raytracing I'm familiar with how quickly things can bog down. It'll probably be a bit clunky at first, but get much better as horsepower and the horsepower/dollar ratio improve.

      With raytracing, there are lots of new possibilities. For one thing, reflection and refraction actually work like they do in real life. That means accurate mirrors, lenses, and water refraction. Lights can work accurately if you want them to, and radiosity can be precomputed for static scenes. That may just be eye candy to most people, but there are potentially game-play enhancements that make real-life optics part of the game. Most of it (except good lenses) has been faked before with rasterization, but raytracing will actually let you set up a series of mirrors and telescopes to peek around corners in an FPS, for instance. I can imagine a true hall of mirrors in an FPS would be at least a little more interesting than what we have now, too.

      The other big technological benefit of raytracing is that it's asymptotically faster than rasterization. Raytracing is O(log n) versus O(n) for rasterization, which means that even though raytracing is currently slower (the constants involved in raytracing are higher), after the break even point is passed much less of the available computational power will be needed to render the scene and can instead be used for physics and AI.
      • Re: (Score:3, Interesting)

        by joshv ( 13017 )
        "Raytracing is O(log n) versus O(n) for rasterization, which means that even though raytracing is currently slower (the constants involved in raytracing are higher), after the break even point is passed much less of the available computational power will be needed to render the scene and can instead be used for physics and AI."

        Not disagreeing with you here, but what's "n"?
        • It's the number of objects in this case. As the AC said, that could mean triangles. Or it could mean spheres, cubes, cylinders, ellipsoids, and other mathematically describable geometries. What shapes are supported depends on the actual rendering engine itself. Some computations are stupidly simple for raytracers (e.g. perfect spheres) while others are slightly more computationally intensive (e.g. ellipsoids). Thus depending on the tradeoffs of the engine, 'n' could represent the total number of polygons in
        • by Kelbear ( 870538 )
          http://en.wikipedia.org/wiki/Big_O_notation [wikipedia.org]

          Hell if I know. But as a failure in both math and programming, I can tell you that I think it is a description of the efficiency of the formula that the workload is going through. I think he means to say that the gains in efficiency will increase relative to rasterization as the detail of the scene increases.

          • by joshv ( 13017 )
            "I think he means to say that the gains in efficiency will increase relative to rasterization as the detail of the scene increases."

            I am assuming that's what he meant as well. If that's the case, and we define "n" as some metric of scene complexity, then I'd like to see some support of the claim that rasterization is O(n) and raytracing is O(log(n)).
        • In general in Big-O notation, "n" is whatever the algorithm scales with. In this specific instance, I think 'n' might be pixels. The processing power required to perform ray-tracing scales with the log of the number of pixels, while power needed for rasterization scales linearly with the number of pixels.
        • Not disagreeing with you here, but what's "n"?


          N is the number of graphics primitives in the scene (usually triangles, but raytracing can also use more complicated primitives such as spheres, cylinders, and boxes). For triangle-based scenes, the turnover point between rasterization and raytracing is believed to be somewhere between 10,000,000 and 100,000,000 triangles. Current game levels are often in the 10,000,000 triangle range.
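          To put rough numbers on that asymptotic argument (the constants c_r and c_t and the per-frame ray count p below are purely illustrative, not measured; real engines differ enormously):

          \[ W_{\mathrm{raster}} \approx c_r \, n, \qquad W_{\mathrm{ray}} \approx c_t \, p \, \log_2 n \]
          \[ \log_2(10^7) \approx 23.3, \qquad \log_2(10^8) \approx 26.6 \]

          So growing a scene from 10 million to 100 million triangles costs a rasterizer roughly 10x more work, but a raytracer walking a balanced acceleration structure only about 26.6 / 23.3 ≈ 1.14x more per ray, which is why a break-even point has to exist somewhere.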
    • by Skevin ( 16048 )
      > EA will continue to take last year's sports game

      But I envision that the first forays into real-time raytracing will give me some more flexibility in my sports games. I'm going to design a sports team, The Primitives, with the following line-up:

      Peter "The Plane" Pizzorni
      Colin "The Cone" LaMonde
      Samuel "The Sphere" Tomali
      Kyle "The Cube" Cayso
      Terrell "The Teapot" Tyson

      One of the first matches will be against the Nurbs, but the whole point of the game is to really hawk my new line of athletic wear, CSG: "Co
    • by Yvanhoe ( 564877 ) on Friday September 21, 2007 @05:51PM (#20705279) Journal
      While I am not sure that realtime raytracing will really be the next big thing, I think there are unintended consequences you overlooked.

      Today, most CG effects must be hard coded, using tricks, shaders, complex modeling techniques, multiple passes, etc... In the raytracing world, as you are aware, the engine is easier to use, and I would also say easier to code. It is also very easy to parallelize (so a specialized card could bring HUGE performance gains) and requires little modeling tweaking compared to the current T&L world. In a raytracer, shadows (including self-shadowing), reflections, refractions, bump mapping, displacement mapping, etc... are an integral part of the renderer, not a lot of different modules stacked on top of each other. Bringing down the complexity of the rendering engine hopefully frees more resources to work on other parts of the game.
  • Give me gameplay. (Score:5, Insightful)

    by xC0000005 ( 715810 ) on Friday September 21, 2007 @11:55AM (#20698445) Homepage
    I grew up with video games where the blob of pixels barely resembles anything. The power of gameplay, lasting gameplay far outstrips graphics. Not that a little eye candy hurts. I guess the core problem is that nothing Intel produces can run time optimize "Lair" into "Tetris" or otherwise correct for this.
    • ...can run time optimize "Lair" into "Tetris" or otherwise correct for this.

      Oh...so a game worth buying then?
    • Re: (Score:3, Insightful)

      by king-manic ( 409855 )
      The power of gameplay, lasting gameplay far outstrips graphics

      Arcade Pac-Man was awesome gameplay for its time. I doubt I could stand more than 10 minutes of it now. Super Mario Brothers was awesome for its time. I doubt I could ever finish it again without being bored silly. Final Fantasy 6 was awesome for its time. I could still play it all the way through once a year. But my younger brother gets bored to tears. Gameplay dates itself too. We suffer from nostalgia, you and I. Gameplay is fun. Eye candy is fun.
      • by mosch ( 204 )
        Super Mario was released recently as a Wii download.

        It's *awesome*. And good for way more than ten minutes.
      • by Hatta ( 162192 )
        I couldn't stand more than 10 minutes of pacman when it was new. I couldn't afford more than 10 minutes of pacman when it was new either. I still play SMB fairly regularly. On the original hardware even. I never played Final Fantasy when it was current, but I've played FF1 and FF2J in the past couple years and had a blast. I'm currently playing Morrowind, just before that I finished Lunar for the Sega CD. Lunar was a lot more fun. I don't suffer from nostalgia, I enjoy it.
    • I don't see why most Slashdotters think that if a company does decent graphics they cannot have good gameplay. Sure, there are a lot of crap games that come out, but this was true in the past too. Graphics have been going up, but I can't say that gameplay has necessarily been going down. There were plenty of Genesis/Nintendo games I simply didn't find to be fun. In any case, improved graphics in the last ten years have allowed for more diverse, immersive, and heart-wrenching games.

      PS2+ games are t
      • I don't see why most Slashdotters think that if a company does decent graphics they cannot have good gameplay.

        The argument is that game designers and artists come from the same budget, so if a company invests heavily in one, the other suffers.

      • I have never understood the race to photorealism in games. Perhaps it's for those back-of-box screenshots ("from a version you'll never own"). Better graphics are nice, but they swap the player's imagination for visual detail. Games companies do this, diverting programming resources from what a game plays like to what a game looks like, without realising that there's a "+5, imagination" gameplay boost that comes from believing that the collection of bad sprites on screen is humanity's last chance for
      • by Hatta ( 162192 )
        Experience mostly. The flashiest games tend to have the most mundane game play. The designers are relying on you being distracted by shiny things and not noticing that the game is pretty bad. It's easier to get fancy graphics than good gameplay too. You just have to pump a lot of money into artists. Good gameplay isn't something you can manufacture like that.

        And good graphics don't have anything to do with the complexity of the game. Look at Nethack. It's all done in an 80x24 console, but it's a more
  • I'm sure Pixar and other rendering houses will leverage this to keep production costs down and get videos out to market quicker. Then you have side projects like GPGPU; if this raw power can be harnessed for other applications it could be a boon for researchers.
  • I ran some raytracers and man, just getting a scene to render was a pixel by pixel affair, watching the image slowly update on the screen. It blows me away how yesterday's "holy shit this is awesome!" prerendered animation becomes today's game engine and tomorrow's "meh, what else have ya got?"

    YouTube videos are still too low-res but I've seen some of the high-res renders of current games like Armed Assault [wikipedia.org]. Wow, takes your breath away. The only shortcoming for realism at this point is they're still having t
  • by SnoopJeDi ( 859765 ) <[snoopjedi] [at] [gmail.com]> on Friday September 21, 2007 @12:02PM (#20698551)
    ...but Q4RT [idfun.de] seems to have handicapped most of what makes the Doom 3 engine so impressive-looking to begin with. The reflection effects sure are nice, but it's a long way from making anything comparable to modern methods.

    Sure is interesting, all the same.
  • by dada21 ( 163177 ) <adam.dada@gmail.com> on Friday September 21, 2007 @12:07PM (#20698623) Homepage Journal
    I was a founder of Deep Productions [deeplabs.com], one of Chicago's first rendering farms, about 15 years ago. I recall having dozens of Pentium 60s (were they called Pentium Pros back then?) with 512MB of RAM (if I remember correctly) running a variety of rendering programs (usually 3D Studio, but others based on clients' needs). IIRC, a single raytraced frame took about 20 minutes. Two dozen machines churning full speed were able to render approximately 60 fields per hour, or 1 second of animation in an hour.

    I exited that market and Deep eventually moved out of that field entirely, but looking back, I can't believe we made the money that we made at the time. Now that ray tracing is getting closer to real time, it gives me a few minutes' pause to realize how much technology has changed in ways that the AVERAGE consumer has no understanding of -- and doesn't need to. In the end, I'm glad that so many entrepreneurs take risks so that consumers' needs (and yes, entertainment for some is a need) and wants are fulfilled, without those consumers even knowing the process necessary to get there.
    • by lawpoop ( 604919 )
      My prediction is that in 5-10 years, the average home computer will have a machinima software package that will produce Little Nemo quality or better animation with basically video-game controls. It would be a convergence of machinima and open-source rendering programs. I can't wait until any geek in the world can animate their feature-length sci-fi movie on their home computer!
  • by recoiledsnake ( 879048 ) on Friday September 21, 2007 @12:07PM (#20698627)

    Raytracing falls into a class of problems that are embarrassingly parallel. Want to render 2 million (~1920x1080) pixels? Send them to 2 million processors (cores) simultaneously and get the results back. This is possible because rendering each pixel is independent of rendering any other. Note that all the required data (like textures, lights, etc.) must be available to all the processors, so SETI-style high-latency computation is out of the question.

    What makes it interesting is that the gigahertz race is done with and has turned into a "core" race. Intel was already showcasing 80 cores on the same chip. A few cores dedicated to Phong shading algorithms and radiosity and the rest to ray tracing would simply overshadow current raster rendering. Also, raytracing is mathematically elegant and simple compared to all the dirty tricks employed by current graphics technology, so it should make programmers' lives easier (unlike the Cell processor, which is a nightmare to code for).
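    A minimal sketch of that per-pixel parallelism, splitting the frame across however many hardware threads are available. The tracePixel function is just a placeholder for the real per-ray work; nothing here comes from any particular engine:

      #include <algorithm>
      #include <cstddef>
      #include <cstdint>
      #include <cstdio>
      #include <thread>
      #include <vector>

      // Stand-in for the real work: in a raytracer this would build the primary
      // ray for pixel (x, y), walk the scene's acceleration structure and shade.
      static std::uint32_t tracePixel(int x, int y) {
          return static_cast<std::uint32_t>((x ^ y) & 0xFF) * 0x010101u;
      }

      int main() {
          const int width = 1920, height = 1080;
          std::vector<std::uint32_t> framebuffer(static_cast<std::size_t>(width) * height);

          // Embarrassingly parallel: every pixel is independent, so interleave the
          // rows across one worker per hardware thread. No locking is needed
          // because each pixel is written by exactly one worker.
          const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
          std::vector<std::thread> pool;
          for (unsigned w = 0; w < workers; ++w) {
              pool.emplace_back([&, w] {
                  for (int y = static_cast<int>(w); y < height; y += static_cast<int>(workers))
                      for (int x = 0; x < width; ++x)
                          framebuffer[static_cast<std::size_t>(y) * width + x] = tracePixel(x, y);
              });
          }
          for (std::thread& t : pool) t.join();

          std::printf("rendered %dx%d with %u workers\n", width, height, workers);
          return 0;
      }

    Rows are interleaved rather than handed out in one contiguous chunk per worker, so an expensive region of the image doesn't all land on a single thread.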

    • True, but not exactly relevant to this discussion. Conventional poly-based 3D rendering is parallel, too. SLI setups take advantage of that already.
      • SLI and Crossfire are parallel at the frame level and not at the pixel level.
        • Re: (Score:3, Interesting)

          by Guspaz ( 556486 )
          Incorrect. They typically support alternate-frame rendering (each card does every other frame) for games that are problematic, but the best performance is to be had with tile-based rendering. This is where the SLI setup splits the scene up into a number of tiles, and then the two cards render them all, splitting the load so that each card is working as hard as it can. This is effectively splitting on the pixel level, but in a bit larger chunks. I'm sure that's because whatever overhead is involved probably
        • SLI and Crossfire are parallel at the frame level and not at the pixel level.

          Not quite. SLI is parallel on the scanline-level (that's why it's called "scanline interleaving", remember?). Internally, GPU's themselves are highly parallel architectures by nature. One can think of tens of different ways to distribute rendering operations over parallel hardware, many of which are actually used by modern GPU's.
    • by blueg3 ( 192743 )
      Sort of. The pre-rendering work isn't trivially parallel. With a direct raytracing algorithm, it's parallel at the pixel level, but you lose some power if you do things like cache results, since you need to either compute these results multiple times (losing the cache benefit) or communicate them between processors.

      Still, it's true that raytracing parallelizes much more nicely than polygon-drawing.
  • Raytracing has no advantage over rasterizing for opaque surfaces. Rasterizers are faster there, since their performance is not tied directly to the screen resolution.
    The advantages lie in refraction/reflection/shadows/translucency, which are painful to implement with rasterizers.

    Thus, a hybrid seems to be the best idea. Rasterizer as default, with a special "shootray" instruction in the pixel shader.
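    A sketch of what that hybrid might look like from the shader's point of view. The shootRay call stands in for the proposed instruction, and everything here (the names, the G-buffer layout) is hypothetical, not an existing API:

      #include <cstdio>

      struct Vec3 { float x, y, z; };
      static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
      static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
      static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
      static Vec3  reflect(Vec3 v, Vec3 n){ return sub(v, scale(n, 2.0f * dot(v, n))); }

      // One texel of a G-buffer filled by an ordinary rasterization pass.
      struct GBufferTexel { Vec3 position, normal, albedo; bool mirror; };

      // Stand-in for the proposed per-pixel "shootray" instruction: given an origin
      // and a direction it would return the radiance arriving along that ray.
      using ShootRayFn = Vec3 (*)(Vec3 origin, Vec3 dir);

      // Hybrid shading: rasterize everything as usual, and only the pixels that
      // actually need a traced reflection pay for one.
      static Vec3 shade(const GBufferTexel& px, Vec3 viewDir, ShootRayFn shootRay) {
          if (!px.mirror)
              return px.albedo;                      // plain rasterized result
          Vec3 r = reflect(viewDir, px.normal);
          return shootRay(px.position, r);           // secondary ray into the scene
      }

      int main() {
          // Dummy "scene": every traced ray just sees a uniform sky colour.
          ShootRayFn sky = [](Vec3, Vec3) -> Vec3 { return {0.4f, 0.6f, 0.9f}; };
          GBufferTexel wall  {{0, 0, 5}, {0, 0, -1}, {0.8f, 0.2f, 0.2f}, false};
          GBufferTexel mirror{{1, 0, 5}, {0, 0, -1}, {1, 1, 1},          true };
          Vec3 a = shade(wall,   {0, 0, 1}, sky);
          Vec3 b = shade(mirror, {0, 0, 1}, sky);
          std::printf("wall   -> %.1f %.1f %.1f\n", a.x, a.y, a.z);
          std::printf("mirror -> %.1f %.1f %.1f\n", b.x, b.y, b.z);
          return 0;
      }

    The design point is simply that the expensive traced ray is opt-in per pixel, so the bulk of the frame still goes through the cheap rasterized path.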
    • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Friday September 21, 2007 @12:50PM (#20699231)
      Raytracing has no advantage over rasterizing for opaque surfaces. Rasterizers are faster there, since their performance is not tied directly to the screen resolution. The advantages lie in refraction/reflection/shadows/translucency, which are painful to implement with rasterizers.

      Actually, there's a big advantage. Raytracing is O(log n), but rasterization is O(n). OpenRT's demo [openrt.de] of a 350 million triangle model of a Boeing rendered in real time on a single PC (without GPU support) is a good example. The entire model doesn't even fit in memory, so visible surfaces are cached. The result is still realtime (although only a few FPS) with incredible detail. Go slashdot the server and watch the movie. Modern raster based cards can only render that many triangles in a whole second with all their fancy hardware, if they're lucky.
      • by ardor ( 673957 )
        This advantage is worthless for games. Games usually have moderate geometry complexity but *very* high demands on the fillrate. Raytracing has no advantage there. Besides, today's DX10-class hardware does not distinguish between vertex and pixel shaders, which is much harder to achieve with a raytracer.

        That said, an 8800 card CAN render 350 million triangles...
    • Not really. Ray-tracing is also very good for high-density scenes. Hundreds of thousands to millions of triangles are just fine with ray-tracing, whereas it would completely bog down a normal T&L rasterizer. This is because you really only need to do 1 ray per screen-pixel with raytracing vs. an astounding amount of operations for each vertex (and possibly each texel/fragment, as well) for a normal rasterizer.
  • Handhelds first? (Score:4, Interesting)

    by Floritard ( 1058660 ) on Friday September 21, 2007 @12:10PM (#20698663)

    if a certain configuration of hardware can render 1280x720 images at 30 frames per second, then that same hardware will be able to push 563 FPS at a resolution of 256x192 (which happens to be what the DS has).
    So why not make a handheld that can do real-time raytracing? Seems it would be easier to do. And that's a pretty good selling point to boast "better than PS3/360 graphics in the palm of your hand."

    And to the above posts bemoaning the focus on graphics over gameplay, remember if they get a good real-time raytracing system in place then that frees the dev team up quite a bit. No longer having to work so hard on faking proper lighting, they can then focus on the more important things like gameplay/AI/physics.
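    For reference, the 563 FPS figure quoted above is just linear scaling with pixel count:

    \[ \frac{1280 \times 720}{256 \times 192} = \frac{921600}{49152} = 18.75, \qquad 18.75 \times 30 = 562.5 \approx 563\ \mathrm{FPS} \]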
    • Re: (Score:3, Interesting)

      by smash ( 1351 )

      So why not make a handheld that can do real-time raytracing? Seems it would be easier to do. And that's a pretty good selling point to boast "better than PS3/360 graphics in the palm of your hand."

      It's just not worth it.

      These days, games are often ported from platform to platform with fairly portable code (ie, written in C with platform specific low level stuff in ASM if required).

      The second you put a raytracing platform out, every conventional raster graphics engine on the market becomes extremely

    • Re: (Score:3, Funny)

      by Slashcrap ( 869349 )
      So why not make a handheld that can do real-time raytracing?

      Because rendering a scene of a set level of complexity using ray tracing is vastly more compute intensive than the alternatives and nobody wants to buy a handheld that weighs 30Kg and requires a portable diesel generator.

      Would you like spectacularly obvious answers to any other questions while I'm here?
  • by king-manic ( 409855 ) on Friday September 21, 2007 @12:10PM (#20698667)
    Most people who pine for better gameplay are not looking hard enough. Generally they suffer from a severe case of nostalgia. Back in the bad old days, for each Super Mario Brothers or Missile Command there were four ETs, Coleco Smurfs, or Custer's Revenges. You just don't remember them. The past wasn't some golden age where gameplay trumped graphics. It was a place where even the brilliant games had significant control issues, where top-shelf games wouldn't even be considered tier-3 dreck today. Take a much-maligned game like Lair: are the basic controls any worse than, say, NARC for the NES? But NARC was a "good" game for its time, while Lair is maligned as crap. I haven't played Lair, but bad controls are no longer acceptable.

    There is gameplay innovation today, and it doesn't have to be independent of pretty graphics. In fact, the people responsible for the graphics aren't the ones responsible for the gameplay. One does not diminish the other. Good gameplay is also not the same as innovative gameplay. They coincide, for instance, in games like Katamari Damacy, but often innovation ~= unpolished ~= crap. What we're all looking for is polished gameplay. It never changes that around 80% of everything will be considered crap. So just remember that back in the day 80% of everything was crap too; you just don't remember it. So they can ray trace graphics, that's awesome. Will it diminish gameplay? Not really, you'll still have the 80/20 rule. It's not an indication that things were better before, only that your brain works in a funny way.
    • Thank you! I know it's somewhat off-topic, but the Games section on Slashdot is so flooded with nostalgia that any attempt to re-introduce sanity is great. Especially when modded up.
    • by grumbel ( 592662 )
      The issue is: more graphics mean more costs, more costs means less flexibility and less flexibility means less innovation and independence. The past isn't a time when all games were great; it is, however, a time when a lot of games were far more innovative than anything today. Just compare how much new stuff happened between 1990-1995 with 2002-2007. It's not even that only a little new innovation is happening; a lot of genres simply died out because they were too far off the mainstream and th
      • The issue is: more graphics mean more costs, more costs means less flexibility and less flexibility means less innovation and independence.

        Good thing for you, smaller games have made a comeback. Sony, Microsoft, and Nintendo have online stores chock full of smaller games. They're all pretty, but in a more modest sort of way, unlike some of the baroque epics.

        Also, innovation != fun. ET was innovative in many ways. It sucked donkey testicles. Lair has an innovative control scheme. One that doesn't work too well gi
    • I can't argue that we don't look at the past through rose colored glasses. We do tend to remember the best games, because they are the only ones worth remembering (and if we were smart, the only ones we spent much time playing). Back then, gameplay did trump graphics, but that's only because graphics pretty much sucked, and you had to provide something. That's not to say people didn't care about graphics. When Donkey Kong came out, people didn't just love the gameplay; they also really enjoyed what were
  • by ggambett ( 611421 ) on Friday September 21, 2007 @12:10PM (#20698673) Homepage
    I wonder if this is still relevant.

    Don't get me wrong, I love raytracers [mysterystudio.com], but what once was their exclusive domain (reflections, shadows, ...) has been done in a "fake" but very convincing way for the last few generations of 3D video cards. What's left? True refraction? True curved surfaces? Is it that important? I tend to side with the "give me gameplay" crowd here.

    Realtime caustics and global illumination, on the other hand...
    • I tend to side with the "give me gameplay" crowd here.

      Define gameplay. As many point out, graphics and gameplay aren't mutually exclusive and are often handled by different teams. Playtesters/QA, game designers, producers, and directors handle gameplay. Graphic artists and game engine coders handle graphics. They work in parallel, and often 80% of everything is crap anyway. Want gameplay? Try Elite Beat Agents. Want gameplay and graphics? Try Gears of War. Want just graphics? Try Lair.
    • The difference is, however, that developers waste tons of development time implementing effects that mimic lighting, refraction, and reflection. With raytracing, this behavior is implied. The increased simplicity will benefit all, once we have the horsepower to do such a thing.
      • Re: (Score:2, Informative)

        by Bob512 ( 25393 )
        I'd much rather see developers "waste" their time making things efficient than having 1000 cores on my machine trying to ray trace every scene. What it all comes down to is coherency between threads of execution, and all of the techniques that ray tracing makes possible (primarily high frequency lighting, since that's really the only thing that can't be done in a rasterization model) have terrible coherency.

        This is super important because no matter how many cores you have, the bottleneck will still be going
    • I don't care at all how the video card decides something needs to be rendered, so long as the results look good. I'm not concerned with the "correctness" of the calculations, only the results. I'd be all for a raytracing card if they found a way to make that work faster with less silicon than the existing rasterization systems. However, it seems we've really done a pretty good job of figuring out what can be quickly accelerated in silicon.

      I'd much rather have good, fast, fake stuff than something that is do
      • by p0tat03 ( 985078 ) on Friday September 21, 2007 @01:17PM (#20699743)

        The problem with faking everything is that it quickly breaks down as your needs get more complex. For example, I've been working with a colleague recently on doing some nice, fast, impressive fake effects - most notably a system that can simulate a light shining through stained glass (not just a straight texture projection). We came up with a novel and fast way to fake it, but it completely breaks down if, say, two stained glass windows are in-line and you try to shine a light through... It simply doesn't work.

        The advantage of doing things "for real" is that compatibility between your different effects is almost guaranteed, and your coders don't have to spend immense amounts of time curing those problems.

        • by p0tat03 ( 985078 ) on Friday September 21, 2007 @03:54PM (#20702895)

          Just wanted to add a bit more explanation of this. Lightmapping has traditionally been the most effective way to get radiosity in a scene while still remaining real-time. When effects like normal and parallax mapping came along, lightmaps were suddenly incompatible. It took Valve to sort this out (though their solution is far from ideal), and only now, with UE3 and Gears of War, does it actually look halfway decent (Half-Life 2's solution washed things out, it's as if the normal mapping simply isn't there).

          To solve the problem of two fake effects being incompatible, Valve invented a new fake effect to bridge it. You can imagine what happens when you start trying to mix a large number of effects. This is why the holy grail is still real-time raytracing - it's also a bit like why we want to have the Theory of Everything, as opposed to a bunch of little physics theories that each apply to a special case.

    • by Boronx ( 228853 )
      Doesn't this allow indie designers access to the highest quality graphics for free? Doesn't it free up all game designers to make their models more abstract, allowing them to concentrate more on gameplay?
    • by MenTaLguY ( 5483 )
      I just wanted to second that -- I couldn't care less about raytracing. It'd be only an incremental improvement over what we can do now. GI, on the other hand, would mean a substantial improvement.
  • Right now I have an image of people rushing over to a smoking server as it seems the site has gone down. I didn't get to finish reading the article either. :(
    • Right now I have an image of people rushing over to a smoking server as it seems the site has gone down.

      Alright, where's the jester that put his raytracing software in the server!?
  • they speculate that within two years the hardware will exist on the desktop to make 'game quality' raytracing graphics a reality."

    I don't think so. Within 2 years GPU power will have increased a lot as well, and polygonal rendering already approaches raytracing quality right now, with anisotropic filtering/antialiasing, very high polygon counts, very high-res textures with programmable shading techniques, etc. Stuff like photorealistic shadows, glass effects, refraction etc., it's all very nice, but for fast-
    • by popo ( 107611 )
      Amen. While it's nice to think we could "actually" do it, the more important question is: Why do it?

      If the answer is "Because it will look better", I'm with you. No it won't look better in 2 years. Polygonal rendering will look insanely good in two years. Realtime Raytracing will be an interesting graphical curiosity in 2 years... in the same vein as Novalogic's voxel graphic games were in the 90's.

    • Other problems I see with raytraced games are the exponential increase of processing with higher resolutions or higher light source counts, the fact that poor raytracing actually looks worse on higher resolutions, the increased production and programming costs and the fact that graphics companies will not like seeing the investments made in their current GPU architectures melt away.

      Ray tracing is actually more efficient. With the scene in a proper data structure, ray tracing can be O(log n) for a scene w
  • by tgd ( 2822 ) on Friday September 21, 2007 @12:35PM (#20699009)
    I remember fifteen years ago doing VR research work and people joking about real-time raytracing for games and VR. Computers are massively faster now than they were then. Why aren't we doing it at this point?

    Resolutions have gone up enormously. Polygon count has gone up enormously. If we talk the sort of quality scenes we were rendering in 1993, it was only a few more years before it was possible to do them real-time... but at that point models were 10x more complicated and you weren't rendering for 320x240, you were looking at 640x480. Now we're doing millions of polygons at HD resolutions.

    As long as people want more polygons, more texture detail, and higher resolutions, realtime raytracing will never be a production reality. Better hardware, faster CPUs, etc are all consumed quickly to handle richer environments and then suddenly there isn't overhead for raytracing anymore.
  • Sigh (Score:5, Insightful)

    by derEikopf ( 624124 ) on Friday September 21, 2007 @01:17PM (#20699751)
    "Hey look, the photons accurately react with the environment according to current laws of physics! Finally they figured out how to make games fun!"

    :-\

    The obsession with graphics is ruining the gaming industry. Compare the PS3's sales to the Wii's [msn.com] for evidence.
  • People are already starting to use graphics hardware to solve more general computations, even with all their highly specific rasterization functionality.

    So... ray tracing is in many ways easier than current techniques, meaning the hardware to do it is more generalized, which has two benefits: it can be highly parallelized (i.e., it easily makes use of many cores, which is now the trend in CPUs as well), and it would likely result in GPUs that are even more useful for general computation. This would expand the
  • JUST IN TIME FOR MADDEN 2009! YES!

    (for the impaired, insert sarcasm above and read between the lines)
  • But is it needed? (Score:3, Interesting)

    by nmg196 ( 184961 ) * on Friday September 21, 2007 @04:14PM (#20703399)
    In one of my lectures at university while studying Computer Science, the lecturer said:

    People look at the TV and say things like "there's nothing on", "this is rubbish", "this film is so predictable", "surely not an ad break already". They don't often say, "I wish this TV had more pixels and a higher audio sampling rate".

    Sometimes I think he's right. While I can see the merits of high definition and DTS, I've also seen plenty of films that seem to rely entirely on CGI and pretty graphics but have a weak plot (and plenty of games too for that matter). I hope this isn't going to make the developers spend even more time making textures, models and scenes just because you can see them so clearly.
  • The kind of hardware needed to run raytracing really fast is well understood, and it doesn't really look like today's GPUs or like Intel's CPUs, though even today you can get better results if you take advantage of the GPU as well. If ATI or nVidia doesn't come up with a hardware raytracing GPU, someone else will. It's a pity that Intel doesn't seem to be interested in working on that angle.

    Here's an article I've dug out of the Wayback machine and cleaned up, Raytracing vs Rasterization [scarydevil.com]. Phillip Slusallek's home page is here [uni-sb.de], and you can follow that to SaarCOR and OpenRT. They built a prototype RPU (R for raytracing) that at 66 MHz was comparable in performance to a 2.6 GHz P4. The video [uni-sb.de] is pretty impressive, considering how slow the hardware is.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...