
Ask Slashdot: Why Don't Graphics Cards For VR Use Real-Time Motion Compensation? 159

dryriver writes: Graphics card manufacturers like Nvidia and AMD have gone to great pains recently to point out that in order to experience virtual reality with a VR headset properly, you need a GPU capable of pushing at least a steady 90 FPS per eye, or a total of at least 180 FPS for both eyes, and at high resolutions to boot. This of course requires the purchase of the latest, greatest high-end GPUs made by these manufacturers, on top of the money you are already plonking down for your new VR headset and a good, fast gaming-class PC.

This raises an interesting question: virtually every LCD/LED TV manufactured in the last 5 or 6 years has a 'real-time motion compensation' feature built in. This is the not-so-new-at-all technique of taking, say, a football match broadcast live at 30 FPS and algorithmically generating extra in-between frames in real time, thus giving you a hyper-smooth 200-400 Hz image on the TV set with no visible stutter or strobing whatsoever. This technology is not new. It is cheap enough to include in virtually every TV set at every price level (so the hardware that performs the real-time motion compensation cannot cost more than a few dollars total). And the technique should, in theory, work just fine with the output of a GPU trying to drive a VR headset.

Now suppose you have an entry-level or mid-range GPU capable of pushing only 40-60 FPS in a VR application (or a measly 20-30 FPS per eye, making for a truly terrible VR experience). You could, in theory, add some cheap motion compensation circuitry to that GPU and get 100-200 FPS or more per eye. Heck, you might even be able to program a few GPU cores to run the motion compensation as a real-time GPU shader while the rest of the GPU is rendering a game or VR experience.

So my question: Why don't GPUs for VR use real-time motion compensation techniques to increase the FPS pushed into the VR headset? Would this not make far more financial sense for the average VR user than having to buy a monstrously powerful GPU to experience VR at all?
  • Sickening (Score:5, Interesting)

    by bobbutts ( 927504 ) <bobbutts@gmail.com> on Wednesday July 13, 2016 @06:32PM (#52507065)
    I find the effect sickening on a flat TV. I'd gather it's worse in goggles.
    • Re: (Score:2, Insightful)

      I agree, it's almost motion-sickness inducing.

      Plus it looks very unrealistic; quite eerie.

      Like interpolating pixels in the uncanny valley.

      • Re:Sickening (Score:5, Interesting)

        by Martin Blank ( 154261 ) on Wednesday July 13, 2016 @10:22PM (#52507971) Homepage Journal

        It reminds me of the Hobbit movies, in particular of the battle on the river. I was taken out of the movie by the splashes. They looked fake, but I knew that this was more because the movie was shot at 48FPS and so captured the motion better.

        So does it look fake because it is fake, or does it look fake because it's different from what we expect to see?

        • I think it's the latter; as I recall, home video cameras (the ones that wrote to tape) went up to ~60 fps. That was faster than TV, and so when people watched it, it looked different and wrong to them. This actually discouraged movie studios from shooting at higher framerates because it reminded test audiences of their home videos, and thus they thought it looked lower quality, even though it was actually better.
        • by dj245 ( 732906 )

          It reminds me of the Hobbit movies, in particular of the battle on the river. I was taken out of the movie by the splashes. They looked fake, but I knew that this was more because the movie was shot at 48FPS and so captured the motion better.

          So does it look fake because it is fake, or does it look fake because it's different from what we expect to see?

          Hobbit looked bad because the CGI was bad. The piles of gold in the Smaug scenes were especially bad, and the other CGI wasn't very good.

          • I'm only talking about the water on the river. That was all real, filmed at 48fps. It looked bad because it was unexpectedly too detailed.

    • This. Extrapolating movement and cranking up the framerate in certain parts of the scene while other parts run at lower framerates or move out of sync is a sensation that constantly reminds me I'm watching a video and breaks the experience altogether. Being immersed in a world where this is occurring would be maddening.
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        There are, however, things that make VR implementations easier and less error prone.

If the VR helmet itself, which has motion sensors, detects motion, it could apply motion compensation while it waits for a new frame.

The approach could be limited to head movements, and it would be truly helpful and nigh flawless if done correctly.

        So really, the poster does make a valid point.

        • Re:Sickening (Score:5, Informative)

          by Immerman ( 2627577 ) on Wednesday July 13, 2016 @08:49PM (#52507651)

          I believe something related is already being done on the Oculus and probably the Vive as well - in that each frame, after being rendered, is geometrically manipulated to approximate the changes that should be seen due to head motion since rendering began.

You can't use the same technique as televisions do, interpolating between frames, because that introduces far more lag, which I believe is the single largest contributor to nausea. After all, you can't interpolate between frames until the second frame is ready, so even if interpolation were instantaneous you couldn't render frame N+0.5 until frame N+1 was finished. You'd be inserting a minimum of half a frame of additional lag in exchange for doubling the frame rate. Pretty sure that would be a lousy deal, especially considering the ugly artifacts such interpolation introduces.
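The half-frame argument can be made concrete with a few lines of Python (a sketch under the best-case assumption that computing the in-between frame is instantaneous, which real hardware can't achieve):

```python
# Best-case extra latency from motion-compensated interpolation: even if
# computing the in-between frame were free, frame N+0.5 cannot exist
# until frame N+1 has been rendered, so every displayed frame is held
# back by at least half a render interval.

def interpolation_latency_ms(render_fps):
    """Minimum added display latency (ms) when inserting one in-between frame."""
    render_interval_ms = 1000.0 / render_fps
    return render_interval_ms / 2.0

# A GPU rendering 45 fps, interpolated up to 90 fps for the headset:
extra_ms = interpolation_latency_ms(45)   # ~11.1 ms added, best case
```

At 45 fps that's roughly 11 ms of extra lag before any real-world sensor, transport, or display delay is counted.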

          • Extrapolate. It's at least worth a try, to determine if the resulting errors are distracting.
            • I suspect they do so. I would assume the render cycle is:
              1) Extrapolate where your head will probably be by the time the frame is done rendering
              2) Render the frame for that position
              3) Do some fast image distortion on the resulting image to better reflect what you should be seeing from where your head actually is
              4) Display final image
              Repeat.

Bottom line: there are a *lot* of tricks being used to minimize lag-related discrepancies. It's worth looking at what's actually being done, if only to see all the tri

              • They also do a warp if (on hopefully rare occasions) the GPU can't get the new frame ready in time, repeating the previous frame but warped to match the new head position.

                • I suspect it may even be the same technique - normally you warp to make up for lag (1 frame worth of motion), occasionally you warp even more to make up for a dropped frame (2 frames worth of motion).

                  Hmm, I suppose if your warping algorithm were good enough you could theoretically double displayed frame rates by making a policy of only rendering every other frame, with warped intermediate frames. I suspect that discrepancies would become obvious if present constantly, but I could be wrong. Especially if y

          • I believe something related is already being done on the Oculus and probably the Vive as well - in that each frame, after being rendered, is geometrically manipulated to approximate the changes that should be seen due to head motion since rendering began.

            You're referring to Oculus Asynchronous Timewarp, which maps the viewport into a tessellated grid whose polygons get deformed in accordance with motion data.
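A toy sketch of that idea, assuming a simple pinhole camera and yaw-only rotation (real timewarp deforms a full 2D vertex grid using tracked rotation data; the focal length and function names here are hypothetical):

```python
import math

# Toy yaw-only "timewarp" (hypothetical pinhole model, focal length in
# pixels): each vertex of the rendered image is moved to where its scene
# point should appear after the head has rotated since rendering began.

def warp_x(x_img, focal_px, yaw_rad):
    """New horizontal position of a vertex after the camera yaws by yaw_rad."""
    azimuth = math.atan2(x_img, focal_px)      # direction the vertex was rendered at
    return focal_px * math.tan(azimuth - yaw_rad)

focal = 600.0                                   # hypothetical focal length, pixels
grid = [-400.0, -200.0, 0.0, 200.0, 400.0]      # coarse vertex grid (x only)
warped = [warp_x(x, focal, math.radians(2.0)) for x in grid]
# The centre vertex moves by about focal * tan(2 deg), roughly 21 px.
```

Because this is a pure image-space deformation of an already-rendered frame, it costs a tiny fraction of a full render, which is why it can run even when the renderer misses a frame.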

    • by Trogre ( 513942 )

      That's just because you're used to watching movies at a horrible choppy 24 fps.

      Look up the Soap Opera effect; it's a real thing. And you'll be very happy once you've gotten over it. To me 24 fps movies look like poorly done stop-motion.

    • I find it sickening when it's disabled, as motion is much more jittery and stutters while panning on today's blur-free LCD panels.

On earlier TVs, which masked this with blurring or phosphor fade between frames, this was not an issue. Today's sets have high refresh rates with few artifacts and little blur, so pans and motion appear full of stutter, especially on a large set where an object or person in motion travels a larger physical distance between frames.

  • by Anonymous Coward on Wednesday July 13, 2016 @06:35PM (#52507083)

    You can't change the angle at which the scene is rendered by interpolating between frames.

It's not the raw framerate. It's that the scene you're viewing has to match where you're looking that quickly, or you get motion sick.

    • by Thagg ( 9904 )

      You can't change the angle at which the scene is rendered by interpolating between frames.

It's not the raw framerate. It's that the scene you're viewing has to match where you're looking that quickly, or you get motion sick.

While the parent is an Anonymous Coward, please rate him up, as that is correct.

  • They do. (Score:3, Informative)

    by Anonymous Coward on Wednesday July 13, 2016 @06:37PM (#52507109)

    Look up "asynchronous time-warp". /thread

    • Re:They do. (Score:5, Informative)

      by Lord Crc ( 151920 ) on Wednesday July 13, 2016 @07:05PM (#52507283)

      Look up "asynchronous time-warp". /thread

Pretty much this. Here's a video explaining time warping; it also has links to more details in the video description.

      https://www.youtube.com/watch?v=WvtEXMlQQtI/ [youtube.com]

      • by Anonymous Coward

This is basically the answer. The important thing to note is that async timewarp has to use extremely low-latency (like 120 Hz or faster) head-tracking sensors in order to warp the image to match the motion of your head.

As many people have already commented, the latency will make you motion sick - specifically, the time delay between moving your head and what you see reflecting that movement. You need the raw *refresh* rate fast enough to fix this problem, and they combat the lower *render* rate by a

The HTC Vive uses a different technology - I don't remember what it's called - but it seems to be more effective at preventing nausea, as I read in many reviews that people tend to experience less nausea using the Vive for long periods than with the Oculus.

I own a Vive, and when playing Elite I usually get between 45 and 55 frames per second, yet the experience seems entirely smooth to me and I don't get any nausea at all. I find no perceivable difference from playing a game running at 90 FPS.

The Vive uses Interleaved Reprojection, which drops the framerate to 45 fps and then doubles (and warps the second copy of) each frame. This is enabled when the fps drops below 90. More info here [steamcommunity.com].
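A minimal sketch of that scheduling idea (an idealized model; SteamVR's actual heuristics for entering and leaving reprojection differ):

```python
# Idealized sketch of interleaved reprojection: the app renders at half
# rate (45 fps) and every other 90 Hz vsync shows a warped copy of the
# previous render instead of a fresh frame.

def frame_schedule(n_refreshes):
    """For each 90 Hz refresh, report ('render', k) for a fresh frame k
    or ('reproject', k) for a warped repeat of frame k."""
    schedule = []
    for v in range(n_refreshes):
        k = v // 2                        # fresh-frame index at 45 fps
        kind = "render" if v % 2 == 0 else "reproject"
        schedule.append((kind, k))
    return schedule

# First four refreshes: render 0, warp 0, render 1, warp 1.
```

The warped repeats track head rotation, which is why a 45 fps render rate can still feel smooth in the headset.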
      • Pretty much this.

        No, not this at all. A lot of people are making this mistake. It's not what the submitter is talking about. He's talking about what is usually called "motion compensation" but should, in this case, probably be called "motion compensated interpolation."

        It has nothing to do with the motion of the viewer's head - which is what Async Timewarp is all about - but is about the "motion" between two frames of video, and using the detected motion to create an interpolated frame between each "real" frame to increase f

        • No, not this at all. A lot of people are making this mistake. It's not what the submitter is talking about.

          Of course this is not what the submitter is talking about, I never said it was.

          The need for high framerate in VR is mainly due to rotation of the head, which is exactly what time warping is good for.

If you were unable to rotate your head and could only translate it, VR wouldn't have the same need for 90+ FPS.

          I can't see how motion compensation would do a better job, in fact I don't quite see how it would be any good at all.

          So yes, very much "this" indeed.

    • That's not motion compensation - not the kind the submitter is talking about, anyway.

  • Latency (Score:5, Informative)

    by Kjella ( 173770 ) on Wednesday July 13, 2016 @06:38PM (#52507111) Homepage

Broadcast TV with this stuff on has considerable latency. It looks at the next frame and then interpolates its way there; that might work reasonably well for watching a recorded show, but it would be pretty horrible for real-time motion in VR. Next question?

    • Re:Latency (Score:5, Informative)

      by ndnet ( 3243 ) on Wednesday July 13, 2016 @07:04PM (#52507277)
      This!

      Even with a stable framerate, this technique intentionally delays the next frame to add compensation frames.

As an example, let's take a magic VR helmet running at 120 Hz with instant processing (i.e., 0 ms GTG time, which doesn't exist) and a video card capped at a perfectly stable 30 FPS (aka 30 Hz).

      We will split a second into ticks - we'll use the VR helmet's frequency of 120 Hz, so we have 120 ticks, numbered 1 to 120. (Just to annoy my fellow programmers!)

      We therefore get a new actual frame every 4th tick - 1st, 5th, 9th, etc.

      Without motion compensation, we would display a new frame every 4th tick - 1st, 5th, 9th, etc.
      With ideal (instant) motion compensation, we can't compute a transition frame until we have the new frame. So we could, theoretically, go real frame #1 on 1st tick, computed frame based on #1 and #2 on 5th tick, real frame #2 on 6th tick, computed frame based on #2 and #3 at 9th tick, etc.

This would also be jerky - 2 motion frames, then 3 at rest? We could push the frames back a tick and fill the interval with three compensation frames, but then we increase the delay, which in practice is always higher than in this idealized example. So we'd have frame #1 at the 5th tick, computed frames at the 6th/7th/8th, frame #2 at the 9th tick, etc. You've now introduced a minimum 4-tick delay, which at 120 Hz is 1/30 of a second, or 33 ms - in an otherwise impossibly perfect system!

What about using historical frames instead, to PREDICT the next frame? Well, then, whenever something in-game (or, really, on screen) changes velocity, there would be mis-compensation: overcompensating if the object slows, undercompensating if it speeds up, and compensating in the wrong direction if it turns.

There are more problems, too:
      - This doesn't help when the video card frameskips/dips.
      - Instant GTG and instant motion frame computation do not exist. At best, they're sub-tick, but you'd still operate on the tick.
      - Input delay already exists for game processing, etc.
      - Increased input delay perception would be exponential to the actual length of the delay. For example, 1-2ms between keypress and onscreen action? Hardly notable. 50ms delay just to start registering a motion on screen and course correct? Maybe OK, maybe annoying. 150-200ms? Brutal.
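The tick arithmetic above can be reproduced in a few lines (same idealized assumptions: 120 Hz display, perfectly stable 30 FPS source, evenly spaced in-between frames):

```python
# Reproducing the tick arithmetic above: a 120 Hz headset (one "tick" per
# refresh) fed by a perfectly stable 30 fps GPU. Real frames arrive every
# 4 ticks (ticks 1, 5, 9, ...), but even interpolation spacing means each
# real frame is held back until the *next* real frame has arrived.

TICK_MS = 1000.0 / 120.0             # one headset refresh ~ 8.33 ms
TICKS_PER_REAL_FRAME = 4             # 120 Hz display / 30 fps render

def display_schedule(n_real_frames):
    """(tick, label) pairs: real frame k arrives at tick 1 + 4k but is
    displayed at tick 5 + 4k, with three interpolated frames after it."""
    out = []
    for k in range(n_real_frames):
        out.append((5 + 4 * k, "real #%d" % (k + 1)))
        for j in range(1, 4):
            out.append((5 + 4 * k + j, "interp #%d->#%d" % (k + 1, k + 2)))
    return out

added_delay_ms = TICKS_PER_REAL_FRAME * TICK_MS   # 4 ticks = 1/30 s ~ 33 ms
```

Even in this impossibly perfect system, the interpolation scheme alone adds about 33 ms before any real-world delays are counted.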
      • Thanks for the informative post. VR can't happen with current or foreseeable technology for precisely the reasons you mention. Sadly, VR has become just another religion that exists only to siphon money from the faithful. AR is a much more rational approach to an immersive experience. Why create a fake world, when the real world is already rendered for you, and has a perfect physics engine, as well? Just tag the parts you are interested in and use sprites if your tags need to move.
        • VR can't happen with current or foreseeable technology for precisely the reasons you mention.

          Err, no. The reasons mentioned have nothing to do with why VR "can't happen." The motion compensation trick is entirely unnecessary if the GPU can render frames fast enough. And it runs counter to the aims of VR, anyway.

          In case you hadn't noticed, VR is already happening. It might fizzle out, it might not. It's too early to say.

It sounds like you just have a beef with VR, saw an opportunity to air said beef, and took it without really reading what was being said.

    • Wait, an actual explanation instead of "because profits, and also fuck you"? What site am I on?

      Ah yes, it is a rather narrowly targeted question about technical details, instead of business, politics, or sociology. Therefore, the guy who thinks he's right probably will be.

      Well done. The rest of youse guyse, still on notice.

    • but would be pretty horrible for real time motion from VR. Next question?

      Motion VR? It's horrible enough for playing standard XBox games. It's the first thing I turn off on my HDMI inputs when I get a new TV.

Surely it takes much more computational power, and therefore incurs greater potential latency, to actually render the raw frame than to do motion compensation?

  • by interiot ( 50685 ) on Wednesday July 13, 2016 @06:38PM (#52507121) Homepage
    It's not throughput that matters, it's latency. If there's more than a tiny delay between turning your head and your eyes seeing the viewport move, then many people get bad motion sickness.
  • by Qzukk ( 229616 ) on Wednesday July 13, 2016 @06:41PM (#52507137) Journal

    In order to add a frame between 1 and 2, you have to have received both frame 1 and frame 2. People are already getting sick because what they see and do don't match, you're going to make it worse by making what they see lag further behind what little the headset picks up.

    • It doesn't help that the technique is also *very* prone to artifacts, and those artifacts would differ per eye, making them even more noticeable in 3D.
      • It doesn't help that the technique is also *very* prone to artifacts

        But the submitter said "no visible stutter or strobing whatsoever"!

        Do you mean to say he may be mistaken?! I just don't know what to believe any more.

    • This isn't really accurate; you can (and they do) use frame 1 only, along with knowledge of the camera motion through the scene. It's not perfect, but it's better than missing a frame.

      • Everyone is conflating two entirely different concepts.

        First, there's motion compensation - perhaps better termed "motion compensated interpolation." That's what the GP and the submitter are talking about. For that you need the future frame.

        But there's also what Nvidia call Asynchronous Timewarp, which compensates for the motion of the player (but otherwise has nothing to do with the other kind of motion compensation) by warping a single frame, either because it has the extra time to do so before display is

  • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Wednesday July 13, 2016 @06:50PM (#52507201) Homepage

    High frame rate is important for motion data, but it's also (and perhaps primarily) about latency. It is *much* more jarring in my Rift to experience latency -- akin to rapid sea-sickness -- than to have a lower frame rate.

    Getting more motion information would be great but we can't sacrifice latency for it, and those TVs tend to have a very noticeable amount of latency. Not that this is an unsolvable problem -- I just haven't seen it yet.

    • It is an unsolvable problem. In order to create an intermediate frame you must know the future frame.

      • They already know the camera position for the future frame, which is enough to make some minor adjustments. The result isn't as good as having the actual future frame, but it's also better than using the past frame.

        • They already know the camera position for the future frame

          How? Player input and/or head position can change before the future frame is to be rendered.
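One plausible answer, sketched under the assumption of short-horizon constant-velocity prediction (roughly what tracking runtimes do; the numbers here are hypothetical): the runtime extrapolates the tracker's latest orientation forward by the time until the frame hits the display, accepting small errors when the head accelerates.

```python
# Sketch of constant-velocity pose prediction (hypothetical numbers): the
# runtime extrapolates the tracker's last orientation forward by the time
# until the frame will reach the display. It is wrong whenever the head
# accelerates, which is why prediction horizons are kept to a few ms.

def predict_yaw(yaw_deg, yaw_rate_dps, dt_s):
    """Predicted yaw after dt_s seconds, assuming constant angular rate."""
    return yaw_deg + yaw_rate_dps * dt_s

# Head at 10 deg, turning at 120 deg/s, predicting 11 ms ahead:
predicted = predict_yaw(10.0, 120.0, 0.011)   # 10 + 1.32 = 11.32 deg
```

This only "knows" the future camera pose, not future player input, so it helps with head rotation but cannot anticipate, say, a sudden in-game teleport.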

  • The biggest reason is latency. People start getting nauseous when they turn their head in a VR headset and their view doesn't change quickly to match the head movement. Motion Compensation on TVs relies on having at least two frames (or 22ms worth of frames at 90fps if this were a current-gen VR headset) already at the TV in order to do the calculations for a frame in between, and in practice they could be buffering 2-3 seconds worth of frames for their calculations and you'd never notice that your TV is di

  • by Nemyst ( 1383049 ) on Wednesday July 13, 2016 @07:08PM (#52507295) Homepage

    you need a GPU capable of pushing at least a steady 90 FPS per eye, or a total of at least 180 FPS for both eyes

    Um, what? That's entirely wrong. You need a steady 90 fps, that's it. There's no doubling because of eyes, this isn't 3D TVs where you need to alternate frames. The only other concern is that the resolution is higher than 1080p.

    • Re: (Score:2, Informative)

      by ShooterNeo ( 555040 )

      No, it's correct. For a single GPU solution without Nvidia's new tech (that won't be widely used until AMD releases an equivalent) you render the frame for one eye, clear buffers, render it for the other. That means if you sent all frames rendered to a single monitor you'd get 180 FPS.

      • No. It's not 180 FPS. You're doing the math wrong. A few examples: My monitor refreshes at 60 fps. I've got a chrome window open; you can say that chrome is rendering at 60 fps. Now I open another chrome window. Does this mean chrome is rendering at 120 fps? No. It means that chrome is rendering 60 fps in two windows. Split screen games where one player is on the top half of the screen and the other is on the bottom half, does that double the fps on the game? No. It's simply rendering two cameras a
      • by Nemyst ( 1383049 )
        But then you're "only" rendering on a 1080x1200 screen, which is quite a bit smaller than a 1080p screen, whereas I'm pretty sure from the "high resolutions" part of the submitter that they expected the full 2160x1200 at 180 fps.

        Besides, you don't need particularly novel tech to render on two buffers simultaneously, reusing as much of the work as you can. Stereoscopic rendering is just a change of matrices (ignoring the final projective steps), so you can reuse a lot of data and processing. Just rendering
        • Apparently that "just" bit requires new GPU hardware, and at this instant in time, only 2 GPUs (the 2 new Nvidia 1000 cards) even support it at all. Any VR developer who wants to eat has to support AMD hardware as well because it's the chip in the PS4 and probably the chip used in the new generation of VR supporting consoles.

    • Actually, it's worse than that. 90fps is the rock-bottom MINIMUM. If you want a frame rate that satisfies the Nyquist minimum for high-contrast peripheral vision, even 400fps is on the low side.

      The human eye is really hard to fully satisfy with immersive video, because trying to define the eye's "resolution" or "frame rate" is kind of like trying to define the resolution of an Atari 2600 or an Apple IIe. The fovea has relatively high color resolution, but comparatively poor luminance resolution. Peripheral

  • It works in a TV broadcast because it's a stream. It will work in a 3d streaming video to a headset but that's without head tracking.

It can't work for interactive head tracking. It's a thing called latency, and it's the reason people get headaches from VR. Your brain does its own head tracking, and when what you see doesn't match, you get vertigo and/or a headache.

    You also get vertigo from confusing your brain by spending too much time in zero G. A ride in an elevator can make you lose your cookies. That's be

    • by Jeremi ( 14640 )

      You feel motion but you don't see it and your brain is drawing two different opposing conjectures.

      ... which is actually kind of amusing when I think about it. There's a watchdog circuit somewhere in your brain dedicated specifically to checking whether or not your sensory inputs match up, and when it detects that they don't, it assumes that you are drunk or high (or otherwise somehow poisoned) and initiates the upchuck routine. How many generations of questionable-quality-alcohol drinkers did it take to evolve that?

    • by Shinobi ( 19308 )

      "It's a thing called latency and it's the reason people get headaches from VR"

No, it's ONE of the reasons people can get headaches from VR. Another cause of VR-related migraines is that up to around 25% of all humans have some degree of problem with depth perception, and using VR helmets will trigger it - the more problems you have with it, the faster it triggers. Resolution, FPS, and motion latency have nothing to do with that problem.

      • And another is that there is no difference in focus in a VR world. Your eyes will strain to make sense of that, just as they do when you wear someone else's glasses.

Actually, motion compensation requires both the current frame and one or more future frames to be able to compute intermediate frames. With a media player, where the full video already exists, it is simple enough to access "future" frames. With a TV showing video inbound from a broadcaster, cable company, or media device, you can delay the output by 1/30th or 1/24th of a second, delay the sound similarly, and no one is the wiser.

    But in video gaming, delaying the video by 1/30 or 1/24 or even 1/20 of a secon

  • by kbg ( 241421 )

    Because you are an idiot?

In order to generate extra in-between frames you need at least 2 frames. For TV this means the output is delayed by a few milliseconds, which isn't a problem because TV is not interactive. But for VR, which is interactive, you've just added more delay into the controller response system.

NVIDIA does exactly that: it warps the image according to the most recent (or even predicted) head position.
    Works on their recent high-end cards.
    http://www.slashgear.com/nvidi... [slashgear.com]
    • Not what the submitter is talking about.

      There's "motion compensat[ed interpolation]" and there's "compensating for player motion." Two entirely different things.

  • ATW FTW (Score:3, Insightful)

    by WaffleMonster ( 969671 ) on Wednesday July 13, 2016 @08:39PM (#52507611)

The Oculus 1.3 runtime for the Rift was released with async timewarp. When it came out, DK2 users accustomed to earlier runtimes without it were all over the boards with phrases like "holy shit" and "DK3" to explain how ATW changed everything for them. Jitter issues magically disappeared overnight with a simple software update.

    More generally there is one and only one "trick" for improving VR quality going forward and that is foveated rendering. This technology is absolutely critical to any serious vision of future HMDs.

To provide some context: the cones of our eyes cover a massive (cough, cough) 15 degrees of arc. That's it. You can't even lean back and read 1/4 of what is on your monitor without moving your eyeballs around to do it. 4K is overkill... 1080 is overkill... The future of VR is entirely locked up in sensing eye orientation and in optical and/or electronic steering of relatively low-resolution displays in response.
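Back-of-envelope arithmetic for why foveated rendering is attractive (assuming a roughly 110-degree headset field of view and the ~15-degree high-acuity region mentioned above, with a crude square-field approximation):

```python
# Back-of-envelope: share of the display needing full resolution if only
# the high-acuity region is rendered sharp (assumed ~15 deg fovea within
# an assumed ~110 deg field of view, square-field approximation).

def foveal_area_fraction(fovea_deg, fov_deg):
    """Fraction of the (square-approximated) field covered by the fovea."""
    return (fovea_deg / fov_deg) ** 2

share = foveal_area_fraction(15.0, 110.0)   # ~0.019, i.e. about 2%
```

Under these rough assumptions, only about 2% of the display area would need full resolution, which is the whole appeal of eye-tracked foveated rendering.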

a) What you're describing is frame doubling or similar tricks (every manufacturer has its own name for it). It only makes the system APPEAR better and brighter. This is often done by simply doubling the frames or turning the backlights on and off between frame changes. This makes it appear "crisper", brighter, and smoother, but it's only a trick, and most displays already do it. It doesn't actually improve the experience; it only tricks your brain into thinking it has at first glance and often when y

These are not the same problems at all. Look at gaming monitors and ping times to better understand the problem. (These aren't the same thing, but at least the underlying reasoning is closer.) The VR problem is one of synchronicity. If you're sitting looking at a screen showing a movie or something, you don't need to change the image very often, and your brain will work it out. If there is something you are carefully paying attention to, like a ball in a sporting event, 30 FPS is not smooth enough (y

I have a Nexus Player, which I like, but if I cast anything to it, or play local content with MX Player, I get the soap opera effect, making a 24 fps movie look like it's at 60. Is it the Nexus Player doing this, or my TV? I've posted elsewhere and some said it's my TV, but if so, wouldn't it do it with everything, like Netflix, YouTube, etc.?

    Drives me nuts.

    • Is it the Nexus player doing this or my TV?

      It probably is the TV. It may have different settings for different sources. Look for "judder reduction" or something like that.

  • Many posters have pointed out that latency in the graphics card is an inherent problem.

    There's also latency in the monitor, and latency in the computer which creates each frame. Merging the functions of the monitor and the graphics card should allow a latency reduction of 1 frame (at the display rate). (This brings up new problems, of course. Interfaces change, and the monitor has to handle the weight and heat of the graphics card function - a problem if the monitor is a VR headset.)

... does exactly this, and it works a treat. The only difference from TVs is that it's only used when your framerate drops below 90 fps.

    It does look weird when it goes on for longer periods, but the alternative would be juddering which is insta-puke-city.

    • Asynchronous Timewarp is something different than what the submitter is talking about.

      The submitter is talking about interpolating between rendered frames to create smoother video - i.e. only rendering every other frame fully, then just interpolating between them to fill the gap.

      The obvious problem is that it means you've added an extra frame's delay to everything. Also, motion compensated interpolation isn't perfect.

  • A good VR experience (and preventing motion sickness) requires fast response time. This requires low latency of the entire chain from the motion sensing device, through the USB connection, OS process scheduling, scene calculation and rendering and any buffering in the video card and display.

    A system that is able to respond quickly can obviously produce more frames per second. But just creating more frames per second without reducing latency will not help the experience feel more convincing (or prevent you f

There are already GPUs that can do 90 FPS without interpolation techniques, which wouldn't provide latency or eye tracking as good anyway. In a couple of years the same level of performance will be cheap and widespread. Why would manufacturers focus on short-term tricks and not the real thing?

What good would 60 extra frames _from the past_ do for your VR experience? Other than make you puke.

    BTW Microsoft once championed the idea of generating 3D from a bunch of 2D tiles, this would enable tricks like motion compensation for tile reusing. It went pretty well for everyone involved (read billions invested/wasted by all MS 'partners').
    https://en.wikipedia.org/wiki/... [wikipedia.org]

You ask why we don't generate a video frame that's halfway between the current video frame and ONE THAT WILL HAPPEN IN THE FUTURE. That is quite possibly the stupidest question I've heard a nerd ask.
FPS isn't cumulative like that. 90 FPS "per eye" is just 90 FPS overall. It's the draw area (number of pixels) that increases. And with what we currently have, it doesn't increase by much: both the Oculus and the Vive are 2160 x 1200, 25% more pixels than the usual 1920 x 1080.
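A quick sanity check of that 25% figure, using only the panel resolutions quoted above:

```python
# Sanity check of the resolution comparison above: combined VR panel
# pixel count versus a standard 1080p monitor.

vr_pixels = 2160 * 1200        # Rift / Vive combined panel resolution
hd_pixels = 1920 * 1080        # common 1920x1080 monitor
ratio = vr_pixels / hd_pixels  # 1.25 -> 25% more pixels to fill per frame
```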

You need to know what the next frame will be before you can interpolate towards it. This means delaying everything. That's a big no-no in VR.

    Also, motion compensation is not perfect. That's why all these "60fps!" trailers people like to put on YouTube look quite underwhelming.

    Real 48/60fps video looks a lot nicer.
