Ask Slashdot: Why Don't Graphics Cards For VR Use Real-Time Motion Compensation? 159
dryriver writes: Graphics card manufacturers like Nvidia and AMD have gone to great pains recently to point out that in order to experience virtual reality with a VR headset properly, you need a GPU capable of pushing at least a steady 90 FPS per eye, or a total of at least 180 FPS for both eyes, and at high resolutions to boot. This, of course, requires the purchase of the latest, greatest high-end GPUs made by these manufacturers, alongside the money you are already plonking down for your new VR headset and a good, fast gaming-class PC. This raises an interesting question: virtually every LCD/LED TV manufactured in the last 5 or 6 years has a 'Real-Time Motion Compensation' feature built in. This is the not-so-new-at-all technique of taking, say, a football match broadcast live at 30 FPS or Hz, and algorithmically generating extra in-between frames in real time, thus giving you a hyper-smooth 200-400 FPS/Hz image on the TV set with no visible stutter or strobing whatsoever. This technology is not new. It is cheap enough to include in virtually every TV set at every price level (so the hardware that performs the real-time motion compensation cannot cost more than a few dollars total). And the technique should, in theory, work just fine with the output of a GPU trying to drive a VR headset. Now suppose you have an entry-level or mid-range GPU capable of pushing only 40-60 FPS in a VR application (or a measly 20-30 FPS per eye, making for a truly terrible VR experience). You could, in theory, add some cheap motion compensation circuitry to that GPU and get 100-200 FPS or more per eye. Heck, you might even be able to program a few GPU cores to run the motion compensation as a real-time GPU shader while the rest of the GPU renders the game or VR experience.
So my question: Why don't GPUs for VR use real-time motion compensation techniques to increase the FPS pushed into the VR headset? Would this not make far more financial sense for the average VR user than having to buy a monstrously powerful GPU to experience VR at all?
Sickening (Score:5, Interesting)
Re: (Score:2, Insightful)
I agree, it's almost motion-sickness inducing.
Plus it looks very unrealistic; quite eerie.
Like interpolating pixels in the uncanny valley.
Re:Sickening (Score:5, Interesting)
It reminds me of the Hobbit movies, in particular of the battle on the river. I was taken out of the movie by the splashes. They looked fake, but I knew that this was more because the movie was shot at 48FPS and so captured the motion better.
So does it look fake because it is fake, or does it look fake because it's different from what we expect to see?
Re: (Score:2)
It reminds me of the Hobbit movies, in particular of the battle on the river. I was taken out of the movie by the splashes. They looked fake, but I knew that this was more because the movie was shot at 48FPS and so captured the motion better.
So does it look fake because it is fake, or does it look fake because it's different from what we expect to see?
Hobbit looked bad because the CGI was bad. The piles of gold in the Smaug scenes were especially bad, and the other CGI wasn't very good.
Re: (Score:2)
I'm only talking about the water on the river. That was all real, filmed at 48fps. It looked bad because it was unexpectedly too detailed.
Re: (Score:2, Interesting)
There are, however, things that make VR implementations easier and less error-prone.
If the VR helmet itself, which has motion sensors, detects motion, it could apply motion compensation while it waits for a new frame.
The approach could be limited to head movements, and would be truly helpful and nigh flawless if done correctly.
So really, the poster does make a valid point.
Re:Sickening (Score:5, Informative)
I believe something related is already being done on the Oculus and probably the Vive as well - in that each frame, after being rendered, is geometrically manipulated to approximate the changes that should be seen due to head motion since rendering began.
You can't use the same technique as used for television, interpolating between frames, because that introduces far more lag, which I believe is the single largest contributor to nausea. After all, you can't interpolate between frames until the second frame is ready, so even if interpolation were instantaneous you couldn't render frame N+0.5 until frame N+1 was finished, meaning you'd be inserting a minimum of an extra half-frame of additional lag in exchange for doubling the frame rate. Pretty sure that would be a lousy deal, especially considering the ugly artifacts such interpolation introduces.
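To put a rough number on that half-frame penalty, here is a minimal back-of-the-envelope sketch (my own illustration, not from any VR SDK), assuming interpolation itself is free and the display can show a frame the instant it exists:

```python
# Hypothetical back-of-the-envelope calculation (not from any SDK): the minimum
# latency added by motion-compensated interpolation, assuming the interpolation
# itself is free and frames can be displayed the instant they exist.

def interpolation_latency_ms(render_fps: float) -> float:
    """Frame N+0.5 can't exist until frame N+1 does, so it is shown at least
    half a render-frame later than its 'ideal' display time."""
    render_frame_ms = 1000.0 / render_fps
    return render_frame_ms / 2.0

for fps in (30, 45, 60, 90):
    print(f"{fps:>2} fps source -> at least {interpolation_latency_ms(fps):.1f} ms of added lag")
```

Even under those generous assumptions, a 45 FPS source picks up about 11 ms of extra motion-to-photon latency before any real-world processing delays are counted.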
Re: (Score:2)
I suspect they do so. I would assume the render cycle is:
1) Extrapolate where your head will probably be by the time the frame is done rendering
2) Render the frame for that position
3) Do some fast image distortion on the resulting image to better reflect what you should be seeing from where your head actually is
4) Display final image
Repeat. (A rough code sketch of this loop follows below.)
Bottom line is there are a *lot* of tricks being used to minimize lag-related discrepancies, it's worth looking at what's actually being done, if only to see all the tri
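A toy, self-contained sketch of that predict-render-warp loop, with the head pose reduced to a single yaw angle and every function name invented for illustration (this is not any vendor's actual API):

```python
# Toy sketch of the predict-render-warp loop described above. The "pose" is a
# single yaw angle and all functions are stand-ins, not a real VR SDK.
import random

REFRESH_HZ = 90.0
FRAME_TIME = 1.0 / REFRESH_HZ     # time budget for one frame

HEAD_VELOCITY = 30.0              # pretend the head yaws at 30 deg/s

def read_imu_pose(t):
    """Ground-truth yaw angle at time t (stand-in for the real IMU)."""
    return HEAD_VELOCITY * t

def predict_pose(pose, velocity, lookahead):
    # 1) Extrapolate where the head will be when the frame hits the display.
    return pose + velocity * lookahead

def render_eye(pose):
    # 2) The expensive part: render the scene for the predicted pose (stub).
    return {"rendered_for_pose": pose}

def timewarp(image, rendered_pose, latest_pose):
    # 3) Cheap image-space correction for whatever the prediction got wrong.
    image["warp_correction"] = latest_pose - rendered_pose
    return image

t = 0.0
pose_now = read_imu_pose(t)
pose_pred = predict_pose(pose_now, HEAD_VELOCITY, FRAME_TIME)
frame = render_eye(pose_pred)

# Just before scan-out, sample the sensors again; prediction error is small.
actual_scanout = t + FRAME_TIME + random.uniform(-0.001, 0.001)
frame = timewarp(frame, pose_pred, read_imu_pose(actual_scanout))

print(frame)                      # 4) "display" the corrected frame
```

The point of step 3 is that the correction is a cheap image-space operation, so it can run just before scan-out using sensor data far fresher than anything the renderer saw.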
Re: (Score:2)
They also do a warp if (on hopefully rare occasions) the GPU can't get the new frame ready in time, repeating the previous frame but warped to match the new head position.
Re: (Score:2)
I suspect it may even be the same technique - normally you warp to make up for lag (1 frame worth of motion), occasionally you warp even more to make up for a dropped frame (2 frames worth of motion).
Hmm, I suppose if your warping algorithm were good enough you could theoretically double displayed frame rates by making a policy of only rendering every other frame, with warped intermediate frames. I suspect that discrepancies would become obvious if present constantly, but I could be wrong. Especially if y
Re: (Score:2)
I believe something related is already being done on the Oculus and probably the Vive as well - in that each frame, after being rendered, is geometrically manipulated to approximate the changes that should be seen due to head motion since rendering began.
You're referring to Oculus Asynchronous Timewarp, which maps the viewport into a tessellated grid whose polygons get deformed in accordance with motion data.
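As a rough illustration of that grid idea (my own toy approximation, not Oculus SDK code), you can cover the viewport with a coarse mesh and shift its vertices by the rotation measured since render time; the real implementation reprojects with the full rotation matrix and accounts for lens distortion:

```python
# Toy approximation of the warp grid (illustrative only, not Oculus SDK code):
# a coarse mesh over the viewport whose vertices are shifted by the head
# rotation measured since the frame was rendered. The real thing uses the full
# rotation matrix plus lens-distortion correction rather than this small-angle
# shortcut.
import numpy as np

GRID = 16           # 16x16 quads covering the viewport
FOV_DEG = 100.0     # assumed horizontal field of view

def timewarp_grid(yaw_delta_deg: float, pitch_delta_deg: float) -> np.ndarray:
    """Return (GRID+1, GRID+1, 2) vertex positions in normalized [0, 1]
    viewport space, shifted to compensate for rotation since render time."""
    u, v = np.meshgrid(np.linspace(0.0, 1.0, GRID + 1),
                       np.linspace(0.0, 1.0, GRID + 1))
    verts = np.stack([u, v], axis=-1)
    # Small-angle approximation: yawing by x degrees slides the image by
    # roughly x / FOV of the screen width.
    verts[..., 0] -= yaw_delta_deg / FOV_DEG
    verts[..., 1] += pitch_delta_deg / FOV_DEG
    return verts

warped = timewarp_grid(yaw_delta_deg=1.5, pitch_delta_deg=-0.3)
print(warped.shape, warped[0, 0], warped[-1, -1])
```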
Re: Sickening (Score:3)
Re: (Score:2)
Yes, a very big deal. Lag is THE main cause of nausea in VR, and every trick in the book is being used to reduce it. Add 1 frame of lag and your 90fps headset suddenly has as much lag as it would if running at only 45fps, and proceeds to make pretty much everyone sick.
Re: Sickening (Score:2)
Time from movement to screen with no lag at 90 Hz is 1/90 s. With one frame of lag it takes double that, 2/90 = 1/45 s, hence one frame of lag means latency equivalent to half the frame rate.
Re: (Score:2)
You're missing the point; we're only talking about the amount of lag present.
At 90 fps, under ideal circumstances, lag = 1/90 s.
At 45 fps, under ideal circumstances, lag = 1/45 s.
At 90 fps, with one frame of extra lag, total lag = ideal lag + extra lag = 1/90 s + 1/90 s = 2/90 s = 1/45 s.
So, even though the frame rate is indeed still 90fps, the amount of lag is the same as you'd get if you were only running at 45fps.
If frame rate itself were a major issue for nausea, then motion compensation would make sense. But it
Re: (Score:2)
That's just because you're used to watching movies at a horrible choppy 24 fps.
Look up the Soap Opera effect; it's a real thing. And you'll be very happy once you've gotten over it. To me 24 fps movies look like poorly done stop-motion.
Sickening without, you mean (Score:2)
I find it sickening when it's disabled, as motion is much more jittery and stutters while panning on today's blur-free LCD panels.
On earlier TVs that masked this with blurring or phosphor fade between frames, this was not an issue. Today's sets have high refresh rates with few artifacts and little blur, so pans and motion appear full of stutter, especially on a large set where an object or person in motion travels a larger physical distance between frames
Re: (Score:2)
Agreed. This effect actually makes me feel faintly sick on TVs that have it turned on (AND, it's one of the primary causes of the Soap Opera Effect, which, even if it doesn't make you sick, looks like shit). I always have to find it and turn it off on the sets I use.
Well, I grew up in the CRT era, where some programs (including live programming such as soap operas and evening news, and programs recorded on video tape) ran at the broadcast television standard field rate [wikipedia.org] of ~60fps. Although the broadcast image was interpolated, so they only sent half a frame at a time, the updates (and apparent fps) were 60Hz. The first LCD HDTV we got (Vizio in 2008) had this interpolation feature, and although the smooth effect it created didn't bother me (it just looked like the old
Re: (Score:2)
Although the broadcast image was interpolated
Interlaced.
That doesn't work because... (Score:5, Insightful)
You can't change the angle at which the scene is rendered by interpolating between frames.
It's not the raw framerate. It's that the scene you're viewing has to match where you're looking that quickly, or you get motion sick.
Re: (Score:2)
You can't change the angle at which the scene is rendered by interpolating between frames.
It's not the raw framerate. It's that the scene you're viewing has to match where you're looking that quickly, or you get motion sick.
While the parent is Anonymous coward, please rate him up, as that is correct.
They do. (Score:3, Informative)
Look up "asynchronous time-warp". /thread
Re:They do. (Score:5, Informative)
Look up "asynchronous time-warp". /thread
Pretty much this. Here's a video explaining time warping, it also has some links for more details in the video description.
https://www.youtube.com/watch?v=WvtEXMlQQtI/ [youtube.com]
Re: (Score:1)
This is basically the answer. The important thing to note is that async timewarp has to use extremely low-latency (like 120 Hz or faster) head-tracking sensors in order to warp the image to match the motion of your head.
As many people have already commented, the latency will make you motion sick: specifically, the time delay between moving your head and what you see reflecting that movement. You need to have the raw *refresh* rate fast enough to fix this problem, and they combat the lower *render* rate by a
Re: (Score:3)
The HTC Vive uses a different technology; I don't remember what it was called, but it seems to be more effective at preventing nausea, as I read in many reviews that people tend to experience less nausea using the Vive for long periods than with the Oculus.
I own a Vive, and when playing Elite I usually get between 45 and 55 frames per second, yet the experience seems entirely smooth to me and I don't get any nausea at all. I find no perceivable difference from playing a game running at 90 FPS.
Re: (Score:2)
Pretty much this.
No, not this at all. A lot of people are making this mistake. It's not what the submitter is talking about. He's talking about what is usually called "motion compensation" but should, in this case, probably be called "motion compensated interpolation."
It has nothing to do with the motion of the viewer's head - which is what Async Timewarp is all about - but is about the "motion" between two frames of video, and using the detected motion to create an interpolated frame between each "real" frame to increase f
Re: (Score:2)
No, not this at all. A lot of people are making this mistake. It's not what the submitter is talking about.
Of course this is not what the submitter is talking about, I never said it was.
The need for high framerate in VR is mainly due to rotation of the head, which is exactly what time warping is good for.
If you were unable to rotate your head and could only translate it, VR wouldn't have the same need for 90+ FPS.
I can't see how motion compensation would do a better job, in fact I don't quite see how it would be any good at all.
So yes, very much "this" indeed.
Re: (Score:2)
That's not motion compensation - not the kind the submitter is talking about, anyway.
Latency (Score:5, Informative)
Broadcast TV with this stuff on has considerable latency. It looks at the next frame and then interpolates its way there, might work reasonably well to watch a recorded show but would be pretty horrible for real time motion from VR. Next question?
Re:Latency (Score:5, Informative)
Even with a stable framerate, this technique intentionally delays the next frame to add compensation frames.
As an example, let's have a magic VR helmet running at 120 Hz and instant processing (i.e., 0 ms gray-to-gray time, which doesn't exist) and a video card capped at a perfectly stable 30 FPS (aka 30 Hz).
We will split a second into ticks - we'll use the VR helmet's frequency of 120 Hz, so we have 120 ticks, numbered 1 to 120. (Just to annoy my fellow programmers!)
We therefore get a new actual frame from the video card every 4th tick - 1st, 5th, 9th, etc.
Without motion compensation, we would display a new frame every 4th tick - 1st, 5th, 9th, etc.
With ideal (instant) motion compensation, we can't compute a transition frame until we have the new frame. So we could, theoretically, go real frame #1 on 1st tick, computed frame based on #1 and #2 on 5th tick, real frame #2 on 6th tick, computed frame based on #2 and #3 at 9th tick, etc.
This would also be jerky - 2 motion frames then 3 at rest? We could push the frames back a tick and fill the interval with three compensation frames, but then we increase the delay, which is always higher than this example (and is multiplicative). So we'd have frame #1 at 5th tick, computed frames at 6th/7th/8th, frame #2 at 9th tick, etc. You've now introduced a minimum 4 tick delay, which at 120Hz is 1/30 of a second, or 33ms! To an otherwise impossibly-perfect system!
What about historical frames instead to PREDICT the next blur? Well, then, when something in-game (or, really, on screen) changes velocity, there would be mis-compensation. (Overcompensating if the object slows, undercompensating if it speeds up, and miscompensation if direction changes).
There are more problems, too:
- This doesn't help when the video card frameskips/dips.
- Instant GTG and instant motion frame computation do not exist. At best, they're sub-tick, but you'd still operate on the tick.
- Input delay already exists for game processing, etc.
- Increased input delay perception would be exponential to the actual length of the delay. For example, 1-2ms between keypress and onscreen action? Hardly notable. 50ms delay just to start registering a motion on screen and course correct? Maybe OK, maybe annoying. 150-200ms? Brutal.
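Here is a small, self-contained model of the schedule described above (illustrative only; it assumes zero GTG and zero compute time, exactly as in the idealized example):

```python
# Small model of the tick schedule above: a 120 Hz display fed by a 30 FPS
# renderer, with and without one-frame-lookahead interpolation. Purely
# illustrative numbers; GTG and compute time are assumed to be zero.

DISPLAY_HZ = 120
RENDER_FPS = 30
TICKS_PER_FRAME = DISPLAY_HZ // RENDER_FPS     # 4 ticks between real frames

def arrival_tick(frame):                       # real frame N arrives here
    return 1 + (frame - 1) * TICKS_PER_FRAME   # 1, 5, 9, ...

def display_tick_no_interp(frame):
    return arrival_tick(frame)                 # show it as soon as it arrives

def display_tick_with_interp(frame):
    # Hold frame N until frame N+1 has arrived so the in-between frames can
    # be computed, then show N followed by three interpolated frames.
    return arrival_tick(frame + 1)             # 5, 9, 13, ...

for n in (1, 2, 3):
    delay_ticks = display_tick_with_interp(n) - display_tick_no_interp(n)
    print(f"frame {n}: +{delay_ticks} ticks = "
          f"{1000 * delay_ticks / DISPLAY_HZ:.1f} ms extra delay")
```

Each real frame gets held back by four display ticks so its in-between frames can exist, which is the 33 ms penalty before any real hardware delays are added.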
Re: (Score:2)
VR can't happen with current or foreseeable technology for precisely the reasons you mention.
Err, no. The reasons mentioned have nothing to do with why VR "can't happen." The motion compensation trick is entirely unnecessary if the GPU can render frames fast enough. And it runs counter to the aims of VR, anyway.
In case you hadn't noticed, VR is already happening. It might fizzle out, it might not. It's too early to say.
It sounds like you just have a beef with VR, saw an opportunity to air said beef, and took it without really reading what was being said.
Re: (Score:2)
Wait, an actual explanation instead of "because profits, and also fuck you"? What site am I on?
Ah yes, it is a rather narrowly targeted question about technical details, instead of business, politics, or sociology. Therefore, the guy who thinks he's right probably will be.
Well done. The rest of youse guyse, still on notice.
Re: (Score:2)
but would be pretty horrible for real time motion from VR. Next question?
Motion VR? It's horrible enough for playing standard XBox games. It's the first thing I turn off on my HDMI inputs when I get a new TV.
Re: (Score:2)
Certainly it takes much more computational power, and therefore greater potential latency, to actually render the raw frame than to do motion compensation?
It's not throughput, it's latency. (Score:4, Informative)
Because it works by delaying frames (Score:5, Insightful)
In order to add a frame between 1 and 2, you have to have received both frame 1 and frame 2. People are already getting sick because what they see and do don't match, you're going to make it worse by making what they see lag further behind what little the headset picks up.
Also because it is artifact-prone (Score:2)
Re: (Score:2)
It doesn't help that the technique is also *very* prone to artifacts
But the submitter said "no visible stutter or strobing whatsoever"!
Do you mean to say he may be mistaken?! I just don't know what to believe any more.
Re: (Score:2)
This isn't really accurate; you can (and they do) use frame 1 only, along with knowledge of the camera motion through the scene. It's not perfect, but it's better than missing a frame.
Re: (Score:2)
Everyone is conflating two entirely different concepts.
First, there's motion compensation - perhaps better termed "motion compensated interpolation." That's what the GP and the submitter are talking about. For that you need the future frame.
But there's also what Oculus calls Asynchronous Timewarp, which compensates for the motion of the player (but otherwise has nothing to do with the other kind of motion compensation) by warping a single frame, either because it has the extra time to do so before display is
Re: (Score:2)
Yes, it really would be noticeable.
This is why modern TVs have game modes to turn off motion compensation and other processing. Otherwise there's too much latency and gameplay suffers.
Even with game mode, I still find playing on a CRT or other truly-dumb monitor to be much more responsive for gameplay.
Re: (Score:2)
It wouldn't do the job unless increased motion-sickness is the intended goal.
Latency (Score:3)
High frame rate is important for motion data, but it's also (and perhaps primarily) about latency. It is *much* more jarring in my Rift to experience latency -- akin to rapid sea-sickness -- than to have a lower frame rate.
Getting more motion information would be great but we can't sacrifice latency for it, and those TVs tend to have a very noticeable amount of latency. Not that this is an unsolvable problem -- I just haven't seen it yet.
Re: (Score:2)
It is an unsolvable problem. In order to create an intermediate frame you must know the future frame.
Re: (Score:2)
They already know the camera position for the future frame, which is enough to make some minor adjustments. The result isn't as good as having the actual future frame, but it's also better than using the past frame.
Re: (Score:2)
They already know the camera position for the future frame
How? Player input and/or head position can change before the future frame is to be rendered.
Latency (Score:2)
The biggest reason is latency. People start getting nauseous when they turn their head in a VR headset and their view doesn't change quickly to match the head movement. Motion Compensation on TVs relies on having at least two frames (or 22ms worth of frames at 90fps if this were a current-gen VR headset) already at the TV in order to do the calculations for a frame in between, and in practice they could be buffering 2-3 seconds worth of frames for their calculations and you'd never notice that your TV is di
Huh? (Score:3)
you need a GPU capable of pushing at least a steady 90 FPS per eye, or a total of at least 180 FPS for both eyes
Um, what? That's entirely wrong. You need a steady 90 fps, that's it. There's no doubling because of eyes, this isn't 3D TVs where you need to alternate frames. The only other concern is that the resolution is higher than 1080p.
Re: (Score:2, Informative)
No, it's correct. For a single GPU solution without Nvidia's new tech (that won't be widely used until AMD releases an equivalent) you render the frame for one eye, clear buffers, render it for the other. That means if you sent all frames rendered to a single monitor you'd get 180 FPS.
Re: (Score:2)
That's not how it works.
Re: (Score:2)
Besides, you don't need particularly novel tech to render on two buffers simultaneously, reusing as much of the work as you can. Stereoscopic rendering is just a change of matrices (ignoring the final projective steps), so you can reuse a lot of data and processing. Just rendering
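To make the "change of matrices" point concrete, here is a minimal sketch (my own illustration, with an assumed 64 mm IPD and sign conventions that differ between engines): both eye views are the same head pose offset by half the interpupillary distance, so culling, animation and most scene setup can be shared between the two renders even without dedicated single-pass-stereo hardware.

```python
# Minimal sketch (not any engine's API; IPD value and sign convention are
# assumptions) of deriving both eye views from one head pose by translating
# half the interpupillary distance each way.
import numpy as np

IPD = 0.064  # metres; a typical adult interpupillary distance (assumed)

def translation(x: float, y: float, z: float) -> np.ndarray:
    m = np.eye(4)
    m[:3, 3] = (x, y, z)
    return m

def eye_view_matrices(head_view: np.ndarray, ipd: float = IPD):
    """Given a 4x4 head view matrix, return (left, right) eye view matrices."""
    left = translation(+ipd / 2, 0.0, 0.0) @ head_view
    right = translation(-ipd / 2, 0.0, 0.0) @ head_view
    return left, right

left_view, right_view = eye_view_matrices(np.eye(4))
print(left_view[:3, 3], right_view[:3, 3])   # only the x offset differs
```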
Re: (Score:2)
Apparently that "just" bit requires new GPU hardware, and at this instant in time, only 2 GPUs (the 2 new Nvidia 1000 cards) even support it at all. Any VR developer who wants to eat has to support AMD hardware as well because it's the chip in the PS4 and probably the chip used in the new generation of VR supporting consoles.
Re: Huh? (Score:3)
Actually, it's worse than that. 90fps is the rock-bottom MINIMUM. If you want a frame rate that satisfies the Nyquist minimum for high-contrast peripheral vision, even 400fps is on the low side.
The human eye is really hard to fully satisfy with immersive video, because trying to define the eye's "resolution" or "frame rate" is kind of like trying to define the resolution of an Atari 2600 or an Apple IIe. The fovea has relatively high color resolution, but comparatively poor luminance resolution. Peripheral
Re: (Score:2)
Bullshit. Look at any demo that animates a high-contrast white object horizontally on a black background at a rate of 120 pixels per second on three monitors side by side... one moving it by one pixel every 1/120th of a second, one moving it by 2 pixels every 1/60th of a second, and one moving it by 4 pixels every 1/30th of a second. I can absolutely GUARANTEE that nearly everyone can accurately and easily tell them apart (though some users might need to be told what to look for).
Yes, to some extent, motion
Because there is no such thing as magic (Score:2)
It works in a TV broadcast because it's a stream. It will work in a 3d streaming video to a headset but that's without head tracking.
It can't work for interactive head tracking. It's a thing called latency, and it's the reason people get headaches from VR. Your brain does its own head tracking, and when what you see doesn't match, you get vertigo and/or a headache.
You also get vertigo from confusing your brain by spending too much time in zero G. A ride in an elevator can make you lose your cookies. That's be
Re: (Score:3)
You feel motion but you don't see it, and your brain draws two opposing conclusions.
... which is actually kind of amusing when I think about it. There's a watchdog circuit somewhere in your brain dedicated specifically to checking whether or not your sensory inputs match up, and when it detects that they don't, it assumes that you are drunk or high (or otherwise somehow poisoned) and initiates the upchuck routine. How many generations of questionable-quality-alcohol drinkers did it take to evolve that?
Re: (Score:2)
"It's a thing called latency and it's the reason people get headaches from VR"
No, it's ONE of the reasons people can get headaches from VR. Another cause of VR-related migraines is that up to around 25% of all humans have some degree of problem with depth perception, and using VR helmets will trigger that; the more problems you have with it, the faster it will trigger. Resolution, FPS and motion latency have got nothing to do with that problem
Re: (Score:2)
And another is that there is no difference in focus in a VR world. Your eyes will strain to make sense of that, just as they do when you wear someone else's glasses.
gaming needs the current video, not old video (Score:2)
Actually, motion compensation requires both the current frame and one or more future frames to be able to compute intermediate frames. With a media player where the full video already exists it is simple enough to access "future" frames. With a TV showing video inbound from a broadcaster, cable company or media device, you can delay the output by 1/30th or 1/24th of a second, delay sound similarly, and no one is the wiser.
But in video gaming, delaying the video by 1/30 or 1/24 or even 1/20 of a secon
Idiot (Score:1)
Because you are an idiot?
In order to generate extra in-between frames you need at least 2 frames. For TV this means that the output is delayed by a few milliseconds, which isn't a problem because TV is not interactive; but for VR, which is interactive, you've just added more delay into the controller response system.
Some graphics cards already do (Score:1)
Works on their recent high end cards
http://www.slashgear.com/nvidi... [slashgear.com]
Re: (Score:2)
Not what the submitter is talking about.
There's "motion compensat[ed interpolation]" and there's "compensating for player motion." Two entirely different things.
ATW FTW (Score:3, Insightful)
Oculus 1.3 runtime for the Rift was released with async timewarp. When it was released, DK2 users accustomed to earlier runtimes without it were all over the boards with phrases like "holy shit" and "DK3" to describe how ATW changed everything for them. Jitter issues magically disappeared overnight with only a simple software update.
More generally there is one and only one "trick" for improving VR quality going forward and that is foveated rendering. This technology is absolutely critical to any serious vision of future HMDs.
To provide some context, the cones of our eyes cover a massive (cough cough) 15 degrees of arc. That's it. You can't even lean back and read 1/4 of what is on your monitor without moving your eyeballs around to do it. 4K is overkill... 1080 is overkill... The future in VR is entirely locked up in sensing eye orientation and optical and/or electronic steering of relatively low resolution displays in response.
Re: (Score:2)
Yes, very quickly -- even more quickly than with other approaches, because your eye saccades [wikipedia.org] really fast. (Although you probably don't have to render intermediate frames during the saccade, because IIRC your brain sort of ignores incoming detail during the movement itself.)
The win is that you have to render a much smaller patch of high-resolution detail, which saves computation in some parts of the pipeline. You've still got to do all the intersection, bounding-pyramid, depth-sorting and whatnot for your wh
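A back-of-the-envelope sketch of why that wins (all numbers here are assumptions for illustration, not any headset's spec): shade the full field of view at reduced resolution and only a small eye-tracked inset at full resolution, then compare pixel counts.

```python
# Rough illustration of the foveated-rendering saving. Panel size, field of
# view, fovea size and periphery scale are all assumed numbers, not specs.

FULL_W, FULL_H = 2160, 1200          # headset panel (both eyes)
FOV_DEG = 110.0                      # assumed horizontal field of view
FOVEA_DEG = 15.0                     # region covered by the sharp inset
PERIPHERY_SCALE = 0.25               # render the rest at quarter resolution

full_pixels = FULL_W * FULL_H
fovea_frac = (FOVEA_DEG / FOV_DEG) ** 2
foveated_pixels = (full_pixels * fovea_frac                     # sharp inset
                   + full_pixels * (1 - fovea_frac) * PERIPHERY_SCALE ** 2)

print(f"naive:    {full_pixels:,} pixels shaded per frame")
print(f"foveated: {int(foveated_pixels):,} pixels "
      f"({foveated_pixels / full_pixels:.0%} of the naive cost)")
```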
Several reasons (Score:2)
a) What you're describing is frame doubling or similar tricks (every manufacturer has its own name for it). It only makes the system APPEAR better and brighter. This is often done by simply doubling the frames or turning the backlights on and off between changing frames. This makes it appear "crisper", brighter and smoother; it's only a trick though, and most displays already do it. It doesn't actually improve the actual experience, only tricks your brain into thinking it does at first glance and often when y
Total Misunderstanding... (Score:2)
These are not the same problems at all. Look at gaming monitors and ping times to better understand the problem. (These aren't the same thing, but at least the underlying reasoning is closer.) The VR problem is one of synchronicity. If you're sitting looking at a screen of a movie or something, you don't need to change the image very often, and your brain will work it out. If there is something you are carefully paying attention to, like a ball in a sporting event, 30 FPS is not smooth enough (y
my nexus player does this (Score:2)
I have a Nexus Player which I like, but if I cast anything to it, or play local content with MX Player, I get the soap opera effect, making a 24 fps movie look like it's at 60. Is it the Nexus Player doing this or my TV? I've posted elsewhere and some said it's my TV, but if so, wouldn't it do it with everything, like Netflix, YouTube, etc.?
Drives me nuts.
Re: (Score:2)
Is it the Nexus player doing this or my TV?
It probably is the TV. It may have different settings for different sources. Look for "judder reduction" or something like that.
System approach (Score:2)
Many posters have pointed out that latency in the graphics card is an inherent problem.
There's also latency in the monitor, and latency in the computer which creates each frame. Merging the functions of the monitor and the graphics card should allow a latency reduction of 1 frame (at the display rate). (This brings up new problems, of course. Interfaces change, and the monitor has to handle the weight and heat of the graphics card function - a problem if the monitor is a VR headset.)
Asynchronous Timewarp (Score:2)
... does exactly this, and it works a treat. The only difference from TVs is that it is only used when your framerate drops below 90 fps.
It does look weird when it goes on for longer periods, but the alternative would be juddering which is insta-puke-city.
Re: (Score:2)
Asynchronous Timewarp is something different than what the submitter is talking about.
The submitter is talking about interpolating between rendered frames to create smoother video - i.e. only rendering every other frame fully, then just interpolating between them to fill the gap.
The obvious problem is that it means you've added an extra frame's delay to everything. Also, motion compensated interpolation isn't perfect.
Interactive vs. Passive (Score:2)
A good VR experience (and preventing motion sickness) requires fast response time. This requires low latency of the entire chain from the motion sensing device, through the USB connection, OS process scheduling, scene calculation and rendering and any buffering in the video card and display.
A system that is able to respond quickly can obviously produce more frames per second. But just creating more frames per second without reducing latency will not help the experience feel more convincing (or prevent you f
Why do that now? (Score:2)
There are already GPUs that can do 90 FPS without interpolation techniques, which would not provide equally good latency or eye tracking. In a couple of years the same level of performance will be cheap and widespread. Why would manufacturers focus on short-term tricks and not the real thing?
because Latency, also you are a noob (Score:2)
What good would an extra 60 frames _from the past_ do for your VR experience, other than make you puke?
BTW, Microsoft once championed the idea of generating 3D from a bunch of 2D tiles, which would enable tricks like motion compensation for tile reuse. It went pretty well for everyone involved (read: billions invested/wasted by all MS 'partners').
https://en.wikipedia.org/wiki/... [wikipedia.org]
Stupid question. (Score:2)
No, it's just 90 FPS (Score:1)
FPS isn't cumulative like that. 90 FPS "per eye" is just 90 FPS overall. It's the draw area (number of pixels) that increases. And, with what we currently have, it barely increases: both Oculus and Vive are 2160 x 1200, only 25% more than the usual 1920 x 1080.
Because you can't interpolate into the future (Score:2)
You need to know what the next frame will be before you can interpolate towards it. This means delaying everything, and that's a big no-no in VR.
Also, motion compensation is not perfect. That's why all these "60fps!" trailers people like to put on YouTube look quite underwhelming.
Real 48/60fps video looks a lot nicer.
Re: (Score:2)
Time-warp is not motion compensation, not in the sense used in the question.
It does compensate for motion, yes, but "motion compensation" is a different thing in video terms (it should probably be called "motion compensated interpolation.")