Samsung, Stanford Make a 10,000 PPI Display That Could Lead To 'Flawless' VR (engadget.com)

Samsung and Stanford University have developed OLED technology that supports resolutions up to 10,000 pixels per inch -- "well above what you see in virtually any existing display, let alone what you'd find in a modern VR headset like the Oculus Quest 2," reports Engadget. From the report: The new OLED tech uses films to emit white light between reflective layers, one silver and another made of reflective metal with nano-sized corrugations. This "optical metasurface" changes the reflective properties and allows specific colors to resonate through pixels. The design allows for much higher pixel densities than you see in the RGB OLEDs on phones, but doesn't hurt brightness to the degree you see with white OLEDs in some TVs. This would be ideal for VR and AR, creating a virtually 'flawless' image where you can't see the screen-door effect or even individual pixels. It might take years to arrive, since it would require much more computing power, but OLED tech would no longer be the obstacle.
  • by MikeDataLink ( 536925 ) on Monday October 26, 2020 @08:32PM (#60652494) Homepage Journal

    I really love the experience of VR. But there's just no killer app or game. Everything out there feels broken and low quality.

    • That's because the screen door effect is still prominent, so headset sales are relatively low in spite of the hype -- none of the VR proponents are willing to even objectively self-assess and acknowledge it. VR headsets still haven't sold anywhere near what even the 1990s-era Sega Saturn console sold (it was widely considered a FLOP... anyone even remember it? Neither do I). Anyway, the point is that none of the VR pushers are willing to acknowledge SDE is an issue. I got an Oculus Quest 2... of course I knew th

      • by Compuser ( 14899 )

        16K support has been done for a while now.
        https://www.tweaktown.com/news... [tweaktown.com]
        If they just make 16K-per-eye displays, then I am guessing it would already solve screen-door issues for most people. So it could happen fast. I just do not want to think about how much something like this would cost, at least initially.

        • by Compuser ( 14899 )

          Actually... USB 4.0 next year will support 16K. So if they can make the displays, the whole setup could be available soon and be fairly standard. And more affordable than I thought, maybe even within range of some rich individuals (as opposed to, say, high-end medical research labs).

        • We're a long way from 16K displays for gaming.
          GPUs simply can't push pixels fast enough for that, yet; and VR at 24fps is... painful.
          • GPUs simply can't push pixels fast enough for that, yet; and VR at 24fps is... painful.

            You don't need 16K pixel density in your peripheral vision.
            The periphery can afford to be blurry
            (foveated rendering).

            Also, GPU ray tracing is slowly becoming "a thing". It has the advantage of being an "embarrassingly parallel" class of problem (each traced ray is independent of the others) and thus you could throw more GPUs at the problem.

            So the very near future could see 16K VR headsets, driven by per-eye workstations that each have 4x SLI/Crossfire stacks of dual-GPU cards, and render the actual 16k reso
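
            To make the "embarrassingly parallel" point above concrete, here is a minimal sketch (pure illustration, not any vendor's API): because each ray's result depends only on that ray, a frame's pixels can be partitioned across as many workers (or GPUs) as you can afford.

            # Illustrative only: partition independent per-ray work across processes.
            from multiprocessing import Pool

            WIDTH, HEIGHT = 256, 256  # tiny stand-in for a 16K-per-eye frame

            def trace_ray(pixel):
                """Hypothetical per-ray shader: its output depends only on this ray."""
                x, y = pixel
                return (x % 256, y % 256, 128)  # placeholder color, no shared state

            if __name__ == "__main__":
                pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
                with Pool() as pool:  # adding workers scales throughput ~linearly
                    frame = pool.map(trace_ray, pixels, chunksize=WIDTH)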

            • The 3090 has enough fill-rate to almost hit 60fps at 8K on contemporary games.
              So there's a world where, with foveated rendering backed by good eye tracking (without the tracking it's a non-starter: if you try to force yourself to only look directly forward in a VR headset, you're going to give yourself a headache), we could pull this off.

              But it's still well outside of realism.
              In the general case of 16k per eye, a 3090 is going to give you somewhere around 3.75FPS, and no other card is even *close* to
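
                Rough arithmetic behind those figures (resolutions assumed, numbers rounded; a sketch, not a benchmark):

                px_8k  = 7680 * 4320   # ~33.2 Mpx
                px_16k = 15360 * 8640  # ~132.7 Mpx, 4x the pixels of 8K
                fps = 60 * px_8k / (px_16k * 2)  # two eyes at 16K each
                print(round(fps, 2))   # ~7.5 fps at a 60 fps 8K baseline;
                                       # the ~3.75 figure implies ~30 fps at 8K
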
              • by Compuser ( 14899 )

                Yeah, the 3090 is cheapo crap that I was not even thinking of. I think a DGX A100 should be able to handle it, though.

          • by DrXym ( 126579 )
            The display can be high-res without the source image needing to be any higher-res -- it would still improve the experience for VR because the image would look sharper, without all those annoying RGB elements or gaps.
      • Much more important than that is the lack of foveal tracking. We are able to immerse ourselves in low-quality images (even pixel-art worlds), that's a mental thing, but the mismatch with how we tend to move our eyes far more than our heads is what physically breaks it. Without it, it just feels like you are wearing an awkward space helmet.

        The other benefit of foveal tracking is you really only need high-dpi rendering in a tiny oval where the eyes are looking, the rest can be black-and-white and pixe
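
        As a sketch of that "tiny oval" idea (a circle here for simplicity; all thresholds and names are made up):

        def shading_rate(px, py, gaze_x, gaze_y, fovea_radius=64):
            """Return 1 (full rate), 2 (quarter), or 4 (1/16) shading density."""
            d2 = (px - gaze_x) ** 2 + (py - gaze_y) ** 2
            if d2 < fovea_radius ** 2:
                return 1   # foveal region: native resolution
            if d2 < (4 * fovea_radius) ** 2:
                return 2   # near periphery: half resolution per axis
            return 4       # far periphery: can be blurry, even grayscale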

          • Neat. It has a proprietary Windows driver. As far as I'm concerned, it doesn't exist. If it were an open-API Linux driver, I might care.
        • by robi5 ( 1261542 )

          I get the "other benefit" part (why render gigapixels in 32-bit color in peripheral vision), which I thought was _the_ benefit of foveal tracking. So what else do you refer to in the first paragraph? I.e., why does it matter if I glance sideways rather than rotate my head? Should the image translate, or some such?

          Btw. I think the focal length and aperture should be tracked too for correct depth of field rendering.

          • Depth of field

            by DrYak ( 748999 ) on Tuesday October 27, 2020 @05:58AM (#60653440) Homepage

            Btw. I think the focal length and aperture should be tracked too for correct depth of field rendering.

            This is entirely achievable by exclusively relying on the eye tracking.

            Turns out:
              - part of the focusing is controlled by a reflex (if you cross your eyes to look at a closer object, your eye's lens will automatically adjust(*) for the closer object)
              - the rest is adjusted on an as-needed basis depending on the perceived sharpness of the picture.

            That's why:
              - short-sighted people can wear glasses and still read (close) text (e.g. a book) with the glasses still on(*): even though the lens now needs to focus much closer, it simply adjusts accordingly(*).
              - "magic eye" patterns work: the "depth" perception comes from the crossing of the eyesight (more cross-eyed == closer) even though the piece of paper is always at the same distance and always requires the exact same lens focus.

            (Under the hood, that's because your brain judges depth entirely from parallax: it compares the overlap of the two signals coming from the two eyes, and doesn't give a damn about the current contraction(*) of the lens.)

            So all such a system would need to do is precisely track the direction each eye is pointing, independently, and compute the distance at which those directions cross.
            The point where the two gazes cross == the distance you're looking at == the part of the scene that should be sharply in focus (unless the scene is outdoors and brightly lit).

            The problem is then generating the correct blurriness:
              - with classic polygons, you can only approximate it with z-buffer-dependent blur (an approximation that partially defeats the whole point of being realistic)
              - with ray tracing, blurry output requires more rays per pixel (covering a wider target), but you could re-use rays from adjacent pixels and offload the handling to the same engine (running on the left-over AI cores) that currently manages RT quality at low ray counts in current-gen GPU ray tracing.

            So that's yet another argument why ray-tracing and VR should go hand in hand.

            ---

            (*): well, if you're young enough to have a lens still soft enough to adjust at your age.
            Older /. greybeards would need to adjust focus manually with reading glasses, at least until they get their lens replaced due to cataract (modern replacement lenses tend to be soft and allow reading without glasses again).
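
            A minimal sketch of the vergence computation described above (illustrative only; vectors are in head space, in meters): find where the two gaze rays come closest, and treat that as the focus distance.

            import numpy as np

            def vergence_distance(left_pos, left_dir, right_pos, right_dir):
                """Distance to the closest approach of the two gaze rays."""
                d1 = left_dir / np.linalg.norm(left_dir)
                d2 = right_dir / np.linalg.norm(right_dir)
                w0 = left_pos - right_pos
                b, d, e = d1 @ d2, d1 @ w0, d2 @ w0
                denom = 1.0 - b * b          # ~0 when the gazes are parallel
                if abs(denom) < 1e-9:
                    return float("inf")      # eyes parallel: focused at infinity
                t1 = (b * e - d) / denom     # unit directions, so a == c == 1
                t2 = (e - b * d) / denom
                crossing = (left_pos + t1 * d1 + right_pos + t2 * d2) / 2
                return np.linalg.norm(crossing - (left_pos + right_pos) / 2)

            # e.g. eyes 64 mm apart, both converging on a point 0.5 m ahead:
            L, R = np.array([-0.032, 0, 0.0]), np.array([0.032, 0, 0.0])
            target = np.array([0.0, 0.0, 0.5])
            print(vergence_distance(L, target - L, R, target - R))  # ~0.5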

            • by robi5 ( 1261542 )

              Thanks for the explanation, good point: it's kinda hard to change the focal length of the eye without also crossing or un-crossing the eyes. Another secondary input to focal-length inference could be the Z-distance of the spot in the scene I'm looking at, e.g. a nearby object or a distant building. It's possible, but unlikely, to focus on something other than what we're looking at.

      • by doug141 ( 863552 )

        A lot of people don't want VR at any price or quality; I don't know if it'll ever be a "mainstream success."

        • The majority would happily accept/want VR, just not at the crappy quality and steep prices at which it currently exists. It will never be more than a niche until they can get the quality up and the pricing down to commodity levels.
      • The Saturn did decently in Japan. Far from PlayStation numbers, but it beat the Nintendo 64 by a hair.

      • That's because the screen door effect is still prominent

        Nah. Not even noticeable on my Index, and it isn't even the hottest shit out there anymore.

        so headset sales are relatively low in spite of the hype

        Relative to... what? Mice? Displays?
        It's a multi-billion dollar industry now with yearly shipments counted in the millions.

        none of the VR proponents are willing to even objectively self-assess and acknowledge it.

        Ah yes, I mean you can literally watch 1000 random reviews of VR gear on YouTube, but sure... That's what it is.

        VR headsets still haven't sold anywhere near what even the 1990s-era Sega Saturn console sold

        Well sure... NVidia hasn't sold as many GTX 1080 Tis either. Did you have a point? I'm pretty sure gaming consoles and VR headsets have drastically different margins and revenue models.

      • ...1990s-era Sega Saturn console sold (it was widely considered a FLOP... anyone even remember it? Neither do I)...

        What???

        I owned one back in the day, and finished Tomb Raider on it as a kid.
        I now have a boxed Sega Saturn, among other systems.

        If you claim not to remember the console (despite being able to cite the console's existence), that's your problem.

      • That's because the screen door effect is still prominent

        Nope.

        I mean, screen door is present but that's not the reason. The reason is that the images don't respond to eye movements, that controller inputs don't produce corresponding accelerations (no matter how fancy your chair), etc., etc.

        IOW what happens in VR still isn't anything like what happens in real life. All they have is a stereo image, that's it.

      • by DrXym ( 126579 )
        The screen-door effect is definitely the biggest issue with VR. Even on a 400 ppi display, once you put a lens in front of it you see all those individual RGB pixels and even the gaps around them. So even if the image resolution were no better (e.g. 2K per eye), having a display where you can't see the individual display pixels would be a vast improvement. It all depends, of course, on the greater resolution not compromising the display in other ways, e.g. refresh rate, contrast, brightness, etc.
        • by robi5 ( 1261542 )

          That misses the point: the same tech that enables, e.g., a jump to 2K per eye could instead be used to increase the emitting area relative to the pixel-grid pitch, i.e. keep the current resolution and just shrink the pixel borders. In fact that may be easier, all things being equal.
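
          Toy numbers for that trade-off (all assumed): screen-door visibility tracks the fill factor, i.e. the fraction of each pixel cell that actually emits light.

          ppi = 400
          pitch_um = 25400 / ppi       # pixel pitch: ~63.5 um at 400 ppi
          emitter_um = 40              # hypothetical emissive aperture width
          fill_factor = (emitter_um / pitch_um) ** 2
          print(f"{fill_factor:.0%}")  # ~40%: the remaining ~60% reads as "grid"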

      • You bought the worst VR system on the market and think it's not good, what a shock.

        There are some really fun VR games and apps, but there's also a lot of shovelware. Lots of devs are rushing products to market and it shows. Give it some more time and we'll have more fully fleshed-out games like Half-Life: Alyx, which is incredible. You wouldn't know about it, as your toy goggles don't support it.

    • What about Star Wars: Squadrons? Seems like a killer game for those with VR + HOTAS.

  • by Joe_Dragon ( 2206452 ) on Monday October 26, 2020 @08:39PM (#60652512)

    You'd need a 10 Gb uncapped DIA fiber line for streaming, or a high-end GPU, to drive that thing.

  • If they get pixel density this high, they could increase frames per second by alternating which pixels are lit. There would be far more pixels than could reasonably be addressed, so why not use the extra pixels to improve frame rate?
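
    One way to picture that interleaving (purely illustrative): drive alternating checkerboard halves of the panel on alternating frames, doubling the effective frame rate at half the spatial density per frame.

    def lit(x, y, frame):
        """True if pixel (x, y) is driven on this frame (checkerboard halves)."""
        return (x + y + frame) % 2 == 0  # even half one frame, odd half the next
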
  • I'm no expert on VR, but I recall reading years ago about people who felt nauseous from early VR attempts. It was speculated that the nearly imperceptible lag was the culprit, not insufficient resolution. So I wonder whether just cranking the resolution through the roof will really make things better.
    • A sufficiently power-dense battery would also be an improvement. Right now it's a pain to be tethered to a wire.
    • by doug141 ( 863552 )

      Most people can get over it; it's just a matter of learning to pay attention to your legs and not your eyes for balance. After you do that, you can stop paying attention to your legs and it all goes automatic again, if your inner ear is working. Saying VR has a nausea problem is like saying bicycles have an Achilles heel because novices can't balance the things. If a few people can't or won't adapt, the party will go on without them just fine.

      • When I first got my headset, there were some things that could induce nausea. Skyrim in particular, and only in free movement mode.
        After 3 or 4 sessions, it never happened again.
      • Most people can get over it, it's just a matter of learning to pay attention to your legs and not your eyes for balance. After you do that, you can stop paying attention to your legs and it all goes automatic again if your inner ear is working.

        It's interesting that you characterize the nausea effect as an inner-ear / balance problem. I used to work as a commercial fisherman, and got plenty of experience adjusting to being at sea - what is commonly referred to as "getting your sea legs", as well as the less commonly known problem of readjusting to land afterwards (getting your land legs?).

        You are right that it requires heeding your inner ear rather than your eyesight. I think we're so used to visual clues like perpendicular walls, door frames, et

    • Things are better. "Early VR" attempts are just that. This is an industry that has made huge leaps over the past 5 years. I remember playing with an Oculus DK1 and I nearly threw up; I get horribly seasick. Yet with a Rift S I have zero issues even in long gaming sessions, and it is far from the best-performing headset on the market in terms of latency, refresh rate, or resolution.

    • by PPH ( 736903 )

      That's mainly due to the system's latency: the delay between moving your head, the sensors picking up that movement, recalculating the scene, and refreshing the display. Older systems were bad. Newer systems have improved.

      But here's one problem: one thing that mitigates the nausea-inducing effects of latency is your brain's becoming accustomed to a slow input. Once it gets used to the idea that your visual input has some inherent lag, it compensates and you get over it. But the thing is: I want to keep my b

  • Well, not that Butterfly effect, but this sounds like a process similar to how iridescent colors arise in nature, as on butterfly wings. Aside from VR, for regular images imagine doing the same thing without an LED backlight, using just reflected ambient light. You probably don't need 10,000 PPI on a tablet.

  • If this is true, I suspect that graphics pipelines will have to shift to rendering vector graphics directly, i.e. instructions will be issued to draw a triangle between three points and fill it with a given color. How this will be accomplished at the rendering layer, I don't know, but it won't involve translating to pixels anymore.

    • Why? What's the benefit? In the end we need to draw to the display, so something somewhere will need to push pixels. The current generation of DisplayPort can already push two 8K displays at 120Hz. By the time this hits the market, no doubt that standard will have increased as well.

      The problem with adding a conversion step is that it introduces latency. So it may be more efficient to send triangles, but then they will need to be converted to pixels for display.
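
      A quick sanity check on the raw numbers (uncompressed, 30 bits per pixel assumed) shows why stream compression enters the picture for links like that:

      px_8k = 7680 * 4320
      raw_bits = px_8k * 2 * 120 * 30  # two 8K streams at 120 Hz, 30 bpp
      print(raw_bits / 1e9, "Gbit/s")  # ~239 Gbit/s raw, far beyond any
                                       # current cable without compression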

  • Pixelation is a little different from the screen-door effect. It might be visually acceptable to render pixelated graphics, as long as they don't have the screen-door effect. Think Quake 1-level graphics, but upscaled with no increase in underlying render resolution. The eye seems to accept pixelated graphics; look at all the pixelated games we play.
    • The eye seems to accept pixelated graphics, look at all the pixelated games we play.

      It's the brain, not the eye. Most people don't really think about it much or realize it, but our brains render our idea of reality constantly. We are only ever looking at a tiny, baseball-sized part of the world with direct input from our eyes at any given time. This is why you can swear you saw something, but the moment you look directly at it, it disappears.

    • Yes. Surprisingly... even pixelated VR (I had to play some games this way with my old GPU) doesn't really trip up the brain's ability to "feel" like you're there.
  • by Merk42 ( 1906718 ) on Tuesday October 27, 2020 @08:36AM (#60653814)
    You know there will be people, some probably from here, who "can totally see the pixels, sorry about your poor eyes grandpa".
  • From what I've read the main problem with VR isn't screen technology as much as latency. The sensors take a bit of time to figure out how you are moving your head, then that needs to be sent down USB or wireless to the computer, which then has to re-render the scene and pump it back to the display, which can also be laggy.

    None of the interfaces or operations is laggy in and of itself, but they can add up. I think Thunderbolt can help out here, as latency across PCIe is much lower than over USB.
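
    As a toy motion-to-photon budget for that pipeline (every figure below is assumed, not measured):

    stages_ms = {
        "IMU sample + sensor fusion": 2.0,
        "USB/wireless transfer": 1.0,
        "render (one 90 Hz frame)": 11.1,
        "scanout + panel response": 5.0,
    }
    print(sum(stages_ms.values()), "ms")  # ~19 ms; under ~20 ms is the
                                          # commonly cited comfort target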

  • It's not the PPI, it's the gyroscopic latency.

    GPUs aren't fast enough to render in lockstep with head motion; that's why the games that make you feel the least sick are on rails.

    John Carmack had some amazing ideas at Oculus about concentrating PPI at camera focal points to minimize the GPU load.

    No, this PPI development alone would make VR worse.

    • by robi5 ( 1261542 )

      Head motion follows physics; there's inertia and everything. And the head-movement direction can sometimes be anticipated, e.g. an enemy suddenly appearing on the right is bound to be followed by a sudden head turn in that direction. And as with camera image-stabilization systems, it'd be possible to render a larger area and let the headset make last-millisecond decisions on image translation; sure, it's not quite identical to proper 3D re-rendering, but it can yield sub-frame responsiveness.
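
      A sketch of that last-millisecond translation (a crude 2D stand-in for real reprojection; all names and numbers are made up): render with a margin, then crop using the newest head-pose reading just before scanout.

      import numpy as np

      def late_crop(big, out_w, out_h, yaw_err_deg, pitch_err_deg, px_per_deg):
          """Crop an oversized frame, offset by the head-pose error since render."""
          h, w = big.shape[:2]
          cx = (w - out_w) // 2 + int(yaw_err_deg * px_per_deg)
          cy = (h - out_h) // 2 + int(pitch_err_deg * px_per_deg)
          cx = max(0, min(cx, w - out_w))  # stay inside the rendered margin
          cy = max(0, min(cy, h - out_h))
          return big[cy:cy + out_h, cx:cx + out_w]

      # e.g. 5% margin per side, head turned 0.5 deg right since render:
      frame = np.zeros((1188, 1188, 3), np.uint8)
      view = late_crop(frame, 1080, 1080, 0.5, 0.0, px_per_deg=12)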
