Graphics Hardware

ARM's New CPU and GPU Will Power Mobile VR In 2017 (theverge.com) 38

An anonymous reader cites a story on The Verge: ARM, the company that designs the processor architectures used in virtually all mobile devices on the market, has used Computex Taipei 2016 to announce new products that it expects to see deployed in high-end phones next year. The Cortex-A73 CPU and Mali-G71 GPU are designed to increase performance and power efficiency, with a particular view to supporting mobile VR. ARM says that its Mali line of GPUs is the most widely used in the world, with over 750 million shipped in 2015. The new Mali-G71 is the first to use the company's third-generation architecture, known as Bifrost. The core allows for 50 percent higher graphics performance, 20 percent better power efficiency, and 40 percent more performance per square mm over ARM's previous Mali GPU. With scaling up to 32 shader cores, ARM says the Mali-G71 can match discrete laptop GPUs like Nvidia's GTX 940M. It's also been designed around the specific problems thrown up by VR, supporting features like 4K resolution, a 120Hz refresh rate, and 4ms graphics pipeline latency.
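The timing figures quoted above imply a tight per-frame budget. A quick back-of-envelope check, assuming only the numbers ARM quotes (120Hz refresh, 4ms graphics pipeline latency):

```python
# Back-of-envelope check of the VR timing figures quoted above.
# Assumptions: the 120 Hz refresh target and the 4 ms graphics
# pipeline latency that ARM quotes for the Mali-G71.

refresh_hz = 120
frame_budget_ms = 1000 / refresh_hz    # time available per frame
pipeline_latency_ms = 4                # quoted graphics pipeline latency

print(f"Frame budget at {refresh_hz} Hz: {frame_budget_ms:.2f} ms")
print(f"Headroom after pipeline latency: {frame_budget_ms - pipeline_latency_ms:.2f} ms")
```

At 120Hz each frame gets roughly 8.3ms, so a 4ms pipeline leaves only about 4.3ms of headroom for everything else in the motion-to-photon chain.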
This discussion has been archived. No new comments can be posted.

ARM's New CPU and GPU Will Power Mobile VR In 2017

Comments Filter:
  • by JMZero ( 449047 ) on Monday May 30, 2016 @10:07AM (#52210707) Homepage

    ..but they are not the present. Top-end processor/GPUs are just now getting fast enough for VR to work well. The next generation will be wireless connections to the PC doing the rendering. A fully integrated solution that doesn't suck is at least a couple generations away.

    • by Kjella ( 173770 )

      Depends on rendering complexity, I guess. If you can do Super Mario-style graphics well in VR, there could be a niche for that separate from photorealistic VR. Like how the Wii was a big hit despite having the weakest graphics, for more casual players and use as a gimmick.

    • I don't think you understand what an SoC is.

      The mobile GPU has one advantage over a desktop PC: more bandwidth between host memory and device memory. The GPU is basically connected to the system memory controller. With desktop GPUs it will always take time to DMA all your data over the PCI Express bus. That's great for precaching all your rendering data on the GPU, but it kind of limits how you can design your VR application. It will definitely evolve into Augmented Reality. You'll need a lot of memory bandwidth to co

      • by JMZero ( 449047 )

        Well, I actually understand this reasonably well. I'm a programmer, and I've done some 3d development - I've also spent some time with Cardboard, and I've had an Oculus DK2 since they launched.

        There's definitely special concerns around VR, and I'm sure a custom designed mobile architecture will be able to get some juice out of tight system integration... but you also just need to fill a bunch of polygons at very consistent, very high FPS. Huge dedicated boards in PCs are a lot better at this than tiny low power mobile chips.

        • by K. S. Kyosuke ( 729550 ) on Monday May 30, 2016 @11:56AM (#52211251)

          There's definitely special concerns around VR, and I'm sure a custom designed mobile architecture will be able to get some juice out of tight system integration... but you also just need to fill a bunch of polygons at very consistent, very high FPS. Huge dedicated boards in PCs are a lot better at this than tiny low power mobile chips.

          Alternatively, you could employ a low-latency eye tracker and selectively degrade the parts of the picture that are currently in the peripheral regions of the user's field of vision. That should work for all three of geometry processing (here you just add two more dimensions to your dynamic LOD), shading, and rasterization. That might turn out to work even better than single screen high resolution displays, since those have to display the whole FOV at high resolution because the machine doesn't know what you're looking at.

          • by JMZero ( 449047 )

            I know there's been some experimentation with this, and it might end up being an important thing for any setup (even without the performance stuff, it could be that cheesing some "focus effects" using eye position would make things more realistic).

            But I don't think we'll see it as a core feature (or a solution to general performance problems) within the next generation or two.

          • Alternatively, you could employ a low-latency eye tracker and selectively degrade the parts of the picture that are currently in the peripheral regions of the user's field of vision.

            Current solutions don't work at a sufficient speed or with a low enough latency.

            Come to think of it, even *HEAD*-tracking is suffering from latency and rendering speed problems, to the point of being one of the big bullet points of the current crop of VR research.

            Eye movement can be said to be too fucking fast for current-day VR solutions to be able to keep up with.
            (Also, currently no rendering system I know of is designed to handle variable resolution. But it's not my area of expertise. And also, the kind of til

            • You should only need a low-resolution camera to capture the eye, and those should be able to work at very high speeds. Processing the results is a slightly more complicated problem but probably still feasible, especially if the system can anticipate sudden moves from what is actually being displayed. I agree that total latency of the system is an issue, but I suspect it could be far more solvable than pushing enough computational power into mobile hardware to be able to get by without selective rendering. I
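The gaze-dependent rendering idea discussed in this subthread (often called foveated rendering) boils down to picking a level of detail from the angular distance between the gaze direction and the thing being drawn. A minimal illustrative sketch; the function name and the degree thresholds are hypothetical, not from any shipping VR SDK:

```python
import math

# Illustrative sketch of gaze-dependent level-of-detail selection
# ("foveated rendering") as discussed above. The thresholds below are
# made-up round numbers, not values from any real eye tracker or SDK.

def lod_for_eccentricity(gaze, point_dir):
    """Return a LOD level (0 = full detail, higher = coarser) based on
    the angle between the gaze direction and the direction to a point.
    Both inputs are unit-length 3D vectors."""
    dot = sum(g * p for g, p in zip(gaze, point_dir))
    # Clamp to guard against floating-point drift outside acos's domain.
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle_deg < 5:      # roughly the foveal region: full detail
        return 0
    elif angle_deg < 20:   # near periphery: reduced detail
        return 1
    else:                  # far periphery: render coarsely
        return 2

# Looking straight ahead at a point straight ahead -> full detail.
print(lod_for_eccentricity((0, 0, 1), (0, 0, 1)))        # 0
# A point about 30 degrees off-gaze -> far periphery.
print(lod_for_eccentricity((0, 0, 1), (0.5, 0, 0.866)))  # 2
```

The same eccentricity test could in principle drive shading rate and rasterization resolution as well as geometric LOD, which is the "two more dimensions to your dynamic LOD" point above.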

    • VR will never not suck. Even if the technology allowed photo-realistic experiences and was accessible to all, you still have the problem of cutting some of your senses off from your surroundings. It is a totally niche technology that is only viable in very controlled environments like at an amusement park. VR at home will result in lots of broken things, injuries, and friends/family pranking you. VR in public will be a lot worse with all sorts of stupid deaths and a huge increase in muggings.

      • by JMZero ( 449047 )

        It's already pretty cool, actually, even if you're so scared of "moving around your living room blind" that you're just sitting down and watching stuff.

        I mean... uh... have you tried it? If you haven't, you might not understand that pranking people in VR isn't a negative, it's super fun. Poking people while they're playing (especially if they are covered in spiders) makes the whole thing more immersive and social.

        I don't know how VR will play out, but it's already pretty entertaining - and there's no reas

        • Sure, just like how 3D TV caught on so well...

          • by JMZero ( 449047 )

            I didn't say it'll catch on - how would I know? People buy tons of stupid crap, and sometimes good ideas get buried. Maybe everyone will think like you do - "oh, I couldn't possibly be seen with something odd looking on my head" or "what if my living room is suddenly full of knives" and "I won't buy a holodeck until it accurately recreates smells"? So yeah, VR could completely flop, I agree on that.

            But that's not what we were talking about. You said it will always suck. I said it's already cool. And i

    • Lightweight backpack PC with a big lithium ion battery underneath.
    • Lightweight backpack PCs with lithium ion batteries underneath do away with the bothersome tether.
    • Wireless is not as easy as it sounds. You need to send the sensor data to the computer and then send high resolution, high frame rate video back to the HMD. That requires huge amounts of bandwidth, because compressing and decompressing the data would take time and consume resources on the wireless headset. If you end up having a power cord in the HMD, why not also put in a signal cable?

      Any extra latency in the signal chain from motion to photons is also really bad. Currently you don't get that much boost fro
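The "huge amounts of bandwidth" point above is easy to put numbers on. A rough estimate of the raw (uncompressed) video bandwidth a wireless link would need, assuming 24 bits per pixel, a 2160x1200 @ 90Hz panel (typical of 2016-era HMDs) and the 4K @ 120Hz case from the article summary:

```python
# Rough estimate of raw video bandwidth for a wireless HMD link.
# Assumptions: 24 bits per pixel, no compression; the resolutions and
# refresh rates are illustrative (2160x1200@90 is a typical 2016 HMD
# panel; 3840x2160@120 matches the figures in the article summary).

def raw_gbps(width, height, hz, bpp=24):
    """Raw video bandwidth in gigabits per second."""
    return width * height * hz * bpp / 1e9

print(f"2160x1200 @ 90 Hz:  {raw_gbps(2160, 1200, 90):.1f} Gbit/s")
print(f"3840x2160 @ 120 Hz: {raw_gbps(3840, 2160, 120):.1f} Gbit/s")
```

That works out to roughly 5.6 Gbit/s for the 2016-era panel and about 24 Gbit/s for the 4K @ 120Hz case, which is why uncompressed wireless video at these rates is hard and why compression (with its added latency) comes up at all.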

  • I for one am looking forward to the "mobile VR" phenomenon.

    I plan on finding a nice comfortable seat at an outdoor cafe and watching hipsters walk into traffic. This is my dream for the future. I even have a nice spot scoped out. It's on Division St near Paulina in Chicago's Wicker Park neighborhood. The food is good and the drinks aren't watered down. Plus there's a Starbucks on the corner that's a honeypot for hipsters. Any of you from Chicago know exactly where I'm talking about.

  • by LuxuryYacht ( 229372 ) on Monday May 30, 2016 @10:26AM (#52210805) Homepage

    Will there be Mali driver source, or a way to build for Linux without being tied to only one kernel version? Source is highly unlikely, but it would be nice to have some way to build Mali drivers for Linux for kernels other than the one version they pick, or if you require an RT kernel for your application. I'd even settle for a tool that modifies their binary so that you can at least build for the kernel version you need vs. the only one they chose to release a binary for.

  • 50 percent faster than their current turd is still a hopeless turd. Their GPU is nowhere near adequate for VR, except with 1990s poly counts.

  • ... it's a mildly good thing that mobile SoCs get support for high refresh rates and display bandwidth. A higher refresh is better, even just for displaying white-on-black scrolling text. If you plug a cell phone into a monitor and run a Linux-like desktop etc. (while charging from the same cable) it's at least a bit useful, and the increase in power use is not a problem. I'd even play the old Quake 3 (back then, games were games!)

    The only problem is high refresh monitors are also high margin ones, so you may well be
