Quest 3’s Mixed Reality Occlusion Is Now Higher Quality

Dynamic occlusion on Quest 3 is currently only supported in a handful of apps, but now it’s higher quality, uses less CPU and GPU, and is slightly easier for developers to implement.

Occlusion refers to the ability of virtual objects to appear behind real objects, a crucial capability for mixed reality headsets. When this works only for pre-scanned scenery it’s known as static occlusion; when the system also handles changing scenery and moving objects it’s known as dynamic occlusion.

Basic description of the general concept of occlusion from Meta.
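
At its core, occlusion boils down to a per-pixel depth comparison. As a loose illustration only (not Meta’s implementation), the decision for each pixel looks something like this:

```csharp
// Illustrative sketch, not Meta's code: the per-pixel test at the heart of
// depth-based occlusion. A virtual fragment is hidden whenever a real surface
// (wall, desk, hand) sits closer to the camera along that pixel's view ray.
public static class OcclusionConcept
{
    public static bool IsOccluded(float realDepthMeters, float virtualDepthMeters)
    {
        return realDepthMeters < virtualDepthMeters;
    }
}
```

Static occlusion runs this test against depth derived from the pre-scanned scene, while dynamic occlusion runs it against a live depth map generated every frame.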

Quest 3 launched with support for static occlusion but not dynamic occlusion. A few days later dynamic occlusion was released as an “experimental” feature for developers, meaning it couldn’t be shipped on the Quest Store or App Lab, and in December that restriction was dropped.

Developers implement dynamic occlusion on a per-app basis using Meta’s Depth API, which provides a coarse per-frame depth map generated by the headset. Integrating it is a relatively complex process, though: developers have to modify the shaders of every virtual object they want occluded, far from the ideal of a one-click solution. As such, very few Quest 3 mixed reality apps currently support dynamic occlusion.
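
For Unity developers, the C# side of the integration is comparatively small; the real work is the shader changes. Here’s a rough sketch of the managed side, with the caveat that the component and property names (EnvironmentDepthManager, OcclusionShadersMode) reflect our reading of Meta’s v67 Unity documentation and may differ in other SDK versions:

```csharp
using UnityEngine;
using Meta.XR.EnvironmentDepth; // Depth API namespace per Meta's v67 Unity docs

// Rough sketch of Depth API setup on the C# side; shaders on objects you want
// occluded still need Meta's occlusion keywords/macros added to them.
public class OcclusionSetup : MonoBehaviour
{
    private void Start()
    {
        // The Depth API requires Quest 3 class hardware.
        if (!EnvironmentDepthManager.IsSupported)
        {
            Debug.Log("Environment depth is not supported on this headset.");
            return;
        }

        // The manager retrieves the coarse per-frame depth map from the
        // headset and makes it available to occlusion-aware shaders.
        var depthManager = gameObject.AddComponent<EnvironmentDepthManager>();

        // Hard occlusion is a binary per-pixel cutoff; soft occlusion blends
        // edges at slightly higher GPU cost.
        depthManager.OcclusionShadersMode = OcclusionShadersMode.SoftOcclusion;
    }
}
```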

Another problem with dynamic occlusion on Quest 3 is that the depth map is very low resolution, so you’ll see an empty gap around the edges of objects and it won’t pick up details like the spaces between your fingers.

Footage from Meta.

With v67 of the Meta XR Core SDK, though, Meta has slightly improved the visual quality of the Depth API and significantly optimized its performance. The company says it now uses 80% less GPU and 50% less CPU, freeing up extra resources for developers.

v67 also makes the feature easier for developers to integrate: it adds support for occlusion in shaders built with Unity’s Shader Graph tool, and it refactors the Depth API code to be easier to work with.

I tried out the Depth API with v67 and can confirm it provides slightly higher quality occlusion, though it’s still very rough. But v67 has another trick up its sleeve that is more significant than the raw quality improvement.

UploadVR trying out Depth API with hand mesh occlusion in the v67 SDK.

The Depth API now has an option to exclude your tracked hands from the depth map so that they can be masked out using the hand tracking mesh instead. Some developers have been using the hand tracking mesh to do hands-only occlusion for a long time now, even on Quest Pro for example, and with v67 Meta provides a sample showing how to do this alongside the Depth API for occlusion of everything else.
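
In Unity terms, the new option amounts to a single flag on the depth manager, with a hand mesh rendered into the depth buffer doing the masking instead. A hedged sketch: RemoveHands is the property name as we understand the v67 EnvironmentDepthManager, and the depth-only hand material here is a hypothetical asset you’d author yourself, along the lines of Meta’s sample:

```csharp
using UnityEngine;
using Meta.XR.EnvironmentDepth;

// Sketch of hands-excluded occlusion: the coarse depth map occludes the
// environment, while the crisp tracked hand mesh masks out the hands.
public class HandMeshOcclusion : MonoBehaviour
{
    // Hypothetical depth-only material (writes depth, draws no color)
    // applied to the hand tracking mesh.
    [SerializeField] private Material depthOnlyHandMaterial;
    [SerializeField] private SkinnedMeshRenderer handMeshRenderer;

    private void Start()
    {
        var depthManager = FindObjectOfType<EnvironmentDepthManager>();

        // Exclude tracked hands from the environment depth map so the hand
        // mesh silhouette can mask them out at much higher quality.
        depthManager.RemoveHands = true;

        // Everything except the hands still falls back to the depth map.
        handMeshRenderer.material = depthOnlyHandMaterial;
    }
}
```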

I tested this out and found it delivers significantly higher quality occlusion for your hands, though it introduces some visual inconsistency at the wrist, where the system transitions back to depth map-based occlusion.

In comparison, Apple Vision Pro has dynamic occlusion only for your hands and arms, because it masks them out with segmentation, much as Zoom separates you from your background, rather than generating a depth map. That means the quality of hand and arm occlusion on Apple’s headset is significantly higher, though you’ll see peculiarities like objects you’re holding being covered by virtual content, effectively invisible in VR.
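
Per pixel, the two approaches can be contrasted like this (illustrative only, neither company’s actual code): depth-based occlusion compares distances, while segmentation-based compositing blends by a matte, and anything missing from the matte, like a held object, simply disappears behind virtual content:

```csharp
// Illustrative contrast between the two occlusion approaches, single color
// channel for brevity; not Apple's or Meta's actual implementation.
public static class OcclusionStyles
{
    // Quest 3 style: hide the virtual pixel when the real world is closer.
    public static bool DepthBasedOccluded(float realDepth, float virtualDepth)
    {
        return realDepth < virtualDepth;
    }

    // Vision Pro style for hands/arms: blend by a segmentation matte,
    // regardless of depth. A held object isn't part of the hand matte,
    // so it gets covered by virtual content.
    public static float SegmentationComposite(float virtualColor, float handColor, float handMatteAlpha)
    {
        return handMatteAlpha * handColor + (1f - handMatteAlpha) * virtualColor;
    }
}
```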

Quest developers can find Depth API documentation for Unity here and for Unreal here.

 
