Depth-based Occlusions

Place or move content behind real-world objects without breaking immersion.

Overview

Without an understanding of depth, virtual objects will sometimes remain unrealistically visible even when positioned behind real-world objects. For example, in the comparison below, the virtual Yeti without any occlusion is visible even as it moves behind the wall. With occlusion, the Yeti is properly hidden and appears much more believably part of the environment.

[Image: yeti_occlusion.gif, a side-by-side comparison of the Yeti with and without occlusion]

Occlusions are supported on any device that supports depth estimation.

Note

Currently, occlusions do not use LiDAR depth data, even on LiDAR-enabled devices.

Enabling Occlusions

Occlusions are achieved through the ScreenSpaceMesh OcclusionMode, an ARDepthManager setting that creates a flat mesh perpendicular to the camera's view direction and continually adjusts the mesh's vertices to match depth estimation outputs. For optimal performance, depth is calculated exclusively in rendering shaders.

The aspect ratio and resolution of the occlusion mesh will not necessarily match those of the device camera or screen (those values are determined by the underlying deep-learning model). In particular, there may be padding (empty space) between the left and right sides of the mesh and the screen edges, and no occlusion will occur in those regions.

The occlusion mesh has one vertex per pixel of the depth texture, plus additional vertices for whatever padding exists on the left and right sides.
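
To illustrate the technique, below is a simplified, CPU-side sketch of a screen-space occlusion mesh. It is not the ARDK implementation (which does this work in rendering shaders); the grid resolution and the source of the depth values are assumptions made for the example.

    using UnityEngine;

    // Simplified, CPU-side illustration of the ScreenSpaceMesh idea: a grid
    // mesh with one vertex per depth-texture pixel, whose vertices are
    // re-projected to the estimated depth each frame. The ARDK does the
    // equivalent work in rendering shaders for performance; the grid size and
    // the source of the depth values below are assumptions for illustration.
    [RequireComponent(typeof(MeshFilter))]
    public class ScreenSpaceMeshSketch : MonoBehaviour
    {
        public Camera arCamera; // assign the AR camera in the Inspector

        private Mesh _mesh;
        private Vector3[] _vertices;
        private const int Width = 256;  // depth-texture resolution; in the
        private const int Height = 144; // ARDK this is model-determined

        private void Start()
        {
            _mesh = new Mesh();
            _vertices = new Vector3[Width * Height];

            // Build two triangles per grid cell so the grid forms a surface.
            var triangles = new int[(Width - 1) * (Height - 1) * 6];
            int t = 0;
            for (int y = 0; y < Height - 1; y++)
            {
                for (int x = 0; x < Width - 1; x++)
                {
                    int i = y * Width + x;
                    triangles[t++] = i;
                    triangles[t++] = i + Width;
                    triangles[t++] = i + 1;
                    triangles[t++] = i + 1;
                    triangles[t++] = i + Width;
                    triangles[t++] = i + Width + 1;
                }
            }

            _mesh.vertices = _vertices;
            _mesh.triangles = triangles;
            GetComponent<MeshFilter>().mesh = _mesh;
            // In practice the mesh would be drawn with a depth-only material,
            // so it occludes virtual content without being visible itself.
        }

        // Call with the latest per-pixel depth estimates (row-major, meters).
        public void UpdateVertices(float[] depthValues)
        {
            for (int y = 0; y < Height; y++)
            {
                for (int x = 0; x < Width; x++)
                {
                    int i = y * Width + x;

                    // Re-project each depth pixel into world space so the mesh
                    // surface lines up with the observed real-world geometry.
                    var viewport = new Vector3(
                        x / (float)(Width - 1),
                        y / (float)(Height - 1),
                        depthValues[i]);
                    _vertices[i] = arCamera.ViewportToWorldPoint(viewport);
                }
            }

            _mesh.vertices = _vertices;
            _mesh.RecalculateBounds();
        }
    }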

Using ScreenSpaceMesh

To set up mesh occlusions in a scene that already has an ARDepthManager and an ARRenderingManager, simply follow these steps (a scripted equivalent appears after the list):

  1. Set the ARDepthManager’s OcclusionMode to ScreenSpaceMesh.

  2. To suppress a semantic channel, set the Semantic Segmentation Manager property to an ARSemanticSegmentationManager from the scene, then fill the Suppression Channel property with the name of the channel that should be suppressed.

    • Suppressing a channel pushes all pixels labeled with that class to the maximum depth. For example, if all of your virtual objects are placed above the ground, suppressing the ground channel reduces the chance that noisy or inaccurate depth output accidentally occludes them.

    • For the up-to-date list of available classes, check the Semantic Segmentation page.
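
The same setup can also be done from a script. The sketch below is a minimal assumption-laden example: the OcclusionTechnique, SemanticSegmentationManager, and SuppressionChannel member names are inferred from this page's Inspector labels, and the "ground" channel name comes from the suppression example above. Verify all of them against the API reference for your ARDK version.

    using UnityEngine;
    using Niantic.ARDK.Extensions; // ARDK's Unity helpers; adjust if needed

    // Scripted equivalent of the Inspector steps above. The member names
    // below mirror this page's Inspector labels and are assumptions; check
    // the API reference for your ARDK version.
    public class OcclusionSetup : MonoBehaviour
    {
        [SerializeField] private ARDepthManager _depthManager;
        [SerializeField] private ARSemanticSegmentationManager _semanticManager;

        private void Start()
        {
            // Step 1: use the screen-space mesh occlusion technique.
            _depthManager.OcclusionTechnique =
                ARDepthManager.OcclusionMode.ScreenSpaceMesh;

            // Step 2: suppress the "ground" channel (name taken from the
            // suppression example above) so noisy depth near the floor
            // cannot occlude objects placed above it. Assumed member names.
            _depthManager.SemanticSegmentationManager = _semanticManager;
            _depthManager.SuppressionChannel = "ground";
        }
    }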

The ScreenSpaceMesh occlusion technique provides the scaffolding required to use DepthMeshOccluder without extra setup:

  • It sets the camera’s near and far clipping planes to match the near and far distances of the underlying depth estimation algorithm, because the algorithm’s estimates are unreliable outside that range (see the sketch after this list).

  • On each frame update, it updates a texture with the latest interpolated depth map. That texture is what DepthMeshOccluder and ARDepthManager use to manipulate the mesh vertices.
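
If you provide this scaffolding yourself, the clipping-plane adjustment amounts to clamping the camera's clip range to the depth algorithm's valid range. A minimal sketch, assuming the near and far distances are read from the depth output (the parameter names here are illustrative):

    using UnityEngine;

    // Sketch of the clipping-plane adjustment that ScreenSpaceMesh performs
    // for you: clamp the camera's clip range to the range the depth
    // algorithm can actually estimate. The near/far values would be read
    // from the depth output (parameter names here are illustrative).
    public class MatchClippingPlanes : MonoBehaviour
    {
        public Camera arCamera;

        public void Apply(float depthNear, float depthFar)
        {
            arCamera.nearClipPlane = depthNear;
            arCamera.farClipPlane = depthFar;
        }
    }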

You can use DepthMeshOccluder independently if you’d prefer to set it up yourself. Both the ARDepthManager.OcclusionTechnique and DepthMeshOccluder interfaces allow toggling of the occlusion effect, if you don’t always want the effect active. To assist in debugging, both also allow you to swap between a few different color masks to visualize the otherwise invisible mesh being used to occlude scene objects. The following example shows a disparity color mask used on the right, and no color mask used on the left.

[Image: car_occlusion.jpeg, the occlusion mesh with no color mask (left) and a disparity color mask (right)]
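
Toggling the visualization at runtime might look like the sketch below. ToggleDebugVisualization follows the pattern used by other ARDK awareness managers, but treat the member name as an assumption and confirm it against the API reference.

    using UnityEngine;
    using Niantic.ARDK.Extensions;

    // Sketch of toggling the occlusion debug visualization at runtime.
    // ToggleDebugVisualization follows the pattern used by other ARDK
    // awareness managers, but treat the member name as an assumption.
    public class OcclusionDebugToggle : MonoBehaviour
    {
        [SerializeField] private ARDepthManager _depthManager;

        private bool _visualize;

        private void Update()
        {
            // Tap (or click in the editor) flips the color-mask overlay.
            if (Input.GetMouseButtonDown(0))
            {
                _visualize = !_visualize;
                _depthManager.ToggleDebugVisualization(_visualize);
            }
        }
    }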

ScreenSpaceMesh Occlusions and Meshing

If you’re using both ScreenSpaceMesh and Meshing in your scene, you may get undesired occlusion results. The ARDK’s Meshing features enable Unity mesh occlusions by default, and these can compete with the occlusion calculations from ARDepthManager’s ScreenSpaceMesh technique.

ScreenSpaceMesh occlusions work well with dynamic objects (for example, the user’s moving hands occluding virtual content), whereas Meshing occlusions work well with static objects in larger environments.

You can disable Meshing occlusions by disabling Dynamic Occlusion in the Unity Mesh Renderer for the mesh prefab your ARMeshManager is using.
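
The Dynamic Occlusion checkbox in the Mesh Renderer Inspector maps to Unity’s Renderer.allowOcclusionWhenDynamic flag, so you can also disable it from a script. A minimal sketch (the component and its placement on the mesh prefab are illustrative):

    using UnityEngine;

    // Disables Unity's Dynamic Occlusion from code. The "Dynamic Occlusion"
    // checkbox in the Mesh Renderer Inspector maps to
    // Renderer.allowOcclusionWhenDynamic. Attach this component to the mesh
    // prefab used by your ARMeshManager (component name is illustrative).
    public class DisableMeshOcclusion : MonoBehaviour
    {
        private void Awake()
        {
            foreach (var meshRenderer in GetComponentsInChildren<MeshRenderer>())
                meshRenderer.allowOcclusionWhenDynamic = false;
        }
    }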