How Google’s ARCore Depth API Tackles Occlusion with a Single Camera
Google has taken augmented reality to a new level, letting phones perceive 3D space and deliver convincing AR experiences without dedicated hardware such as depth-sensing cameras. Google's ARCore [1] Depth API can create depth maps with just a single camera. For those who don't know, ARCore is Google's augmented reality platform: a runtime and SDK for Android and iOS with which developers can easily do positional tracking, surface detection, and various other estimations. As you move your phone, the Depth API [2] captures multiple images of your surroundings and compares them to estimate the distance from the camera to each pixel, building a 3-dimensional structure of the scene and making it possible to project lifelike virtual objects into the current frame.

[Figure: ARCore visualizations]

How the Magic Happens

Google says, “Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera [3...
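To build intuition for how comparing images taken from slightly different positions yields distance, consider the simplest two-frame case: when the camera translates by a baseline between two frames, a nearby point shifts more pixels than a far one, and depth follows from triangulation. The sketch below is purely illustrative, not ARCore's actual depth-from-motion pipeline; the function name and all numbers are invented for the example.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point from its pixel shift (disparity) between two frames.

    Classic triangulation relation: z = f * b / d, where f is the focal
    length in pixels, b the camera translation in metres, and d the
    disparity in pixels. Larger shifts mean the point is closer.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be > 0 (zero shift means a point at infinity)")
    return focal_px * baseline_m / disparity_px


# A feature that shifts 50 px when the camera moves 5 cm (f = 500 px)
# sits at 500 * 0.05 / 50 = 0.5 m; a 10 px shift puts it at 2.5 m.
print(depth_from_disparity(500.0, 0.05, 50.0))  # 0.5
print(depth_from_disparity(500.0, 0.05, 10.0))  # 2.5
```

Comparing a depth value like this against a virtual object's distance, pixel by pixel, is what lets an AR renderer hide the parts of the object that sit behind real-world geometry.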