Include the following details (edit as applicable):
- Issue category: Real-time Mapping-Depth / ARDK Documentation
- Device type & OS version: Android
- Host machine & OS version: Windows
- Issue Environment: Dev Portal
- Xcode version: N/A (Android device, Windows host)
- ARDK version: 1.2.0
- Unity version: 2020.3.26f1
Description of the issue:
Goal: Replace the depth API with a different pre-trained depth estimation model.
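
To make the goal concrete, this is roughly the kind of replacement I have in mind (a hypothetical sketch using Unity Barracuda, with a made-up component and model asset; none of this is ARDK API):

```csharp
using Unity.Barracuda;
using UnityEngine;

public class ExternalDepthModel : MonoBehaviour
{
    // Hypothetical: any pre-trained monocular depth model imported as ONNX.
    public NNModel modelAsset;

    private IWorker _worker;

    private void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        _worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);
    }

    // Run the model on a camera image; the flat float array would stand in
    // for what ARDK currently exposes through frame.Depth.
    public float[] EstimateDepth(Texture2D cameraImage)
    {
        using (var input = new Tensor(cameraImage, channels: 3))
        {
            _worker.Execute(input);
            Tensor output = _worker.PeekOutput();
            float[] depth = output.ToReadOnlyArray();
            output.Dispose();
            return depth;
        }
    }

    private void OnDestroy()
    {
        _worker?.Dispose();
    }
}
```

The open question is where an output like this could be fed back into ARDK so the rest of the pipeline (e.g. DepthBufferProcessor) keeps working. That leads to my two questions:
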
- How does the DepthBuffer generate raw depth for each frame?
  After digging into the ARDK code a bit, I found that Awareness/DepthBufferProcessor.cs assigns its `buffer` variable from `frame.Depth`, but I couldn't trace where the raw depth is generated in the first place (I've put a sketch of how I'm currently reading the buffer after this list).
- How do I use the Playback feature?
  Does this feature let us provide a pre-recorded video as input instead of a live camera feed from the device? If so, how do I provide a pre-recorded video in Unity? I was able to set the Runtime Environment to Playback, but couldn't find any further information on how to supply the input video (see the second sketch below).
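
For the first question, this is how far I've gotten. The sketch below is how I'm reading the buffer, based on my understanding of the ARDK 1.2 samples (I'm assuming `IsDepthEnabled` and `FrameUpdated` are the right hooks; corrections welcome). What I can't find is what produces `frame.Depth` before this point:

```csharp
using Niantic.ARDK.AR;
using Niantic.ARDK.AR.ARSessionEventArgs;
using Niantic.ARDK.AR.Awareness.Depth;
using Niantic.ARDK.AR.Configuration;
using UnityEngine;

public class RawDepthLogger : MonoBehaviour
{
    private IARSession _session;

    private void Start()
    {
        _session = ARSessionFactory.Create();
        _session.FrameUpdated += OnFrameUpdated;

        var config = ARWorldTrackingConfigurationFactory.Create();
        config.IsDepthEnabled = true; // enable the depth estimation pipeline
        _session.Run(config);
    }

    private void OnFrameUpdated(FrameUpdatedArgs args)
    {
        // The same value DepthBufferProcessor copies into its `buffer`.
        IDepthBuffer depth = args.Frame.Depth;
        if (depth == null)
            return;

        // Raw per-pixel depth values, in meters.
        Debug.Log($"depth {depth.Width}x{depth.Height}, sample = {depth.Data[0]}");
    }

    private void OnDestroy()
    {
        if (_session != null)
        {
            _session.FrameUpdated -= OnFrameUpdated;
            _session.Dispose();
        }
    }
}
```
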
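For the second question, the only hook I've found so far is the runtime-environment argument when creating the session (assuming `ARSessionFactory.Create` accepts a `RuntimeEnvironment`, as the Virtual Studio samples suggest). What I don't see is where the pre-recorded input would be attached:

```csharp
using Niantic.ARDK;
using Niantic.ARDK.AR;
using UnityEngine;

public class PlaybackSessionTest : MonoBehaviour
{
    private IARSession _session;

    private void Start()
    {
        // Equivalent, as far as I can tell, to picking "Playback" in the
        // Runtime Environment dropdown; but nothing here points at a
        // recorded video or dataset, which is the part I'm missing.
        _session = ARSessionFactory.Create(RuntimeEnvironment.Playback);
    }

    private void OnDestroy()
    {
        _session?.Dispose();
    }
}
```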