The images acquired by the capturing cameras must be processed further before view synthesis and visualisation.

For RGB-only cameras, a depth estimation step delivers the required depth images. Since these must be of high quality, we have opted for MPEG’s Depth Estimation Reference Software (DERS): it delivers high quality, albeit at a long compute time per frame (in the order of minutes). Targeting real-time processing, we slightly modified the algorithm (fewer inter-process dependencies), optimised it further and ported it to a GPU (Graphics Processing Unit), which yields the method of “Graph-cuts Reference depth estimation in the GPU”, abbreviated as GoRG.
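To give a flavour of what such a depth estimation pipeline computes, the toy sketch below builds a matching-cost volume over candidate disparities and picks the cheapest one per pixel (winner-takes-all). This is *not* DERS/GoRG: those methods optimise a comparable cost volume globally with graph cuts, which is far more robust; the function name and parameters here are illustrative assumptions.

```python
import numpy as np

def wta_disparity(left, right, max_disp=8):
    """Toy winner-takes-all disparity search over a per-pixel
    absolute-difference cost volume. DERS/GoRG instead minimise a
    similar cost volume globally with graph cuts; this sketch only
    illustrates the cost-volume idea, not the actual algorithm."""
    # cost[d, y, x] = |left(y, x) - right(y, x - d)|
    # (np.roll wraps around at the border; acceptable for a toy example)
    cost = np.stack(
        [np.abs(left - np.roll(right, d, axis=1)) for d in range(max_disp)]
    )
    # Winner-takes-all: cheapest candidate disparity per pixel.
    return cost.argmin(axis=0)

# Synthetic check: a right view that is the left view shifted by 4 pixels
# should yield a constant disparity map of 4.
rng = np.random.default_rng(0)
left = rng.random((32, 64))
right = np.roll(left, -4, axis=1)
disp = wta_disparity(left, right)
print(disp.min(), disp.max())
```

A global optimiser such as graph cuts adds a smoothness term across neighbouring pixels, which is what makes the reference software slow on CPU and worth porting to GPU.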


Without prior knowledge (e.g. an approximation of the depth images), however, real-time processing cannot yet be reached. This can be overcome by using depth sensing devices, whose depth maps are used as priors and further filtered with GoRG (to better follow object silhouettes, remove undefined depth values/holes, etc.). Real-time processing is achieved on a multi-GPU platform; the number of GPUs depends on the number of input camera devices, but typically ranges between 4 and 8. Typical results are shown in the video in the nearby panel.
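One part of cleaning up a sensor depth prior is filling the undefined values (holes) mentioned above. The sketch below does this with a simple iterative neighbour-averaging pass in NumPy; it is a minimal stand-in for illustration only, assuming holes are encoded as 0, and does not reflect GoRG’s actual graph-cut-based filtering.

```python
import numpy as np

def fill_depth_holes(depth, hole_val=0.0):
    """Toy hole filling for a sensor depth map: repeatedly replace each
    undefined pixel with the mean of its valid 4-neighbours until no
    holes remain. (np.roll wraps at image borders; fine for a sketch.)"""
    d = depth.astype(float).copy()
    invalid = d == hole_val
    while invalid.any():
        acc = np.zeros_like(d)   # sum of valid neighbour depths
        cnt = np.zeros_like(d)   # number of valid neighbours
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            nbr = np.roll(d, shift, axis=axis)
            valid = ~np.roll(invalid, shift, axis=axis)
            acc += np.where(valid, nbr, 0.0)
            cnt += valid
        fill = invalid & (cnt > 0)
        d[fill] = acc[fill] / cnt[fill]
        invalid &= ~fill
    return d

# A 4-pixel hole surrounded by constant depth is restored to that depth.
depth = np.ones((8, 8))
depth[3:5, 3:5] = 0.0
filled = fill_depth_holes(depth)
print(filled[3:5, 3:5])
```

In practice such priors also need edge-aware filtering so filled values follow object silhouettes rather than bleeding across depth discontinuities, which is where the GoRG refinement comes in.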

After GoRG processing, the depth maps are of sufficient quality for use in the view synthesis process, as described in the visualisation section.
