About Us
Supported by the European Union’s Horizon 2020 research and innovation programme, HoviTron is an ambitious project that aims to develop a first-of-its-kind, truly 3D light-field vision system combining light-field image capture with a light-field head-mounted display (HMD).
This system will support holographic vision (i.e. with correct eye accommodation and vergence) for more immersive and efficient remote control in tele-robotic applications: the operator will experience greater visual comfort and less fatigue, allowing them to sustain work for longer periods than with today’s stereoscopic HMDs.
Timeline
From mid-2020 to mid-2021, we explored various multi-camera capture options, as well as real-time view synthesis, to set up a first Proof of Concept (PoC-1): a first user test validating Depth Image-Based Rendering (DIBR) for Holographic Vision (cf. “Hovi” in HoviTron) with perfect depth images (i.e. with synthetic content).
In 2021, a lot of effort was devoted to the real-time operation aspects, especially with regard to depth estimation and sensing.
At the end of 2021 and the beginning of 2022, the full pipeline is being set up for a second Proof of Concept (PoC-2) by mid-2022, further validating DIBR for Holographic Vision with real content (i.e. with possibly imperfect depth images).
From mid-2022 to end-2022, further fine-tuning will lead to an end-to-end HoviTron pipeline, ready to be shown as a fully functional prototype for tele-robotic operation (cf. “Tron” in HoviTron) with Holographic Vision (cf. “Hovi” in HoviTron).
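The DIBR step validated in PoC-1 and PoC-2 can be illustrated with a minimal sketch: each pixel of a source view is back-projected to 3D using its depth value, transformed into a virtual camera, and re-projected, with a z-buffer so nearer surfaces win. This is a generic, hypothetical illustration of the technique (pinhole cameras, shared intrinsics, no hole filling or blending), not the project’s actual RVS implementation.

```python
import numpy as np

def dibr_forward_warp(src_img, src_depth, K, R, t, out_shape):
    """Forward-warp a source view into a virtual view via its depth map.

    K: 3x3 intrinsics (assumed shared by both views, for simplicity);
    R, t: rotation/translation from source to virtual camera frame.
    A minimal sketch: nearest-pixel splatting with a z-buffer; a real
    DIBR pipeline would add hole filling, blending, and filtering.
    """
    h, w = src_depth.shape
    # Back-project every source pixel to a 3D point: X = depth * K^-1 [u, v, 1]^T.
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    pts = (np.linalg.inv(K) @ pix) * src_depth.reshape(-1)
    # Transform into the virtual camera and project back to pixels.
    proj = K @ (R @ pts + t[:, None])
    z = proj[2]
    u = np.round(proj[0] / z).astype(int)
    v = np.round(proj[1] / z).astype(int)
    # Z-buffered splat: for each target pixel, keep the nearest surface.
    out = np.zeros(out_shape + src_img.shape[2:], dtype=src_img.dtype)
    zbuf = np.full(out_shape, np.inf)
    colors = src_img.reshape(-1, *src_img.shape[2:])
    valid = (z > 0) & (u >= 0) & (u < out_shape[1]) & (v >= 0) & (v < out_shape[0])
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = z[i]
            out[v[i], u[i]] = colors[i]
    return out
```

With an identity pose (R = I, t = 0) the virtual view reproduces the source image exactly; with imperfect depth maps, as in PoC-2, pixels land at wrong positions, which is why depth refinement matters so much to the pipeline.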
Our Partners
The Laboratory of Image Synthesis and Analysis (LISA) of the Université Libre de Bruxelles (ULB), Belgium, works on various projects in Virtual Reality (VR), point-cloud acquisition using 3D LiDAR scanners, and 3D reconstruction for cultural heritage, as well as on image-based free-navigation applications, including optimized implementations for real-time processing.
LISA is an active contributor to the multi-camera MPEG-I immersive video activities and has played an essential role in MPEG-I’s technology development. LISA will adapt the MPEG-I test conditions to HoviTron’s specific working conditions, evaluating, improving, and accelerating MPEG-I’s technology towards HoviTron’s light-field visualization.
The Centre for Software Technology and Multimedia Systems (CITSEM) from the Universidad Politécnica de Madrid (UPM) specializes in Image and Video Technologies and High-Performance Computing in Embedded Systems.
Within HoviTron, UPM takes the lead in capturing light-field images and obtaining the high-resolution depth maps needed for view synthesis and light-field visualization.
Today’s Virtual and Augmented Reality (VR/AR) headsets typically use two flat-screen images to create a stereoscopic illusion of depth while ignoring our eyes’ natural need to focus at the correct distance. Swiss start-up CREAL develops breakthrough light-field display technology to solve this problem: CREAL’s light-field display projects genuinely three-dimensional, hologram-like images with near-retinal resolution, so each eye can change focus naturally between virtual objects in the image, just as it would in the real world.
Deutsches Zentrum für Luft- und Raumfahrt (DLR) is Germany’s national centre for aerospace, energy, and transportation research. DLR is responsible for planning and implementing the German space programme on behalf of the German federal government. The Institute of Robotics and Mechatronics at DLR’s Robotics and Mechatronics Centre develops a wide array of robots that enable humans to interact more safely and efficiently with their surrounding environments.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 951989.
The robotic hardware has been provided by DLR, while CREAL developed the light-field head-mounted display for Holographic Vision. DLR has also provided calibration software for conventional cameras, with extensions to light-field cameras.
Cross-fertilization between HoviTron and MPEG-I took place thanks to two partners of the HoviTron consortium: UPM focused mainly on depth estimation, while ULB developed MPEG-I’s Reference View Synthesizer (RVS, with manual), as well as some plenoptic-camera technology, e.g. the Reference Lenslet Converter (RLC) related to MPEG’s Lenslet Video Coding (LVC) exploration.
ULB has also invested considerable effort in light-field calibration with its Reference Plenoptic Virtual camera Calibrator (RPVC), while UPM has done extensive work (e.g. acceleration) on MPEG-I’s Depth Estimation Reference Software (DERS), as well as developing spatio-temporal filtering for the Azure Kinect cameras, a.k.a. the Kinect Refinement Tool (KiRT).