View synthesis based on temporal prediction via warped motion vector fields
Abstract
The demand for 3D content has grown in recent years as 3D displays have become widespread. View synthesis methods, such as depth-image-based rendering (DIBR), provide an efficient tool for 3D content creation and transmission, and are integrated into coding solutions for multiview video content such as 3D-HEVC. In this paper, we propose a view synthesis method that takes advantage of temporal and inter-view correlations in multiview video sequences. We use warped motion vector fields computed in reference views to obtain temporal predictions of a frame in a synthesized view and blend them with the DIBR synthesis. Our method is shown to yield an average gain of 0.42 dB when tested on several multiview sequences.
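The core operation summarized above — motion-compensating the previously synthesized frame with a warped motion vector field and blending the result with the DIBR synthesis — can be illustrated with a minimal NumPy sketch. The nearest-neighbour warping, the helper names, and the per-pixel weight map `alpha` are illustrative assumptions only; the paper defines its own motion-vector warping and blending rules.

```python
import numpy as np

def motion_compensate(prev_frame, mv_field):
    """Warp the previously synthesized frame with a per-pixel motion
    vector field (dy, dx), using nearest-neighbour fetching.

    prev_frame: (H, W) luma plane of the previous synthesized frame
    mv_field:   (H, W, 2) motion vectors warped into the virtual view
    """
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow each vector back to its source pixel, clamped to the frame.
    src_y = np.clip(np.rint(ys + mv_field[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + mv_field[..., 1]).astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]

def blend_synthesis(temporal_pred, dibr_frame, alpha):
    """Per-pixel blend of the temporal prediction with the DIBR synthesis.
    `alpha` is a hypothetical (H, W) weight map, e.g. favouring DIBR in
    disoccluded regions; the actual weighting is specified in the paper."""
    return alpha * temporal_pred + (1.0 - alpha) * dibr_frame
```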