Fusion of Global and Local Motion Estimation for Distributed Video Coding
Abstract
The quality of side information plays a key role in distributed video coding. In this paper, we propose a new approach that combines global and local motion compensation at the decoder side. The parameters of the global motion are estimated at the encoder using Scale Invariant Feature Transform (SIFT) features. These estimated parameters are sent to the decoder in order to generate a globally motion compensated side information. In parallel, a locally motion compensated side information is generated at the decoder based on motion compensated temporal interpolation of neighboring reference frames. Moreover, an improved fusion of global and local side information during the decoding process is achieved using the partially decoded Wyner-Ziv frame and the decoded reference frames. The proposed technique significantly improves the quality of the side information, especially for sequences containing strong global motion. Experimental results show that, in terms of rate-distortion performance, the proposed approach achieves a PSNR improvement of up to 1.9 dB for a GOP size of 2 and up to 4.65 dB for larger GOP sizes, with respect to the reference DISCOVER codec.
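As a rough illustration of the encoder-side global motion estimation step described above, the sketch below estimates a global motion model from SIFT feature matches between a reference frame and the current frame, then warps the reference frame with that model to obtain a globally motion compensated prediction. It is a minimal sketch assuming an OpenCV/NumPy implementation and a homography as the parametric model; the function names, the choice of model, and the parameter values are illustrative assumptions, not the authors' implementation. In the setting of the paper, only the estimated parameters would be transmitted, and the warping would take place at the decoder.

```python
# Illustrative sketch (not the paper's implementation): estimate global motion
# between a reference frame and the current frame from SIFT matches, then warp
# the reference frame to obtain a globally motion compensated prediction.
# Assumes OpenCV with SIFT support and NumPy; parameter values are guesses.
import cv2
import numpy as np

def estimate_global_motion(ref_gray, cur_gray, ratio=0.75):
    """Return a 3x3 homography mapping ref_gray onto cur_gray, or None."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_cur, des_cur = sift.detectAndCompute(cur_gray, None)
    if des_ref is None or des_cur is None:
        return None

    # Match descriptors and keep pairs passing Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_ref, des_cur, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return None

    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_cur[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust RANSAC estimation rejects matches lying on locally moving objects,
    # so the model captures the dominant (global) motion.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def global_side_information(ref_frame, H):
    """Warp the reference frame with the global motion model H."""
    h, w = ref_frame.shape[:2]
    return cv2.warpPerspective(ref_frame, H, (w, h))
```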