
iCNN: A Convolutional Neural Network for Fractional Interpolation in Video Coding

EasyChair Preprint no. 1398, version 2
10 pages, September 4, 2019


Motion compensated prediction has significantly contributed to reducing temporal redundancy in video coding by predicting the current frame from a list of previously reconstructed frames. The latest video coding standard, HEVC, uses DCT-based interpolation filters (DCTIF) to interpolate fractional pixels for more accurate motion compensated prediction. Although fixed interpolation filters have improved over successive standards, they cannot adapt to the diversity of video content. Inspired by super-resolution, we design iCNN, an interpolation Convolutional Neural Network for fractional interpolation in video coding. Our work also addresses two main obstacles to applying Convolutional Neural Networks to fractional interpolation in video coding: no ground-truth training set exists for fractional pixels, and integer pixels change after processing. As a result, this work achieves a 2.6% BD-rate reduction compared to the HEVC baseline.
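For context, the fixed DCTIF interpolation that the proposed network aims to improve upon can be illustrated with HEVC's standard 8-tap half-pel luma filter (coefficients [-1, 4, -11, 40, 40, -11, 4, -1], normalized by 64). The sketch below applies this filter to a 1-D row of integer pixels; the function name and edge-replication padding are illustrative choices, not taken from the paper.

```python
import numpy as np

# HEVC 8-tap DCTIF coefficients for the half-pel luma position (sum = 64).
DCTIF_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.int64)

def interpolate_half_pel(row):
    """Compute the half-pel sample between pixel i and i+1 for each i.

    Illustrative sketch: pads the row by edge replication so every output
    position has the 8 integer taps the filter needs (3 left, 4 right).
    """
    padded = np.pad(np.asarray(row, dtype=np.int64), (3, 4), mode="edge")
    out = np.array([np.dot(padded[i:i + 8], DCTIF_HALF)
                    for i in range(len(row))])
    # Round and normalize by 64 (right shift by 6), as in HEVC.
    return (out + 32) >> 6
```

On a constant signal the filter reproduces the constant (its taps sum to 64), and on a linear ramp it lands on the rounded midpoint, which is exactly the behavior a fixed filter provides regardless of content; the paper's point is that a CNN can adapt where such a fixed filter cannot.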

Keyphrases: deep learning, fractional interpolation, motion compensated prediction, video coding

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:

@booklet{EasyChair:1398,
  author = {Chi Do-Kim Pham and Jinjia Zhou},
  title = {iCNN: A Convolutional Neural Network for Fractional Interpolation in Video Coding},
  howpublished = {EasyChair Preprint no. 1398},
  year = {EasyChair, 2019}}