
Video Summarization: How to Use Deep-Learned Features Without a Large-Scale Dataset

EasyChair Preprint no. 450

6 pages. Date: August 24, 2018


This paper proposes a framework that combines deep-learned features with conventional machine learning models, in which the objective function is optimized by quadratic programming or quasi-Newton methods rather than by the variants of stochastic gradient descent used in end-to-end deep learning. First, a temporal segmentation algorithm based on a learning-to-rank scheme detects abrupt changes of frame appearance in a video sequence. Afterward, a peak-searching algorithm, statistics-sensitive non-linear iterative peak-clipping (SNIP), is employed to locate the local maxima of the rank-pooled video sequence, where each local maximum corresponds to a key frame of the video. Simulations show that the new approach outperforms the main state-of-the-art methods on four public video datasets.
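To make the peak-searching step concrete, the following is a minimal sketch of the SNIP idea applied to a 1-D score sequence. The frame scores below are hypothetical stand-ins for a rank-pooled response; the paper's actual feature extraction and pooling are not reproduced here. SNIP iteratively clips each sample to the average of its neighbours at growing half-windows, which flattens narrow peaks into a smooth baseline; subtracting that baseline and taking local maxima yields candidate key-frame positions.

```python
import numpy as np

def snip_baseline(y, m):
    """Estimate a slowly varying baseline via SNIP clipping.

    At each half-window p = 1..m, every interior sample is replaced by the
    minimum of itself and the mean of its neighbours p steps away, so peaks
    narrower than the window are progressively clipped into the baseline.
    """
    v = np.asarray(y, dtype=float).copy()
    n = len(v)
    for p in range(1, m + 1):
        clipped = v.copy()
        for i in range(p, n - p):
            clipped[i] = min(v[i], 0.5 * (v[i - p] + v[i + p]))
        v = clipped
    return v

def local_maxima(signal):
    """Indices strictly greater than both immediate neighbours."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]

# Hypothetical frame-level scores (e.g. a rank-pooled feature response).
scores = np.array([0.1, 0.2, 0.9, 0.3, 0.2, 0.8, 0.25, 0.1])
peaks = local_maxima(scores - snip_baseline(scores, m=2))
print(peaks)  # frames 2 and 5 stand out as candidate key frames
```

The half-window count `m` plays the role of a smoothing scale: larger values suppress narrow spurious peaks at the cost of merging nearby key frames.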

Keyphrases: CNN, keyframe selection, ranking machine, temporal evolution, video summarization

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:450,
  author = {Didik Purwanto and Yie-Tarng Chen and Wen-Hsien Fang and Wen-Chi Wu},
  title = {Video Summarization: How to Use Deep-Learned Features Without a Large-Scale Dataset},
  howpublished = {EasyChair Preprint no. 450},
  doi = {10.29007/21q3},
  year = {EasyChair, 2018}}