
CNN-based Classification of Illustrator Style in Graphic Novels: Which Features Contribute Most?

EasyChair Preprint no. 557

12 pages
Date: October 4, 2018

Abstract

Can graphic novel illustrators be classified using convolutional neural network (CNN) features evolved for classifying concepts in photographs? Assuming that basic features at lower network levels generically represent invariants of our environment, they should be reusable. However, features at what level of abstraction are characteristic of illustrator style? We tested transfer learning by classifying roughly 50,000 digitized pages from about 200 comic books of the Graphic Narrative Corpus (GNC) by illustrator. For comparison, we also classified Manga109 by book. We tested the predictive power of visual features by experimentally varying which of the mixed layers of Inception V3 was used to train classifiers. Overall, the top-1 test-set classification accuracy in the artist attribution analysis increased from 92% for mixed-layer 0 to over 97% when adding mixed layers higher in the hierarchy. Above mixed-layer 5, there were signs of overfitting, suggesting that texture-like mid-level vision features were sufficient. Experiments varying the input material show that page layout and coloring scheme are important contributors. Thus, stylistic classification of comics artists is possible by reusing pretrained CNN features, given only a limited amount of additional training material. We propose that CNN features are general enough to provide the foundation of a visual stylometry, potentially useful for comparative art history.
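The layer-wise analysis described above can be sketched with Keras, whose Inception V3 implementation names its inception blocks "mixed0" through "mixed10". The snippet below is a minimal illustration, not the authors' code: it truncates the network at a chosen mixed layer and applies global average pooling to obtain a fixed-length descriptor per page, which could then be fed to a downstream classifier. Here `weights=None` is used so the sketch runs without downloading ImageNet weights; the study's transfer-learning setup would use pretrained weights instead, and the random array merely stands in for scanned pages.

```python
import numpy as np
import tensorflow as tf

# Build Inception V3 without the classification head. weights=None avoids a
# weight download in this sketch; the transfer-learning setup in the paper
# would use ImageNet-pretrained weights (weights="imagenet").
base = tf.keras.applications.InceptionV3(
    weights=None, include_top=False, input_shape=(299, 299, 3))

def layer_features(layer_name):
    """Feature extractor truncated at the given mixed layer, with global
    average pooling to yield one fixed-length descriptor per input page."""
    pooled = tf.keras.layers.GlobalAveragePooling2D()(
        base.get_layer(layer_name).output)
    return tf.keras.Model(inputs=base.input, outputs=pooled)

# Stand-in for a small batch of digitized pages (2 images, 299x299 RGB).
pages = np.random.rand(2, 299, 299, 3).astype("float32")

f0 = layer_features("mixed0")(pages)  # low-level features
f5 = layer_features("mixed5")(pages)  # texture-like mid-level features
print(f0.shape, f5.shape)
```

Descriptors from different mixed layers could then be compared by training one classifier per layer (e.g. a logistic regression over the pooled features) and measuring attribution accuracy, mirroring the layer-wise comparison reported in the abstract.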

Keyphrases: classification, CNN-based classification, convolutional neural network, experimental study, graphic novels, stylometry

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:557,
  author = {Jochen Laubrock and David Dubray},
  title = {CNN-based Classification of Illustrator Style in Graphic Novels: Which Features Contribute Most?},
  howpublished = {EasyChair Preprint no. 557},
  doi = {10.29007/z3f1},
  year = {EasyChair, 2018}}