Few-Shot Image Classification Informed by Background Knowledge

Tags: background knowledge, few-shot learning, image classification, machine learning

Abstract:
Learning from limited examples is a challenging task for vision models trained for image classification. We hypothesise that this limitation can be overcome by integrating background knowledge into the vision model. This work employs a deep visual embedding model trained to map images into a vector space predefined by similarity vectors computed from a knowledge graph. The knowledge graph acts as the source of background knowledge, and the similarity vectors are the medium through which that knowledge is transferred to the vision model during training. We experimentally evaluate our method on two datasets, mini-ImageNet and the Stanford Dogs dataset, comparing against baselines for both few-shot and fine-grained few-shot image classification. WordNet serves as the source of background knowledge, from which inter-class similarities are computed. We discuss our findings and insights for future work in this talk.
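To make the core idea concrete, the following is a minimal sketch of how class-similarity vectors can be derived from a knowledge graph to define the target embedding space. It uses a tiny hand-built hypernym taxonomy and Wu-Palmer similarity as stand-ins; the actual work uses WordNet, and all names here (`TAXONOMY`, `CLASSES`, `similarity_vector`) are illustrative assumptions, not the authors' implementation.

```python
# Toy hypernym taxonomy as child -> parent edges (a stand-in for WordNet).
TAXONOMY = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal", "animal": "entity",
    "car": "vehicle", "truck": "vehicle", "vehicle": "entity",
}

def path_to_root(node):
    """Return the hypernym chain [node, parent, ..., 'entity']."""
    path = [node]
    while node in TAXONOMY:
        node = TAXONOMY[node]
        path.append(node)
    return path

def depth(node):
    # The root 'entity' has depth 1; deeper nodes are more specific.
    return len(path_to_root(node))

def wu_palmer(a, b):
    """Wu-Palmer similarity: 2 * depth(LCS) / (depth(a) + depth(b))."""
    ancestors_a = set(path_to_root(a))
    # Lowest common subsumer: the deepest ancestor shared by both nodes.
    lcs = max((n for n in path_to_root(b) if n in ancestors_a), key=depth)
    return 2 * depth(lcs) / (depth(a) + depth(b))

CLASSES = ["dog", "wolf", "cat", "sparrow", "car"]

def similarity_vector(cls):
    """Similarity of `cls` to every training class; these vectors
    define the target space the visual embedding model is trained to hit."""
    return [wu_palmer(cls, other) for other in CLASSES]
```

Under this sketch, semantically close classes (e.g. "dog" and "wolf") receive nearby target vectors, so the vision model is pushed to embed their images close together, which is the mechanism by which the background knowledge informs few-shot classification.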