
Improving Siamese Networks for One-Shot Learning Using Kernel-Based Activation Functions

EasyChair Preprint no. 893, version 3

15 pages
Date: August 19, 2020

Abstract

The lack of a large amount of training data has always been a constraining factor in solving many problems in machine learning, which makes one-shot learning one of its most intriguing ideas: learning the necessary objective information from one or only a few training examples. In neural networks, this learning process is generally accomplished through a proper objective function (loss function) and embedding extraction (architecture). In this paper, we discuss metric-based deep learning architectures for one-shot learning, such as Siamese neural networks, and present a method to improve their accuracy using Kafnets (kernel-based non-parametric activation functions for neural networks) by learning finer embeddings in relatively few epochs. Using kernel activation functions, we achieve strong results that exceed ReLU-based deep learning models in terms of embedding structure, loss convergence, and accuracy. The project code and results can be found on GitHub: https://github.com/shruti-jadon/Siamese-Network-for-One-shot-Learning.

Keyphrases: computer vision, decision boundary, kernels, machine learning, one-shot learning
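
The abstract names two moving parts: a Siamese (twin) embedding network and kernel activation functions (KAFs) used in place of ReLU. The sketch below is a minimal, illustrative PyTorch rendering of that combination, not code from the linked repository; the dictionary size and range, the bandwidth rule of thumb, the layer shapes, and the contrastive loss are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KAF(nn.Module):
    """Kernel activation function: a per-neuron mixture of Gaussian kernels
    over a fixed dictionary, with learnable mixing coefficients (in the
    spirit of Scardapane et al., "Kafnets"). Hyperparameters are illustrative."""

    def __init__(self, num_features: int, dict_size: int = 20, bound: float = 3.0):
        super().__init__()
        # Fixed dictionary: dict_size points uniformly spaced in [-bound, bound].
        d = torch.linspace(-bound, bound, dict_size)
        self.register_buffer("d", d.view(1, 1, -1))
        step = 2.0 * bound / (dict_size - 1)
        # Kernel bandwidth tied to the grid spacing (assumed rule of thumb).
        self.gamma = 1.0 / (2.0 * step ** 2)
        # Learnable mixing coefficients, one set per feature.
        self.alpha = nn.Parameter(0.3 * torch.randn(1, num_features, dict_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); broadcast each activation against the dictionary.
        k = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.d) ** 2)
        return (k * self.alpha).sum(dim=-1)

class SiameseEmbedder(nn.Module):
    """Twin network: the same weights embed both inputs of a pair."""

    def __init__(self, in_dim: int = 784, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), KAF(256),
                                 nn.Linear(256, emb_dim))

    def forward(self, a, b):
        return self.net(a), self.net(b)

def contrastive_loss(za, zb, same, margin=1.0):
    # same = 1 for matching pairs, 0 for non-matching pairs.
    dist = F.pairwise_distance(za, zb)
    return (same * dist.pow(2)
            + (1.0 - same) * (margin - dist).clamp(min=0).pow(2)).mean()

# Smoke test on random pairs.
model = SiameseEmbedder()
a, b = torch.randn(8, 784), torch.randn(8, 784)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(*model(a, b), same)
loss.backward()
```

The design choice worth noting is that the kernel dictionary stays fixed while only the per-neuron mixing coefficients alpha are learned, so each unit can shape its own non-linearity at a cost of just dict_size extra parameters per feature.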

BibTeX entry
BibTeX does not have an entry type for preprints; the following workaround produces the correct reference:

@Booklet{EasyChair:893,
  author = {Shruti Jadon and Aditya Acrot Srinivasan},
  title = {Improving Siamese Networks for One-Shot Learning Using Kernel-Based Activation Functions},
  howpublished = {EasyChair Preprint no. 893},
  year = {EasyChair, 2020}}