Meta-Learning with Differentiable Convex Optimization

EasyChair Preprint 10372

16 pages · Date: June 10, 2023

Abstract

Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. Our code is available online.
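To make the idea concrete, below is a minimal sketch (not the paper's implementation) of a differentiable convex base learner in PyTorch. MetaOptNet solves a multi-class SVM with a QP solver and differentiates through its optimality conditions; here we substitute ridge regression, whose dual solution has a closed form, to illustrate how fitting a linear predictor on the support set can sit inside the meta-learning loop while gradients flow back into the embeddings. All names (ridge_base_learner, lam) are illustrative.

import torch

def ridge_base_learner(support, support_onehot, query, lam=1.0):
    """Fit a linear predictor on the support set in the dual and
    evaluate it on the query set; every step is differentiable
    with respect to the embeddings.

    support:        (n, d) support-set embeddings
    support_onehot: (n, c) one-hot support labels
    query:          (m, d) query-set embeddings
    """
    n = support.shape[0]
    # Dual formulation: the n x n Gram matrix replaces the d x d one,
    # so the cost scales with the (small) support size rather than
    # the embedding dimension, which is what permits high-dimensional
    # embeddings at modest overhead.
    gram = support @ support.t()                       # (n, n)
    alpha = torch.linalg.solve(
        gram + lam * torch.eye(n, device=support.device),
        support_onehot,
    )                                                  # (n, c)
    # Query logits under the fitted linear classification rule.
    return query @ support.t() @ alpha                 # (m, c)

# Toy usage: gradients flow through the base learner into the
# embeddings, so a feature extractor can be meta-trained end to end.
emb = torch.randn(5, 64, requires_grad=True)           # 5 support points
labels = torch.eye(5)[torch.tensor([0, 1, 0, 1, 0])][:, :2]  # 2 classes
q = torch.randn(3, 64)
logits = ridge_base_learner(emb, labels, q)
logits.sum().backward()
print(emb.grad.shape)                                  # torch.Size([5, 64])

In the paper's actual method, the closed-form solve is replaced by a QP for the SVM dual, and implicit differentiation of its KKT conditions plays the role that torch.linalg.solve's built-in differentiability plays here.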

Keyphrases: meta-learning, convex optimization, few-shot learning

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:10372,
  author    = {Yue Gong},
  title     = {Meta-Learning with Differentiable Convex Optimization},
  howpublished = {EasyChair Preprint 10372},
  year      = {EasyChair, 2023}}