How Many Bits Does it Take to Quantize Your Neural Network?

EasyChair Preprint no. 1000, version history

Version 1 (May 25, 2019, 9 pages)

Version 2 (September 12, 2019, 8 pages): We provide an example of the non-monotonicity of robustness and improve our experimental evaluation, which now includes a comparison between encodings and against a recently published gradient descent–based method for quantized networks.

Version 3 (February 24, 2020, 18 pages): We provide a comparison between various SMT solvers.

Keyphrases: adversarial attacks, bit-vectors, quantized neural networks, SMT solving

BibTeX entry
BibTeX does not have a dedicated entry type for preprints. The following entry is a workaround that produces the correct reference:
@booklet{EasyChair:1000,
  author       = {Mirco Giacobbe and Thomas A. Henzinger and Mathias Lechner},
  title        = {How Many Bits Does it Take to Quantize Your Neural Network?},
  howpublished = {EasyChair Preprint no. 1000},
  year         = {EasyChair, 2020}}