Tags: activation function, deep neural network, interval analysis, linear approximation, linear bounding, neural network verification, neural networks and robustness verification
Abstract:
Linear approximations of nonlinear functions have a wide range of applications such as rigorous global optimization and, recently, verification problems involving neural networks. In the latter case, a linear approximation must be hand-crafted for the neural network's activation functions. This hand-crafting is tedious, potentially error-prone, and requires an expert to prove the soundness of the linear approximation. Such a limitation is at odds with the rapidly advancing deep learning field -- current verification tools either lack the necessary linear approximation or perform poorly on neural networks with state-of-the-art activation functions. In this work, we consider the problem of automatically synthesizing sound linear approximations for a given neural network activation function. Our approach is example-guided: we develop a procedure to generate examples, and then we leverage machine learning techniques to learn a (static) function that outputs linear approximations. However, since the machine learning techniques we employ do not come with formal guarantees, the resulting synthesized function may produce linear approximations with violations. To remedy this, we bound the maximum violation using rigorous global optimization techniques, then adjust the synthesized linear approximation accordingly to ensure soundness. We evaluate our approach on several neural network verification tasks. Our evaluation shows that the automatically synthesized linear approximations greatly improve the accuracy (i.e., in terms of the number of verification problems solved) compared to hand-crafted linear approximations in state-of-the-art neural network verification tools.
Example-Guided Synthesis of Linear Approximations for Neural Network Verification
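To make the soundness-adjustment step from the abstract concrete, the following is a minimal sketch (not the paper's implementation): given a candidate linear upper bound u(x) = a*x + b for an activation f on an input interval, we estimate the maximum violation max_x f(x) - u(x) and shift the offset b upward by that amount so the adjusted bound holds everywhere on the interval. The function and parameter names are illustrative, and a dense grid search stands in for the rigorous global optimization the paper relies on.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adjust_upper_bound(f, a, b, lo, hi, samples=10001):
    """Shift the candidate upper bound a*x + b so that f(x) <= a*x + b' on [lo, hi].

    The grid maximum below is a non-rigorous stand-in for the rigorous global
    optimization (e.g., interval analysis) described in the abstract; it only
    illustrates the adjustment idea.
    """
    xs = np.linspace(lo, hi, samples)
    violation = f(xs) - (a * xs + b)   # positive wherever the candidate bound is violated
    delta = max(violation.max(), 0.0)  # largest observed violation
    return a, b + delta                # shifted offset makes the bound hold on the samples

# Example: adjusting a hypothetical synthesized upper bound for sigmoid on [-2, 3]
a, b = adjust_upper_bound(sigmoid, 0.2, 0.4, -2.0, 3.0)
print(a, b)
```

An analogous adjustment applies to lower bounds by subtracting the maximum violation of f(x) - l(x) in the other direction; in the paper's setting the violation bound comes from a sound global optimizer, so the shifted approximation is provably sound rather than sound only on sampled points.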