Tags: multigrid, preconditioners, and stochastic gradient descent
Abstract:
This presentation discusses recent results on using a machine learning approach to optimize the parameters of the multigrid method. As parameters of the multigrid method we consider the restriction/prolongation operators and the relaxation parameters of the pre- and post-smoothing iterative processes. The main idea of the proposed method is to represent one iteration of the multigrid method as a computational graph in which every node is an elementary operation (e.g., a matrix-vector product or a Galerkin projection) that can be differentiated with respect to the parameters. We then introduce a stochastic functional that is an unbiased estimate of an upper bound on the spectral radius of the multigrid iteration matrix. This lets us state a minimization problem: find the parameters of the multigrid method for which convergence is as fast as possible. To solve this minimization problem we use a modern modification of stochastic gradient descent; the stochastic gradient of the proposed functional can be computed with an automatic differentiation tool such as Autograd. Although the computational cost of the procedure in its current form is very high, it shows that even for simple problems we can find better parameters than those of standard multigrid methods. The developed framework can be viewed as a numerical tool for assessing how close to optimal the commonly used parameters are. The development of fast (surrogate) models for computing these optimal parameters is a topic of future research.
Can we do better? Using machine learning to optimize multigrid methods
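
The abstract gives no code, but the pipeline it describes can be illustrated concretely. Below is a minimal sketch in Python with Autograd (the tool the abstract names), assuming a 1D Poisson model problem, a two-grid cycle with weighted Jacobi pre- and post-smoothing, a restriction fixed to half the transposed prolongation, and Gaussian probe vectors, for which the sampled functional is an unbiased estimate of the squared Frobenius norm of the iteration matrix, an upper bound on its squared spectral radius. All names (iteration_matrix_action, B_side, etc.) and the specific parameterization are illustrative assumptions, not the presentation's actual setup.

```python
# A minimal sketch of the described pipeline, not the authors' actual code.
# Assumptions: 1D Poisson model problem, two-grid cycle with weighted Jacobi
# smoothing, restriction fixed as R = 0.5 * P^T, and probes z ~ N(0, I),
# so that E ||C z||^2 = ||C||_F^2, which upper-bounds rho(C)^2.
import autograd.numpy as np
from autograd import grad
import numpy as onp  # plain NumPy for constructing constant matrices

nc = 15                      # number of interior coarse-grid points
n = 2 * nc + 1               # number of interior fine-grid points
A = 2.0 * onp.eye(n) - onp.eye(n, k=1) - onp.eye(n, k=-1)  # 1D Laplacian

# Fixed 0/1 "basis" matrices so that P = w_side * B_side + w_center * B_center
# is a differentiable function of the stencil weights.
B_side, B_center = onp.zeros((n, nc)), onp.zeros((n, nc))
for j in range(nc):
    B_center[2 * j + 1, j] = 1.0
    B_side[2 * j, j] = 1.0
    B_side[2 * j + 2, j] = 1.0

def iteration_matrix_action(params, Z):
    """Apply the two-grid error-propagation matrix C(params) to columns of Z."""
    w_side, w_center, omega = params[0], params[1], params[2]
    P = w_side * B_side + w_center * B_center          # prolongation
    R = 0.5 * P.T                                      # restriction
    Ac = np.dot(R, np.dot(A, P))                       # Galerkin coarse operator
    d_inv = 1.0 / np.diag(A)
    smooth = lambda E: E - omega * d_inv[:, None] * np.dot(A, E)
    E = smooth(Z)                                      # pre-smoothing
    E = E - np.dot(P, np.linalg.solve(Ac, np.dot(R, np.dot(A, E))))  # coarse correction
    return smooth(E)                                   # post-smoothing

def loss(params, Z):
    # Monte Carlo estimate of E ||C z||^2 = ||C||_F^2 >= rho(C)^2.
    E = iteration_matrix_action(params, Z)
    return np.mean(np.sum(E ** 2, axis=0))

grad_loss = grad(loss)
params = np.array([0.5, 1.0, 2.0 / 3.0])  # linear interpolation, omega = 2/3
for step in range(200):
    Z = onp.random.randn(n, 32)                    # fresh Gaussian probes each step
    params = params - 0.01 * grad_loss(params, Z)  # plain SGD step
print(params)
```

Parameterizing P through fixed basis matrices keeps every node of the cycle differentiable with respect to the stencil weights, so Autograd can propagate gradients through the whole iteration, including the coarse solve via np.linalg.solve.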