CM2022: 17TH COPPER MOUNTAIN CONFERENCE ON ITERATIVE METHODS
PROGRAM FOR WEDNESDAY, APRIL 6TH


09:00-11:05 Session 7A: Machine Learning for Iterative Algorithms
09:00
GNNs: An AI/ML metaphor we can use for iterative methods on unstructured meshes

ABSTRACT. In the popular mind, neural networks (NNs) are fed some kind of fixed-size data, like sliding windows on an image. This renders many common NN architectures unsuitable for many unstructured grid calculations, including sparse linear algebra. Graph neural networks (GNNs) offer an alternative NN architecture, with fixed-size NNs acting locally on the nodes and edges of the mesh and variable-size update functions moving data between them. This node/edge framework naturally lends itself to expressing the ideas of iterative methods on unstructured meshes. To illustrate this point, we'll show how several common iterative methods map onto GNNs and highlight some of the ways GNNs are being used (or could be used) in the iterative methods community.
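As a concrete illustration of the node/edge viewpoint (an assumed toy example in Python, not the speaker's framework), one weighted-Jacobi sweep can be written as message passing over the sparsity graph of $A$: each edge $(i,j)$ sends $a_{ij}x_j$ to node $i$, and each node combines the incoming sum with its diagonal entry and right-hand side.

    import numpy as np

    # Sketch only: a weighted-Jacobi sweep expressed as message passing on the
    # sparsity graph of A (edges carry off-diagonal products, nodes do the update).
    def jacobi_message_passing(edges, diag, b, x, omega=2.0 / 3.0):
        """edges: list of (i, j, a_ij) for the off-diagonal nonzeros of A."""
        msg = np.zeros_like(b)
        for i, j, a_ij in edges:                 # edge update: j sends a_ij * x_j to i
            msg[i] += a_ij * x[j]
        # node update: x_i <- x_i + omega * (b_i - (A x)_i) / a_ii
        return x + omega * (b - msg - diag * x) / diag

    # toy 1D Poisson matrix given as node (diagonal) and edge (off-diagonal) data
    n = 5
    diag = 2.0 * np.ones(n)
    edges = [(i, i + 1, -1.0) for i in range(n - 1)] + [(i + 1, i, -1.0) for i in range(n - 1)]
    x, b = np.zeros(n), np.ones(n)
    for _ in range(50):
        x = jacobi_message_passing(edges, diag, b, x)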

09:25
Learning Aggregates and Interpolation for Algebraic Multigrid
PRESENTER: Nicolas Nytko

ABSTRACT. Algebraic multigrid solvers are among the most efficient methods for finding solutions to large, sparse linear systems of equations, such as those arising from the discretization of elliptic PDEs. Their implementation, however, often relies on constructing a coarse grid and transfer operators through the use of heuristics or other approximations. Consequently, their overall convergence depends on a judicious selection of parameters in these heuristics. In this talk, we evaluate the use of neural networks to select a coarse grid and transfer operators for isotropic and anisotropic diffusion problems. We show how graph neural networks can be used to output a tentative set of node groupings, followed by interpolation construction analogous to smoothed-aggregation multigrid. Difficulties in training such neural networks due to the lack of gradient information are addressed through the use of genetic evolution strategies. Finally, performance of the learned multigrid solver is compared to off-the-shelf methods from PyAMG.
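For context, the off-the-shelf PyAMG baseline mentioned in the abstract can be set up in a few lines; the sketch below is a plain smoothed-aggregation solve on a model Poisson problem (not the learned solver), showing the interface the learned aggregates and interpolation would plug into.

    import numpy as np
    import pyamg

    # Off-the-shelf smoothed-aggregation AMG from PyAMG on a model problem
    # (baseline only, not the learned solver described in the talk).
    A = pyamg.gallery.poisson((200, 200), format='csr')   # 2D Poisson, 5-point stencil
    ml = pyamg.smoothed_aggregation_solver(A)             # heuristic aggregation + interpolation
    b = np.random.rand(A.shape[0])
    residuals = []
    x = ml.solve(b, tol=1e-8, residuals=residuals)
    print(ml)                                             # hierarchy and operator complexity
    print("iterations:", len(residuals) - 1)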

09:50
Learning an Algebraic Multigrid Prolongation Operator Using a Modified Graph Neural Network Architecture
PRESENTER: Nicholas Moore

ABSTRACT. This work develops a suite of new graph neural network machine learning architectures that generate data-driven prolongators for use in Algebraic Multigrid (AMG). Algebraic Multigrid is a powerful and common technique for solving large, sparse linear systems. Its effectiveness is problem dependent and heavily depends on the choice of the prolongation operator, which interpolates the coarse mesh results onto a finer mesh. Previous work has used recent developments in graph neural networks to learn a prolongation operator from a given coefficient matrix. We expand on this work by exploring architectural enhancements of graph neural networks. A new method for generating a training set is developed which more closely aligns to the test set. Asymptotic error reduction factors are compared on a test suite of 3-dimensional Poisson problems with varying degrees of element stretching. Results show modest improvements in asymptotic error factor over both commonly chosen baselines and learning methods from previous work.

10:15
Deep Learning in Adaptive Domain Decomposition Methods
PRESENTER: Axel Klawonn

ABSTRACT. The convergence rate of domain decomposition methods is generally determined by the eigenvalues of the preconditioned system. For second-order elliptic partial differential equations, coefficient discontinuities with a large contrast can lead to a deterioration of the convergence rate. A remedy can be obtained by enhancing the coarse space with elements, which are often called constraints, that are computed by solving small eigenvalue problems on portions of the interface of the domain decomposition, i.e., edges in two dimensions or faces and edges in three dimensions. In general, it is difficult to predict where these constraints have to be computed, i.e., on which edges or faces. Here, a machine learning based strategy using neural networks is suggested to predict the geometric location of these edges or faces in a preprocessing step. This reduces the number of eigenvalue problems that have to be solved during the iteration. Numerical experiments for model problems and realistic microsections using regular decompositions as well as those from graph partitioners are provided, showing very promising results.

10:40
Learning optimized domain decomposition
PRESENTER: Ali Taghibakhshi

ABSTRACT. In this study, we aim to improve the performance of Restricted Additive Schwarz (RAS) methods using Graph Convolutional Neural Networks (GCNNs) and unsupervised learning. Our approach employs GCNNs to learn optimal modifications to the subdomain matrices within a RAS preconditioner, akin to those from optimized Schwarz methods, based on finite-element discretizations on arbitrary meshes. A key advantage of GCNNs in this setting is that the learned network can be applied to arbitrarily larger problems, keeping the computational cost linear in the problem size. The performance of the learned network is compared with both RAS and classical optimized Schwarz algorithms.

09:00-11:05 Session 7B: Eigenvalue Solution Methods
09:00
Eigenvalue solution via the use of a single random vector

ABSTRACT. In this talk, we show the design of a reliable and efficient eigensolver based on the use of a single random vector in an eigenvalue detection scheme. Given a region of interest, randomized estimators applied to a spectral projector are used to detect the existence of eigenvalues. The reliability of the estimators with a single random vector is studied so as to obtain a robust threshold for eigenvalue detection. This is then combined with repeated domain partitioning to find eigenvalues to a desired accuracy. Preconditioned Krylov subspace methods are used to solve multiple shifted linear systems in the eigenvalue detection scheme, and Krylov subspaces are reused across shifts. We also show how another randomized strategy can be used to obtain eigenvectors reliably with little extra cost.
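To give a sense of the underlying estimator, the toy sketch below (an assumed example: the spectral projector is formed explicitly from a dense eigendecomposition, whereas the talk applies it implicitly through preconditioned shifted solves) uses a single Rademacher vector $z$, for which $z^T P z$ is an unbiased estimate of the number of eigenvalues in the region of interest.

    import numpy as np

    # Toy illustration: detect eigenvalues in [a, b] with a single Rademacher vector
    # applied to the spectral projector P. Here P is built explicitly; the talk
    # applies it implicitly via shifted linear solves.
    rng = np.random.default_rng(0)
    n = 200
    A = np.diag(np.linspace(0.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
    A = (A + A.T) / 2                                # symmetric test matrix

    a, b = 2.0, 2.5                                  # region of interest
    w, V = np.linalg.eigh(A)
    mask = (w >= a) & (w <= b)
    P = V[:, mask] @ V[:, mask].T                    # spectral projector onto [a, b]

    z = rng.choice([-1.0, 1.0], size=n)              # single Rademacher vector
    print("estimate:", z @ P @ z, "true count:", mask.sum())  # E[z^T P z] = trace(P)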

09:25
On the computation of dominant eigenpairs of a matrix valued linear operator
PRESENTER: Carmela Scalone

ABSTRACT. The topic of this talk is a numerical method to compute the rightmost eigenpairs of a linear matrix-valued operator, under the a priori assumption, common to several applications, that the corresponding eigenmatrix has quickly decaying singular values. For this reason, the search for approximate eigensolutions is constrained to a low-rank manifold. By solving a suitable ordinary differential equation, we are able to approximate the rightmost eigenpair of the linear operator. After analyzing the behaviour of this ODE on the whole space, we conclude that, under generic assumptions, when the rightmost eigenvalue is simple and real, the solution converges globally to its leading eigenmatrix. After that, we project the differential equation onto a low-rank manifold of prescribed rank. The projected operator is nonlinear, and this makes the analysis more difficult. Finally, we propose two explicit numerical methods. The numerical experiments show that the approach is effective and competitive.

09:50
Low-Rank Stopping Criteria for Singular Value Decompositions

ABSTRACT. Iterative singular value decompositions (SVD) are increasingly important for low rank approximations of large matrices. However, many low rank constraints do not fully describe the two parameters required by most SVD methods: the desired rank and residual norm accuracy. We propose new algorithms to effectively handle these ill-posed problems. Specifically, we discuss criteria built specifically for singular value thresholding and spectrum gap finding.

10:15
A Restricted SVD based CUR Decomposition for matrix triplets

ABSTRACT. We propose a restricted SVD based CUR (RSVD-CUR) decomposition for matrix triplets $(A, B, G)$. Given matrices $A$, $B$, and $G$ of compatible dimensions, such a decomposition provides a coordinated low-rank approximation of the three matrices in terms of a subset of their rows and columns. We select the subset of rows and columns of the original matrices using the discrete empirical interpolation method (DEIM) on the orthogonal and nonsingular matrices from the restricted singular value decomposition of the matrix triplet. We investigate the connections between a DEIM-based RSVD-CUR approximation and a DEIM-based CUR factorization as well as a DEIM-based generalized CUR decomposition. The RSVD-CUR factorization may be suitable for applications where we are interested in approximating one data matrix relative to two other given matrices. Two key applications that we discuss include multi-view dimension reduction and data perturbation problem of the form $A_E=A + BFG$, where $BFG$ is a non-white noise matrix. In numerical experiments, we show the advantages of the new method over the standard CUR approximation for these applications.
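As background for the index-selection step (a standard DEIM sketch under the usual formulation, not the full RSVD-CUR construction), the greedy DEIM procedure picks one row index per basis vector:

    import numpy as np

    # Standard DEIM index selection: given an n-by-k basis U with independent
    # columns, greedily pick k row indices (background sketch only).
    def deim(U):
        n, k = U.shape
        p = [int(np.argmax(np.abs(U[:, 0])))]
        for j in range(1, k):
            # interpolate the j-th basis vector at the indices chosen so far
            c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
            r = U[:, j] - U[:, :j] @ c               # interpolation residual
            p.append(int(np.argmax(np.abs(r))))
        return np.array(p)

    # example: pick rows and columns of A for a CUR-type approximation
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 60))  # low-rank test matrix
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    rows, cols = deim(U[:, :8]), deim(Vt[:8, :].T)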

10:40
A Warm-Start Krylov-Schur Algorithm for Solving a Sequence of Eigenvalue Problems
PRESENTER: Arielle Carr

ABSTRACT. In this talk, we discuss a warm-started Krylov-Schur algorithm for solving a sequence of standard eigenvalue problems (SEPs) of the form $\mathcal{A}^{(k)}{\bf y} = \mu {\bf y}$ for $k = 1,2,\dots,N$. When computing invariant subspaces for a sequence of matrices, we can recycle the approximate invariant subspace computed for one matrix to warm-start the eigensolver for another matrix in the sequence, rather than starting the eigensolver from scratch with a single vector. This recycling technique makes computing invariant subspaces of very large-scale matrices tractable by reducing the total number of matrix-vector products required in the eigensolver, while still producing invariant subspaces that are as accurate as those of the standard Krylov-Schur method.

Since we start the eigensolver with a space that is not an exact Krylov space for the given matrix, we make several modifications to our eigensolver to account for this. (1) Given the start space, we first compute the Krylov decomposition with minimum residual norm based on the analysis in [Stewart 2002]. (2) After we expand this approximate Krylov space, we must account for the residual matrix when computing the Rayleigh quotient. (3) The standard convergence test for Krylov-Schur can no longer be used, and so we propose a convergence test based on the residual of the eigenvalue problem. We show that this cycle reduces the norm of the residual matrix.

An additional complication emerges when solving the sequence of eigenvalue problems that arise in the numerical simulation of brake squeal [Grabner et al. 2016]. Here, a sequence of $N$ generalized eigenvalue problems of the form ${\bf A}^{(k)}{\bf y} = \mu{\bf B}^{(k)}{\bf y}$ are solved as SEPs with the coefficient matrix of the form $\mathcal{A}^{(k)}=({\bf B}^{(k)})^{-1}{\bf A}^{(k)}$. In this case, because the matrix-vector products in the eigensolver require a linear solve, an iterative linear solver, such as GMRES, can be used to efficiently compute them. We can improve convergence of the iterative linear solver by using a Krylov subspace recycling method, such as GCRO-DR. We provide good results for both an academic and industrial model from the brake squeal application.

09:00-11:05 Session 7C: Transport and Nuclear Applications
09:00
Evaluation of the Robbins-Monro relaxation scheme for multiphysics coupling with Monte Carlo neutron transport
PRESENTER: Steven Hamilton

ABSTRACT. Multiphysics calculations involving Monte Carlo neutron transport and fluids/thermal hydraulics solvers have become increasingly common in recent years for performing high fidelity simulations of nuclear reactors. The stochastic noise introduced by the Monte Carlo solver generally precludes the use of Newton or even Anderson iteration solvers. Therefore, Picard iteration is used almost exclusively, alternating between solving the neutron transport and thermal hydraulics portions of the problem. Many problems of interest require a damping/relaxation factor to ensure numerical stability. With a fixed relaxation factor, the accuracy of a given calculation is limited by the statistical resolution of a single Monte Carlo solve, as determined by the number of independent particle histories performed in a single nonlinear iteration. Several authors have advocated for the use of a Robbins-Monro relaxation, which uses predefined iteration-dependent relaxation factors to allow statistical convergence across Picard iterations. This approach has the potential to enable smaller statistical error in a multiphysics problem when the number of particle histories per Monte Carlo calculation remains constant. However, because early iterations have comparatively larger weights in the Robbins-Monro algorithm, a bias is introduced that will decrease as the number of Picard iterations is increased. To date, the tradeoff between accuracy and runtime for standard vs. Robbins-Monro style relaxation schemes has not been adequately characterized. This talk will examine the behavior of different relaxation approaches and attempt to provide guidance for solver selection. Numerical results will be provided using the Shift Monte Carlo solver and the ENRICO multiphysics coupling code.
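Schematically (a scalar fixed-point sketch with synthetic noise standing in for the Monte Carlo/thermal-hydraulics coupling, not the Shift/ENRICO setup), Robbins-Monro relaxation replaces a fixed damping factor with iteration-dependent factors $\omega_k = 1/k$, so the statistical error averages down across Picard iterations:

    import numpy as np

    # Damped Picard iteration x <- (1 - omega) x + omega G(x) with a noisy map G
    # (the noise stands in for the Monte Carlo solve).
    rng = np.random.default_rng(0)
    x_true = 3.0

    def G(x, sigma=0.05):
        """Noisy fixed-point map with fixed point x_true."""
        return x_true + 0.5 * (x - x_true) + sigma * rng.standard_normal()

    def picard(relaxation, n_iter=200, x0=0.0):
        x = x0
        for k in range(1, n_iter + 1):
            omega = relaxation(k)
            x = (1.0 - omega) * x + omega * G(x)
        return x

    x_fixed = picard(lambda k: 0.5)      # fixed relaxation: error floor set by the noise
    x_rm = picard(lambda k: 1.0 / k)     # Robbins-Monro: error keeps decreasing
    print(abs(x_fixed - x_true), abs(x_rm - x_true))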

09:25
A Parareal Application for a Multilevel Quasi-Static Monte Carlo Neutron Transport Solver
PRESENTER: Evan Gonzalez

ABSTRACT. The parareal algorithm has been implemented within a Monte Carlo neutron transport solver that uses the transient multilevel (TML) method, a multilevel time-stepping scheme with a hierarchy of physics approximations, to simulate reactor transients. The TML method uses nested layers of the predictor-corrector quasi-static method (PCQM), where the Monte Carlo shape function is solved via transient fission source iteration and the amplitude function is solved using a transient Coarse Mesh Finite Difference (CMFD) solver. The CMFD flux is further factorized into shape and amplitude functions, where the CMFD amplitude function is now solved via the exact point kinetics equations (EPKE). The initial parareal implementation uses the transient CMFD solver as a coarse solver and the EPKE solver as a fine solver. The implementation was tested using the C5G7-TD3 multigroup reactor transient benchmark.
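For readers less familiar with parareal, the generic two-level iteration looks as follows (a scalar-ODE sketch with simple Euler propagators, standing in for the coarse CMFD and fine EPKE solvers of the talk): serial coarse predictions are corrected by fine-propagator solves that can run in parallel across time slices.

    import numpy as np

    # Generic parareal iteration for u' = -2u on [0, T] (illustrative only).
    T, N, u0 = 2.0, 20, 1.0
    t = np.linspace(0.0, T, N + 1)

    def coarse(u, dt):                   # one backward-Euler step (cheap propagator)
        return u / (1.0 + 2.0 * dt)

    def fine(u, dt, m=20):               # m forward-Euler substeps (expensive propagator)
        h = dt / m
        for _ in range(m):
            u = u - 2.0 * h * u
        return u

    U = np.empty(N + 1); U[0] = u0       # initial serial coarse sweep
    for n in range(N):
        U[n + 1] = coarse(U[n], t[n + 1] - t[n])

    for k in range(5):                   # parareal iterations
        F = np.array([fine(U[n], t[n + 1] - t[n]) for n in range(N)])    # parallel in practice
        Gk = np.array([coarse(U[n], t[n + 1] - t[n]) for n in range(N)])
        U_new = np.empty(N + 1); U_new[0] = u0
        for n in range(N):               # serial coarse correction sweep
            U_new[n + 1] = coarse(U_new[n], t[n + 1] - t[n]) + F[n] - Gk[n]
        U = U_new

    print("error at T:", abs(U[-1] - u0 * np.exp(-2.0 * T)))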

09:50
Solving the Neutron Transport Equation for Microreactor Modeling using Unstructured Meshes and Exascale Computing Architectures
PRESENTER: William Dawn

ABSTRACT. The Microreactor Exascale eZ CALculation (MEZCAL) tool has been developed to accurately and efficiently solve the neutron transport equation in general, unstructured meshes to support the design and modeling of microreactors. MEZCAL solves the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation using the Finite Element Method (FEM). As solving the neutron transport equation is computationally expensive, MEZCAL is designed to efficiently use exascale computing architectures with an emphasis on GPU computing. To leverage existing tools, MEZCAL is built using the MFEM library and uses solvers from HYPRE, PETSc, and SLEPc.

Verification of the neutron transport solver in MEZCAL is demonstrated with the solution to a one-dimensional cylindrical problem which has a semi-analytic solution. After verification, a realistic microreactor model based on the MARVEL design is presented. Spatial and angular refinement of a two-dimensional MARVEL microreactor is presented and it is demonstrated that significant spatial refinement is required for an accurate solution. Preliminary results are also presented for a three-dimensional model of the MARVEL microreactor. Finally, a weak scaling study is performed to investigate how the methods in MEZCAL will scale for larger problems with the next generation of exascale computing architectures.

10:15
Nonlinear Iteration Method of Moments with Functional Derivatives for the Linear Boltzmann Transport Equation

ABSTRACT. The Boltzmann transport equation (BTE) is a fundamental model of particle transport in various physical systems. It describes photon transport in problems of high-energy density physics, neutron and gamma transport in a nuclear reactor, as well as particle transport phenomena in astrophysics, plasmas, etc. The BTE is an integro-differential equation for the high-dimensional particle distribution function in phase space and time. The differential operator of the BTE accounts for particle advection. Its integral operator describes the redistribution of particles in angle and energy due to interactions and involves integration over the full range of the corresponding elements of the phase space. Iteration methods must be used to solve the BTE. There are general approaches and iterative methods that can be applied to particle transport problems in different physical applications. The basic iteration techniques, such as the Richardson iteration scheme, are inefficient and converge very slowly for a large class of important problems.
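For concreteness, in the one-group, isotropic-scattering, 1D slab setting considered later in the talk (an assumed standard form), the BTE reads

$$ \mu\,\frac{\partial \psi}{\partial x}(x,\mu) + \sigma_t(x)\,\psi(x,\mu) = \frac{\sigma_s(x)}{2}\int_{-1}^{1}\psi(x,\mu')\,d\mu' + \frac{q(x)}{2}, $$

where $\psi$ is the angular flux, $\mu$ the direction cosine, and $\sigma_t$, $\sigma_s$ the total and scattering cross sections.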

The nonlinear projection iterative (NPI) approach has been successful in developing efficient iteration methods for the BTE. It is based on applying projection operators in the angular and energy variables to formulate a system of low-order equations for a set of moments of the particle distribution function. The closed system of the low-order moment equations and the high-order BTE is defined by means of prolongation operators and exact closures. As a result, NPI methods are defined by a nonlinear system of equations. The closure terms are formulated by means of functionals that are weakly dependent on the BTE solution. A fixed-point iteration scheme for solving the coupled high-order and low-order equations converges rapidly. The NPI methods can be interpreted as two- or multi-level iteration algorithms. Iteration methods for solving the BTE formulated with the NPI approach include the quasidiffusion (QD) method (aka the Variable Eddington Factor (VEF) method), flux methods, and $\alpha$-weighted methods. HOLO methods such as nonlinear diffusion acceleration, the coarse-mesh finite difference (CMFD) method, and partial-current-based CMFD are based on similar elements.

This talk presents a new nonlinear iteration method for the linear BTE derived by means of the NPI approach and based on the low-order QD (VEF) equations for full-range angular moments of the high-order BTE solution. This method uses Gateaux derivatives of functionals, namely, the Eddington factors. The direction for the Gateaux derivative is evaluated from iterative estimates of the high-order solution. The method is defined with a parameter that scales the amplitude of the Gateaux derivative. A Fourier analysis of the linearized equations of the method in the vicinity of the solution makes it possible to determine the optimal value of the scaling parameter, which reduces the spectral radius by about a factor of 2 compared to the basic version of the QD (VEF) method. The proposed approach can be extended to modify other NPI and HOLO methods. In this talk, we present the iterative algorithm for solving one-group particle transport problems in 1D slab geometry. Numerical results illustrating the performance of the algorithm will be presented.
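As a reminder of the low-order system referred to above (the standard one-group, 1D slab form of the QD/VEF equations; the method of this talk further modifies the closure with Gateaux-derivative terms), the equations for the moments $\phi = \int_{-1}^{1}\psi\,d\mu$ and $J = \int_{-1}^{1}\mu\,\psi\,d\mu$ are

$$ \frac{dJ}{dx} + \sigma_a\,\phi = q_0, \qquad \frac{d}{dx}\bigl(E[\psi]\,\phi\bigr) + \sigma_t\,J = q_1, \qquad E[\psi] = \frac{\int_{-1}^{1}\mu^2\,\psi\,d\mu}{\int_{-1}^{1}\psi\,d\mu}, $$

where the Eddington factor $E[\psi]$ is evaluated from the latest high-order BTE solution.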

10:40
Krylov Linear Solvers and Quasi-Monte Carlo Methods for Transport Simulations
PRESENTER: Samuel Pasmann

ABSTRACT. The Boltzmann form of the neutron transport equation describes the behavior of neutrons as they stream and scatter from one atom to another and is important in nuclear reactor design and accident analysis. The full neutron transport equation depends on space, direction, time, and energy. In many cases an analytic solution cannot be obtained unless many simplifying assumptions are made, so numerical methods must be relied upon to provide physically realistic solutions. Iterative methods, like diffusion-accelerated Source Iteration (SI), are standard practice. These iterative methods can be enhanced by replacing the deterministic transport sweep with Monte Carlo (MC) simulation. This hybrid method can be further enhanced using Quasi-Monte Carlo (QMC) techniques, reducing the noise inherent in MC simulations. However, the SI method can suffer from slow convergence, particularly in optically thick media. Preconditioned Krylov subspace methods provide a more efficient solution compared to the SI method. Krylov methods including GMRES and BiCGStab are shown to outperform SI in several test problems using the outlined hybrid QMC-iterative solver method.
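To illustrate the Krylov formulation in the simplest setting (an assumed toy problem in which a deterministic diamond-difference sweep stands in for the QMC transport sweep; not the authors' solver), source iteration $\phi = \mathcal{S}(\sigma_s\phi + q)$ is recast as the linear system $(I - K)\phi = \mathcal{S}q$ with $Kv = \mathcal{S}(\sigma_s v)$ and handed to GMRES through a matrix-free operator:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # Toy 1D slab, one-group, isotropic scattering; diamond-difference sweep with
    # vacuum boundaries stands in for the QMC sweep of the talk.
    nx, dx = 50, 0.1
    sig_t, sig_s = 1.0, 0.8
    q_ext = np.ones(nx)
    mus, wts = np.polynomial.legendre.leggauss(4)   # S4 angular quadrature on [-1, 1]

    def sweep(source):
        """Transport sweep: cell-centered isotropic source -> scalar flux."""
        phi = np.zeros(nx)
        for mu, w in zip(mus, wts):
            psi_in = 0.0                            # vacuum boundary
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            for i in cells:
                # diamond-difference cell balance
                psi_c = (0.5 * source[i] + 2 * abs(mu) / dx * psi_in) / (sig_t + 2 * abs(mu) / dx)
                psi_in = 2 * psi_c - psi_in         # outgoing flux feeds the next cell
                phi[i] += w * psi_c
        return phi

    # Source iteration fixed point: phi = sweep(sig_s * phi + q_ext).
    # Krylov form: (I - K) phi = sweep(q_ext), K v = sweep(sig_s * v), matrix-free.
    A = LinearOperator((nx, nx), matvec=lambda v: v - sweep(sig_s * v))
    phi, info = gmres(A, sweep(q_ext))
    print("GMRES info:", info)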

11:05-11:25 Break
11:25-11:50 Virtual Mini Banquet (Bighorn A link)
11:50-13:05 Session 8: Student Paper Winner Presentations (Bighorn A link)
11:50
A scalable and robust vertex-star relaxation for high-order FEM
PRESENTER: Pablo Brubeck

ABSTRACT. Pavarino proved that the additive Schwarz method with vertex patches and a low-order coarse space gives a $p$-robust solver for symmetric and coercive problems. However, for very high polynomial degree it is not feasible to assemble or factorize the matrices for each patch. In this work we introduce a direct solver for separable patch problems that scales to very high polynomial degree on tensor product cells. The solver constructs a tensor product basis that diagonalizes the blocks in the stiffness matrix for the internal degrees of freedom of each individual cell. As a result, the non-zero structure of the cell matrices is that of the graph connecting internal degrees of freedom to their projection onto the facets. In the new basis, the patch problem is as sparse as a low-order finite difference discretization, while having a sparser Cholesky factorization. We can thus afford to assemble and factorize the matrices for the vertex-patch problems, even for very high polynomial degree. In the non-separable case, the method can be applied as a preconditioner by approximating the problem with a separable surrogate. We demonstrate the approach by solving the Poisson equation at $p = 15$, and an $H(\text{div})$-conforming interior penalty discretization of incompressible linear elasticity at $p = 15$.
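To illustrate the separable-solver idea in its simplest setting (a generic fast-diagonalization sketch with low-order 1D FEM matrices, not the high-order vertex-patch construction of the talk), the 1D generalized eigenproblem simultaneously diagonalizes stiffness and mass, so a separable 2D solve reduces to pointwise division:

    import numpy as np
    from scipy.linalg import eigh

    # Fast diagonalization for the separable 2D problem  A U M + M U A = F
    # using  A S = M S diag(lam),  S^T M S = I  (generic sketch only).
    n, h = 20, 1.0 / 21
    A1 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h        # 1D stiffness
    M1 = h / 6 * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))    # 1D mass

    lam, S = eigh(A1, M1)                # simultaneous diagonalization of A1 and M1

    F = np.random.rand(n, n)             # right-hand side on the tensor-product grid
    U_hat = (S.T @ F @ S) / (lam[:, None] + lam[None, :])
    U = S @ U_hat @ S.T                  # pointwise division replaces a coupled 2D solve

    print(np.linalg.norm(A1 @ U @ M1 + M1 @ U @ A1 - F) / np.linalg.norm(F))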

12:15
pISTA: preconditioned Iterative Soft Thresholding Algorithm for Graphical Lasso
PRESENTER: Gal Shalom

ABSTRACT. We propose a novel quasi-Newton method for solving the sparse inverse covariance estimation problem, also known as the graphical least absolute shrinkage and selection operator (GLASSO). This problem is often solved using a second-order quadratic approximation. However, in such algorithms the Hessian term is complex and computationally expensive to handle. To this end, our method uses the inverse of the Hessian as a preconditioner to simplify and approximate the quadratic term at the cost of a more complex $\ell_1$ term. The variables of the resulting preconditioned problem are coupled only by the $\ell_1$ sub-derivative of each other, which can be guessed with minimal cost using the gradient itself, allowing the algorithm to be parallelized and implemented efficiently on GPU hardware accelerators. Numerical results on synthetic and real data demonstrate that our method is competitive with other state-of-the-art approaches.
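As background for the thresholding step (the generic ISTA building block only; pISTA modifies both the quadratic and the $\ell_1$ term inside the GLASSO objective), one ISTA iteration for $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$ is a gradient step followed by elementwise soft thresholding:

    import numpy as np

    # Generic ISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (background only).
    def soft_threshold(v, tau):
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, b, lam, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)             # gradient of the smooth part
            x = soft_threshold(x - grad / L, lam / L)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200); x_true[:5] = 1.0
    b = A @ x_true
    print(np.nonzero(ista(A, b, lam=0.1))[0][:10])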

12:40
Toward Parallel in Time for Chaotic Dynamical Systems

ABSTRACT. As CPU clock speeds have stagnated, and high performance computers continue to have ever higher core counts, increased parallelism is needed to take advantage of these new architectures. Traditional serial time-marching schemes are a significant bottleneck, as many types of simulations require large numbers of time-steps which must be computed sequentially. Parallel in Time schemes, such as the Multigrid Reduction in Time (MGRIT) method, remedy this by parallelizing across time-steps, and have shown promising results for parabolic problems. However, chaotic problems have proved more difficult, since chaotic initial value problems are inherently ill-conditioned. MGRIT relies on a hierarchy of successively coarser time-grids to iteratively correct the solution on the finest time-grid, but due to the nature of chaotic systems, subtle inaccuracies on the coarser levels can lead to poor coarse-grid corrections. Here we propose a modification to nonlinear FAS multigrid, as well as a novel time-coarsening scheme, which together better capture long term behavior on coarse grids and greatly improve convergence of MGRIT for chaotic initial value problems. We provide supporting numerical results for the Lorenz system model problem.

13:05-13:30 End of Day Break