FIREDRAKE '21
PROGRAM FOR WEDNESDAY, SEPTEMBER 15TH

14:00-15:10 Session 1: Numerics and code
Location: Zoom
14:00
Introduction
14:10
asQ: a library for parallel-in-time solution of all-at-once systems using the diagonalisation technique in Firedrake
PRESENTER: Colin Cotter

ABSTRACT. We will describe our development of asQ, a library for parallel-in-time solution of all-at-once systems using the diagonalisation technique in Firedrake. From a UFL description of the time-dependent PDE, asQ automatically builds the UFL forms for the all-at-once system, together with its alpha-circulant modifications, which can be applied at the level of the nonlinear problem (for combination with Picard iteration), at the level of the Jacobian (for quasi-Newton iteration), or as a preconditioner. These modifications can be block diagonalised using the FFT, after averaging in the Jacobian to produce a discretised time-independent linear PDE. We will report on our progress towards efficient scalable preconditioners for atmosphere/ocean dynamics and towards a parallel-in-time implementation.
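
The diagonalisation technique itself is easy to illustrate outside Firedrake. The following is a minimal NumPy sketch (not asQ's API) for a scalar backward-Euler all-at-once system: conjugating the alpha-circulant matrix by the diagonal scaling diag(alpha**(j/n)) turns it into an ordinary circulant matrix, so it can be inverted with two scaled FFTs and a pointwise division.

# Minimal NumPy sketch of the alpha-circulant diagonalisation (not asQ's API).
# All-at-once backward Euler for du/dt = lam*u over n steps: the system matrix is
# lower bidiagonal; the alpha-circulant modification adds an alpha-weighted
# coupling from the last time level back to the first.
import numpy as np

n, dt, lam, alpha = 16, 0.1, -1.0, 0.1
d = 1.0 - dt*lam                          # backward Euler diagonal entry

C = d*np.eye(n) - np.eye(n, k=-1)         # all-at-once matrix
C[0, -1] = -alpha                         # alpha-circulant wrap-around term

c = C[:, 0]                               # first column defines the matrix
gamma = alpha**(np.arange(n)/n)           # Gamma_alpha = diag(alpha**(j/n))

# Gamma C Gamma^{-1} is circulant with first column gamma*c, so solving C x = b
# reduces to x = Gamma^{-1} ifft( fft(Gamma b) / fft(Gamma c) ).
b = np.random.default_rng(0).standard_normal(n)
x = (np.fft.ifft(np.fft.fft(gamma*b)/np.fft.fft(gamma*c))/gamma).real

assert np.allclose(x, np.linalg.solve(C, b))   # matches a direct dense solve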

14:30
Hybridization and postprocessing in FEEC with Firedrake and Slate

ABSTRACT. Finite element exterior calculus (FEEC) unifies several families of conforming finite element methods for Laplace-type problems, including the scalar and vector Poisson equations. This talk presents a framework for hybridization of FEEC, which recovers known hybrid methods for the scalar Poisson equation and gives new hybrid methods for the vector Poisson equation. We also generalize Stenberg postprocessing, proving new superconvergence estimates. We will focus in particular on the implementation of these methods in Firedrake and Slate. Based on joint work with Gerard Awanou, Maurice Fabien, and Johnny Guzman (arXiv:2008.00149).
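
For readers less familiar with hybridization, the construction in the simplest (scalar Poisson) case is the classical one sketched below; the talk generalises this to the full FEEC setting. The flux space is broken across element boundaries, a Lagrange multiplier on the mesh facets enforces normal continuity, and the element-local unknowns are then eliminated, leaving a global system for the multiplier that also drives Stenberg-type postprocessing.

% Classical hybridized mixed method for -div(grad u) = f, with sigma = -grad u
% (textbook sketch, not the paper's FEEC generalisation): find sigma_h in a broken
% H(div) space, u_h in DG, and a facet multiplier lambda_h such that
\begin{align*}
(\sigma_h, \tau)_{\mathcal{T}_h} - (u_h, \nabla\!\cdot\tau)_{\mathcal{T}_h}
  + \langle \lambda_h, \tau\cdot n \rangle_{\partial\mathcal{T}_h} &= 0,\\
(\nabla\!\cdot\sigma_h, v)_{\mathcal{T}_h} &= (f, v)_{\mathcal{T}_h},\\
\langle \sigma_h\cdot n, \mu \rangle_{\partial\mathcal{T}_h} &= 0,
\end{align*}
% for all test functions (tau, v, mu). The first two equations are element-local, so
% sigma_h and u_h can be eliminated cell by cell (static condensation), leaving a
% global system in lambda_h alone.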

14:50
Matrix-free, hybridised, compatible, high order finite element methods in Firedrake

ABSTRACT. One way to achieve high accuracy in simulations of physically complex problems is to discretise the partial differential equations with higher order, compatible finite element methods (FEM). Since higher order, compatible approximations result in large equation systems, performance optimisations become crucial. The system solve can be sped up by loosening the global coupling of the FEM with a hybridisation preconditioner, so that the expensive operations only need to be executed on smaller matrices. The preconditioner is defined through local linear algebra operations on FEM tensors and is represented in Slate, Firedrake's domain specific language for element-local linear algebra.

The local linear algebra operations need further optimisation in order to achieve high performance for high order, compatible FEM, because not only the global but also the local tensors are considerably large. By employing locally matrix-free methods, high storage requirements and data movement can be traded for additional FLOPs, and a high FLOP-to-data ratio is advantageous on recent computer architectures.

I will talk about the automatic code generation of fully matrix-free, hybridised, compatible, high order FEM in Firedrake.
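
For orientation, a hybridisation solve of this kind is driven in Firedrake purely through solver parameters. The snippet below is a hedged sketch for a mixed Poisson problem using the existing firedrake.HybridizationPC with a matrix-free outer operator; it shows the configuration mechanism, not the speaker's fully matrix-free, high order Slate pipeline.

# Hedged sketch: hybridisation via solver parameters for a mixed Poisson problem.
from firedrake import *

mesh = UnitSquareMesh(32, 32)
degree = 3                                  # "high order" is just a choice here
V = FunctionSpace(mesh, "RT", degree)       # H(div)-conforming flux space
Q = FunctionSpace(mesh, "DG", degree - 1)   # discontinuous scalar space
W = V * Q

sigma, u = TrialFunctions(W)
tau, v = TestFunctions(W)
x, y = SpatialCoordinate(mesh)
f = sin(pi*x)*sin(pi*y)

# Mixed form with sigma = grad(u); u = 0 is imposed weakly on the boundary.
a = (inner(sigma, tau) + u*div(tau) + div(sigma)*v)*dx
L = -f*v*dx

w = Function(W)
solve(a == L, w, solver_parameters={
    "mat_type": "matfree",                  # apply the global operator matrix-free
    "ksp_type": "preonly",
    "pc_type": "python",
    "pc_python_type": "firedrake.HybridizationPC",
    # Solver for the condensed trace system produced by Slate:
    "hybridization": {"ksp_type": "cg", "pc_type": "gamg"},
})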

15:10-15:20 Coffee Break
15:20-16:20 Session 2: New Firedrake capabilities 1
Location: Zoom
15:20
Latest developments in Fireshape: a shape optimization toolbox for Firedrake
PRESENTER: Alberto Paganini

ABSTRACT. In this talk, we give a quick introduction to shape optimization and discuss the latest developments in Fireshape: what we did, what we didn't, what we wish to do, and what we wish we hadn't done.

15:40
Firedrake I/O for checkpointing
PRESENTER: Koki Sagiyama

ABSTRACT. Firedrake's DumbCheckpoint class has been used in some applications to checkpoint states on disk. Its support has been limited, however, in that it only saves Functions, not Meshes, and that, when loading, one is responsible for reconstructing the same mesh with the same distribution as was used when saving. In this work we introduce a new checkpointing class, CheckpointFile, which significantly improves Firedrake's checkpointing capability. Specifically, it saves Meshes as well as Functions, it can save and load on different numbers of MPI processes, and it creates only a single output file no matter how many MPI processes are used. It also supports applications that use extrusion and/or timestepping.
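
A minimal usage sketch of the new interface, based on the description above (file, mesh and function names are illustrative):

# Hedged sketch of the CheckpointFile interface described above.
from firedrake import *

mesh = UnitSquareMesh(8, 8, name="my_mesh")
V = FunctionSpace(mesh, "CG", 1)
f = Function(V, name="temperature")
f.interpolate(SpatialCoordinate(mesh)[0])

# Save the mesh and the function; a single HDF5 file is written regardless of
# how many MPI processes are used.
with CheckpointFile("checkpoint.h5", "w") as afile:
    afile.save_mesh(mesh)
    afile.save_function(f)

# Later (possibly on a different number of MPI processes), load them back.
with CheckpointFile("checkpoint.h5", "r") as afile:
    mesh = afile.load_mesh("my_mesh")
    f = afile.load_function(mesh, "temperature")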

16:00
Flame Graphs for Free with PETSc

ABSTRACT. Detailed performance analysis of an application is an essential step that must be taken prior to optimisation. A vast number of tools already exist for this purpose but they usually suffer from a combination of: involved setup, lack of support for parallelism, and poor interoperability across programming languages.

Here we present a new contribution to PETSc's logging infrastructure that allows users to easily visualise the runtime of their code using a flame graph. The tool requires absolutely zero setup - it is already available to all applications that use PETSc - works in parallel, and can be extended using any programming language that has PETSc bindings.

To provide a motivating example, we analyse the performance of a Firedrake application. Many of the internal functions in Firedrake have been annotated with PETSc events, giving users insights into the performance of the entire application.
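
In practice the feature is driven entirely through PETSc's -log_view option. The sketch below is a hedged example of profiling a Firedrake script this way; the flame-graph viewer-format string in the comment is an assumption (check the PETSc logging documentation for the exact syntax), and the event name is a placeholder.

# Hedged sketch: profiling an existing Firedrake script with PETSc's flame-graph
# log output. Run it as, for example,
#     python simulation.py -log_view :profile.txt:ascii_flamegraph
# and load profile.txt into a flame-graph viewer such as speedscope.
# Extra regions can be annotated with PETSc events so they appear in the graph.
from firedrake import *
from firedrake.petsc import PETSc

mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "CG", 2)
u, v = TrialFunction(V), TestFunction(V)
uh = Function(V)

my_event = PETSc.Log.Event("poisson_solve")   # hypothetical custom event name
my_event.begin()
solve(inner(grad(u), grad(v))*dx == Constant(1.0)*v*dx, uh,
      bcs=DirichletBC(V, 0, "on_boundary"))
my_event.end()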

16:20-16:40 Coffee Break
16:40-17:40 Session 3: Ocean applications
Location: Zoom
16:40
Implementing a new model for sub-ice shelf ocean dynamics in the Firedrake framework: verification against MITgcm and preliminary adjoint calculations
PRESENTER: William Scott

ABSTRACT. Determining how much ice in Antarctica could contribute to sea level rise is a pressing issue in climate science. The recent increase in Antarctica's contribution to sea level rise is predominantly due to the melting of ice beneath floating ice shelves in West Antarctica. Ice shelves are generally constrained by valleys and ridges extending from the sea floor, which provide a buttressing force to the glaciers inland. When ice shelves thin, the buttressing force decreases, so the grounded ice flows faster into the ocean. Much of the uncertainty in constraining the timescales of ice loss is due to a lack of understanding of how the ocean flows under these floating ice shelves. This is especially true at the grounding zone of glaciers, where the ice first begins to float. Direct field measurements are very hard to make owing to the extreme environment, and satellites are limited to a surface-only view. The ice is typically hundreds of metres thick where the glacier begins to float. Numerical models of ice flow are very sensitive to how much melt is imposed at the grounding line. Satellite and radar measurements of grounding zone retreat also paint a complicated picture of heterogeneous retreat rates across the glacier at seasonal and tidally dominated frequencies. Standard ocean models used for simulating flow under ice shelves struggle to resolve these processes up to the grounding zone, due to constraints imposed by structured grid discretisations.

The motivation behind this work is to investigate ocean flow near the grounding zone of glaciers and accurately resolve the small-scale processes that are fundamental to our understanding of the larger-scale flow. A key part of this is to retain the ‘full’ physics: we solve the incompressible Navier-Stokes equations with the Boussinesq buoyancy approximation, without making the common geophysical fluid dynamics approximations (shallow water, neglect of the vertical acceleration of the flow). This means we can resolve flow near the grounding zone of glaciers on unstructured meshes with order ‘one-to-one’ aspect ratios. We have implemented a three equation melt parametrisation to represent the ice-ocean boundary.

Most of the current work has been testing our model against MITgcm in 2D and 3D simulations, based on test cases from the Ice Shelf Ocean Model Intercomparison Project (ISOMIP+). Good agreement has been found between the two models, despite their different numerical discretisation schemes, i.e. DG finite elements versus finite volumes (MITgcm). A main driver behind using Firedrake is the availability of an automatically generated adjoint model. Our first adjoint calculations, of the sensitivities of melt rate with respect to different inputs, are promising. We hope that this will be useful in addressing questions about important processes, such as how subglacial outflow affects circulation within the ice shelf cavity, as well as in reducing uncertainty when choosing values for unknown parameters, such as the turbulent exchange coefficients which govern the transfer of heat and salt at the ice-ocean interface as part of the melt parametrisation.
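
For reference, the "three equation" melt parametrisation mentioned above is conventionally written in the following generic form (e.g. Holland and Jenkins 1999); the exact coefficients and sign conventions used in this work may differ.

\begin{align*}
T_b &= \lambda_1 S_b + \lambda_2 + \lambda_3 z_b, \\
\rho_w c_w \gamma_T (T_w - T_b) &= \rho_i m \left(L + c_i (T_b - T_i)\right), \\
\rho_w \gamma_S (S_w - S_b) &= \rho_i m S_b,
\end{align*}
% liquidus relation, interfacial heat balance and interfacial salt balance, where m is
% the melt rate, (T_w, S_w) the ocean temperature and salinity adjacent to the ice,
% (T_b, S_b) their values at the interface, z_b the ice draft, and gamma_T, gamma_S the
% turbulent exchange coefficients for heat and salt referred to in the abstract.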

17:00
Tsunami source inversion by coupling automatic differentiation tools
PRESENTER: Joe Wallwork

ABSTRACT. In this presentation, we demonstrate the inversion of a coupled earthquake-tsunami model in order to approximate the crustal physics of the 2011 Tōhoku earthquake. Given an array of subfaults within the main earthquake fault, the so-called Okada model approximates the resulting sea bed deformation in terms of nine control parameters per subfault. Under the assumption that the bed deformation translates into an equivalent ocean surface displacement, we may thereby establish initial conditions for subsequent tsunami propagation modelling using Thetis. The objective functional for the inversion is taken to be a sum of (point-wise) squared misfits against gauge data, which can be represented in Firedrake thanks to the recently developed VertexOnlyMesh. Coupled earthquake-tsunami inversion then seeks to deduce optimal values for the Okada parameters on each subfault from the output of the tsunami model. The Okada model implementation used in this work - due to GeoClaw - is written in pure Python (rather than UFL), so we differentiate it using the automatic differentiation tool ADOL-C and present details on its coupling with Pyadjoint.
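
The point-wise misfit against gauge data can be expressed in Firedrake roughly as follows; this is a hedged sketch of the VertexOnlyMesh mechanism, with gauge locations, observed values and field names as placeholders rather than the authors' setup.

# Hedged sketch: assembling a point-wise squared misfit with VertexOnlyMesh.
from firedrake import *
import numpy as np

mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "CG", 1)
eta = Function(V, name="free_surface")          # model output at one time level

gauge_locations = [(0.2, 0.3), (0.7, 0.8)]      # placeholder gauge positions
observed = np.array([0.05, -0.02])              # placeholder observations

# A mesh consisting only of the gauge points; P0DG on it holds one value per point.
vom = VertexOnlyMesh(mesh, gauge_locations)
P0DG = FunctionSpace(vom, "DG", 0)

eta_at_gauges = interpolate(eta, P0DG)          # point evaluation of the model
obs = Function(P0DG)
obs.dat.data[:] = observed                      # serial ordering assumed here

# Sum of squared misfits, expressed as an integral over the vertex-only mesh.
J = assemble((eta_at_gauges - obs)**2 * dx)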

17:20
On constraining ocean mesoscale eddy energy dissipation time-scale
PRESENTER: Julian Mak

ABSTRACT. The global ocean overturning circulation is an important component of the climate system, and it is increasingly recognised that geostrophic mesoscale eddies play a fundamental role in setting the sensitivity of the overturning circulation to changes in atmospheric conditions. A major discrepancy lies in the differing sensitivities displayed by models that permit mesoscale eddies and those that parameterise them, raising the concern that climate models, which by and large are eddy-parameterising, are not necessarily faithful in their projections. There is, however, growing evidence that parameterisations that utilise eddy energy information in some way, particularly those that use the eddy energy as a prognostic variable to inform the spatio-temporal distribution and magnitude of the eddy-mean feedback, have success in reconciling the observed discrepancies in the sensitivities, highlighting a need to understand and constrain ocean eddy energy cycles.

Here we focus on constraining the transfer rate of eddy energy out of the mesoscales, modelled as a mesoscale eddy energy dissipation, which has previously been found to have significant impacts on the modelled responses in global ocean circulation calculations. The problem is turned into one of parameter inference; while ideally we would solve the more complete inference problem constrained by the equations governing ocean dynamics, subject to some assumptions we solve a less correct but useful alternative that is essentially a variational optimisation problem constrained by an elliptic PDE. The problem is tackled using Firedrake, wrapped with the tlm_adjoint package (Maddison et al., 2019, SIAM J. Sci. Comput.), on an immersed manifold, with appropriate data diagnosed from a high resolution global ocean circulation model. Implications and further directions are discussed.
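
The structure of such an elliptic-PDE-constrained parameter inference can be sketched as follows. This is a hedged illustration written with firedrake_adjoint/pyadjoint for brevity, whereas the authors use the tlm_adjoint package; the equation, fields and regularisation are placeholders, not the authors' formulation.

# Hedged sketch of an elliptic-PDE-constrained parameter inference (placeholders only).
from firedrake import *
from firedrake_adjoint import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

lam = Function(V).assign(1.0)       # parameter to infer (e.g. a dissipation rate)
E_obs = Function(V).assign(0.5)     # stand-in for diagnosed eddy-energy data

u, v = Function(V), TestFunction(V)
# Placeholder elliptic constraint: -div(grad u) + lam*u = 1, homogeneous Dirichlet BCs.
F = inner(grad(u), grad(v))*dx + lam*u*v*dx - Constant(1.0)*v*dx
solve(F == 0, u, bcs=DirichletBC(V, 0, "on_boundary"))

# Data misfit plus a simple regularisation term on the parameter.
J = assemble(0.5*(u - E_obs)**2*dx + 1.0e-4*lam**2*dx)
Jhat = ReducedFunctional(J, Control(lam))
lam_opt = minimize(Jhat, method="L-BFGS-B")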

17:40-17:50 Coffee Break
17:50-18:30 Session 4: Solvers at scale
Location: Zoom
17:50
An augmented Lagrangian preconditioner for the magnetohydrodynamics equations at high Reynolds and coupling numbers
PRESENTER: Fabian Laakmann

ABSTRACT. The magnetohydrodynamics (MHD) equations are generally known to be difficult to solve numerically, due to their highly nonlinear structure and the strong coupling between the electromagnetic and hydrodynamic variables, especially for high Reynolds and coupling numbers. In this talk, we present a scalable augmented Lagrangian preconditioner for a finite element discretisation of the B-E formulation of the incompressible resistive MHD equations. For stationary problems, our solver achieves robust performance with respect to the Reynolds and coupling numbers in two dimensions and good results in three dimensions. Our approach relies on specialized parameter-robust multigrid methods for the hydrodynamic and electromagnetic blocks. Moreover, we describe the extension of our work to fully implicit methods for time-dependent problems, which we solve robustly in both two and three dimensions. Our scheme ensures exactly divergence-free approximations of both the velocity and the magnetic field. We confirm the robustness of our scheme by numerical experiments in which we consider fluid and magnetic Reynolds numbers and coupling numbers up to 10,000 for stationary problems and up to 100,000 for transient problems in two and three dimensions.
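
For readers unfamiliar with the approach, the augmented Lagrangian idea is easiest to state in the plain incompressible Navier-Stokes setting, on which the talk's solver builds: one adds the term gamma*(div u, div v) to the momentum equation, which vanishes on the divergence-free solution but makes the pressure Schur complement easy to approximate,

\[
  \hat{S}^{-1} \approx -(\nu + \gamma)\, M_p^{-1},
\]
% where nu is the viscosity, gamma the augmentation parameter and M_p the pressure
% mass matrix. The price is a nearly singular augmented velocity block, which is
% handled with parameter-robust multigrid; as the abstract states, an analogous
% parameter-robust multigrid strategy is applied to the electromagnetic block.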

18:10
Code generation for productive portable scalable finite element simulation on HPC in Firedrake
PRESENTER: Jack Betteridge

ABSTRACT. The right combination of discretisations, differential operators, preconditioners and solvers can produce scalable, high performance simulations of PDEs. However, the optimal combination changes based on the application and available hardware, and software development time is often a severely limited resource. Using Firedrake we demonstrate that generating simulation code from a high-level Python interface provides an effective mechanism for simulations on high performance computers in very few lines of code.

We show that moving from one supercomputer to another can require significant algorithmic changes to achieve scalable performance. Modern HPC architectures vary wildly in their performance characteristics, which drastically changes how simulations perform. Code generation allows substantial algorithmic changes to be made with little change to the simulation code. We present results from the Isambard and ARCHER2 HPC facilities and look ahead to new features that will allow Firedrake to continue to run efficiently at scale.

18:30-19:30 After-work Gather Town drinks