FENICS'17: THE FENICS 2017 CONFERENCE
PROGRAM FOR MONDAY, JUNE 12TH

12:00-13:00 Registration and welcome lunch
13:10-14:10 Session 2: Software and Algorithms I
Location: Salle Tavenas Main Theatre
13:10
Generating matrix actions with TSFC and Loo.py

ABSTRACT. Generating efficient finite element code for multiple architectural targets requires great flexibility. In this work, we describe a chain of transformations that starts with UFL and goes through TSFC to produce low-level code for computing the matrix-free action of bilinear forms. We convert TSFC’s intermediate representation of an element-local kernel into an equivalent kernel in Loo.py, a transformation-based tool that automates the otherwise tedious and error-prone conversion of high-level loop descriptions into tight, low-level code in OpenCL, CUDA, or other device languages.

The element kernel requires several kinds of transformations before its compilation and use, all of which demonstrate important features of Loo.py. For one, we “batch” the element kernel to iterate over many cells. Then, we generate and fuse in additional kernels that scatter and gather between element-level and assembled processor-global storage for each field. Beyond these, we may apply transformations to produce intra-node parallelism, loop-unrolling, or other performance optimizations. The resulting kernel can be invoked on each MPI rank as a kind of Firedrake “implicit matrix” and hence used inside a Krylov method.
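As a rough illustration of this workflow (not the actual TSFC-generated kernel), the following Loo.py sketch defines a toy element-action kernel, batches it over cells by splitting the cell loop, and applies a simple unrolling transformation; all names, sizes and tags here are illustrative assumptions.

```python
import numpy as np
import loopy as lp

# Toy "element kernel": y[e, i] = sum_j A[i, j] * x[e, j] for every cell e.
# This stands in for the matrix-free action of an element-local operator.
knl = lp.make_kernel(
    "{[e, i, j]: 0 <= e < ncells and 0 <= i < 3 and 0 <= j < 3}",
    "y[e, i] = sum(j, A[i, j] * x[e, j])",
    [lp.GlobalArg("y", np.float64, shape=("ncells", 3)),
     lp.GlobalArg("x", np.float64, shape=("ncells", 3)),
     lp.GlobalArg("A", np.float64, shape=(3, 3)),
     lp.ValueArg("ncells", np.int32)],
    name="elem_action",
    target=lp.PyOpenCLTarget())

# "Batch" over cells: split the cell loop and map it onto OpenCL work-groups.
knl = lp.split_iname(knl, "e", 32, outer_tag="g.0", inner_tag="l.0")

# A simple further optimisation: unroll the small local loop.
knl = lp.tag_inames(knl, {"i": "unr"})

print(lp.generate_code_v2(knl).device_code())
```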

13:25
Computationally Efficient Numerical Quadratures for the Finite Element Assembly of High-Order non-Lagrange Elements

ABSTRACT. 1. Introduction --------------- There is a growing consensus in the finite element community that state-of-the-art low-order finite element (FE) technology requires, and will continue to require, too extensive computational resources to provide the necessary resolution for complex simulations, even given the rate at which computational power increases [1]. The requirement for precise resolution naturally leads us to consider methods with a higher order of grid convergence than the classical second order provided by most industrial-grade codes. In particular, for high-frequency time-harmonic simulations, high-order schemes efficiently resolve the rapid small-scale spatial oscillations of the solution and help alleviate the pollution effect [2]. However, with this approach, the computational cost of solving the linear system of equations rapidly becomes overshadowed by the cost of assembling the finite element matrix itself as the order of the basis functions increases [3]. The aim of this abstract is to present a reformulation of the assembly procedure in a computationally more efficient way.

2. Classical finite element assembly ------------------------------------ By applying the classical FE scheme, the solution is computed using elementary integrals T^e_{i,j}, where each T^e_{i,j} gives the contribution of the degrees of freedom (DOF) i and j of the mesh element e. The classical FE assembly algorithm computes the T^e_{i,j} terms on the fly for every pair of DOFs i and j on every element e of the mesh. It is worth noticing that increasing the FE basis order has two impacts on the computation time: each element has more DOFs and the numerical quadrature requires more points. Both phenomena substantially increase the assembly time, as shown in Figure 1. Furthermore, when handling non-Lagrange basis functions, the orientation of the different functions needs to be taken into account. This additional complexity is also classically treated on the fly during the assembly, by analysing the orientation of the edge (or face) on which a given basis function is defined. Let us note that non-Lagrange bases are commonly encountered, for instance, when constructing discrete subspaces of H(curl) in electromagnetic applications.

3. Efficient assembly --------------------- The key idea of a fast assembly procedure is to compute all the T^e_{i,j} terms using computationally efficient BLAS3 operations, as proposed by [3, 4] for standard nodal Lagrange finite elements. The T^e_{i,j} terms can be computed by the product of two matrices. The first matrix is composed of the Jacobian matrices and non-linear terms evaluated at each quadrature point. The second matrix is composed only of the FE basis functions defined over the reference element, also evaluated at each quadrature point. Unfortunately, in this approach, non-Lagrange basis functions cannot be treated because of the orientation problem. To circumvent this limitation, the previous solution has been adapted to handle more than one reference space, thus leading to more than one matrix-matrix product: one per possible orientation. However, with this new solution, the orientations of the edges and faces cannot be determined on the fly, and need to be considered during a pre-processing step. This can be efficiently achieved by exploiting a newly designed orientation dictionary structure and additional geometrical transformations [5]. A simplified sketch of the idea is given below.
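The sketch below illustrates, in simplified numpy form and under the assumption of a single reference space/orientation, how all elementary matrices follow from one batched matrix product between a geometry-dependent factor and a geometry-independent basis factor; the sizes and data are arbitrary.

```python
import numpy as np

# Illustrative sizes: n_e elements, n_q quadrature points, n_b basis functions.
n_e, n_q, n_b = 1000, 8, 4
rng = np.random.default_rng(0)

# Geometry-dependent factor: Jacobian/nonlinear terms times quadrature
# weights, evaluated at every quadrature point of every element.
D = rng.random((n_e, n_q))

# Geometry-independent factor: reference basis functions at quadrature points.
B = rng.random((n_q, n_b))

# Classical on-the-fly assembly: one small product per element.
T_loop = np.empty((n_e, n_b, n_b))
for e in range(n_e):
    T_loop[e] = B.T @ (D[e, :, None] * B)

# "Fast" assembly: the same T^e_{i,j} from a single batched product
# (realised as a BLAS3 GEMM on suitably reshaped 2D matrices in practice).
T_fast = np.einsum("qi,eq,qj->eij", B, D, B)

assert np.allclose(T_loop, T_fast)
```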

4. Numerical results -------------------- Figure 1 presents the assembly times of the on-the-fly and matrix assembly procedures for an increasing FE basis order. The FE matrix is assembled for a high-frequency electromagnetic cavity problem. The tests were run on an Intel Xeon E5645 using the OpenBLAS implementation of the matrix-matrix product with 6 threads. It is worth mentioning that the classical implementation also uses 6 threads for the assembly. It can be seen from Figure 1 that the matrix procedure is much faster than the classical one for high-order solutions. For instance, the speedup for a sixth-order problem with more than 900,000 unknowns is around 20.

Figure 1: Assembly time and speedup for the classical and fast procedures

Acknowledgements ---------------- This work was supported in part by the Belgian Science Policy Office under grant IAP P7/02 and the Walloon Region under grant WIST3 DOMHEX. N. Marsic was a fellowship beneficiary with the Belgian Research Training Fund for Industry and Agriculture (FRIA).

References ---------- [1] P. E. Vincent and A. Jameson, "Facilitating the adoption of unstructured high-order methods amongst a wider community of fluid dynamicists," Mathematical Modelling of Natural Phenomena, vol. 6, no. 3, pp. 97--140, 2011. [2] F. Ihlenburg and I. Babuska, "Finite element solution of the Helmholtz equation with high wave number part II: The h-p version of the FEM," SIAM Journal on Numerical Analysis, vol. 34, no. 1, pp. 315--358, 1997. [3] K. Hillewaert, "Exploiting data locality in the DGM discretisation for optimal efficiency," in ADIGMA, vol. 113 of Notes on Numerical Fluid Mechanics and Multidisciplinary Design, pp. 11--23, Springer Berlin Heidelberg, 2010. [4] J. Lambrechts, Finite element methods for coastal flows: application to the Great Barrier Reef. PhD thesis, Université catholique de Louvain, Belgium, 2011. [5] N. Marsic and C. Geuzaine, "Efficient finite element assembly of high order Whitney forms," IET Science, Measurement and Technology, vol. 9, no. 2, pp. 204--210, 2015.

13:40
FInAT: Automating Optimal Code Generation for High-Order Finite Element Methods

ABSTRACT. FEniCS and Firedrake have demonstrated that code generation is a key technology in enabling the productive exploitation of advanced numerical methods for complex systems of equations. However, no existing high-level code generation systems facilitate the generation of optimal implementations of high-order finite element methods.

The optimal algorithms for high-order finite elements rely on the exploitation of structure within the finite elements. For example, the tabulation matrix of a tensor product element can be written as the outer product of the tabulation matrices of the factor elements. This structure enables the reordering of assembly loop nests to avoid redundant computations, a technique known as sum factorisation. A particular issue for implementing this in either FEniCS or Firedrake is that FIAT, the finite element library, is unable to express such structure within the elements.

Here we present FInAT, a smarter library of finite elements. While FIAT evaluates the element basis as a table of numerical values, FInAT provides a symbolic expression for this evaluation. Thus the tensor product structure is preserved, so that the form compiler or a suitable optimisation tool can sum factorise the assembly kernel.
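The following numpy sketch illustrates the kind of structure FInAT preserves: evaluating a tensor-product element through its 1D factors (sum factorisation) gives the same result as the full tabulation matrix at much lower cost; the sizes and matrices are illustrative only.

```python
import numpy as np

# Hypothetical 1D tabulation matrices (quadrature points x basis functions)
# for the two factors of a tensor-product element.
q, p = 6, 4
A = np.random.rand(q, p)
B = np.random.rand(q, p)

u = np.random.rand(p, p)          # coefficients of a tensor-product function

# Naive evaluation with the full 2D tabulation matrix: O(q^2 p^2) work.
T = np.kron(A, B)
vals_full = T @ u.reshape(p * p)

# Sum-factorised evaluation, one direction at a time: O(q p^2 + q^2 p) work.
vals_sf = (A @ u @ B.T).reshape(q * q)

assert np.allclose(vals_full, vals_sf)
```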

Through its integration with TSFC, FInAT is incorporated in the Firedrake system. TSFC is the form compiler in Firedrake, which together with COFFEE can restructure the assembly loop nests to produce sum factored algorithms with optimal complexity.

13:55
Automating Hybridization via Composing Abstractions
SPEAKER: Thomas Gibson

ABSTRACT. The first hybridization of a finite element method was proposed in 1965, for a numerical method for solving the equations of linear elasticity. As a subset of static condensation methods, its primary advantage is that it reduces the number of globally coupled degrees of freedom. In practice, this requires algebraically manipulating the local discretized systems during the equation assembly process to produce the reduced global problem. Using conventional model development techniques, hybridization of complex discretizations requires manual intervention in intricate numerical code, and this intervention must be repeated every time the model is modified, extended, or debugged.

In contrast, the Firedrake and FEniCS projects take the discretized equations in symbolic form as input, and automatically generate high performance parallel code from this mathematical specification. In this talk, we present a robust abstraction framework within Firedrake for specifying local operations on finite element tensors. By introducing these symbolic operations, and generating code from them, we extend automated simulation to hybridized finite element methods.
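The local linear algebra underlying hybridization and static condensation can be sketched with numpy as follows; this toy elimination of interior unknowns through a local Schur complement is only meant to show where the reduction in globally coupled unknowns comes from, not the Firedrake implementation.

```python
import numpy as np

# A toy element system partitioned into "interior" (i) and "trace"/facet (t)
# unknowns: [[A_ii, A_it], [A_ti, A_tt]] [x_i, x_t] = [b_i, b_t].
rng = np.random.default_rng(1)
n_i, n_t = 6, 3
A = rng.random((n_i + n_t, n_i + n_t)) + (n_i + n_t) * np.eye(n_i + n_t)
b = rng.random(n_i + n_t)

A_ii, A_it = A[:n_i, :n_i], A[:n_i, n_i:]
A_ti, A_tt = A[n_i:, :n_i], A[n_i:, n_i:]
b_i, b_t = b[:n_i], b[n_i:]

# Local elimination: only the (small) condensed trace system is assembled
# globally; this is the reduction in globally coupled unknowns.
S = A_tt - A_ti @ np.linalg.solve(A_ii, A_it)      # local Schur complement
g = b_t - A_ti @ np.linalg.solve(A_ii, b_i)        # condensed right-hand side
x_t = np.linalg.solve(S, g)                        # "global" trace solve
x_i = np.linalg.solve(A_ii, b_i - A_it @ x_t)      # local back-substitution

# Consistency check against the unreduced system.
x = np.linalg.solve(A, b)
assert np.allclose(x, np.concatenate([x_i, x_t]))
```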

14:10-14:40 Coffee Break
14:40-15:25 Session 3: Software and Algorithms II
Location: Salle Tavenas Main Theatre
14:40
Gmsh 3.0 - Gmsh goes boolean!

ABSTRACT. In this talk I will give a brief overview of the Gmsh project, and present the new constructive solid geometry features introduced in Gmsh 3.0.

14:55
Geometric Multigrid in FEniCS

ABSTRACT. In this talk we describe a recent addition to FEniCS: native support for geometric multigrid via PETSc.

Rather than building an additional software project on top of FEniCS, we have included support for GMG inside FEniCS itself. This is achieved by teaching PETSc how to construct the necessary prolongation and restriction operators. That is, we enable the use of pc_type "mg" in PETSc.

The main technical difficulty is building a sparse matrix that represents the prolongation operation between two function spaces. Our code for the construction of the prolongation operator works in parallel, for any Lagrange element. The code works even when the domain decompositions of the meshes do not overlap, at the cost of some communication.

The code relies on the user supplying the hierarchy of meshes. The code does not rely on the hierarchy being nested or constructed via uniform refinement. In fact, the code works even when the meshes describe different domains (under certain well-understood conditions). This is frequently necessary for complex domains arising in industrial practice.
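A minimal usage sketch in DOLFIN is given below; the transfer-matrix interface shown (PETScDMCollection.create_transfer_matrix) follows the API this work introduces, so treat the exact names as an assumption if your DOLFIN version differs.

```python
from dolfin import (UnitSquareMesh, FunctionSpace, Function, Expression,
                    PETScDMCollection, interpolate)

# Two Lagrange spaces on a user-supplied pair of meshes (here simply a coarse
# and a fine unit square; nestedness is not required).
V_coarse = FunctionSpace(UnitSquareMesh(8, 8), "Lagrange", 1)
V_fine = FunctionSpace(UnitSquareMesh(16, 16), "Lagrange", 1)

# Sparse prolongation matrix mapping coarse-space vectors to the fine space.
P = PETScDMCollection.create_transfer_matrix(V_coarse, V_fine)

# Sanity check: prolongating a coarse P1 function reproduces its interpolant.
u_c = interpolate(Expression("x[0] + 2*x[1]", degree=1), V_coarse)
u_f = Function(V_fine)
P.mult(u_c.vector(), u_f.vector())
```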

Its current restrictions are that it does not work for non-Lagrange elements or for mixed finite element problems. Its extension to mixed finite elements is relatively straightforward (and has been achieved in defcon, the author's package for bifurcation analysis), but relies on some modifications to dolfin that are not yet ready for merging. This may or may not be resolved by the time of the talk.

Contributions to support other element types and multigrid for $H$(div) and $H$(curl) problems would be extremely welcome.

15:10
Across the great divide: composable block preconditioning from UFL

ABSTRACT. UFL and the FEniCS problem solving language present a beautiful set of abstractions for succinctly writing variational problems. This removes a large challenge in the development of complex models. Preconditioning the resulting systems of equations is often still problematic. Where monolithic algebraic approaches are available, life is simple: just hand off the Jacobian to your favourite solver library. Problems with coupled systems of variables do not usually fit this paradigm. The state of the art here is often block preconditioning, relying on block factorisations and approximations of the block inverses. If we are lucky, it is possible to construct such block preconditioners algebraically (by manipulation of the assembled Jacobian). But usually, we are not lucky.

In this talk, I discuss how we overcome this problem in Firedrake, leveraging PETSc's flexibility in defining operators and preconditioners, and using UFL to simplify the assembly of any auxiliary operators our preconditioner might need.

This approach allows the development of matrix-free iterative solvers with assembly of only the necessary preconditioning blocks. Moreover, our approach is easily extensible, allowing development and composition of complex preconditioners for subblocks as we add new couplings to our model. I will illustrate the usage with examples from fluid convection and phase separation.
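The flavour of the approach is a solver-parameters dictionary like the sketch below; the option names follow PETSc/Firedrake conventions, while `mymodule.MyApproxSchurPC` is a placeholder for a user-defined preconditioner class that would use UFL to assemble its auxiliary operator.

```python
# Illustrative PETSc options for a Schur-complement fieldsplit preconditioner
# of a two-field (e.g. velocity-pressure) system in Firedrake.
parameters = {
    "ksp_type": "gmres",
    "mat_type": "matfree",              # only apply the Jacobian, never store it
    "pc_type": "fieldsplit",
    "pc_fieldsplit_type": "schur",
    "pc_fieldsplit_schur_fact_type": "lower",
    # Block 0: assemble just this block and precondition it with AMG.
    "fieldsplit_0_ksp_type": "preonly",
    "fieldsplit_0_pc_type": "python",
    "fieldsplit_0_pc_python_type": "firedrake.AssembledPC",
    "fieldsplit_0_assembled_pc_type": "hypre",
    # Block 1: a user-supplied approximation to the Schur complement.
    "fieldsplit_1_ksp_type": "preonly",
    "fieldsplit_1_pc_type": "python",
    "fieldsplit_1_pc_python_type": "mymodule.MyApproxSchurPC",
}

# solve(F == 0, w, bcs=bcs, solver_parameters=parameters)
```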

15:25-15:40 Short break, poster preparation
15:40-16:10 Session 4: Magnetics
Location: Salle Tavenas Main Theatre
15:40
LIAB: a FEniCS based computational tool for laser annealing simulation

ABSTRACT. In this paper we present a FEniCS-based tool (named LIAB: LASSE Innovation Application Booster), currently in the development stage, for the simulation of the laser annealing (LA) process. This is a complex self-consistent problem, where the heating is evaluated by means of the time-harmonic solution of the Maxwell equations. The main features of the LIAB package include the following: a versatile graphical user interface for structure design, material assignment and simulation analysis; an interface with the FEniCS solver for automatic mesh generation and runtime control; calibration of many materials (optical and thermal properties and mass transport) as a function of temperature and phase; efficient coupling with the electromagnetic simulation for the self-consistent source estimate (i.e. power dissipation) in nano-structured topographies; experimental validation on nanostructured samples; multiple-dopant models simulating dopant atom redistribution including diffusion, solubility and segregation; an alloy model, e.g. SiGe (where the melting point depends on the alloy fraction); and multiple phases (e.g. amorphous, liquid, crystal).

15:55
A Finite Element Approach to Computational Magnetics with Defect Correction
SPEAKER: Ulrich Römer

ABSTRACT. We address the finite element method in a computational magnetics context, where highly accurate solutions with high differentiability are required. To this end, defect correction is applied in combination with adjoint error estimation. A key ingredient of the method is a post-processing of the finite element solution on unstructured grids with radial basis functions. Numerical examples with FEniCS are given to illustrate the improved accuracy of the magnetic fields.

16:10-16:50 Session 5: In memory of Hans Petter Langtangen
Location: Salle Tavenas Main Theatre
16:10
In memory of Hans Petter Langtangen
SPEAKER: Anders Logg

ABSTRACT. Remembering the lifetime contribution of Hans Petter Langtangen to scientific computing.

16:50-18:30 Session 6: Poster Session and Refreshments
Location: Salle Tavenas Rear Room
16:50
iRANitsche: reduced assembly with a Nitsche-based reduced basis method
SPEAKER: Davide Baroli

ABSTRACT. Model order reduction methods are a family of well-established techniques for reducing the computational cost of parametrized PDEs. They provide a good trade-off between fast computation and high-fidelity approximation while preserving reliability. One popular model order reduction technique is the reduced basis (RB) method. This approach leads to a set of global RB basis functions that optimally represent the solution, and it also provides a reliability estimate of the gap between the high-fidelity solution and the reduced one of the parametrized PDE, for a given range of possible sample parameters. In this poster, we present a novel alchemy of model reduction, reduced assembly and domain decomposition that we call iRANitsche. Rather than using global reduced basis functions computed offline, we design a spatially local reduced basis space and apply the `reduced assembly` (RA) technique, which efficiently reduces the assembly complexity of the online Galerkin projection. Our approach thus reduces the number of degrees of freedom of the domain while preserving a certain accuracy of the locally reduced basis functions. However, it leads to non-matching meshes and in principle may lead to discontinuous basis functions across the interface of each pair of sub-domains. For this reason, we employ a Nitsche-based gluing formulation using the MultiMesh framework developed recently in DOLFIN. The robustness of this coupling scheme is determined by the penalty coefficients, which are chosen using the ghost penalty technique. We will show how we develop our package on top of DOLFIN, slepc4py and petsc4py. Numerical tests performed in 2D and 3D on academic problems and on the patient-specific problems provided by Dr. Hertel of the Hospital Center de Luxembourg show the good performance of the method and a reduction of the computational cost by some orders of magnitude with respect to the high-fidelity approximation.

16:50
MD-FEM Force Field Simulations of Particle Subjected to Dielectrophoresis Interactions

ABSTRACT. Particles with sizes ranging from sub-micrometer to about 1 millimeter and with particular electrical and/or magnetic properties experience mechanical forces and torques when they are subjected to electromagnetic fields (particles of this type are called “electromechanical particles”). The theoretical study of this large class of complex systems is made possible by the development of models of real systems and by numerical simulations of the stable (multi-particle) configurations and their dynamics. One of the phenomena that affect electromechanical particles is dielectrophoresis (DEP). An emerging branch of applications relates to the controlled manipulation of particles dispersed in colloidal solutions (e.g. biological particles such as cells or DNA), since the strong selectivity of the response depends on the particle volume, shape and composition. Application fields of dielectrophoresis include cell partitioning/isolation for capture/separation without the use of biomarkers. This contribution focuses on the theoretical study of the dynamics of micro-sized spherical biological particles suspended in a colloidal solution, which are subjected to dielectrophoresis in the presence of non-homogeneous and non-uniform variable electric fields. Most DEP models in the literature are based on particles in the diluted-solution limit; in this case the forces are calculated using an approximate method (standard DEP). The electric field is applied through electrodes present in a microfluidic channel in which the colloidal solution flows. Particle manipulation and characterization using DEP are generally performed in a confined region near the electrodes, so that the interaction between the particles and the surrounding walls can be significant. In this work, we present numerical simulations of the movement of MDA-MB-231 tumor cells near electrode edges. We run a more detailed study, with a non-approximate calculation of the dielectrophoretic force: DEP forces are estimated by integrating the Maxwell stress tensor, which leads to an overall DEP force that is independent of the complex dielectric permittivity of the particles and suspending medium and depends only on the type of boundary (conductive or insulating) and on the ratio between the particle and electrode dimensions. The dynamics is simulated by techniques borrowed from Molecular Dynamics (MD): our goal is first to evaluate the forces acting on the electromechanical particles starting from the initial configuration of the system (positions of the particles, geometry of the electrodes, electrical potentials applied), and secondly to compute the dynamics of the particles through the integration of the equations of motion using MD-like techniques. The coupled MD-FEM study of the particles' kinetics consists of a loop with the following steps: initial positions of the particles; calculation of forces; calculation of accelerations; integration of the equations of motion; new positions. We use the numerical integration technique called the Verlet method, sketched below. The coupled MD-FEM algorithm and its implementation in the FEniCS environment are presented. Realistic simulated cases will be discussed, showing also the difficulties of implementing the method in 3D domains. We carry out simulations of the movement of MDA-MB-231 tumor cells near the electrode edges, based both on the standard DEP theory and on the non-approximate theoretical model (MST).
We found that, in the case of standard DEP, the cells experience an attractive force that traps them near the electrodes, while in the case of the MST-DEP force the cells also form chains due to dipole-dipole interaction and some escape the attraction of the electrodes. Our work shows the potential of a coupled MD-FEM study of electromechanical particles.
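A bare-bones velocity-Verlet sketch of such a loop, with a placeholder force routine standing in for the FEM/Maxwell-stress-tensor evaluation and purely illustrative parameters, is:

```python
import numpy as np

def dep_force(positions):
    """Placeholder for the FEM-based DEP force evaluation (e.g. integration
    of the Maxwell stress tensor around each particle)."""
    return -0.1 * positions          # toy restoring force, one row per particle

n, dim, mass, dt = 10, 3, 1.0, 1e-3
x = np.random.rand(n, dim)           # initial particle positions
v = np.zeros((n, dim))               # initial velocities
f = dep_force(x)

# Loop: positions -> forces -> accelerations -> integration -> new positions.
for step in range(1000):
    x += v * dt + 0.5 * (f / mass) * dt**2     # position update
    f_new = dep_force(x)                       # recompute forces (FEM step)
    v += 0.5 * (f + f_new) / mass * dt         # velocity update
    f = f_new
```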

16:50
Identification of Mechanical Properties of 3D Myocardial Tissue: An Inverse Modeling and Optimal Experimental Design Problem

ABSTRACT. In this work, we present novel computational tools for Bayesian parameter inversion and optimal experimental design for determining the 3D strain energy function parameters for myocardial tissue.

16:50
Kinetic simulation of plasma-object interaction with FEniCS
SPEAKER: Diako Darian

ABSTRACT. The understanding of plasma-object interaction is of importance for humans and space technology in harsh and inhospitable space environments. The flowing plasma in the upper atmosphere contains energetic megaelectronvolt electrons and ions that may cause damage to spacecraft and satellites, and represents a serious health hazard to astronauts. Due to the complexity of the plasma-object interaction, in order to study realistic space weather conditions it is advisable to use first-principles numerical simulations, such as the particle-in-cell (PIC) method, where the dynamics of plasma particles can be studied in self-consistent force fields. The use of the finite element method to study plasma-object interaction has the advantage of giving considerable flexibility for an accurate representation of complex spacecraft or satellite geometries.

This poster presents a particle-in-cell code written in the FEniCS environment to study plasma-object interaction.

16:50
Optimality Of A Priori Error Estimates Of Discontinuous Galerkin Method For Elliptic Problems With Nonlinear Newton Boundary Conditions

ABSTRACT. We are concerned with the discontinuous Galerkin (dG) method for the numerical solution of the Poisson problem with a nonlinear Newton boundary condition in bounded two-dimensional polygonal domains. Such a boundary condition models a polynomial behavior at the boundary, which can be used in the modeling of the electrolysis of aluminium (see e.g. \cite{fkr}). A similar nonlinear behavior of the solution on the boundary is also observed in radiation heat transfer problems and in nonlinear elasticity.

We consider the following {\sl boundary value problem}: find $u:\overline{\Omega}\to\mathbb{R}$ such that
\begin{align}
-\Delta u &= f \quad \text{in } \Omega, \label{bvp1}\\
\frac{\partial u}{\partial n} + \kappa\,\vert u\vert^{\alpha} u &= \varphi \quad \text{on } \partial\Omega, \label{bvp2}
\end{align}

where $f:\Omega\to\mathbb{R}$ and $\varphi:\partial\Omega\to\mathbb{R}$ are given functions, $\kappa>0$ and $\alpha\geq 0$ are given constants, and $\partial/\partial n$ denotes the derivative in the direction of the unit outward normal to $\partial\Omega$.
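For orientation, a compact FEniCS sketch of this problem is shown below; it uses a standard continuous Galerkin discretization rather than the dG scheme analysed here, and the data $f$, $\varphi$, $\kappa$ and $\alpha$ are illustrative.

```python
from dolfin import (UnitSquareMesh, FunctionSpace, Function, TestFunction,
                    Constant, Expression, inner, grad, dx, ds, solve)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "Lagrange", 2)

u = Function(V)
u.interpolate(Constant(1.0))   # nonzero initial guess: the Jacobian is singular at u = 0
v = TestFunction(V)
kappa, alpha = Constant(1.0), Constant(2.0)
f = Constant(1.0)
phi = Expression("x[0]", degree=1)

# Weak form: the nonlinear Newton condition enters through the ds boundary term.
F = (inner(grad(u), grad(v)) * dx
     + kappa * abs(u)**alpha * u * v * ds
     - f * v * dx
     - phi * v * ds)

solve(F == 0, u)   # Newton iteration with an automatically derived Jacobian
```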

In \cite{clanek}, the dG discretization of problem \eqref{bvp1}--\eqref{bvp2} is analyzed and theoretical a priori error estimates are derived. The discretization error is measured in the so-called dG norm, which consists of the $H^1(\Omega)$-norm augmented by additional contributions over the edges of the mesh elements penalizing the discontinuity of the discrete dG solution.

We assume that the exact solution $u \in H^{s}(\Omega)$, $s > 2$, and that the dG discretization with discontinuous piecewise polynomial functions of degree $p$ is used. When the solution $u$ is non-zero at least on a part of the boundary with positive $(d-1)$-dimensional measure, it is proven in \cite{diplomka} that the dG method reaches the optimal rate of convergence $O(h^{r})$ measured in the dG norm, where $r = \min \{ p , s-1 \}$. When $u \equiv 0$ on the whole boundary, the theoretical convergence rate decreases to $O(h^{\frac{r}{\alpha + 1}})$ due to the nonlinearity of the problem.

We present experiments, computed with the FEniCS software, performed in order to verify the optimality of these theoretical bounds and convergence rates. The decrease of the error is examined both for problems with smooth solutions and for irregular solutions in non-convex domains. We measure the dependence of the experimental order of convergence on the regularity of the solution and also on the nonlinearity parameter $\alpha$ of the problem.

In these experiments the convergence behaves similarly to the theoretical estimates. Only in the case $u|_{\partial \Omega} = 0$ did we observe that the $L^2(\Omega)$-norm was the dominant part of the error, decreasing as $O(h^{\frac{r+1}{\alpha + 1}})$, while the $H^1(\Omega)$-seminorm decreased with the optimal order $r$. Hence, it seems that the error estimate of order $O(h^{\frac{r}{\alpha+1}})$ proved in \cite{clanek} is not optimal. An optimal error estimate based on the analysis of the $L^2(\Omega)$-norm is our goal for further work.

16:50
A new algorithmic differentiation tool (not only) for FEniCS
SPEAKER: Simon Funke

ABSTRACT. The derivation and implementation of adjoint models for time-dependent, non-linear PDEs is a challenging task. A common strategy is to apply an algorithmic differentiation (AD) tool which (semi-)automatically derives the adjoint model from the forward model. Specifically for finite element models, [1] proposed a high-level AD approach, which derives the adjoint by analysing and exploiting the high-level mathematical structure inherent in finite element methods. This idea has been shown to provide major benefits compared to traditional low-level AD tools, including performance close to the theoretical optimum and natural support for parallelism. However, the high-level AD tool for FEniCS, dolfin-adjoint, lacks important features such as differentiation with respect to Dirichlet boundary conditions and higher-order derivatives. To overcome these limitations, we propose a new algorithmic differentiation tool for FEniCS. The core of this tool is formed by pyadjoint, a generic operator-overloading AD tool written in Python. pyadjoint considers the model as a sequence of arbitrary operations with inputs and outputs. This abstraction can be seen as a generalisation of low- and high-level AD tools: operations can be individual floating point operations (as for traditional AD tools), entire systems of differential equations (as for high-level AD tools), or a mix of both. The adjoint developer must overload each relevant model function according to the pyadjoint API, and in particular provide implementations of their derivatives. With this information pyadjoint records a tape of model operations at runtime and automatically derives and executes the associated adjoint model. Specifically, support for adjoint FEniCS models is achieved by overloading the FEniCS API, in particular the creation of new objects such as Functions and Constants, and the overloading of operations such as assemble, project and solve.
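The intended user-facing experience is sketched below with a minimal Poisson example; the import and function names follow dolfin-adjoint/pyadjoint conventions and should be read as assumptions about the interface rather than its definitive form.

```python
from fenics import *
from fenics_adjoint import *   # overloads Function, solve, assemble, ... onto the tape

# Forward model: a Poisson solve whose right-hand side is the control.
mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "Lagrange", 1)
m = Function(V)                         # control variable
m.interpolate(Constant(1.0))

u, v = TrialFunction(V), TestFunction(V)
uh = Function(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")
solve(inner(grad(u), grad(v)) * dx == m * v * dx, uh, bc)   # recorded at runtime

# Functional of interest and its gradient via the derived adjoint model.
J = assemble(uh * uh * dx)
dJdm = compute_gradient(J, Control(m))
```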

16:50
Discontinuous solution of the Laplace equation in a mixed finite element method and level-set setting
SPEAKER: Michal Habara

ABSTRACT. The electrochemical sciences (batteries, chemical cells, corrosion, metal oxidation) are mostly based on the Butler-Volmer kinetic relation. Additionally, in real-world problems, the interface of discontinuity evolves in time and is implicitly given. This type of problem can be characterised as a nonlinear elliptic interface problem. In this contribution we show a very natural interpretation of this problem in terms of a mixed Poisson problem (saddle-point structure).

16:50
Uncertainty quantification for soft tissue biomechanics
SPEAKER: Paul Hauseux

ABSTRACT. Monte Carlo methods are used to assess the effects of uncertainty in material parameters in soft tissue models. We present a sensitivity derivative Monte Carlo method [1, 2] to provide statistical results efficiently. This technique reduces the error in the Monte Carlo estimator, and therefore the workload, compared to the standard Monte Carlo approach. A global sensitivity analysis is also performed to study how the uncertainty of the inputs influences the outputs and to determine the uncertain parameters which have the most influence on the variance of a given quantity of interest. We implement our forward and tangent linear model solvers using DOLFIN [3] and we use the Python toolbox chaospy [4] to generate stochastic objects. Numerical results of a FE stochastic analysis of brain deformation are presented and discussed. We implemented two hyperelastic soft tissue models: a classical Mooney-Rivlin material and an anisotropic Holzapfel and Ogden model [5].
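The idea behind the sensitivity derivative estimator can be sketched in a few lines of Python; the model and its tangent are placeholders for the DOLFIN forward and tangent linear solvers, and the distribution parameters are illustrative.

```python
import numpy as np

def model(theta):
    """Placeholder for a quantity of interest from the FE soft-tissue model."""
    return np.sin(theta) + 0.1 * theta**2

def model_tangent(theta):
    """Placeholder for the tangent linear solve: dQ/dtheta."""
    return np.cos(theta) + 0.2 * theta

rng = np.random.default_rng(0)
mu, sigma, N = 1.0, 0.2, 10_000
theta = rng.normal(mu, sigma, N)        # samples of the uncertain parameter
q = model(theta)

# Standard Monte Carlo estimator of E[Q].
mc_estimate = q.mean()

# Sensitivity derivative Monte Carlo: subtract the first-order term, whose
# expectation vanishes because E[theta - mu] = 0, as a control variate.
q_sd = q - model_tangent(mu) * (theta - mu)
sd_mc_estimate = q_sd.mean()

print(mc_estimate, sd_mc_estimate, q.var(), q_sd.var())  # variance is reduced
```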

16:50
Sensitivity Analysis of Cardiac Growth Models

ABSTRACT. Introduction The heart is a dynamic organ capable of changing its shape in response to the body’s demands. For example, the human heart continuously adapts in size and geometry to meet greater blood flow needs of the growing body during normal development. In this case, a gradually imposed volume overload leads to progressive chamber enlargement. Another example of a normal physiological growth can be found in the athlete’s heart, where a sustained elevated chamber pressure results in the chamber wall thickening and an overall increase in cardiac mass. Growth processes, however, can also be maladaptive as found in many cardiovascular diseases where structural changes in the heart progressively decompensate cardiac function.

In order to better understand this balance between adaptive and maladaptive cardiac growth, we examine the effect of known growth stimuli using a mechanical model of the heart. We perform a sensitivity analysis of existing growth models in order to assess the relative importance of model parameters and respective mechanisms. This work can eventually lead to simplifications in the model systems for prediction of growth, or help in localizing shortcomings that need to be addressed in the existing modeling frameworks.

Methods In order to simulate the motion of the heart throughout the cardiac cycle, we use a nonlinear finite element (FE) model of a realistic left ventricle (LV) coupled to a lumped-parameter model of the systemic circulation. Under the quasi-static assumption, this problem is reduced to finding the displacement $u$, hydrostatic pressure $p$ and LV pressure $p_{LV}$ that minimize the incompressible strain energy functional $\Pi$ parameterised by the LV volume $V_{LV}$ [1]. The muscular tissue of the heart is modeled as a transversely isotropic hyperelastic material via the strain energy density $\Psi$ [2]:

where $m = (a, b, a_f, b_f)$ is a set of passive material parameters, $C_e$ is the elastic right Cauchy-Green tensor and $I_1 = \operatorname{tr} C_e$. By introducing an activation parameter $\gamma$ representing the active shortening in the fiber direction $f_0$ at zero load, the model incorporates muscle contraction using the active strain approach, which is based on a multiplicative decomposition of the deformation gradient $F = I + \operatorname{Grad}(u)$ into an elastic and an active part, $F = F_e F_a$, with $F_a$ defined as:

The dynamic changes in the ventricular blood pressure and volume over the entire cardiac cycle are modeled by a three-element Windkessel model described by a system of ordinary differential equations [3]. At each time step, the coupling between the FE model and the circulatory model is achieved through an additional Lagrange multiplier $p_{LV}$ which represents the LV cavity pressure. The problem (1) is solved such that the simulated LV cavity volume $V(u)$ matches the target volume value $V_{LV}$ obtained from the circulatory model.

The growth process in the heart wall is modeled by deforming the reference unloaded geometry to a new grown configuration, again through a multiplicative decomposition of $F$ into an elastic and, this time, a growth part, $F = F_e F_g$. The constitutive laws for finite growth can be expressed using a generic format for the growth tensor, $F_g = \theta_f\, f_0 \otimes f_0 + \theta_s\, s_0 \otimes s_0 + \theta_n\, n_0 \otimes n_0$. The evolution of the local tissue growth parameters $\theta_g = [\theta_f, \theta_s, \theta_n]^T$ can be defined in terms of a growth activation function $\varphi(F_e)$ and a growth rate function $k(\theta_g)$ which specifies the speed and nonlinearity of the growth process [4].

Using the model described above it is possible to simulate various physiological conditions together with the associated structural adaptation of the heart walls in response to changes in loading. In each case, a growth model can be chosen depending on the nature of the considered physiology. The sensitivity of the system's grown state to the prescribed growth model can then be estimated. For this, we define an objective functional $J(u)$, the model output of interest, which serves as a qualitative and/or quantitative measure of how well the growth model reproduces the expected behavior. More specifically, if we are to compare the model response to a real measurement, then the objective functional can be defined as the mismatch between the simulated $u$ and the measured $u_{exp}$ grown states at a reference time $t_{ref}$:

Finally, the sensitivities of $J$ to $\theta$, where $\theta$ is a set of growth parameters specific to a given model, are defined as $dJ(u)/d\theta$.

The solver has been developed within the FEniCS [5] framework and the functional gradients are computed by solving automatically derived adjoint equations [6].

Results We first focus on implementing and testing a strain-driven growth law to simulate a volume-overload state of the left ventricle. To achieve this, the initial physiological equilibrium of the heart is altered by increasing the diastolic filling pressure. This initiates cardiac growth that continues until a new equilibrium state is reached. The model is able to reproduce qualitatively experimental observations reported in the literature, such as LV cavity dilation due to fiber over-stretching and a gradual increase in myocardium volume. As a next step, we perform a sensitivity analysis of the model with respect to the growth model parameters and evaluate its performance in reproducing the expected growth behavior.

16:50
A High-Order Particle-Mesh Operator Splitting Approach for the Incompressible Navier-Stokes Equations

ABSTRACT. The poster will present a particle-mesh method for simulating incompressible fluid flows. Building upon particle-in-cell concepts, the method is formulated as an operator splitting strategy in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based method is employed for the constitutive modeling in order to account for the inter-particle interactions. An immediate advantage of this hybrid Lagrangian-Eulerian approach is that no additional stabilization techniques are required in the advective limit.

Key to the presented methodology is the generic variational framework for the coupling between the scattered particles and the mesh in terms of L2-minimization problems. These projections remain local and can be efficiently combined with the hybridized Discontinuous Galerkin method for solving the constitutive equations. An efficient solution strategy for the HDG equations is presented by using a static condensation procedure. The model is implemented using tools from the FEniCS-project.
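The particle-mesh transfer can be illustrated by a one-dimensional, single-cell toy version of the local L2-minimization; the basis, particle data and sizes are illustrative and unrelated to the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particles in one cell carry positions and scalar field values.
x_p = rng.uniform(0.0, 1.0, 20)
u_p = np.sin(2 * np.pi * x_p)

# Local L2/least-squares projection onto a degree-2 polynomial basis:
# minimise sum_p (u_p - sum_k c_k phi_k(x_p))^2 over the coefficients c_k.
degree = 2
V = np.vander(x_p, degree + 1, increasing=True)
coeffs, *_ = np.linalg.lstsq(V, u_p, rcond=None)

# The recovered cell polynomial is what the mesh-based (HDG) solver works with,
# e.g. evaluated at quadrature points.
x_q = np.linspace(0.0, 1.0, 5)
u_q = np.vander(x_q, degree + 1, increasing=True) @ coeffs
```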

As shown by means of various numerical examples, the developed methodology overcomes several issues that have to date been persistent in particle-mesh methods: the presented particle-mesh coupling results in optimal spatial accuracy, and the particle distribution remains uniform over time, even for complicated problems such as the flow around a circular cylinder (Figure 1). Heuristic particle shifting algorithms, as often applied in existing particle(-mesh) methods, are thus not required.

16:50
Implementation of Diffuse Interface Models in FEniCS

ABSTRACT. Diffuse interface models have been successfully used as a tool for mathematical description of the motion of several immiscible fluids and their mutual interaction. Multicomponent flows of this type are of interest in many engineering applications.

The diffuse interface models usually couple a system of fourth-order partial differential equations of Cahn-Hilliard type with the incompressible Navier-Stokes equations. We have recently developed a variant of the model that is applicable also in a general non-isothermal setting. The coupling allows for consistent incorporation of surface tension effects into the model. The interface between the components is in this case treated as a thin layer across which the components are allowed to mix. The main challenge in numerical simulations is the very fine spatial resolution required for capturing the dynamics of the interface.

Several variants of diffuse interface models have been implemented and tested using the FEniCS project [1] and FENaPack [2], which is an open-source package implementing preconditioners for Navier-Stokes equations. A particular variant of the model has been applied in a practical problem of simulating the float glass process (Pilkington process), see [3].

[1] M. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. Rognes, and G. Wells, The FEniCS Project version 1.5, Archive of Numerical Software, 3 (2015).

[2] J. Blechta and M. Řehoř, FENaPack - FEniCS Navier-Stokes preconditioning package. URL: https://github.com/blechta/fenapack.

[3] M. Řehoř, J. Blechta, and O. Souček, On some practical issues concerning the implementation of Cahn-Hilliard-Navier-Stokes type models, International Journal of Advances in Engineering Sciences and Applied Mathematics, 9 (2017), pp. 30–39.

16:50
FEniCS: Sustainable Software Development Practices

ABSTRACT. The FEniCS project aims to provide a high-productivity environment for the development of finite element based simulation software. Techniques applied to achieve this goal include mixed-language programming and code generation, which enable writing high performance programs in a high-level language. End-user productivity is a high-priority goal in our software designs. Sustaining the productivity of the multinational team of part-time developers (mainly researchers and students) is paramount to the long-term survival of the project. To minimize the developer workload while making the process open and accessible to new contributors and users, we regularly question which tools are the best available for our needs. On this poster we will present our current tool choices and workflows for developers and the wider FEniCS community. This list includes version control, build systems, testing, release management, team communication, documentation, and end-user support. The most recent addition to our toolbox is developer-curated Docker images. We are investigating their usefulness in testing infrastructure, end-user deployment, HPC cluster deployment, and as reproducible software environments to accompany journal publications. We welcome discussion on alternatives that can simplify our lives.

This poster was previously presented at SIAM CSE17 PP108 Minisymposterium: Software Productivity and Sustainability for CSE and Data Science.

16:50
From medical images to uncertainty quantification with FEniCS and 3D Slicer

ABSTRACT. It is increasingly recognised that simulation tools can be useful in aiding physicians making critical medical decisions. In this work, we propose an image-to-finite-element-analysis pipeline for patient-specific simulation and demonstrate its effectiveness by simulating a kyphoplasty operation on a fractured vertebra. Computing the von Mises stress allows us to identify the region at risk of fracture after the bone augmentation. The pipeline consists of a pre-processing step in which the medical images are segmented using the open-source software 3D Slicer [Fedorov et al. 2012]. Then, the tetrahedral mesh quality is enhanced by modern smoothing techniques within PyMesh [Zhou et al. 2016]. Finally, we use FEniCS to compactly express and efficiently solve the elasticity model using the finite element method. In addition, we enhance the classical pipeline with two cutting-edge numerical approaches: model reduction and uncertainty quantification. We feel it is increasingly important to understand the sensitivity of biomechanical models to underlying uncertainties in parameters, e.g. boundary conditions and material properties. However, such an analysis is computationally expensive, because it demands that the same model is run with many different realisations of the parameters. To overcome this issue and obtain a near-real-time simulation, we introduce a model reduction technique.

16:50
Loop fusion for finite element assembly in PyOP2
SPEAKER: Tianjiao Sun

ABSTRACT. We discuss loop fusion techniques implemented in PyOP2 which speed up the global assembly of linear and bilinear forms in Firedrake. The techniques involve direct fusion, applied to mixed finite elements, and indirect fusion, applied to discontinuous Galerkin finite elements. We will also present preliminary results with several representative assembly kernels.

16:50
Functional-type two-sided a posteriori error estimates for high-order discretizations

ABSTRACT. We present reliable and sharp two-sided a posteriori estimates for high-order discretisations of the Poisson problem using functional-type arguments. The resulting scheme is implemented in FEniCS.

16:50
Grid Independence and Manufacturability in Microchannel Heat Sink Topology Optimization
SPEAKER: Yang Zhou

ABSTRACT. The ongoing pursuit for higher power densities in electronic components such as CPUs, graphics cards, and other devices poses severe demands on cooling solutions. Air-cooled heat sinks are becoming increasingly inadequate and being replaced by liquid cooling counterparts. To explore the currently untapped potential of liquid-cooled microchannel heat sinks, topology optimization is used in this work. Using this methodology, novel tree-like microchannel layouts with superior performance over parallel channel layouts can be achieved automatically.

This work starts from earlier topology optimization studies of optimal cooling topologies for pressure-driven liquid-cooled microchannel heat sinks [1]. From a physical point of view, optimized designs should incorporate a large amount of very fine channels, since this is beneficial towards the heat transfer to the fluid. Additionally, for a fixed pressure drop over the heat sink, these channels should be fairly short to limit friction losses. Consequently, optimized designs include a combination of small channels with excellent heat transfer characteristics and larger channels acting as fluid distributors and collectors to limit the pressure drop, as seen in Figure 1. In this figure, black zones correspond to solid walls and white to liquid channels. The left and right sides are the fluid inlet and outlet, respectively, and the top and bottom are fixed solid walls.

The optimization problem is solved on a discretized finite volume mesh based on the conjugate heat transfer equations. By introducing a material with imaginary properties into the simulation, the material properties in every cell are updated during the optimization procedure. Within this procedure, the fictitious material is pushed towards either solid or fluid in the final optimized design. This numerical solution procedure brings with it a number of issues, most notably the appearance of channel dimensions that equal the coarseness or fineness of the design grid used in the optimization. Consequently, optimized designs exhibit a high degree of grid dependence. Moreover, on the one hand an accurate solution of the state variables asks for a fine resolution, while on the other hand designs optimized on those fine grids are, as a result, not manufacturable.

In order to avoid such grid-dependent designs and violations of fabrication limits, regularization methods are employed. These methods originate from structural topology optimization research, where similar problems have been encountered [2]. Among multiple existing approaches, parameter space reduction [3] and mesh-independent filtering methods [4] are tested in this work.

We assume here that the microchannel heat sinks are fabricated using silicon etching techniques such as deep reactive ion etching. In order to ensure manufacturability, a lower threshold is imposed on the width of the smallest channels, equal to etching dimensions that lie comfortably within current technological limits. This threshold is subsequently translated into specific parameters used in each of the regularization methods. Optimized designs are deemed manufacturable if all feature sizes are above this selected lower threshold. Grid independence is verified by simulating the optimization problem with varying grid coarseness and comparing the resulting topologies.

Testing of the two regularization methods indicates that, using parameter space reduction, grid-independent optimized designs can be realized and manufacturability can be guaranteed. However, the design space is limited by the coarseness of the design grid, resulting in features such as staircase designs. Mesh-independent filtering methods allow for smoother solid-liquid interfaces using finer design grids, but simulations show that although there is a moderate level of grid independence in the optimized designs, the filtering methods fail to ensure manufacturability for any given design. Reasons for this undesirable behavior include the non-convexity of the optimization problem, which has a tendency to get stuck in different local minima, and the fact that the filter methods appear not to completely remove the high frequencies in the design.

In conclusion, it appears that parameter space reduction performs favorably compared to mesh-independent filtering methods with respect to manufacturability of optimized designs and both methods display a similar level of grid independence. However, both methods have their own specific advantages and disadvantages.