FENICS'17: THE FENICS 2017 CONFERENCE
PROGRAM FOR TUESDAY, JUNE 13TH

09:15-10:15 Session 7: Optimisation I
Location: Salle Tavenas Main Theatre
09:15
Topology Optimization under Uncertainties in FEniCS

ABSTRACT. The application of phase field models to shape optimization allows for the implicit treatment of the topology of the workpiece. Gradient descent approaches lead to an Allen-Cahn-type gradient flow problem. We treat additional constraints of the model either with a projection step in each iteration or with Lagrange multipliers in an augmented linear system. Adaptive methods for the resulting time-stepping and the underlying finite element mesh considerably speed up the computation. Furthermore, additional adaptive choices of key model parameters allow for robust convergence in practice.

The modelling of material and load uncertainties leads to a stochastic cost functional for the optimization, which in turn requires a proper risk measure. The conditional value at risk is a practical choice here, as it allows the robustness of the resulting shape to be adjusted via a risk parameter. The resulting stochastic gradient descent solves an augmented optimization problem with Monte Carlo sampling or tensor reconstruction, for which we employ parallel computations.
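
A minimal sketch (legacy DOLFIN syntax, with assumed parameter values and without the compliance term or the constraint handling described above) of one implicit pseudo-time step of an Allen-Cahn-type gradient flow for the phase field:

from dolfin import *

mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "CG", 1)

phi = Function(V)        # phase field at the new pseudo-time level
phi_old = Function(V)    # phase field at the previous level
v = TestFunction(V)

eps = Constant(1.0/16.0)   # interface width (assumed value)
tau = Constant(1.0e-3)     # pseudo-time step (assumed value)

# derivative of the double-well potential W(phi) = phi^2 (1 - phi)^2
dW = 2.0*phi*(1.0 - phi)*(1.0 - 2.0*phi)

# implicit Allen-Cahn step: (phi - phi_old)/tau - eps*Laplace(phi) + W'(phi)/eps = 0
F = ((phi - phi_old)/tau)*v*dx + eps*inner(grad(phi), grad(v))*dx + (1.0/eps)*dW*v*dx

solve(F == 0, phi)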

09:30
Equilibrated Warping: Finite Element Image Correlation with Mechanical Regularization
SPEAKER: Martin Genet

ABSTRACT. This abstract describes a finite element-based image correlation method with, as regularization, a novel continuum large deformation formulation of the equilibrium gap principle, previously introduced at the discrete level for linearized elasticity.

09:45
Total Variation Image Reconstruction on Smooth Surfaces
SPEAKER: Marc Herrmann

ABSTRACT. Authors: Marc Herrmann (University of Wuerzburg, marc.herrmann@uni-wuerzburg.de), Roland Herzog (TU Chemnitz, roland.herzog@mathematik.tu-chemnitz.de), Heiko Kröner (University of Hamburg, heiko.kroener@uni-hamburg.de), Stephan Schmidt (University of Wuerzburg, stephan.schmidt@uni-wuerzburg.de), José Vidal (TU Chemnitz, jose.vidal-nunez@mathematik.tu-chemnitz.de).

Keywords: total bounded variation, Fenchel predual problem, interior-point methods, image reconstruction, image denoising, image inpainting, surfaces.

An analog of the total variation image reconstruction approach \cite{rudin} for images $f$ defined on smooth surfaces is introduced, which can be applied to 3D scanner data of objects and their textures. A field where the texture is of great importance is neuroimaging, where random noise enters due to eddy-current distortions or physical motion during the magnetic resonance imaging (MRI) process \cite{chang}. For this purpose, we consider the image reconstruction problem
\begin{equation} \label{eq:TVL2_on_surface}
\left\{ \quad
\begin{aligned}
\text{Minimize} \quad & \frac{1}{2}\int_S \lvert Ku-f\rvert^2 \, \mathrm{d}s + \frac{\alpha}{2}\int_S \lvert u\rvert^2 \, \mathrm{d}s + \beta \int_S \lvert \nabla u\rvert \\
\text{over} \quad & u \in BV(S),
\end{aligned}
\right.
\end{equation}
where $S \subset \mathbb{R}^3$ is a smooth, compact, orientable and connected surface without boundary. The observed data $f\in L^2(S)$, the parameters $\beta>0$, $\alpha \ge 0$, and the observation operator $K\in \mathcal{L}(L^2(S))$ are given. Furthermore, $BV(S)$ denotes the space of functions of bounded variation on the surface $S$. A function $u \in L^1(S)$ belongs to $BV(S)$ if the TV-seminorm defined by
\begin{equation}
\int_S \lvert\nabla u\rvert = \sup \left\{ \int_S u \operatorname{div} \boldsymbol{\eta} \,\, \mathrm{d}s : \boldsymbol{\eta} \in \boldsymbol{V} \right\}
\label{eq:seminorm}
\end{equation}
is finite, where $\boldsymbol{V}=\left\{\boldsymbol{\eta} \in \boldsymbol{C}_c^{\infty}(\operatorname{int} S, TS): \boldsymbol{\eta} \text{ is a vector field}, \; \lvert\boldsymbol{\eta}(p)\rvert_2 \le 1 \text{ for all } p \in S\right\}$ with $TS$ the \emph{tangent bundle} of $S$. Note that $BV(S) \hookrightarrow L^2(S)$ and hence the integrals in \eqref{eq:TVL2_on_surface} are well defined.

The non-smoothness of the TV-seminorm is dealt with via a duality approach, see \cite{chan, hinter, lai}. This leads to the predual problem of \eqref{eq:TVL2_on_surface}, which is a quadratic optimization problem for the vector field $\boldsymbol{p} \in \boldsymbol{H}(\operatorname{div}; S):=\left\{\boldsymbol{v} \in \boldsymbol{L}^2(S;TS)\,:\,\operatorname{div} \boldsymbol{v}\in L^2(S)\right\}$ with pointwise inequality constraints on the surface.

For the application, we concentrate on the classical denoising problem, where $K=\operatorname{id}$ holds; see Figure~\ref{fig:ShoeDenoised}. The numerical studies are based on geometries obtained by scanning real objects with the Artec Eva 3D scanner. The scanner provides Wavefront .obj files, which contain a description of the geometry via vertices and triangles. The surface texture is provided as a 2D flat bitmap file together with a mapping of each physical surface triangle into said bitmap. Thus, the textured object is originally described by a varying number of pixels glued onto each surface triangle. Since a closed surface cannot be mapped continuously onto the flat bitmap, there are discontinuities in the bitmap. Essentially, two adjacent triangles on the surface can be part of discontinuous regions in the texture file. In order to apply our novel denoising scheme, the above-mentioned Wavefront object, including the texture, needs to be made available to the finite element library used to discretize the predual problem. One way of achieving this is to provide the texture data $f$ at each quadrature point. However, for ease of implementation and processing within the finite element framework FEniCS, we converted the textured object into the finite element setting. To account for both natural discontinuities in the texture as well as the discontinuity of the surface-to-texture mapping, we chose a discontinuous Lagrange finite element representation of the texture data $f$. Thus, $u$ and $f$ are elements of the $\mathcal{DG}_r$ finite element space on the surface.

To carry out the texture preprocessing, we compute the spatial location of each degree of freedom of the surface DG function $f$ within the texture bitmap and use the respective gray value. For color textures, this is realized via a vector-valued DG function on the surface with values in the RGB color space. In the original Wavefront object, each surface triangle will usually obtain data from multiple texture pixels. Thus, in order to maintain a similar quality of the texture in the DG setting, higher order finite element spaces are needed, depending on the quality of the scan. Although the number of pixels per triangle may vary significantly in the original Wavefront object, we use a constant finite element order, typically $r = 2$ or $r = 3$. Because we actually solve the predual problem via an interior-point method, we need to find $\boldsymbol{p} \in \boldsymbol{H}(\operatorname{div};S)$ before recovering the image $u$. We employ a conforming discretization by surface Raviart--Thomas finite elements. The Raviart--Thomas element space $\mathcal{RT}_{r+1}$ is designed to be the smallest polynomial space with $\mathcal{RT}_{r+1}|_K \subset \mathcal{P}_{r+1}$ for every triangle $K$ such that the divergence maps onto $\mathcal{P}_{r}$~\cite{fenics}. For denoising, we have the relation $u = \operatorname{id}^{-1}(\operatorname{div} \boldsymbol{p} + f)$. Apart from the divergence, no differentiation is required. Therefore we choose to discretize $u \in \mathcal{DG}_r$, $\boldsymbol{p} \in \mathcal{RT}_{r+1}$, and $f \in \mathcal{DG}_r$.
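
As a rough illustration of the discretization choices described above, a legacy DOLFIN sketch of the corresponding function spaces on a closed surface mesh embedded in $\mathbb{R}^3$; the boundary of a cube stands in here for the scanned geometries, and manifold support for these elements is assumed as described in \cite{rognes}.
\begin{cpython}
from dolfin import *

# Stand-in geometry: the boundary of a cube as a closed 2D surface in R^3.
# In the paper the surfaces come from Wavefront .obj scanner data instead.
volume_mesh = UnitCubeMesh(16, 16, 16)
surface = BoundaryMesh(volume_mesh, "exterior")

r = 2                                        # polynomial order of the texture
U = FunctionSpace(surface, "DG", r)          # image u and texture data f
P = FunctionSpace(surface, "RT", r + 1)      # dual variable p in H(div; S)

f = Function(U)    # texture data, filled from the bitmap per DG dof
p = Function(P)    # predual variable sought by the interior-point method
\end{cpython}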

\begin{thebibliography}{00}

\bibitem{chan} T.F. Chan, G.H. Golub, P. Mulet: A nonlinear primal-dual method for total variation-based image restoration, \textit{SIAM Journal on Scientific Computing}, 6 (1999), 1964-1977.

\bibitem{chang} Y.N. Chang and H.H. Chang: Automatic brain MR image denoising based on texture feature-based artificial neural networks, \textit{Bio-Medical Material and Engineering}, 26 (2015), 1275-1282.

\bibitem{hinter} M. Hinterm\"uller, K. Kunisch: Total bounded variation regularization as a bilaterally constrained optimization problem, \textit{SIAM Journal on Applied Mathematics}, 64 (2004), 1311-1333.

\bibitem{lai} R. Lai and T. F. Chan: A framework for intrinsic image processing on surfaces, \textit{Computer Vision and Image Understanding}, 115 (2011), 1647-1661.

\bibitem{fenics} A. Logg, K.A. Mardal, G. Wells, eds., \textit{Automated Solution of Differential Equations by the Finite Element Method}, vol. 84 of Lecture Notes in Computational Science and Engineering, Springer Berlin Heidelberg, 2012.

\bibitem{rognes} M.E. Rognes, D.A. Ham, C.J. Cotter, and A.T.T. McRae: Automating the solution of PDEs on the sphere and other manifolds in FEniCS 1.2, \textit{Geoscientific Model Development}, 6 (2013), 2099-2119.

\bibitem{rudin} L.I. Rudin, S. Osher, and E. Fatemi: Nonlinear total variation based noise removal algorithms, \textit{Physica D}, 60 (1992), 259-268.

\bibitem{schiela} A. Schiela: Barrier methods for optimal control problems with state constraints, \textit{SIAM Journal on Optimization}, 20 (2009), 1002-1031.

\end{thebibliography}


10:00
Automated Solution of Parabolic-Elliptic Systems for Control Applications with Embedded Runge-Kutta Methods

ABSTRACT. The objective of this work is to present a Python code developed for the solution of differential-algebraic systems of the form M(y)y'=f(t,y) with a possibly singular matrix function multiplying the derivatives, arising from the spatial discretization of transient partial differential equation systems by the finite element method.

For the discretization in space the FEniCS library is used, and for the discretization in time we implemented diagonally implicit Runge-Kutta methods, sub-families thereof, and Rosenbrock-Wanner methods in an object-oriented Python code. In all cases an embedded pair of methods is used to compute an error estimator and adapt the step size at a marginal extra computational cost.

This code can be used to solve optimal control problems subject to transient partial differential equation systems, as the discrete gradient of the cost functional is computed for each of the time discretization methods by means of the discrete adjoint state.

During the talk we will show, for different problems such as a nonlinear mixed heat equation or the transient Stokes system, that the methods presented perform well and converge with the expected order of accuracy.
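
A minimal, library-independent sketch of the standard embedded-pair step size controller alluded to above (the function name, safety factor and bounds are assumptions, not the authors' code):

def adapt_step(dt, error_estimate, tol, order,
               safety=0.9, fac_min=0.2, fac_max=5.0):
    """Return (accept, dt_new) for one step of an embedded Runge-Kutta pair.

    error_estimate is the norm of the difference between the two embedded
    solutions; order is the order of the lower-order method.
    """
    accept = error_estimate <= tol
    if error_estimate == 0.0:
        return accept, dt*fac_max
    factor = safety*(tol/error_estimate)**(1.0/(order + 1))
    return accept, dt*min(fac_max, max(fac_min, factor))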

10:15-10:45 Coffee Break
10:45-11:30 Session 8: Optimisation II
Location: Salle Tavenas Main Theatre
10:45
UFL and the Automatic Derivation of Weak and Strong Shape Derivatives

ABSTRACT. A fully automatic framework for conducting shape optimization using UFL and FEniCS is discussed. Such problems encompass a wide variety of applications, such as finding drag-optimal shapes in fluids or reconstructing inclusions from measured wave reflections in non-destructive testing. Because the physics is usually modeled via partial differential equations (PDEs), shape optimization forms a special sub-class of problems within PDE-constrained optimization, where the domain of the PDE is the design unknown.

Thus, the directional derivative of the objective with respect to perturbations of the input mesh is necessary. Under sufficient regularity assumptions, tangential calculus can be used to find a surface representation of such derivatives. However, this usually requires applying the divergence theorem on surfaces.

The main goal of this talk is to present an automatic derivation of the shape derivative by first formulating the problem in the Unified Form Language (UFL) and then processing the expression by a semantic analysis, which automatically applies the formal differentiation rules of shape calculus. In addition to applying the divergence theorem on surfaces, more complex transformations need to be applied when dependencies on the curvature or normal are present or when higher order derivatives are desired.

Having a UFL-tree as an output means the result of the automatic differentiation procedure can immediately and automatically be discretized into a full numerical solution framework via FEniCS. The talk features examples of automatically created Newton-like solvers for problems in drag minimization in fluid flow and wave reconstructions. Because there are surface representations of such derivatives, this differentiation procedure also nicely interfaces with FEniCS and its capability to solve PDEs on manifolds, which is augmented with different approaches to compute curvature and other surface intrinsic quantities.
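
As a hand-worked illustration of the surface representation mentioned above (not the automatic derivation presented in the talk), the shape derivative of J(Omega) = \int_\Omega f dx in the direction of a perturbation field V can be assembled in either its distributed (volume) form or its Hadamard (surface) form; by the divergence theorem the two agree up to quadrature error. A minimal legacy DOLFIN sketch, with an assumed integrand and perturbation field:

from dolfin import *

mesh = UnitSquareMesh(32, 32)
x = SpatialCoordinate(mesh)
f = sin(x[0])*cos(x[1])                       # fixed integrand, no PDE constraint

W = VectorFunctionSpace(mesh, "CG", 1)
V_pert = interpolate(Expression(("x[0]", "0.0"), degree=1), W)  # assumed perturbation

n = FacetNormal(mesh)
dJ_volume  = assemble(div(f*V_pert)*dx)       # distributed (volume) representation
dJ_surface = assemble(f*dot(V_pert, n)*ds)    # Hadamard (surface) representation
print(dJ_volume, dJ_surface)                  # agree up to quadrature error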

11:00
Shape Optimization with Geometric Constraints

ABSTRACT. Shape optimization has received significant interest from both a theoretical and an applied point of view over the last decades. The approaches used can roughly be categorized into those based on using a parametrization for the shape or its deformation and then applying an optimization algorithm to the discretized problem (discretize-then-optimize) and those that formulate the problem as optimization over an infinite-dimensional space of shapes or deformations and then discretize afterwards (optimize-then-discretize). We follow the latter approach and search for diffeomorphisms T in X=W^{1,\infty} that deform an initial shape. The optimization problem then reads as minimizing J(T(Omega)) over all deformations T in X.

We choose W^{1,\infty} as the space of admissible deformations as Lipschitz regularity of the domain is needed by many problems in which a PDE constraint is included in the optimization problem.

Furthermore we want to include certain geometric constraints; we do this by additionally requiring T in K, for some set K. A classical example of a geometric constraint that is often considered is volume/mass conservation, i.e. K consists of all those transformations T such that T(Omega) has the same volume as Omega. In our work we investigate constraints of the form

K = {T in X : T(\Omega) \subset C},

for some convex set C. A classical application where this is relevant is wing design in Formula 1, where the teams are given bounding boxes in which the wing needs to be contained.

We will present how these constraints can be included by using a Moreau-Yosida regularization of the constraints. Our approach is implemented in Firedrake and we present synthetic examples without PDE constraint as well as the minimization of the Drag/Lift ratio of an airfoil as an example for a more realistic, PDE constrained problem.
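
A minimal sketch of the Moreau-Yosida-type penalization of the constraint T(Omega) \subset C for a box C, written in legacy DOLFIN/UFL syntax rather than Firedrake; the box, the deformation field, and the penalty parameter gamma are assumed placeholders:

from dolfin import *
from ufl import max_value

mesh = UnitSquareMesh(32, 32)
W = VectorFunctionSpace(mesh, "CG", 1)
T = Function(W)                          # candidate deformation field
x = SpatialCoordinate(mesh)
y = x + T                                # deformed position (id + T)(x)

gamma = Constant(1.0e3)                  # Moreau-Yosida parameter (assumed)
lower = Constant((0.0, 0.0))             # box C = [0,1] x [0,1] (assumed)
upper = Constant((1.0, 1.0))

# squared distance of the deformed point to the box, component by component
viol_sq = sum((max_value(y[i] - upper[i], 0.0)**2
               + max_value(lower[i] - y[i], 0.0)**2) for i in range(2))

penalty = assemble(0.5*gamma*viol_sq*dx)  # term added to the shape objective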

11:15
Shape Optimization with Multiple Meshes

ABSTRACT. For shape optimization problems, the computational domain is the design variable. Changing the shape of an airfoil in a channel to minimize drag is such a problem. The evolving domains complicate the numerical solution of shape optimization problems, and typically require large mesh deformations with quality checks and re-meshing software as a fallback. We propose an approach for solving shape optimization problems on multiple overlapping meshes. In this approach, each mesh can be moved freely, and hence the multi-mesh approach allows larger deformations of the domain than standard single-mesh approaches. The approach has been implemented in FEniCS and dolfin-adjoint, building on the already tested multimesh environment. We give examples of the derivation of the shape optimization problem for a Stokes flow and present its implementation in FEniCS.

Consider a general PDE-constrained shape optimization problem: we want to minimize a goal functional J subject to a state equation F with solution u over the domain Omega. We choose to divide the domain Omega into two non-overlapping domains by creating an artificial interface Gamma, such that the union of Omega0 and Omega1 is the original domain Omega. This is depicted in Figure 1. Extension to an arbitrary number of overlapping domains is possible. The weak formulation of the state equations is then derived, and continuity over the artificial boundary is enforced using Nitsche's method.

For minimization, we choose a gradient-based scheme and find the gradient by using the adjoint method. By employing the Hadamard formulas for volume and surface objective functions, one obtains the functional sensitivities as a function of the moving boundary rather than of the whole domain.

A concrete example of this approach is the shape optimization of an obstacle in Stokes-flow in the domain specified in Figure 2.

For deformation of the mesh, we have used two different deformation equations: a Laplacian smoothing and a set of Eikonal convection equations. For the multi-mesh problem, deformation is only done on the front mesh, while the background mesh is stationary. Figure 3 shows that with the Laplacian deformation the mesh degenerates in both the single-mesh and the multi-mesh case. Figure 4 shows that the Eikonal convection equations preserve the mesh quality in the multi-mesh case, but not in the single-mesh case, where the mesh degenerates at the boundary. We conclude that with a multi-mesh approach, the meshes are preserved better than with a single-mesh approach.
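
A minimal legacy DOLFIN sketch of a Laplacian-smoothing mesh deformation of the kind mentioned above (the prescribed boundary displacement is an assumed placeholder, not the shape gradient from the actual optimization):

from dolfin import *

mesh = UnitSquareMesh(32, 32)
V = VectorFunctionSpace(mesh, "CG", 1)

u, v = TrialFunction(V), TestFunction(V)
a = inner(grad(u), grad(v))*dx
L = dot(Constant((0.0, 0.0)), v)*dx

# assumed boundary motion; in practice this comes from the shape derivative
g = Expression(("0.05*x[1]*(1.0 - x[1])", "0.0"), degree=2)
bc = DirichletBC(V, g, "on_boundary")

d = Function(V)
solve(a == L, d, bc)          # harmonic extension of the boundary displacement
ALE.move(mesh, d)             # deform the mesh by the smoothed field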

11:30
Analytic Metric-Based Adaptation Using a Continuous Mesh Model

ABSTRACT. Anisotropic adaptive meshing is widely recognized as an important tool in the numerical solution of convection-diffusion problems. In most flow problems we are not interested in the flow properties in the entire domain but instead we would like to know more about the flow in a smaller region. Using the anisotropy and size information from the underlying solution we can adapt towards an optimal mesh.

Moving from a solution to an optimized mesh happens through error estimates based on higher order derivatives. Here we present a generalized mesh adaptation and mesh optimization method for discontinuous Galerkin schemes using a metric-based continuous mesh model [1]. The rationale behind the new error model is to incorporate, as much as possible, analytic optimization techniques acting on the continuous mesh, as opposed to numerical optimization acting on the discrete mesh. The metric-based approach facilitates changing and manipulating the mesh in a general anisotropic way. The metric is a symmetric positive definite matrix that encodes the information required to construct a mesh element, as shown in Fig. 1.
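
For background (standard in metric-based meshing and not specific to the model of [1]): a discrete mesh is said to conform to a metric field $\mathcal{M}(x)$ when every edge has approximately unit length when measured in that metric,
\begin{equation}
\ell_{\mathcal{M}}(e) = \int_0^1 \sqrt{\,e^\top \mathcal{M}\big(x(t)\big)\, e\,}\;\mathrm{d}t \approx 1 \quad \text{for every mesh edge } e,
\end{equation}
where $x(t)$ parametrizes the edge with edge vector $e$. The continuous mesh model then optimizes $\mathcal{M}$ analytically instead of manipulating individual elements.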

We also discuss target-based adaptation in this context. This is important in a variety of applications, where one is interested in computing certain solution-dependent functionals rather accurately, as opposed to minimizing the global error norm. To incorporate target-based adaptation we extend previous continuous mesh models to a weighted-norm optimization, where the weight comes from an adjoint solution providing sensitivities with respect to some relevant target functional [2].

Additionally, high-order consistent numerical schemes using piecewise polynomial approximation call for hp-adaptation to maximize efficiency. We present a two step algorithm using the continuous error estimate to produce a close-to-optimal hp mesh [3].

[1] Ajay Mandyam Rangarajan, Aravind Balan, and Georg May: Mesh Adaptation and Optimization for Discontinuous Galerkin Methods Using a Continuous Mesh Model, AIAA Modeling and Simulation Technologies Conference, AIAA SciTech Forum (AIAA 2016-2142).
[2] Ajay Mandyam Rangarajan and Georg May: Adjoint-based anisotropic mesh adaptation for Discontinuous Galerkin Methods Using a Continuous Mesh Model, AIAA Aviation 2017 (accepted).
[3] Vit Dolejsi, Georg May, and Ajay Mandyam Rangarajan: A Continuous hp-mesh model for adaptive discontinuous Galerkin Schemes, in preparation.

11:45-12:00 Short break
12:00-12:45 Session 9: Biomechanics
12:00
Accurate Numerical Modeling of Small Collections of Cardiac Cells
SPEAKER: Marie Rognes

ABSTRACT. Cardiac tissue is commonly modeled using the classical monodomain or bidomain models. These models have provided valuable insight into cardiac electrophysiology; in particular, the models have been useful tools for understanding the nature of excitation waves and how these waves are affected by mutation or by illness, and also how drugs can be applied to alleviate disease. However, the classical models of cardiac tissue fundamentally assume that the discrete nature of cardiac tissue can be represented by homogenized equations where the extracellular space, the cell membrane and the intracellular space are continuous and exist everywhere. This is a bold assumption.

In this talk, we will discuss a more accurate model of cardiac tissue based on a geometrically explicit representation of the extracellular space, the cell membrane and the intracellular domain. This EMI model of cardiac tissue combines elliptic equations in the extracellular and in the intracellular domains with a system of nonlinear equations modelling the electrochemical processes across the cell membrane. We will present the EMI equations, propose new families of mixed finite element methods for these equations, analyze these families of methods in terms of inf-sup stability and a priori convergence, and show some numerical results using FEniCS of course.
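
For readers unfamiliar with the model, a schematic statement of the EMI system (signs, scalings and source terms here follow common conventions and may differ in detail from the authors' formulation): with intracellular and extracellular potentials $u_i$, $u_e$, membrane potential $v$ and membrane current $I_m$ on the cell membrane $\Gamma$,
\begin{align}
-\nabla \cdot (\sigma_i \nabla u_i) &= 0 \quad \text{in the intracellular domain}, \\
-\nabla \cdot (\sigma_e \nabla u_e) &= 0 \quad \text{in the extracellular domain}, \\
\sigma_i \nabla u_i \cdot n_i = -\sigma_e \nabla u_e \cdot n_e &= I_m, \qquad u_i - u_e = v \quad \text{on } \Gamma, \\
C_m \frac{\partial v}{\partial t} &= I_m - I_{\mathrm{ion}}(v) \quad \text{on } \Gamma .
\end{align}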

12:15
The impact of microscopic structure on mechanical properties of plant cell walls and tissues

ABSTRACT. To better understand plant growth and development it is important to analyse how the microscopic structure of plant tissues impacts their mechanical properties. The elastic properties of plant tissues are strongly determined by the mechanical properties of the cell walls surrounding plant cells and by the cross-linked pectin network of the middle lamella which joins individual cells together. Primary cell walls of plant cells consist mainly of oriented cellulose microfibrils, pectin, hemicellulose, proteins, and water. Since the turgor pressure acts isotropically, it is the microstructure of the cell walls, e.g. the orientation of the cellulose microfibrils, which determines the anisotropic deformation and expansion of plant cells. It is also supposed that calcium-pectin cross-linking chemistry is one of the main regulators of plant cell wall elasticity and extension. Here we investigate the impact of the orientation of cellulose microfibrils on the elastic deformations of plant cell walls and tissues using multiscale modelling and numerical simulations. Equations of linear elasticity coupled with reaction-diffusion equations for chemical processes are used to model the mechanical properties of plant cell walls and tissues.

12:30
Simulating the Response of Bioprosthetics Heart Valve Tissues to Cyclic Loading in FEniCS
SPEAKER: Will Zhang

ABSTRACT. There are over 100,000 heart valve surgeries done each year in the U.S., costing over $20 billion. Bioprosthetic heart valves (BHV) are the most popular surgical replacements, but their life expectancy remains stuck at 10-15 years. One of the most crucial effects involved in the mechanical failure of BHVs is permanent set (PS), especially in the early stage (2-3 years). PS causes the geometry of BHVs to change permanently over time, resulting in extraneous stress on the leaflets. While this process does not damage the tissue, it accelerates the rate of failure. PS is a result of the scission-healing of the exogenously crosslinked matrix. This allows the unloaded configuration of the matrix to evolve while convecting the underlying collagen fiber architecture (CFA). Thus, we developed a structural constitutive model for exogenously crosslinked BHV biomaterials based on the PS mechanism. The results show that PS alone can capture and, more importantly, predict how the shape and mechanical response of the leaflet biomaterial change during the early stage. In this work, we developed a fully time-dependent numerical implementation of our permanent set model to simulate exogenously crosslinked tissues under cyclic loading using the FEniCS project.

12:45-13:45 Lunch Break
13:45-14:45 Session 10: Visualisation and Large Scale Computing
Location: Salle Tavenas Main Theatre
13:45
podS: A Parallel Multilevel Monte Carlo Framework
SPEAKER: Hugo Hadfield

ABSTRACT. Multilevel Monte Carlo (MLMC) is an improvement upon traditional Monte Carlo methods. It is designed to dramatically reduce the computational cost of solving certain stochastic partial differential equations (SPDEs) and other problems involving uncertainty. MLMC is an embarrassingly parallel technique. In theory any number of problem realisations could be solved in parallel. Reality forces researchers to use computer systems with finite memory and processing cores. The problem we face is: given the architecture of the computer system on which we wish to compute a stochastic approximation using the MLMC method, how do we best assign system resources? To this end, we have developed the C++ library podS. podS is designed to automatically and optimally allocate resources for an MLMC problem over a variety of computer architectures, from laptop to supercomputer. The goal is to allow researchers to focus on the problems they want to solve and not the intricacies of parallelising code on different hardware.

We introduce podS and show examples of its use with FEniCS and Docker to compute approximations of SPDEs. We also discuss the technical challenges that we faced designing the library.
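
A plain-Python sketch of the multilevel Monte Carlo telescoping estimator that such a framework parallelises (this is not the podS API, which is not shown in the abstract; names are assumptions):

import numpy as np

def mlmc_estimate(sampler, n_samples):
    """Telescoping MLMC estimator: E[Q_L] ~ sum_l mean(Q_l - Q_{l-1}).

    sampler(level) returns a pair (q_fine, q_coarse) for one random
    realisation, with q_coarse ignored on the coarsest level.
    n_samples[l] is the number of realisations on level l.
    """
    estimate = 0.0
    for level, n in enumerate(n_samples):
        corrections = []
        for _ in range(n):                       # embarrassingly parallel loop
            q_fine, q_coarse = sampler(level)
            corrections.append(q_fine - (q_coarse if level > 0 else 0.0))
        estimate += np.mean(corrections)
    return estimate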

14:00
Solving Poisson’s equation on the Microsoft HoloLens
SPEAKER: Carl Lundholm

ABSTRACT. We develop an app for solving Poisson’s equation with the finite element method using Microsoft’s augmented reality glasses HoloLens. The idea with the HoloLens app is to set up and solve a Poisson problem in a real world room where the HoloLens user is located and then visualize the computed solution in the room. The app works by first letting the HoloLens create a surface mesh of the surroundings through a spatial scan. The surface mesh is then used to construct a geometry that defines the solution domain by a polyhedral representation of the room. A tetrahedral mesh is then generated from the geometry. The user may provide problem data by placing sources in the room and setting boundary conditions on the walls, floor, and ceiling. The finite element method is then used to assemble the Poisson system before it is solved. Finally the computed solution may be visualized in the room. The development environment for HoloLens apps consists of the game engine Unity and the IDE Microsoft Visual Studio. Projects are initially started in Unity and then exported to Visual Studio where the coding takes place. The programming language used is C#. For the finite element assembly of the system we use the FEniCS form compiler (FFC) to compute the element stiffness matrix. Not only does this automatically take care of steps in going from the variational formulation to the linear system, but it also makes it easier to generalize the app to other types of differential equations. Say for example that we would like to know how a dangerous substance spreads in a room after a leak has sprung. The heat equation could be used as a simplistic model for describing this situation, potentially making apps like this useful in building planning and safety engineering.
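
A minimal sketch of the kind of UFL input the FEniCS Form Compiler consumes; compiling it (for example with "ffc -l dolfin Poisson.ufl") generates low-level element tensor code that a host application can call. How the generated code is wrapped for C# in the HoloLens app is not described in the abstract.

# Poisson.ufl -- input for the FEniCS Form Compiler (FFC)
element = FiniteElement("Lagrange", tetrahedron, 1)

u = TrialFunction(element)
v = TestFunction(element)
f = Coefficient(element)

a = inner(grad(u), grad(v))*dx    # stiffness (bilinear) form
L = f*v*dx                        # load (linear) form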

14:15
3D visualization with FEniCS in Jupyter Notebooks

ABSTRACT. As part of the OpenDreamKit project (www.opendreamkit.org), we are working on improving the state of 3D visualization in Jupyter Notebooks (www.jupyter.org).

Jupyter Notebooks run from a web browser based user interface and can display anything a browser can display, which includes interactive javascript widgets and GPU accelerated 3D graphics through WebGL. These fundamental technologies have now matured somewhat and reached widespread support, including mobile targets.

Over the last couple of years a number of visualization projects have started to explore this area, including established projects such as VTK, ParaView and MayaVi, as well as newer and smaller projects such as ipyvolume, k3d-jupyter, and SciviJS, the latter two being initiatives from the OpenDreamKit project.

In this talk we will present our current work on this front, and invite you as users and developers of FEniCS to provide your opinions on which future directions of this project would be useful to you.

14:30
Uncontainable enthusiasm: the pain and joy of Docker on HPC and cloud

ABSTRACT. The FEniCS Project has embraced the use of containers for Continuous Integration, and for distribution of pre-configured images to users. However, there is another area where Docker containers hold great promise: High Performance Computing (HPC).

Because of privilege restrictions on HPC, it is necessary to use a different runtime, such as Shifter or Singularity, rather than Docker. Performance can be very good, even for MPI jobs, provided that the right MPI libraries are used.

Finally, Microsoft Azure now provides Virtual Machines (VMs) with Infiniband interconnect. Here, we have full control, and can use Docker without restriction. Additionally, there is a useful project called Batch Shipyard which simplifies the deployment of Docker on Azure. The performance is very good for medium sized problems.

14:45-15:15 Coffee Break
15:15-16:30 Session 11: Coupled and interface problems
Location: Salle Tavenas Main Theatre
15:15
Mixed-Dimensional Linear Elasticity with Relaxed Symmetry
SPEAKER: Wietse Boon

ABSTRACT. Thin inclusions in materials are common in a variety of applications, ranging from steel plate reinforced concrete to composite materials and cemented fracture systems. To ease implementation issues such as mesh generation, mixed-dimensional representations of the material have become an attractive strategy. In such a representation, the thin inclusions are considered as lower-dimensional manifolds with significantly different material properties. The associated, governing equations are then fully coupled to the surroundings.

The explicit consideration of momentum conservation is especially important when considering stress states around thin inclusions. For example, in the context of fracture propagation, the region near fracture tips is of particular interest since the stress state contains a wealth of information. An accurate representation of the surrounding stress, possessing physical conservation properties, is therefore imperative.

In this work, we employ mixed finite elements to obtain a locally conservative discretization scheme. The symmetry of the stress tensor is then imposed in a weak sense, which allows for the use of familiar, conforming, finite elements with relatively few degrees of freedom. The mixed-dimensional representation is exploited to the fullest extent by considering subdomains of all dimensionalities. In particular, intersections between d-dimensional manifolds are considered as separate (d-1)-dimensional manifolds and all couplings are incorporated between domains with codimension one.

We present several theoretical results including well-posedness of the variational formulation and the mixed finite element discretization scheme. Moreover, numerical examples and their implementations using FEniCS are considered in two and three dimensions.
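
For context, the standard fixed-dimensional mixed formulation of linear elasticity with weakly imposed stress symmetry (the mixed-dimensional extension with couplings across codimension one is the subject of the talk) seeks a stress $\sigma$, a displacement $u$ and a rotation Lagrange multiplier $r$ such that
\begin{align}
(A\sigma, \tau) + (u, \operatorname{div} \tau) + (r, \tau) &= 0, \\
(\operatorname{div} \sigma, v) &= (g, v), \\
(\sigma, q) &= 0,
\end{align}
for all test functions $\tau$ in $H(\operatorname{div})$, all $v$ in $L^2$ and all skew-symmetric $q$ in $L^2$; here $A$ is the compliance tensor and $g$ the body load (with the sign convention $\operatorname{div}\sigma = g$). The third equation is what enforces the symmetry of $\sigma$ only weakly.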

15:30
Towards coupled mixed dimensional finite elements in FEniCS

ABSTRACT. In the brain, the vasculature twists and turns through the brain tissue as topologically one-dimensional structures embedded in three dimensions. In the Earth's crust, topologically two-dimensional faults and one-dimensional fractures cut through the three-dimensional rock. In general, there is an abundance of examples that call for mathematical and numerical models coupling n-dimensional and m-dimensional spaces with n < m.

This talk is intended to continue the discussion on how best to support such finite element spaces and associated finite element formulations in the FEniCS framework. Emphasis will be put on key motivating examples, key abstractions and structures and key challenges.

15:45
Trace Constrained Problems in FEniCS

ABSTRACT. Mixed function spaces with subspaces defined over domains with different topological dimensions arise in numerous applications. In \cite{babuska} a Lagrange multiplier defined on the domain boundary is used to enforce Dirichlet boundary conditions on the solution of the Poisson problem. More recently, \cite{toro, aslak} seek the electric potential of the extracellular/cellular domains in spaces of functions over $\mathbb{R}^d$, while the potential of the membrane separating the two media is represented as a function defined over a $(d-1)$-dimensional embedded manifold. In reduced order modeling of interstitial flows, \cite{dangelo} models the tissue as a three-dimensional structure while the vasculature is modeled as a one-dimensional curve.

Due to the dimensional gap between the domains, variational forms stemming from these problems are currently not supported in FEniCS, and as such the platform cannot be used for these applications unless various hacks are employed. In this talk we present our package (FEniCS)\textsubscript{ii}, which offers a set of primitives, in particular the trace operator and assembler, allowing for a proper discretization of trace-constrained problems in FEniCS. The package is built on top of cbc.block \cite{haga}, and therefore the assembled systems are to be solved iteratively. For the above-listed applications we shall discuss suitable preconditioners.
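
As a schematic reminder of the first example above, the boundary Lagrange multiplier formulation of \cite{babuska} for the Poisson problem with Dirichlet data $g$ seeks $u \in H^1(\Omega)$ and a multiplier $\lambda \in H^{-1/2}(\partial\Omega)$ such that
\begin{align}
(\nabla u, \nabla v)_{\Omega} + \langle \lambda, T v \rangle_{\partial\Omega} &= (f, v)_{\Omega} && \text{for all } v \in H^1(\Omega), \\
\langle \mu, T u \rangle_{\partial\Omega} &= \langle \mu, g \rangle_{\partial\Omega} && \text{for all } \mu \in H^{-1/2}(\partial\Omega),
\end{align}
where $T$ denotes the trace operator from $H^1(\Omega)$ onto $H^{1/2}(\partial\Omega)$; it is exactly this kind of trace coupling between spaces of different dimension that the package is designed to assemble.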

16:00
Tracer transport in microvasculature: A case study on coupling 1D-3D problems in FEniCS

ABSTRACT. We implement a model of tracer transport in rodent brain microvasculature. As the blood vessels involved are very thin, they are modeled as a one-dimensional subdomain of the three-dimensional tissue. This necessitates a bidirectional coupling of a 1D problem to a 3D one. We give an overview of the model, the challenges involved in implementing it in FEniCS, and discuss our results.

16:15
A Space-Time Cut Finite Element Method for the Heat Equation in FEniCS
SPEAKER: Magne Nordaas

ABSTRACT. We present a space-time cut finite element method for parabolic PDEs on overlapping meshes, implemented using the FEniCS multimesh functionality. This work extends the space-time method developed in \cite{exjobb} to two and three spatial dimensions and to more than two meshes.

Here the concept of overlapping meshes means that the computational domain consists of two or more overlapping meshes, and each mesh has a prescribed time-dependent movement. An example where such use of overlapping meshes could be advantageous over traditional methods is when the domain contains a moving object, see Figure \ref{figmeshmess}. The idea is first to generate one mesh around the object and a background mesh for the empty solution domain. The mesh-enclosed object is then brought back into the computational domain by placing it ``on top'' of the background mesh. Finally, the mesh-enclosed object can be moved around in the domain as seen in Figure \ref{figmeshoverlap}. By using overlapping meshes, mesh generation only has to be done initially, as opposed to traditional methods where a new mesh has to be generated when the old mesh becomes too deformed.

Finite element methods may then be derived for the overlapping mesh environment by using Nitsche-based techniques to account for the interface between two meshes. Using cG($p$) elements in space and dG($q$) elements in time, we let $V_h$ be a corresponding finite element space that permits a discontinuity on the interface between two meshes.

The finite element variational formulation used here reads: Find $u_h \in V_h$ such that for all $v \in V_h$,
\begin{equation}
\begin{split}
& \sum_{i=1}^2\sum_{n=1}^{N} \bigg( \big([\![u_h]\!]_{n-1}, v_{n-1}^+\big)_{\Omega_{i}(t_{n-1})} + \int_{t_{n-1}}^{t_{n}} \big(\dot{u}_h, v\big)_{\Omega_i(t)} + \big(\nabla u_h, \nabla v\big)_{\Omega_i(t)} \,\mathrm{d}t \bigg) \\
&\qquad + \sum_{n=1}^N \int_{\Gamma_n} \Big( -\bar{n}^t [u_h]v_\sigma - \langle \partial_{\bar{n}^x} u_h \rangle [v] - \langle \partial_{\bar{n}^x} v \rangle [u_h] + |\bar{n}^x| \gamma h_K^{-1} [u_h][v] \Big) \,\mathrm{d}s \\
&\quad = \sum_{i=1}^2\sum_{n=1}^N \int_{t_{n-1}}^{t_n} \big(f, v\big)_{\Omega_i(t)} \,\mathrm{d}t.
\end{split}
\label{feform}
\end{equation}
Here, $\Omega_1$ denotes the non-overlapped part of the background mesh, $\Omega_2$ denotes the moving and overlapping mesh, $\Gamma_n$ denotes the space-time interface between the meshes on the time interval $(t_{n-1}, t_n]$, and $\bar{n} = (\bar{n}^t, \bar{n}^x)$ denotes the unit vector normal to the space-time surface $\Gamma_n$.

The method presented above has been implemented in FEniCS using the multimesh functionality. The first step is to replace the integral in time with a suitable quadrature rule. Afterwards, the individual parts of the variational form are expressed with Python code, mimicking the mathematical syntax and making use of the multimesh integral measures \texttt{dX} and \texttt{dI}. For example,

the diffusion term $\sum_{i=1}^2 \int_{\Omega_i} \nabla u_h \cdot \nabla v \,\mathrm{d}x$ is written as
\begin{cpython}
inner(grad(u_h), grad(v_h)) * dX
\end{cpython}
and the interface term $\int_\Gamma \langle \partial_{\bar{n}^x} u_h \rangle [v] \,\mathrm{d}s$ as
\begin{cpython}
avg(inner(nbar_x, grad(u_h))) * jump(v) * dI
\end{cpython}

With this implementation in FEniCS, we further investigate aspects of using the space-time cut finite element method for time-dependent problems. An interesting question is, for example, how the speed of the moving mesh influences the error convergence. In \cite{exjobb}, a drop in the order of convergence with respect to the time step was observed when the mesh speed became sufficiently high, as shown in Figure \ref{fignumres}. Previously, numerical experiments for this method have been limited to the case with one spatial dimension, cG(1) elements in space, and dG(0) and dG(1) elements in time. In the present work, we study the numerical error convergence in two or three spatial dimensions, and we also consider higher order elements in time.

\begin{thebibliography}{00} \bibitem{exjobb} C. Lundholm, \textit{A space-time cut finite element method for a time-dependent parabolic model problem}, Master's thesis, Chalmers University of Technology, 2015. \end{thebibliography}

16:30-16:40 Session 12: Group photo
Location: Salle Tavenas Rear Room
19:15-23:59 Conference Dinner
Location: La Table du Belvédère