Thematic Programme: Numerical Analysis of Complex PDE Models in the Sciences
Workshop 3: Interplay of geometric processing, modelling, and adaptivity in Galerkin methods
Abstracts
Sören Bartels (University of Freiburg)
Title: Approximating gradient flow evolutions of self-avoiding inextensible curves and elastic knots
Abstract: We discuss a semi-implicit numerical scheme that allows for minimizing the bending energy of curves within certain isotopy classes. To this end we consider a weighted sum of the bending energy ${\rm B}$ and the tangent-point functional ${\rm TP}$, i.e.,
\[
E(u) = \kappa {\rm B}(u) + \varrho {\rm TP}(u)
= \frac{\kappa}{2} \int_I |u''(x)|^2 \,{\rm d} x + \varrho
\iint_{I\times I} \frac{\,{\rm d} x\,{\rm d} y}{r(u(y),u(x))^{q}}
\]
with the tangent-point radius $r(u(y), u(x))$, which is the radius of the circle that is tangent to the curve $u$ at the point $u(y)$ and intersects $u$ at $u(x)$. We define evolutions via the gradient flow for $E$ within a class of arclength parametrized curves, i.e., given an initial curve $u^0 \in H^2(I;\mathbb{R}^3)$ we look for a family $u:[0,T]\to H^2(I;\mathbb{R}^3)$ such that, with an appropriate inner product $(\cdot,\cdot)_X$ on $H^2(I;\mathbb{R}^3)$,
\[
(\partial_t u, v)_X = - \, \delta E(u)[v], \quad u(0) = u^0,
\]
subject to the linearized arclength constraints
\[
[\partial_t u]' \cdot u' = 0, \quad v' \cdot u' = 0.
\]
Our numerical approximation scheme for the evolution problem is specified via a semi-implicit discretization, i.e., for a step-size $\tau>0$ and the associated backward difference quotient operator $d_t$, we compute iterates $(u^k)_{k=0,1,\dots}\subset H^2(I;\mathbb{R}^3)$ via the recursion
\[
(d_t u^k,v)_X + \kappa ([u^k]'',v'') = - \varrho \delta {\rm TP}(u^{k-1})[v]
\]
with the constraints
\[
[d_t u^k]' \cdot [u^{k-1}]' = 0, \quad v' \cdot [u^{k-1}]' = 0.
\]
The scheme leads to sparse systems of linear equations in the time steps for cubic $C^1$ splines and a nodal treatment of the constraints. The explicit treatment of the nonlocal tangent-point functional avoids working with fully populated matrices and furthermore allows for a straightforward parallelization of its computation. Based on estimates for the second derivative of the tangent-point functional and a uniform bi-Lipschitz radius, we prove a stability result implying energy decay during the evolution as well as maintenance of arclength parametrization.
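For illustration only, the time-stepping structure described above can be mimicked in a toy setting. The sketch below is our own assumption-laden stand-in, not the authors' spline implementation: it uses finite differences in place of cubic $C^1$ splines, treats the implicit bending part by a linear solve, replaces the tangent-point gradient by a generic pairwise repulsion, and omits the arclength constraints.

```python
import numpy as np

def bending_stiffness(n, h):
    # second-difference operator approximating u'' at interior nodes
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    D2 /= h**2
    return h * D2.T @ D2  # discrete approximation of int_I |u''|^2

def repulsion(u, q=2.0):
    # crude stand-in (assumption) for -grad TP: pairwise repulsive forces
    f = np.zeros_like(u)
    for i in range(len(u)):
        for j in range(len(u)):
            if abs(i - j) > 1:
                d = u[i] - u[j]
                f[i] += d / np.linalg.norm(d) ** (q + 2)
    return f

def semi_implicit_step(u, tau, kappa, rho, A):
    # (d_t u^k, v) + kappa ([u^k]'', v'') = -rho dTP(u^{k-1})[v], constraints dropped
    lhs = np.eye(len(u)) / tau + kappa * A
    rhs = u / tau + rho * repulsion(u)
    return np.linalg.solve(lhs, rhs)
```

With the nonlocal term switched off ($\varrho = 0$), each step minimizes a quadratic functional, so the discrete bending energy is guaranteed to decrease, mirroring the stability result of the actual scheme.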
We present numerical experiments exploring the energy landscape, addressing the question of how to obtain global minimizers of the bending energy in knot classes, so-called elastic knots. This is joint work with Philipp Reiter (University of Georgia).
Lourenço Beirão da Veiga (Dipartimento di Matematica e Applicazioni,
Università di Milano-Bicocca)
Title: The Stokes complex for Virtual Elements
Abstract: The Virtual Element Method (in short VEM, born in 2013) is a recent generalization of the Finite Element Method. By avoiding the explicit integration of the shape functions that span the discrete Galerkin space and introducing a novel construction of the associated stiffness matrix, the VEM acquires very interesting properties with respect to more standard Galerkin methods. For instance, the VEM easily allows for polygonal/polyhedral meshes including non-convex elements and still yields a conforming solution with (possibly) high order accuracy.
In the present talk we introduce a family of VEMs in the framework of incompressible fluid dynamics, more specifically the Stokes and Navier-Stokes equations. This method, which can handle general polytopal meshes, has three equivalent formulations, all yielding divergence-free velocities (which is well known to be an advantage when compared to more traditional inf-sup stable schemes). The first one is a mixed velocity-pressure formulation. The second choice is more computationally efficient and is an immediate derivation of the first one by automatic elimination (not static condensation) of certain internal variables; although having a piecewise constant pressure, it still yields a highly accurate velocity. Finally, the third formulation is based on a suitable VEM Stokes complex (by introducing an associated highly regular virtual space) in order to obtain a smaller and positive definite system, at the price of a higher condition number.
Silvia Bertoluzza (CNR, IMATI "Enrico Magenes")
Title: Stabilizing DG methods on polygonal meshes via computable dual norms
Abstract: The choice of the stabilization terms (or, equivalently, of the definition of the numerical fluxes and/or traces) is a key issue in the design of DG methods. Usually, some form of penalization is added, in which a residual term is measured in a mesh-dependent norm designed to mimic the norm of the dual space where the residual naturally lives. Treating such mesh-dependent norms in the analysis calls for a combination of direct and inverse inequalities, which results in a loss of optimality whenever these do not compensate each other. This happens, when considering the dependence on the mesh size parameter $h$, if the mesh quality deteriorates, and, even for strongly shape regular tessellations, when studying the dependence on the polynomial degree $k$. With the final aim of overcoming such limitations, we present in this talk a way to design "cheap" computable norms (and scalar products) for the dual spaces, and to use them in designing the stabilization term for a DG type method for the Poisson equation on a polygonal mesh, in which the unknown in the polygonal elements, the fluxes, and the unknown on the edges are all, independently, approximated by polynomials of degree $k$.
This is a joint work with Daniele Prada and Ilaria Perugia, realized in the framework of the ERC Project CHANGE (grant agreement No 694515).
Peter Binev (University of South Carolina)
Title: Near-Best Adaptive Approximation on Conforming Partitions
Abstract: The usual adaptive strategy for finding conforming partitions is “mark$~\to~$subdivide$~\to~$complete”. In this strategy any element can be marked for subdivision, but since the resulting partition often contains hanging nodes, additional elements have to be subdivided in the completion step to get a conforming partition. This process is very well understood for triangulations obtained via the newest vertex bisection procedure. In particular, it is proven that the number of elements in the final partition is bounded by a constant times the number of marked cells.
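The mark$~\to~$subdivide$~\to~$complete mechanism for newest vertex bisection can be sketched with the classical recursive completion algorithm. The toy code below is a generic textbook version (our illustrative assumption, not the marking strategy of the talk); each triangle is stored as a tuple $(a,b,c)$ whose refinement edge is $(a,b)$, opposite the newest vertex $c$.

```python
class Mesh:
    """Toy 2D newest-vertex-bisection mesh; tri (a, b, c) has refinement edge (a, b)."""

    def __init__(self, verts, tris):
        self.verts = list(verts)           # vertex coordinates
        self.tris = dict(enumerate(tris))  # id -> (a, b, c)
        self.next_id = len(tris)
        self.mid = {}                      # edge -> midpoint vertex index (shared)

    def midpoint(self, a, b):
        key = frozenset((a, b))
        if key not in self.mid:
            (xa, ya), (xb, yb) = self.verts[a], self.verts[b]
            self.verts.append(((xa + xb) / 2, (ya + yb) / 2))
            self.mid[key] = len(self.verts) - 1
        return self.mid[key]

    def neighbor_across_refinement_edge(self, tid):
        edge = frozenset(self.tris[tid][:2])
        for j, t in self.tris.items():
            if j != tid and edge <= set(t):
                return j
        return None

    def bisect(self, tid):
        a, b, c = self.tris.pop(tid)
        m = self.midpoint(a, b)              # newest vertex
        for child in ((c, a, m), (b, c, m)):  # children's refinement edges: (c,a), (b,c)
            self.tris[self.next_id] = child
            self.next_id += 1

    def refine(self, tid):
        # completion by recursion: make the neighbor compatible before bisecting
        nb = self.neighbor_across_refinement_edge(tid)
        if nb is not None and frozenset(self.tris[nb][:2]) != frozenset(self.tris[tid][:2]):
            self.refine(nb)
            nb = self.neighbor_across_refinement_edge(tid)
        self.bisect(tid)
        if nb is not None:
            self.bisect(nb)
```

Refining a triangle whose neighbor has a different refinement edge triggers the recursion; the quoted result bounds the total number of such completion bisections by a constant times the number of marked triangles.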
This hints at the possibility to design a marking procedure that is limited only to cells of the partition whose subdivision will result in a conforming partition and therefore no completion step would be necessary. This talk will present such a strategy together with theoretical results about its near-optimal performance.
This is a joint work with Francesca Fierro and Andreas Veeser from University of Milan.
Daniele Boffi (Dipartimento di Matematica "F. Casorati",
University of Pavia, Italy)
Title: Adaptive finite element method for the Maxwell eigenvalue problem
Abstract: It is well known that the eigenproblem associated with the Maxwell system can be analyzed with the help of suitable mixed formulations. Taking advantage of this remark, we prove the convergence with optimal rate for the edge finite element approximation of the Maxwell eigenvalue problem. In three dimensions, the result is not a trivial extension of the analysis previously performed for the approximation of the Laplace eigenproblem in mixed form.
Particular attention is paid to the case of multiple eigenvalues and clusters of eigenvalues.
References:
[1] D. Boffi, D. Gallistl, F. Gardini, and L. Gastaldi. Optimal convergence of adaptive FEM for eigenvalue clusters in mixed form. Mathematics of Computation, 86(307) (2017) 2213-2237
[2] D. Boffi, L. Gastaldi, R. Rodríguez, and I. Šebestová. Residual-based a posteriori error estimation for the Maxwell's eigenvalue problem. To appear in IMA Journal of Numerical Analysis. arXiv:1602.00675
[3] D. Boffi, L. Gastaldi, R. Rodríguez, and I. Šebestová. A posteriori error estimates for Maxwell's eigenvalue problem. Submitted.
[4] D. Boffi and L. Gastaldi. Adaptive finite element method for the Maxwell eigenvalue problem. arXiv:1804.02377
Andrea Bressan (University of Oslo)
Title: Best approximation space on uniform partitions
Abstract: Piecewise polynomials are commonly used for the approximation of the solution of PDEs.
A piecewise polynomial is described by the partition, the degree and the regularity.
Let $\mathbb S_{p,k,n}$ be the space of globally $\mathcal C^k$ piecewise polynomials of degree $p$ on the partition of $[0,1]$ into $n$ uniform segments, and let $C_{p,k,n}$ be the smallest constant such that for all $f\in H^{p+1}(0,1)$
$$
\inf_{g\in \mathbb S_{p,k,n}} \Vert f -g \Vert_{L^2} \le C_{p,k,n} \Vert \partial^{p+1}f\Vert_{L^2}.
$$
For given degree $p$ and space dimension $\dim \mathbb S_{p,k,n}$, which choice of $k,n$ minimises $C_{p,k,n}$?
While a full answer to the question is difficult, we are able to compare the spaces used in Discontinuous Galerkin (DG), $\mathcal C^0$ Finite Element Method (FEM) and IsoGeometric Analysis (IGA), which correspond respectively to $k=-1$, $k=0$ and $k=p-1$.
The comparison extends to $f\in H^{q+1}(0,1)$, $q\le p$, broken Sobolev spaces and tensor product spaces.
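As a back-of-the-envelope aside of our own (not part of the talk), the elementary count $\dim \mathbb S_{p,k,n} = n(p+1) - (n-1)(k+1)$ already shows how much finer a partition the smoother spaces afford at equal dimension:

```python
def spline_dim(p, k, n):
    # dim of C^k piecewise polynomials of degree p on n uniform segments of [0,1]:
    # each segment contributes p+1 coefficients; each of the n-1 interior
    # breakpoints imposes k+1 continuity conditions
    return n * (p + 1) - (n - 1) * (k + 1)

p = 3
for label, k in [("DG", -1), ("C0 FEM", 0), ("IGA", p - 1)]:
    # smallest number of segments reaching dimension 40 with degree p
    n = 1
    while spline_dim(p, k, n) < 40:
        n += 1
    print(label, "k =", k, "segments:", n, "dim:", spline_dim(p, k, n))
```

For $p=3$ and dimension $40$ this gives $n=10$ segments for DG, $n=13$ for $\mathcal C^0$ FEM, and $n=37$ for IGA, which is the resolution advantage that the constants $C_{p,k,n}$ must be weighed against.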
Franco Brezzi (IMATI-CNR, Pavia)
Title: Virtual Element approximation of Magnetostatic problems via Vector Potential Formulation
Abstract: After recalling the classical magnetostatic problem and its formulation via the vector potential approach, we present a family of approximations based on Virtual Element Methods, inspired by the Stokes elements of Beirão-Lovadina-Vacca. Some more recent developments, such as the Serendipity approach and the treatment of curved boundaries, will also be discussed.
Erik Burman (UCL)
Title: Integrating geometric data in the computational solution of PDE using cutFEM
Abstract: Recently there has been an increased interest in unfitted finite element methods, that is, methods where the computational mesh does not respect the physical geometry of the problem. In this talk we will give an overview of recent results on the so-called cut finite element method (cutFEM). We will show how the method is designed for a simple model problem and then discuss some more advanced applications, such as PDEs on surfaces, bulk-surface coupling, contact problems, shape optimization, and data assimilation.
Fehmi Cirak (University of Cambridge)
Title: Isogeometric analysis with manifold-based basis functions
Abstract: We present a novel isogeometric analysis technique that builds on manifold-based basis functions for geometric modelling and analysis. Manifold-based surface construction techniques are well known in geometric modelling and a number of variants exist. Common to all is the concept of constructing a smooth surface by blending together overlapping patches (or, charts) as in differential geometry description of manifolds. We combine manifold techniques with conformal parameterisations and the partition-of-unity method to derive smooth basis functions for unstructured quadrilateral meshes. Carefully constructed manifold basis functions are suitable for both geometric modelling and analysis. Our numerical simulations indicate their optimal convergence in finite element analysis of Poisson and thin-shell problems.
Bernardo Cockburn (University of Minnesota)
Title: An introduction to the theory of M-decompositions
Abstract: We provide a short introduction to the theory of M-decompositions in the framework of steady-state diffusion problems. This theory allows us to systematically devise hybridizable discontinuous Galerkin and mixed methods which can be proven to be superconvergent on unstructured meshes made of elements of a variety of shapes. The main feature of this approach is that it reduces such an effort to the definition, for each element $K$ of the mesh, of the spaces for the flux, $\boldsymbol{V}(K)$, and the scalar variable, $W(K)$, which, roughly speaking, can be decomposed into suitably chosen orthogonal subspaces related to the space of traces on $\partial K$ of the scalar unknown, $M(\partial K)$. We describe the main properties of the M-decompositions and show how to actually construct them. Finally, we provide many examples in the two-dimensional setting. We end by briefly commenting on several extensions including other equations like the wave equation, the equations of linear elasticity, and the equations of incompressible fluid flow.
This is joint work with Guosheng Fu (Brown University), Francisco-Javier Sayas (University of Delaware) and Ke Shi (Old Dominion University).
Alexandre Ern (University Paris-Est and INRIA)
Title: Edge finite element approximation of Maxwell's equations in heterogeneous domains
Abstract: We derive H(curl)-error estimates and improved $L^2$-error
estimates for Maxwell's equations in heterogeneous domains approximated using edge finite elements. These estimates only invoke the
expected regularity pickup of the exact solution in the scale of the
Sobolev spaces, which is typically lower than $\frac12$ and can be
arbitrarily close to $0$. The key tools for the analysis are commuting
quasi-interpolation operators in H(curl)- and H(div)-conforming finite element spaces and, most crucially, newly-devised quasi-interpolation operators delivering optimal estimates on the decay rate of the best-approximation error for functions with
Sobolev smoothness index arbitrarily close to $0$. This is joint work with J.-L. Guermond (Texas A&M University).
John Evans (University of Colorado Boulder)
Title: Mesh Generation, Parameterization, and Optimization for High-Order Finite Element and Isogeometric Analysis
Abstract: High-order finite element and isogeometric methods offer the potential for significantly reducing the computational cost required to achieve a specified level of accuracy in a number of application areas, from computational fluid dynamics to electromagnetic wave scattering. However, in order to unlock the power of a high-order finite element or isogeometric method, one must employ a suitable curvilinear mesh. In this talk, three topics in the area of high-order mesh generation, parameterization, and optimization will be discussed. First, a new mesh generation procedure will be described which automatically generates a geometrically exact curvilinear volumetric mesh given a set of watertight multi-patch NURBS or T-spline bounding surfaces. Second, approximation results detailing the precise impact of mesh parameterization on solution quality for high-order finite element and isogeometric methods will be presented, and a new family of computable quality metrics and a corresponding definition of curvilinear mesh shape regularity will be proposed based on these approximation results. Finally, a new mesh optimization procedure will be described which leverages the new family of computable quality metrics. The proposed mesh generation and optimization procedures as well as the new mesh quality metrics take advantage of the use of a Bernstein-Bézier polynomial or rational basis to represent the geometric mapping over each element. A number of examples will be provided illustrating the promise of the proposed mesh generation and optimization procedures as well as the utility of the new mesh quality metrics.
Carlotta Giannelli (University of Florence)
Title: Refinement and coarsening strategies for adaptive methods with hierarchical splines
Abstract: Hierarchical B-spline constructions are currently widely used in several studies and applications connected to adaptive isogeometric methods. Being defined as a multi-level extension of the B-spline model, the hierarchical approach naturally provides an elegant solution to perform adaptivity, while simultaneously preserving a local tensor-product structure. An important issue in this setting concerns the possibility of considering suitable graded meshes that guarantee a limited interaction between the different levels of the spline hierarchy. The talk will present the design of a general framework for the development of refinement and coarsening algorithms with hierarchical splines. We will also show that, when the truncated basis for hierarchical spline spaces is considered, more localized refinement and de-refinement procedures can be exploited.
Jay Gopalakrishnan (Portland State University)
Title: Discretization errors in the FEAST algorithm for eigenvalues
Abstract: FEAST is a filtered subspace iteration for approximating a cluster of eigenvalues and its associated eigenspace. The algorithm is motivated by a quadrature approximation of an operator-valued contour integral of the resolvent. Resolvents on infinite dimensional spaces are discretized in computable finite-dimensional spaces before the algorithm is applied. We report on our analysis of how such discretizations result in errors in the eigenspace approximations computed by the algorithm.
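On the finite-dimensional side, the basic FEAST iteration is short. The following dense toy version for a real symmetric matrix is a sketch of the standard recipe under assumed parameter choices (circular contour, trapezoidal quadrature, subspace size, iteration count), not the discretization analysis of the talk:

```python
import numpy as np

def feast_filter(A, Y, center, radius, nquad=8):
    # quadrature approximation of the contour integral of the resolvent:
    # P Y ~ (1/(2*pi*i)) \oint (zI - A)^{-1} Y dz, trapezoidal rule on a circle
    n = A.shape[0]
    Z = np.zeros(Y.shape, dtype=complex)
    for j in range(nquad):
        theta = 2.0 * np.pi * (j + 0.5) / nquad
        w = radius * np.exp(1j * theta)  # z - center on the contour
        Z += (w / nquad) * np.linalg.solve((center + w) * np.eye(n) - A, Y)
    return Z.real  # for real symmetric A the spectral projector is real

def feast(A, center, radius, m=4, iters=5, seed=0):
    # filtered subspace iteration with a Rayleigh-Ritz step
    Y = np.random.default_rng(seed).standard_normal((A.shape[0], m))
    for _ in range(iters):
        Q, _ = np.linalg.qr(feast_filter(A, Y, center, radius))
        evals, vecs = np.linalg.eigh(Q.T @ A @ Q)
        Y = Q @ vecs
    return evals  # Ritz values; those inside the contour approximate eigenvalues
```

Replacing the exact resolvent solves by solves in a finite element space is precisely the step whose effect on the computed eigenspace the talk analyzes.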
Ralf Hiptmair (Seminar for Applied Mathematics, ETH Zürich)
Title: Approximating Shape Gradients
Abstract: We consider functionals depending on solutions of boundary value problems. They can be regarded as shape functionals, because they usually depend on the underlying domain. Their (directional) derivatives with respect to variations of the domain are shape gradients, and expressions for those can be derived by means of (i) domain transformations or (ii) Hadamard representation formulas.
The two approaches yield two different but equivalent formulas. Both rely on solutions of two boundary value problems (BVPs), but one involves integrating their traces on the boundary of the domain, while the other evaluates integrals in the volume. Usually, the two BVPs can only be solved approximately, for instance, by finite element methods. However, when used with finite element solutions, the equivalence of the two formulas breaks down. By means of a comprehensive convergence analysis, we establish that the volume based expression for the shape gradient generally offers better accuracy in a finite element setting. The results are confirmed by several numerical experiments.
(joint work with A. Paganini, S. Sargheini, and J.-Z. Li)
References:
R. Hiptmair and J.-Z. Li, Shape derivatives in differential forms II: Application to scattering problems, Report 2017-24, SAM, ETH Zürich, 2017.
R. Hiptmair and A. Paganini, Shape optimization by pursuing diffeomorphisms, Comput. Methods Appl. Math., 15 (2015), pp. 291–305.
R. Hiptmair, A. Paganini, and S. Sargheini, Comparison of approximate shape gradients, BIT Numerical Mathematics, 55 (2014), pp. 459–485.
A. Paganini, Numerical shape optimization with finite elements, ETH Dissertation 23212, ETH Zürich, 2016.
S. Sargheini, Shape Sensitivity Analysis of Electromagnetic Scattering Problems, ETH Dissertation 23067, ETH Zürich, 2016.
Bert Juettler (JKU Linz, Institute of Applied Geometry, and ÖAW, RICAM Linz)
Title: Accurate Numerical Quadrature on Trimmed Elements in Isogeometric Analysis
Abstract: Trimming is a fundamental operation in geometric design, which is frequently used to enhance the flexibility of the prevailing tensor-product NURBS representations. Isogeometric Analysis aims at unifying the geometric representations employed for numerical simulation and geometric modeling. It is therefore desirable to extend this simulation technology to trimmed representations of surfaces and volumes. In this talk, we show how to derive special quadrature rules for trimmed domains by considering the implicit representation of the trimming curves and surfaces. Joint work with Felix Scholz (ÖAW, RICAM Linz).
Ulrich Langer (Institute for Computational Mathematics,
Johannes Kepler University Linz, and RICAM Linz)
Title: Adaptive Space-Time Isogeometric Analysis
Abstract: This talk is concerned with locally stabilised space-time IgA approximations to parabolic initial boundary value problems. Originally, similar, but globally stabilised space-time IgA schemes were presented and studied by Langer, Neumüller, and Moore (2016). The current work devises a localised version of this scheme that is better suited for adaptivity. We prove coercivity (ellipticity), boundedness, and consistency of the mesh-dependent bilinear form generating the IgA scheme. Using these fundamental properties together with the corresponding approximation error estimates for B-splines, we show that space-time IgA solutions generated by the new scheme satisfy asymptotically optimal a priori discretization error estimates. For the adaptive mesh refinement algorithm, we choose the functional a posteriori error control approach that has been rigorously studied in earlier works by Repin (2002) and Langer, Matculevich, and Repin (2017). We present numerical results that confirm improved global error convergence as well as the local efficiency of the error indicators that follow from the error majorants.
This research work is a joint work with Svetlana Matculevich and Sergey Repin, and was supported by the Austrian Science Fund (FWF) through the project S117-03 within the National Research Network “Geometry + Simulation”.
Donatella Marini (Dipartimento di Matematica, Università di Pavia, and IMATI - CNR)
Title: Lowest order Virtual Element approximation of magnetostatic problems
Abstract: We present a lowest order Serendipity Virtual Element method, and show its use for the numerical solution of linear magneto-static problems in three dimensions. The method can be applied to very general decompositions of the computational domain (as is natural for Virtual Element Methods) and uses as unknowns the (constant) tangential component of the magnetic field $\mathbf{H}$ on each edge, and the vertex values of the Lagrange multiplier $p$ (used to enforce the solenoidality of the magnetic induction $\mathbf{B} = \mu\mathbf{H}$). In this respect the method can be seen as the natural generalization of the lowest order Edge Finite Element Method (the so-called “first kind Nédélec” elements) to polyhedra of almost arbitrary shape, and as we show on some numerical examples it exhibits very good accuracy (for being a lowest order element) and excellent robustness with respect to distortions.
Pedro Morin (Facultad de Ingenieria Quimica, Universidad Nacional del Litoral, Argentina)
Title: A new perspective on adaptive hierarchical B-splines
Abstract: We introduce a type of hierarchical spline spaces based on a parent-children relation, with two main features. First, the construction and handling is convenient for implementation and secondly, they are well suited for the theoretical analysis of adaptive isogeometric methods.
The framework that we provide makes it simple to create hierarchical bases with control on the overlapping. Linear independence is always desired for the well-posedness of the linear systems, and to avoid redundancy. The control on the overlapping of basis functions from different levels is necessary to close theoretical arguments in the proofs of optimality of adaptive methods.
In order to guarantee linear independence, and to control the overlapping of the basis functions, some basis functions additional to those initially marked must be refined. However, with our framework and refinement procedures, the complexity of the resulting bases is under control.
More precisely, if we construct hierarchical bases $\{{\cal H}_k\}_k$ through subsequent calls to ${\cal H}_{k+1} = $Refine$({\cal H}_k,{\cal M}_k)$, where ${\cal M}_k \subset {\cal H}_k$ denotes the set of marked functions, we obtain \[ \# {\cal H}_{R} - \# {\cal H}_{0}\le C \sum_{k=0}^{R-1} \#{\cal M}_{k} , \] with a constant $C$ independent of $R$.
Michael Neilan (University of Pittsburgh)
Title: Exact smoothed piecewise polynomial sequences on Alfeld splits
Abstract: We develop exact polynomial sequences on Alfeld splits in any spatial dimension and for any polynomial degree. An Alfeld split of an $n$-simplex is obtained by connecting the vertices of the simplex with its barycenter. We show that, on these triangulations, the kernel of the exterior derivative has enhanced smoothness. Byproducts of this theory include characterizations of discrete divergence-free subspaces for the Stokes problem, commutative projections, and simple formulas for the dimensions of smooth polynomial spaces. This is joint work with Guosheng Fu and Johnny Guzman (Brown).
Ricardo H. Nochetto (Department of Mathematics and Institute for Physical Science and Technology, University of Maryland, College Park, USA)
Title: Structure preserving FEM for the Q-model of uniaxial nematic liquid crystals
Abstract: The Landau-de Gennes Q-model of uniaxial nematic liquid crystals seeks a rank-one traceless tensor $Q$ that minimizes a Frank-type energy plus a double well potential that confines the eigenvalues of $Q$ to lie between $-1/2$ and $1$. We propose a finite element method (FEM) which preserves this basic structure and satisfies a discrete form of the fundamental energy estimates. We prove that the discrete problem $\Gamma$-converges to the continuous one as the meshsize tends to zero, and propose a discrete gradient flow to compute discrete minimizers. Numerical experiments confirm the ability of the scheme to approximate configurations with defects.
Dirk Praetorius (TU Wien, Institute for Analysis and Scientific Computing)
Title: Axioms of adaptivity revisited: Optimal adaptive IGAFEM
Abstract: The axioms of adaptivity from [Carstensen et al., Comput. Math. Appl. 67, 2014] analyze under which assumptions on the a posteriori error estimator and the mesh-refinement strategy a mesh-refining adaptive algorithm yields convergence with optimal algebraic rates. In our talk, which is based on the recent work [Gantner et al., M3AS 27, 2017], we now address the question which properties of the FEM are sufficient to ensure that the usual weighted-residual error estimator is well-defined and satisfies the axioms of adaptivity. In particular, our analysis covers conforming FEM in the framework of isogeometric analysis with hierarchical splines.
The talk is based on joint work with Gregor Gantner (TU Wien).
Giancarlo Sangalli (Department of Mathematics, University of Pavia, Italy)
Title: Computational efficiency in isogeometric analysis
Abstract: The concept of $k$-refinement was proposed as one of the key features of isogeometric analysis, “a new, more efficient, higher-order concept”, in the seminal work by T.J.R. Hughes, J.A. Cottrell, and Y. Bazilevs, entitled “Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement”, published in Comput. Methods Appl. Mech. Engrg., Vol. 194, pp. 4135-4195 (2005). The idea of using splines (or NURBS, etc.) of high degree and continuity as the basis of a new high-order method appeared very promising from the beginning, and received confirmation from subsequent developments. $k$-refinement leads to several advantages: higher accuracy per degree of freedom, improved spectral accuracy, and the possibility of structure-preserving smooth discretizations are the most interesting features that have been actively studied in the community. At the same time, $k$-refinement brings significant challenges at the computational level: using standard finite element routines, its computational cost grows with respect to the degree, making degree raising computationally expensive. However, recent results confirm that $k$-refinement is superior, from the point of view of computational efficiency, to low-degree $h$-refinement, when a proper code design beyond standard finite element technology is adopted.
A fundamental ingredient is weighted quadrature, an ad-hoc strategy to compute the integrals of the Galerkin system. Its aim is to reduce the number of quadrature points to about two per element per direction, that is, a number independent of the spline degree. This talk is a presentation of the main ideas behind weighted quadrature and its application to the isogeometric $k$-refinement.
Andreas Schröder (University of Salzburg)
Title: Error estimates for primal-hybrid and dual-mixed $hp$-finite element methods
Abstract: $hp$-finite elements are well-established for the solution of partial differential equations, where high accuracy is needed. They rely on the adaptation of the mesh size as well as of the local polynomial degree, which is often done by using a posteriori error estimates. In many cases exponential convergence rates can be shown or at least observed in numerical experiments.
In this talk, we discuss $hp$-finite elements for the discretization of primal and (dual-)mixed formulations of variational equations as well as variational inequalities resulting from contact problems. The mixed methods are based on the introduction of a flux field in the $H({\rm div})$-space. Whereas the primal approaches require some continuity of the ansatz function along the edges of the finite element mesh, the dual-mixed approaches necessitate continuity in the normal direction of the edges, which results from the discretization of $H({\rm div})$. In both cases, hybridization techniques can be applied in order to enforce the desired continuity (at least in some Gauss points) and to enable the use of Lagrange-type basis functions along with their advantageous nodal properties. A focus of the talk is on the derivation of a posteriori error estimates, where we use some post-processing reconstructions of the potential in $H^1$. Two approaches of error control are considered: In the first approach, the post-processing reconstruction is explicitly computed, whereas in the second approach, a reconstruction is applied which does not require an explicit computation. The latter enables the direct use of the discrete potential instead of its reconstruction, which significantly improves the error estimation. The applicability of the estimates within adaptive schemes is demonstrated in several numerical experiments, in which efficiency indices and convergence rates are studied.
Iain Smears (Department of Mathematics, University College London, United Kingdom)
Title: Time-parallel iterative solvers for parabolic evolution equations: an inf-sup theoretic approach
Abstract: Many parallel-in-time methods can be viewed as iterative solvers for a large time-global nonsymmetric linear system arising from the discretization of a time-dependent equation.
The nonsymmetry of these systems represents a key challenge in the analysis, since the available theory for iterative methods for nonsymmetric systems is much more limited than for their symmetric counterparts.
In this talk, we will show how the underlying inf-sup theory of continuous and discretized parabolic problems provides an effective approach to the construction and rigorous analysis of parallel-in-time solvers. In particular, we consider the implicit Euler discretization of a general linear parabolic evolution equation with time-dependent self-adjoint spatial operators. We first show that the discrete system satisfies an inf-sup condition analogous to that of the underlying continuous operator. We use this to show that the standard nonsymmetric time-global system can be equivalently reformulated as a symmetric saddle-point system that remains inf-sup stable in the same norms. The essential idea is that the mapping from trial functions to their optimal test functions in the inf-sup condition defines a left-preconditioner that symmetrizes the system in a stable way.
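The symmetrization step can be illustrated on a small model problem (a hypothetical sketch under assumed matrices, not the speaker's formulation): for a nonsymmetric time-global matrix $A$ and an SPD matrix $X$ representing the norm, the saddle-point system that pairs $A$ with its transpose is symmetric, and its second block of unknowns reproduces the solution of $Au = b$.

```python
import numpy as np

N = 8
rng = np.random.default_rng(1)

# Toy time-global implicit Euler matrix: nonsymmetric, lower bidiagonal
# (hypothetical scalar stand-in for the parabolic time-stepping operator).
A = 1.2 * np.eye(N) - np.diag(np.ones(N - 1), -1)
# SPD "norm" matrix, stand-in for the inner product (.,.)_X.
X = np.diag(rng.uniform(1.0, 2.0, N))
b = rng.standard_normal(N)

# Symmetric saddle-point reformulation: find (w, u) with
#     X w + A u = b,    A^T w = 0.
# Since A is invertible, A^T w = 0 forces w = 0, hence A u = b.
Z = np.zeros((N, N))
S = np.block([[X, A], [A.T, Z]])          # symmetric, since X = X^T
sol = np.linalg.solve(S, np.concatenate([b, np.zeros(N)]))
u = sol[N:]                                # recovers the solution of A u = b
```

The point is that the reformulated matrix `S` is symmetric (so symmetric iterative solvers and their theory apply), while the solution of the original nonsymmetric system is preserved exactly.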
We then propose and analyse an inexact Uzawa method for the saddle-point reformulation based on an efficient parallel-in-time preconditioner. The preconditioner is non-intrusive and easy to implement in practice, since it simply combines existing spatial preconditioners with parallel Fast Fourier Transforms (FFT) in time. We prove robust spectral bounds, leading to convergence rates that are independent of the number of time-steps, the final time, and the spatial mesh sizes. The theoretical parallel complexity of the method then grows only logarithmically with respect to the number of time-steps, owing to the parallel FFT. Numerical experiments with large-scale parallel computations, using up to 131 072 processors and more than 2 billion unknowns, show the effectiveness of the method, along with its good weak and strong scaling properties.
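The FFT-in-time idea can be sketched as follows (an assumed toy model, not the actual preconditioner of the talk): the implicit Euler time-stepping matrix for the scalar ODE $u' = -\lambda u$ is lower bidiagonal; replacing it by its circulant counterpart gives a matrix that the discrete Fourier transform diagonalizes, so applying its inverse costs $O(N \log N)$ and is parallel across time steps.

```python
import numpy as np

N, tau, lam = 64, 0.1, 2.0

# First column of the circulant approximation of the implicit Euler
# stencil: (1 + tau*lam) on the diagonal, -1 on the subdiagonal
# (with a periodic wrap-around in the corner).
col = np.zeros(N)
col[0] = 1.0 + tau * lam
col[1] = -1.0

def circulant_solve(col, b):
    # A circulant matrix C with first column `col` has eigenvalues
    # fft(col), so C x = b is solved in Fourier space.
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(col)))

rhs = np.random.default_rng(0).standard_normal(N)
x = circulant_solve(col, rhs)   # one O(N log N) preconditioner application
```

In a full parallel-in-time solver, each Fourier mode additionally carries an independent spatial problem, which is where existing spatial preconditioners enter.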
Joint work with Martin Neumüller, Linz.
Olaf Steinbach (Institute of Applied Mathematics, TU Graz)
Title: Coercive space-time finite element methods
Abstract: In this talk we give an overview of recent work on coercive space-time variational formulations for both the heat and the wave equation. Using a modified Hilbert transformation we end up with Galerkin-Bubnov finite element methods. We also present some numerical results to show the potential of the proposed approach, in particular when considering adaptive discretizations in space and time, and the construction of parallel solution methods for time-dependent problems.
The talk is based on joint work with M. Zank and H. Yang.
Rob Stevenson (Korteweg-de Vries (KdV) Institute for Mathematics, University of Amsterdam)
Title: Optimal preconditioners of linear complexity for problems of negative order discretized on locally refined meshes
Abstract: Discretized operators of negative order arise by the application of the boundary element method, or as Schur complements in domain decomposition methods.
Using a boundedly invertible operator of opposite order discretized by continuous piecewise linears, we construct an optimal preconditioner for operators of negative order discretized by (dis)continuous piecewise polynomials of arbitrary order.
Our method ([5]) is a variation of the well-studied dual mesh preconditioning technique [4,3,1].
Compared to earlier proposals, it has the advantages that it does not require the inverse of a non-diagonal matrix, it applies without any mild grading assumption on the mesh, and it does not require a barycentric refinement of the mesh underlying the trial space.
The cost of the preconditioner is the sum of the cost of the discretized opposite order operator plus a cost that scales linearly in the number of unknowns. Thinking of the canonical example of the single layer operator, an obvious choice for the operator of opposite order is the hypersingular operator. Aiming at a preconditioner of optimal complexity, however, we wish to apply a multi-level operator instead, as the one proposed in [2].
In contrast to the case of operators of positive order, optimal (multi-level) preconditioners of linear complexity do not yet seem to be available on locally refined meshes. We modify the method from [2] so that it applies on locally refined meshes.
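The effect of opposite-order preconditioning can be demonstrated with a self-contained 1D toy (a hypothetical sketch, not the BEM setting of the talk): mimic a negative-order operator by the Galerkin matrix of $(-\Delta + I)^{-1}$ built from P1 stiffness and mass matrices, and precondition it with the opposite-order stiffness operator coupled through mass matrices. The condition number of the preconditioned system stays bounded under mesh refinement, while that of the original matrix grows.

```python
import numpy as np

def fem_matrices(n):
    # P1 stiffness K and mass M on (0,1), uniform mesh, Dirichlet BCs.
    h = 1.0 / (n + 1)
    K = (2 * np.eye(n) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    M = (4 * np.eye(n) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) * h / 6
    return K, M

conds = {}
for n in (25, 100):
    K, M = fem_matrices(n)
    # Toy Galerkin matrix of the negative-order operator (-Laplace + I)^{-1}.
    A = M @ np.linalg.solve(K + M, M)
    # Opposite-order preconditioner: the stiffness matrix K, coupled
    # through mass matrices, as in operator preconditioning.
    C = np.linalg.solve(M, K @ np.linalg.solve(M, A))
    conds[n] = (np.linalg.cond(A), np.linalg.cond(C))
# cond(A) grows like h^{-2}; cond(C) stays O(1) under refinement.
```

The actual construction of the talk replaces the dense inverse by a multi-level operator of linear complexity; this toy only shows why an opposite-order operator is the right preconditioner.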
References:
[1] A. Buffa and S. H. Christiansen. A dual finite element complex on the barycentric refinement. Math. Comp., 76(260):1743–1769, 2007.
[2] J. H. Bramble, J. E. Pasciak, and P. S. Vassilevski. Computational scales of Sobolev norms with application to preconditioning. Math. Comp., 69(230):463–480, 2000.
[3] R. Hiptmair. Operator preconditioning. Comput. Math. Appl., 52(5):699–706, 2006.
[4] O. Steinbach and W. L. Wendland. The construction of some efficient preconditioners in the boundary element method. Adv. Comput. Math., 9(1-2):191–216, 1998. Numerical treatment of boundary integral equations.
[5] R. P. Stevenson and R. van Venetië. Optimal preconditioning for problems of negative order, 2018. arXiv:1803.05226. Submitted.
Thomas Takacs (Johannes Kepler University Linz)
Title: $C^1$-smooth isogeometric spaces on multi-patch domains
Abstract: Multi-patch spline parametrizations are used in geometric design and isogeometric analysis to represent physical domains having a geometrically complex shape. Over these multi-patch domains we can define isogeometric spaces in a patch-wise manner, by composing a B-spline on every patch with the inverse of the geometry parametrization. If the isogeometric spaces are constructed to be $C^1$-smooth within the physical domain, they can be used for standard Galerkin discretizations of $4^{th}$ order PDEs.
We consider a particular class of $C^0$-smooth planar multi-patch B-spline parametrizations, satisfying specific geometric continuity constraints, which allow the construction of $C^1$-smooth isogeometric spaces with optimal approximation properties. We characterize those spaces and study their properties and suitability for isogeometric analysis. Moreover, we discuss the construction of a basis for the $C^1$-smooth spaces, which generalizes the idea of the Argyris finite element to tensor-product spline patches.
The work presented here is based on a collaboration with Mario Kapl and Giancarlo Sangalli.
Rafael Vázquez (Ecole Polytechnique Fédérale de Lausanne)
Title: Dual complex for structure preserving isogeometric methods
Abstract: Isogeometric methods discretize partial differential equations using the functions employed in Computer Aided Design, such as NURBS or B-splines, to approximate the discrete solution. Structure-preserving isogeometric methods were first introduced in [1], and have since been successfully used in fluid mechanics and computational electromagnetism. The methods developed in [1] are based on a De Rham complex of B-spline spaces, which can be seen as a generalization of edge and face finite elements with higher continuity across elements.
In the context of finite elements, a dual finite element complex, defined on a new mesh obtained by barycentric refinement, was introduced by Buffa and Christiansen in [2]. In this work we develop a dual spline complex for isogeometric methods. Compared to the dual complex in [2], the dual spline complex we present is much easier to construct, thanks to the tensor-product structure of B-splines. We will also show preliminary work on generalizing the construction to domains formed by the union of several patches, which would extend our setting to arbitrary topology.
[1] A. Buffa, G. Sangalli, R. Vázquez, Isogeometric analysis in electromagnetics: B-splines approximation, Comput. Methods Appl. Mech. Engrg. 199 (2010), 1143-1152.
[2] A. Buffa and S. Christiansen, A dual finite element complex on the barycentric refinement, Math. Comp. 76 (2007) 1743-1769.