Thematic Programme: Numerical Analysis of Complex PDE Models in the Sciences
Workshop 1: Interplay of tensor structured formats with advanced PDE discretizations; session on Signal processing techniques and directionally adapted discretizations
Abstracts
Markus Bachmayr (Universität Bonn)
Title: Stability of Low-Rank Tensor Representations and Structured Multilevel Preconditioning for Elliptic PDEs (Part I)
Abstract: Folding grid-value vectors into high-order tensors in combination with low-rank representation in the tensor-train format leads to highly efficient approximations for various classes of functions. These include solutions of elliptic PDEs on nonsmooth domains or with oscillatory data, for which simple discretizations parametrized by low-rank tensors have been shown to result in highly compressed, adaptive approximations.
Straightforward choices of the underlying basis, such as piecewise multilinear finite elements on uniform tensor product grids, lead to the well-known matrix ill-conditioning of discretized operators. We demonstrate that for low-rank representations, the use of tensor structure can additionally lead to representation ill-conditioning, a new effect specific to computations in tensor networks.
We address this problem in the context of FE discretizations by a suitable representation of a BPX preconditioner. Our construction will be discussed in further detail in the talk of V. Kazeev (Part II).
This is joint work with Vladimir Kazeev.
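The folding idea is easy to prototype. The following NumPy sketch (a generic TT-SVD compression routine, not the authors' implementation; the function $f(x)=x$ and grid depth are arbitrary choices) quantizes the $2^L$ grid values of $f$ into an $L$-way $2\times\cdots\times 2$ tensor and reads off the tensor-train ranks, which remain bounded by 2 for this function:

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    # Successive truncated SVDs turn a d-way tensor into tensor-train cores.
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(1, -1)
    for n in dims[:-1]:
        mat = mat.reshape(rank * n, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :r_new].reshape(rank, n, r_new))
        mat = s[:r_new, None] * Vt[:r_new]
        rank = r_new
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

# Quantization: fold the 2^L grid values of f(x) = x into an L-way 2 x ... x 2 tensor.
L = 12
x = np.linspace(0.0, 1.0, 2**L)
cores = tt_svd(x.reshape((2,) * L))
print("QTT ranks:", [c.shape[2] for c in cores])  # bounded by 2 for f(x) = x
```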
Mario Bebendorf (University of Bayreuth)
Title: Degenerate approximation of Green's function in the presence of high-contrast coefficients
Abstract: Hierarchical matrices are well suited for treating non-local operators with logarithmic-linear complexity. In particular, the inverse and the factors of the LU decomposition of finite element discretizations of elliptic boundary value problems can be approximated with such structures.
However, the proof of this result exhibits a strong dependence of the local rank on the ratio of the largest and smallest coefficients of the differential operator with respect to the $L^2$-norm. Nevertheless, this kind of dependence cannot be observed in practice. The aim of this talk is to show that the above dependence can also be avoided theoretically with respect to a suitable norm. From this, a logarithmic dependence with respect to the $L^2$-norm can be deduced.
Steffen Börm (Department of Mathematics, University of Kiel)
Title: Hybrid compression of boundary element matrices for high-frequency Helmholtz problems
Abstract: Boundary element methods lead to dense matrices that have to be approximated efficiently in order to allow us to treat complicated three-dimensional geometries. The oscillatory behaviour of the kernel function associated with the high-frequency Helmholtz equation poses a challenge that can be overcome by suitably modified approximation schemes.
One of these schemes is directional interpolation: the oscillatory kernel function is split into a plane wave and a remainder that is smooth within a cone around this wave's direction. Interpolating the remainder yields a degenerate approximation that can be used to obtain a low-rank approximation.
Although this approach is fast and robust, it may lead to very high storage requirements. This problem can be solved by algebraic recompression: combining orthogonal projections and rank-revealing factorizations, the ranks can be significantly reduced. The resulting hybrid compression algorithm inherits the robustness and accuracy of the analytic approximation and the almost optimal storage requirements of algebraic compression methods.
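The effect behind directional interpolation is simple to observe numerically. In the NumPy toy below (illustrative only: the wavenumber, cluster geometry, and tolerance are arbitrary choices, and this is not Börm's algorithm), dividing a Helmholtz kernel block between two well-separated clusters by the plane wave in the separation direction drastically reduces its numerical rank:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 100.0                                  # wavenumber (high-frequency regime)
X = rng.uniform(0.0, 1.0, (300, 3))            # source cluster near the origin
Y = rng.uniform(0.0, 1.0, (300, 3))
Y[:, 0] += 50.0                                # well-separated target cluster
c = np.array([-1.0, 0.0, 0.0])                 # direction of x - y for these clusters

D = X[:, None, :] - Y[None, :, :]              # pairwise differences x - y
r = np.linalg.norm(D, axis=2)
K = np.exp(1j * kappa * r) / r                 # Helmholtz kernel block
Kdir = K * np.exp(-1j * kappa * (D @ c))       # divide out the plane wave exp(i*kappa*<c, x-y>)

for name, M in (("raw kernel block", K), ("directional remainder", Kdir)):
    s = np.linalg.svd(M, compute_uv=False)
    print(name, "numerical rank:", int(np.sum(s > 1e-6 * s[0])))
```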
Wolfgang Dahmen (Mathematics Department, University of South Carolina)
Title: Data and Reduced Models
Abstract: Given data obtained from applying a fixed finite number of linear functionals to an unknown state, we discuss the recovery of such a state under the assumption that it belongs to a certain compact set in a Hilbert space. Of particular interest is the case where the compact set is the solution manifold $\mathcal{M}$ for a family of parameter dependent PDEs and our knowledge about $\mathcal{M}$ consists in a (reduced) subspace approximating the manifold within a certain accuracy. We identify the best possible recovery error and a corresponding optimal algorithm. This highlights, in particular, the role of the angle between the reduced space and the space spanned by the Riesz-lifted measurement functionals. We further indicate several developments building on this result such as the construction of reduced spaces which are optimal for the given measurement functionals or data dependent reduced spaces for parameter estimation.
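A finite-dimensional illustration of this recovery problem (a hedged sketch: the ambient space, the reduced space $V$, and the measurement space $W$ are random stand-ins for the objects of the talk) computes the estimator that corrects a best fit in $V$ by the measurement mismatch, together with the angle constant $\beta(V,W)$ that governs the recovery error:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, n = 200, 20, 5
W = np.linalg.qr(rng.standard_normal((N, m)))[0]  # span of Riesz-lifted measurement functionals
V = np.linalg.qr(rng.standard_normal((N, n)))[0]  # reduced space approximating the manifold

# Angle between V and W: smallest singular value of the cross-Gramian W^T V
beta = np.linalg.svd(W.T @ V, compute_uv=False).min()

u = V @ rng.standard_normal(n) + 1e-2 * rng.standard_normal(N)  # unknown state, close to V
w = W.T @ u                                                     # the observed data

# Best fit in V to the data, then correct by the measurement mismatch in W
v = V @ np.linalg.lstsq(W.T @ V, w, rcond=None)[0]
u_hat = v + W @ (w - W.T @ v)
print("beta(V, W) =", beta, " recovery error:", np.linalg.norm(u - u_hat))
```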
Leszek Demkowicz (Institute for Computational Engineering and Sciences (ICES), The University of Texas at Austin)
Title: Current Research Results on the DPG Method for Wave Propagation Problems
Abstract: I will provide a short overview of two ongoing projects dealing with the application of Discontinuous Petrov-Galerkin (DPG) technology to challenging wave propagation problems. The first deals with the modeling of optical amplifiers (nonlinear Maxwell equations coupled with the heat equation, [1]); the second with the construction of multigrid preconditioners for high-frequency acoustics and electromagnetics problems [2].
References:
[1] Sriram Nagaraj, DPG Methods for Nonlinear Fiber Optics, Ph.D. dissertation, The University of Texas, April 2018.
[2] S. Petrides and L. Demkowicz, An Adaptive DPG Method for High Frequency Time-harmonic Wave Propagation Problems, Comput. Math. Appl., 74(8): 1999-2017, 2017.
Sergey Dolgov (University of Bath)
Title: Solving large ODEs with conservation laws by low rank decompositions
Abstract: We propose an algorithm for the solution of large-scale evolutionary equations (ODEs and discretized time-dependent PDEs), assuming that the solution and the right-hand side admit a compressed approximation via low-rank matrices or tensors.
The scheme begins with a collocation discretization in time, which turns a linear ODE into a linear system of algebraic equations. A low rank approximation to the solution of this system is computed via an alternating iteration, without solving the original large problem.
This introduces a certain approximation error; however, we show that conservation laws defined by linear functions of the solution can be preserved in the low-rank scheme to machine precision.
One example of such functions is the probability normalization in the Fokker-Planck or chemical master equations. Moreover, the low-rank representation allows cheap error estimation, and hence adaptation of time intervals, interleaved with adaptation of tensor ranks. Numerical experiments demonstrate that this fully adaptive normalization-preserving solution can be faster and more accurate than stochastic simulation algorithms.
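As a hedged illustration of the all-at-once structure (using implicit Euler in place of the collocation scheme of the talk, and a randomly generated Markov generator), the sketch below assembles the space-time system as a sum of Kronecker products and checks that the probability normalization, a linear function of the solution, is conserved to machine precision by the exact solve; the point of the talk is that this property can be retained under low-rank truncation:

```python
import numpy as np
from scipy.sparse import diags, eye, kron
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(2)
n, nt, tau = 64, 50, 0.01
# Markov generator: nonnegative off-diagonal rates with zero column sums, so the
# total probability (a linear function of the solution) is conserved by u' = A u.
A = rng.random((n, n)); np.fill_diagonal(A, 0.0)
A -= np.diag(A.sum(axis=0))
u0 = np.ones(n) / n

# All-at-once implicit Euler: (B (x) I - I (x) A) vec(U) = rhs, B bidiagonal in time.
B = diags([np.full(nt, 1.0/tau), np.full(nt - 1, -1.0/tau)], [0, -1])
M = (kron(B, eye(n)) - kron(eye(nt), A)).tocsc()
rhs = np.zeros(n * nt); rhs[:n] = u0 / tau
U = spsolve(M, rhs).reshape(nt, n)              # row k holds the state at time (k+1)*tau
print("max drift of total probability:", np.abs(U.sum(axis=1) - 1.0).max())
```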
Dennis Elbrächter (University of Vienna)
Title: Approximation of high dimensional tensor products using deep ReLU networks
Abstract: We consider the construction of deep ReLU networks capable of efficiently approximating tensor products. Combined with results on the approximation of well-behaved (i.e., fulfilling certain smoothness properties) one-dimensional functions, this provides insights into the efficient approximation of high-dimensional functions with tensor structure. In particular, we apply this to the example of the Black-Scholes model for European maximum option pricing, where it can be shown that the (explicitly known) solution to the $d$-dimensional problem can, for every $n\in\mathbb{N}$, be approximated up to an $\epsilon$-error by a neural network of size $\mathcal{O}(d^{2+\frac{1}{n}}\epsilon^{-\frac{1}{n}})$. We further hope that the presented techniques can be used to derive similar results for sufficiently structured PDEs.
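A standard construction behind such product networks (a sketch of Yarotsky's ReLU approximation of $x^2$, on which products are typically built via the polarization identity; this is illustrative, not the authors' exact network) can be checked in a few lines of NumPy:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # Hat function on [0, 1] expressed as a one-layer ReLU combination
    return 2*relu(x) - 4*relu(x - 0.5) + 2*relu(x - 1.0)

def relu_square(x, m):
    # Yarotsky's construction: x^2 on [0, 1] from m composed sawtooth corrections.
    approx, g = x.copy(), x.copy()
    for s in range(1, m + 1):
        g = hat(g)                  # s-fold composition: a sawtooth with 2^(s-1) teeth
        approx -= g / 4**s
    return approx

x = np.linspace(0.0, 1.0, 1001)
for m in (2, 4, 6, 8):
    err = np.abs(relu_square(x, m) - x**2).max()
    print(f"m = {m}: sup error = {err:.2e}")    # decays like 4^(-m)
# Products then follow from the identity x*y = ((x+y)^2 - x^2 - y^2)/2.
```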
Michael Feischl (KIT - Karlsruhe Institute of Technology)
Title: H-Matrices and Multi-index Monte Carlo
Abstract: We consider a new method to generate normal or log-normal random fields which builds on fast matrix-vector multiplication via H-matrices. The method proves to be robust with respect to the covariance length of the random field, and is particularly efficient for very smooth and very rough random fields. Moreover, the method applies to a fairly general class of covariance functions and is not limited to the stationary case. We use this new method in combination with Monte Carlo integration to solve a Poisson equation with random coefficient. Moreover, and independently of the H-matrix approach, we exploit the inherent sparsity of the problem and propose a new Multi-Index Monte Carlo approach in three parameters: the finite-element approximation error, the approximation error of the random field, and the integration error of the Monte Carlo rule. The rigorous results enable the user to significantly save on computational time.
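For orientation, here is a dense reference sampler for a stationary Gaussian random field (a hedged sketch with an arbitrarily chosen exponential covariance and grid; H-matrix arithmetic is what replaces the cubic-cost factorization below for large grids):

```python
import numpy as np

n, ell = 500, 0.1                       # grid size and covariance length
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # exponential covariance matrix

# Symmetric square root via eigendecomposition (a Cholesky factor also works)
w, Q = np.linalg.eigh(C)
C_half = Q @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ Q.T

z = np.random.default_rng(3).standard_normal(n)
field = C_half @ z                      # normal field; np.exp(field) gives a log-normal one
```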
Helmut Harbrecht (Departement Mathematik und Informatik, Universität Basel, Schweiz)
Title: Second moment analysis for boundary value problems with random input parameters
Abstract: The present talk is dedicated to the computation of the statistical moments of elliptic boundary value problems with random input parameters. By applying a first or second order Taylor expansion with respect to the input parameter under consideration, one can approximate the impact of the random perturbation on the solution. Thus, a representation of the solution is obtained which is second or even third order accurate in the size of the perturbation's amplitude. The major advantage of this approach is that one arrives at purely deterministic equations for the solution's moments.
We discuss the solution of the respective equations by the sparse grid combination technique, by low-rank approximation, and also by $\mathcal{H}$-matrix techniques which can be seen as an adaptive low-rank approximation. Numerical results are presented to verify and quantify the proposed approach.
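As a point of reference, a minimal sketch of the first-order second moment calculus (under the standard assumptions of a centered perturbation $\delta$ of amplitude $\varepsilon$ around the nominal input $\bar a$): expanding $u(\bar a+\delta) = u(\bar a) + u'(\bar a)\delta + O(\varepsilon^2)$ and taking moments yields
\[ \mathbb{E}[u] = u(\bar a) + O(\varepsilon^2), \qquad \operatorname{Cor}[u - u(\bar a)] = \bigl(u'(\bar a)\otimes u'(\bar a)\bigr)\operatorname{Cor}[\delta] + O(\varepsilon^3), \]
so that, for a linear problem $Au = f$ with random right-hand side, the correlation solves the purely deterministic tensor-product equation $(A\otimes A)\operatorname{Cor}[u] = \operatorname{Cor}[f]$.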
Clemens Hofreither (Institute of Computational Mathematics, Johannes Kepler University Linz)
Title: Applications of Low-Rank Tensor Methods in Isogeometric Analysis
Abstract: This talk is focused on uses of low-rank tensor approximation methods for the solution of partial differential equations using the Isogeometric Analysis (IgA) methodology. The extensive use of tensor product discretizations (e.g., tensor product B-splines or NURBS) in IgA makes it a particularly attractive target for the use of tensor methods, and the concept of the geometry transform allows the extension of tensor methods from tensor product domains to a much more general class of domains.
After a brief introduction to IgA, we describe a method for accelerating the assembly of the resulting stiffness matrices, a significant bottleneck in IgA. The procedure exploits the facts that (a) the IgA stiffness matrices have block-banded structure, and (b) they often have low Kronecker rank. Combined, these two properties allow us to reorder the nonzero entries of the stiffness matrix into a relatively small, dense matrix or tensor of low rank. A suitable black-box low-rank approximation algorithm is then applied to this matrix or tensor, which allows its fast and accurate computation. Numerical experiments demonstrate speedups up to several orders of magnitude over full Gauss quadrature in multiple examples.
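The reordering step can be prototyped generically (a hedged sketch of the Van Loan-Pitsianis nearest-Kronecker-product rearrangement, not the block-banded variant of the talk): the Kronecker rank of a matrix equals the ordinary rank of a rearranged matrix, to which any low-rank approximation algorithm can be applied:

```python
import numpy as np

def kron_rank_approx(M, shape_a, shape_b, rank):
    # Approximate M by a sum of `rank` Kronecker products A_k (x) B_k:
    # rearrange M so that Kronecker rank becomes ordinary matrix rank, then SVD.
    p, q = shape_a; r, s = shape_b
    R = M.reshape(p, r, q, s).transpose(0, 2, 1, 3).reshape(p*q, r*s)
    U, sv, Vt = np.linalg.svd(R, full_matrices=False)
    return [(np.sqrt(sv[k]) * U[:, k].reshape(p, q),
             np.sqrt(sv[k]) * Vt[k].reshape(r, s)) for k in range(rank)]

# Toy check: a 2-term Kronecker sum is recovered exactly with rank 2
A1, B1 = np.random.rand(4, 4), np.random.rand(5, 5)
A2, B2 = np.random.rand(4, 4), np.random.rand(5, 5)
M = np.kron(A1, B1) + np.kron(A2, B2)
approx = sum(np.kron(A, B) for A, B in kron_rank_approx(M, (4, 4), (5, 5), 2))
print("reconstruction error:", np.linalg.norm(M - approx))   # ~ 1e-14
```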
Very recently, first efforts have been made towards accelerating also the solution of the linear systems arising in IgA problems by tensor methods. We describe here a novel approach based on greedily constructing suitable tensor product subspaces in which the solution of the linear system is approximated by means of Galerkin projection. This results in approximations to the solution as tensors in Tucker format. In each iteration, the tensor product subspace is enriched according to a rank one tensor which minimizes the residual and is computed using a modified Alternating Least Squares method. We demonstrate the performance of this algorithm on some numerical examples and give an outlook to future work.
Guido Kanschat (Heidelberg University)
Title: Tensor based inversion for overlapping Schwarz smoothers
Abstract: Overlapping Schwarz methods using local spaces supported on vertex patches as smoothers for multigrid algorithms were introduced by Arnold, Falk, and Winther in 1997. They provide robust convergence in particular for divergence-constrained problems and almost incompressible elasticity. Furthermore, experiments suggest that convergence rates for discontinuous Galerkin discretization are independent of the penalty parameter, which is particularly important for higher order methods. Their major drawback lies in the fact that fairly big local problems have to be solved. Here, inversion methods based on sum factorization and a tensor decomposition provide a tool, which reduces the effort for smoothing almost to that of a regular operator application. We show that these methods can be implemented efficiently and yield high performance due to their computational intensity for high order spaces.
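The underlying tensor-based inversion can be illustrated by classical fast diagonalization (a hedged NumPy/SciPy sketch of the Lynch-Rice-Thomas idea for a two-dimensional separable operator; the local patch problems of the talk are more involved): one generalized eigendecomposition replaces the inversion of an $n^2\times n^2$ Kronecker-sum matrix:

```python
import numpy as np
from scipy.linalg import eigh

def fast_diag_solve(A, M, B):
    # Solve A X M + M X A = B via one generalized eigendecomposition
    # A Z = M Z diag(lam) with Z^T M Z = I (fast diagonalization).
    lam, Z = eigh(A, M)
    W = Z.T @ B @ Z
    W /= lam[:, None] + lam[None, :]    # divide by all eigenvalue sums at once
    return Z @ W @ Z.T

rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # SPD stiffness-like matrix
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)  # SPD mass-like matrix
B = rng.standard_normal((n, n))

X = fast_diag_solve(A, M, B)
residual = np.kron(A, M) @ X.ravel() + np.kron(M, A) @ X.ravel() - B.ravel()
print("residual norm:", np.linalg.norm(residual))   # ~ 1e-13, at O(n^3) cost
```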
Vladimir Kazeev (Stanford University)
Title: Stability of Low-Rank Tensor Representations and Structured Multilevel Preconditioning for Elliptic PDEs (Part II)
Abstract: Folding grid-value vectors into high-order tensors in combination with low-rank representation in the tensor-train format leads to highly efficient approximations for various classes of functions. These include solutions of elliptic PDEs on nonsmooth domains or with oscillatory data, for which simple discretizations parametrized by low-rank tensors have been shown to result in highly compressed, adaptive approximations. As discussed in the talk of M. Bachmayr (Part I), the solution of corresponding linear systems in the low-rank form poses additional difficulties associated with the representation ill-conditioning of discrete operators.
We show that the issues of matrix ill-conditioning and representation ill-conditioning can be circumvented simultaneously by carefully combining classical ideas of multilevel preconditioning and more recent techniques for the explicit low-rank representation of matrices. Specifically, we construct an explicit tensor-structured representation of a BPX preconditioner, with ranks independent of the number of discretization levels. Combined with a carefully constructed representation of differentiation and $L_2$ projection, it allows us to avoid both matrix and representation ill-conditioning. Numerical tests, including problems with highly oscillatory coefficients, show that our result paves the way to reliable and efficient solvers that remain stable for mesh sizes near machine precision (for up to $2^{50}\approx 10^{15}$ nodes in each dimension).
This is joint work with Markus Bachmayr.
Boris Khoromskij (Max-Planck-Institute for Mathematics in the Sciences) (joint talk with Venera Khoromskaia)
Title: Tensor numerical methods in computational quantum chemistry
Abstract: Rank-structured tensor approximation of functions and operators in the traditional canonical, Tucker and tensor train (TT) formats allows for complexity scaling linearly in the dimension. Further data compression to logarithmic scale can be achieved by using the method of quantized TT (QTT) approximation. The novel range-separated tensor format allows for an efficient low-rank representation of highly unstructured many-particle interaction potentials with multiple cusps.
We discuss how tensor numerical methods based on different tensor approximation techniques apply to the solution of complicated multi-dimensional PDEs arising in computational quantum chemistry. We sketch new tensor methods for the calculation of electrostatic potentials of large bio-molecules (the Poisson-Boltzmann equation), and for the solution of (post-)Hartree-Fock eigenvalue problems for the estimation of excitation energies and densities of states for optical spectra of molecules (the Bethe-Salpeter equation).
The efficiency of the tensor approach is illustrated by numerical examples.
http://personal-homepages.mis.mpg.de/bokh; bokh@mis.mpg.de
Melvin Leok (University of California, San Diego)
Title: Variational discretizations of gauge field theories using group-equivariant interpolation spaces
Abstract: Variational integrators are geometric structure-preserving numerical methods that preserve the symplectic structure, satisfy a discrete Noether's theorem, and exhibit excellent long-time energy stability properties. An exact discrete Lagrangian arises from Jacobi's solution of the Hamilton–Jacobi equation, and it generates the exact flow of a Lagrangian system. By approximating the exact discrete Lagrangian using an appropriate choice of interpolation space and quadrature rule, we obtain a systematic approach for constructing variational integrators. The convergence rates of such variational integrators are related to the best approximation properties of the interpolation space.
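The recipe in the last sentences can be made concrete in a few lines (a hedged sketch for the planar pendulum, with a midpoint-rule discrete Lagrangian chosen arbitrarily as the interpolation/quadrature approximation; an illustration of the general construction, not the integrators of the talk):

```python
import numpy as np
from scipy.optimize import fsolve

# Pendulum L(q, v) = v^2/2 - V(q) with V(q) = -cos(q), and the midpoint-rule
# discrete Lagrangian L_d(q0, q1) = h * L((q0 + q1)/2, (q1 - q0)/h).
h, steps = 0.1, 2000
Vp = np.sin                      # V'(q)

def del_eq(q_next, q_prev, q):
    # Discrete Euler-Lagrange equation: D2 L_d(q_prev, q) + D1 L_d(q, q_next) = 0
    return ((q - q_prev)/h - (h/2)*Vp((q_prev + q)/2)
            - (q_next - q)/h - (h/2)*Vp((q + q_next)/2))

q = [0.0, 0.1]                   # two initial positions encode position and momentum
for _ in range(steps):
    q.append(fsolve(del_eq, 2*q[-1] - q[-2], args=(q[-2], q[-1]))[0])

# Momentum via the discrete Legendre transform p_k = D2 L_d(q_{k-1}, q_k)
q = np.array(q)
p = (q[1:] - q[:-1])/h - (h/2)*Vp((q[1:] + q[:-1])/2)
E = p**2/2 - np.cos(q[1:])
print("energy oscillation over run:", E.max() - E.min())   # bounded, no secular drift
```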
Many gauge field theories can be formulated variationally using a multisymplectic Lagrangian formulation, and we will present a characterization of the exact generating functionals that generate the multisymplectic relation. By discretizing these using group-equivariant spacetime finite element spaces, we obtain methods that exhibit a discrete multimomentum conservation law. We will then briefly describe an approach for constructing group-equivariant interpolation spaces that take values in the space of Lorentzian metrics that can be efficiently computed using a generalized polar decomposition. The goal is to eventually apply this to the construction of variational discretizations of general relativity, which is a second-order gauge field theory whose configuration manifold is the space of Lorentzian metrics.
Lek-Heng Lim (University of Chicago)
Title: Tensor network ranks
Abstract: In problems involving approximation, completion, denoising, dimension reduction, estimation, interpolation, modeling, order reduction, regression, etc., we argue that the near-universal practice of assuming that a function, matrix, or tensor (which we will see are all the same object in this context) has low rank may be ill-justified. There are many natural instances where the object in question has high rank with respect to the classical notions of rank: matrix rank, tensor rank, multilinear rank – the latter two being the most straightforward generalizations of the former. To remedy this, we show that one may vastly expand these classical notions of rank: given any undirected graph $G$, there is a notion of $G$-rank associated with $G$, which provides us with as many different kinds of ranks as there are undirected graphs. In particular, the popular tensor network states in physics (e.g., mps, ttns, peps, ctns) may be regarded as functions of a specific $G$-rank for various choices of $G$. Among other things, we will see that a function, matrix, or tensor may have very high matrix, tensor, or multilinear rank and yet very low $G$-rank for some $G$. In fact the differences are orders of magnitude: the gaps between $G$-ranks and these classical ranks are arbitrarily large for some important objects in computer science, mathematics, and physics. Furthermore, we show that there is a $G$ such that almost every tensor has $G$-rank exponentially lower than its rank or the dimension of its ambient space.
Anthony Nouy (Centrale Nantes)
Title: Adaptive sampling for learning tree tensor networks
Abstract: We present an extension of principal component analysis (PCA) for functions of multiple random variables and an associated algorithm for the approximation of such functions using tree-based low-rank formats (tree tensor networks) [1]. The algorithm only requires evaluations of functions on a structured set of points which is constructed adaptively. It relies on the estimation of principal subspaces through empirical PCA of partial evaluations of the function, and on the projection of the function onto tensor products of these subspaces. Each projection onto a given subspace is obtained by a least-squares method using random samples drawn from an optimal distribution. The algorithm is able to provide an approximation in any tree-based format with either a prescribed rank or a prescribed relative error, with a number of evaluations of the order of the storage complexity of the approximation format.
Joint work with C. Haberstich and G. Perrin.
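The core estimation step admits a simple illustration (a hedged sketch: the toy function, grid, and sample sizes are arbitrary choices): the principal subspace of a bivariate function in its first variable is estimated from the SVD of a matrix of partial evaluations:

```python
import numpy as np

def f(x, y):
    return 1.0 / (1.0 + x + 2.0*y)           # toy function on [0, 1]^2

rng = np.random.default_rng(5)
x_grid = np.linspace(0.0, 1.0, 100)          # discretization of the x-variable
y_samples = rng.uniform(0.0, 1.0, 40)        # random partial evaluations in y
F = f(x_grid[:, None], y_samples[None, :])   # column j is the function x -> f(x, y_j)

U, s, _ = np.linalg.svd(F, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))
basis = U[:, :r]                             # estimated principal subspace for x
print("estimated dimension:", r, "singular value decay:", s[:5] / s[0])
```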
References:
[1] A. Nouy. Higher-order principal component analysis for the approximation of tensors in tree-based low rank formats. arXiv preprint, 2017.
Philipp Christian Petersen (TU Berlin)
Title: Neural Networks and Partial Differential Equations: Challenges and Opportunities
Abstract: Novel machine learning techniques based on deep learning, i.e., the data-driven manipulation of neural networks, have reported remarkable results in many areas such as image classification, game intelligence, or speech recognition. Driven by these successes, many scholars have started using them in areas which do not focus on traditional machine learning tasks.
For instance, more and more researchers are employing neural networks to develop tools for the discretisation and solution of partial differential equations.
Two reasons can be identified as the driving forces behind the increased interest in neural networks in the area of the numerical analysis of PDEs.
On the one hand, powerful approximation theoretical results have been established which demonstrate that neural networks can represent functions from the most relevant function classes with a minimal number of parameters. On the other hand, highly efficient machine learning techniques for the training of these networks are now available and can be used as a black box.
In this talk, we will give an overview of some approaches towards the numerical treatment of PDEs with neural networks and study the two aspects above. We will recall some classical and some novel approximation theoretical results and tie these results to PDE discretisation. Afterwards, providing a counterpoint, we analyse the structure of network spaces and deduce considerable problems for the black box solver. In particular, we will identify a number of structural properties of the set of neural networks that render optimisation over this set especially challenging and sometimes impossible.
The talk is based on joint work with Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Felix Voigtlaender, and Mones Raslan.
Maxim Rakhuba (ETH Zurich, Switzerland)
Title: Tensor methods for high-dimensional eigenvalue problems
Abstract: In this talk we focus on the solution of high-dimensional eigenvalue problems which arise, for example, in quantum chemistry applications. Although tensor methods have proven to be an efficient tool for the approximation of solutions to such problems, their performance deteriorates quickly as the number of eigenstates to be computed increases. We address this issue by designing a new algorithm, motivated by ideas from Riemannian optimization, for the approximation of multiple eigenstates in the tensor-train format. The proposed method is implemented in the TensorFlow software library and hence allows for a natural GPU parallelization. We showcase our method by calculating hundreds of energy levels in molecules using the vibrational Schrödinger equation. As an additional application we consider a nonlinear eigenvalue problem arising from density functional theory equations and discuss the difficulties connected with its nonlinearity and ill-conditioning on fine grids.
Holger Rauhut (RWTH Aachen University)
Title: Multilevel compressive sensing methods for parametric PDEs
Abstract: Compressive sensing (CS) enables accurate recovery of approximately sparse vectors from incomplete information. We present and analyze a method for numerically solving parametric PDEs based on CS methods. One can show that the solution of certain parametric operator equations is analytic in the parameters which can be exploited to show convergence rates for nonlinear (sparse) approximation. Building on this fact, CS can be used to compute approximations from samples (snapshots) of the parametric operator equation for randomly chosen parameters, which in turn can be computed by standard techniques including Petrov-Galerkin methods. We provide theoretical approximation rates for this scheme. Moreover, we then pass to a multilevel version of this scheme, similarly to multilevel Monte Carlo methods, and show that the overall complexity of computing parametric solutions can be further reduced.
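The recovery step can be sketched with a generic sparse approximation solver (hedged: plain orthogonal matching pursuit on a Gaussian toy matrix stands in for the minimization over polynomial chaos dictionaries used in such schemes, where the columns would be basis polynomials evaluated at random parameters):

```python
import numpy as np

def omp(A, y, s):
    # Orthogonal matching pursuit: greedily recover an s-sparse x from y = A x.
    residual, support = y.copy(), []
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1]); x[support] = coef
    return x

rng = np.random.default_rng(6)
m, N, s = 60, 400, 6            # few samples, large dictionary, sparse target
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N); x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
x_rec = omp(A, A @ x_true, s)
print("recovery error:", np.linalg.norm(x_rec - x_true))
```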
Reinhold Schneider (TU Berlin)
Title: Hierarchical Tensor Representation and Variational Monte Carlo
Abstract: This is joint work with M. Eigel (WIAS Berlin), P. Trunschke (HU Berlin) and S. Wolf (TU Berlin).
The hierarchical Tucker tensor format, introduced by Hackbusch and coworkers, and its particular case of tensor trains (TT) (Tyrtyshnikov) have been introduced for high-dimensional problems. The parametrization has been known in quantum physics as tensor network states. The multi-linear parametrization can be applied to solve high-dimensional PDEs in a variational framework restricted to low-rank tensor classes. For non-linear or more complex problems this direct approach becomes more difficult due to the required tensor product approximations of the operators.
In Variational Monte Carlo we replace our objective functional by an empirical functional, in a similar way as for risk minimization or loss functions in statistical learning. For the optimization we need only computable gradients at sample points; this works also for deep neural networks using backpropagation. At a price, we can show only convergence in probability, i.e. the error estimates hold with high probability. The analysis is carried out in the spirit of Cucker and Smale, and first estimates of covering numbers for hierarchical tensors are available.
This approach can be carried out immediately for parametric problems in uncertainty quantification. It can also be formally extended to approximate the metastable eigenfunctions of the corresponding backward Kolmogorov operator, by numerical approximation of the transfer operator (also known as the Koopman operator), and likewise for the Fokker-Planck operator. The latter is joint work with F. Nüske and F. Noe from FU Berlin.
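The Variational Monte Carlo principle, minimizing an empirical risk using gradients at sample points only, can be sketched for a discrete low-rank model (hedged: a rank-3 matrix stands in for a hierarchical tensor, and plain gradient descent for the optimization schemes of the talk):

```python
import numpy as np

rng = np.random.default_rng(7)
n, r, m = 100, 3, 2000
L0, R0 = rng.standard_normal((n, r)), rng.standard_normal((n, r))
I, J = rng.integers(0, n, m), rng.integers(0, n, m)
y = (L0[I] * R0[J]).sum(axis=1)                 # target known only at m sample points

# Minimize the empirical risk (1/m) sum_i (u(I_i, J_i) - y_i)^2 over the
# rank-r model u(i, j) = sum_k L[i, k] R[j, k] by plain gradient descent.
L, R = 0.3 * rng.standard_normal((n, r)), 0.3 * rng.standard_normal((n, r))
lr = 2.0
for it in range(5001):
    res = (L[I] * R[J]).sum(axis=1) - y         # residuals at the sample points only
    gL, gR = np.zeros_like(L), np.zeros_like(R)
    np.add.at(gL, I, res[:, None] * R[J])       # accumulate sampled-gradient terms
    np.add.at(gR, J, res[:, None] * L[I])
    L -= lr * gL / m
    R -= lr * gR / m
    if it % 1000 == 0:                          # empirical RMSE decreases as the fit improves
        print(f"iter {it}: empirical RMSE = {np.sqrt(np.mean(res**2)):.3e}")
```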
Thomas Strohmer (University of California, Davis)
Title: Uncertainty mitigation and inverse problems
Abstract: Uncertainty quantification is a prominent topic in areas such as inverse problems, PDEs, and signal processing. In this talk we take a more proactive standpoint and aim for uncertainty mitigation, i.e., the reduction or even elimination of the effect of the error in the output caused by input uncertainty. I will present a theoretical and numerical framework for several uncertainty mitigation scenarios arising in inverse problems.
Jakob Zech (ETH Zürich)
Title: Stochastic collocation and deep ReLU networks for UQ
Abstract: We analyze convergence rates of multilevel stochastic collocation for parametric maps $u:U\to X$ taking values in a Banach space $X$ and defined on the parameter domain $U = [-1,1]^{\mathbb{N}}$. Under certain sparsity and smoothness assumptions, which include $u$ admitting holomorphic extensions in each variable, we present algebraic best $N$-term convergence rates for the uniform approximation of $u$. As a consequence, convergence rates for Smolyak interpolation/quadrature algorithms are obtained. By reapproximating truncated generalized polynomial chaos expansions with deep ReLU networks, we prove that such networks allow for uniform approximation of $u$ in the case $X=\mathbb{R}$. In terms of the network's size, the convergence rate is (up to polylogarithmic terms) the same as obtained by stochastic collocation. Our main applications are solutions of parametric linear and nonlinear partial differential equations (PDEs). In the case of interpolation and quadrature, Galerkin methods are used to obtain approximations of the PDE solution at a given point in the parameter domain. As an outlook, we discuss the approximation of $u$ with a ReLU network for $X=H_0^1([0,1])$.