TBA

For further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20240503T120000
END:VEVENT
BEGIN:VEVENT
UID:news1656@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20240415T160416
DTSTART;TZID=Europe/Zurich:20240426T110000
SUMMARY:Seminar in Numerical Analysis: Malte Peter (University of Augsburg)
DESCRIPTION:We consider the upscaled linear elasticity problem in the context of periodic homogenisation\, in the stationary setting as well as in the time-dependent regime where the wavelength is much larger than the microstructure. Based on measurements of the deformation of the (macroscopic) boundary of a body for a given forcing\, the aim is to deduce information on the geometry of the microstructure. After a general introduction to periodic homogenisation in the context of linear elasticity\, we prove for a parametrised microstructure that there exists at least one solution of the associated minimisation problem\, based on the L^2-difference of the measured deformation and the computed deformation for a given parameter vector. To facilitate the use of gradient-based algorithms\, we derive the Gâteaux derivatives using the Lagrangian method of Céa\, and we present numerical experiments showcasing the functioning of the method.\\r\\nThis is joint work with T. Lochner (University of Augsburg).\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20240426T120000
END:VEVENT
BEGIN:VEVENT
UID:news1655@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20240402T130832
DTSTART;TZID=Europe/Zurich:20240419T110000
SUMMARY:Seminar in Numerical Analysis: Barbara Verfürth (University of Bonn)
DESCRIPTION:In recent years\, there has been increasing interest in time-modulated materials to obtain enhanced properties. As a mathematical model\, we study the classical wave equation with a time-dependent coefficient\, which may also include spatial multiscale features. Based on joint work with Bernhard Maier\, we present a numerical multiscale method for spatially multiscale\, (slowly) time-evolving coefficients. The method is inspired by the Localized Orthogonal Decomposition (LOD) and entails time-dependent multiscale spaces. We provide a rigorous a priori error analysis for the considered setting. Numerical examples illustrate the theoretical findings and investigate an adaptive approach for the computation of the time-dependent basis functions. On the other hand\, we also briefly discuss the setting of spatially homogeneous\, temporally multiscale coefficients. (Higher-order) multiscale expansions may help to interpret effective physical material properties and are numerically illustrated.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20240419T120000
END:VEVENT
BEGIN:VEVENT
UID:news1550@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20231204T091610
DTSTART;TZID=Europe/Zurich:20231215T110000
SUMMARY:Seminar in Numerical Analysis: Caroline Geiersbach (WIAS Berlin)
DESCRIPTION:Many problems in shape optimization involve constraints in the form of one or more partial differential equations. In practice\, the material properties of the underlying shape on which a PDE is defined are not known exactly\; it is natural to use a probability distribution based on empirical measurements and to incorporate this information when designing an optimal shape. Additionally\, one might wish to obtain a shape that is robust in its response to certain external inputs\, such as forces. It is helpful to view shape optimization problems subject to uncertainty through the lens of stochastic optimization\, where a wealth of theory and algorithms already exists for finite-dimensional problems. The focus will be on the algorithmic handling of these problems in the case of a high stochastic dimension. Stochastic approximation\, which dynamically samples from the stochastic space over the course of the iterations\, is favored in this case\, and we show how these methods can be applied to shape optimization. We study the classical stochastic gradient method\, which was introduced in 1951 by Robbins and Monro and is widely used in machine learning. In particular\, we investigate its application to infinite-dimensional shape manifolds. Further\, we present numerical examples showing the performance of the method\, also in combination with the augmented Lagrangian method for problems with geometric constraints.\\r\\nJoint work with Kathrin Welker\, Estefania Loayza-Romero\, and Tim Suchan.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20231215T120000
END:VEVENT
BEGIN:VEVENT
UID:news1570@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20231127T102321
DTSTART;TZID=Europe/Zurich:20231208T110000
SUMMARY:Seminar in Numerical Analysis: Martin Vohralík (Inria Paris)
DESCRIPTION:A posteriori estimates make it possible to certify the error committed in a numerical simulation. In particular\, the equilibrated flux reconstruction technique yields a guaranteed error upper bound\, where the flux\, obtained by a local postprocessing\, is of independent interest since it is always locally conservative. In this talk\, we tailor this methodology to model nonlinear and time-dependent problems to obtain estimates that are robust\, i.e.\, of quality independent of the strength of the nonlinearities and of the final time. These estimates include\, and build on\, common iterative linearization schemes such as the Zarantonello\, Picard\, Newton\, M-\, and L-schemes. We first consider steady problems and conceive two settings: we either augment the energy difference by the discretization error of the current linearization step\, or we design iteration-dependent norms that feature weights given by the current iterate. We then turn to unsteady problems. Here we first consider the linear heat equation and finally move to the Richards equation\, which is doubly nonlinear and exhibits both parabolic–hyperbolic and parabolic–elliptic degeneracies. Robustness with respect to the final time and local efficiency in both time and space are addressed here. Numerical experiments illustrate the theoretical findings throughout the presentation. Details can be found in [1-4].\\r\\n[1] A. Ern\, I. Smears\, M. Vohralík\, Guaranteed\, locally space-time efficient\, and polynomial-degree robust a posteriori error estimates for high-order discretizations of parabolic problems\, SIAM J. Numer. Anal. 55 (2017)\, 2811–2834.\\r\\n[2] A. Harnist\, K. Mitra\, A. Rappaport\, M. Vohralík\, Robust energy a posteriori estimates for nonlinear elliptic problems\, HAL Preprint 04033438\, 2023.\\r\\n[3] K. Mitra\, M. Vohralík\, A posteriori error estimates for the Richards equation\, Math. Comp. (2024)\, accepted for publication.\\r\\n[4] K. Mitra\, M. Vohralík\, Guaranteed\, locally efficient\, and robust a posteriori estimates for nonlinear elliptic problems in iteration-dependent norms. An orthogonal decomposition result based on iterative linearization\, HAL Preprint 04156711\, 2023.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20231208T120000
END:VEVENT
BEGIN:VEVENT
UID:news1544@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230922T165627
DTSTART;TZID=Europe/Zurich:20231110T110000
SUMMARY:Seminar in Numerical Analysis: Larisa Beilina (University of Göteborg)
DESCRIPTION:An adaptive finite element/finite difference domain decomposition method for the solution of the time-dependent Maxwell's equations for the electric field in conductive media will be presented. This method is applied to the reconstruction of the dielectric permittivity and conductivity functions using time-dependent scattered data of the electric field at the boundary of the domain.\\r\\nAll reconstruction algorithms are based on an optimization approach for finding a stationary point of the Lagrangian. The derivation of a posteriori error estimates for the regularized solution and the Tikhonov functional will be presented. Based on these estimates\, adaptive reconstruction algorithms are developed. Computational tests show the robustness of the proposed algorithms in 3D.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20231110T120000
END:VEVENT
BEGIN:VEVENT
UID:news1583@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20231024T091747
DTSTART;TZID=Europe/Zurich:20231027T110000
SUMMARY:Seminar in Numerical Analysis: Carsten Gräser (FAU Erlangen-Nürnberg)
DESCRIPTION:We consider the regularization of a supervised learning problem by partial differential equations (PDEs). For the resulting regularized problem\, we derive error bounds in terms of a PDE error term and a data error term. These error contributions quantify the accuracy of the PDE model used for regularization and the data coverage. Furthermore\, the discretization of the PDE-regularized learning problem by generalized Galerkin methods\, including finite element and neural network approaches\, is investigated. A nonlinear version of Céa's lemma allows us to derive error bounds for both classes of discretizations and gives first insights into the error analysis of variational neural network discretizations of PDEs.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20231027T120000
END:VEVENT
BEGIN:VEVENT
UID:news1537@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20231013T095041
DTSTART;TZID=Europe/Zurich:20231020T110000
SUMMARY:Seminar in Numerical Analysis: Marco Zank (U Wien)
DESCRIPTION:For the discretization of time-dependent partial differential equations\, the standard approaches are explicit or implicit time-stepping schemes together with finite element methods in space. An alternative approach is the use of space-time methods\, where the space-time domain is discretized and the resulting global linear system is solved at once. In this talk\, some recent developments in space-time finite element methods are reviewed. For this purpose\, the heat equation and the wave equation serve as model problems. First\, for both model problems\, space-time variational formulations and their unique solvability in space-time Sobolev spaces are discussed\, where a modified Hilbert transformation is used such that ansatz and test spaces are equal. Second\, conforming discretization schemes\, using piecewise polynomial\, globally continuous functions\, are introduced. Solvability and stability of these numerical schemes are discussed. Next\, we investigate efficient direct solvers for the occurring huge linear systems. The developed solvers are based on the Bartels–Stewart method and on the Fast Diagonalization method\, which result in solving a sequence of spatial subproblems. The solver based on the Fast Diagonalization method allows solving these spatial subproblems in parallel\, leading to a full parallelization in time. In the last part of the talk\, numerical examples are shown and discussed.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20231020T120000
END:VEVENT
BEGIN:VEVENT
UID:news1562@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230926T153730
DTSTART;TZID=Europe/Zurich:20231013T110000
SUMMARY:Seminar in Numerical Analysis: Markus Weimar (Julius-Maximilians-Universität Würzburg)
DESCRIPTION:As a rule of thumb in approximation theory\, the asymptotic speed of convergence of numerical algorithms is governed by the regularity of the objects we like to approximate. Besides classical isotropic Sobolev smoothness\, in the last decades the notion of so-called dominating-mixed regularity of functions has turned out to be an important concept in numerical analysis. Indeed\, it naturally arises in high-dimensional real-world applications\, e.g.\, related to the electronic Schrödinger equation. Although optimal approximation rates for embeddings within the scales of isotropic or dominating-mixed Lp-Sobolev spaces are well understood\, not that much is known for embeddings across those scales (break-of-scale embeddings).\\r\\nIn this lecture\, we first review the Fourier-analytic approach towards the by now well-established (Besov and Triebel-Lizorkin) scales of distribution spaces that measure either isotropic or dominating-mixed regularity. In addition\, we introduce new function spaces of hybrid smoothness which are able to capture both types of regularity at the same time. As a further generalization of the aforementioned scales\, they particularly include standard Sobolev spaces on domains. On the other hand\, our new spaces yield an appropriate framework to study break-of-scale embeddings by means of harmonic analysis. We shall present (non-)adaptive wavelet-based multiscale algorithms that approximate such embeddings at optimal dimension-independent rates of convergence. Important special cases cover the approximation of functions having dominating-mixed Sobolev smoothness w.r.t. Lp in the norm of the (isotropic) energy space H1.\\r\\nThe talk is based on a recent paper [1] which represents the first part of a joint work with Glenn Byrenheid (FSU Jena)\, Markus Hansen (PU Marburg)\, and Janina Hübner (RU Bochum).\\r\\nReferences:\\r\\n[1] G. Byrenheid\, J. Hübner\, and M. Weimar. Rate-optimal sparse approximation of compact break-of-scale embeddings. Appl. Comput. Harmon. Anal. 65:40–66\, 2023 (arXiv:2203.10011).\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20231013T120000
END:VEVENT
BEGIN:VEVENT
UID:news1538@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230915T113109
DTSTART;TZID=Europe/Zurich:20230929T110000
SUMMARY:Seminar in Numerical Analysis: Rüdiger Kempf (U Bayreuth)
DESCRIPTION:Reproducing kernel Hilbert spaces (RKHSs) and the closely related kernel methods are well-established and well-studied tools in classical approximation theory. More recently\, they have seen many uses in other problems in applied and numerical analysis.\\r\\nIn machine learning\, support vector machines rely heavily on RKHSs. For neural networks\, Barron spaces are connected to certain RKHSs and offer a possibility for a theoretical analysis of these networks.\\r\\nAnother application of RKHSs is in high(er)-dimensional approximation. For instance\, in the field of quasi-Monte Carlo methods\, kernel techniques are used to derive an error analysis for high-dimensional quadrature rules. We have also developed a novel kernel-based approximation method for higher-dimensional meshfree function reconstruction\, based on Smolyak operators.\\r\\nIn this talk\, I will provide an introduction to the theory of RKHSs\, their kernels\, and associated kernel methods. In particular\, I will focus on a multiscale approximation scheme for rescaled radial basis functions. This method will then be used to derive the new tensor product multilevel method for higher-dimensional meshfree approximation\, which I will discuss in detail.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20230929T120000
END:VEVENT
BEGIN:VEVENT
UID:news1536@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230904T192551
DTSTART;TZID=Europe/Zurich:20230922T110000
SUMMARY:Seminar in Numerical Analysis: Robert Gruhlke (FU Berlin)
DESCRIPTION:Ensemble methods have become ubiquitous for solving Bayesian inference problems\, in particular for the efficient sampling from posterior densities. State-of-the-art subclasses of Markov chain Monte Carlo methods rely on gradient information of the log-density\, including Langevin samplers such as the Ensemble Kalman Sampler (EKS) and Affine Invariant Langevin Dynamics (ALDI). These dynamics are described by stochastic differential equations (SDEs) with time-homogeneous drift terms.\\r\\nIn this talk\, we present enhancement strategies for such ensemble methods based on sample enrichment and a homotopy formalism\, which ultimately lead to time-dependent drift terms that can assimilate a larger class of target distributions while providing faster mixing times.\\r\\nFurthermore\, we present an alternative route to constructing time-inhomogeneous drift terms based on reverse diffusion processes\, which are popular in state-of-the-art generative modelling such as diffusion models. Here\, we propose learning these log-densities by propagation of the target distribution through an Ornstein-Uhlenbeck process. For this\, we solve the associated Hamilton-Jacobi-Bellman equation through an adaptive explicit Euler discretization\, using low-rank compression such as functional tensor trains for the spatial discretization.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20230922T120000
END:VEVENT
BEGIN:VEVENT
UID:news1464@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230509T105935
DTSTART;TZID=Europe/Zurich:20230512T110000
SUMMARY:Seminar in Numerical Analysis: Martin Eigel (WIAS Berlin)
DESCRIPTION:Weighted least squares methods have been examined thoroughly to obtain quasi-optimal convergence results for a chosen (polynomial) basis of a linear space. A focus in the analysis lies on the construction of optimal sampling measures and the derivation of a sufficient sample complexity for stable reconstructions. When considering holomorphic functions such as solutions of common parametric PDEs\, the anisotropic sparsity they exhibit can be exploited to achieve improved results adapted to the considered problem. In particular\, the sparsity of the data transfers to the solution sparsity in terms of polynomial chaos coefficients. When using nonlinear model classes\, it turns out that the known results cannot be used directly. To obtain comparable a priori rates\, we introduce a new weighted version of Stechkin's lemma. This enables us to obtain optimal complexity results for a model class of low-rank tensor trains. We also show that the solution sparsity results in sparse component tensors and sketch how this can be realised in practical algorithms. A nice application is the reconstruction of Galerkin solutions for parametric PDEs. With this\, a provably converging a posteriori adaptive algorithm can be derived for linear model PDEs with non-affine coefficients.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20230512T120000 END:VEVENT BEGIN:VEVENT UID:news1500@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230427T100910 DTSTART;TZID=Europe/Zurich:20230505T110000 SUMMARY:Seminar in Numerical Analysis: Elena Moral Sánchez (Max-Planck Institute for Plasma Physics) DESCRIPTION:The cold-plasma wave equation describes the propagation of an electromagnetic wave in a magnetized plasma\, which is an inhomogeneous\, dispersive and anisotropic medium. The thermal effects are assumed to be negligible\, which leads to a linear partial differential equation. Besides\, we assume that the electromagnetic field of the propagating wave is in the time-harmonic regime. This model has applications in magnetic confinement fusion devices\, like the Tokamak. Namely\, electromagnetic waves are used to heat up the plasma (Electron cyclotron resonance heating (ECRH)) or for interferometry and reflectometry diagnostics (to measure plasma density and position\, etc.). In the first part of this talk\, we introduce the cold-plasma model\, together with a qualitative study of the plasma modes which expose the complexity of the problem. In the second part\, we describe the problem and the simplifications we carry out\, which yield the indefinite Helmholtz equation. It is solved with B-Spline Finite Elements provided by the Psydac library and some results are shown. Lastly\, we discuss the performance and potential ways of preconditioning.\\r\\n\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:The cold-plasma wave equation describes the propagation of an electromagnetic wave in a magnetized plasma\, which is an inhomogeneous\, dispersive and anisotropic medium. The thermal effects are assumed to be negligible\, which leads to a linear partial differential equation. Besides\, we assume that the electromagnetic field of the propagating wave is in the time-harmonic regime.

This model has applications in magnetic confinement fusion devices\, like the Tokamak. Namely\, electromagnetic waves are used to heat up the plasma (Electron cyclotron resonance heating (ECRH)) or for interferometry and reflectometry diagnostics (to measure plasma density and position\, etc.).

In the first part of this talk\, we introduce the cold-plasma model\, together with a qualitative study of the plasma modes which expose the complexity of the problem.

In the second part\, we describe the problem and the simplifications we carry out\, which yield the indefinite Helmholtz equation. It is solved with B-Spline Finite Elements provided by the Psydac library and some results are shown. Lastly\, we discuss the performance and potential ways of preconditioning.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20230505T120000 END:VEVENT BEGIN:VEVENT UID:news1472@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230417T092542 DTSTART;TZID=Europe/Zurich:20230428T110000 SUMMARY:Seminar in Numerical Analysis: Frédéric Nataf (CNRS — Université Pierre et Marie Curie) DESCRIPTION:We introduce a scalable adaptive element-based domain decomposition (DD) method for solving saddle point problems defined as a block two by two matrix. The algorithm does not require any knowledge of the constrained space. We assume that all sub matrices are sparse and that the diagonal blocks are spectrally equivalent to a sum of positive semi definite matrices. The latter assumption enables the design of adaptive coarse spaces for DD methods that extend the GenEO theory to saddle point problems. Numerical results on three dimensional elasticity problems for steel-rubber structures discretized by a finite element with continuous pressure are shown for up to one billion degrees of freedom along with comparisons to Algebraic Multigrid Methods.\\r\\n\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:We introduce a scalable adaptive element-based domain decomposition (DD) method for solving saddle point problems defined as a block two by two matrix. The algorithm does not require any knowledge of the constrained space. We assume that all sub matrices are sparse and that the diagonal blocks are spectrally equivalent to a sum of positive semi definite matrices. The latter assumption enables the design of adaptive coarse spaces for DD methods that extend the GenEO theory to saddle point problems. Numerical results on three dimensional elasticity problems for steel-rubber structures discretized by a finite element with continuous pressure are shown for up to one billion degrees of freedom along with comparisons to Algebraic Multigrid Methods.

\n\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20230428T120000 END:VEVENT BEGIN:VEVENT UID:news1480@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230418T090618 DTSTART;TZID=Europe/Zurich:20230421T110000 SUMMARY:Seminar in Numerical Analysis: Omar Lakkis (University of Sussex) DESCRIPTION:Least-squares finite element recovery-based methods provide a simple and practical way to approximate linear elliptic PDEs in nondivergence form where the standard variational approach either fails or requires technically complex modifications.\\r\\nThis idea allows the creation of efficient solvers for fully nonlinear elliptic equations\, the linearization of which leaves us with an equation in nondivergence form. An important class of fully nonlinear elliptic PDEs can be written in Hamilton--Jacobi--Bellman (Dynamic Programming) form\, i.e.\, as the supremum of a collection of linear operators acting on the unknown.\\r\\nThe least-squares FEM approach\, a variant of the nonvariational finite element method\, is based on gradient or Hessian recovery and allows the use of FEMs of arbitrary degree. The price to pay for using higher order FEMs is the loss of discrete-level monotonicity (maximum principle)\, which is valid for the PDE and crucial in proving the convergence of many degree one FEM and finite difference schemes.\\r\\nSuitable functional spaces and penalties in the least-squares cost functional must be carefully crafted in order to ensure stability and convergence of the scheme with a good approximation of the gradient (or Hessian) under the Cordes condition on the family of linear operators being optimized.\\r\\nFurthermore\, the nonlinear operator\, which is not necessarily everywhere differentiable\, must be linearized in appropriate functional spaces using semismooth Newton or Howard's policy iteration method. A crucial contribution of our work is the proof of convergence of the semismooth Newton method at the continuum level\, i.e.\, on infinite dimensional functional spaces. This allows an easy use of our non-monotone schemes which provide convergence rates as well as a posteriori error estimates.\\r\\n\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:Least-squares finite element recovery-based methods provide a simple and practical way to approximate linear elliptic PDEs in nondivergence form where the standard variational approach either fails or requires technically complex modifications.

\nThis idea allows the creation of efficient solvers for fully nonlinear elliptic equations\, the linearization of which leaves us with an equation in nondivergence form. An important class of fully nonlinear elliptic PDEs can be written in Hamilton--Jacobi--Bellman (Dynamic Programming) form\, i.e.\, as the supremum of a collection of linear operators acting on the unknown.

\nThe least-squares FEM approach\, a variant of the nonvariational finite element method\, is based on gradient or Hessian recovery and allows the use of FEMs of arbitrary degree. The price to pay for using higher order FEMs is the loss of discrete-level monotonicity (maximum principle)\, which is valid for the PDE and crucial in proving the convergence of many degree one FEM and finite difference schemes.

\nSuitable functional spaces and penalties in the least-squares cost functional must be carefully crafted in order to ensure stability and convergence of the scheme with a good approximation of the gradient (or Hessian) under the Cordes condition on the family of linear operators being optimized.

\nFurthermore\, the nonlinear operator\, which is not necessarily everywhere differentiable\, must be linearized in appropriate functional spaces using semismooth Newton or Howard's policy iteration method. A crucial contribution of our work is the proof of convergence of the semismooth Newton method at the continuum level\, i.e.\, on infinite dimensional functional spaces. This allows an easy use of our non-monotone schemes which provide convergence rates as well as a posteriori error estimates.

\n\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20230421T120000 END:VEVENT BEGIN:VEVENT UID:news1455@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230403T123724 DTSTART;TZID=Europe/Zurich:20230414T110000 SUMMARY:Seminar in Numerical Analysis: Vesa Kaarnioja (FU Berlin) DESCRIPTION:We describe a fast method for solving elliptic PDEs with uncertain coefficients using kernel-based interpolation over a rank-1 lattice point set [1]. By representing the input random field of the system using a model proposed by Kaarnioja\, Kuo\, and Sloan [2]\, in which a countable number of independent random variables enter the random field as periodic functions\, it is shown that the kernel interpolant can be constructed for the PDE solution (or some quantity of interest thereof) as a function of the stochastic variables in a highly efficient manner using fast Fourier transform. The method works well even when the stochastic dimension of the problem is large\, and we obtain rigorous error bounds which are independent of the stochastic dimension of the problem. We also outline some techniques that can be used to further improve the approximation error and computational complexity of the method [3].\\r\\n\\r\\nReferences:\\r\\n[1] V. Kaarnioja\, Y. Kazashi\, F. Y. Kuo\, F. Nobile\, and I. H. Sloan. Fast approximation by periodic kernel-based lattice-point interpolation with application in uncertainty quantification. Numer. Math. 150:33-77\, 2022.\\r\\n[2] V. Kaarnioja\, F. Y. Kuo\, and I. H. Sloan. Uncertainty quantification using periodic random variables. SIAM J. Numer. Anal. 58(2):1068-1091\, 2020.\\r\\n[3] V. Kaarnioja\, F. Y. Kuo\, and I. H. Sloan. Lattice-based kernel approximation and serendipitous weights for parametric PDEs in very high dimensions. Preprint 2023\, arXiv:2303.17755 [math.NA].\\r\\n\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:We describe a fast method for solving elliptic PDEs with uncertain coefficients using kernel-based interpolation over a rank-1 lattice point set [1]. By representing the input random field of the system using a model proposed by Kaarnioja\, Kuo\, and Sloan [2]\, in which a countable number of independent random variables enter the random field as periodic functions\, it is shown that the kernel interpolant can be constructed for the PDE solution (or some quantity of interest thereof) as a function of the stochastic variables in a highly efficient manner using fast Fourier transform. The method works well even when the stochastic dimension of the problem is large\, and we obtain rigorous error bounds which are independent of the stochastic dimension of the problem. We also outline some techniques that can be used to further improve the approximation error and computational complexity of the method [3].

\n\nReferences:

\n[1] V. Kaarnioja\, Y. Kazashi\, F. Y. Kuo\, F. Nobile\, and I. H. Sloan. Fast approximation by periodic kernel-based lattice-point interpolation with application in uncertainty quantification. Numer. Math. 150:33-77\, 2022.

\n[2] V. Kaarnioja\, F. Y. Kuo\, and I. H. Sloan. Uncertainty quantification using periodic random variables. SIAM J. Numer. Anal. 58(2):1068-1091\, 2020.

\n[3] V. Kaarnioja\, F. Y. Kuo\, and I. H. Sloan. Lattice-based kernel approximation and serendipitous weights for parametric PDEs in very high dimensions. Preprint 2023\, arXiv:2303.17755 [math.NA].

\n\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20230414T120000 END:VEVENT BEGIN:VEVENT UID:news1468@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230403T123812 DTSTART;TZID=Europe/Zurich:20230317T110000 SUMMARY:Seminar in Numerical Analysis: Marc Dambrine (Université de Pau et des Pays de l'Adour) DESCRIPTION:As is often the case in optimization\, the solution of a shape problem is sensitive to the parameters of the problem. For example\, the loading of a structure to be optimised is known in an imprecise way. In this talk\, I will present the different approaches that have been recently proposed to incorporate these uncertainties in the definition of the objective. I will present numerical illustrations from structural optimization.\\r\\n\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:As is often the case in optimization\, the solution of a shape problem is sensitive to the parameters of the problem. For example\, the loading of a structure to be optimised is known in an imprecise way. In this talk\, I will present the different approaches that have been recently proposed to incorporate these uncertainties in the definition of the objective. I will present numerical illustrations from structural optimization.

\n\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20230317T120000 END:VEVENT BEGIN:VEVENT UID:news1402@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220920T084344 DTSTART;TZID=Europe/Zurich:20221216T110000 SUMMARY:Seminar in Numerical Analysis: Christophe Geuzaine (Université de Liège) DESCRIPTION:This talk is devoted to non-overlapping Schwarz domain decomposition methods for the resolution of high frequency flow acoustics problems of industrial relevance. First\, we will present recent advances on non-reflecting boundary techniques that provide local approximations to the Dirichlet-to-Neumann operator for convected and heterogeneous time-harmonic wave propagation problems [1]. Then we will show how to adapt a generic domain decomposition framework to flow acoustics\, based on these newly designed transmission conditions\, and highlight the benefit of the approach on the simulation of three-dimensional noise radiation of a high by-pass ratio turbofan engine intake [2]. [1] Marchner\, P.\, Antoine\, X.\, Geuzaine\, C.\, & Bériot\, H. (2022). Construction and numerical assessment of local absorbing boundary conditions for heterogeneous time-harmonic acoustic problems. SIAM Journal on Applied Mathematics\, 82(2)\, 476-501. [2] Lieu\, A.\, Marchner\, P.\, Gabard\, G.\, Beriot\, H.\, Antoine\, X.\, & Geuzaine\, C. (2020). A non-overlapping Schwarz domain decomposition method with high-order finite elements for flow acoustics. Computer Methods in Applied Mechanics and Engineering\, 369\, 113223.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:This talk is devoted to non-overlapping Schwarz domain decomposition methods for the resolution of high frequency flow acoustics problems of industrial relevance. First\, we will present recent advances on non-reflecting boundary techniques that provide local approximations to the Dirichlet-to-Neumann operator for convected and heterogeneous time-harmonic wave propagation problems [1]. Then we will show how to adapt a generic domain decomposition framework to flow acoustics\, based on these newly designed transmission conditions\, and highlight the benefit of the approach on the simulation of three-dimensional noise radiation of a high by-pass ratio turbofan engine intake [2].

[1] Marchner\, P.\, Antoine\, X.\, Geuzaine\, C.\, & Bériot\, H. (2022). Construction and numerical assessment of local absorbing boundary conditions for heterogeneous time-harmonic acoustic problems. SIAM Journal on Applied Mathematics\, 82(2)\, 476-501.

[2] Lieu\, A.\, Marchner\, P.\, Gabard\, G.\, Beriot\, H.\, Antoine\, X.\, & Geuzaine\, C. (2020). A non-overlapping Schwarz domain decomposition method with high-order finite elements for flow acoustics. Computer Methods in Applied Mechanics and Engineering\, 369\, 113223.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20221216T120000 END:VEVENT BEGIN:VEVENT UID:news1410@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20221130T142441 DTSTART;TZID=Europe/Zurich:20221209T110000 SUMMARY:Seminar in Numerical Analysis: Patrick Ciarlet (ENSTA Paris) DESCRIPTION:Variational formulations are a popular tool to analyse linear PDEs (e.g. neutron diffusion\, Maxwell equations\, Stokes equations ...)\, and it also provides a convenient basis to design numerical methods to solve them. Of paramount importance is the inf-sup condition\, designed by Ladyzhenskaya\, Necas\, Babuska and Brezzi in the 1960s and 1970s. As is well-known\, it provides sharp conditions to prove well-posedness of the problem\, namely existence and uniqueness of the solution\, and continuous dependence with respect to the data. Then\, to solve the approximate\, or discrete\, problems\, there is the (uniform) discrete inf-sup condition\, to ensure existence of the approximate solutions\, and convergence of those solutions to the exact solution. Often\, the two sides of this problem (exact and approximate) are handled separately\, or at least no explicit connection is made between the two.\\r\\nIn this talk\, I will focus on an approach that is completely equivalent to the inf-sup condition for problems set in Hilbert spaces\, the T-coercivity approach. This approach relies on the design of an explicit operator to realize the inf-sup condition. If the operator is carefully chosen\, it can provide useful insight for a straightforward definition of the approximation of the exact problem. As a matter of fact\, the derivation of the discrete inf-sup condition often becomes elementary\, at least when one considers conforming methods\, that is when the discrete spaces are subspaces of the exact Hilbert spaces. In this way\, both the exact and the approximate problems are considered\, analysed and solved at once.\\r\\nIn itself\, T-coercivity is not a new theory\, however it seems that some of its strengths have been overlooked\, and that\, if used properly\, it can be a simple\, yet powerful tool to analyse and solve linear PDEs. In particular\, it provides guidelines as to which abstract tools and which numerical methods are the most “natural” to analyse and solve the problem at hand. In other words\, it allows one to select simply appropriate tools in the mathematical\, or numerical\, toolboxes. This claim will be illustrated on classical linear PDEs\, and for some generalizations of those models.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:Variational formulations are a popular tool to analyse linear PDEs (e.g. neutron diffusion\, Maxwell equations\, Stokes equations ...)\, and it also provides a convenient basis to design numerical methods to solve them. Of paramount importance is the **inf-sup condition**\, designed by Ladyzhenskaya\, Necas\, Babuska and Brezzi in the 1960s and 1970s. As is well-known\, it provides sharp conditions to prove well-posedness of the problem\, namely existence and uniqueness of the solution\, and continuous dependence with respect to the data. Then\, to solve the approximate\, or discrete\, problems\, there is the **(uniform) discrete inf-sup condition**\, to ensure existence of the approximate solutions\, and convergence of those solutions to the exact solution. Often\, the two sides of this problem (exact and approximate) are handled separately\, or at least no explicit connection is made between the two.

In this talk\, I will focus on an approach that is completely equivalent to the inf-sup condition for problems set in Hilbert spaces\, the **T-coercivity approach**. This approach relies on the design of an explicit operator to realize the inf-sup condition. If the operator is carefully chosen\, it can provide useful insight for a straightforward definition of the approximation of the exact problem. As a matter of fact\, the derivation of the discrete inf-sup condition often becomes elementary\, at least when one considers conforming methods\, that is when the discrete spaces are subspaces of the exact Hilbert spaces. In this way\, both the exact and the approximate problems are considered\, analysed and solved at once.

In itself\, T-coercivity is not a new theory\, however it seems that some of its strengths have been overlooked\, and that\, if used properly\, it can be a simple\, yet powerful tool to analyse and solve linear PDEs. In particular\, it provides guidelines as to which abstract tools and which numerical methods are the most “natural” to analyse and solve the problem at hand. In other words\, it allows one to select simply appropriate tools in the mathematical\, or numerical\, toolboxes. This claim will be illustrated on classical linear PDEs\, and for some generalizations of those models.

\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20221209T120000 END:VEVENT BEGIN:VEVENT UID:news1409@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220928T143806 DTSTART;TZID=Europe/Zurich:20221202T110000 SUMMARY:Seminar in Numerical Analysis: Sébastien Imperiale (Inria — LMS\, Ecole Polytechnique\, CNRS — Université Paris-Saclay\, MΞDISIM) DESCRIPTION:The objective of this work is to propose and analyze numerical schemes to solve transient wave propagation problems that are exponentially stable (i.e. the solution decays to zero exponentially fast). Applications are in data assimilation strategies or the discretisation of absorbing boundary conditions. More precisely\, the aim of our work is to propose a discretization process that enables us to preserve the exponential stability at the discrete level as well as a high order consistency when using a high-order finite element approximation. The main idea is to add to the wave equation a stabilizing term which damps the high-frequency oscillating components of the solutions such as spurious waves. This term is built from a discrete multiplier analysis that proves the exponential stability of the semi-discrete problem at any order without affecting the order of convergence.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:The objective of this work is to propose and analyze numerical schemes to solve transient wave propagation problems that are exponentially stable (i.e. the solution decays to zero exponentially fast). Applications are in data assimilation strategies or the discretisation of absorbing boundary conditions. More precisely\, the aim of our work is to propose a discretization process that enables us to preserve the exponential stability at the discrete level as well as a high order consistency when using a high-order finite element approximation. The main idea is to add to the wave equation a stabilizing term which damps the high-frequency oscillating components of the solutions such as spurious waves. This term is built from a discrete multiplier analysis that proves the exponential stability of the semi-discrete problem at any order without affecting the order of convergence.

\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20221202T120000 END:VEVENT BEGIN:VEVENT UID:news1408@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20221110T132504 DTSTART;TZID=Europe/Zurich:20221118T110000 SUMMARY:Seminar in Numerical Analysis: Alexey Chernov (Universität Oldenburg) DESCRIPTION:We investigate a class of parametric elliptic eigenvalue problems where the coefficients (and hence the solution) may depend on a parameter y. Understanding the regularity of the solution as a function of y is important for the construction of efficient numerical approximation schemes. Several approaches are available in the existing literature\, e.g. the complex-analytic argument by Andreev and Schwab (2012) and the real-variable argument by Gilbert et al. (2019+). The latter proof strategy is more explicit\, but\, due to the nonlinear nature of the problem\, leads to slightly suboptimal results. In this talk we close this gap and (as a by-product) extend the analysis to the more general class of coefficients.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:We investigate a class of parametric elliptic eigenvalue problems where the coefficients (and hence the solution) may depend on a parameter y. Understanding the regularity of the solution as a function of y is important for the construction of efficient numerical approximation schemes. Several approaches are available in the existing literature\, e.g. the complex-analytic argument by Andreev and Schwab (2012) and the real-variable argument by Gilbert et al. (2019+). The latter proof strategy is more explicit\, but\, due to the nonlinear nature of the problem\, leads to slightly suboptimal results. In this talk we close this gap and (as a by-product) extend the analysis to the more general class of coefficients.

\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20221118T120000 END:VEVENT BEGIN:VEVENT UID:news1412@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20221021T141230 DTSTART;TZID=Europe/Zurich:20221104T110000 SUMMARY:Seminar in Numerical Analysis: Naomi Schneider (Universität Siegen) DESCRIPTION:Both the approximation of the gravitational potential via the downward continuation of satellite data and of wave velocities via the travel time tomography using earthquake data are geoscientific ill-posed inverse problems. To monitor certain aspects of the system Earth\, like the mass transport or its geomagnetic field\, it is\, however\, important to tackle these challenges. Traditionally\, an approximation of such a linear(ized) inverse problem is obtained in one\, a-priori chosen basis system: either a global one\, e.g. spherical harmonics or polynomials on the ball\, or a local one\, e.g. radial basis functions and wavelets on the sphere or finite elements on the ball. In the Geomathematics Group Siegen\, we developed methods that enable us to combine different types of trial functions for such an approximation. The idea is to make the most of the benefits of different types of available trial functions. The algorithms are called the (Learning) Inverse Problem Matching Pursuits (LIPMPs). They construct an approximation iteratively from an intentionally overcomplete set of trial functions\, the dictionary\, such that the Tikhonov functional is reduced. Due to the learning add-on\, the dictionary can very well be infinite. Moreover\, the computational costs are usually decreased. In this talk\, we give details on the LIPMPs and show some current numerical results.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:Both the approximation of the gravitational potential via the downward continuation of satellite data and of wave velocities via the travel time tomography using earthquake data are geoscientific ill-posed inverse problems. To monitor certain aspects of the system Earth\, like the mass transport or its geomagnetic field\, it is\, however\, important to tackle these challenges.

Traditionally\, an approximation of such a linear(ized) inverse problem is obtained in one\, a-priori chosen basis system: either a global one\, e.g. spherical harmonics or polynomials on the ball\, or a local one\, e.g. radial basis functions and wavelets on the sphere or finite elements on the ball.

In the Geomathematics Group Siegen\, we developed methods that enable us to combine different types of trial functions for such an approximation. The idea is to make the most of the benefits of different types of available trial functions. The algorithms are called the (Learning) Inverse Problem Matching Pursuits (LIPMPs). They construct an approximation iteratively from an intentionally overcomplete set of trial functions\, the dictionary\, such that the Tikhonov functional is reduced. Due to the learning add-on\, the dictionary can very well be infinite. Moreover\, the computational costs are usually decreased.

In this talk\, we give details on the LIPMPs and show some current numerical results.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20221104T120000 END:VEVENT BEGIN:VEVENT UID:news1352@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220516T103313 DTSTART;TZID=Europe/Zurich:20220520T110000 SUMMARY:Seminar in Numerical Analysis: Jens Saak (Max Planck Institute for Dynamics of Complex Technical Systems) DESCRIPTION:Optimal control problems subject to constraints given by partial differential equations are a powerful tool for the improvement of many tasks in science and technology. Classical optimization today is applicable to a wide variety of problems and is flexible in tackling nonlinear equations and including box constraints on the solutions. However\, especially for non-stationary problems\, small perturbations along the trajectories can easily lead to large deviations in the desired solutions. Consequently\, optimality may be lost just as easily. On the other hand\, the linear-quadratic regulator problem in system theory is an approach to make a dynamical system react to perturbations via feedback controls that can be expressed by the solutions of matrix Riccati equations. Its applicability is limited by the linearity of the dynamical system and the efficient solvability of the quadratic matrix equation. In this talk\, we discuss how certain classes of non-stationary PDEs can be reformulated (after spatial semi-discretization) into structured linear dynamical systems that allow the Riccati feedback to be computed. This allows us to combine both approaches and thus steer solutions of perturbed PDEs back to the optimized trajectories. The key to efficient solvers for the Riccati equations is the usage of the specific structure in the problems and the fact that the Riccati solutions usually feature a strong singular value decay\, and thus good low-rank approximability.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:Optimal control problems subject to constraints given by partial differential equations are a powerful tool for the improvement of many tasks in science and technology. Classical optimization today is applicable to a wide variety of problems and is flexible in tackling nonlinear equations and including box constraints on the solutions. However\, especially for non-stationary problems\, small perturbations along the trajectories can easily lead to large deviations in the desired solutions. Consequently\, optimality may be lost just as easily.

On the other hand\, the linear-quadratic regulator problem in system theory is an approach to make a dynamical system react to perturbations via feedback controls that can be expressed by the solutions of matrix Riccati equations. Its applicability is limited by the linearity of the dynamical system and the efficient solvability of the quadratic matrix equation.

In this talk\, we discuss how certain classes of non-stationary PDEs can be reformulated (after spatial semi-discretization) into structured linear dynamical systems that allow the Riccati feedback to be computed. This allows us to combine both approaches and thus steer solutions of perturbed PDEs back to the optimized trajectories. The key to efficient solvers for the Riccati equations is the usage of the specific structure in the problems and the fact that the Riccati solutions usually feature a strong singular value decay\, and thus good low-rank approximability.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20220520T120000 END:VEVENT BEGIN:VEVENT UID:news1351@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220419T163823 DTSTART;TZID=Europe/Zurich:20220422T110000 SUMMARY:Seminar in Numerical Analysis: Matthias Voigt (FernUni Schweiz) DESCRIPTION:We introduce a model reduction approach for linear time-invariant second-order systems based on positive real balanced truncation. Our method guarantees the preservation of passivity of the reduced-order model as well as the positive definiteness of the mass and stiffness matrices and admits an a priori gap metric error bound. Our construction of the second-order reduced model is based on the consideration of an internal symmetry structure and the invariant zeros of the system and their sign-characteristics\, for which we derive a normal form. The results are available in [1].\\r\\n[1] I. Dorschky\, T. Reis\, and M. Voigt. Balanced truncation model reduction for symmetric second order systems - a passivity-based approach. SIAM J. Matrix Anal. Appl.\, 42(4):1602--1635\, 2021.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:We introduce a model reduction approach for linear time-invariant second-order systems based on positive real balanced truncation.

Our method guarantees the preservation of passivity of the reduced-order model as well as the positive definiteness of the mass and stiffness matrices and admits an a priori gap metric error bound.

Our construction of the second-order reduced model is based on the consideration of an internal symmetry structure and the invariant zeros of the system and their sign-characteristics\, for which we derive a normal form.

The results are available in [1].

[1] I. Dorschky\, T. Reis\, and M. Voigt. Balanced truncation model reduction for symmetric second order systems - a passivity-based approach. SIAM J. Matrix Anal. Appl.\, 42(4):1602--1635\, 2021.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20220422T120000 END:VEVENT BEGIN:VEVENT UID:news1333@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220315T174614 DTSTART;TZID=Europe/Zurich:20220408T110000 SUMMARY:Seminar in Numerical Analysis: Ralf Hiptmair (ETH Zürich) DESCRIPTION:We consider scalar-valued shape functionals on sets of shapes which are small perturbations of a reference shape. The shapes are described by parameterizations and their closeness is induced by a Hilbert space structure on the parameter domain. We justify a heuristic for finding the best low-dimensional parameter subspace with respect to uniformly approximating a given shape functional. We also propose an adaptive algorithm for achieving a prescribed accuracy when representing the shape functional with a small number of shape parameters.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:We consider scalar-valued shape functionals on sets of shapes which are small perturbations of a reference shape. The shapes are described by parameterizations and their closeness is induced by a Hilbert space structure on the parameter domain. We justify a heuristic for finding the best low-dimensional parameter subspace with respect to uniformly approximating a given shape functional. We also propose an adaptive algorithm for achieving a prescribed accuracy when representing the shape functional with a small number of shape parameters.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20220408T120000 END:VEVENT BEGIN:VEVENT UID:news1334@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220325T150305 DTSTART;TZID=Europe/Zurich:20220401T110000 SUMMARY:Seminar in Numerical Analysis: Johannes Pfefferer (Technische Universität München) DESCRIPTION:Many areas of science and engineering involve optimal control of processes that are modeled through partial differential equations. This talk will introduce the theoretical foundation and numerical methods based on finite elements for solving PDE constrained optimal control problems. We will discuss different discretization concepts and corresponding discretization error estimates. The discussion will include the consideration of optimal control problems with control constraints as well as with state constraints.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:Many areas of science and engineering involve optimal control of processes that are modeled through partial differential equations. This talk will introduce the theoretical foundation and numerical methods based on finite elements for solving PDE constrained optimal control problems. We will discuss different discretization concepts and corresponding discretization error estimates. The discussion will include the consideration of optimal control problems with control constraints as well as with state constraints.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20220401T120000 END:VEVENT BEGIN:VEVENT UID:news1340@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220321T111220 DTSTART;TZID=Europe/Zurich:20220325T110000 SUMMARY:Seminar in Numerical Analysis: Stepan Shakhno (Ivan Franko National University of Lviv) DESCRIPTION:In this talk\, one- and two-step methods for solving nonlinear equations with nondifferentiable operators are proposed. These methods are based on two approaches: the use of derivatives and the use of divided differences. The local and semi-local convergence of the proposed methods is studied\, and the order of their convergence is established. We apply our results to the numerical solution of systems of nonlinear equations.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:In this talk\, one- and two-step methods for solving nonlinear equations with nondifferentiable operators are proposed. These methods are based on two approaches: the use of derivatives and the use of divided differences. The local and semi-local convergence of the proposed methods is studied\, and the order of their convergence is established. We apply our results to the numerical solution of systems of nonlinear equations.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20220325T120000 END:VEVENT BEGIN:VEVENT UID:news1335@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220310T115442 DTSTART;TZID=Europe/Zurich:20220318T110000 SUMMARY:Seminar in Numerical Analysis: Markus Bachmayr (Universität Mainz) DESCRIPTION:We consider the computational complexity of approximating elliptic PDEs with random coefficients by sparse product polynomial expansions. Except for special cases (for instance\, when the spatial discretisation limits the achievable overall convergence rate)\, previous approaches for a posteriori selection of polynomial terms and corresponding spatial discretizations do not guarantee optimal complexity in the sense of computational costs scaling linearly in the number of degrees of freedom. We show that one can achieve optimality of an adaptive Galerkin scheme for discretizations by spline wavelets in the spatial variable when a multiscale representation of the affinely parameterized random coefficients is used.\\r\\nM. Bachmayr and I. Voulis\, An adaptive stochastic Galerkin method based on multilevel expansions of random fields: Convergence and optimality\, arXiv:2109.09136 [https://arxiv.org/abs/2109.09136]\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:We consider the computational complexity of approximating elliptic PDEs with random coefficients by sparse product polynomial expansions. Except for special cases (for instance\, when the spatial discretisation limits the achievable overall convergence rate)\, previous approaches for *a posteriori* selection of polynomial terms and corresponding spatial discretizations do not guarantee optimal complexity in the sense of computational costs scaling linearly in the number of degrees of freedom. We show that one can achieve optimality of an adaptive Galerkin scheme for discretizations by spline wavelets in the spatial variable when a multiscale representation of the affinely parameterized random coefficients is used.

M. Bachmayr and I. Voulis\, *An adaptive stochastic Galerkin method based on multilevel expansions of random fields: Convergence and optimality*\, arXiv:2109.09136

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20220318T120000 END:VEVENT BEGIN:VEVENT UID:news1277@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211204T191942 DTSTART;TZID=Europe/Zurich:20211217T110000 SUMMARY:Seminar in Numerical Analysis: Eliane Bécache (POEMS\, CNRS\, INRIA\, ENSTA Paris\, Institut Polytechnique de Paris) DESCRIPTION:The PML method is one of the most widely used for the numerical simulation of wave propagation problems set in unbounded domains. However\, difficulties arise when the exterior domain has some complexity which prevents the use of classical approaches. For instance\, it is well-known that PML may be unstable for time-domain elastodynamic waves in some anisotropic materials. More recently\, it has also been noticed that standard PML cannot work in the presence of some dispersive materials. In some cases\, new stable PMLs have been designed.\\r\\nIn this talk\, we address the questions of well-posedness\, stability and convergence of standard and new models of PMLs in the context of electromagnetic waves for non-dispersive and dispersive materials.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:The PML method is one of the most widely used for the numerical simulation of wave propagation problems set in unbounded domains. However\, difficulties arise when the exterior domain has some complexity which prevents the use of classical approaches. For instance\, it is well-known that PML may be unstable for time-domain elastodynamic waves in some anisotropic materials. More recently\, it has also been noticed that standard PML cannot work in the presence of some dispersive materials. In some cases\, new stable PMLs have been designed.

In this talk\, we address the questions of well-posedness\, stability and convergence of standard and new models of PMLs in the context of electromagnetic waves for non-dispersive and dispersive materials.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20211217T120000 END:VEVENT BEGIN:VEVENT UID:news1276@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211204T143630 DTSTART;TZID=Europe/Zurich:20211210T110000 SUMMARY:Seminar in Numerical Analysis: Mike Botchev (Keldysh Institute of Applied Mathematics) DESCRIPTION:An efficient Krylov subspace algorithm for computing actions of the phi matrix function for large matrices is proposed. This matrix function is widely used in exponential time integration\, Markov chains\, network analysis\, and many other applications. Our algorithm is based on a reliable residual-based stopping criterion and a new efficient restarting procedure. We analyze residual convergence and prove\, for matrices with numerical range in the stable complex half-plane\, that the restarted method is guaranteed to converge for any Krylov subspace dimension. Numerical tests demonstrate the efficiency of our approach for solving large-scale evolution problems resulting from time-dependent PDEs discretized in space\, in particular\, diffusion and convection-diffusion problems.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:An efficient Krylov subspace algorithm for computing actions of the phi matrix function for large matrices is proposed. This matrix function is widely used in exponential time integration\, Markov chains\, network analysis\, and many other applications. Our algorithm is based on a reliable residual-based stopping criterion and a new efficient restarting procedure. We analyze residual convergence and prove\, for matrices with numerical range in the stable complex half-plane\, that the restarted method is guaranteed to converge for any Krylov subspace dimension. Numerical tests demonstrate the efficiency of our approach for solving large-scale evolution problems resulting from time-dependent PDEs discretized in space\, in particular\, diffusion and convection-diffusion problems.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20211210T120000 END:VEVENT BEGIN:VEVENT UID:news1258@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211025T123428 DTSTART;TZID=Europe/Zurich:20211203T110000 SUMMARY:Seminar in Numerical Analysis: Larisa Beilina (Chalmers tekniska högskola) DESCRIPTION:We will discuss how to apply an adaptive finite element method (AFEM) for the numerical solution of an electromagnetic volume integral equation. The solution of this equation is formulated as an optimal control problem for minimizing the Tikhonov regularization functional. A posteriori error estimates for the error in the obtained finite element reconstruction and the error in the Tikhonov functional will be presented.\\r\\nBased on these estimates\, different adaptive finite element algorithms are formulated. Numerical examples will show the efficiency of the proposed adaptive algorithms in improving the quality of 3D reconstruction of the target during microwave thermometry\, which is used in cancer therapies. This is joint work with the group of Biomedical Imaging at the Department of Electrical Engineering at CTH\, Chalmers.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:We will discuss how to apply an adaptive finite element method (AFEM) for the numerical solution of an electromagnetic volume integral equation. The solution of this equation is formulated as an optimal control problem for minimizing the Tikhonov regularization functional. A posteriori error estimates for the error in the obtained finite element reconstruction and the error in the Tikhonov functional will be presented.

Based on these estimates\, different adaptive finite element algorithms are formulated. Numerical examples will show the efficiency of the proposed adaptive algorithms in improving the quality of 3D reconstruction of the target during microwave thermometry\, which is used in cancer therapies. This is joint work with the group of Biomedical Imaging at the Department of Electrical Engineering at CTH\, Chalmers.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20211203T120000 END:VEVENT BEGIN:VEVENT UID:news1260@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211109T170252 DTSTART;TZID=Europe/Zurich:20211119T110000 SUMMARY:Seminar in Numerical Analysis: Jaap van der Vegt (Universiteit Twente) DESCRIPTION:In the numerical solution of partial differential equations\, it is frequently necessary to ensure that certain variables\, e.g.\, density\, pressure\, or probability density distribution\, remain within strict bounds. Strict observation of these bounds is crucial\; otherwise\, unphysical solutions will be obtained that might result in the failure of the numerical algorithm. Bounds on certain variables are generally ensured in discontinuous Galerkin (DG) discretizations using positivity preserving limiters\, which locally modify the solution to ensure that the constraints are satisfied\, while preserving higher order accuracy. In practice\, this approach is mostly limited to DG discretizations combined with explicit time integration methods. The combination of (positivity preserving) limiters in DG discretizations and implicit time integration methods results\, however\, in serious problems. Many positivity preserving limiters are not easy to apply in time-implicit DG discretizations and have a non-smooth formulation\, which hampers the use of standard Newton methods to solve the nonlinear algebraic equations resulting from the time-implicit DG discretization. This often results in poor convergence.\\r\\nIn this presentation\, we will discuss a different approach to ensure that a higher order accurate DG solution satisfies the positivity constraints. Instead of using a limiter\, we impose the positivity constraints directly on the algebraic equations resulting from a higher order accurate time-implicit DG discretization using techniques from mathematical optimization theory. This approach ensures that the positivity constraints are satisfied and does not affect the higher order accuracy of the time-implicit DG discretization. The resulting algebraic equations are then solved using a specially designed semi-smooth Newton method that is well suited to deal with the resulting nonlinear complementarity problem. We will demonstrate the algorithm on several parabolic model problems.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:

In the numerical solution of partial differential equations\, it is frequently necessary to ensure that certain variables\, e.g.\, density\, pressure\, or probability density distribution\, remain within strict bounds. Strict observation of these bounds is crucial\; otherwise\, unphysical solutions will be obtained that might result in the failure of the numerical algorithm. Bounds on certain variables are generally ensured in discontinuous Galerkin (DG) discretizations using positivity preserving limiters\, which locally modify the solution to ensure that the constraints are satisfied\, while preserving higher order accuracy. In practice\, this approach is mostly limited to DG discretizations combined with explicit time integration methods. The combination of (positivity preserving) limiters in DG discretizations and implicit time integration methods results\, however\, in serious problems. Many positivity preserving limiters are not easy to apply in time-implicit DG discretizations and have a non-smooth formulation\, which hampers the use of standard Newton methods to solve the nonlinear algebraic equations resulting from the time-implicit DG discretization. This often results in poor convergence.

In this presentation\, we will discuss a different approach to ensure that a higher order accurate DG solution satisfies the positivity constraints. Instead of using a limiter\, we impose the positivity constraints directly on the algebraic equations resulting from a higher order accurate time-implicit DG discretization using techniques from mathematical optimization theory. This approach ensures that the positivity constraints are satisfied and does not affect the higher order accuracy of the time-implicit DG discretization. The resulting algebraic equations are then solved using a specially designed semi-smooth Newton method that is well suited to deal with the resulting nonlinear complementarity problem. We will demonstrate the algorithm on several parabolic model problems.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20211119T120000 END:VEVENT BEGIN:VEVENT UID:news1257@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211025T121842 DTSTART;TZID=Europe/Zurich:20211112T110000 SUMMARY:Seminar in Numerical Analysis: Rolf Krause (Università della Svizzera italiana) DESCRIPTION:Non-convex minimization problems show up in manifold applications: non-linear elasticity\, phase field models\, fracture propagation\, or the training of neural networks. Traditional multilevel decompositions are the basic ingredient of the most efficient class of solution methods for linear systems\, i.e.\, of multigrid methods\, which make it possible to solve certain classes of linear systems with optimal complexity. The transfer of these concepts to non-linear problems\, however\, is not straightforward\, neither in terms of the design of the multilevel decomposition nor in terms of convergence properties. In this talk\, we will discuss multilevel decompositions for convex\, non-convex\, and possibly non-smooth minimization problems. We will discuss in detail how multilevel optimization methods can be constructed and analyzed\, and we will illustrate the sometimes significant gain in performance which can be achieved by multilevel minimization techniques. Examples from mechanics\, geophysics\, and machine learning will illustrate our discussion.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:Non-convex minimization problems show up in manifold applications: non-linear elasticity\, phase field models\, fracture propagation\, or the training of neural networks.

Traditional multilevel decompositions are the basic ingredient of the most efficient class of solution methods for linear systems\, i.e.\, of multigrid methods\, which make it possible to solve certain classes of linear systems with optimal complexity. The transfer of these concepts to non-linear problems\, however\, is not straightforward\, neither in terms of the design of the multilevel decomposition nor in terms of convergence properties. In this talk\, we will discuss multilevel decompositions for convex\, non-convex\, and possibly non-smooth minimization problems. We will discuss in detail how multilevel optimization methods can be constructed and analyzed\, and we will illustrate the sometimes significant gain in performance which can be achieved by multilevel minimization techniques. Examples from mechanics\, geophysics\, and machine learning will illustrate our discussion.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20211112T120000 END:VEVENT BEGIN:VEVENT UID:news1256@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211025T121402 DTSTART;TZID=Europe/Zurich:20211029T110000 SUMMARY:Seminar in Numerical Analysis: Jochen Garcke (Rheinische Friedrich-Wilhelms-Universität Bonn) DESCRIPTION:We present a conceptual framework that helps to bridge the knowledge gap between the two individual communities of machine learning and numerical simulation\, to identify potential combined approaches\, and to promote the development of hybrid systems. We give examples of different types of combinations using exemplary approaches of simulation-assisted machine learning and machine-learning-assisted simulation. We also discuss an advanced pairing where we see particular further potential for hybrid systems.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:We present a conceptual framework that helps to bridge the knowledge gap between the two individual communities of machine learning and numerical simulation\, to identify potential combined approaches\, and to promote the development of hybrid systems.

We give examples of different types of combinations using exemplary approaches of simulation-assisted machine learning and machine-learning-assisted simulation. We also discuss an advanced pairing where we see particular further potential for hybrid systems.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20211029T120000 END:VEVENT BEGIN:VEVENT UID:news1157@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210627T202913 DTSTART;TZID=Europe/Zurich:20210604T110000 SUMMARY:Seminar in Numerical Analysis: Ivan Dokmanić (Universität Basel) DESCRIPTION:This talk will be an overview of my group's research at the interface of deep learning and inverse problems. I will first describe the current (?) state of the field and then present a medley of our results\, including 1) a neural network architecture for wave-based inverse problems derived from Fourier integral operators\; 2) an approach to nonlinear traveltime tomography based on neural priors\; and 3) provably injective neural networks that are universal approximators of probability measures supported on low-dimensional manifolds. My secret hope is to spark discussions that could evolve into collaborations.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:This talk will be an overview of my group's research at the interface of deep learning and inverse problems. I will first describe the current (?) state of the field and then present a medley of our results\, including 1) a neural network architecture for wave-based inverse problems derived from Fourier integral operators\; 2) an approach to nonlinear traveltime tomography based on neural priors\; and 3) provably injective neural networks that are universal approximators of probability measures supported on low-dimensional manifolds. My secret hope is to spark discussions that could evolve into collaborations.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20210604T120000 END:VEVENT BEGIN:VEVENT UID:news1158@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210627T202830 DTSTART;TZID=Europe/Zurich:20210507T110000 SUMMARY:Seminar in Numerical Analysis: Erik Burman (University College London) DESCRIPTION:In many applications\, both in medical science and in the geosciences\, the accurate approximation of solutions to wave equations is an important component for optimisation or inverse identification. Examples include thermoacoustic imaging or high frequency ultrasound treatments in medicine (HIFU)\, or fault slip analysis in seismology. These problems have in common the need for computational solution of an inverse problem where the forward problem is set in a heterogeneous domain. Indeed\, typically the sound speed in the bulk domain jumps over material interfaces. Sometimes there is even a need for coupling of the acoustic and elastodynamic equations in the presence of liquid inclusions. In this talk\, we will give a snapshot of our ongoing work on these topics\, motivated by two such applications: HIFU and the propagation of seismic waves. After a brief introduction of the applications\, we will first discuss the analysis of some approximation methods for inverse initial value problems subject to the wave equation. We will then consider a hybrid high order method for the approximation of wave propagation in heterogeneous media\, using cut element techniques to avoid meshing of interfaces. Finally\, we will discuss some open problems that remain in order to understand the approximation of the inverse initial value problem in heterogeneous media using high order methods.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:In many applications\, both in medical science and in the geosciences\, the accurate approximation of solutions to wave equations is an important component for optimisation or inverse identification. Examples include thermoacoustic imaging or high frequency ultrasound treatments in medicine (HIFU)\, or fault slip analysis in seismology. These problems have in common the need for computational solution of an inverse problem where the forward problem is set in a heterogeneous domain. Indeed\, typically the sound speed in the bulk domain jumps over material interfaces. Sometimes there is even a need for coupling of the acoustic and elastodynamic equations in the presence of liquid inclusions. In this talk\, we will give a snapshot of our ongoing work on these topics\, motivated by two such applications: HIFU and the propagation of seismic waves. After a brief introduction of the applications\, we will first discuss the analysis of some approximation methods for inverse initial value problems subject to the wave equation. We will then consider a hybrid high order method for the approximation of wave propagation in heterogeneous media\, using cut element techniques to avoid meshing of interfaces. Finally\, we will discuss some open problems that remain in order to understand the approximation of the inverse initial value problem in heterogeneous media using high order methods.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20210507T120000 END:VEVENT BEGIN:VEVENT UID:news1184@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210627T202737 DTSTART;TZID=Europe/Zurich:20210430T110000 SUMMARY:Seminar in Numerical Analysis: Pieter Barendrecht (KAUST) DESCRIPTION:In this talk\, we're going to take a closer look at the basics of both univariate and bivariate splines\, including Bézier- and B-spline curves\, box splines and subdivision surfaces. Next\, we'll shift our focus to applications of smooth spline surfaces of arbitrary manifold topology within the realm of computer graphics. Finally\, a couple of aspects and applications of splines in the context of numerical methods will be discussed. Expect more illustrations than equations\, and in addition a couple of (interactive) live software demos!\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:In this talk\, we're going to take a closer look at the basics of both univariate and bivariate splines\, including Bézier- and B-spline curves\, box splines and subdivision surfaces. Next\, we'll shift our focus to applications of smooth spline surfaces of arbitrary manifold topology within the realm of computer graphics. Finally\, a couple of aspects and applications of splines in the context of numerical methods will be discussed. Expect more illustrations than equations\, and in addition a couple of (interactive) live software demos!

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20210430T120000 END:VEVENT BEGIN:VEVENT UID:news1167@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210627T202614 DTSTART;TZID=Europe/Zurich:20210423T110000 SUMMARY:Seminar in Numerical Analysis: Markus Melenk (TU Wien) DESCRIPTION:We consider the Helmholtz equation with piecewise analytic coefficients at large wavenumber k > 0. The interface where the coefficients jump is assumed to be analytic. We develop a k-explicit regularity theory for the solution that takes the form of a decomposition into two components: the first component is a piecewise analytic\, but highly oscillatory function\, and the second one has finite regularity but features wavenumber-independent bounds. This decomposition generalizes earlier decompositions of [MS10\, MS11\, EM11\, MSP12]\, which considered the Helmholtz equation with constant coefficients\, to the case of (piecewise) analytic coefficients. This regularity theory allows us to show for high order Galerkin discretizations (hp-FEM) of the Helmholtz equation that quasi-optimality is reached if (a) the approximation order p is selected as p = O(log k) and (b) the mesh size h is such that kh/p is sufficiently small. This extends the results of [MS10\, MS11\, EM11\, MSP12] about the onset of quasi-optimality of hp-FEM for the Helmholtz equation to the case of the heterogeneous Helmholtz equation.\\r\\nJoint work with: Maximilian Bernkopf (TU Wien)\, Théophile Chaumont-Frelet (Inria).\\r\\nReferences: [EM11] S. Esterhazy and J.M. Melenk\, On stability of discretizations of the Helmholtz equation\, in: Numerical Analysis of Multiscale Problems\, Graham et al.\, eds\, Springer 2012. [MS10] J.M. Melenk and S. Sauter\, Convergence Analysis for Finite Element Discretizations of the Helmholtz equation with Dirichlet-to-Neumann boundary conditions\, Math. Comp. 79:1871–1914\, 2010. [MS11] J.M. Melenk and S. Sauter\, Wavenumber explicit convergence analysis for finite element discretizations of the Helmholtz equation\, SIAM J. Numer. Anal.\, 49:1210–1243\, 2011. [MSP12] J.M. Melenk\, S. Sauter\, A. Parsania\, Generalized DG-methods for highly indefinite Helmholtz problems\, J. Sci. Comp. 57:536–581\, 2013.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:We consider the Helmholtz equation with piecewise analytic coefficients at large wavenumber k > 0. The interface where the coefficients jump is assumed to be analytic. We develop a k-explicit regularity theory for the solution that takes the form of a decomposition into two components: the first component is a piecewise analytic\, but highly oscillatory function\, and the second one has finite regularity but features wavenumber-independent bounds. This decomposition generalizes earlier decompositions of [MS10\, MS11\, EM11\, MSP12]\, which considered the Helmholtz equation with constant coefficients\, to the case of (piecewise) analytic coefficients. This regularity theory allows us to show for high order Galerkin discretizations (hp-FEM) of the Helmholtz equation that quasi-optimality is reached if (a) the approximation order p is selected as p = O(log k) and (b) the mesh size h is such that kh/p is sufficiently small. This extends the results of [MS10\, MS11\, EM11\, MSP12] about the onset of quasi-optimality of hp-FEM for the Helmholtz equation to the case of the heterogeneous Helmholtz equation.

Joint work with: Maximilian Bernkopf (TU Wien)\, Théophile Chaumont-Frelet (Inria).

References:

[EM11] S. Esterhazy and J.M. Melenk\, On stability of discretizations of the Helmholtz equation\, in: Numerical Analysis of Multiscale Problems\, Graham et al.\, eds.\, Springer 2012

[MS10] J.M. Melenk and S. Sauter\, Convergence Analysis for Finite Element Discretizations of the Helmholtz equation with Dirichlet-to-Neumann boundary conditions\, Math. Comp. 79:1871–1914\, 2010

[MS11] J.M. Melenk and S. Sauter\, Wavenumber explicit convergence analysis for finite element discretizations of the Helmholtz equation\, SIAM J. Numer. Anal.\, 49:1210–1243\, 2011

[MSP12] J.M. Melenk\, S. Sauter\, A. Parsania\, Generalized DG-methods for highly indefinite Helmholtz problems\, J. Sci. Comp. 57:536–581\, 2013

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20210423T120000 END:VEVENT BEGIN:VEVENT UID:news1154@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210627T202454 DTSTART;TZID=Europe/Zurich:20210416T110000 SUMMARY:Seminar in Numerical Analysis: Guy Gilboa (Technion - Israel Instit ute of Technology) DESCRIPTION:Recent studies on nonlinear eigenvalue problems show surprising analogies to harmonic analysis (e.g. Fourier or wavelets). In this talk we first show the total-variation (TV) transform\, based on the TV gradien t flow\, and its application in image processing. We then present new resu lts on analyzing gradient flows of homogeneous nonlinear operators (such a s the p-Laplacian). Our framework allows a thorough investigation of Dynam ic-Mode-Decomposition (DMD)\, a central dimensionality reduction method fo r time series data. We present analytic solutions of simple nonlinear case s\, reveal shortcomings of DMD and propose improved decomposition methods. \\r\\nFor further information about the seminar\, please visit this webpag e [t3://page?uid=1115]. X-ALT-DESC:Recent studies on nonlinear eigenvalue problems show surprisi ng analogies to harmonic analysis (e.g. Fourier or wavelets). \;In thi s talk we first show the total-variation (TV) transform\, based on the TV gradient flow\, and its application in image processing. We then present n ew results on analyzing gradient flows of homogeneous nonlinear operators (such as the p-Laplacian). Our framework allows a thorough investigation o f Dynamic-Mode-Decomposition (DMD)\, a central dimensionality reduction me thod for time series data. We present analytic solutions of simple nonline ar cases\, reveal shortcomings of DMD and propose improved decomposition m ethods.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20210416T120000 END:VEVENT BEGIN:VEVENT UID:news1161@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210627T202314 DTSTART;TZID=Europe/Zurich:20210409T110000 SUMMARY:Seminar in Numerical Analysis: Barbara Kaltenbacher (Universität K lagenfurt) DESCRIPTION:High intensity (focused) ultrasound HIFU is used in numerous me dical and industrial applications ranging from litotripsy and thermotherap y via ultrasound cleaning and welding to sonochemistry. In this talk\, we will highlight two computational aspects related to the relevant nonlinea r acoustic phenomena\, namely\\r\\n absorbing boundary conditions for the treatment of open domain problems\; optimization tasks for ultrasound fo cusing. \\r\\nStrictly speaking\, acoustic sound propagation takes place i n full space or at least in a domain that is typically much larger than th e region of interest Ω. To restrict attention to a bounded domain Ω\, e. g\, for computational purposes\, artificial reflections on the boundary ∂Ω have to be avoided. This can be done by imposing so-called absorbin g boundary conditions ABC that induce dissipation of outgoing waves. Here it will turn out to be crucial to take into account nonlinearity of the PD E also in these ABC. This is joint work with Igor Shevchenko (Imperial Co llege London). In the context of applications in HIFU\, focusing of nonli nearly propagating waves amounts to optimization problems. The design of u ltrasound excitation via piezoelectric transducers leads to a boundary con trol problem\; focusing high intensity ultrasound by a silicone lens requi res shape optimization. For both problem classes\, we will discuss the der ivation of gradient information in order to formulate optimality condition s and drive numerical optimization methods. This is joint work with Chris tian Clason (University of Duisburg-Essen)\, Vanja Nikolić (TU München) \, and Gunther Peichl (University of Graz). 
Finally we will provide an ou tlook on imaging with nonlinearly acoustic waves\, which amounts to identi fying spatially varying coefficients (sound speed and/or coefficient of nonlinearity) in the Westervelt equation. This is recent joint work with Masahiro Yamamoto (University of Tokyo) and William Rundell (Texas A&M Uni versity).\\r\\nFor further information about the seminar\, please visit th is webpage [t3://page?uid=1115]. X-ALT-DESC:High intensity (focused) ultrasound HIFU is used in numerous medical and industrial applications ranging from litotripsy and thermother apy via ultrasound cleaning and welding to sonochemistry. \;In this ta lk\, we will highlight two computational aspects related to the relevant n onlinear acoustic phenomena\, namely

- absorbing boundary conditions for the treatment of open domain problems\;
- optimization tasks for ultrasound focusing.

Strictly speaking\, acoustic sound propagation takes place in full space or at least in a domain that is typically much larger than the region of interest Ω. To restrict attention to a bounded domain Ω\, e.g.\, for computational purposes\, artificial reflections on the boundary ∂Ω have to be avoided. This can be done by imposing so-called absorbing boundary conditions (ABC) that induce dissipation of outgoing waves. Here it will turn out to be crucial to take into account the nonlinearity of the PDE also in these ABC. This is joint work with Igor Shevchenko (Imperial College London).

In the context of applications in HIFU\, focusing of nonlinearly propagating waves amounts to optimization problems. The design of ultrasound excitation via piezoelectric transducers leads to a boundary control problem\; focusing high intensity ultrasound by a silicone lens requires shape optimization. For both problem classes\, we will discuss the derivation of gradient information in order to formulate optimality conditions and drive numerical optimization methods. This is joint work with Christian Clason (University of Duisburg-Essen)\, Vanja Nikolić (TU München)\, and Gunther Peichl (University of Graz).

Finally\, we will provide an outlook on imaging with nonlinearly acoustic waves\, which amounts to identifying spatially varying coefficients (sound speed and/or coefficient of nonlinearity) in the Westervelt equation. This is recent joint work with Masahiro Yamamoto (University of Tokyo) and William Rundell (Texas A&M University).

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20210409T120000 END:VEVENT BEGIN:VEVENT UID:news1127@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210623T193706 DTSTART;TZID=Europe/Zurich:20201211T110000 SUMMARY:Seminar in Numerical Analysis: Heiko Gimperlein (Heriot-Watt Univer sity) DESCRIPTION:Diffusion processes beyond Brownian motion have recently attrac ted significant interest from different communities in mathematics\, the p hysical and biological sciences. They are described by partial differentia l equations involving nonlocal operators with singular non-integrable kern els\, such as fractional Laplacians. This talk discusses the challenges of their approximation by finite elements and discusses our recent results o n the a priori analysis of h\, p and hp-versions for the integral fraction al Laplacian\, as well as their preconditioning. \\r\\nFor further inform ation about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:Diffusion processes beyond Brownian motion have recently attr acted significant interest from different communities in mathematics\, the physical and biological sciences. They are described by partial different ial equations involving nonlocal operators with singular non-integrable ke rnels\, such as fractional Laplacians. This talk discusses the challenges of their approximation by finite elements and discusses our recent results on the a priori analysis of h\, p and hp-versions for the integral fracti onal Laplacian\, as well as their preconditioning. \;

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20201211T120000 END:VEVENT BEGIN:VEVENT UID:news1098@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210623T193612 DTSTART;TZID=Europe/Zurich:20201204T110000 SUMMARY:Seminar in Numerical Analysis: Bastian von Harrach (Goethe-Universi tät Frankfurt) DESCRIPTION:We derive a simple criterion that ensures uniqueness\, Lipschit z stability and global convergence of Newton's method for the finite dim ensional zero-finding problem of a continuously differentiable\, pointwis e convex and monotonic function. Our criterion merely requires to evaluat e the directional derivative of the forward function at finitely many eva luation points and for finitely many directions.\\r\\nWe then demonstrate that this result can be used to prove uniqueness\, stability and global c onvergence for an inverse coefficient problem with finitely many measurem ents. We consider the problem of determining an unknown inverse Robin tra nsmission coefficient in an elliptic PDE. Using a relation to monotonicit y and localized potentials techniques\, we show that a piecewise-constant coefficient on an a-priori known partition with a-priori known bounds is uniquely determined by finitely many boundary measurements and that it c an be uniquely and stably reconstructed by a globally convergent Newton i teration. We derive a constructive method to identify these boundary meas urements\, calculate the stability constant and give a numerical example. \\r\\n For further information about the seminar\, please visit this webpa ge [t3://page?uid=1115]. X-ALT-DESC:We derive a simple criterion that ensures uniqueness\, Lipsch itz \;stability and global convergence of Newton's method for the fini te \;dimensional zero-finding problem of a continuously differentiable \, \;pointwise convex and monotonic function. Our criterion merely req uires \;to evaluate the directional derivative of the forward function at \;finitely many evaluation points and for finitely many directions .

We then demonstrate that this result can be used to prove uniqueness\, stability and global convergence for an inverse coefficient problem with finitely many measurements. We consider the problem of determining an unknown inverse Robin transmission coefficient in an elliptic PDE. Using a relation to monotonicity and localized potentials techniques\, we show that a piecewise-constant coefficient on an a-priori known partition with a-priori known bounds is uniquely determined by finitely many boundary measurements and that it can be uniquely and stably reconstructed by a globally convergent Newton iteration. We derive a constructive method to identify these boundary measurements\, calculate the stability constant and give a numerical example.

For further information about the seminar\, please visit this webpage.

We examine how time step adaptivity can be used to control potential instability arising from non-Lipschitz terms for stochastic partial differential equations (SPDEs). I will give a brief introduction to SPDEs and illustrate the stability issue with the standard uniform step Euler method to motivate the adaptive method. I will present a strong convergence result and outline the steps of the proof. To illustrate the method we examine the stochastic Allen-Cahn\, Swift-Hohenberg\, and Kuramoto-Sivashinsky equations\, and finally will discuss a potential use of the adaptivity for the deterministic system. This is joint work with Stuart Campbell.

DTEND;TZID=Europe/Zurich:20201120T120000 END:VEVENT BEGIN:VEVENT UID:news1102@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210623T193410 DTSTART;TZID=Europe/Zurich:20201113T110000 SUMMARY:Seminar in Numerical Analysis: Gilles Vilmart (Université de Genè ve) DESCRIPTION:We show that the Strang splitting method applied to a diffusion -reaction equation with inhomogeneous general oblique boundary conditions is of order two when the diffusion equation is solved with the Crank-Nic olson method\, while order reduction occurs in general if using other Ru nge-Kutta schemes or even the exact flow itself for the diffusion part. We also show that this method recovers stationary states in contrast with sp litting methods in general.We prove these results when the source term onl y depends on the space variable. Numerical experiments suggest that the s econd order of convergence persists with general nonlinearities.\\r\\nThi s is joint work with Guillaume Bertoli (Université de Genève) and Chris tophe Besse (Institut de Mathématiques de Toulouse).\\r\\nFor further inf ormation about the seminar\, please visit this webpage [t3://page?uid=111 5]. X-ALT-DESC:We show that the Strang splitting method applied to a diffusi on-reaction \;equation with inhomogeneous general oblique boundary con ditions is of \;order two when the diffusion equation is solved with t he Crank-Nicolson \;method\, while order reduction occurs in general i f using other \;Runge-Kutta schemes or even the exact flow itself for the diffusion part. We also show that this method recovers stationary stat es in contrast with splitting methods in general.We prove these results wh en the source term only depends on the space \;variable. Numerical exp eriments suggest that the second order of \;convergence persists with general nonlinearities.

This is joint work with Guillaume Bertoli (Université de Genève) and Christophe Besse (Institut de Mathématiques de Toulouse).

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20201113T120000 END:VEVENT BEGIN:VEVENT UID:news1101@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210623T193336 DTSTART;TZID=Europe/Zurich:20201030T110000 SUMMARY:Seminar in Numerical Analysis: Alfio Borzi (Universität Würzburg) DESCRIPTION:The Liouville equation is the fundamental building block of mod els that govern the evolution of density functions of multi-particle syst ems. These models include different Fokker-Planck and Boltzmann equations that arise in many application fields ranging from gas dynamics to pedes trians' motion where the need arises to control these systems.\\r\\nThis talk provides an introduction to the formulation and solution of optimal control problems governed by the Liouville equation and related models. T he purpose of this framework is the design of robust controls to steer th e motion of particles\, pedestrians\, etc.\, where these agents are repre sented in terms of density functions. For this purpose\, expected-value c ost functionals are considered that include attracting potentials and dif ferent costs of the controls\, whereas the control mechanism in the gover ning models is part of the drift or is included in a collision term.\\r\\ nIn this talk\, theoretical and numerical results concerning ensemble opt imal control problems with Liouville\, Fokker-Planck and linear Boltzmann equations are presented.\\r\\n For further information about the seminar\ , please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:The Liouville equation is the fundamental building block of m odels that \;govern the evolution of density functions of multi-partic le systems. These \;models include different Fokker-Planck and Boltzma nn equations that arise \;in many application fields ranging from gas dynamics to pedestrians' motion \;where the need arises to control the se systems.

This talk provides an introduction to the formulation and solution of optimal control problems governed by the Liouville equation and related models. The purpose of this framework is the design of robust controls to steer the motion of particles\, pedestrians\, etc.\, where these agents are represented in terms of density functions. For this purpose\, expected-value cost functionals are considered that include attracting potentials and different costs of the controls\, whereas the control mechanism in the governing models is part of the drift or is included in a collision term.

In this talk\, theoretical and numerical results concerning ensemble optimal control problems with Liouville\, Fokker-Planck and linear Boltzmann equations are presented.

For further information about the seminar\, please visit this webpage.

We propose an efficient algorithm for the treatment of Volterra integral operators based on H2-matrix compression techniques. The algorithm is built in an evolutionary manner and is therefore well suited for problems where the right-hand side depends on the solution itself and is not known for all time steps a priori. The resulting algorithm is of linear complexity O(N) w.r.t. the number of time steps and requires O(N) active memory. The memory consumption can be reduced to O(log N) for kernels of convolution type using the Laplace inversion techniques introduced by Lubich et al.\; the connection to the FOCQ algorithm is drawn. We demonstrate the effectiveness of our algorithm on a series of numerical examples.

For further information about the seminar\, please visit this webpage.

Many models in spatial statistics are based on Gaussian Matérn fields. Motivated by the relation between this class of Gaussian random fields and stochastic partial differential equations (PDEs)\, we consider the numerical solution of fractional-order elliptic stochastic PDEs with additive spatial white noise on a bounded Euclidean domain.

We propose an approximation supported by a rigorous error analysis which shows different notions of convergence at explicit and sharp rates. We furthermore discuss the computational complexity of the proposed method. Finally\, we present several numerical experiments\, which attest the theoretical outcomes\, as well as a statistical application where we use the method for inference\, i.e.\, for parameter estimation given data\, and for spatial prediction.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20191220T120000 END:VEVENT BEGIN:VEVENT UID:news931@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20191210T105146 DTSTART;TZID=Europe/Zurich:20191213T110000 SUMMARY:Seminar in Numerical Analysis: Théophile Chaumont-Frelet (INRIA\, Nice/Sophia Antipolis) DESCRIPTION:The Helmholtz equation models the propagation of a time-harmoni c wave. It has received much attention since it is widely employed in appl ications\, but still challenging to numerically simulate in the high-frequ ency regime. In this seminar\, we focus on acoustic waves for the sake of simplicity and consider finite element discretizations. The main goal of the presentation is to highlight the improved performance of high order me thods (as compared to linear finite elements) when the frequency is large. We will very briefly cover the zero-frequency case\, that corresponds to the well-studied Poisson equation. We take advantage of this classical se tting to recall central concepts of the finite element theory such as quas i-optimality and interpolation error. The second part of the seminar is d evoted to the high-frequency case. We show that without restrictive assump tions on the mesh size\, the finite element method is unstable\, and quasi -optimality is lost. We provide a detailed analysis\, as well as numerical examples\, which show that higher order methods are less affected by this phenomena\, and thus more suited to discretize high-frequency problems. Before drawing our main conclusions\, we briefly discuss advanced topics\, such as the use of unfitted meshes in highly heterogeneous media and mesh refinements around re-entrant corners.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115]. X-ALT-DESC:The Helmholtz equation models the propagation of a time-harmo

DTEND;TZID=Europe/Zurich:20191213T120000 END:VEVENT BEGIN:VEVENT UID:news926@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20191130T081127 DTSTART;TZID=Europe/Zurich:20191206T110000 SUMMARY:Seminar in Numerical Analysis: Ira Neitzel (Universität Bonn) DESCRIPTION:joint work with Dominik Hafemeyer\, Florian Mannel and Boris Ve xler\\r\\n We consider a convex optimal control problem governed by a par tial differential equation in one space dimension which is controlled by a right-hand-side living in the space of functions with bounded variation . These functions tend to favor optimal controls that are piecewise const ant with often finitely many jump poins. We are interested in deriving fi nite element discretization error estimates for the controls when the sta te ist discretized with usual piecewise linear finite elements\, and the c ontrols is either variationally discrete or piecwise constant. Due to the structure of the objective function\, usual techniques for estimating th e control error cannot be applied. Instead\, these have to be derived fro m (suboptimal) error estimates for the state\, which can later be improved . \\r\\nFor further information about the seminar\, please visit this web page [t3://page?uid=current]. X-ALT-DESC:joint work with Dominik Hafemeyer\, Florian Mannel and Boris Vexler

We consider a convex optimal control problem governed by a partial differential equation in one space dimension which is controlled by a right-hand side living in the space of functions with bounded variation. These functions tend to favor optimal controls that are piecewise constant with often finitely many jump points. We are interested in deriving finite element discretization error estimates for the controls when the state is discretized with usual piecewise linear finite elements\, and the control is either variationally discrete or piecewise constant. Due to the structure of the objective function\, usual techniques for estimating the control error cannot be applied. Instead\, these have to be derived from (suboptimal) error estimates for the state\, which can later be improved.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20191206T120000 END:VEVENT BEGIN:VEVENT UID:news930@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20191107T104152 DTSTART;TZID=Europe/Zurich:20191115T110000 SUMMARY:Seminar in Numerical Analysis: Michael Multerer (USI Lugano) DESCRIPTION:The numerical simulation of physical phenomena is very well und erstood given that the input data are given exactly. However\, in practic e\, the collection of these data is usually subjected to measurement erro rs. The goal of uncertainty quantification is to assess those errors and their possible impact on simulation results.In this talk\, we address diff erent numerical aspects of uncertainty quantification in elliptic partial differential equations on random domains. Starting from the modelling of random domains via random vector fields\, wediscuss how the corresponding Karhunen-Loève expansion can efficiently becomputed. For the discretisat ion of the partial differential equation\, we apply an adaptive Galerkin framework. An a posteriori error estimator is presented\, which allows fo r the problem-dependent iterative refinement of all discretisation paramet ers and the assessment of the achieved error reduction. The proposed appr oach is demonstrated in numerical benchmark problems.\\r\\nFor further in formation about the seminar\, please visit this webpage [t3://page?uid=11 15]. X-ALT-DESC:The numerical simulation of physical phenomena is very well u nderstood given that \;the input data are given exactly. However\, in practice\, the collection of these \;data is usually subjected to meas urement errors. The goal of uncertainty quantification \;is to assess those errors and their possible impact on simulation results.In this talk\ , we address different numerical aspects of uncertainty quantification&nbs p\;in elliptic partial differential equations on random domains. 
Starting from the modelling of random domains via random vector fields\, we discuss how the corresponding Karhunen-Loève expansion can efficiently be computed. For the discretisation of the partial differential equation\, we apply an adaptive Galerkin framework. An a posteriori error estimator is presented\, which allows for the problem-dependent iterative refinement of all discretisation parameters and the assessment of the achieved error reduction. The proposed approach is demonstrated in numerical benchmark problems.

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20191115T120000 END:VEVENT BEGIN:VEVENT UID:news920@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20191104T081039 DTSTART;TZID=Europe/Zurich:20191108T110000 SUMMARY:Seminar in Numerical Analysis: Omar Lakkis (University of Sussex) DESCRIPTION:Aposteriori error estimates provide a rigorous foundation for t he derivation of efficient adaptive algorithms for the approximation of so lutions of partial differential equations (PDEs). While the literature i s rich with results for the approximation of elliptic and parabolic PDEs\, it is much less developed for the hyperbolic equations such as the acoust ic or elastic wave equations. In this talk\, I will review some of the " standard" aposteriori results for the scalar linear wave equation\, includ ing those of [1] and [2]\, and present recent improvements and further dev elopments to lower order Sobolev norms based on Baker’s Trick [3] for ba ckward Euler schemes. Subsequent focus will be given to practically rele vant methods such as Verlet\, Cosine\, or Newmark methods\, a popular exam ple of which is the Leap-frog method [4].\\r\\nNotes: This is based on joi nt work with E.H. Georgoulis\, C. Makridakis and J.M. Virtanen.\\r\\nRefer ences:\\r\\n[1] W. Bangerth and R. Rannacher\, J. Comput. Acoust. 9(2):575 –591\, 2001.[2] C. Bernardi and E. Süli\, Math. Models Methods Appl. Sc i. 15(2):199--225\, 2005.[3] E. H. Georgoulis\, O. Lakkis\, and C. Makrida kis. IMA J. Numer. Anal.\, 33(4):1245–1264\, 2013\, http://arxiv.org/abs /1003.3641[4] E. H. Georgoulis\, O. Lakkis\, C. Makridakis\, and J. M. Vir tanen. SIAM J. Numer. Anal.\, 54(1)\, 2016\, http://arxiv.org/abs/1411.757 2 \\r\\nFor further information about the seminar\, please visit this web page [t3://page?uid=1115]. X-ALT-DESC:Aposteriori error estimates provide a rigorous foundation for the derivation of efficient adaptive algorithms for the approximation of solutions of partial differential equations (PDEs). 
While the literature is rich with results for the approximation of elliptic and parabolic PDEs\, it is much less developed for hyperbolic equations such as the acoustic or elastic wave equations. In this talk\, I will review some of the "standard" a posteriori results for the scalar linear wave equation\, including those of [1] and [2]\, and present recent improvements and further developments to lower order Sobolev norms based on Baker's Trick [3] for backward Euler schemes. Subsequent focus will be given to practically relevant methods such as Verlet\, Cosine\, or Newmark methods\, a popular example of which is the Leap-frog method [4].

Notes: This is based on joint work with E.H. Georgoulis\, C. Makridakis and J.M. Virtanen.

References:

[1] W. Bangerth and R. Rannacher\, J. Comput. Acoust. 9(2):575–591\, 2001.

[2] C. Bernardi and E. Süli\, Math. Models Methods Appl. Sci. 15(2):199–225\, 2005.

[3] E. H. Georgoulis\, O. Lakkis\, and C. Makridakis\, IMA J. Numer. Anal.\, 33(4):1245–1264\, 2013\, http://arxiv.org/abs/1003.3641

[4] E. H. Georgoulis\, O. Lakkis\, C. Makridakis\, and J. M. Virtanen\, SIAM J. Numer. Anal.\, 54(1)\, 2016\, http://arxiv.org/abs/1411.7572

For further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20191108T120000 END:VEVENT BEGIN:VEVENT UID:news932@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20191023T141150 DTSTART;TZID=Europe/Zurich:20191101T110000 SUMMARY:Seminar in Numerical Analysis: Giacomo De Souza (EPFL) DESCRIPTION:Traditional explicit Runge--Kutta schemes\, though computationally inexpensive\, are inefficient for the integration of stiff ordinary differential equations due to stability issues. Conversely\, implicit schemes are stable but can be overly expensive due to the solution of possibly large nonlinear systems with Newton-like methods\, whose convergence is\, moreover\, not guaranteed for large time steps. Explicit stabilized schemes such as the Runge--Kutta--Chebyshev method (RKC) represent a viable compromise\, as the width of their stability domain grows quadratically with respect to the number of function evaluations\, thus offering enhanced stability properties at a reasonable computational cost. These methods are particularly efficient for systems arising from the space discretization of parabolic partial differential equations (PDEs).\\r\\nThe efficiency of these methods deteriorates as the system becomes stiffer\, even if the stiffness is induced by only a few degrees of freedom. In the framework of discretized parabolic PDEs\, the number of function evaluations has to be chosen inversely proportional to the smallest element size in order to achieve stability\, thus largely wasting computational resources on locally refined meshes. We first tackle this issue by replacing the right-hand side of the PDE with an averaged force\, which is obtained by damping the high modes using the dissipative effect of the equation itself and which is cheap to evaluate. Combining RKC methods with the averaged force gives rise to multirate RKC schemes\, for which the number of expensive function evaluations is independent of the size of the small elements.\\r\\nThe stability properties of our method are demonstrated on a model problem\, and numerical experiments confirm that the stability bottleneck caused by a few fine mesh elements can be overcome without sacrificing accuracy.\\r\\nFor further information about the seminar\, please visit this webpage [t3://page?uid=1115].

DTEND;TZID=Europe/Zurich:20191101T120000 END:VEVENT BEGIN:VEVENT UID:news887@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190514T111137 DTSTART;TZID=Europe/Zurich:20190525T110000 SUMMARY:Seminar in Numerical Analysis: Florian Faucher (Université de Pau) DESCRIPTION:We study the inverse problem associated with the propagation of time-harmonic waves. In the seismic context\, the available measurements correspond to partial reflection data\, obtained from one-side illumination (only from the Earth's surface). The inverse problem aims at recovering the subsurface Earth medium parameters\, and we employ the Full Waveform Inversion (FWI) method\, which relies on an iterative minimization of the difference between the measurement and the simulation.\\r\\nWe investigate the deployment of new devices developed in the acoustic setting: dual-sensors\, which are able to capture both the pressure field and the vertical velocity of the waves. For solving the inverse problem\, we define a new cost function\, adapted to these two types of data and based upon reciprocity. We first note that the stability of the problem can be shown to be Lipschitz\, assuming piecewise linear parameters. In addition\, reciprocity waveform inversion allows a separation between the observational and numerical acquisitions. In fact\, the numerical sources do not have to coincide with the observational ones\, offering new possibilities to create adapted computational acquisitions and consequently reducing the numerical cost. We illustrate our approach with three-dimensional medium reconstructions\, where we start with minimal information on the target models. We also extend the methodology to elasticity.\\r\\nEventually\, if time allows\, we shall explore the model representation in numerical seismic inversion\, where the adaptive eigenspace method appears as a promising approach to a compromise between the number of unknowns and the resolution.\\r\\nReferences\\r\\n[1] G. Alessandrini\, M. V. de Hoop\, F. Faucher\, R. Gaburro and E. Sincich\, Inverse problem for the Helmholtz equation with Cauchy data: reconstruction with conditional well-posedness driven iterative regularization\, ESAIM: M2AN (2019).\\r\\n[2] E. Beretta\, M. V. De Hoop\, F. Faucher\, and O. Scherzer\, Inverse boundary value problem for the Helmholtz equation: quantitative conditional Lipschitz stability estimates. SIAM Journal on Mathematical Analysis\, 48(6)\, pp. 3962–3983 (2016).\\r\\n[3] M. J. Grote\, M. Kray\, and U. Nahum\, Adaptive eigenspace method for inverse scattering problems in the frequency domain. Inverse Problems\, 33(2)\, 025006 (2017).\\r\\n[4] H. Barucq\, F. Faucher\, and O. Scherzer\, Eigenvector Model Descriptors for Solving an Inverse Problem of Helmholtz Equation. arXiv preprint arXiv:1903.08991 (2019).\\r\\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20190524T120000 END:VEVENT BEGIN:VEVENT UID:news885@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190507T193740 DTSTART;TZID=Europe/Zurich:20190517T110000 SUMMARY:Seminar in Numerical Analysis: Thomas Wihler (Universität Bern) DESCRIPTION:A wide variety of (fixed-point) iterative methods for the solution of nonlinear equations (in Hilbert spaces) exists. In many cases\, such schemes can be interpreted as iterative local linearization methods\, which can be obtained by applying a suitable linear preconditioning operator to the original (nonlinear) equation. Based on this observation\, we will derive a unified abstract framework which recovers some prominent iterative schemes. Furthermore\, in the context of numerical solution methods for nonlinear partial differential equations\, we propose a combination of the iterative linearization approach and the classical Galerkin discretization method\, thereby giving rise to the so-called iterative linearization Galerkin (ILG) methodology. Moreover\, still on an abstract level\, based on elliptic reconstruction techniques\, we derive a posteriori error estimates which separately take into account the discretization and linearization errors. Furthermore\, we propose an adaptive algorithm which provides an efficient interplay between these two effects. In addition\, some iterative methods and numerical computations in the specific context of finite element discretizations of quasilinear stationary conservation laws will be presented.

For further information about the seminar\, please visit this webpage. DTEND;TZID=Europe/Zurich:20190517T120000 END:VEVENT BEGIN:VEVENT UID:news861@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190423T105445 DTSTART;TZID=Europe/Zurich:20190503T110000 SUMMARY:Seminar in Numerical Analysis: Rémi Abgrall (Universität Zürich) DESCRIPTION:Since the work of B. Wendroff and P. Lax\, we know what should be the correct form of the numerical approximation of a conservation law. We also know\, after Hou and Le Floch\, what kind of problems we are facing when the flux form is not respected.\\r\\nHowever\, this is not the end of the story. All these works use a one-dimensional way of thinking: the main player is the normal flux across cell interfaces. In addition\, there are several excellent numerical methods that do not fit the form of the Lax--Wendroff theorem.\\r\\nIn this talk\, I will introduce a more general setting and show that any reasonable scheme for conservation laws can be put in that framework. In addition\, I will show that an equivalent flux formulation\, with a suitable definition of what a flux is\, can be explicitly constructed (and computed)\, so that any reasonable scheme can be put in finite volume form.\\r\\nI will end the talk by showing some applications: how to systematically construct entropy stable schemes\, or\, starting from the non-conservative form of a system (say\, the Euler equations)\, how to construct a suitable discretisation. And more.\\r\\nThis is joint work with P. Bacigaluppi (now postdoc at ETH) and S. Tokareva (now at Los Alamos).\\r\\nFor further information about the seminar\, please visit this webpage.

DTEND;TZID=Europe/Zurich:20190503T120000 END:VEVENT BEGIN:VEVENT UID:news857@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190402T151711 DTSTART;TZID=Europe/Zurich:20190412T110000 SUMMARY:Seminar in Numerical Analysis: Chris Stolk (University of Amsterdam) DESCRIPTION:In this talk I discuss a recently developed finite difference discretisation of the Helmholtz equation and some solution methods for the resulting linear systems. In high-frequency Helmholtz problems\, pollution errors\, due to numerical dispersion\, are a main source of error. We will show that such errors can be strongly reduced compared to other schemes\, including high-order finite elements\, by selecting coefficients for the discrete system that maximise the accuracy of geometrical optics phases and amplitudes. Such low-dispersion schemes are of interest by themselves\, but can also be used to improve the efficiency of multigrid schemes. Computation times for a solver combining a multigrid method with domain decomposition compare well to those of alternative methods.

For further information about the seminar\, please visit this webpage. DTEND;TZID=Europe/Zurich:20190412T120000 END:VEVENT BEGIN:VEVENT UID:news844@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190320T092425 DTSTART;TZID=Europe/Zurich:20190329T110000 SUMMARY:Seminar in Numerical Analysis: Robert Scheichl (Universität Heidelberg) DESCRIPTION:Sample-based multilevel uncertainty quantification tools\, such as multilevel Monte Carlo\, multilevel quasi-Monte Carlo or multilevel stochastic collocation\, have recently gained huge popularity due to their potential to efficiently compute robust estimates of quantities of interest (QoI) derived from PDE models that are subject to uncertainties in the input data (coefficients\, boundary conditions\, geometry\, etc.). Especially for problems with low regularity\, they are asymptotically optimal in that they can provide statistics about such QoIs at (asymptotically) the same cost as it takes to compute one sample to the target accuracy. However\, when the data uncertainty is localised at random locations\, such as for manufacturing defects in composite materials\, the cost per sample can be reduced significantly by adapting the spatial discretisation individually for each sample. Moreover\, the adaptive process typically produces coarser approximations that can be used directly for the multilevel uncertainty quantification. In this talk\, I will present two novel developments that aim to exploit these ideas.\\r\\nIn the first part I will present Continuous Level Monte Carlo (CLMC)\, a generalisation of multilevel Monte Carlo (MLMC) to a continuous framework where the level parameter is a continuous variable. This provides a natural framework for a sample-wise adaptive refinement strategy\, with a goal-oriented error estimator as our new level parameter. We introduce a practical CLMC estimator (and algorithm) and prove a complexity theorem showing the same rate of complexity as for MLMC. Also\, we show that it is possible to make the CLMC estimator unbiased with respect to the true quantity of interest. Finally\, we provide two numerical experiments which test the CLMC framework alongside a sample-wise adaptive refinement strategy\, showing clear gains over a standard MLMC approach with uniform grid hierarchies. In the second part\, I will show how to extend the sample-adaptive strategy to multilevel stochastic collocation (MLSC) methods\, providing a complexity estimate and numerical experiments for an MLSC method that is fully adaptive in the dimension\, in the polynomial degrees and in the spatial discretisation.\\r\\nThis is joint work with Gianluca Detommaso (Bath)\, Tim Dodwell (Exeter) and Jens Lang (Darmstadt).

For further information about the seminar\, please visit this webpage. DTEND;TZID=Europe/Zurich:20190329T120000 END:VEVENT BEGIN:VEVENT UID:news829@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190225T091300 DTSTART;TZID=Europe/Zurich:20190301T110000 SUMMARY:Seminar in Numerical Analysis: Markus Zimmermann (Technische Universität München) DESCRIPTION:Solution spaces are sets of engineering solutions\, i.e.\, designs that satisfy all engineering requirements. Seeking solution spaces rather than just one possibly optimal solution is numerically challenging\, but it can significantly simplify the development of systems in the presence of uncertainty and complexity. For different system components\, solution spaces are decomposed into independent target regions that enable distributed development work and encompass uncertainty without a particular underlying uncertainty model. A basic stochastic algorithm to maximize so-called box-shaped solution spaces is presented. Two recent extensions are discussed: first\, representations as Cartesian products of two- and higher-dimensional spaces and\, second\, so-called solution-compensation spaces\, where design variables are grouped according to the order in which they need to be specified. Applications to vehicle development for crash and driving dynamics are presented.

For further information about the seminar\, please visit this webpage. DTEND;TZID=Europe/Zurich:20190301T120000 END:VEVENT BEGIN:VEVENT UID:news823@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190214T222417 DTSTART;TZID=Europe/Zurich:20190222T110000 SUMMARY:Seminar in Numerical Analysis: Edwin Mai (Universität der Bundeswehr München) DESCRIPTION:With an increasing range of applications\, shape optimisation problems receive more and more interest in the engineering community\, while solving such problems is still a demanding task. In this talk the example of a Stokes channel flow with the objective to reduce the energy dissipation is considered\, to which an optimise-then-discretise approach is applied. Starting with a gradient descent method\, based on the analytical shape derivative and the adjoint approach\, an initial optimisation procedure is discussed\, and differences in the shape derivative representation and their numerical implications are highlighted. Subsequently\, a possible way to derive shape Hessian information in a so-called tangent-on-reverse method\, i.e. combining the adjoint and sensitivity approaches\, is introduced. The shape Hessian is utilised in a reduced SQP method for the equality-constrained channel flow problem comprising the objective\, the PDE and additional geometric constraints. In contrast to a one-shot approach\, the reduced approach requires the state and adjoint equations to be solved exactly for each optimisation step. Finally\, some features of the numerical implementation using the finite element software package FEniCS and the obtained results are presented to show the superiority of using Hessian information.

For further information about the seminar\, please visit this webpage. DTEND;TZID=Europe/Zurich:20190222T120000 END:VEVENT BEGIN:VEVENT UID:news412@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181206T092622 DTSTART;TZID=Europe/Zurich:20181214T110000 SUMMARY:Seminar in Numerical Analysis: Martin Burger (Universität Erlangen) DESCRIPTION:In this talk we will discuss nonlinear spectral decompositions in Banach spaces\, which shed new light on multiscale methods in imaging and open new possibilities for filtering techniques. We provide a novel geometric interpretation of nonlinear eigenvalue problems in Banach spaces and provide conditions under which gradient flows for norms or seminorms yield a spectral decomposition. We will see that under these conditions standard variational schemes are equivalent to the gradient flows for arbitrarily large time steps\, recovering previous results\, e.g. for the one-dimensional total variation flow\, as special cases.\\r\\nFor further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20181214T120000 END:VEVENT BEGIN:VEVENT UID:news409@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181203T104240 DTSTART;TZID=Europe/Zurich:20181207T110000 SUMMARY:Seminar in Numerical Analysis: Zakaria Belhachmi (LMIA - Université de Haute-Alsace) DESCRIPTION:We present some ideas on modelling some PDE-based geometry inpainting problems with diffusion operators. The objective is to provide a closed-loop continuous-to-discrete model. The loop consists of an initial family of simple PDEs depending on some parameters selected at the discrete level from a posteriori information. The choice of these parameters modifies the system of equations dynamically\, and the resulting models converge (in the sense of Gamma-convergence) to a limit continuous model that captures the jump set of the restored image. We also discuss compression-based inpainting within this approach.

For further informa tion about the seminar\, please visit this webpage. DTEND;TZID=Europe/Zurich:20181207T120000 END:VEVENT BEGIN:VEVENT UID:news393@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181116T170413 DTSTART;TZID=Europe/Zurich:20181123T110000 SUMMARY:Seminar in Numerical Analysis: Assyr Abdulle (EPFL) DESCRIPTION:In this talk we discuss several challenges that arise in Bayesi an inference for ordinary and partial differential equations. The numeric al solvers used to compute the forward model of such problems induce a p ropagation of the discretization error into the posterior measure for the parameters of interest. This uncertainty originating from the numerical approximation error can be accounted for using probabilistic numerical me thods. New probabilistic numerical methods for ordinary differential equa tions that share geometric properties of the true solution will be presen ted in the first part of this talk. In the second part of the talk\, w e will discuss a Bayesian approach for inverse problems involving ellipti c partial differential equations with multiple scales. Computing repeated forward problems in a multiscale context is computationnally too expensi ve and we propose a new strategy based on the use of "effective" forw ard models originating from homogenization theory. Convergence of the tru e posterior distribution for the parameters of interest towards the homo genized posterior is established via G-convergence for the Hellinger metr ic. A computational approach based on numerical homogenization and reduce d basis methods is proposed for an efficient evaluation of the forward mo del in a Markov Chain Monte-Carlo procedure. References: A. Abdulle\ , G. Garegnani\, Random time step probabilistic methods for uncertainty q uantification in chaotic and geometric numerical integration\, Preprint ( 2018)\, submitted for publication. A. Abdulle\, A. 
Di Blasio\, Numerical homogenization and model order reduction for multiscale inverse problems\ , to appear in SIAM MMS. A. Abdulle\, A. Di Blasio\, A Bayesian numerical homogenization method for elliptic multiscale inverse problems\, Preprin t (2018)\, submitted for publication. For further information about the s eminar\, please visit this webpage. X-ALT-DESC: In this talk we discuss several challenges that arise in Bayesi an inference for ordinary and partial differential equations. The numeric al solvers used to compute the forward model of such problems induce a p ropagation of the discretization error into the posterior measure for the parameters of interest. This uncertainty originating from the numerical approximation error can be accounted for using probabilistic numerical me thods. New probabilistic numerical methods for ordinary differential equa tions that share geometric properties of the true solution will be presen ted in the first part of this talk. \;

In the second part of the talk\, we will discuss a Bayesian approach for inverse problems involving elliptic partial differential equations with multiple scales. Computing repeated forward problems in a multiscale context is computationally too expensive and we propose a new strategy based on the use of "effective" forward models originating from homogenization theory. Convergence of the true posterior distribution for the parameters of interest towards the homogenized posterior is established via G-convergence for the Hellinger metric. A computational approach based on numerical homogenization and reduced basis methods is proposed for an efficient evaluation of the forward model in a Markov Chain Monte Carlo procedure.

References:

A. Abdulle\, G. Garegnani\, Random time step probabilistic methods for uncertainty quantification in chaotic and geometric numerical integration\, Preprint (2018)\, submitted for publication.

A. Abdulle\, A. Di Blasio\, Numerical homogenization and model order reduction for multiscale inverse problems\, to appear in SIAM MMS.

A. Abdulle\, A. Di Blasio\, A Bayesian numerical homogenization method for elliptic multiscale inverse problems\, Preprint (2018)\, submitted for publication.

For further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20181123T120000
END:VEVENT
BEGIN:VEVENT
UID:news307@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181024T123048
DTSTART;TZID=Europe/Zurich:20181116T113000
SUMMARY:Seminar in Numerical Analysis: Ludovic Métivier (Université Grenoble Alpes)
DESCRIPTION:Full waveform inversion (FWI) is a powerful high-resolution seismic imaging method\, used in academia for global- and regional-scale imaging\, and in the oil & gas industry for exploration purposes. It can be understood as a PDE-constrained optimization problem: the misfit between recorded seismic data and synthetic seismic data\, computed as the solution of a wave propagation problem\, is reduced over a space of parameters controlling the wave propagation. One of the main limitations of FWI is its dependency on the accuracy of the initial guess of the solution. This limitation is due to the non-convexity of the standard least-squares misfit function used to measure the discrepancy between recorded and synthetic data\, and the use of local optimization techniques to reduce this misfit. In recent studies\, we have investigated the use of a misfit function based on an optimal transport distance to mitigate this issue. The convexity of this distance with respect to shifted patterns is the main reason why we are interested in it\, as it can be seen as a proxy for convexity with respect to the wave velocities we want to reconstruct. In this talk\, we will give an overview of this work\, starting by introducing basic concepts of optimal transport\, before detailing the difficulties of using the optimal transport distance in the framework of FWI and reviewing the solutions we have proposed.\\r\\nFor further information about the seminar\, please visit this webpage.
X-ALT-DESC:Full waveform inversion (FWI) is a powerful high-resolution seismic imaging method\, used in academia for global- and regional-scale imaging\, and in the oil & gas industry for exploration purposes. It can be understood as a PDE-constrained optimization problem: the misfit between recorded seismic data and synthetic seismic data\, computed as the solution of a wave propagation problem\, is reduced over a space of parameters controlling the wave propagation. One of the main limitations of FWI is its dependency on the accuracy of the initial guess of the solution. This limitation is due to the non-convexity of the standard least-squares misfit function used to measure the discrepancy between recorded and synthetic data\, and the use of local optimization techniques to reduce this misfit. In recent studies\, we have investigated the use of a misfit function based on an optimal transport distance to mitigate this issue. The convexity of this distance with respect to shifted patterns is the main reason why we are interested in it\, as it can be seen as a proxy for convexity with respect to the wave velocities we want to reconstruct. In this talk\, we will give an overview of this work\, starting by introducing basic concepts of optimal transport\, before detailing the difficulties of using the optimal transport distance in the framework of FWI and reviewing the solutions we have proposed.\nFor further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20181116T123000
END:VEVENT
BEGIN:VEVENT
UID:news359@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181031T192501
DTSTART;TZID=Europe/Zurich:20181109T110000
SUMMARY:Seminar in Numerical Analysis: Stefan Sauter (Universität Zürich)
DESCRIPTION:In our talk we consider the Maxwell equations in the frequency domain\, discretized by Nédélec hp-finite elements.
We develop a stability and convergence analysis which is explicit with respect to the wave number k\, the mesh size h\, and the local polynomial degree p. It turns out that\, for the choice p >= log(k)\, the discretization does not suffer from the so-called pollution effect. This is known for high-frequency acoustic scattering. However\, the analysis of the Maxwell equations requires the development of twelve additional theoretical tools which we call "the twelve apostles". In our talk\, we explain these "apostles" and how they are needed to prove the stability and convergence of our method.\\r\\nThis talk comprises joint work with Prof. Markus Melenk\, TU Wien.\\r\\nFor further information about the seminar\, please visit this webpage.
X-ALT-DESC:In our talk we consider the Maxwell equations in the frequency domain\, discretized by Nédélec hp-finite elements. We develop a stability and convergence analysis which is explicit with respect to the wave number k\, the mesh size h\, and the local polynomial degree p. It turns out that\, for the choice p >= log(k)\, the discretization does not suffer from the so-called pollution effect. This is known for high-frequency acoustic scattering. However\, the analysis of the Maxwell equations requires the development of twelve additional theoretical tools which we call "the twelve apostles". In our talk\, we explain these "apostles" and how they are needed to prove the stability and convergence of our method.\nThis talk comprises joint work with Prof. Markus Melenk\, TU Wien.\nFor further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20181109T120000
END:VEVENT
BEGIN:VEVENT
UID:news350@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181023T144703
DTSTART;TZID=Europe/Zurich:20181102T110000
SUMMARY:Seminar in Numerical Analysis: Maryna Kachanovska (ENSTA ParisTech)
DESCRIPTION:In this work we consider the problem of sound propagation in a bronchial network. Asymptotically\, this phenomenon can be modelled by a weighted wave equation posed on a fractal (i.e. self-similar) 1D tree. The principal difficulty for the numerical resolution of the problem is the 'infiniteness' of the geometry. To deal with this issue\, we will present transparent boundary conditions\, used to truncate the computational domain to a finite subtree.\\r\\nThe construction of such transparent conditions relies on the approximation of the Dirichlet-to-Neumann (DtN) operator\, whose symbol is a meromorphic function that satisfies a certain non-linear functional equation. We present two approaches to approximate the DtN in the time domain\, alternative to the low-order absorbing boundary conditions\, which appear inefficient in this case.\\r\\nThe first approach stems from the use of convolution quadrature (cf. [Lubich 1988]\, [Banjai\, Lubich\, Sayas 2016])\, which consists in constructing an exact DtN for a problem semi-discretized in time. In this case the combination of the explicit leapfrog method for the volume terms and the implicit trapezoid rule for the boundary terms leads to a second-order scheme stable under the classical CFL condition.\\r\\nThe second approach is motivated by the Engquist-Majda ABCs (cf. [Engquist\, Majda 1977])\, and consists in approximating the DtN by local operators\, obtained from the truncation of the meromorphic series which represents the symbol of the DtN. We show how the respective error can be controlled and provide some complexity estimates.\\r\\nThis is joint work with Patrick Joly (INRIA\, France) and Adrien Semin (TU Darmstadt\, Germany).
\\r\\nFor further information about the seminar\, please visit this webpage.
X-ALT-DESC:In this work we consider the problem of sound propagation in a bronchial network. Asymptotically\, this phenomenon can be modelled by a weighted wave equation posed on a fractal (i.e. self-similar) 1D tree. The principal difficulty for the numerical resolution of the problem is the 'infiniteness' of the geometry. To deal with this issue\, we will present transparent boundary conditions\, used to truncate the computational domain to a finite subtree.\nThe construction of such transparent conditions relies on the approximation of the Dirichlet-to-Neumann (DtN) operator\, whose symbol is a meromorphic function that satisfies a certain non-linear functional equation. We present two approaches to approximate the DtN in the time domain\, alternative to the low-order absorbing boundary conditions\, which appear inefficient in this case.\nThe first approach stems from the use of convolution quadrature (cf. [Lubich 1988]\, [Banjai\, Lubich\, Sayas 2016])\, which consists in constructing an exact DtN for a problem semi-discretized in time. In this case the combination of the explicit leapfrog method for the volume terms and the implicit trapezoid rule for the boundary terms leads to a second-order scheme stable under the classical CFL condition.\nThe second approach is motivated by the Engquist-Majda ABCs (cf. [Engquist\, Majda 1977])\, and consists in approximating the DtN by local operators\, obtained from the truncation of the meromorphic series which represents the symbol of the DtN. We show how the respective error can be controlled and provide some complexity estimates.\nThis is joint work with Patrick Joly (INRIA\, France) and Adrien Semin (TU Darmstadt\, Germany).\nFor further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20181102T120000
END:VEVENT
BEGIN:VEVENT
UID:news344@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181012T175433
DTSTART;TZID=Europe/Zurich:20181019T110000
SUMMARY:Seminar in Numerical Analysis: Martin Rumpf (Universität Bonn)
DESCRIPTION:We investigate a generalization of cubic splines to Riemannian manifolds. Spline curves are defined as minimizers of the spline energy - a combination of the Riemannian path energy and the time integral of the squared covariant derivative of the path velocity - under suitable interpolation conditions. A variational time discretization for the spline energy leads to a constrained optimization problem over discrete paths on the manifold. Existence of continuous and discrete spline curves is established using the direct method in the calculus of variations. Furthermore\, the convergence of discrete spline paths to a continuous spline curve follows from the Γ-convergence of the discrete to the continuous spline energy. Finally\, selected example settings are discussed\, including splines on embedded finite-dimensional manifolds\, on a high-dimensional manifold of discrete shells with applications in surface processing\, and on the infinite-dimensional shape manifold of viscous rods. This is based on joint work with Behrend Heeren and Benedikt Wirth.\\r\\nFor further information about the seminar\, please visit this webpage.
X-ALT-DESC:We investigate a generalization of cubic splines to Riemannian manifolds. Spline curves are defined as minimizers of the spline energy - a combination of the Riemannian path energy and the time integral of the squared covariant derivative of the path velocity - under suitable interpolation conditions. A variational time discretization for the spline energy leads to a constrained optimization problem over discrete paths on the manifold. Existence of continuous and discrete spline curves is established using the direct method in the calculus of variations.
Furthermore\, the convergence of discrete spline paths to a continuous spline curve follows from the Γ-convergence of the discrete to the continuous spline energy. Finally\, selected example settings are discussed\, including splines on embedded finite-dimensional manifolds\, on a high-dimensional manifold of discrete shells with applications in surface processing\, and on the infinite-dimensional shape manifold of viscous rods. This is based on joint work with Behrend Heeren and Benedikt Wirth.\nFor further information about the seminar\, please visit this webpage.
DTEND;TZID=Europe/Zurich:20181019T120000
END:VEVENT
BEGIN:VEVENT
UID:news223@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T181333
DTSTART;TZID=Europe/Zurich:20180601T110000
SUMMARY:Seminar in Numerical Analysis: Kenneth Duru (Ludwig-Maximilians-Universität München)
DESCRIPTION:High-order accurate and explicit time-stable solvers are well suited for hyperbolic wave propagation problems. However\, because of the complexities of real geometries\, internal interfaces\, nonlinear boundary/interface conditions and the presence of disparate spatial and temporal scales present in real media and sources\, discontinuities and sharp wave fronts become fundamental features of the solutions. Thus\, high-order accurate\, geometrically flexible and adaptive numerical algorithms are critical for high-fidelity and efficient simulations of wave phenomena in many applications. I will present a physics-based numerical flux suitable for inter-element and boundary conditions in discontinuous Galerkin approximations of first-order hyperbolic PDEs. Using this physics-based numerical penalty-flux\, we will develop provably energy-stable discontinuous Galerkin approximations of the elastic waves in complex and discontinuous media. By construction the numerical flux is upwind and yields a discrete energy estimate analogous to the continuous energy estimate.
The discrete energy estimates hold for conforming and non-conforming curvilinear elements. The ability to handle non-conforming curvilinear meshes allows for flexible adaptive mesh refinement strategies. The numerical scheme has been implemented in ExaHyPE\, a simulation engine for hyperbolic PDEs on adaptive structured meshes\, for exascale supercomputers. I will show 3D numerical experiments demonstrating stability and high-order accuracy. Finally\, we present a large-scale geophysical regional wave propagation problem in a heterogeneous Earth model with geologically constrained media heterogeneity and geometrically complex free-surface topography.
X-ALT-DESC:High-order accurate and explicit time-stable solvers are well suited for hyperbolic wave propagation problems. However\, because of the complexities of real geometries\, internal interfaces\, nonlinear boundary/interface conditions and the presence of disparate spatial and temporal scales present in real media and sources\, discontinuities and sharp wave fronts become fundamental features of the solutions. Thus\, high-order accurate\, geometrically flexible and adaptive numerical algorithms are critical for high-fidelity and efficient simulations of wave phenomena in many applications. I will present a physics-based numerical flux suitable for inter-element and boundary conditions in discontinuous Galerkin approximations of first-order hyperbolic PDEs. Using this physics-based numerical penalty-flux\, we will develop provably energy-stable discontinuous Galerkin approximations of the elastic waves in complex and discontinuous media. By construction the numerical flux is upwind and yields a discrete energy estimate analogous to the continuous energy estimate. The discrete energy estimates hold for conforming and non-conforming curvilinear elements. The ability to handle non-conforming curvilinear meshes allows for flexible adaptive mesh refinement strategies.
The numerical scheme has been implemented in ExaHyPE\, a simulation engine for hyperbolic PDEs on adaptive structured meshes\, for exascale supercomputers. I will show 3D numerical experiments demonstrating stability and high-order accuracy. Finally\, we present a large-scale geophysical regional wave propagation problem in a heterogeneous Earth model with geologically constrained media heterogeneity and geometrically complex free-surface topography.
DTEND;TZID=Europe/Zurich:20180601T120000
END:VEVENT
BEGIN:VEVENT
UID:news222@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T180942
DTSTART;TZID=Europe/Zurich:20180525T113000
SUMMARY:Seminar in Numerical Analysis: Ludovic Métivier (Université Grenoble Alpes)
DESCRIPTION:Full waveform inversion (FWI) is a powerful high-resolution seismic imaging method\, used in academia for global- and regional-scale imaging\, and in the oil & gas industry for exploration purposes. It can be understood as a PDE-constrained optimization problem: the misfit between recorded seismic data and synthetic seismic data\, computed as the solution of a wave propagation problem\, is reduced over a space of parameters controlling the wave propagation. One of the main limitations of FWI is its dependency on the accuracy of the initial guess of the solution. This limitation is due to the non-convexity of the standard least-squares misfit function used to measure the discrepancy between recorded and synthetic data\, and the use of local optimization techniques to reduce this misfit. In recent studies\, we have investigated the use of a misfit function based on an optimal transport distance to mitigate this issue. The convexity of this distance with respect to shifted patterns is the main reason why we are interested in it\, as it can be seen as a proxy for convexity with respect to the wave velocities we want to reconstruct.
In this talk\, we will give an overview of this work\, starting by introducing basic concepts of optimal transport\, before detailing the difficulties of using the optimal transport distance in the framework of FWI and reviewing the solutions we have proposed.
X-ALT-DESC:Full waveform inversion (FWI) is a powerful high-resolution seismic imaging method\, used in academia for global- and regional-scale imaging\, and in the oil & gas industry for exploration purposes. It can be understood as a PDE-constrained optimization problem: the misfit between recorded seismic data and synthetic seismic data\, computed as the solution of a wave propagation problem\, is reduced over a space of parameters controlling the wave propagation. One of the main limitations of FWI is its dependency on the accuracy of the initial guess of the solution. This limitation is due to the non-convexity of the standard least-squares misfit function used to measure the discrepancy between recorded and synthetic data\, and the use of local optimization techniques to reduce this misfit. In recent studies\, we have investigated the use of a misfit function based on an optimal transport distance to mitigate this issue. The convexity of this distance with respect to shifted patterns is the main reason why we are interested in it\, as it can be seen as a proxy for convexity with respect to the wave velocities we want to reconstruct. In this talk\, we will give an overview of this work\, starting by introducing basic concepts of optimal transport\, before detailing the difficulties of using the optimal transport distance in the framework of FWI and reviewing the solutions we have proposed.
DTEND;TZID=Europe/Zurich:20180525T123000
END:VEVENT
BEGIN:VEVENT
UID:news221@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T175831
DTSTART;TZID=Europe/Zurich:20180518T110000
SUMMARY:Seminar in Numerical Analysis: Holger Fröning (Universität Heidelberg)
DESCRIPTION:We are observing a continuous increase in concurrency and heterogeneity for computing systems of any scale\, ranging from small mobile devices to huge datacenters\, and driven by a steady demand for more computing power. One of the prime examples of an application with virtually unlimited computational requirements is machine learning\, in particular deep neural networks (DNNs). At the level of datacenters\, DNN training has already led to a ubiquitous use of graphics processing units (GPUs)\, forming a prime example of specialization for computational improvement. Still\, this application is strongly hindered by insufficient compute power and by scalability limitations. In contrast\, mobile architectures for DNN inference are still nascent\, and a large number of proposals have been published in recent years. Both applications\, training and inference\, can furthermore benefit greatly from algorithmic optimizations that reduce the computational requirements. This talk presents a short introduction to the application\, a summary of our observations\, and our own research on reduced precision via extreme forms of quantization. Finally\, this talk will offer some opinions on anticipated research problems.
X-ALT-DESC:We are observing a continuous increase in concurrency and heterogeneity for computing systems of any scale\, ranging from small mobile devices to huge datacenters\, and driven by a steady demand for more computing power. One of the prime examples of an application with virtually unlimited computational requirements is machine learning\, in particular deep neural networks (DNNs).
At the level of datacenters\, DNN training has already led to a ubiquitous use of graphics processing units (GPUs)\, forming a prime example of specialization for computational improvement. Still\, this application is strongly hindered by insufficient compute power and by scalability limitations. In contrast\, mobile architectures for DNN inference are still nascent\, and a large number of proposals have been published in recent years. Both applications\, training and inference\, can furthermore benefit greatly from algorithmic optimizations that reduce the computational requirements. This talk presents a short introduction to the application\, a summary of our observations\, and our own research on reduced precision via extreme forms of quantization. Finally\, this talk will offer some opinions on anticipated research problems.
DTEND;TZID=Europe/Zurich:20180518T120000
END:VEVENT
BEGIN:VEVENT
UID:news220@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T175646
DTSTART;TZID=Europe/Zurich:20180504T110000
SUMMARY:Seminar in Numerical Analysis: Jan Hamaekers (Fraunhofer SCAI)
DESCRIPTION:In this talk\, we introduce a new scheme for the efficient numerical treatment of the electronic Schrödinger equation for molecules. It is based on the combination of a many-body expansion\, which corresponds to the so-called bond order dissection ANOVA approach\, with a hierarchy of basis sets of increasing order. Here\, the energy is represented as a finite sum of contributions associated to subsets of nuclei and basis sets in a telescoping-sum-like fashion. Under the assumption of data locality of the electronic density (nearsightedness of electronic matter)\, the terms of this expansion decay rapidly and higher terms may be neglected. We further extend the approach in a dimension-adaptive fashion to generate quasi-optimal approximations\, i.e. a specific truncation of the hierarchical series such that the total benefit is maximized for a fixed amount of costs.
This way\, we are able to achieve substantial speed-up factors compared to conventional first-principles methods\, depending on the molecular system under consideration. In particular\, the method can deal efficiently with molecular systems which include only a small active part that needs to be described by accurate but expensive models. Finally\, we discuss applying such a multi-level many-body decomposition in the context of machine learning for many-body systems.
X-ALT-DESC:In this talk\, we introduce a new scheme for the efficient numerical treatment of the electronic Schrödinger equation for molecules. It is based on the combination of a many-body expansion\, which corresponds to the so-called bond order dissection ANOVA approach\, with a hierarchy of basis sets of increasing order. Here\, the energy is represented as a finite sum of contributions associated to subsets of nuclei and basis sets in a telescoping-sum-like fashion. Under the assumption of data locality of the electronic density (nearsightedness of electronic matter)\, the terms of this expansion decay rapidly and higher terms may be neglected. We further extend the approach in a dimension-adaptive fashion to generate quasi-optimal approximations\, i.e. a specific truncation of the hierarchical series such that the total benefit is maximized for a fixed amount of costs. This way\, we are able to achieve substantial speed-up factors compared to conventional first-principles methods\, depending on the molecular system under consideration. In particular\, the method can deal efficiently with molecular systems which include only a small active part that needs to be described by accurate but expensive models. Finally\, we discuss applying such a multi-level many-body decomposition in the context of machine learning for many-body systems.
DTEND;TZID=Europe/Zurich:20180504T120000
END:VEVENT
BEGIN:VEVENT
UID:news219@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T175418
DTSTART;TZID=Europe/Zurich:20180427T110000
SUMMARY:Seminar in Numerical Analysis: Pierre-Henri Tournier (UPMC - University Pierre and Marie Curie)
DESCRIPTION:This work deals with preconditioning the time-harmonic Maxwell equations with absorption\, where the preconditioner is constructed using two-level overlapping Additive Schwarz Domain Decomposition\, and the PDE is discretised using finite-element methods of fixed\, arbitrary order. The theory shows that if the absorption is large enough\, and if the subdomain and coarse mesh diameters are chosen appropriately\, then classical two-level overlapping Additive Schwarz Domain Decomposition preconditioning performs optimally – in the sense that GMRES converges in a wavenumber-independent number of iterations – for the problem with absorption. This work is an extension of the theory proposed in [1] for the Helmholtz equation. Numerical experiments illustrate this theoretical result and also (i) explore replacing the PEC boundary conditions on the subdomains by impedance boundary conditions\, and (ii) show that the preconditioner for the problem with absorption is also an effective preconditioner for the problem with no absorption. The numerical results include examples arising from applications: a problem with absorption arising from medical imaging shows the robustness of the preconditioner against heterogeneity\, and a scattering problem by the COBRA cavity shows good scalability of the preconditioner with up to 3000 processors. Finally\, additional numerical results for the elastic wave equation are presented for benchmarks in seismic inversion.\\r\\n[1] I. G. Graham\, E. A. Spence\, and E. Vainikko. Domain decomposition preconditioning for high-frequency Helmholtz problems with absorption. Mathematics of Computation\, 86(307):2089–2127\, 2017.
X-ALT-DESC:This work deals with preconditioning the time-harmonic Maxwell equations with absorption\, where the preconditioner is constructed using two-level overlapping Additive Schwarz Domain Decomposition\, and the PDE is discretised using finite-element methods of fixed\, arbitrary order. The theory shows that if the absorption is large enough\, and if the subdomain and coarse mesh diameters are chosen appropriately\, then classical two-level overlapping Additive Schwarz Domain Decomposition preconditioning performs optimally – in the sense that GMRES converges in a wavenumber-independent number of iterations – for the problem with absorption. This work is an extension of the theory proposed in [1] for the Helmholtz equation. Numerical experiments illustrate this theoretical result and also (i) explore replacing the PEC boundary conditions on the subdomains by impedance boundary conditions\, and (ii) show that the preconditioner for the problem with absorption is also an effective preconditioner for the problem with no absorption. The numerical results include examples arising from applications: a problem with absorption arising from medical imaging shows the robustness of the preconditioner against heterogeneity\, and a scattering problem by the COBRA cavity shows good scalability of the preconditioner with up to 3000 processors. Finally\, additional numerical results for the elastic wave equation are presented for benchmarks in seismic inversion.\n[1] I. G. Graham\, E. A. Spence\, and E. Vainikko. Domain decomposition preconditioning for high-frequency Helmholtz problems with absorption. Mathematics of Computation\, 86(307):2089–2127\, 2017.
DTEND;TZID=Europe/Zurich:20180427T120000
END:VEVENT
BEGIN:VEVENT
UID:news218@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T180010
DTSTART;TZID=Europe/Zurich:20180413T110000
SUMMARY:Seminar in Numerical Analysis: Philipp Morgenstern (Leibniz Universität Hannover)
DESCRIPTION:We introduce a mesh refinement algorithm for the Adaptive Isogeometric Method using multivariate T-splines. We investigate linear independence of the T-splines\, nestedness of the T-spline spaces\, and linear complexity in the sense of a uniform upper bound on the ratio of generated and marked elements\, which is crucial for a later proof of rate-optimality of the method. Altogether\, this work paves the way for a provably rate-optimal Adaptive Isogeometric Method with T-splines in any space dimension.\\r\\nAs an outlook to future work\, we outline an approach for the handling of zero knot intervals and multiple lines in the interior of the domain\, which are used in CAD applications for controlling the continuity of the spline functions\, and we also sketch basic ideas for the local refinement of two-dimensional meshes that do not have tensor-product structure.
X-ALT-DESC:We introduce a mesh refinement algorithm for the Adaptive Isogeometric Method using multivariate T-splines. We investigate linear independence of the T-splines\, nestedness of the T-spline spaces\, and linear complexity in the sense of a uniform upper bound on the ratio of generated and marked elements\, which is crucial for a later proof of rate-optimality of the method.
Altogether\, this work paves the way for a provably rate-optimal Adaptive Isogeometric Method with T-splines in any space dimension.\nAs an outlook to future work\, we outline an approach for the handling of zero knot intervals and multiple lines in the interior of the domain\, which are used in CAD applications for controlling the continuity of the spline functions\, and we also sketch basic ideas for the local refinement of two-dimensional meshes that do not have tensor-product structure.
DTEND;TZID=Europe/Zurich:20180413T120000
END:VEVENT
BEGIN:VEVENT
UID:news217@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T174831
DTSTART;TZID=Europe/Zurich:20180323T110000
SUMMARY:Seminar in Numerical Analysis: Gregor Gantner (TU Wien)
DESCRIPTION:Since the advent of isogeometric analysis (IGA) in 2005\, the finite element method (FEM) and the boundary element method (BEM) with splines have become an active field of research. The central idea of IGA is to use the same functions for the approximation of the solution of the considered partial differential equation (PDE) as for the representation of the problem geometry in computer-aided design (CAD). Usually\, CAD is based on tensor-product splines. To allow for adaptive refinement\, several extensions of these have emerged\, e.g.\, hierarchical splines\, T-splines\, and LR-splines. In view of geometry-induced generic singularities and the fact that isogeometric methods employ higher-order ansatz functions\, the gain of adaptive refinement (resp. loss for uniform refinement) is huge.\\r\\nIn this talk\, we first consider an adaptive FEM with hierarchical splines of arbitrary degree for linear elliptic PDE systems of second order with Dirichlet boundary condition for arbitrary dimension d≥2. We assume that the problem geometry can be parametrized over the d-dimensional unit cube. We propose a refinement strategy to generate a sequence of locally refined meshes and corresponding discrete solutions.
Adaptivity is driven by some weighted-residual a posteriori error estimator. In [1]\, we proved linear convergence of the error estimator with optimal algebraic rate.\\r\\nNext\, we consider an adaptive BEM with hierarchical splines of arbitrary degree for weakly-singular integral equations of the first kind that arise from the solution of linear elliptic PDE systems of second order with constant coefficients and Dirichlet boundary condition. We assume that the boundary of the geometry is the union of surfaces that can be parametrized over the unit square. Again\, we propose a refinement strategy to generate a sequence of locally refined meshes and corresponding discrete solutions\, where adaptivity is driven by some weighted-residual a posteriori error estimator. In [2]\, we proved linear convergence of the error estimator with optimal algebraic rate. In contrast to prior works\, which are restricted to the Laplace model problem\, our analysis allows for arbitrary elliptic PDE operators of second order with constant coefficients.\\r\\nFinally\, for one-dimensional boundaries\, we investigate an adaptive BEM with standard splines instead of hierarchical splines. We modify the corresponding algorithm so that it additionally uses knot multiplicity increase\, which results in local smoothness reduction of the ansatz space. In [3]\, we proved linear convergence of the employed weighted-residual error estimator with optimal algebraic rate.\\r\\nREFERENCES\\r\\n[1] G. Gantner\, D. Haberlik\, and D. Praetorius\, Adaptive IGAFEM with optimal convergence rates: Hierarchical B-splines. Math. Models Methods Appl. Sci.\, Vol. 27\, 2017.\\r\\n[2] G. Gantner\, Optimal adaptivity for splines in finite and boundary element methods\, PhD thesis\, TU Wien\, 2017.\\r\\n[3] M. Feischl\, G. Gantner\, A. Haberl\, and D. Praetorius\, Adaptive 2D IGA boundary element methods. Eng. Anal. Bound. Elem.\, Vol. 62\, 2016. 
X-ALT-DESC:Since the advent of isogeometric analysis (IGA) in 2005\, the fi nite element method (FEM) and the boundary element method (BEM) with splin es have become an active field of research. The central idea of IGA is to use the same functions for the approximation of the solution of the consid ered partial differential equation (PDE) as for the representation of the problem geometry in computer aided design (CAD). Usually\, CAD is based on tensor-product splines. To allow for adaptive refinement\, several extens ions of these have emerged\, e.g.\, hierarchical splines\, T-splines\, and LR-splines. In view of geometry induced generic singularities and the fac t that isogeometric methods employ higher-order ansatz functions\, the gai n of adaptive refinement (resp. loss for uniform refinement) is huge.\nIn this talk\, we first consider an adaptive FEM with hierarchical splines of arbitrary degree for linear elliptic PDE systems of second order with Dir ichlet boundary condition for arbitrary dimension d≥2. We assume that th e problem geometry can be parametrized over the d-dimensional unit cube. W e propose a refinement strategy to generate a sequence of locally refined meshes and corresponding discrete solutions. Adaptivity is driven by some weighted-residual a posteriori error estimator. In [1]\, we proved linear convergence of the error estimator with optimal algebraic rate.\nNext\, we consider an adaptive BEM with hierarchical splines of arbitrary degree fo r weakly-singular integral equations of the first kind that arise from the solution of linear elliptic PDE systems of second order with constant coe fficients and Dirichlet boundary condition. We assume that the boundary of the geometry is the union of surfaces that can be parametrized over the u nit square. 
Again\, we propose a refinement strategy to generate a sequenc e of locally refined meshes and corresponding discrete solutions\, where a daptivity is driven by some weighted-residual a posteriori error estimator . In [2]\, we proved linear convergence of the error estimator with optima l algebraic rate. In contrast to prior works\, which are restricted to the Laplace model problem\, our analysis allows for arbitrary elliptic PDE op erators of second order with constant coefficients.\nFinally\, for one-dim ensional boundaries\, we investigate an adaptive BEM with standard splines instead of hierarchical splines. We modify the corresponding algorithm so that it additionally uses knot multiplicity increase which results in loc al smoothness reduction of the ansatz space. In [3]\, we proved linear con vergence of the employed weighted-residual error estimator with optimal al gebraic rate.\n

This is a joint work with Charles M. Elliott (University of Warwick\, UK)\, Ralf Kornhuber (Free University Berlin\, Germany) and Thomas Ranner (University of Leeds\, UK). DTEND;TZID=Europe/Zurich:20160527T120000 END:VEVENT BEGIN:VEVENT UID:news240@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T205652 DTSTART;TZID=Europe/Zurich:20160520T110000 SUMMARY:Seminar in Numerical Analysis: Julien Diaz (INRIA) DESCRIPTION:Seismic imaging techniques such as Reverse Time Migration (RTM) or Full Waveform Inversion (FWI) can be applied in the time domain or in the frequency domain. After recalling the principle of RTM and discussing the advantages and drawbacks of both approaches\, we focus on the frequency domain. We show the usefulness of Discontinuous Galerkin methods for solving the acoustic and elastodynamic wave equations\, and we apply the Interior Penalty Discontinuous Galerkin method (IPDG) to the modelling of elasto/acoustic coupling. We then present an alternative method\, the Hybridizable Discontinuous Galerkin method (HDG)\, which reduces the number of unknowns of the global linear system thanks to the introduction of a Lagrange multiplier defined only on the faces of the cells of the mesh. We illustrate the efficiency of HDG with respect to IPDG thanks to comparisons on academic and industrial test cases. X-ALT-DESC:Seismic imaging techniques such as Reverse Time Migration (RTM) or Full Waveform Inversion (FWI) can be applied in the time domain or in the frequency domain. After recalling the principle of RTM and discussing the advantages and drawbacks of both approaches\, we focus on the frequency domain. We show the usefulness of Discontinuous Galerkin methods for solving the acoustic and elastodynamic wave equations\, and we apply the Interior Penalty Discontinuous Galerkin method (IPDG) to the modelling of elasto/acoustic coupling. 
We then present an alternative method\, the Hybridizable Discontinuous Galerkin method (HDG)\, which reduces the number of unknowns of the global linear system thanks to the introduction of a Lagrange multiplier defined only on the faces of the cells of the mesh. We illustrate the efficiency of HDG with respect to IPDG thanks to comparisons on academic and industrial test cases. DTEND;TZID=Europe/Zurich:20160520T120000 END:VEVENT BEGIN:VEVENT UID:news241@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T205837 DTSTART;TZID=Europe/Zurich:20160513T110000 SUMMARY:Seminar in Numerical Analysis: Frédéric Nataf (CNRS Paris 6) DESCRIPTION:Optimized Schwarz methods (OSM) are very popular methods which were introduced by P.L. Lions for elliptic problems and B. Despres for propagative wave phenomena. One drawback is the lack of theoretical results for variable-coefficient problems and overlapping decompositions. We build here a coarse space for which the convergence rate of the two-level method is guaranteed regardless of the regularity of the coefficients. We do this by introducing a symmetrized variant of the ORAS (Optimized Restricted Additive Schwarz) algorithm. Numerical results on nearly incompressible elasticity and the Stokes system are shown for systems with hundreds of millions of degrees of freedom on high performance computers. X-ALT-DESC:Optimized Schwarz methods (OSM) are very popular methods which were introduced by P.L. Lions for elliptic problems and B. Despres for propagative wave phenomena. One drawback is the lack of theoretical results for variable-coefficient problems and overlapping decompositions. We build here a coarse space for which the convergence rate of the two-level method is guaranteed regardless of the regularity of the coefficients. We do this by introducing a symmetrized variant of the ORAS (Optimized Restricted Additive Schwarz) algorithm. 
Numerical results on nearly incompressible elasticity and the Stokes system are shown for systems with hundreds of millions of degrees of freedom on high performance computers. DTEND;TZID=Europe/Zurich:20160513T120000 END:VEVENT BEGIN:VEVENT UID:news242@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T210058 DTSTART;TZID=Europe/Zurich:20160429T110000 SUMMARY:Seminar in Numerical Analysis: Andreas Rieder (Karlsruhe Institute of Technology) DESCRIPTION:Electrical impedance tomography is a non-invasive method for imaging the electrical conductivity of an object from voltage measurements on its surface. This inverse problem suffers threefold: it is highly nonlinear\, severely ill-posed\, and highly under-determined. To nevertheless obtain reasonable reconstructions\, maximal information needs to be extracted from the data. We will present and analyze a holistic Newton-type method which addresses all these challenges. Finally\, we demonstrate the performance of this concept numerically for simulated and measured data. X-ALT-DESC:Electrical impedance tomography is a non-invasive method for imaging the electrical conductivity of an object from voltage measurements on its surface. This inverse problem suffers threefold: it is highly nonlinear\, severely ill-posed\, and highly under-determined. To nevertheless obtain reasonable reconstructions\, maximal information needs to be extracted from the data. We will present and analyze a holistic Newton-type method which addresses all these challenges. Finally\, we demonstrate the performance of this concept numerically for simulated and measured data. 
DTEND;TZID=Europe/Zurich:20160429T120000 END:VEVENT BEGIN:VEVENT UID:news246@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T211540 DTSTART;TZID=Europe/Zurich:20151204T110000 SUMMARY:Seminar in Numerical Analysis: Sebastian Ullmann (TU Darmstadt) DESCRIPTION:Surrogate models can be used to decrease the computational cos t for uncertainty quantification in the context of parabolic PDEs with st ochastic data. Projection based reduced-order modeling provides surrogate s which inherit the spatial structure of the solution as well as the unde rlying physics. In my talk I focus on the type of models that is derived by a Galerkin projection onto a proper orthogonal decomposition (POD) of snapshots of the solution.\\r\\nStandard techniques assume that all snaps hots use one and the same spatial mesh. I present a generalization for un steady adaptive finite elements\, where the mesh can change from time ste p to time step and\, in the case of stochastic sampling\, from realizatio n to realization. I will answer the following questions: How can the codi ng effort for creating such a reduced-order model be minimized? How can t he union of all snapshot meshes be avoided? What is the main difference b etween static and adaptive snapshots in the error analysis of Galerkin re duced-order models?\\r\\nAs a numerical test case I consider a two-dimens ional viscous Burgers equation with smooth initial data multiplied by a normally distributed random variable. The results illustrate the converge nce properties with respect to the number of POD basis functions and indi cate possible savings of computation time. X-ALT-DESC:Surrogate models can be used to decrease the computational cost for uncertainty quantification in the context of parabolic PDEs with sto chastic data. Projection based reduced-order modeling provides surrogates which inherit the spatial structure of the solution as well as the under lying physics. 
In my talk I focus on the type of models that is derived by a Galerkin projection onto a proper orthogonal decomposition (POD) of snapshots of the solution.\nStandard techniques assume that all snapshots use one and the same spatial mesh. I present a generalization for unsteady adaptive finite elements\, where the mesh can change from time step to time step and\, in the case of stochastic sampling\, from realization to realization. I will answer the following questions: How can the coding effort for creating such a reduced-order model be minimized? How can the union of all snapshot meshes be avoided? What is the main difference between static and adaptive snapshots in the error analysis of Galerkin reduced-order models?\nAs a numerical test case I consider a two-dimensional viscous Burgers equation with smooth initial data multiplied by a normally distributed random variable. The results illustrate the convergence properties with respect to the number of POD basis functions and indicate possible savings of computation time. DTEND;TZID=Europe/Zurich:20151204T120000 END:VEVENT BEGIN:VEVENT UID:news243@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T210827 DTSTART;TZID=Europe/Zurich:20151201T121000 SUMMARY:Seminar CCCS: Prof. M. Griebel (University of Bonn) DESCRIPTION:Polymeric viscoelastic fluids can be modelled using the Navier-Stokes equations on the macroscopic scale with an additional stress tensor and a higher-dimensional Fokker-Planck equation or a corresponding stochastic PDE on the microscopic scale. Here\, the dimension of the microscopic problem is 3N where N+1 is the number of beads in the underlying spring bead model for viscoelasticity. For the numerical treatment of the overall system\, we couple the stochastic Brownian configuration field method with our fully parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. But due to the microscopic problem\, we directly encounter the curse of dimensionality. 
To this end\, we suggest the so-called dimension-adaptive sparse grid approach. It allows us to deal with moderate-sized subproblems in an adaptive fashion. Furthermore\, all arising subproblems can be treated fully in parallel. This way\, reliable multiscale simulations of viscoelastic flow problems for microscopic models with N>1 become possible for the first time. This is joint work with Alexander Rüttgers from Bonn. X-ALT-DESC:Polymeric viscoelastic fluids can be modelled using the Navier-Stokes equations on the macroscopic scale with an additional stress tensor and a higher-dimensional Fokker-Planck equation or a corresponding stochastic PDE on the microscopic scale. Here\, the dimension of the microscopic problem is 3N where N+1 is the number of beads in the underlying spring bead model for viscoelasticity. For the numerical treatment of the overall system\, we couple the stochastic Brownian configuration field method with our fully parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. But due to the microscopic problem\, we directly encounter the curse of dimensionality. To this end\, we suggest the so-called dimension-adaptive sparse grid approach. It allows us to deal with moderate-sized subproblems in an adaptive fashion. Furthermore\, all arising subproblems can be treated fully in parallel. This way\, reliable multiscale simulations of viscoelastic flow problems for microscopic models with N>1 become possible for the first time. This is joint work with Alexander Rüttgers from Bonn. DTEND;TZID=Europe/Zurich:20151201T131500 END:VEVENT BEGIN:VEVENT UID:news244@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T211049 DTSTART;TZID=Europe/Zurich:20151120T110000 SUMMARY:Seminar in Numerical Analysis: Stefan Sauter (University of Zürich) DESCRIPTION:In this talk we consider an intrinsic approach for the direct computation of the fluxes for problems in potential theory. 
We present a ge neral method for the derivation of intrinsic conforming and non-conforming finite element spaces and appropriate lifting operators for the evaluatio n of the right-hand side from abstract theoretical principles related to t he second Strang Lemma. This intrinsic finite element method is analyzed a nd convergence with optimal order is proved. X-ALT-DESC:In this talk we consider an intrinsic approach for the direct co mputation of the fluxes for problems in potential theory. We present a gen eral method for the derivation of intrinsic conforming and non-conforming finite element spaces and appropriate lifting operators for the evaluation of the right-hand side from abstract theoretical principles related to th e second Strang Lemma. This intrinsic finite element method is analyzed an d convergence with optimal order is proved. DTEND;TZID=Europe/Zurich:20151120T120000 END:VEVENT BEGIN:VEVENT UID:news245@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T211238 DTSTART;TZID=Europe/Zurich:20151106T110000 SUMMARY:Seminar in Numerical Analysis: Sanna Mönkölä (University of Jyv äskylä) DESCRIPTION:A wide range of numerical methods have been used for solving t ime-harmonic wave equations. Typically\, the methods are based on complex -valued formulations leading to large-scale indefinite linear equations. An alternative is to simulate time-dependent equations in time\, until th e time-harmonic solution is reached. However\, this approach suffers from poor convergence\, particularly in the case of large wavenumbers and com plicated domains. We accelerate the convergence rate by employing a contr ollability method. The problem is formulated as a least-squares optimizat ion problem\, which is solved by the conjugate gradient algorithm. The ef ficiency of the method relies on smart discretizations. 
For spatial discr etization we use the spectral element method or the discrete exterior cal culus\, and for time evolution we consider leap-frog style discretization with non-uniform timesteps or higher-order schemes. For constructing spa tially isotropic grids for complex geometries\, we use non-uniform polygo nal structures imitating the close packing in crystal lattices. X-ALT-DESC:A wide range of numerical methods have been used for solving ti me-harmonic wave equations. Typically\, the methods are based on complex- valued formulations leading to large-scale indefinite linear equations. A n alternative is to simulate time-dependent equations in time\, until the time-harmonic solution is reached. However\, this approach suffers from poor convergence\, particularly in the case of large wavenumbers and comp licated domains. We accelerate the convergence rate by employing a contro llability method. The problem is formulated as a least-squares optimizati on problem\, which is solved by the conjugate gradient algorithm. The eff iciency of the method relies on smart discretizations. For spatial discre tization we use the spectral element method or the discrete exterior calc ulus\, and for time evolution we consider leap-frog style discretization with non-uniform timesteps or higher-order schemes. For constructing spat ially isotropic grids for complex geometries\, we use non-uniform polygon al structures imitating the close packing in crystal lattices. DTEND;TZID=Europe/Zurich:20151106T120000 END:VEVENT BEGIN:VEVENT UID:news247@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T211947 DTSTART;TZID=Europe/Zurich:20151016T110000 SUMMARY:Seminar in Numerical Analysis: Wolfgang Hornfeck (DLR Köln) DESCRIPTION:Crystallography\, as seen from a mathematician's viewpoint\, is mainly concerned with "Sphere Packings\, Lattices and Groups" (as is the title of a famous book of Conway and Sloane). 
It is clear\, however\, that there are many other connections between crystallography and mathematics\, ranging from more general applications of graph theory to more special ones such as differential geometry. In my talk I want to present some explorations of applications of uniform distribution theory within a crystallographic context. X-ALT-DESC:Crystallography\, as seen from a mathematician's viewpoint\, is mainly concerned with "Sphere Packings\, Lattices and Groups" (as is the title of a famous book of Conway and Sloane). It is clear\, however\, that there are many other connections between crystallography and mathematics\, ranging from more general applications of graph theory to more special ones such as differential geometry. In my talk I want to present some explorations of applications of uniform distribution theory within a crystallographic context. DTEND;TZID=Europe/Zurich:20151016T120000 END:VEVENT BEGIN:VEVENT UID:news248@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T212124 DTSTART;TZID=Europe/Zurich:20151009T110000 SUMMARY:Seminar in Numerical Analysis: Andrea Barth (University of Stuttgart) DESCRIPTION:Multilevel Monte Carlo methods were introduced to lower the computational complexity for the calculation of\, for instance\, the expectation of a random quantity. More precisely\, in comparison to standard Monte Carlo methods the computational complexity is (asymptotically) equal to the calculation of one sample of the problem on the finest grid used. The price to pay for this increase in efficiency is that the problem needs to be solved not only on one (fine) grid\, but on a hierarchy of discretizations. 
This implies first that the solution has to be represented on all grids and second\, that the variance of the detail (the difference of approximate solutions on two consecutive grids) converges with the refin ement of the grid.\\r\\nIn this talk\, I will give an introduction to mul tilevel Monte Carlo methods in the case when the variance of the detail d oes not converge uniformly. The idea is illustrated by the calculation of the expectation for an elliptic problem with a random multiscale coeffic ient and then extended to approximations of statistical solutions to the Navier-Stokes equations. X-ALT-DESC:Multilevel Monte Carlo methods were introduced to lower the com putational complexity for the calculation of\, for instance\, the expecta tion of a random quantity. More precisely\, in comparison to standard Mon te Carlo methods the computational complexity is (asymptotically) equal t o the calculation of one sample of the problem on the finest grid used. T he price to pay for this increase in efficiency is that the problem needs to be solved not only on one (fine) grid\, but on a hierarchy of discret izations. This implies first that the solution has to be represented on a ll grids and second\, that the variance of the detail (the difference of approximate solutions on two consecutive grids) converges with the refine ment of the grid.\nIn this talk\, I will give an introduction to multilev el Monte Carlo methods in the case when the variance of the detail does n ot converge uniformly. The idea is illustrated by the calculation of the expectation for an elliptic problem with a random multiscale coefficient and then extended to approximations of statistical solutions to the Navi er-Stokes equations. 
DTEND;TZID=Europe/Zurich:20151009T120000 END:VEVENT BEGIN:VEVENT UID:news249@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T212633 DTSTART;TZID=Europe/Zurich:20150925T110000 SUMMARY:Seminar in Numerical Analysis: Francois Bouchut (Université Paris- Est) DESCRIPTION:We study approximations by conforming methods of the solution t o variational inequalities which arise in the context of inviscid incompre ssible Bingham type non-Newtonian fluid flows and of the total variation f low problem.\\r\\nIn the general context of a convex lower semi-continuous functional on a Hilbert space\, we prove the convergence of time implicit space conforming approximations\, without viscosity and for non-smooth da ta. Then we introduce a general class of total variation functionals\, for which we can apply the regularization method. We consider the time implic it regularized\, linearized or not\, algorithms\, and prove their converge nce for general total variation functionals. X-ALT-DESC:We study approximations by conforming methods of the solution to variational inequalities which arise in the context of inviscid incompres sible Bingham type non-Newtonian fluid flows and of the total variation fl ow problem.\nIn the general context of a convex lower semi-continuous func tional on a Hilbert space\, we prove the convergence of time implicit spac e conforming approximations\, without viscosity and for non-smooth data. T hen we introduce a general class of total variation functionals\, for whic h we can apply the regularization method. We consider the time implicit re gularized\, linearized or not\, algorithms\, and prove their convergence f or general total variation functionals. 
DTEND;TZID=Europe/Zurich:20150925T120000 END:VEVENT BEGIN:VEVENT UID:news250@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T222212 DTSTART;TZID=Europe/Zurich:20150901T110000 SUMMARY:Seminar in Numerical Analysis: Abdul-Lateef Haji-Ali (King Abdullah University) DESCRIPTION:I discuss using single-level and multilevel Monte Carlo methods to compute quantities of interest of a stochastic particle system in the mean-field. In this context\, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover\, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity.\\r\\nIn 2012\, my Master thesis developed different versions of Multilevel Monte Carlo (MLMC) for particle systems\, both with respect to time steps and the number of particles\, and proposed using particle antithetic estimators for MLMC. In that thesis\, I showed moderate savings of MLMC compared to Monte Carlo. In this talk\, I recall and expand on these results\, emphasizing the importance of antithetic estimators in stochastic particle systems. I will finally conclude by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates. X-ALT-DESC:I discuss using single-level and multilevel Monte Carlo methods to compute quantities of interest of a stochastic particle system in the mean-field. In this context\, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover\, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity.\nIn 2012\, my Master thesis developed different versions of Multilevel Monte Carlo (MLMC) for particle systems\, both with respect to time steps and the number of particles\, and proposed using particle antithetic estimators for MLMC. In that thesis\, I showed moderate savings of MLMC compared to Monte Carlo. 
In this talk\, I recall and expand on these results\, emphasizing the importance of antith etic estimators in stochastic particle systems. I will finally conclude b y proposing the use of our recent Multi-index Monte Carlo method to obtai n improved convergence rates. DTEND;TZID=Europe/Zurich:20150901T120000 END:VEVENT BEGIN:VEVENT UID:news251@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T213252 DTSTART;TZID=Europe/Zurich:20150529T110000 SUMMARY:Seminar in Numerical Analysis: Victorita Dolean (University of Nice ) DESCRIPTION:For linear problems\, domain decomposition methods can be used directly as iterative solvers\, but also as preconditioners for Krylov me thods. In practice\, Krylov acceleration is almost always used\, since th e Krylov method finds a much better residual polynomial than the stationa ry iteration\, and thus converges much faster. We show in this work that also for non-linear problems\, domain decomposition methods can either be used directly as iterative solvers\, or one can use them as preconditio ners for Newton’s method. For the concrete case of the parallel Schwarz method\, we show that we obtain a preconditioner we call RASPEN (Restric ted Additive Schwarz Preconditioned Exact Newton) which is similar to ASP IN (Additive Schwarz Preconditioned Inexact Newton)\, but with all compon ents directly defined by the iterative method. This has the advantage tha t RASPEN already converges when used as an iterative solver\, in contrast to ASPIN\, and we thus get a substantially better preconditioner for New ton’s method. We illustrate our findings with numerical results on the Forchheimer equation and a non-linear diffusion problem. X-ALT-DESC:For linear problems\, domain decomposition methods can be used d irectly as iterative solvers\, but also as preconditioners for Krylov met hods. 
In practice\, Krylov acceleration is almost always used\, since the Krylov method finds a much better residual polynomial than the stationar y iteration\, and thus converges much faster. We show in this work that also for non-linear problems\, domain decomposition methods can either be used directly as iterative solvers\, or one can use them as precondition ers for Newton’s method. For the concrete case of the parallel Schwarz method\, we show that we obtain a preconditioner we call RASPEN (Restrict ed Additive Schwarz Preconditioned Exact Newton) which is similar to ASPI N (Additive Schwarz Preconditioned Inexact Newton)\, but with all compone nts directly defined by the iterative method. This has the advantage that RASPEN already converges when used as an iterative solver\, in contrast to ASPIN\, and we thus get a substantially better preconditioner for Newt on’s method. We illustrate our findings with numerical results on the F orchheimer equation and a non-linear diffusion problem. DTEND;TZID=Europe/Zurich:20150529T120000 END:VEVENT BEGIN:VEVENT UID:news252@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T213451 DTSTART;TZID=Europe/Zurich:20150522T110000 SUMMARY:Seminar in Numerical Analysis: Stéphane Lanteri (Inria Sophia Anti polis) DESCRIPTION:We present a discontinuous finite element (discontinuous Galerk in) time-domain solver for the numerical simulation of the interaction of light with nanometer scale structures. The method relies on a compact s tencil high order interpolation of the electromagnetic field components within each cell of an unstructured tetrahedral mesh. This piecewise p olynomial numerical approximation is allowed to be discontinuous from one mesh cell to another\, and the consistency of the global approximation i s obtained thanks to the definition of appropriate numerical traces of th e fields on a face shared by two neighboring cells. 
Time integration is achieved using an explicit scheme and no global mass matrix inversion is required to advance the solution at each time step. Moreover\, the resulting time-domain solver is particularly well adapted to parallel computing. The proposed method is an extension of the method that we initially proposed in [1] for the simulation of electromagnetic wave propagation in nondispersive heterogeneous media at microwave frequencies.\\r\\nThis is a joint work with Claire Scheid and Jonathan Viquerat.\\r\\n[1] Fezoui\, L.\, S. Lanteri\, S. Lohrengel\, and S. Piperno. Convergence and stability of a discontinuous Galerkin time-domain method for the 3D heterogeneous Maxwell equations on unstructured meshes. ESAIM: Math. Model. Numer. Anal.\, Vol. 39\, No. 6\, 1149-1176\, 2005. X-ALT-DESC:We present a discontinuous finite element (discontinuous Galerkin) time-domain solver for the numerical simulation of the interaction of light with nanometer scale structures. The method relies on a compact stencil high order interpolation of the electromagnetic field components within each cell of an unstructured tetrahedral mesh. This piecewise polynomial numerical approximation is allowed to be discontinuous from one mesh cell to another\, and the consistency of the global approximation is obtained thanks to the definition of appropriate numerical traces of the fields on a face shared by two neighboring cells. Time integration is achieved using an explicit scheme and no global mass matrix inversion is required to advance the solution at each time step. Moreover\, the resulting time-domain solver is particularly well adapted to parallel computing. The proposed method is an extension of the method that we initially proposed in [1] for the simulation of electromagnetic wave propagation in nondispersive heterogeneous media at microwave frequencies.\nThis is a joint work with Claire Scheid and Jonathan Viquerat.\n[1] Fezoui\, L.\, S. Lanteri\, S. 
Lohrengel\, and S. Piperno. Convergenceand stability of a discontinuous Galerkin time-domain method for the3D \; heterogeneou s Maxwell \; equations on \; unstructured meshes. ESAIM:Math. Mod el. Numer. Anal.\, Vol. 39\, No. 6\, 1149-1176\, 2005. DTEND;TZID=Europe/Zurich:20150522T120000 END:VEVENT BEGIN:VEVENT UID:news253@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T213628 DTSTART;TZID=Europe/Zurich:20150508T110000 SUMMARY:Seminar in Numerical Analysis: Christian Stohrer (ENSTA ParisTech) DESCRIPTION:Electromagnetic phenomena can be modeled using Maxwell's equati ons. In particular we are interested in harmonic electromagnetic waves p ropagating through a highly oscillatory material such as e.g. fiber reinf orced plastic. The permittivity and the permeability of such materials va ry on a microscopic length scale. The use of standard edge finite element s is of limited profit\, since the microscopic structure requires very re fined meshes to provide satisfying approximations. This may easily result in computational costs difficult to manage. However\, if one is only int erested in the effective behavior of the solution and not in the microsco pic details\, homogenization techniques can be used to overcome these dif ficulties. In this talk we review first the results of analytical homogen ization results for Maxwell's equations. The goal of this theory is to re place the oscillatory material with an effective one\, such that the over all behavior of the solution remains unchanged. The solution of the arisi ng equations can be solved with standard numerical methods because the ef fective material depends no longer on the micro scale. In the second part of the talk we propose a multiscale scheme following the framework of th e finite element heterogeneous multiscale method (FE-HMM). Contrary to th e discretization of the analytically homogenized equation\, no effective coefficient must be precomputed beforehand. 
We prove that the FE-HMM solu tion converges to the homogenized one for periodic materials and show som e numerical experiments.\\r\\nThis is a joint work with Sonia Fliss and P atrick Ciarlet. X-ALT-DESC:Electromagnetic phenomena can be modeled using Maxwell's equatio ns. In particular we are interested in harmonic electromagnetic waves pr opagating through a highly oscillatory material such as e.g. fiber reinfo rced plastic. The permittivity and the permeability of such materials var y on a microscopic length scale. The use of standard edge finite elements is of limited profit\, since the microscopic structure requires very ref ined meshes to provide satisfying approximations. This may easily result in computational costs difficult to manage. However\, if one is only inte rested in the effective behavior of the solution and not in the microscop ic details\, homogenization techniques can be used to overcome these diff iculties. In this talk we review first the results of analytical homogeni zation results for Maxwell's equations. The goal of this theory is to rep lace the oscillatory material with an effective one\, such that the overa ll behavior of the solution remains unchanged. The solution of the arisin g equations can be solved with standard numerical methods because the eff ective material depends no longer on the micro scale. In the second part of the talk we propose a multiscale scheme following the framework of the finite element heterogeneous multiscale method (FE-HMM). Contrary to the discretization of the analytically homogenized equation\, no effective c oefficient must be precomputed beforehand. We prove that the FE-HMM solut ion converges to the homogenized one for periodic materials and show some numerical experiments.\nThis is a joint work with Sonia Fliss and Patric k Ciarlet. 
DTEND;TZID=Europe/Zurich:20150508T120000
END:VEVENT
BEGIN:VEVENT
UID:news254@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T213831
DTSTART;TZID=Europe/Zurich:20150424T110000
SUMMARY:Seminar in Numerical Analysis: David Cohen (Umeå University)
DESCRIPTION:A fully discrete approximation of one-dimensional nonlinear stochastic wave equations driven by multiplicative noise is presented. A standard finite difference approximation is used in space and a stochastic trigonometric method for the temporal approximation. This explicit time integrator allows for error bounds uniformly in time and space. Moreover\, uniform almost sure convergence of the numerical solution is proved.\\r\\nThis is a joint work with Lluís Quer-Sardanyons\, Universitat Autònoma de Barcelona.
DTEND;TZID=Europe/Zurich:20150424T120000
END:VEVENT
BEGIN:VEVENT
UID:news255@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T214015
DTSTART;TZID=Europe/Zurich:20150320T110000
SUMMARY:Seminar in Numerical Analysis: Roland Griesmaier (Universität Würzburg)
DESCRIPTION:One of the main themes of inverse scattering theory for time-harmonic acoustic or electromagnetic waves is to determine information about unknown objects or inhomogeneous media from observations of scattered waves away from these objects or outside these media. Such inverse problems are typically non-linear and severely ill-posed.\\r\\nBesides standard regularization methods\, which are often iterative\, a completely different methodology - so-called qualitative reconstruction methods - has attracted a lot of interest recently. These algorithms recover specific qualitative properties of scattering objects or anomalies inside a medium in a reliable and fast way. They avoid the simulation of forward models and need no a priori information on physical or topological properties of the unknown objects or inhomogeneities to be reconstructed. One of the drawbacks of currently available qualitative reconstruction methods is the large amount of data required by most of these algorithms. It is usually assumed that measurement data of waves scattered by the unknown objects corresponding to infinitely many primary waves are given - at least theoretically.\\r\\nWe consider the inverse source problem for the Helmholtz equation as a means to provide a qualitative inversion algorithm for inverse scattering problems for acoustic or electromagnetic waves with a single excitation only. Probing an ensemble of obstacles by just one primary wave at a fixed frequency and measuring the far field of the corresponding scattered wave\, the inverse scattering problem that we are interested in consists in reconstructing the support of the scatterers. To this end we rewrite the scattering problem as a source problem and apply two recently developed algorithms - the inverse Radon approximation and the convex scattering support - to recover information on the support of the corresponding source. The first method builds upon a windowed Fourier transform of the far field data followed by a filtered backprojection\, and although this procedure yields a rather blurry reconstruction\, it can be applied to identify the number and the positions of well separated source components. This information is then utilized to split the far field into individual far field patterns radiated by each of the well separated source components using a Galerkin scheme. Finally we compute the convex scattering supports associated to the individual source components as a reconstruction of the individual scatterers. We discuss this algorithm and present numerical results.
DTEND;TZID=Europe/Zurich:20150320T120000
END:VEVENT
BEGIN:VEVENT
UID:news256@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T214209
DTSTART;TZID=Europe/Zurich:20150220T110000
SUMMARY:Seminar in Numerical Analysis: Timo Betcke (University College London)
DESCRIPTION:The BEM++ boundary element library is a software project that was started in 2010 at University College London to provide an open-source general purpose BEM library for a variety of application areas.
In this talk we introduce the underlying design concepts of the library and discuss several applications\, including high-frequency preconditioning for ultrasound applications\, the solution of time-domain problems via convolution quadrature\, light scattering from ice crystals\, and the solution of coupled FEM/BEM problems with FEniCS and BEM++.
DTEND;TZID=Europe/Zurich:20150220T120000
END:VEVENT
BEGIN:VEVENT
UID:news257@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215147
DTSTART;TZID=Europe/Zurich:20141219T110000
SUMMARY:Seminar in Numerical Analysis: Andrea Barth (Universität Stuttgart)
DESCRIPTION:Multilevel Monte Carlo methods for multiscale problems
DTEND;TZID=Europe/Zurich:20141219T120000
END:VEVENT
BEGIN:VEVENT
UID:news258@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215328
DTSTART;TZID=Europe/Zurich:20141212T110000
SUMMARY:Seminar in Numerical Analysis: Olaf Schenk (Università della Svizzera italiana)
DESCRIPTION:We will review the state-of-the-art techniques in the parallel direct solution of linear systems of equations and present several recent new research directions. This includes (i) fast methods for evaluating certain selected elements of a matrix function that can be used for solving the Kohn-Sham equation without explicit diagonalization and (ii) stochastic optimization problems under uncertainty from electrical power grid systems. Several algorithmic and performance engineering advances are discussed to solve the underlying sparse linear algebra problems. The new developments include novel incomplete augmented multicore sparse factorizations\, multicore- and GPU-based dense matrix implementations\, and communication-avoiding Krylov solvers. We also improve the interprocess communication on Cray systems to solve e.g. 24-hour horizon power grid problems from electrical power grid systems of realistic size with up to 1.95 billion decision variables and 1.94 billion constraints. Full-scale results are reported on Cray XC30 and BG/Q\, where we observe very good parallel efficiencies and solution times within an operationally defined time interval. To our knowledge\, "real-time"-compatible performance on a broad range of architectures for this class of problems has not been possible prior to the present work.
DTEND;TZID=Europe/Zurich:20141212T120000
END:VEVENT
BEGIN:VEVENT
UID:news259@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215544
DTSTART;TZID=Europe/Zurich:20141205T110000
SUMMARY:Seminar in Numerical Analysis: Nakul Chitnis (Swiss Tropical Health Institute\, Basel)
DESCRIPTION:Malaria is an infectious disease\, spread through mosquito bites\, that is responsible for substantial morbidity and mortality around the world. In the last decade\, through increased funding and a global scale-up of control interventions that target mosquitoes\, significant reductions in transmission and disease burden have been achieved. However\, these gains in public health are faced with the twin threat of a decrease in funding for malaria control and the development of resistance (physiological and behavioural) in mosquitoes.\\r\\nMathematical models can help to determine more efficient combinations of existing and new interventions in reducing malaria transmission and delaying the spread of resistance. We present difference equation models of mosquito population dynamics and malaria in mosquitoes\; and ordinary differential equation models of mosquito movement and population dynamics.
We analyse these models to provide threshold conditions for the survival of mosquitoes and show the existence of invariant positive states\; and run numerical simulations to provide quantitative comparisons of interventions that target mosquitoes with varying levels of resistance.
DTEND;TZID=Europe/Zurich:20141205T120000
END:VEVENT
BEGIN:VEVENT
UID:news260@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215716
DTSTART;TZID=Europe/Zurich:20141121T110000
SUMMARY:Seminar in Numerical Analysis: Armin Iske (Universität Hamburg)
DESCRIPTION:This talk discusses the utility of meshfree kernel techniques in adaptive finite volume particle methods (FVPM). To this end\, we give ten good reasons in favour of using kernel-based reconstructions in the recovery step of FVPM\, where our discussion addresses relevant computational aspects concerning numerical stability and accuracy\, as well as more specific points concerning efficient implementation. Special emphasis is finally placed on more recent advances in the construction of adaptive FVPM\, where WENO reconstructions by polyharmonic spline kernels are used in combination with ADER flux evaluations to obtain high order methods for hyperbolic problems.
DTEND;TZID=Europe/Zurich:20141121T120000
END:VEVENT
BEGIN:VEVENT
UID:news261@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215846
DTSTART;TZID=Europe/Zurich:20141114T110000
SUMMARY:Seminar in Numerical Analysis: Wolfgang Wendland (Universität Stuttgart)
DESCRIPTION:The minimal energy problem for nonnegative charges on a closed surface Γ in R^3 goes back to C.F. Gauss in 1839. The corresponding Riesz kernel is then weakly singular on Γ. If one considers double layer potentials with dipole charges on Γ\, the minimal energy problem is then based on hypersingular Riesz potentials in the form of Hadamard’s partie finie integral operators defining pseudodifferential operators of positive degree on smooth Γ.
Existence and uniqueness results for the minimal energy problem and a corresponding boundary element method will be presented.
DTEND;TZID=Europe/Zurich:20141114T120000
END:VEVENT
BEGIN:VEVENT
UID:news262@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T220058
DTSTART;TZID=Europe/Zurich:20141107T110000
SUMMARY:Seminar in Numerical Analysis: Frédéric Hecht (Université Pierre-et-Marie Curie Paris 6)
DESCRIPTION:FreeFem++ is a free software for the numerical resolution of partial differential equations using the finite element method. After a short presentation of the possibilities of the software\, we will see through examples how to approach PDEs with mesh adaptation and parallel computing. These examples include:\\r\\n- Piezoelectric problems\\r\\n- Thermal problems with thermal resistances\\r\\n- Elasticity problems\\r\\n- Problems of fluid mechanics like incompressible Navier-Stokes\\r\\n- Problem of melting and/or solidification of the ice (Boussinesq with specific heat)
involve surface differential operators\, as for boundary conditions of Wentzell's type\, and depend on frequency\, conductivity\, sheet thickness and sheet geometry (e.g. curvature). These parameters may take small or large values and may lead to singularly perturbed boundary integral equations.\nWe will introduce related boundary element methods in two and three dimensions and analyse well-posedness and discretisation error depending on the model parameters. Numerical experiments confirm the convergence order of the discretisation error of the proposed BEM and show that the discretisation error behaves\, for smooth enough sheets\, equivalently to the exact solution when varying the model parameters. The results obtained for the eddy current model\, for which a Poisson equation has to be solved outside the mid-line\, can be transferred to the Helmholtz equation and to transmission conditions arising from other models.
DTEND;TZID=Europe/Zurich:20140523T120000
END:VEVENT
BEGIN:VEVENT
UID:news266@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T221031
DTSTART;TZID=Europe/Zurich:20140516T110000
SUMMARY:Seminar in Numerical Analysis: Valeriu Savcenco (Shell Global Solutions / TU Eindhoven)
DESCRIPTION:Multirate methods are highly efficient for large-scale ODE and PDE problems with widely different time scales. Multirate methods enable one to use large time steps for slowly varying spatial regions\, and small steps for rapidly varying spatial regions. Multirate schemes for conservation laws seem to come in two flavors: schemes that are locally inconsistent\, and schemes that lack mass-conservation. In this presentation these two defects will be discussed for one-dimensional conservation laws. Particular attention will be given to monotonicity properties of the multirate schemes\, such as maximum principles and the total variation diminishing (TVD) property. The study of these properties will be done within the framework of partitioned Runge-Kutta methods. It will also be seen that the incompatibility of consistency and mass-conservation holds for genuine multirate schemes\, but not for general partitioned methods.
DTEND;TZID=Europe/Zurich:20140516T120000
END:VEVENT
BEGIN:VEVENT
UID:news267@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T221204
DTSTART;TZID=Europe/Zurich:20140509T110000
SUMMARY:Seminar in Numerical Analysis: Tucker Carrington (Queen's University Kingston)
DESCRIPTION:To compute the vibrational spectrum of a molecule without neglecting coupling and anharmonicity one must calculate eigenvalues and eigenvectors of a large matrix representing the Hamiltonian in an appropriate basis. Iterative algorithms (e.g. Lanczos\, Davidson\, Filter Diagonalisation) enable one to compute eigenvalues and eigenvectors. It is easy to efficiently implement iterative algorithms when a direct product basis is used. However\, for a molecule with more than four atoms\, a direct product basis set is large and it is better to reduce the number of basis functions required to obtain converged eigenvalues by pruning. This is done without jeopardizing the efficiency of the matrix-vector products required by all iterative algorithms. In this talk\, I shall present new basis-size reduction ideas that are compatible with efficient matrix-vector products. The basis is designed to include the product basis functions coupled by the largest terms in the potential and important for computing low-lying vibrational levels. To solve the vibrational Schrödinger equation without approximating the potential\, one must use quadrature to compute potential matrix elements. When using iterative methods in conjunction with quadrature\, it is important to evaluate matrix-vector products by doing sums sequentially. This is only possible if both the basis and the grid have structure.
Although it is designed to include only functions coupled by the largest terms in the potential\, the basis we use and also the (Smolyak-type) quadrature for doing integrals with the basis have enough structure to make efficient matrix-vector products possible. Using the quadrature methods of this paper\, we evaluate the accuracy of calculations made by making multimode approximations.
DTEND;TZID=Europe/Zurich:20140509T120000
END:VEVENT
BEGIN:VEVENT
UID:news268@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T221403
DTSTART;TZID=Europe/Zurich:20140307T110000
SUMMARY:Seminar in Numerical Analysis: Fabio Nobile (EPF Lausanne)
DESCRIPTION:We consider the Darcy equation to describe the flow in a saturated porous medium. The permeability of the medium is described as a log-normal random field\, possibly conditioned on available direct measurements\, to account for its relatively large uncertainty and heterogeneity.\\r\\nWe consider perturbation methods based on Taylor expansion of the solution of the PDE around the nominal permeability value. Successive higher order corrections to the statistical moments\, such as the pointwise mean and covariance of the solution\, can be obtained recursively from the computation of high order correlation functions which\, in turn\, solve high-dimensional problems. To overcome the curse of dimensionality in computing and storing such high order correlations\, we adopt a low-rank format\, namely the so-called tensor-train (TT) format.\\r\\nWe show that\, on the one hand\, the Taylor series does not converge globally\, so that it only makes sense to compute corrections up to a maximum critical order\, beyond which the accuracy of the solution deteriorates instead of improving. On the other hand\, we show on some numerical test cases the effectiveness of the proposed approach in the case of a moderately small variance of the log-normal permeability field.
DTEND;TZID=Europe/Zurich:20140307T120000
END:VEVENT
BEGIN:VEVENT
UID:news269@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T222002
DTSTART;TZID=Europe/Zurich:20140221T110000
SUMMARY:Seminar in Numerical Analysis: Marc Dambrine (Université de Pau)
DESCRIPTION:I am interested in the influence of small geometrical perturbations on the solution of elliptic problems. The cases of a single inclusion or several well-separated inclusions have been deeply studied. I will first recall here the techniques to construct an asymptotic expansion in that case. Then I will consider moderately close inclusions\, i.e.
the distance between the inclusions tends to zero more slowly than their characteristic size\, and provide a complete asymptotic description of the solution of the Laplace equation. I will also present numerical simulations based on the multiscale superposition method derived from the first-order expansion.\\r\\nI will explain how some mathematical questions about the loss of coercivity arise from the computation of the profiles appearing in the expansion. Ventcel boundary conditions are second-order differential conditions that appear when looking for a transparent boundary condition for an exterior boundary value problem in planar linear elasticity. The goal is to bound the infinite domain by a large “box” to make numerical approximations possible. Like Robin boundary conditions\, they lead to well-posed variational problems under a sign condition on a coefficient. Nevertheless\, situations where this condition is violated have appeared in several works. The well-posedness of such problems was still open. I will present\, in the generic case\, an existence and uniqueness result for the solution of the Ventcel boundary value problem without the sign condition. Then\, I will consider perforated geometries and give conditions to remove the genericity restriction.
DTEND;TZID=Europe/Zurich:20140221T120000
END:VEVENT
BEGIN:VEVENT
UID:news270@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T230659
DTSTART;TZID=Europe/Zurich:20131206T110000
SUMMARY:Seminar in Numerical Analysis: Mario S. Mommer (Universität Heidelberg)
DESCRIPTION:Optimum experimental design (OED) is the problem of finding setups for an experiment in such a way that the collected data allows for optimally accurate estimation of the parameters of interest - taking into account an experimental budget. In practice\, the parameters are only approximately known as a matter of course\, while at the same time\, solving an OED problem is in a way equivalent to magnifying the dependence of the system response on these quantities.
As a consequence\, designs computed on the basis of a "good guess" of the parameters may underperform dramatically in practice\, especially for problems involving nonlinear models.\\r\\nIn this talk\, we consider robust formulations for optimum experimental design that work under significant uncertainty. Our focus is on problem settings in which the model is described by differential equations of some type that are solved numerically. Our approach is based on a semi-infinite programming formulation in which we exploit additional problem structure\, together with sparse grids\, to ensure tractability. The talk includes numerical experiments to illustrate and compare the effectiveness of the approaches.
DTEND;TZID=Europe/Zurich:20131206T120000
END:VEVENT
BEGIN:VEVENT
UID:news271@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T230846
DTSTART;TZID=Europe/Zurich:20131122T110000
SUMMARY:Seminar in Numerical Analysis: Armin Lechleiter (Universität Bremen)
DESCRIPTION:It is well known that the interior eigenvalues of the Laplacian in a bounded domain share connections to scattering problems in the exterior of this domain. For instance\, certain boundary integral equations for exterior scattering problems fail at interior eigenvalues.\\r\\nSimilar connections also exist for inverse exterior scattering problems - for instance\, if zero is an eigenvalue of the far-field operator at a fixed wave number\, then the squared wave number is an interior eigenvalue. Although it is in general false that interior eigenvalues correspond to zero being an eigenvalue of the far-field operator\, one can prove a rather direct characterization of interior eigenvalues via the behavior of the phases of the eigenvalues of the far-field operator.\\r\\nIn this talk\, we present this characterization and sketch its proof for Dirichlet\, Neumann\, and Robin boundary conditions. Then we extend this theory to impenetrable scattering objects and show via a couple of numerical examples that one can indeed use this characterization to compute interior eigenvalues of unknown scattering objects from the spectrum of their far-field operators.\\r\\nOur motivation to study this so-called inside-outside duality comes from a paper by Eckmann and Pillet (1995). This is joint work with Andreas Kirsch (KIT) and Stefan Peters (University of Bremen).
DTEND;TZID=Europe/Zurich:20131122T120000
END:VEVENT
BEGIN:VEVENT
UID:news272@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231105
DTSTART;TZID=Europe/Zurich:20131115T110000
SUMMARY:Seminar in Numerical Analysis: Wim Vanroose (University of Antwerpen)
DESCRIPTION:Many imaging systems\, such as phase contrast tomography\, internal reflection microscopy\, or reaction microscopes\, measure the far or near field of the scattered wave. We present an efficient and scalable method to calculate the far and near field of a Helmholtz equation describing a given object. The far- and near-field solution can be written as an integral of the Green's function multiplied by the solution of the Helmholtz equation with absorbing boundary conditions.
By deforming the contour of the integral\, we only require the numerical solution of the Helmholtz equation along a complex-valued contour. We show that the Helmholtz equation along this contour is equivalent to a complex-shifted Laplacian that can be solved efficiently by multigrid. This results in a scalable method to calculate the far- and near-field integral. We discuss this numerical method\, show its applicability and scalability\, and discuss its limitations.
DTEND;TZID=Europe/Zurich:20131115T120000
END:VEVENT
BEGIN:VEVENT
UID:news273@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231305
DTSTART;TZID=Europe/Zurich:20131025T110000
SUMMARY:Seminar in Numerical Analysis: Ludovic Métivier (Université de Grenoble)
DESCRIPTION:Full Waveform Inversion is an efficient seismic imaging technique for quantitative estimation of subsurface parameters such as the P-wave and S-wave velocities\, density\, attenuation\, and anisotropy parameters.
The method is based on the iterative minimization of the misfit between observed and calculated data. During the past ten years\, the method has been successfully applied to real data in 2D acoustic and elastic configurations\, as well as in the 3D acoustic configuration. The inverse Hessian operator plays an important role in the reconstruction process. In particular\, this operator should correct for illumination deficits and frequency band-limited effects\, and help to restore the correct amplitude of less illuminated parameters. In this presentation\, we will focus on the methods we have to approximate this operator\, from preconditioned gradient-based methods\, to quasi-Newton methods (l-BFGS) and truncated Newton methods. We will present results obtained on 2D synthetic and real data for the reconstruction of the P-wave velocity which illustrate the importance of the approximation of this operator. We will also present a simple illustration of the effect of the inverse Hessian operator in a multi-parameter framework. In this context\, the operator helps to mitigate the trade-off between different classes of parameters.
DTEND;TZID=Europe/Zurich:20131025T120000
END:VEVENT
BEGIN:VEVENT
UID:news274@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231504
DTSTART;TZID=Europe/Zurich:20131011T110000
SUMMARY:Seminar in Numerical Analysis: Maya de Buhan (Université Paris Descartes)
DESCRIPTION:In this talk\, we propose a new method to solve the following inverse problem: we aim at reconstructing\, from boundary measurements\, the location\, the shape\, and the wave propagation speed of an unknown obstacle surrounded by a medium whose properties are known. Our strategy combines two methods recently developed by the authors:\\r\\n1 - the Time-Reversed Absorbing Condition method: It combines time reversal techniques and absorbing boundary conditions to reconstruct and regularize the signal in a truncated domain that encloses the obstacle. This enables us to reduce the size of the computational domain where we solve the inverse problem\, now from virtual internal measurements.\\r\\n2 - the Adaptive Inversion method: It is an inversion method which looks for the value of the unknown wave propagation speed in a basis composed of eigenvectors of an elliptic operator. Then\, it uses an iterative process to adapt the mesh and the basis and improve the reconstruction.\\r\\nWe present several numerical examples in two dimensions to illustrate the efficiency of the combination of both methods.
In particular\, our strategy allows us (a) to reduce the computational cost\, (b) to stabilize the inverse problem\, and (c) to improve the precision of the results.
DTEND;TZID=Europe/Zurich:20131011T120000
END:VEVENT
BEGIN:VEVENT
UID:news275@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231654
DTSTART;TZID=Europe/Zurich:20130524T110000
SUMMARY:Seminar in Numerical Analysis: Angela Kunoth (Universität Paderborn)
DESCRIPTION:Optimization problems constrained by linear parabolic evolution PDEs are challenging from a computational point of view\, as they require to solve a system of PDEs coupled globally in time and space.
For their solution\, conventional time-stepping methods quickly reach their limitations due to the enormous demand for storage. For such a coupled PDE system\, adaptive methods which aim at distributing the available degrees of freedom in an a-posteriori fashion to capture singularities in the data or domain\, with respect to both space and time\, appear to be most promising. Employing wavelet schemes for full weak space-time formulations of the parabolic PDEs\, we can prove convergence and optimal complexity.\\r\\nYet another level of challenge are control problems constrained by evolution PDEs involving stochastic or countably infinitely many parametric coefficients: for each instance of the parameters\, this requires the solution of the complete control problem.\\r\\nOur method of attack is based on the following new theoretical paradigm. It is first shown for control problems constrained by evolution PDEs\, formulated in full weak space-time form\, that state\, co-state\, and control are analytic as functions depending on these parameters. Moreover\, we establish that these functions allow expansions in terms of sparse tensorized generalized polynomial chaos (gpc) bases. Their sparsity is quantified in terms of p-summability of the coefficient sequences for some 0 < p <= 1. Resulting a-priori estimates establish the existence of an index set\, allowing for concurrent approximations of state\, co-state\, and control for which the gpc approximations attain rates of best N-term approximation. These findings serve as the analytical foundation for the development of corresponding sparse realizations in terms of deterministic adaptive Galerkin approximations of state\, co-state\, and control on the entire\, possibly infinite-dimensional parameter space.\\r\\nThe results were obtained with Max Gunzburger (Florida State University) and with Christoph Schwab (ETH Zuerich).
DTEND;TZID=Europe/Zurich:20130524T120000
END:VEVENT
BEGIN:VEVENT
UID:news276@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231855
DTSTART;TZID=Europe/Zurich:20130503T110000
SUMMARY:Seminar in Numerical Analysis: Rüdiger Schultz (Universität Duisburg-Essen)
DESCRIPTION:This talk aims at demonstrating how concepts and techniques which are well established in operations research may serve as blueprints for approaching shape optimization with linearized elasticity and stochastic loading. Stochastic shape optimization problems are considered from a two-stage viewpoint: In a first stage\, without anticipation of the random loading\, the shape has to be fixed. After realization of the load\, the displacement obtained from solving the elasticity boundary value problem then may be seen as a second-stage (or recourse) action\, and the variational problem of the weak formulation as a second-stage optimization problem.\\r\\nAt this point\, there is a perfect match with two-stage stochastic programming: after having taken a non-anticipative decision in the first stage\, and having observed the random data\, a well-defined second-stage problem remains and is solved to optimality. Suitable objective functions complete the formal descriptions of the models\, for instance\, costs in the stochastic-programming setting and compliance or tracking functionals in shape optimization.\\r\\nStochastic programming now offers a wide collection of models to address shape optimization under uncertainty.
This starts with risk-neutral models\, is continued by mean-risk optimization involving different risk measures\, and will finally lead to analogues in shape optimization of decision problems with stochastic-order (or dominance) constraints.\\r\\nIn the talk we will present these models\, discuss solution methods\, and report some computational tests.
DTEND;TZID=Europe/Zurich:20130503T120000
END:VEVENT
BEGIN:VEVENT
UID:news277@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232119
DTSTART;TZID=Europe/Zurich:20130419T110000
SUMMARY:Seminar in Numerical Analysis: Wolfgang Wendland (Universität Stuttgart)
DESCRIPTION:As a special case of nonlinear Riemann--Hilbert problems with closed boundary data in multiply connected domains\, here a doubly connected domain like an annulus is considered.\\r\\nThe nonlinear boundary conditions for the desired holomorphic solutions lead to nonlinear singular integral equations on the boundary which belong to the class of quasiruled Fredholm maps defined on quasicylindrical domains in appropriate separable Banach spaces.\\r\\nThe closed boundary data give a priori estimates for the modulus of solutions\, which in turn implies a priori estimates in the Sobolev spaces considered here. For this class of problems\, the Shnirelman--Efendiev degree of mappings can be defined\, which allows one to investigate the existence of solutions if the boundary conditions satisfy some topological assumptions.\\r\\nThe lifting of the boundary value problem via holomorphic transformation onto the universal covering of the unit disc allows one to construct a homotopic deformation of the lifted nonlinear singular integral equations to a uniquely solvable case\, which implies that the degree of mapping is 1\, and the existence of (in fact at least two) solutions follows.\\r\\nIf the nonlinear integral equations on the boundary are approximated by trigonometric point collocation\, then the theory also implies that approximate solutions exist and converge asymptotically.
DTEND;TZID=Europe/Zurich:20130419T120000
END:VEVENT
BEGIN:VEVENT
UID:news278@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232307
DTSTART;TZID=Europe/Zurich:20130412T110000
SUMMARY:Seminar in Numerical Analysis: Florian Loos (Universität der Bundeswehr München)
DESCRIPTION:The number of electrical devices in modern cars supplied by high currents grows continuously. In order to avoid hot spot generation and overheating on the one hand\, but to save weight and material on the other hand\, electrical connecting structures have to be dimensioned appropriately.
The heat transfer in current-carrying multicables\, with consideration of the rise of electrical resistivity at higher temperatures\, is described by a system of semilinear equations with discontinuous coefficients. The effects of convection and radiation are taken into account by a nonlinear boundary condition.\\r\\nSimulation results and experimental studies show that the positioning of the single cables has an important influence on the maximum temperatures. In order to find an optimal cable design\, i.e. to arrange the single cables with fixed cross section and current such that the maximum temperature is minimized\, a shape optimization problem is formulated. We derive an adjoint system and the shape gradient using the formal Lagrange approach. The effect of the discontinuity of some coefficients on the shape gradient is shown. By application of different (nonlinear) optimizers combined with the finite element solver COMSOL Multiphysics\, a solution is obtained numerically. In this talk\, we present the modeling of the problem\, the derivation of the shape gradient\, and numerical results.\\r\\nThis is joint work with Helmut Harbrecht and Thomas Apel.
DTEND;TZID=Europe/Zurich:20130412T120000
END:VEVENT
BEGIN:VEVENT
UID:news279@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232556
DTSTART;TZID=Europe/Zurich:20121214T110000
SUMMARY:Seminar in Numerical Analysis: Ulrich Römer (TU Darmstadt)
DESCRIPTION:Simulation results of magnetic devices with ferromagnetic materials are highly sensitive to the nonlinear material law and the geometry. Due to uncertainties inherent to measurements and the fabrication process\, the precise knowledge of model input data cannot be assumed to be given. Therefore\, in recent years\, methods of uncertainty quantification have become more and more important. In this talk\, a short overview of application examples (magnets\, machines) will be given and the PDEs for the magnetic fields will be discussed. Under some simplifications these are 2D nonlinear elliptic interface equations of monotone type. We will introduce a stochastic collocation method based on generalized polynomial chaos to quantify the uncertainties. Furthermore\, we will discuss a worst-case scenario analysis to cover cases where the statistics of the inputs is not available.
Since for the worst-case analysis gradient information is especially important\, sensitivity analysis techniques\, e.g.\, adjoint equations and shape calculus\, are required.\\r\\nJoint work with Sebastian Schöps and Thomas Weiland.
DTEND;TZID=Europe/Zurich:20121214T120000
END:VEVENT
BEGIN:VEVENT
UID:news280@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232736
DTSTART;TZID=Europe/Zurich:20121207T110000
SUMMARY:Seminar in Numerical Analysis: Mike Botchev (University of Twente)
DESCRIPTION:We review some recent advances in Krylov subspace methods to compute an action of the matrix exponential on a given vector. In particular\, we briefly discuss residual-based and shift-and-invert Krylov subspace methods in the context of space-discretized 3D Maxwell's equations.
In our limited experience\, conventional time stepping\, where actions of the matrix exponential have to be repeatedly computed at every time step\, is usually inefficient. We therefore discuss an alternative approach\, based on block Krylov subspaces\, where just a couple of evaluations of the matrix exponential suffice to solve the problem for the whole time interval. DTEND;TZID=Europe/Zurich:20121207T120000 END:VEVENT BEGIN:VEVENT UID:news281@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T232939 DTSTART;TZID=Europe/Zurich:20121109T110000 SUMMARY:Seminar in Numerical Analysis: Sören Bartels (Universität Freiburg) DESCRIPTION:The mathematical description of the elastic deformation of thin plates can be derived by a dimension reduction from three-dimensional elasticity and leads to the minimization of an energy functional that involves the second fundamental form of the deformation and is subject to the constraint that the deformation is an isometry. We discuss two approaches to the discretization of the second order derivatives and the treatment of the isometry constraint. The first one relaxes the second order derivatives via a Reissner-Mindlin approximation and the second one employs discrete Kirchhoff triangles that define a nonconforming second order derivative.
In both cases the deformation is decoupled from the deformation gradient\, and this enables us to employ techniques developed for the approximation of harmonic maps to impose the constraint on the deformation gradient at the nodes of a triangulation. The solution of the nonlinear discrete schemes is done by appropriate gradient flows\, and we demonstrate their energy-decreasing behaviour under mild conditions on step sizes. Numerical experiments show that the proposed schemes provide accurate approximations for large vertical loads as well as compressive boundary conditions.
DTEND;TZID=Europe/Zurich:20121109T120000 END:VEVENT BEGIN:VEVENT UID:news282@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T233431 DTSTART;TZID=Europe/Zurich:20121102T110000 SUMMARY:Seminar in Numerical Analysis: Dominik Schötzau (University of British Columbia) DESCRIPTION:We introduce and analyze a new mixed finite element method for the spatial discretization of an incompressible magnetohydrodynamics problem. It is based on divergence-conforming elements for the fluid velocities and on curl-conforming elements for the magnetic unknowns. The tangential continuity of the velocities is enforced by a DG approach. Central features of the resulting method are that it produces exactly divergence-free velocity approximations and is provably energy-stable\, and that it correctly captures the strongest magnetic singularities in non-smooth domains. We carry out the error analysis of the method\, and present a comprehensive set of numerical tests in two and three dimensions. We also discuss some recent ideas regarding the design of efficient solvers for the matrix systems.
DTEND;TZID=Europe/Zurich:20121102T120000 END:VEVENT BEGIN:VEVENT UID:news283@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T233556 DTSTART;TZID=Europe/Zurich:20121026T110000 SUMMARY:Seminar in Numerical Analysis: Drosos Kourounis (Università della Svizzera italiana) DESCRIPTION:Adjoint-based gradients form an important ingredient of fast optimization algorithms for computer-assisted history matching and life-cycle production optimization. Large-scale applications of adjoint-based reservoir optimization reported so far concern relatively simple physics\, in particular two-phase (oil-water) or three-phase (oil-gas-water) applications. In contrast\, compositional simulation has the added complexity of frequent flash calculations and high compressibilities\, which potentially complicate both the adjoint computation and gradient-based optimization\, especially in the presence of complex constraints. These aspects are investigated using a new adjoint implementation in a research reservoir simulator designed on top of an automatic differentiation framework coupled to a standard large-scale nonlinear optimization package. Based on several examples of increasing complexity\, we conclude that the AD-based adjoint implementation is capable of accurately and efficiently computing gradients for multi-component reservoir flow. However\, optimization of strongly compressible flow with constraints on well rates or pressures leads to potentially poor performance in conjunction with an external optimization package. We present a pragmatic but effective strategy to overcome this issue. DTEND;TZID=Europe/Zurich:20121026T120000 END:VEVENT BEGIN:VEVENT UID:news284@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T233803 DTSTART;TZID=Europe/Zurich:20121012T110000 SUMMARY:Seminar in Numerical Analysis: Stefan Volkwein (Universität Konstanz) DESCRIPTION:We consider the following problem of error estimation for the optimal control of nonlinear partial differential equations: Let an arbitrary admissible control function be given. How far is it from the nearest locally optimal control? Under natural assumptions\, including a second-order sufficient optimality condition for the (unknown) locally optimal control\, we estimate the distance between the two controls. To do this\, we need some information on the lowest eigenvalue of the reduced Hessian. We apply this technique to a model-reduced optimal control problem obtained by proper orthogonal decomposition (POD). The distance between a local solution of the reduced problem and a local solution of the original problem is estimated.
DTEND;TZID=Europe/Zurich:20121012T120000 END:VEVENT BEGIN:VEVENT UID:news285@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T233949 DTSTART;TZID=Europe/Zurich:20120928T110000 SUMMARY:Seminar in Numerical Analysis: Frédéric Nataf (Université Pierre et Marie Curie) DESCRIPTION:We introduce time reversed absorbing conditions (TRAC) in time reversal methods. They make it possible to "recreate the past" without knowing the source which emitted the signals that are back-propagated. Two applications to inverse problems are given\, in both the full and partial aperture case.
DTEND;TZID=Europe/Zurich:20120928T120000 END:VEVENT BEGIN:VEVENT UID:news286@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T234139 DTSTART;TZID=Europe/Zurich:20120601T090000 SUMMARY:Seminar in Numerical Analysis: Walter Gautschi (Purdue University) DESCRIPTION:Algorithms are developed for computing the coefficients in the three-term recurrence relation of repeatedly modified orthogonal polynomials\, the modifications involving division of the orthogonality measure by a linear function with real or complex coefficient. The respective Gaussian quadrature rules can be used to account for simple or multiple poles that may be present in the integrand. Several examples are given to illustrate this. DTEND;TZID=Europe/Zurich:20120601T100000 END:VEVENT BEGIN:VEVENT UID:news287@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T234452 DTSTART;TZID=Europe/Zurich:20120525T090000 SUMMARY:Seminar in Numerical Analysis: Wolfgang Bangerth (Texas A&M University) DESCRIPTION:In many of the modern biomedical imaging modalities\, the measurable signal can be described as the solution of a partial differential equation that depends nonlinearly on the tissue properties (the "parameters") one would like to image.
Consequently\, there are typically no explicit solution formulas for these so-called "inverse problems" that can recover the parameters from the measurements\, and the only way to generate body images from measurements is through numerical approximation.\\r\\nThe resulting parameter estimation schemes have the underlying partial differential equations as side-constraints\, and the solution of these optimization problems often requires solving the partial differential equation thousands or hundreds of thousands of times. The development of efficient schemes is therefore of great interest for the practical use of such imaging modalities in clinical settings. In this talk\, the formulation and efficient solution strategies for such inverse problems will be discussed\, and we will demonstrate their efficacy using examples from our work on Optical Tomography\, a novel way of imaging tumors in humans and animals. The talk will conclude with an outlook on even more complex problems that attempt to automatically optimize experimental setups to obtain better images. DTEND;TZID=Europe/Zurich:20120525T100000 END:VEVENT BEGIN:VEVENT UID:news288@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20180716T234712 DTSTART;TZID=Europe/Zurich:20120427T090000 SUMMARY:Seminar in Numerical Analysis: Daniel Kressner (EPFL) DESCRIPTION:Verifying the stability of a matrix A under perturbations can be a challenging task\, especially when additional structure is imposed on the set of admissible perturbations. In the unstructured case\, a new class of algorithms has recently been proposed by Guglielmi\, Lubich\, and Overton to efficiently compute extremal points (such as the right-most point) of the pseudospectrum. In this talk\, we discuss two extensions of these algorithms. First\, we show how subspace acceleration can be used to significantly speed up convergence\, yielding a quadratically convergent subspace method. Second\, an extension to certain structured pseudospectra is provided. This makes it possible to address structures (real Hamiltonian\, block diagonal\, ...) that have so far been inaccessible by existing techniques.\\r\\nThis talk is based on joint work with Nicola Guglielmi\, Christian Lubich\, and Bart Vandereycken. X-ALT-DESC:Verifying the stability of a matrix