BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Sabre//Sabre VObject 4.5.7//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Zurich
X-LIC-LOCATION:Europe/Zurich
TZURL:http://tzurl.org/zoneinfo/Europe/Zurich
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19810329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19961027T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:news1971@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20251215T194718
DTSTART;TZID=Europe/Zurich:20251219T110000
SUMMARY:Seminar in Numerical Analysis: Omar Lakkis (University of Sussex)
DESCRIPTION:A posteriori error analysis for Galerkin finite element met
 hods has proven a very successful tool in developing mathematically rig
 orous ada
 ptive mesh refinement algorithms for implicit/space-time evolution equatio
 ns. In this work we extend rigorous adaptivity principles to explicit time
 -stepping for the wave equation.\\r\\nI will review in the first part of t
 he talk the state of the art for the wave equation. In the second part\, I
  will present recent work in connection to fully practical explicit scheme
 s such as the Leapfrog method and local time step variants thereof.\\r\\nT
 his talk is mostly based on recent joint work with M Grote\, C Santos and 
 earlier work with EH Georgoulis\, C Makridakis and J Virtanen.\\r\\nFor fu
 rther information about the seminar\, please visit this webpage [https://d
 mi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-analysis/].
X-ALT-DESC:<p>A posteriori error analysis for Galerkin finite element meth
 ods has proven a very successful tool in developing mathematically rigoro
 us a
 daptive mesh refinement algorithms for implicit/space-time evolution equat
 ions. In this work we extend rigorous adaptivity principles to explicit ti
 me-stepping for the wave equation.</p>\n<p>I will review in the first part
  of the talk the state of the art for the wave equation. In the second par
 t\, I will present recent work in connection to fully practical explicit s
 chemes such as the Leapfrog method and local time step variants thereof.</
 p>\n<p>This talk is mostly based on recent joint work with M Grote\, C San
 tos and earlier work with EH Georgoulis\, C Makridakis and J Virtanen.</p>
 \n<p>For further information about the seminar\, please visit this <a href
 ="https://dmi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-analy
 sis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20251219T123000
END:VEVENT
BEGIN:VEVENT
UID:news1962@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20251130T190800
DTSTART;TZID=Europe/Zurich:20251212T110000
SUMMARY:Seminar in Numerical Analysis: Gilles Vilmart (University of Geneva
 )
DESCRIPTION:Explicit stabilized methods are highly efficient time integrato
 rs for large and stiff systems of ODEs\, especially when applied to semi
 -dis
 crete parabolic problems. However\, when local spatial mesh refinement is 
 introduced\, their efficiency decreases\, since the stiffness is driven by
  only the smallest mesh element. A natural approach is to split the system
  into fast stiff and slower mildly stiff components. In this context\, [A.
  Abdulle\, M.J. Grote\, and G. Rosilho de Souza 2022] proposed the order o
 ne multirate explicit stabilized method (mRKC). We extend their approach t
 o second order and introduce the new multirate ROCK2 method (mROCK2)\, whi
 ch achieves high precision and allows a step-size strategy with error cont
 rol.\\r\\nThis talk is based on joint work with Mathieu Benninghoff (Unive
 rsity of Geneva).\\r\\nFor further information about the seminar\, please 
 visit this webpage [https://dmi.unibas.ch/de/forschung/mathematik/seminar-
 in-numerical-analysis/].
X-ALT-DESC:<p>Explicit stabilized methods are highly efficient time integra
 tors for large and stiff systems of ODEs\, especially when applied to semi
 -d
 iscrete parabolic problems. However\, when local spatial mesh refinement i
 s introduced\, their efficiency decreases\, since the stiffness is driven 
 by only the smallest mesh element. A natural approach is to split the syst
 em into fast stiff and slower mildly stiff components. In this context\, [
 A. Abdulle\, M.J. Grote\, and G. Rosilho de Souza 2022] proposed the order
  one multirate explicit stabilized method (mRKC). We extend their approach
  to second order and introduce the new multirate ROCK2 method (mROCK2)\, w
 hich achieves high precision and allows a step-size strategy with error co
 ntrol.</p>\n<p>This talk is based on joint work with Mathieu Benninghoff (
 University of Geneva).</p>\n<p>For further information about the seminar\,
  please visit this <a href="https://dmi.unibas.ch/de/forschung/mathematik/
 seminar-in-numerical-analysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20251212T123000
END:VEVENT
BEGIN:VEVENT
UID:news1961@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20251126T164729
DTSTART;TZID=Europe/Zurich:20251205T110000
SUMMARY:Seminar in Numerical Analysis: Mamadou N'diaye (Université Polytec
 hnique Hauts-de-France)
DESCRIPTION:In this presentation\, we address the numerical solution of the
  second-order acoustic wave equation using high-order explicit time-integr
 ation methods on CPU-GPU architectures. After discretizing the spatial dom
 ain with the spectral element method\, we compare several time-integrati
 on sch
 emes for the resulting system of ODEs. We first consider the classical sec
 ond-order leapfrog method\, followed by higher-order Runge-Kutta-Nyström 
 (RKN) schemes and the modified equation approach. We then focus on the ana
 lysis of the stability properties of the RKN method. Strategies for incorp
 orating the damping term\, which involves the first-order time derivative\
 , are discussed with particular attention to preserving the order of conve
 rgence of the schemes. The proposed approaches are compared in terms of co
 mputational efficiency\, and 3D numerical simulations for the acoustic w
 ave
  equation are presented.\\r\\nFor further information about the seminar\, 
 please visit this webpage [https://dmi.unibas.ch/de/forschung/mathematik/s
 eminar-in-numerical-analysis/].
X-ALT-DESC:<p>In this presentation\, we address the numerical solution of t
 he second-order acoustic wave equation using high-order explicit time-inte
 gration methods on CPU-GPU architectures. After discretizing the spatial d
 omain with the spectral element method\, we compare several time-integra
 tion s
 chemes for the resulting system of ODEs. We first consider the classical s
 econd-order leapfrog method\, followed by higher-order Runge-Kutta-Nyströ
 m (RKN) schemes and the modified equation approach. We then focus on the a
 nalysis of the stability properties of the RKN method. Strategies for inco
 rporating the damping term\, which involves the first-order time derivativ
 e\, are discussed with particular attention to preserving the order of con
 vergence of the schemes. The proposed approaches are compared in terms of 
 computational efficiency\, and 3D numerical simulations for the acoustic
  wa
 ve equation are presented.</p>\n<p>For further information about the semin
 ar\, please visit this <a href="https://dmi.unibas.ch/de/forschung/mathema
 tik/seminar-in-numerical-analysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20251205T123000
END:VEVENT
BEGIN:VEVENT
UID:news1953@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20251109T143006
DTSTART;TZID=Europe/Zurich:20251114T110000
SUMMARY:Seminar in Numerical Analysis: Andrea Moiola (University of Pavia)
DESCRIPTION:We propose a new space-time variational formulation for wave eq
 uation initial-boundary value problems. The key property is that the formu
 lation is coercive (sign-definite) and continuous in a norm stronger than 
 H1(Q)\, Q being the space-time cylinder. Coercivity holds for constant-coe
 fficient impedance cavity problems posed in star-shaped domains\, and for 
 a class of impedance-Dirichlet problems. The formulation is defined using 
 simple Morawetz multipliers and its coercivity is proved with elementary a
 nalytical tools\, following earlier work on the Helmholtz equation. The fo
 rmulation can be stably discretised with any H2(Q)-conforming discrete spa
 ce\, leading to quasi-optimal space-time Galerkin schemes. Several numeric
 al experiments show the excellent properties of the method. We also presen
 t a continuous-interior-penalty variant\, a posteriori error estimators\, 
 and adaptive discretisations. This is joint work with Paolo Bignardi and
  Theophile Chaumont-Frelet.\\r\\n[1] Bignardi\, Moiola\, A space-time cont
 inuous and coercive formulation for the wave equation\, Numerische Mathema
 tik 2025. https://doi.org/10.1007/s00211-025-01478-3 [2] Bignardi\, Space-
 time Morawetz formulations for the wave equation\, PhD thesis\, University
  of Pavia 2025. https://iris.unipv.it/handle/11571/1519237\\r\\nFor furthe
 r information about the seminar\, please visit this webpage [https://dmi.u
 nibas.ch/de/forschung/mathematik/seminar-in-numerical-analysis/].
X-ALT-DESC:<p>We propose a new space-time variational formulation for wave 
 equation initial-boundary value problems. The key property is that the for
 mulation is coercive (sign-definite) and continuous in a norm stronger tha
 n H1(Q)\, Q being the space-time cylinder. Coercivity holds for constant-c
 oefficient impedance cavity problems posed in star-shaped domains\, and fo
 r a class of impedance-Dirichlet problems. The formulation is defined usin
 g simple Morawetz multipliers and its coercivity is proved with elementary
  analytical tools\, following earlier work on the Helmholtz equation. The 
 formulation can be stably discretised with any H2(Q)-conforming discrete s
 pace\, leading to quasi-optimal space-time Galerkin schemes. Several numer
 ical experiments show the excellent properties of the method. We also pres
 ent a continuous-interior-penalty variant\, a posteriori error estimators\
 , and adaptive discretisations. This is joint work with Paolo Bignardi a
 nd Theophile Chaumont-Frelet.</p>\n<p>[1] Bignardi\, Moiola\, A space-time
  continuous and coercive formulation for the wave equation\, Numerische Ma
 thematik 2025. https://doi.org/10.1007/s00211-025-01478-3 [2] Bignardi\, S
 pace-time Morawetz formulations for the wave equation\, PhD thesis\, Unive
 rsity of Pavia 2025. https://iris.unipv.it/handle/11571/1519237</p>\n<p>Fo
 r further information about the seminar\, please visit this <a href="https
 ://dmi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-analysis/">w
 ebpage</a>.</p>
DTEND;TZID=Europe/Zurich:20251114T123000
END:VEVENT
BEGIN:VEVENT
UID:news1939@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20251029T175015
DTSTART;TZID=Europe/Zurich:20251107T110000
SUMMARY:Seminar in Numerical Analysis: Rekha Khot (ENPC et Inria\, Paris)
DESCRIPTION:We consider the classical wave equation in a Friedrichs-type fo
 rmulation involving a skew-symmetric spatial differential operator. The fo
 cus is on the analysis of a fully discrete setting\, employing third- and 
 fourth-order explicit Runge–Kutta schemes (ERK3 and ERK4) for time discr
 etization\, combined with Hybrid High-Order (HHO) and Weak Galerkin (WG) m
 ethods for spatial discretization. A first objective is to establish key p
 roperties that address the static coupling between cell and face unknowns\
 , which is intrinsic to hybrid methods.\\r\\nWe investigate two distinct s
 trategies for error analysis: one based on bounding the operator norm of T
 aylor polynomials applied to the discrete differential operator\, and anot
 her that involves testing the error equations with appropriately chosen in
 crements. The effectiveness of these strategies depends on the specific in
 terpolation operators involved and the properties that make such analyses 
 viable. Finally\, we outline how the same framework\, using various HHO va
 riants\, extends to the acoustic-elastic interface problem\, where the cou
 pled terms contribute no additional error.\\r\\nFor further information ab
 out the seminar\, please visit this webpage [https://dmi.unibas.ch/de/fors
 chung/mathematik/seminar-in-numerical-analysis/].
X-ALT-DESC:<p>We consider the classical wave equation in a Friedrichs-type 
 formulation involving a skew-symmetric spatial differential operator. The 
 focus is on the analysis of a fully discrete setting\, employing third- an
 d fourth-order explicit Runge–Kutta schemes (ERK3 and ERK4) for time dis
 cretization\, combined with Hybrid High-Order (HHO) and Weak Galerkin (WG)
  methods for spatial discretization. A first objective is to establish key
  properties that address the static coupling between cell and face unknown
 s\, which is intrinsic to hybrid methods.</p>\n<p>We investigate two disti
 nct strategies for error analysis: one based on bounding the operator norm
  of Taylor polynomials applied to the discrete differential operator\, and
  another that involves testing the error equations with appropriately chos
 en increments. The effectiveness of these strategies depends on the specif
 ic interpolation operators involved and the properties that make such anal
 yses viable. Finally\, we outline how the same framework\, using various H
 HO variants\, extends to the acoustic-elastic interface problem\, where th
 e coupled terms contribute no additional error.</p>\n<p>For further inform
 ation about the seminar\, please visit this <a href="https://dmi.unibas.ch
 /de/forschung/mathematik/seminar-in-numerical-analysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20251107T123000
END:VEVENT
BEGIN:VEVENT
UID:news1919@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20251029T174630
DTSTART;TZID=Europe/Zurich:20251017T110000
SUMMARY:Seminar in Numerical Analysis: Marcella Bonazzoli (Inria\, Institut
  Polytechnique de Paris)
DESCRIPTION:In this talk we are interested in reconstructing the interface 
 between the concrete structure of a hydroelectric gravity dam and the unde
 rlying rock\, using Full Waveform Inversion. Indeed\, it appears that the 
 roughness of the dam-rock interface has an effect on the sliding stability
  of gravity dams. We minimize a regularized misfit cost functional by comp
 uting its shape derivative and iteratively updating the interface shape by
  the gradient descent method. At each iteration\, we simulate time-harmoni
 c elasto-acoustic wave propagation models\, coupling linear elasticity in 
 the solid medium with acoustics in the reservoir. Numerical results using 
 realistic noisy synthetic data demonstrate the method's ability to accur
 atel
 y reconstruct the dam-rock interface\, even with a limited number of measu
 rements. This is joint work with Mohamed Aziz Boukraa\, Lorenzo Audibert\,
  Houssem Haddar and Denis Vautrin.\\r\\nFor further information about the 
 seminar\, please visit this webpage [https://dmi.unibas.ch/de/forschung/ma
 thematik/seminar-in-numerical-analysis/].
X-ALT-DESC:<p>In this talk we are interested in reconstructing the interfac
 e between the concrete structure of a hydroelectric gravity dam and the un
 derlying rock\, using Full Waveform Inversion. Indeed\, it appears that th
 e roughness of the dam-rock interface has an effect on the sliding stabili
 ty of gravity dams. We minimize a regularized misfit cost functional by co
 mputing its shape derivative and iteratively updating the interface shape 
 by the gradient descent method. At each iteration\, we simulate time-harmo
 nic elasto-acoustic wave propagation models\, coupling linear elasticity i
 n the solid medium with acoustics in the reservoir. Numerical results usin
 g realistic noisy synthetic data demonstrate the method's ability to acc
 urat
 ely reconstruct the dam-rock interface\, even with a limited number of mea
 surements. This is joint work with Mohamed Aziz Boukraa\, Lorenzo Audibert
 \, Houssem Haddar and Denis Vautrin.</p>\n<p>For further information about
  the seminar\, please visit this <a href="https://dmi.unibas.ch/de/forschu
 ng/mathematik/seminar-in-numerical-analysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20251017T123000
END:VEVENT
BEGIN:VEVENT
UID:news1903@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20250902T144404
DTSTART;TZID=Europe/Zurich:20250919T110000
SUMMARY:Seminar in Numerical Analysis: Niraj Kumar Shukla (Indian Institute
  of Technology Indore)
DESCRIPTION:The generalized translation invariant (GTI) systems unify the d
 iscrete frame theory of generalized shift-invariant systems with its conti
 nuous version\, such as wavelets\, shearlets\, Gabor transforms\, and othe
 rs. This article provides sufficient conditions to construct pairwise orth
 ogonal Parseval GTI frames in satisfying the local integrability condition
  (LIC) and having the Calderón sum one\, where G is a second countable lo
 cally compact abelian group. The pairwise orthogonality plays a crucial ro
 le in multiple access communications\, hiding data\, synthesizing superfra
 mes and frames\, etc. Further\, we provide a result for constructing N num
 bers of GTI Parseval frames\, which are pairwise orthogonal. Consequently\
 , we obtain an explicit construction of pairwise orthogonal Parseval frame
 s in and \, using B-splines as a generating function. In the end\, the res
 ults are particularly discussed for wavelet systems. This is joint work
  with Navneet Redhu and Anupam Gumber.\\r\\nFor further information abo
 ut t
 he seminar\, please visit this webpage [https://dmi.unibas.ch/de/forschung
 /mathematik/seminar-in-numerical-analysis/].
X-ALT-DESC:<p>The generalized translation invariant (GTI) systems unify the
  discrete frame theory of generalized shift-invariant systems with its con
 tinuous version\, such as wavelets\, shearlets\, Gabor transforms\, and ot
 hers. This article provides sufficient conditions to construct pairwise or
 thogonal Parseval GTI frames in satisfying the local integrability conditi
 on (LIC) and having the Calderón sum one\, where G is a second countable 
 locally compact abelian group. The pairwise orthogonality plays a crucial 
 role in multiple access communications\, hiding data\, synthesizing superf
 rames and frames\, etc. Further\, we provide a result for constructing N n
 umbers of GTI Parseval frames\, which are pairwise orthogonal. Consequentl
 y\, we obtain an explicit construction of pairwise orthogonal Parseval fra
 mes in and \, using B-splines as a generating function. In the end\, the r
 esults are particularly discussed for wavelet systems. This is joint wo
 rk with Navneet Redhu and Anupam Gumber.</p>\n<p>For further informatio
 n ab
 out the seminar\, please visit this <a href="https://dmi.unibas.ch/de/fors
 chung/mathematik/seminar-in-numerical-analysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20250919T123000
END:VEVENT
BEGIN:VEVENT
UID:news1902@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20250825T191230
DTSTART;TZID=Europe/Zurich:20250829T110000
SUMMARY:Seminar in Numerical Analysis: Dinh Dũng (Vietnam National Univers
 ity)
DESCRIPTION:We prove convergence rates of linear sampling recovery of funct
 ions in abstract Bochner spaces satisfying weighted summability of their g
 eneralized polynomial chaos expansion coefficients. The underlying algorit
 hm is a function-valued extension of the least squares method widely used 
 and thoroughly studied in scalar-valued function recovery.\\r\\nWe apply o
 ur theory to collocation approximation of solutions to parametric ellipt
 ic or parabolic PDEs with log-normal random inputs and to relevant approxi
 mation of infinite dimensional holomorphic functions. The application allo
 ws us to significantly improve known results in Computational Uncertainty 
 Quantification for these problems. Our results are also applicable for par
 ametric PDEs with affine inputs\, where they match the known rates.\\r\\n\
 \r\\nFor further information about the seminar\, please visit this webpage
  [https://dmi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-an
 alysis/].
X-ALT-DESC:<p>We prove convergence rates of linear sampling recovery of fun
 ctions in abstract Bochner spaces satisfying weighted summability of their
  generalized polynomial chaos expansion coefficients. The underlying algor
 ithm is a function-valued extension of the least squares method widely use
 d and thoroughly studied in scalar-valued function recovery.</p>\n<p>We ap
 ply our theory to collocation approximation of solutions to paramet
 ric elliptic or parabolic PDEs with log-normal random inputs and to releva
 nt approximation of infinite dimensional holomorphic functions. The applic
 ation allows us to significantly improve known results in Computational Un
 certainty Quantification for these problems. Our results are also applicab
 le for parametric PDEs with affine inputs\, where they match the known rat
 es.</p>\n\n<p>For further information about the seminar\, please visit thi
 s <a href="https://dmi.unibas.ch/de/forschung/mathematik/seminar-in-nume
 rical-analysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20250829T123000
END:VEVENT
BEGIN:VEVENT
UID:news1785@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20250417T144636
DTSTART;TZID=Europe/Zurich:20250425T110000
SUMMARY:Seminar in Numerical Analysis: Björn Sprungk (TU Freiberg) 
DESCRIPTION:In this talk we consider the Bayesian approach to inverse probl
 ems which allows for uncertainty quantification for the data-driven recons
 truction of the ground truth. After a brief discussion of the well-posed
 nes
 s and local Lipschitz stability of Bayesian inverse problems\, we focus on
  Markov chain Monte Carlo methods for sampling and integration with respec
 t to the posterior probability distribution. Here we present our contribut
 ions to Metropolis-Hastings algorithms in function spaces\, discuss conve
 rgence in terms of geometric ergodicity and present numerical experiments 
 which show a dimension-independent performance which is\, moreover\, rob
 ust
  to the level of observational noise in the data. In the last part of the 
 talk we present recent results of combining Metropolis-Hastings with inter
 acting particle sampling methods based on Euler-Maruyama discretizations o
 f stochastic differential equations of McKean-Vlasov type.\\r\\n\\r\\nFor 
 further information about the seminar\, please visit this webpage [http
 s://dmi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-analysis
 /].
X-ALT-DESC:<p>In this talk we consider the Bayesian approach to inverse pro
 blems which allows for uncertainty quantification for the data-driven reco
 nstruction of the ground truth. After a brief discussion of the well-pos
 edn
 ess and local Lipschitz stability of Bayesian inverse problems\, we focus 
 on Markov chain Monte Carlo methods for sampling and integration with resp
 ect to the posterior probability distribution. Here we present our contrib
 utions to Metropolis-Hastings algorithms in function spaces\, discuss co
 nvergence in terms of geometric ergodicity and present numerical exper
 iments which show a dimension-independent performance which is\, moreove
 r\,
  robust to the level of observational noise in the data. In the last part 
 of the talk we present recent results of combining Metropolis-Hastings wit
 h interacting particle sampling methods based on Euler-Maruyama discretiza
 tions of stochastic differential equations of McKean-Vlasov type.</p>\n\n<
 p>For further information about the seminar\, please visit this <a hre
 f="https://dmi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-a
 nalysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20250425T123000
END:VEVENT
BEGIN:VEVENT
UID:news1826@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20250328T125929
DTSTART;TZID=Europe/Zurich:20250404T110000
SUMMARY:Seminar in Numerical Analysis: Théophile Chaumont-Frelet (Inria L
 ille)
DESCRIPTION:The wave equation is a basic PDE model central to a plethora of
  physical and engineering applications\, with such applications requiring 
 approximate solutions obtained by numerical schemes. In this talk\, I will
  focus on the space semi-discretization of the wave equation with a finite
  element method (and assume that time integration is exactly performed). I
 n the context of finite element methods\, a posteriori error estimates are
  a now widely established technique to rigorously control the discretizati
 on error\, and to drive adaptive processes where the finite element mesh i
 s iteratively refined. However\, although a posteriori error estimates are
  widely available for elliptic and parabolic problems\, the literature is 
 much scarcer for hyperbolic problems\, including the time-dependent wave e
 quation. In this talk\, I will discuss a new a posteriori error estimator 
 that hinges on ideas previously developed for the Helmholtz equation (the 
 time-harmonic version of the wave equation). To the best of my knowledge\,
  this new error estimator is the first to provide both an upper and a lowe
 r bound for the error measured in the same norm. I will also briefly dis
 cuss preliminary results concerning time discretization\, and applic
 ation to adaptive algorithms.\\r\\n\\r\\nFor further information about the
  seminar\, please visit this webpage [https://dmi.unibas.ch/de/forschun
 g/mathematik/seminar-in-numerical-analysis/].
X-ALT-DESC:<p>The wave equation is a basic PDE model central to a plethora 
 of physical and engineering applications\, with such applications requirin
 g approximate solutions obtained by numerical schemes. In this talk\, I wi
 ll focus on the space semi-discretization of the wave equation with a fini
 te element method (and assume that time integration is exactly performed).
  In the context of finite element methods\, a posteriori error estimates a
 re a now widely established technique to rigorously control the discretiza
 tion error\, and to drive adaptive processes where the finite element mesh
  is iteratively refined. However\, although a posteriori error estimates a
 re widely available for elliptic and parabolic problems\, the literature i
 s much scarcer for hyperbolic problems\, including the time-dependent wave
  equation. In this talk\, I will discuss a new a posteriori error estimato
 r that hinges on ideas previously developed for the Helmholtz equation (th
 e time-harmonic version of the wave equation). To the best of my knowledge
 \, this new error estimator is the first to provide both an upper and a lo
 wer bound for the error measured in the same norm. I will also briefly d
 iscuss preliminary results concerning time discretization\, and appl
 ication to adaptive algorithms.</p>\n\n<p>For further information about th
 e seminar\, please visit this <a href="https://dmi.unibas.ch/de/forschu
 ng/mathematik/seminar-in-numerical-analysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20250404T123000
END:VEVENT
BEGIN:VEVENT
UID:news1787@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20250318T095344
DTSTART;TZID=Europe/Zurich:20250328T110000
SUMMARY:Seminar in Numerical Analysis: Daniel Potts (TU Chemnitz)
DESCRIPTION:In this talk\, we present algorithms for the approximation of m
 ultivariate functions. We start with the approximation by trigonometric po
 lynomials based on sampling of multivariate functions on rank-1 lattices o
 r on scattered data. To this end\, we study the approximation of functions
  in periodic Sobolev spaces of dominating mixed smoothness. The proposed a
 lgorithm is based mainly on fast Fourier transforms\, and the arithmetic
  co
 mplexity of the algorithm depends only on the cardinality of the support o
 f the trigonometric polynomial in the frequency domain. After a detailed i
 ntroduction we will focus on the following questions in more detail.\\r\\n
 - We discuss methods where the support of the trigonometric polynomial
  is unknown.\\r\\n- We present a method based on the analysis of varianc
 e (ANOVA) decomposition that aims to detect the structure of the functio
 n\, i.e.\, find out which dimension and dimension interactions are impor
 tant. This information is then utilized in obtaining an approximation fo
 r the function.\\r\\n- Based on these methods we develop an efficient\, non
 -intrusive\, adaptive algorithm for the solution of elliptic partial dif
 ferential equations.\\r\\n\\r\\nFor further information about the semina
 r\, please visit this webpage [https://dmi.unibas.ch/de/forschung/mathem
 atik/seminar-in-numerical-analysis/].
X-ALT-DESC:<p>In this talk\, we present algorithms for the approximation of
  multivariate functions. We start with the approximation by trigonometric 
 polynomials based on sampling of multivariate functions on rank-1 lattices
  or on scattered data. To this end\, we study the approximation of functio
 ns in periodic Sobolev spaces of dominating mixed smoothness. The proposed
  algorithm is based mainly on fast Fourier transforms\, and the arithme
 tic complexity of the algorithm depends only on the cardinality of the sup
 port
  of the trigonometric polynomial in the frequency domain. After a detailed
  introduction we will focus on the following questions in more detail.</p>
 \n<ul><li><p>We discuss methods where the support of the trigonometric
  polynomial is unknown.</p></li><li><p>We present a method based on 
 the analysis of variance (ANOVA) decomposition that aims to detect the str
 ucture of the function\, i.e.\, find out which dimension and dimension int
 eractions are important. This information is then utilized in obtaining an
  approximation for the function.</p></li><li><p>Based on these metho
 ds we develop an efficient\, non-intrusive\, adaptive algorithm for the so
 lution of elliptic partial differential equations.</p></li></ul>\n\n<p>
 For further information about the seminar\, please visit this <a href="
 https://dmi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-anal
 ysis/">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20250328T123000
END:VEVENT
BEGIN:VEVENT
UID:news1798@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20250315T000920
DTSTART;TZID=Europe/Zurich:20250321T110000
SUMMARY:Seminar in Numerical Analysis: Zhaonan Dong (Inria Paris)
DESCRIPTION:PDE models are often characterized by local features such as so
 lution singularities\, boundary layers\, domains with complicated boundari
 es\, and phase transitions. These unique characteristics make designing ac
 curate numerical solutions challenging or demand substantial computational
  resources. One effective strategy is to develop novel numerical methods t
 hat support general meshes composed of polygonal or polyhedral elements\, 
 enabling adaptive refinement that efficiently captures local features.\\r\
 \nIn this talk\, we present recent results on a new a posteriori error ana
 lysis for the discontinuous Galerkin (dG) method applied to general comput
 ational meshes consisting of polygonal/polyhedral (polytopic) elements wit
 h an arbitrary number of faces. This analysis\, the first of its kind i
 n the literature\, generalizes known dG methods by allowing an arbitrar
 y numbe
 r of irregular hanging nodes per element. Moreover\, under practical mesh 
 assumptions\, the new error estimator accommodates nearly any element shap
 e—even with curved faces. We will also briefly discuss the a posteriori 
 error estimator for the space-time dG method in solving the Allen–Cahn p
 roblem\, as well as the hp-a posteriori error estimator for the DG method 
 in tackling fourth-order PDEs.\\r\\n\\r\\nFor further information about th
 e seminar\, please visit this webpage [https://dmi.unibas.ch/de/forschu
 ng/mathematik/seminar-in-numerical-analysis/].
X-ALT-DESC:<p>PDE models are often characterized by local features such as 
 solution singularities\, boundary layers\, domains with complicated bounda
 ries\, and phase transitions. These unique characteristics make designing 
 accurate numerical solutions challenging or demand substantial computation
 al resources. One effective strategy is to develop novel numerical methods
  that support general meshes composed of polygonal or polyhedral elements\
 , enabling adaptive refinement that efficiently captures local features.</
 p>\n<p>In this talk\, we present recent results on a new a posteriori erro
 r analysis for the discontinuous Galerkin (dG) method applied to general c
 omputational meshes consisting of polygonal/polyhedral (polytopic) element
 s with an arbitrary number of faces. This analysis\, the first of its kind
  in the literature\, generalizes known dG methods by allowing an arbitrary 
 number of irregular hanging nodes per element. Moreover\, under practical 
 mesh assumptions\, the new error estimator accommodates nearly any element
  shape—even with curved faces. We will also briefly discuss the a poster
 iori error estimator for the space-time dG method in solving the Allen–C
 ahn problem\, as well as the hp-a posteriori error estimator for the DG me
 thod in tackling fourth-order PDEs.</p>\n\n<p>For further information abou
 t the seminar\, please visit this <a href="t3://page?uid=1115" title="Open
 s internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20250321T123000
END:VEVENT
BEGIN:VEVENT
UID:news1720@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20241213T192501
DTSTART;TZID=Europe/Zurich:20241220T110000
SUMMARY:Seminar in Numerical Analysis: Ana Djurdjevac (FU Berlin) 
DESCRIPTION:Interacting particle systems provide flexible and powerful mode
 ls that are useful in many application areas such as sociology (agents)\, 
 molecular dynamics (proteins) etc. However\, particle systems with large n
 umbers of particles are very complex and difficult to handle\, both analyt
 ically and computationally. Therefore\, a common strategy is to derive e
 ffective equations that describe the time evolution of the empirical part
 icle density. A prototypical example that we will consider is the formal i
 dentification of a finite system of particles with the singular Dean-Kawas
 aki equation. We will give a short introduction about the Dean-Kawasaki eq
 uation and its applications. Our aim is to introduce a well-behaved nonlin
 ear SPDE that approximates the Dean-Kawasaki equation for a particle syste
 m with mean-field interaction both in the drift and the noise term. We wan
 t to study the well-posedness of these nonlinear SPDE models and to contro
 l the weak error of the SPDE approximation with respect to the particle sy
 stem using the technique of transport equations on the space of probabilit
 y measures. This is joint work with H. Kremp\, N. Perkowski and J. Xia
 ohao. Furthermore\, we will discuss possible numerical methods for these p
 roblems. In particular\, we will focus on hybrid methods. This is joint
  work with A. Almgren and J. Bell.\\r\\n\\r\\nFor further information ab
 out the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Interacting particle systems provide flexible and powerful mo
 dels that are useful in many application areas such as sociology (agents)\
 , molecular dynamics (proteins) etc. However\, particle systems with large
  numbers of particles are very complex and difficult to handle\, both anal
 ytically and computationally. Therefore\, a common strategy is to derive e
 ffective equations that describe the time evolution of the empirical part
 icle density. A prototypical example that we will consider is the formal i
 dentification of a finite system of particles with the singular Dean-Kawas
 aki equation. We will give a short introduction about the Dean-Kawasaki eq
 uation and its applications. Our aim is to introduce a well-behaved nonlin
 ear SPDE that approximates the Dean-Kawasaki equation for a particle syste
 m with mean-field interaction both in the drift and the noise term. We wan
 t to study the well-posedness of these nonlinear SPDE models and to contro
 l the weak error of the SPDE approximation with respect to the particle sy
 stem using the technique of transport equations on the space of probabilit
 y measures. This is joint work with H. Kremp\, N. Perkowski and J. Xia
 ohao. Furthermore\, we will discuss possible numerical methods for these p
 roblems. In particular\, we will focus on hybrid methods. This is joint
  work with A. Almgren and J. Bell.</p>\n\n<p>For further information abo
 ut the seminar\, please visit this <a href="t3://page?uid=1115" title="O
 pens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20241220T120000
END:VEVENT
BEGIN:VEVENT
UID:news1722@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20241213T192424
DTSTART;TZID=Europe/Zurich:20241213T110000
SUMMARY:Seminar in Numerical Analysis: Robert Scheichl (Heidelberg Universi
 ty) 
DESCRIPTION:The fast simulation of Gaussian random fields (GRFs) plays a pi
 votal role in many research areas such as uncertainty quantification\, dat
 a science and spatial statistics. In theory\, it is a well understood and 
 solved problem\, but in practice the efficiency and performance of traditi
 onal sampling procedures degenerates quickly when the random field is disc
 retised on a grid with spatial resolution going to zero. Most existing alg
 orithms\, such as Cholesky factorisation samplers\, do not scale well on l
 arge-scale parallel computers. On the other hand\, stationary\, iterative 
 approaches such as the Gibbs sampler become extremely inefficient at high 
 grid resolution. Already in the late 1980s\, Goodman\, Sokal and their co
 llaborators wrote a series of papers aimed at accelerating the Gibbs sampl
 er using multigrid ideas. The key observation is the intricate connection 
 of random samplers\, such as the Gibbs method\, with iterative solvers for
  linear systems. They proposed the so-called multigrid Monte Carlo (MGMC) 
 method - a random analogue of the multigrid method for solving discretised
  PDEs. We revisit the MGMC algorithm and provide rigorous theoretical jus
 tifications for the optimal scalability of the method for large scale prob
 lems\, with a cost growing linearly with problem size. Most importantly we
  extend the method and the analysis to the Bayesian setting\, i.e.\, GRFs 
 conditioned on noisy data. By using bespoke\, conditioned variants of the 
 Gibbs sampler on each level of the multigrid hierarchy we are able to samp
 le directly from the posterior with a fixed\, grid-independent integrated 
 autocorrelation time. Our theoretical analysis is confirmed by numerical 
 experiments. We further generalise the approach by exploiting more flexibl
 e and robust grid hierarchies that were developed in the context of Algeb
 raic Multigrid solvers. Finally\, using existing PDE libraries\, such as 
 PETSc\, the sampler is easily parallelised and scales optimally to large p
 rocessor numbers.\\r\\n\\r\\nFor further information about the seminar\, p
 lease visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>The fast simulation of Gaussian random fields (GRFs) plays a 
 pivotal role in many research areas such as uncertainty quantification\, d
 ata science and spatial statistics. In theory\, it is a well understood an
 d solved problem\, but in practice the efficiency and performance of tradi
 tional sampling procedures degenerates quickly when the random field is di
 scretised on a grid with spatial resolution going to zero. Most existing a
 lgorithms\, such as Cholesky factorisation samplers\, do not scale well on
  large-scale parallel computers. On the other hand\, stationary\, iterativ
 e approaches such as the Gibbs sampler become extremely inefficient at hig
 h grid resolution.&nbsp\;Already in the late 1980s\, Goodman\, Sokal and t
 heir collaborators wrote a series of papers aimed at accelerating the Gibb
 s sampler using multigrid ideas. The key observation is the intricate conn
 ection of random samplers\, such as the Gibbs method\, with iterative solv
 ers for linear systems. They proposed the so-called multigrid Monte Carlo 
 (MGMC) method - a random analogue of the multigrid method for solving disc
 retised PDEs.&nbsp\;We revisit the MGMC algorithm and provide rigorous the
 oretical justifications for the optimal scalability of the method for larg
 e scale problems\, with a cost growing linearly with problem size. Most im
 portantly we extend the method and the analysis to the Bayesian setting\, 
 i.e.\, GRFs conditioned on noisy data. By using bespoke\, conditioned vari
 ants of the Gibbs sampler on each level of the multigrid hierarchy we are 
 able to sample directly from the posterior with a fixed\, grid-independent
  integrated autocorrelation time.&nbsp\;Our theoretical analysis is confir
 med by numerical experiments. We further generalise the approach by exploi
 ting more flexible and robust grid hierarchies that were developed in the 
 context of&nbsp\;Algebraic Multigrid solvers.&nbsp\;Finally\, using existi
 ng PDE libraries\, such as PETSc\, the sampler is easily parallelised and 
 scales optimally to large processor numbers.</p>\n\n<p>For further informa
 tion about the seminar\, please visit this <a href="t3://page?uid=1115" ti
 tle="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20241213T120000
END:VEVENT
BEGIN:VEVENT
UID:news1728@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20241127T162336
DTSTART;TZID=Europe/Zurich:20241206T110000
SUMMARY:Seminar in Numerical Analysis: Alexandre Imperiale (CEA Paris-Sacla
 y) 
DESCRIPTION:We consider the problem of transient wave propagation within tw
 o domains linked with smooth nonlinear contact conditions at a common inte
 rface. While standard linear elastodynamics is assumed within each domain\
 , at the interface we consider continuity of normal stresses\, and (more i
 mportantly) a smooth finite compressibility law. We propose an energy pre
 serving – thus stable – time scheme based upon [1]\, and devise an eff
 icient time-marching algorithm. We validate our approach with semi-analyti
 cal results\, and illustrate typical nonlinear wave phenomena (harmonics\
 , zero-frequency components) in 2D.\\r\\nReferences\\r\\n[1] O. Gonzalez\,
  Exact energy and momentum conserving algorithms for general models in non
 linear elasticity\, Comput. Methods Appl. Mech. Eng.\, 2000\, 190(13-14)\,
  1763-1783.\\r\\nFor further information about the seminar\, please visit 
 this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>We consider the problem of transient wave propagation within 
 two domains linked with smooth nonlinear contact conditions at a common in
 terface. While standard linear elastodynamics is assumed within each domai
 n\, at the interface we consider continuity of normal stresses\, and (more
  importantly) a smooth finite compressibility law.&nbsp\;We propose an ene
 rgy preserving – thus stable – time scheme based upon [1]\, and devise
  an efficient time-marching algorithm. We validate our approach with semi-
 analytical results\, and illustrate typical nonlinear wave phenomena (har
 monics\, zero-frequency components) in 2D.</p>\n<p>References</p>\n<p>[1] 
 O. Gonzalez\, Exact energy and momentum conserving algorithms for general 
 models in nonlinear elasticity\, Comput. Methods Appl. Mech. Eng.\, 2000\,
  190(13-14)\, 1763-1783.</p>\n<p>For further information about the seminar
 \, please visit this <a href="t3://page?uid=1115" title="Opens internal li
 nk in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20241206T120000
END:VEVENT
BEGIN:VEVENT
UID:news1729@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20241106T144114
DTSTART;TZID=Europe/Zurich:20241115T110000
SUMMARY:Seminar in Numerical Analysis: Marco Picasso (EPFL) 
DESCRIPTION:Anisotropic adaptive meshes\, that is to say adaptive meshes w
 ith large aspect ratio\, are extremely efficient at approximating functio
 ns with boundary or internal layers\; some industrial applications will b
 e presented. The theory will be justified for elliptic problems\, and for
  the transport equation when using the second-order Crank-Nicolson scheme
 .\\r\\n\\r\\nFor further information about the seminar\, please visit thi
 s webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Anisotropic adaptive meshes\, that is to say adaptive meshe
 s with large aspect ratio\, are extremely efficient at approximating fun
 ctions with boundary or internal layers\; some industrial applications w
 ill be presented. The theory will be justified for elliptic problems\, a
 nd for the transport equation when using the second-order Crank-Nicolson
  scheme.</p>\n\n<p>For further information about the seminar\, please vi
 sit this <a href="t3://page?uid=1115" title="Opens internal link in curr
 ent window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20241115T120000
END:VEVENT
BEGIN:VEVENT
UID:news1721@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20241101T133004
DTSTART;TZID=Europe/Zurich:20241108T110000
SUMMARY:Seminar in Numerical Analysis: Jürgen Dölz (University of Bonn) 
DESCRIPTION:We consider generalized symmetric operator eigenvalue problems 
 with random symmetric perturbations of the operators. This implies that th
 e eigenpairs of the eigenvalue problem are also random. We investigate sto
 chastic quantities of interest of eigenpairs of higher but finite multipli
 city and discuss why for multiplicity larger than one\, only the stochasti
 c quantities of interest of the eigenspaces are meaningful. To do so\, we 
 characterize the Fréchet derivatives of the eigenpairs with respect to th
 e perturbation and provide a new linear characterization for eigenpairs of
  higher multiplicity. As a side result\, we prove local analyticity of the
  eigenspaces. Based on the Fréchet derivatives of the eigenpairs we discu
 ss a meaningful Monte Carlo sampling strategy for multiple eigenvalues and
  develop an uncertainty quantification perturbation approach. We present n
 umerical examples to illustrate the theoretical results.\\r\\n\\r\\nFor fu
 rther information about the seminar\, please visit this webpage [t3://page
 ?uid=1115].
X-ALT-DESC:<p>We consider generalized symmetric operator eigenvalue problem
 s with random symmetric perturbations of the operators. This implies that 
 the eigenpairs of the eigenvalue problem are also random. We investigate s
 tochastic quantities of interest of eigenpairs of higher but finite multip
 licity and discuss why for multiplicity larger than one\, only the stochas
 tic quantities of interest of the eigenspaces are meaningful. To do so\, w
 e characterize the Fréchet derivatives of the eigenpairs with respect to 
 the perturbation and provide a new linear characterization for eigenpairs 
 of higher multiplicity. As a side result\, we prove local analyticity of t
 he eigenspaces. Based on the Fréchet derivatives of the eigenpairs we dis
 cuss a meaningful Monte Carlo sampling strategy for multiple eigenvalues a
 nd develop an uncertainty quantification perturbation approach. We present
  numerical examples to illustrate the theoretical results.</p>\n\n<p>For f
 urther information about the seminar\, please visit this <a href="t3://pag
 e?uid=1115" title="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20241108T120000
END:VEVENT
BEGIN:VEVENT
UID:news1703@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20241001T122056
DTSTART;TZID=Europe/Zurich:20241025T110000
SUMMARY:Seminar in Numerical Analysis: Andreas Veeser (University of Milan)
  
DESCRIPTION:In the context of finite element methods for boundary value pro
 blems\, the goal of an a posteriori error analysis is to derive a so-calle
 d error estimator. Such an estimator should be a quantity that is computab
 le in terms of the problem data and the finite element solution and equiva
 lent to its error. Almost all error estimators available in literature\, h
 owever\, do not meet these requirements. Indeed\, equivalence is typically
  verified only up to so-called (data) oscillations and their presence has 
 been viewed as the price of computability.\\r\\nThe talk will consist of t
 wo parts. The first part will argue that the infinite-dimensional nature o
 f the problem data is an obstruction to computability\, while the typical 
 form of oscillations obstructs equivalence. In other words: the desirable 
 but missing unspoiled equivalence is not only a technical problem but does
  not hold for the involved oscillations.\\r\\nThe second part will then pr
 esent the new approach to a posteriori error estimation proposed by [1]. I
 ts resulting estimators are equivalent to the error and consist of two par
 ts\, where the first one is of finite-dimensional nature and thus computab
 le\, while the second one is a new form of data oscillation\, which is alw
 ays smaller than the old one and whose computability hinges on the knowled
 ge of the problem data. This splitting of the error estimator is also conv
 enient in guiding adaptive algorithms\; cf. [2].\\r\\n[1] C. Kreuzer\, A. 
 Veeser\, Oscillation in a posteriori error estimation\, Numer. Math. 148 (
 2021)\, 43-78.\\r\\n[2] A. Bonito\, C. Canuto\, R. H. Nochetto\, A. Veeser\, Ada
 ptive finite element methods\, Acta Numerica 33 (2024)\, 163-485.\\r\\n\\r
 \\nFor further information about the seminar\, please visit this webpage [
 t3://page?uid=1115].
X-ALT-DESC:<p>In the context of finite element methods for boundary value p
 roblems\, the goal of an a posteriori error analysis is to derive a so-cal
 led error estimator. Such an estimator should be a quantity that is comput
 able in terms of the problem data and the finite element solution and equi
 valent to its error. Almost all error estimators available in the literature\,
  however\, do not meet these requirements. Indeed\, equivalence is typical
 ly verified only up to so-called (data) oscillations and their presence ha
 s been viewed as the price of computability.</p>\n<p>The talk will consist
  of two parts. The first part will argue that the infinite-dimensional nat
 ure of the problem data is an obstruction to computability\, while the typ
 ical form of oscillations obstructs equivalence. In other words: the desir
 able but missing unspoiled equivalence is not only a technical problem but
  does not hold for the involved oscillations.</p>\n<p>The second part will
  then present the new approach to a posteriori error estimation proposed b
 y [1]. Its resulting estimators are equivalent to the error and consist of
  two parts\, where the first one is of finite-dimensional nature and thus 
 computable\, while the second one is a new form of data oscillation\, whic
 h is always smaller than the old one and whose computability hinges on the
  knowledge of the problem data. This splitting of the error estimator is a
 lso convenient in guiding adaptive algorithms\; cf. [2].</p>\n<p>[1] C. Kr
 euzer\, A. Veeser\, Oscillation in a posteriori error estimation\, Numer. 
 Math. 148 (2021)\, 43-78<br /> [2] A. Bonito\, C. Canuto\, R. H. Nochetto\
 , A. Veeser\, Adaptive finite element methods\, Acta Numerica 33 (2024)\, 
 163-485.</p>\n\n<p>For further information about the seminar\, please visi
 t this <a href="t3://page?uid=1115" title="Opens internal link in current 
 window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20241025T120000
END:VEVENT
BEGIN:VEVENT
UID:news1684@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20240527T101453
DTSTART;TZID=Europe/Zurich:20240531T110000
SUMMARY:Seminar in Numerical Analysis: Philipp Trunschke (University of Nan
 tes)
DESCRIPTION:Many functions of interest exhibit weighted summability of thei
 r coefficients with respect to some dictionary of basis functions. This s
 ummability enables efficient sparse and low-rank approximation and ensures
  that the function can be estimated efficiently from samples. This talk 
 presents some fundamental results on the estimation of sparse and low-rank
  functions\, like the weighted Stechkin lemma and the restricted isometry
  property\, and introduces simultaneously sparse and low-rank tensor for
 mats.\\r\\n\\r\\nFor further information about the seminar\, please visit 
 this webpage [t3://page?uid=1115].
X-ALT-DESC:<p><em>Many functions of interest exhibit weighted summability
  of their coefficients with respect to some dictionary of basis function
 s. This summability enables efficient sparse and low-rank approximation 
 and ensures that the function can be estimated efficiently from samples.
  This talk presents some fundamental results on the estimation of sparse
  and low-rank functions\, like the weighted Stechkin lemma and the restr
 icted isometry property\, and introduces simultaneously sparse and low-r
 ank tensor formats.</em></p>\n\n<p>For further information about the sem
 inar\, please visit this <a href="t3://page?uid=1115" title="Opens inter
 nal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20240531T120000
END:VEVENT
BEGIN:VEVENT
UID:news1665@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20240429T095246
DTSTART;TZID=Europe/Zurich:20240503T110000
SUMMARY:Seminar in Numerical Analysis: Andreas Veeser (University of Milano
 )
DESCRIPTION:TBA\\r\\n\\r\\nFor further information about the seminar\, plea
 se visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>TBA</p>\n\n<p>For further information about the seminar\, ple
 ase visit this <a href="t3://page?uid=1115" title="Opens internal link in 
 current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20240503T120000
END:VEVENT
BEGIN:VEVENT
UID:news1656@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20240415T160416
DTSTART;TZID=Europe/Zurich:20240426T110000
SUMMARY:Seminar in Numerical Analysis: Malte Peter (University of Augsburg)
DESCRIPTION:We consider the upscaled linear elasticity problem in the conte
 xt of periodic homogenisation in the stationary setting as well as in the 
 time-dependent regime where the wavelength is much larger than the microst
 ructure. Based on measurements of the deformation of the (macroscopic) bou
 ndary of a body for a given forcing\, the aim is to deduce information on 
 the geometry of the microstructure. After a general introduction to period
 ic homogenisation in the context of linear elasticity\, we are able to pro
 ve for a parametrised microstructure that there exists at least one soluti
 on of the associated minimisation problem based on the L^2-difference of t
 he measured deformation and the computed deformation for a given parameter
  vector. To facilitate the use of gradient-based algorithms\, we derive th
 e Gâteaux derivatives using the Lagrangian method of Céa\, and we presen
 t numerical experiments showcasing the functioning of the method.\\r\\nThi
 s is joint work with T. Lochner (University of Augsburg).\\r\\n\\r\\nFor f
 urther information about the seminar\, please visit this webpage [t3://pag
 e?uid=1115].
X-ALT-DESC:<p>We consider the upscaled linear elasticity problem in the con
 text of periodic homogenisation in the stationary setting as well as in th
 e time-dependent regime where the wavelength is much larger than the micro
 structure. Based on measurements of the deformation of the (macroscopic) b
 oundary of a body for a given forcing\, the aim is to deduce information o
 n the geometry of the microstructure. After a general introduction to peri
 odic homogenisation in the context of linear elasticity\, we are able to p
 rove for a parametrised microstructure that there exists at least one solu
 tion of the associated minimisation problem based on the L^2-difference of
  the measured deformation and the computed deformation for a given paramet
 er vector. To facilitate the use of gradient-based algorithms\, we derive 
 the Gâteaux derivatives using the Lagrangian method of Céa\, and we pres
 ent numerical experiments showcasing the functioning of the method.</p>\n<
 p>This is joint work with T. Lochner (University of Augsburg).</p>\n\n<p>F
 or further information about the seminar\, please visit this <a href="t3:/
 /page?uid=1115" title="Opens internal link in current window">webpage</a>.
 </p>
DTEND;TZID=Europe/Zurich:20240426T120000
END:VEVENT
BEGIN:VEVENT
UID:news1655@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20240402T130832
DTSTART;TZID=Europe/Zurich:20240419T110000
SUMMARY:Seminar in Numerical Analysis: Barbara Verfürth (University of Bon
 n)
DESCRIPTION:In recent years\, there has been an increasing interest in ti
 me-modulated materials to obtain enhanced properties. As mathematical mode
 l\, we study the classical wave equation with time-dependent coefficient\,
  which may also include spatial multiscale features. Based on joint work w
 ith Bernhard Maier\, we present a numerical multiscale method for spatiall
 y multiscale\, (slowly) time-evolving coefficients. The method is inspired
  by the Localized Orthogonal Decomposition (LOD) and entails time-dependen
 t multiscale spaces. We provide a rigorous a priori error analysis for the
  considered setting. Numerical examples illustrate the theoretical finding
 s and investigate an adaptive approach for the computation of the time-dep
 endent basis functions. On the other hand\, we will also briefly discuss t
 he setting of spatially homogeneous\, temporal multiscale coefficients. (H
 igher-order) multiscale expansions may help to interpret effective physica
 l material properties and are numerically illustrated.\\r\\nFor further in
 formation about the seminar\, please visit this webpage [t3://page?uid=111
 5].
X-ALT-DESC:<p>In recent years\, there has been an increasing interest in t
 ime-modulated materials to obtain enhanced properties. As mathematical mo
 del\, we study the classical wave equation with time-dependent coefficient
 \, which may also include spatial multiscale features.<br /> Based on join
 t work with Bernhard Maier\, we present a numerical multiscale method for 
 spatially multiscale\, (slowly) time-evolving coefficients. The method is 
 inspired by the Localized Orthogonal Decomposition (LOD) and entails time-
 dependent multiscale spaces. We provide a rigorous a priori error analysis
  for the considered setting. Numerical examples illustrate the theoretical
  findings and investigate an adaptive approach for the computation of the 
 time-dependent basis functions.<br /> On the other hand\, we will also bri
 efly discuss the setting of spatially homogeneous\, temporal multiscale co
 efficients. (Higher-order) multiscale expansions may help to interpret eff
 ective physical material properties and are numerically illustrated.</p>\n
 <p>For further information about the seminar\, please visit this <a href="
 t3://page?uid=1115" title="Opens internal link in current window">webpage<
 /a>.</p>
DTEND;TZID=Europe/Zurich:20240419T120000
END:VEVENT
BEGIN:VEVENT
UID:news1550@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20231204T091610
DTSTART;TZID=Europe/Zurich:20231215T110000
SUMMARY:Seminar in Numerical Analysis: Caroline Geiersbach (WIAS Berlin)
DESCRIPTION:Many problems in shape optimization involve constraints in the 
 form of one or more partial differential equations. In practice\, the mate
 rial properties of the underlying shape on which a PDE is defined are not 
 known exactly\; it is natural to use a probability distribution based on e
 mpirical measurements and incorporate this information when designing an o
 ptimal shape. Additionally\, one might wish to obtain a shape that is robu
 st in its response to certain external inputs\, such as forces. It is help
 ful to view shape optimization problems subject to uncertainty through the
  lens of stochastic optimization\, where a wealth of theory and algorithms
  already exist for finite-dimensional problems. The focus will be on the a
 lgorithmic handling of these problems in the case of a high stochastic dim
 ension. Stochastic approximation\, which dynamically samples from the stoc
 hastic space over the course of iterations\, is favored in this case\, and
  we show how these methods can be applied to shape optimization. We study 
 the classical stochastic gradient method\, which was introduced in 1951 by
  Robbins and Monro and is widely used in machine learning. In particular\,
  we investigate its application to infinite-dimensional shape manifolds. F
 urther\, we present numerical examples showing the performance of the meth
 od\, also in combination with the augmented Lagrangian method for problems
  with geometric constraints. \\r\\nJoint work with: Kathrin Welker\, Este
 fania Loayza-Romero\, Tim Suchan\\r\\n\\r\\nFor further information about 
 the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Many problems in shape optimization involve constraints in th
 e form of one or more partial differential equations. In practice\, the ma
 terial properties of the underlying shape on which a PDE is defined are no
 t known exactly\; it is natural to use a probability distribution based on
  empirical measurements and incorporate this information when designing an
  optimal shape. Additionally\, one might wish to obtain a shape that is ro
 bust in its response to certain external inputs\, such as forces. It is he
 lpful to view shape optimization problems subject to uncertainty through t
 he lens of stochastic optimization\, where a wealth of theory and algorith
 ms already exist for finite-dimensional problems. The focus will be on the
  algorithmic handling of these problems in the case of a high stochastic d
 imension. Stochastic approximation\, which dynamically samples from the st
 ochastic space over the course of iterations\, is favored in this case\, a
 nd we show how these methods can be applied to shape optimization. We stud
 y the classical stochastic gradient method\, which was introduced in 1951 
 by Robbins and Monro and is widely used in machine learning. In particular
 \, we investigate its application to infinite-dimensional shape manifolds.
  Further\, we present numerical examples showing the performance of the me
 thod\, also in combination with the augmented Lagrangian method for proble
 ms with geometric constraints.&nbsp\;</p>\n<p>Joint work with: Kathrin Wel
 ker\, Estefania Loayza-Romero\, Tim Suchan</p>\n\n<p>For further informati
 on about the seminar\, please visit this <a href="t3://page?uid=1115" titl
 e="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20231215T120000
END:VEVENT
BEGIN:VEVENT
UID:news1570@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20231127T102321
DTSTART;TZID=Europe/Zurich:20231208T110000
SUMMARY:Seminar in Numerical Analysis: Martin Vohralik (Inria Paris)
DESCRIPTION:A posteriori estimates make it possible to certify the error 
 committed in a numerical simulation. In particular\, the equilibrated flu
 x reconstruct
 ion technique yields a guaranteed error upper bound\, where the flux\, obt
 ained by a local postprocessing\, is of independent interest since it is a
 lways locally conservative. In this talk\, we tailor this methodology to m
 odel nonlinear and time-dependent problems to obtain estimates that are ro
 bust\, i.e.\, of quality independent of the strength of the nonlinearities
  and the final time. These estimates include\, and build on\, common itera
 tive linearization schemes such as Zarantonello\, Picard\, Newton\, or M- 
 and L-ones. We first consider steady problems and conceive two settings: w
 e either augment the energy difference by the discretization error of the 
 current linearization step\, or we design iteration-dependent norms that f
 eature weights given by the current iterate. We then turn to unsteady prob
 lems. Here we first consider the linear heat equation and finally move to 
 the Richards one\, that is doubly nonlinear and exhibits both parabolic–
 hyperbolic and parabolic–elliptic degeneracies. Robustness with respect 
 to the final time and local efficiency in both time and space are addresse
 d here. Numerical experiments illustrate the theoretical findings all alon
 g the presentation. Details can be found in [1-4].\\r\\nA. Ern\, I. Smears
 \, M. Vohralík\, Guaranteed\, locally space-time efficient\, and polynomi
 al-degree robust a posteriori error estimates for high-order discretizatio
 ns of parabolic problems\, SIAM J. Numer. Anal. 55 (2017)\, 2811–2834.\\
 r\\nA. Harnist\, K. Mitra\, A. Rappaport\, M. Vohralík\, Robust energy a 
 posteriori estimates for nonlinear elliptic problems\, HAL Preprint 04033
 438\, 2023.\\r\\nK. Mitra\, M. Vohralík\, A posteriori error estimates fo
 r the Richards equation\, Math. Comp. (2024)\, accepted for publication.\\
 r\\nK. Mitra\, M. Vohralík\, Guaranteed\, locally efficient\, and robust 
 a posteriori estimates for nonlinear elliptic problems in iteration-depend
 ent norms. An orthogonal decomposition result based on iterative lineariza
 tion\, HAL Preprint 04156711\, 2023.\\r\\n\\r\\nFor further information a
 bout the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>A posteriori estimates enable one to certify the error
  committed in a numerical simulation. In particular\, the equilibrated
  flux reconstru
 ction technique yields a guaranteed error upper bound\, where the flux\, o
 btained by a local postprocessing\, is of independent interest since it is
  always locally conservative. In this talk\, we tailor this methodology to
  model nonlinear and time-dependent problems to obtain estimates that are 
 robust\, i.e.\, of quality independent of the strength of the nonlineariti
 es and the final time. These estimates include\, and build on\, common ite
 rative linearization schemes such as Zarantonello\, Picard\, Newton\, or M
 - and L-schemes. We first consider steady problems and conceive two settin
 gs:
  we either augment the energy difference by the discretization error of th
 e current linearization step\, or we design iteration-dependent norms that
  feature weights given by the current iterate. We then turn to unsteady pr
 oblems. Here we first consider the linear heat equation and finally move t
 o the Richards one\, that is doubly nonlinear and exhibits both parabolic
 –hyperbolic and parabolic–elliptic degeneracies. Robustness with respe
 ct to the final time and local efficiency in both time and space are addre
 ssed here. Numerical experiments illustrate the theoretical findings all a
 long the presentation. Details can be found in [1-4].</p>\n<p>A. Ern\, I. 
 Smears\, M. Vohralík\, Guaranteed\, locally space-time efficient\, and po
 lynomial-degree robust a posteriori error estimates for high-order discret
 izations of parabolic problems\, <em>SIAM J. Numer. Anal.</em><strong>55<
 /strong> (2017)\, 2811–2834.</p>\n<p>A. Harnist\, K. Mitra\, A. Rappapor
 t\, M. Vohralík\, Robust energy a posteriori estimates for nonlinear elli
 ptic problems\, HAL Preprint&nbsp\;04033438\, 2023.</p>\n<p>K. Mitra\, M. 
 Vohralík\, A posteriori error estimates for the Richards equation\, <em>M
 ath. Comp.</em> (2024)\, accepted for publication.</p>\n<p>K. Mitra\, M. V
 ohralík\, Guaranteed\, locally efficient\, and robust a posteriori estima
 tes for nonlinear elliptic problems in iteration-dependent norms. An ortho
 gonal decomposition result based on iterative linearization\, HAL Preprint
 &nbsp\;04156711\, 2023.</p>\n\n<p>For further information about the semina
 r\, please visit this <a href="t3://page?uid=1115" title="Opens internal l
 ink in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20231208T120000
END:VEVENT
BEGIN:VEVENT
UID:news1544@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230922T165627
DTSTART;TZID=Europe/Zurich:20231110T110000
SUMMARY:Seminar in Numerical Analysis: Larisa Beilina (University of Göteb
 org)
DESCRIPTION:An adaptive finite element/finite difference domain decompositi
 on method for the solution of time-dependent Maxwell's equations for the
  electric field in conductive media will be presented. This method is app
 lied for the reconstruction of dielectric permittivity and conductivity f
 unctions using
  time-dependent scattered data of electric field at the boundary of the do
 main.\\r\\nAll reconstruction algorithms are based on optimization approac
 h for finding of stationary point of the Lagrangian. Derivation of a poste
 riori error estimates for the regularized solution and Tikhonov functional
  will be presented.  Based on these estimates adaptive reconstruction alg
 orithms are developed. Computational tests will show the robustness of the
  proposed algorithms in 3D.\\r\\n\\r\\nFor further information about the s
 eminar
 \, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>An adaptive finite element/finite difference domain decomposi
 tion&nbsp\;method for the solution of time-dependent Maxwell's equations
  for the electric field in conductive media will be presented. This metho
 d is applied for the reconstruction of dielectric permittivity and conduc
 tivity&nbsp\;fun
 ctions using time-dependent scattered data of electric field at the bounda
 ry of the domain.</p>\n<p>All reconstruction algorithms are based on an op
 timization approach for finding a stationary point of the Lagrangian. De
 riva
 tion of a posteriori error estimates for the regularized solution and Tikh
 onov functional will be presented. &nbsp\;Based on these estimates adaptiv
 e reconstruction algorithms are developed. &nbsp\;Computational tests will
  show the robustness of the proposed algorithms in 3D.</p>\n\n<p>For furt
 her infor
 mation about the seminar\, please visit this <a href="t3://page?uid=1115" 
 title="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20231110T120000
END:VEVENT
BEGIN:VEVENT
UID:news1583@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20231024T091747
DTSTART;TZID=Europe/Zurich:20231027T110000
SUMMARY:Seminar in Numerical Analysis: Carsten Gräser (FAU Erlangen-Nürnb
 erg)
DESCRIPTION:We consider the regularization of a supervised learning proble
 m by partial differential equations (PDEs). For the resulting regularized
  problem we derive error bounds in terms of a PDE error term and a data 
 error term. These error contributions quantify the accuracy of the PDE mod
 el used for regularization and the data coverage.  Furthermore\, the disc
 retization of the PDE-regularized learning problem by generalized Galerki
 n methods including finite elements and neural network approaches is inve
 stigated. A nonlinear version of Céa's lemma allows one to derive error bo
 unds for both classes of discretizations and gives first insights into e
 rror analysis of variational neural network discretizations of PDEs.\\r\
 \n\\r\\nFor further information about the seminar\, please visit this webp
 age [t3://page?uid=1115].
X-ALT-DESC:<p>We consider the regularization of a supervised&nbsp\;learning
  problem by partial differential equations&nbsp\;(PDEs). For the resulting
  regularized problem we&nbsp\;derive error bounds in terms of a PDE error 
 term&nbsp\;and a data error term. These error contributions<br /> quantify
  the accuracy of the PDE model used for&nbsp\;regularization and the data 
 coverage.<br /><br /> Furthermore\, the discretization of the PDE-regular
 ized&nbsp\;learning problem by generalized Galerkin methods&nbsp\;includin
 g finite elements and neural network approaches&nbsp\;is investigated. A n
 onlinear version of Céa's lemma&nbsp\;allows one to derive error bounds
  for
  both classes of&nbsp\;discretizations and gives first insights into error
 &nbsp\;analysis of variational neural network discretizations&nbsp\;of PDE
 s.</p>\n\n<p>For further information about the seminar\, please visit this
  <a href="t3://page?uid=1115" title="Opens internal link in current window
 ">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20231027T120000
END:VEVENT
BEGIN:VEVENT
UID:news1537@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20231013T095041
DTSTART;TZID=Europe/Zurich:20231020T110000
SUMMARY:Seminar in Numerical Analysis: Marco Zank (U Wien) 
DESCRIPTION:For the discretization of time-dependent partial differential e
 quations\, the standard approaches are explicit or implicit time-stepping 
 schemes together with finite element methods in space. An alternative appr
 oach is the usage of space-time methods\, where the space-time domain is d
 iscretized and the resulting global linear system is solved at once. In th
 is talk\, some recent developments in space-time finite element methods ar
 e reviewed. For this purpose\, the heat equation and the wave equation ser
 ve as model problems. First\, for both model problems\, space-time variati
 onal formulations and their unique solvability in space-time Sobolev space
 s are discussed\, where a modified Hilbert transformation is used such tha
 t ansatz and test spaces are equal. Second\, conforming discretization sch
 emes\, using piecewise polynomial\, globally continuous functions\, are in
 troduced. Solvability and stability of these numerical schemes are discuss
 ed. Next\, we investigate efficient direct solvers for the occurring huge 
 linear systems. The developed solvers are based on the Bartels--Stewart me
 thod and on the Fast Diagonalization method\, which result in solving a se
 quence of spatial subproblems. The solver based on the Fast Diagonalizatio
 n method allows solving these spatial subproblems in parallel\, leading to
  a full parallelization in time. In the last part of the talk\, numerical 
 examples are shown and discussed.\\r\\n\\r\\nFor further information about
  the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>For the discretization of time-dependent partial differential
  equations\, the standard approaches are explicit or implicit time-steppin
 g schemes together with finite element methods in space. An alternative ap
 proach is the usage of space-time methods\, where the space-time domain is
  discretized and the resulting global linear system is solved at once. In 
 this talk\, some recent developments in space-time finite element methods 
 are reviewed. For this purpose\, the heat equation and the wave equation s
 erve as model problems. First\, for both model problems\, space-time varia
 tional formulations and their unique solvability in space-time Sobolev spa
 ces are discussed\, where a modified Hilbert transformation is used such t
 hat ansatz and test spaces are equal. Second\, conforming discretization s
 chemes\, using piecewise polynomial\, globally continuous functions\, are 
 introduced. Solvability and stability of these numerical schemes are discu
 ssed. Next\, we investigate efficient direct solvers for the occurring hug
 e linear systems. The developed solvers are based on the Bartels--Stewart 
 method and on the Fast Diagonalization method\, which result in solving a 
 sequence of spatial subproblems. The solver based on the Fast Diagonalizat
 ion method allows solving these spatial subproblems in parallel\, leading 
 to a full parallelization in time. In the last part of the talk\, numerica
 l examples are shown and discussed.</p>\n\n<p>For further information abou
 t the seminar\, please visit this <a href="t3://page?uid=1115" title="Open
 s internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20231020T120000
END:VEVENT
BEGIN:VEVENT
UID:news1562@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230926T153730
DTSTART;TZID=Europe/Zurich:20231013T110000
SUMMARY:Seminar in Numerical Analysis: Markus Weimar (Julius-Maximilians-Un
 iversität Würzburg)
DESCRIPTION:As a rule of thumb in approximation theory\, the asymptotic spe
 ed of convergence of numerical algorithms is governed by the regularity of
  the objects we like to approximate. Besides classical isotropic Sobolev s
 moothness\, in the last decades the notion of so-called dominating-mixed
  regularity of functions turned out to be an important concept in numerical
  analysis. Indeed\, it naturally arises in high-dimensional real-world app
 lications\, e.g.\, related to the electronic Schrödinger equation. Althou
 gh optimal approximation rates for embeddings within the scales of isotr
 opic or dominating-mixed Lp-Sobolev spaces are well-understood\, not that
  much is known for embeddings across those scales (break-of-scale embedd
 ings).\\r\\nIn this lecture\, we first review the Fourier analytic approac
 h towards by now well-established (Besov and Triebel-Lizorkin) scales of d
 istribution spaces that measure either isotropic or dominating-mixed reg
 ularity. In addition\, we introduce new function spaces of hybrid smoothne
 ss which are able to simultaneously capture both types of regularity at 
 the same time. As a further generalization of the aforementioned scales\, 
 they particularly include standard Sobolev spaces on domains. On the other
 hand\, our new spaces yield an appropriate framework to study break-of-sc
 ale embeddings by means of harmonic analysis. We shall present (non-)adapt
 ive wavelet-based multiscale algorithms that approximate such embeddin
 gs at optimal dimension-independent rates of convergence. Important specia
 l cases cover the approximation of functions having dominating-mixed Sobol
 ev smoothness w.r.t. Lp in the norm of the (isotropic) energy space H1.
 \\r\\nThe talk is based on a recent paper [1] which represents the first p
 art of a joint work with Glenn Byrenheid (FSU Jena)\, Markus Hansen (PU Ma
 rburg)\, and Janina Hübner (RU Bochum).\\r\\nReferences:\\r\\n[1] G. Byre
 nheid\, J. Hübner\, and M. Weimar. Rate-optimal sparse approximation of c
 ompact break-of-scale embeddings. Appl. Comput. Harmon. Anal. 65:40–66\
 , 2023 (arXiv:2203.10011).\\r\\nFor further information about the seminar\
 , please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>As a rule of thumb in approximation theory\, the asymptotic s
 peed of convergence of numerical algorithms is governed by the regularity 
 of the objects we like to approximate. Besides classical isotropic Sobolev
 smoothness\, in the last decades the notion of so-called dominating-mixed
  regularity of functions turned out to be an important concept in numeric
 al analysis. Indeed\, it naturally arises in high-dimensional real-world a
 pplications\, e.g.\, related to the electronic Schrödinger equation. Alth
 ough optimal approximation rates for embeddings&nbsp\;within&nbsp\;the sca
 les of isotropic or dominating-mixed&nbsp\;Lp-Sobolev spaces are well-unde
 rstood\, not that much is known for embeddings&nbsp\;across&nbsp\;those sc
 ales (break-of-scale embeddings).</p>\n<p>In this lecture\, we first revie
 w the Fourier analytic approach towards by now well-established (Besov and
  Triebel-Lizorkin) scales of distribution spaces that measure&nbsp\;either
 &nbsp\;isotropic or dominating-mixed regularity. In addition\, we introduc
 e new function spaces of hybrid smoothness which are able to&nbsp\;simulta
 neously&nbsp\;capture both types of regularity at the same time. As a furt
 her generalization of the aforementioned scales\, they particularly includ
 e standard Sobolev spaces on domains. On the other hand\, our new spaces y
 ield an appropriate framework to study break-of-scale embeddings by mean
 s of harmonic analysis. We shall present (non-)adaptive wavelet-based mult
 iscale algorithms that approximate such embeddings at optimal dimension-
 independent rates of convergence. Important special cases cover the approx
 imation of functions having dominating-mixed Sobolev smoothness w.r.t.&nbs
 p\;Lp&nbsp\;in the norm of the (isotropic) energy space&nbsp\;H1.</p>\n<p>
 The talk is based on a recent paper [1] which represents the first part of
  a joint work with Glenn Byrenheid (FSU Jena)\, Markus Hansen (PU Marburg)
 \, and Janina Hübner (RU Bochum).</p>\n<p>References:</p>\n<p>[1] G. Byre
 nheid\, J. Hübner\, and M. Weimar. Rate-optimal sparse approximation of c
 ompact break-of-scale embeddings. Appl. Comput. Harmon. Anal.&nbsp\;65:40
 –66\, 2023 (arXiv:2203.10011).</p>\n<p>For further information about the
  seminar\, please visit this <a href="t3://page?uid=1115" title="Opens int
 ernal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20231013T120000
END:VEVENT
BEGIN:VEVENT
UID:news1538@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230915T113109
DTSTART;TZID=Europe/Zurich:20230929T110000
SUMMARY:Seminar in Numerical Analysis: Rüdiger Kempf (U Bayreuth)
DESCRIPTION:Reproducing kernel Hilbert spaces (RKHSs) and the closely rela
 ted kernel methods are well-established and well-studied tools in classi
 cal approximation theory. More recently\, they see many uses in other prob
 lems in applied and numerical analysis.\\r\\nIn machine learning\, support
  vector machines heavily rely on RKHSs. For neural networks Barron spaces 
 are connected to certain RKHSs and offer a possibility for a theoretical a
 nalysis of these networks.\\r\\nAnother application of RKHSs is in high(er
 )-dimensional approximation. For instance\, in the field of quasi-Monte Ca
 rlo methods\, kernel techniques are used to derive an error analysis for
  hig
 h-dimensional quadrature rules. We also developed a novel kernel-based app
 roximation method for higher-dimensional meshfree function reconstruction\
 , based on Smolyak operators.\\r\\nIn this talk I will provide an introduc
 tion into the theory of RKHSs\, their kernels and associated kernel method
 s. In particular\, I will focus on a multiscale approximation scheme for r
 escaled radial basis functions. This method will then be used to derive th
 e new tensor product multilevel method for higher-dimensional meshfree
  approximation\, which I will discuss in detail.\\r\\n\\r\\nFor further inf
 ormation about the seminar\, please visit this webpage [t3://page?uid=1115
 ].
X-ALT-DESC:<p>Reproducing kernel Hilbert spaces (RKHSs)&nbsp\;and the close
 ly related&nbsp\;kernel methods&nbsp\;are well-established and well-studie
 d tools in classical approximation theory. More recently\, they see many u
 ses in other problems in applied and numerical analysis.</p>\n<p>In machin
 e learning\, support vector machines heavily rely on RKHSs. For neural net
 works Barron spaces are connected to certain RKHSs and offer a possibility
  for a theoretical analysis of these networks.</p>\n<p>Another application
 of RKHSs is in high(er)-dimensional approximation. For instance\, in the f
 ield of quasi-Monte Carlo methods\, kernel techniques are used to derive an
  error analysis for high-dimensional quadrature rules. We also developed a
  novel kernel-based approximation method for higher-dimensional meshfree f
 unction reconstruction\, based on Smolyak operators.</p>\n<p>In this talk 
 I will provide an introduction into the theory of RKHSs\, their kernels an
 d associated kernel methods. In particular\, I will focus on a multiscale 
 approximation scheme for rescaled radial basis functions. This method will
  then be used to derive the new&nbsp\;tensor product multilevel method&nbs
 p\;for higher-dimensional meshfree approximation\, which I will discuss i
 n detail.</p>\n\n<p>For further information about the seminar\, please vis
 it this <a href="t3://page?uid=1115" title="Opens internal link in current
  window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20230929T120000
END:VEVENT
BEGIN:VEVENT
UID:news1536@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230904T192551
DTSTART;TZID=Europe/Zurich:20230922T110000
SUMMARY:Seminar in Numerical Analysis: Robert Gruhlke (FU Berlin) 
DESCRIPTION:Ensemble methods have become ubiquitous for solving Bayesian in
 ference problems\, in particular the efficient sampling from posterior den
 sities. State-of-the-art subclasses of Markov-Chain-Monte-Carlo methods r
 ely on gradient information of the log-density including Langevin samplers
  such as Ensemble Kalman Sampler (EKS) and Affine Invariant Langevin Dyna
 mics (ALDI). These dynamics are described by stochastic differential equa
 tions (SDEs) with time homogeneous drift terms. \\r\\nIn this talk we pre
 sent enhancement strategies of such ensemble methods based on sample enric
 hment and homotopy formalism\, that ultimately lead to time-dependent drif
 t terms that possibly assimilate a larger class of target distributions w
 hile providing faster mixing times. \\r\\nFurthermore\, we present an alt
 ernative route to construct time-inhomogeneous drift terms based on rever
 se Diffusion processes that are popular in state-of-the-art Generative Mo
 delling such as Diffusion maps. Here\, we propose learning these log-dens
 ities by propagation of the target distribution through an Ornstein-Uhlenb
 eck process. For this\, we solve the associated Hamilton-Jacobi-Bellman eq
 uation through an adaptive explicit Euler discretization using low-rank co
 mpression such as functional Tensor Trains for the spatial discretization.
 \\r\\n\\r\\nFor further information about the seminar\, please visit this 
 webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Ensemble methods have become ubiquitous for solving Bayesian 
 inference problems\, in particular the efficient sampling from posterior d
 ensities.&nbsp\;State-of-the-art subclasses of Markov-Chain-Monte-Carlo me
 thods rely on gradient information of the log-density including Langevin s
 amplers such as&nbsp\;Ensemble Kalman Sampler (EKS) and Affine Invariant L
 angevin Dynamics (ALDI). These dynamics are described by&nbsp\;stochastic 
 differential equations (SDEs) with time homogeneous drift terms.&nbsp\;</p
 >\n<p>In this talk we present enhancement strategies of such ensemble meth
 ods based on sample enrichment and homotopy formalism\, that ultimately le
 ad to time-dependent drift terms that possibly&nbsp\;assimilate a larger c
 lass of target distributions while providing faster mixing times.&nbsp\;</
 p>\n<p>Furthermore\, we present an alternative route to construct&nbsp\;ti
 me-inhomogeneous drift terms based on reverse Diffusion processes that are
 &nbsp\;popular in state-of-the-art Generative Modelling such as Diffusion 
 maps.&nbsp\;Here\, we propose learning these log-densities by propagation 
 of the target distribution through an Ornstein-Uhlenbeck process. For this
 \, we solve the associated Hamilton-Jacobi-Bellman equation through an ada
 ptive explicit Euler discretization using low-rank compression such as fun
 ctional Tensor Trains for the spatial discretization.</p>\n\n<p>For furthe
 r information about the seminar\, please visit this <a href="t3://page?uid
 =1115" title="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20230922T120000
END:VEVENT
BEGIN:VEVENT
UID:news1464@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230509T105935
DTSTART;TZID=Europe/Zurich:20230512T110000
SUMMARY:Seminar in Numerical Analysis: Martin Eigel (WIAS Berlin)
DESCRIPTION:Weighted least squares methods have been examined thoroughly t
 o obtain quasi-optimal convergence results for a chosen (polynomial) basis
  of a linear space. A focus in the analysis lies on the construction of op
 timal sampling measures and the derivation of a sufficient sample complexi
 ty for stable reconstructions. When considering holomorphic functions such
  as solutions of common parametric PDEs\, the anisotropic sparsity they ex
 hibit can be exploited to achieve improved results adapted to the consider
 ed problem. In particular\, the sparsity of the data transfers to the solu
 tion sparsity in terms of polynomial chaos coefficients. When using nonlin
 ear model classes\, it turns out that the known results cannot be used dir
 ectly. To obtain comparable a priori rates\, we introduce a new weighted v
 ersion of Stechkin's lemma. This enables us to obtain optimal complexity
  resu
 lts for a model class of low-rank tensor trains. We also show that the sol
 ution sparsity results in sparse component tensors and sketch how this can
  be realised in practical algorithms. A nice application is the reconstruc
 tion of Galerkin solutions for parametric PDEs. With this\, a provably con
 verging a posteriori adaptive algorithm can be derived for linear model PD
 Es with non-affine coefficients.\\r\\n\\r\\nFor further information about 
 the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Weighted least squares methods have been examined thoroughly
  to obtain quasi-optimal convergence results for a chosen (polynomial) bas
 is of a linear space. A focus in the analysis lies on the construction of 
 optimal sampling measures and the derivation of a sufficient sample comple
 xity for stable reconstructions. When considering holomorphic functions su
 ch as solutions of common parametric PDEs\, the anisotropic sparsity they 
 exhibit can be exploited to achieve improved results adapted to the consid
 ered problem. In particular\, the sparsity of the data transfers to the so
 lution sparsity in terms of polynomial chaos coefficients. When using nonl
 inear model classes\, it turns out that the known results cannot be used d
 irectly. To obtain comparable a priori rates\, we introduce a new weighted
 version of Stechkin's lemma. This enables us to obtain optimal complexity
  re
 sults for a model class of low-rank tensor trains. We also show that the s
 olution sparsity results in sparse component tensors and sketch how this c
 an be realised in practical algorithms. A nice application is the reconstr
 uction of Galerkin solutions for parametric PDEs. With this\, a provably c
 onverging a posteriori adaptive algorithm can be derived for linear model 
 PDEs with non-affine coefficients.</p>\n\n<p>For further information about
  the seminar\, please visit this <a href="t3://page?uid=1115" title="Opens
  internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20230512T120000
END:VEVENT
BEGIN:VEVENT
UID:news1500@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230427T100910
DTSTART;TZID=Europe/Zurich:20230505T110000
SUMMARY:Seminar in Numerical Analysis: Elena Moral Sánchez (Max-Planck Ins
 titute for Plasma Physics)
DESCRIPTION:The cold-plasma wave equation describes the propagation of an e
 lectromagnetic wave in a magnetized plasma\, which is an inhomogeneous\, d
 ispersive and anisotropic medium. The thermal effects are assumed to be ne
 gligible\, which leads to a linear partial differential equation. Moreover\
 , we assume that the electromagnetic field of the propagating wave is in t
 he time-harmonic regime. This model has applications in magnetic confineme
 nt fusion devices\, like the Tokamak. Namely\, electromagnetic waves are u
 sed to heat up the plasma (Electron cyclotron resonance heating (ECRH)) or
  for interferometry and reflectometry diagnostics (to measure plasma densi
 ty and position\, etc.).  In the first part of this talk\, we introduce th
 e cold-plasma model\, together with a qualitative study of the plasma mode
 s which expose the complexity of the problem. In the second part\, we desc
 ribe the problem and the simplifications we carry out\, which yield the in
 definite Helmholtz equation. It is solved with B-Spline Finite Elements pr
 ovided by the Psydac library and some results are shown. Lastly\, we discu
 ss the performance and potential ways of preconditioning.\\r\\n\\r\\nFor f
 urther information about the seminar\, please visit this webpage [t3://pag
 e?uid=1115].
X-ALT-DESC:<p>The cold-plasma wave equation describes the propagation of an
  electromagnetic wave in a magnetized plasma\, which is an inhomogeneous\,
  dispersive and anisotropic medium. The thermal effects are assumed to be 
 negligible\, which leads to a linear partial differential equation. Moreov
 er\, we assume that the electromagnetic field of the propagating wave is in
  the time-harmonic regime.<br /> This model has applications in magnetic c
 onfinement fusion devices\, like the Tokamak. Namely\, electromagnetic wav
 es are used to heat up the plasma (Electron cyclotron resonance heating (E
 CRH)) or for interferometry and reflectometry diagnostics (to measure plas
 ma density and position\, etc.).<br /><br /> In the first part of this ta
 lk\, we introduce the cold-plasma model\, together with a qualitative stud
 y of the plasma modes which expose the complexity of the problem.<br /> In
  the second part\, we describe the problem and the simplifications we carr
 y out\, which yield the indefinite Helmholtz equation. It is solved with B
 -Spline Finite Elements provided by the Psydac library and some results ar
 e shown. Lastly\, we discuss the performance and potential ways of precond
 itioning.</p>\n\n<p>For further information about the seminar\, please vis
 it this <a href="t3://page?uid=1115" title="Opens internal link in current
  window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20230505T120000
END:VEVENT
BEGIN:VEVENT
UID:news1472@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230417T092542
DTSTART;TZID=Europe/Zurich:20230428T110000
SUMMARY:Seminar in Numerical Analysis: Frédéric Nataf (CNRS — Universit
 é Pierre et Marie Curie)
DESCRIPTION:We introduce a scalable adaptive element-based domain decomposi
 tion (DD) method for solving saddle point problems defined as a block two 
 by two matrix. The algorithm does not require any knowledge of the constra
 ined space. We assume that all sub matrices are sparse and that the diagon
 al blocks are spectrally equivalent to a sum of positive semi definite mat
 rices. The latter assumption enables the design of adaptive coarse spaces
  for DD methods that extend the GenEO theory to saddle point problems. Nume
 rical results on three dimensional elasticity problems for steel-rubber st
 ructures discretized by a finite element with continuous pressure are show
 n for up to one billion degrees of freedom along with comparisons to Algeb
 raic Multigrid Methods.\\r\\n\\r\\nFor further information about the semin
 ar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>We introduce a scalable adaptive element-based domain decompo
 sition (DD) method for solving saddle point problems defined as a block tw
 o by two matrix. The algorithm does not require any knowledge of the const
 rained space. We assume that all sub matrices are sparse and that the diag
 onal blocks are spectrally equivalent to a sum of positive semi definite m
 atrices. The latter assumption enables the design of adaptive coarse space
 s for DD methods that extend the GenEO theory to saddle point problems. Nu
 merical results on three dimensional elasticity problems for steel-rubber 
 structures discretized by a finite element with continuous pressure are sh
 own for up to one billion degrees of freedom along with comparisons to Alg
 ebraic Multigrid Methods.</p>\n\n<p>For further information about the semi
 nar\, please visit this <a href="t3://page?uid=1115" title="Opens internal
  link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20230428T120000
END:VEVENT
BEGIN:VEVENT
UID:news1480@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230418T090618
DTSTART;TZID=Europe/Zurich:20230421T110000
SUMMARY:Seminar in Numerical Analysis: Omar Lakkis (University of Sussex)
DESCRIPTION:Least-squares finite element recovery-based methods provide
  a simple and practical way to approximate linear elliptic PDEs in
  nondivergence form\, where the standard variational approach either
  fails or requires technically complex modifications.
 \\r\\nThis idea allows the creation of efficient solvers for fully
  nonlinear elliptic equations\, the linearization of which leaves us
  with an equation in nondivergence form. An important class of fully
  nonlinear elliptic PDEs can be written in Hamilton--Jacobi--Bellman
  (Dynamic Programming) form\, i.e.\, as the supremum of a collection
  of linear operators acting on the unknown.
 \\r\\nThe least-squares FEM approach\, a variant of the nonvariational
  finite element method\, is based on gradient or Hessian recovery and
  allows the use of FEMs of arbitrary degree. The price to pay for
  using higher-order FEMs is the loss of discrete-level monotonicity
  (maximum principle)\, which is valid for the PDE and crucial in
  proving the convergence of many degree-one FEM and finite difference
  schemes.
 \\r\\nSuitable function spaces and penalties in the least-squares cost
  functional must be carefully crafted in order to ensure stability and
  convergence of the scheme\, with a good approximation of the gradient
  (or Hessian)\, under the Cordes condition on the family of linear
  operators being optimized.
 \\r\\nFurthermore\, the nonlinear operator\, which is not necessarily
  everywhere differentiable\, must be linearized in appropriate
  function spaces using semismooth Newton or Howard's policy iteration
  method. A crucial contribution of our work is the proof of
  convergence of the semismooth Newton method at the continuum level\,
  i.e.\, on infinite-dimensional function spaces. This allows easy use
  of our non-monotone schemes\, which provide convergence rates as well
  as a posteriori error estimates.
 \\r\\n\\r\\nFor further information about the seminar\, please visit
  this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Least-squares finite element recovery-based methods
  provide a simple and practical way to approximate linear elliptic
  PDEs in nondivergence form\, where the standard variational approach
  either fails or requires technically complex modifications.</p>
 \n<p>This idea allows the creation of efficient solvers for fully
  nonlinear elliptic equations\, the linearization of which leaves us
  with an equation in nondivergence form. An important class of fully
  nonlinear elliptic PDEs can be written in Hamilton--Jacobi--Bellman
  (Dynamic Programming) form\, i.e.\, as the supremum of a collection
  of linear operators acting on the unknown.</p>
 \n<p>The least-squares FEM approach\, a variant of the nonvariational
  finite element method\, is based on gradient or Hessian recovery and
  allows the use of FEMs of arbitrary degree. The price to pay for
  using higher-order FEMs is the loss of discrete-level monotonicity
  (maximum principle)\, which is valid for the PDE and crucial in
  proving the convergence of many degree-one FEM and finite difference
  schemes.</p>
 \n<p>Suitable function spaces and penalties in the least-squares cost
  functional must be carefully crafted in order to ensure stability and
  convergence of the scheme\, with a good approximation of the gradient
  (or Hessian)\, under the Cordes condition on the family of linear
  operators being optimized.</p>
 \n<p>Furthermore\, the nonlinear operator\, which is not necessarily
  everywhere differentiable\, must be linearized in appropriate
  function spaces using semismooth Newton or Howard's policy iteration
  method. A crucial contribution of our work is the proof of
  convergence of the semismooth Newton method at the continuum level\,
  i.e.\, on infinite-dimensional function spaces. This allows easy use
  of our non-monotone schemes\, which provide convergence rates as well
  as a posteriori error estimates.</p>
 \n\n<p>For further information about the seminar\, please visit this
  <a href="t3://page?uid=1115" title="Opens internal link in current
  window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20230421T120000
END:VEVENT
BEGIN:VEVENT
UID:news1455@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230403T123724
DTSTART;TZID=Europe/Zurich:20230414T110000
SUMMARY:Seminar in Numerical Analysis: Vesa Kaarnioja (FU Berlin)
DESCRIPTION:We describe a fast method for solving elliptic PDEs with uncert
 ain coefficients using kernel-based interpolation over a rank-1 lattice po
 int set [1]. By representing the input random field of the system using a 
 model proposed by Kaarnioja\, Kuo\, and Sloan [2]\, in which a countable n
 umber of independent random variables enter the random field as periodic f
 unctions\, it is shown that the kernel interpolant can be constructed for 
 the PDE solution (or some quantity of interest thereof) as a function of t
 he stochastic variables in a highly efficient manner using the fast
  Fourier transform. The method works well even when the stochastic
  dimension of the p
 roblem is large\, and we obtain rigorous error bounds which are independen
 t of the stochastic dimension of the problem. We also outline some techniq
 ues that can be used to further improve the approximation error and comput
 ational complexity of the method [3].\\r\\n\\r\\nReferences:\\r\\n[1] V. K
 aarnioja\, Y. Kazashi\, F. Y. Kuo\, F. Nobile\, and I. H. Sloan. Fast appr
 oximation by periodic kernel-based lattice-point interpolation with applic
 ation in uncertainty quantification. Numer. Math. 150:33-77\, 2022.\\r\\n[
 2] V. Kaarnioja\, F. Y. Kuo\, and I. H. Sloan. Uncertainty quantification 
 using periodic random variables. SIAM J. Numer. Anal. 58(2):1068-1091\, 20
 20.\\r\\n[3] V. Kaarnioja\, F. Y. Kuo\, and I. H. Sloan. Lattice-based ker
 nel approximation and serendipitous weights for parametric PDEs in very hi
 gh dimensions. Preprint 2023\, arXiv:2303.17755 [math.NA].\\r\\n\\r\\nFor 
 further information about the seminar\, please visit this webpage [t3://pa
 ge?uid=1115].
X-ALT-DESC:<p>We describe a fast method for solving elliptic PDEs with unce
 rtain coefficients using kernel-based interpolation over a rank-1 lattice 
 point set [1]. By representing the input random field of the system using 
 a model proposed by Kaarnioja\, Kuo\, and Sloan [2]\, in which a countable
  number of independent random variables enter the random field as periodic
  functions\, it is shown that the kernel interpolant can be constructed fo
 r the PDE solution (or some quantity of interest thereof) as a function of
  the stochastic variables in a highly efficient manner using fast Fourier 
 transform. The method works well even when the stochastic dimension of the
  problem is large\, and we obtain rigorous error bounds which are independ
 ent of the stochastic dimension of the problem. We also outline some techn
 iques that can be used to further improve the approximation error and comp
 utational complexity of the method [3].</p>\n\n<p>References:</p>\n<p>[1] 
 V. Kaarnioja\, Y. Kazashi\, F. Y. Kuo\, F. Nobile\, and I. H. Sloan. Fast 
 approximation by periodic kernel-based lattice-point interpolation with ap
 plication in uncertainty quantification. Numer. Math. 150:33-77\, 2022.</p
 >\n<p>[2] V. Kaarnioja\, F. Y. Kuo\, and I. H. Sloan. Uncertainty quantifi
 cation using periodic random variables. SIAM J. Numer. Anal. 58(2):1068-10
 91\, 2020.</p>\n<p>[3] V. Kaarnioja\, F. Y. Kuo\, and I. H. Sloan. Lattice
 -based kernel approximation and serendipitous weights for parametric PDEs 
 in very high dimensions. Preprint 2023\, arXiv:2303.17755 [math.NA].</p>\n
 \n<p>For further information about the seminar\, please visit this <a href
 ="t3://page?uid=1115" title="Opens internal link in current window">webpag
 e</a>.</p>
DTEND;TZID=Europe/Zurich:20230414T120000
END:VEVENT
BEGIN:VEVENT
UID:news1468@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20230403T123812
DTSTART;TZID=Europe/Zurich:20230317T110000
SUMMARY:Seminar in Numerical Analysis: Marc Dambrine (Université de Pau et
  des Pays de l'Adour)
DESCRIPTION:As is often the case in optimization\, the solution of a sha
 pe problem is sensitive to the parameters of the problem. For example\, th
 e loading of a structure to be optimised is known in an imprecise way. In 
 this talk\, I will present the different approaches that have been recentl
 y proposed to incorporate these uncertainties in the definition of the obj
 ective. I will present numerical illustrations from structural optimizatio
 n.\\r\\n\\r\\nFor further information about the seminar\, please visit thi
 s webpage [t3://page?uid=1115].
X-ALT-DESC:<p>As is often the case in optimization\, the solution of a s
 hape problem is sensitive to the parameters of the problem. For example\, 
 the loading of a structure to be optimised is known in an imprecise way. I
 n this talk\, I will present the different approaches that have been recen
 tly proposed to incorporate these uncertainties in the definition of the o
 bjective. I will present numerical illustrations from structural optimizat
 ion.</p>\n\n<p>For further information about the seminar\, please visit th
 is <a href="t3://page?uid=1115" title="Opens internal link in current wind
 ow">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20230317T120000
END:VEVENT
BEGIN:VEVENT
UID:news1402@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20220920T084344
DTSTART;TZID=Europe/Zurich:20221216T110000
SUMMARY:Seminar in Numerical Analysis: Christophe Geuzaine (Université de 
 Liège)
DESCRIPTION:This talk is devoted to non-overlapping Schwarz domain decompos
 ition methods for the resolution of high frequency flow acoustics problems
  of industrial relevance. First\, we will present recent advances on non-r
 eflecting boundary techniques that provide local approximations to the Dir
 ichlet-to-Neumann operator for convected and heterogeneous time-harmonic w
 ave propagation problems [1]. Then we will show how to adapt a generic dom
 ain decomposition framework to flow acoustics\, based on these newly desig
 ned transmission conditions\, and highlight the benefit of the approach on
  the simulation of three-dimensional noise radiation of a high by-pass rat
 io turbofan engine intake [2].
 \\r\\n\\r\\n[1] Marchner\, P.\, Antoine\, X.\, Geuzain
 e\, C.\, & Bériot\, H. (2022). Construction and numerical assessment of l
 ocal absorbing boundary conditions for heterogeneous time-harmonic acousti
 c problems. SIAM Journal on Applied Mathematics\, 82(2)\, 476-501.
 \\r\\n\\r\\n[2] Li
 eu\, A.\, Marchner\, P.\, Gabard\, G.\, Beriot\, H.\, Antoine\, X.\, & Geu
 zaine\, C. (2020). A non-overlapping Schwarz domain decomposition method w
 ith high-order finite elements for flow acoustics. Computer Methods in App
 lied Mechanics and Engineering\, 369\, 113223.\\r\\nFor further informatio
 n about the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>This talk is devoted to non-overlapping Schwarz domain decomp
 osition methods for the resolution of high frequency flow acoustics proble
 ms of industrial relevance. First\, we will present recent advances on non
 -reflecting boundary techniques that provide local approximations to the D
 irichlet-to-Neumann operator for convected and heterogeneous time-harmonic
  wave propagation problems [1]. Then we will show how to adapt a generic d
 omain decomposition framework to flow acoustics\, based on these newly des
 igned transmission conditions\, and highlight the benefit of the approach 
 on the simulation of three-dimensional noise radiation of a high by-pass r
 atio turbofan engine intake [2].<br /><br /> [1] Marchner\, P.\, Antoine\
 , X.\, Geuzaine\, C.\, &amp\; Bériot\, H. (2022). Construction and numeri
 cal assessment of local absorbing boundary conditions for heterogeneous ti
 me-harmonic acoustic problems. SIAM Journal on Applied Mathematics\, 82(2)
 \, 476-501.<br /><br /> [2] Lieu\, A.\, Marchner\, P.\, Gabard\, G.\, Ber
 iot\, H.\, Antoine\, X.\, &amp\; Geuzaine\, C. (2020). A non-overlapping S
 chwarz domain decomposition method with high-order finite elements for flo
 w acoustics. Computer Methods in Applied Mechanics and Engineering\, 369\,
  113223.</p>\n<p>For further information about the seminar\, please visit 
 this <a href="t3://page?uid=1115" title="Opens internal link in current wi
 ndow">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20221216T120000
END:VEVENT
BEGIN:VEVENT
UID:news1410@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20221130T142441
DTSTART;TZID=Europe/Zurich:20221209T110000
SUMMARY:Seminar in Numerical Analysis: Patrick Ciarlet (ENSTA Paris)
DESCRIPTION:Variational formulations are a popular tool to analyse linear P
 DEs (e.g. neutron diffusion\, Maxwell equations\, Stokes equations
  ...)\, and they also provide a convenient basis to design numerical
  methods to solv
 e them. Of paramount importance is the inf-sup condition\, designed by Lad
 yzhenskaya\, Necas\, Babuska and Brezzi in the 1960s and 1970s. As is well
 -known\, it provides sharp conditions to prove well-posedness of the probl
 em\, namely existence and uniqueness of the solution\, and continuous depe
 ndence with respect to the data. Then\, to solve the approximate\, or disc
 rete\, problems\, there is the (uniform) discrete inf-sup condition\, to e
 nsure existence of the approximate solutions\, and convergence of those so
 lutions to the exact solution. Often\, the two sides of this problem (exac
 t and approximate) are handled separately\, or at least no explicit connec
 tion is made between the two.\\r\\nIn this talk\, I will focus on an appro
 ach that is completely equivalent to the inf-sup condition for problems se
 t in Hilbert spaces\, the T-coercivity approach. This approach relies on t
 he design of an explicit operator to realize the inf-sup condition. If the
  operator is carefully chosen\, it can provide useful insight for a straig
 htforward definition of the approximation of the exact problem. As a matte
 r of fact\, the derivation of the discrete inf-sup condition often becomes
  elementary\, at least when one considers conforming methods\, that is whe
 n the discrete spaces are subspaces of the exact Hilbert spaces. In this w
 ay\, both the exact and the approximate problems are considered\, analysed
  and solved at once.\\r\\nIn itself\, T-coercivity is not a new theory\, h
 owever it seems that some of its strengths have been overlooked\, and that
 \, if used properly\, it can be a simple\, yet powerful tool to analyse an
 d solve linear PDEs. In particular\, it provides guidelines as to
  which abstract tools and which numerical methods are the most
  “natural” to
  analyse and solve the problem at hand. In other words\, it allows one to 
 select the appropriate tools from the mathematical or numerical toolb
 oxes. This claim will be illustrated on classical linear PDEs\, and for so
 me generalizations of those models.\\r\\nFor further information about the
  seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Variational formulations are a popular tool to analyse linear
  PDEs (e.g. neutron diffusion\, Maxwell equations\, Stokes equations
  ...)\, and they also provide a convenient basis to design numerical
  methods to so
 lve them. Of paramount importance is the <strong>inf-sup condition</strong
 >\, designed by Ladyzhenskaya\, Necas\, Babuska and Brezzi in the 1960s an
 d 1970s. As is well-known\, it provides sharp conditions to prove well-pos
 edness of the problem\, namely existence and uniqueness of the solution\, 
 and continuous dependence with respect to the data. Then\, to solve the ap
 proximate\, or discrete\, problems\, there is the <strong>(uniform) discre
 te inf-sup condition</strong>\, to ensure existence of the approximate sol
 utions\, and convergence of those solutions to the exact solution. Often\,
  the two sides of this problem (exact and approximate) are handled separat
 ely\, or at least no explicit connection is made between the two.</p>\n<p>
 In this talk\, I will focus on an approach that is completely equivalent t
 o the inf-sup condition for problems set in Hilbert spaces\, the <strong>T
 -coercivity approach</strong>. This approach relies on the design of an <e
 m>explicit</em> operator to realize the inf-sup condition. If the operator
  is carefully chosen\, it can provide useful insight for a straightforward
  definition of the approximation of the exact problem. As a matter of fact
 \, the derivation of the discrete inf-sup condition often becomes elementa
 ry\, at least when one considers conforming methods\, that is when the dis
 crete spaces are subspaces of the exact Hilbert spaces. In this way\, both
  the exact and the approximate problems are considered\, analysed and solv
 ed at once.</p>\n<p>In itself\, T-coercivity is not a new theory\, however
  it seems that some of its strengths have been overlooked\, and that\, if 
 used properly\, it can be a simple\, yet powerful tool to analyse and solv
 e linear PDEs. In particular\, it provides guidelines as to which
  abstract tools and which numerical methods are the most “natural”
  to analyse and solve the problem at hand. In other words\, it allows
  one to select the appropriate tools from the mathematical or
  numerical toolboxes. This claim will be illustrated on classical
  linear PDEs\, and for some gen
 eralizations of those models.</p>\n<p>For further information about the se
 minar\, please visit this <a href="t3://page?uid=1115" title="Opens intern
 al link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20221209T120000
END:VEVENT
BEGIN:VEVENT
UID:news1409@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20220928T143806
DTSTART;TZID=Europe/Zurich:20221202T110000
SUMMARY:Seminar in Numerical Analysis: Sébastien Imperiale (Inria — LMS\
 , Ecole Polytechnique\, CNRS — Université Paris-Saclay\, MΞDISIM)
DESCRIPTION:The objective of this work is to propose and analyze numerical 
 schemes to solve transient wave propagation problems that are exponentiall
 y stable (i.e. the solution decays to zero exponentially fast). Applicatio
 ns are in data assimilation strategies or the discretisation of absorbing 
 boundary conditions. More precisely\, the aim of our work is to
  propose a discretization process that makes it possible to preserve
  the exponential stability at the discrete level as well as high-order
  consistency when using a high-
 order finite element approximation. The main idea is to add to the wave eq
 uation a stabilizing term which damps the high-frequency oscillating compo
 nents of the solutions such as spurious waves. This term is built from a d
 iscrete multiplier analysis that proves the exponential stability of the s
 emi-discrete problem at any order without affecting the order of convergen
 ce.\\r\\nFor further information about the seminar\, please visit this web
 page [t3://page?uid=1115].
X-ALT-DESC:<p>The objective of this work is to propose and analyze numerica
 l schemes to solve transient wave propagation problems that are exponentia
 lly stable (i.e. the solution decays to zero exponentially fast). Applicat
 ions are in data assimilation strategies or the discretisation of absorbin
 g boundary conditions. More precisely\, the aim of our work is to
  propose a discretization process that makes it possible to preserve
  the exponential stability at the discrete level as well as high-order
  consistency when using a hig
 h-order finite element approximation. The main idea is to add to the wave 
 equation a stabilizing term which damps the high-frequency oscillating com
 ponents of the solutions such as spurious waves. This term is built from a
  discrete multiplier analysis that proves the exponential stability of the
  semi-discrete problem at any order without affecting the order of converg
 ence.</p>\n<p>For further information about the seminar\, please visit thi
 s <a href="t3://page?uid=1115" title="Opens internal link in current windo
 w">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20221202T120000
END:VEVENT
BEGIN:VEVENT
UID:news1408@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20221110T132504
DTSTART;TZID=Europe/Zurich:20221118T110000
SUMMARY:Seminar in Numerical Analysis: Alexey Chernov (Universität Oldenbu
 rg)
DESCRIPTION:We investigate a class of parametric elliptic eigenvalue proble
 ms where the coefficients (and hence the solution) may depend on a paramet
 er y. Understanding the regularity of the solution as a function of y is i
 mportant for the construction of efficient numerical approximation
  schemes. Se
 veral approaches are available in the existing literature\, e.g. the comp
 lex-analytic argument by Andreev and Schwab (2012) and the real-variable a
 rgument by Gilbert et al. (2019+). The latter proof strategy is more expli
 cit\, but\, due to the nonlinear nature of the problem\, leads to slightly
  suboptimal results. In this talk we close this gap and (as a by-product) 
 extend the analysis to a more general class of coefficients.\\r\\nFor fu
 rther information about the seminar\, please visit this webpage [t3://page
 ?uid=1115].
X-ALT-DESC:<p>We investigate a class of parametric elliptic eigenvalue prob
 lems where the coefficients (and hence the solution) may depend on a param
 eter y. Understanding the regularity of the solution as a function of y is
  important for the construction of efficient numerical approximation
  schemes. Several approaches are available in the existing
  literature\, e.g.&nbsp\;t
 he complex-analytic argument by Andreev and Schwab (2012) and the real-var
 iable argument by Gilbert et al. (2019+). The latter proof strategy is mor
 e explicit\, but\, due to the nonlinear nature of the problem\, leads to s
 lightly suboptimal results. In this talk we close this gap and (as a by-pr
 oduct) extend the analysis to a more general class of coefficients.</p>\
 n<p>For further information about the seminar\, please visit this <a href=
 "t3://page?uid=1115" title="Opens internal link in current window">webpage
 </a>.</p>
DTEND;TZID=Europe/Zurich:20221118T120000
END:VEVENT
BEGIN:VEVENT
UID:news1412@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20221021T141230
DTSTART;TZID=Europe/Zurich:20221104T110000
SUMMARY:Seminar in Numerical Analysis: Naomi Schneider (Universität Siegen
 )
DESCRIPTION:Both the approximation of the gravitational potential via the d
 ownward continuation of satellite data and of wave velocities via the trav
 el time tomography using earthquake data are geoscientific ill-posed inve
 rse problems. To monitor certain aspects of the system Earth\, like the ma
 ss transport or its geomagnetic field\, it is\, however\, important to tac
 kle these challenges. Traditionally\, an approximation of such a linear(iz
 ed) inverse problem is obtained in one\, a-priori chosen basis system: eit
 her a global one\, e.g. spherical harmonics or polynomials on the ball\, o
 r a local one\, e.g. radial basis functions and wavelets on the sphere or 
 finite elements on the ball. In the Geomathematics Group Siegen\, we devel
 oped methods that enable us to combine different types of trial functions 
 for such an approximation. The idea is to make the most of the benefits of
  different types of available trial functions. The algorithms are called t
 he (Learning) Inverse Problem Matching Pursuits (LIPMPs). They construct a
 n approximation iteratively from an intentionally overcomplete set of tria
 l functions\, the dictionary\, such that the Tikhonov functional is reduce
 d. Due to the learning add-on\, the dictionary can very well be infinite. 
 Moreover\, the computational costs are usually decreased. In this talk\, w
 e give details on the LIPMPs and show some current numerical results.\\r\\
 nFor further information about the seminar\, please visit this webpage [t3
 ://page?uid=1115].
X-ALT-DESC:<p>Both the approximation of the gravitational potential via the
  downward continuation of satellite data and of wave velocities via the tr
 avel time tomography using earthquake data are geoscientific ill-posed in
 verse problems. To monitor certain aspects of the system Earth\, like the 
 mass transport or its geomagnetic field\, it is\, however\, important to t
 ackle these challenges.<br /> Traditionally\, an approximation of such a l
 inear(ized) inverse problem is obtained in one\, a-priori chosen basis sys
 tem: either a global one\, e.g. spherical harmonics or polynomials on the 
 ball\, or a local one\, e.g. radial basis functions and wavelets on the sp
 here or finite elements on the ball.<br /> In the Geomathematics Group Sie
 gen\, we developed methods that enable us to combine different types of tr
 ial functions for such an approximation. The idea is to make the most of t
 he benefits of different types of available trial functions. The algorithm
 s are called the (Learning) Inverse Problem Matching Pursuits (LIPMPs). Th
 ey construct an approximation iteratively from an intentionally overcomple
 te set of trial functions\, the dictionary\, such that the Tikhonov functi
 onal is reduced. Due to the learning add-on\, the dictionary can very well
  be infinite. Moreover\, the computational costs are usually decreased.<br
  /> In this talk\, we give details on the LIPMPs and show some current num
 erical results.</p>\n<p>For further information about the seminar\, please
  visit this <a href="t3://page?uid=1115" title="Opens internal link in cur
 rent window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20221104T120000
END:VEVENT
BEGIN:VEVENT
UID:news1352@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20220516T103313
DTSTART;TZID=Europe/Zurich:20220520T110000
SUMMARY:Seminar in Numerical Analysis: Jens Saak (Max Planck Institute for 
 Dynamics of Complex Technical Systems)
DESCRIPTION:Optimal control problems subject to constraints given by partia
 l differential equations are a powerful tool for the improvement of many t
 asks in science and technology. Classic optimization is today
  applicable to various problems\, and it can flexibly handle nonlinear
  equations and box constraints on the solutions. However\, especially
  for non-stati
 onary problems\, small perturbations along the trajectories can easily lea
 d to large deviations in the desired solutions. Consequently\, optimality 
 may be lost just as easily. On the other hand\, the linear-quadratic regul
 ator problem in system theory is an approach to make a dynamical system re
 act to perturbation via feedback controls that can be expressed by the sol
 utions of matrix Riccati equations. Its applicability is limited by the
  linearity of the dynamical system and the efficient solvability of the qu
 adratic matrix equation. In this talk\, we discuss how certain classes of 
 non-stationary PDEs can be reformulated (after spatial semi-discretization
 ) into structured linear dynamical systems that allow the Riccati feedback
  to be computed. This allows us to combine both approaches and thus steer 
 solutions of perturbed PDEs back to the optimized trajectories. The key to
  efficient solvers for the Riccati equations is the usage of the specific 
 structure in the problems and the fact that the Riccati solutions usually 
 feature a strong singular value decay\, and thus good low-rank approximabi
 lity.\\r\\nFor further information about the seminar\, please visit this w
 ebpage [t3://page?uid=1115].
X-ALT-DESC:<p>Optimal control problems subject to constraints given by part
 ial differential equations are a powerful tool for the improvement of many
  tasks in science and technology. Classic optimization is today
  applicable to various problems\, and it can flexibly handle nonlinear
  equations and box constraints on the solutions. However\, especially
  for non-sta
 tionary problems\, small perturbations along the trajectories can easily l
 ead to large deviations in the desired solutions. Consequently\, optimalit
 y may be lost just as easily.<br /> On the other hand\, the linear-quadrat
 ic regulator problem in system theory is an approach to make a dynamical s
 ystem react to perturbation via feedback controls that can be expressed by
  the solutions of matrix Riccati equations. Its applicability is limite
 d by the linearity of the dynamical system and the efficient solvability o
 f the quadratic matrix equation.<br /> In this talk\, we discuss how certa
 in classes of non-stationary PDEs can be reformulated (after spatial semi-
 discretization) into structured linear dynamical systems that allow the Ri
 ccati feedback to be computed. This allows us to combine both approaches a
 nd thus steer solutions of perturbed PDEs back to the optimized trajectori
 es. The key to efficient solvers for the Riccati equations is the usage of
  the specific structure in the problems and the fact that the Riccati solu
 tions usually feature a strong singular value decay\, and thus good low-ra
 nk approximability.</p>\n<p>For further information about the seminar\, pl
 ease visit this <a href="t3://page?uid=1115" title="Opens internal link in
  current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20220520T120000
END:VEVENT
BEGIN:VEVENT
UID:news1351@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20220419T163823
DTSTART;TZID=Europe/Zurich:20220422T110000
SUMMARY:Seminar in Numerical Analysis: Matthias Voigt (FernUni Schweiz)
DESCRIPTION:We introduce a model reduction approach for linear time-invaria
 nt second-order systems based on positive real balanced truncation.
  Our method is guaranteed to preserve the passivity of the
  reduced-order model as well a
 s the positive definiteness of the mass and stiffness matrices and admits 
 an a priori gap metric error bound. Our construction of the second-order r
 educed model is based on the consideration of an internal symmetry structu
 re and the invariant zeros of the system and their sign-characteristics fo
 r which we derive a normal form. The results are available in [1].\\r\\n[1
 ] I. Dorschky\, T. Reis\, and M. Voigt. Balanced truncation model reductio
 n for symmetric second order systems - a passivity-based approach. SIAM J.
  Matrix Anal. Appl.\, 42(4):1602--1635\, 2021.\\r\\nFor further informatio
 n about the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>We introduce a model reduction approach for linear time-invar
 iant second-order systems based on positive real balanced truncation.<br /
 > Our method is guaranteed to preserve the passivity of the
  reduced-order model a
 s well as the positive definiteness of the mass and stiffness matrices and
  admits an a priori gap metric error bound.<br /> Our construction of the 
 second-order reduced model is based on the consideration of an internal sy
 mmetry structure and the invariant zeros of the system and their sign-char
 acteristics for which we derive a normal form.<br /> The results are avail
 able in [1].</p>\n<p>[1] I. Dorschky\, T. Reis\, and M. Voigt. Balanced tr
 uncation model reduction for symmetric second order systems - a passivity-
 based approach. SIAM J. Matrix Anal. Appl.\, 42(4):1602--1635\, 2021.</p>\
 n<p>For further information about the seminar\, please visit this <a href=
 "t3://page?uid=1115" title="Opens internal link in current window">webpage
 </a>.</p>
DTEND;TZID=Europe/Zurich:20220422T120000
END:VEVENT
BEGIN:VEVENT
UID:news1333@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20220315T174614
DTSTART;TZID=Europe/Zurich:20220408T110000
SUMMARY:Seminar in Numerical Analysis: Ralf Hiptmair (ETH Zürich)
DESCRIPTION:We consider scalar-valued shape functionals on sets of shapes w
 hich are small perturbations of a reference shape. The shapes are describe
 d by parameterizations and their closeness is induced by a Hilbert space s
 tructure on the parameter domain. We justify a heuristic for finding the b
 est low-dimensional parameter subspace with respect to uniformly approxima
 ting a given shape functional. We also propose an adaptive algorithm for a
 chieving a prescribed accuracy when representing the shape functional with
  a small number of shape parameters.\\r\\nFor further information about th
 e seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>We consider scalar-valued shape functionals on sets of shapes
  which are small perturbations of a reference shape. The shapes are descri
 bed by parameterizations and their closeness is induced by a Hilbert space
  structure on the parameter domain. We justify a heuristic for finding the
  best low-dimensional parameter subspace with respect to uniformly approxi
 mating a given shape functional. We also propose an adaptive algorithm for
  achieving a prescribed accuracy when representing the shape functional wi
 th a small number of shape parameters.</p>\n<p>For further information abo
 ut the seminar\, please visit this <a href="t3://page?uid=1115" title="Ope
 ns internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20220408T120000
END:VEVENT
BEGIN:VEVENT
UID:news1334@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20220325T150305
DTSTART;TZID=Europe/Zurich:20220401T110000
SUMMARY:Seminar in Numerical Analysis: Johannes Pfefferer (Technische Unive
 rsität München)
DESCRIPTION:Many areas of science and engineering involve optimal control o
 f processes that are modeled through partial differential equations. This 
 talk will introduce the theoretical foundation and numerical methods based
 on finite elements for solving PDE-constrained optimal control problems. 
 We will discuss different discretization concepts and corresponding discre
 tization error estimates. The discussion will include the consideration of
  optimal control problems with control constraints as well as with state c
 onstraints.\\r\\nFor further information about the seminar\, please visit 
 this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Many areas of science and engineering involve optimal control
  of processes that are modeled through partial differential equations. Thi
 s talk will introduce the theoretical foundation and numerical methods bas
 ed on finite elements for solving PDE-constrained optimal control problems
 . We will discuss different discretization concepts and corresponding disc
 retization error estimates. The discussion will include the consideration 
 of optimal control problems with control constraints as well as with state
  constraints.</p>\n<p>For further information about the seminar\, please v
 isit this <a href="t3://page?uid=1115" title="Opens internal link in curre
 nt window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20220401T120000
END:VEVENT
BEGIN:VEVENT
UID:news1340@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20220321T111220
DTSTART;TZID=Europe/Zurich:20220325T110000
SUMMARY:Seminar in Numerical Analysis: Stepan Shakhno (Ivan Franko National
  University of Lviv)
DESCRIPTION:In this talk\, one- and two-step methods for solving nonlinea
 r equations with nondifferentiable operators are proposed. These methods
  are based on two approaches: the use of derivatives and the use of divi
 ded differences. The local and semi-local convergence of the proposed me
 thods is studied and the order of their convergence is established. We a
 pply our results to the numerical solution of systems of nonlinear equat
 ions.\\r\\nFor further information about the seminar\, please visit this
  webpage [t3://page?uid=1115].
X-ALT-DESC:<p>In this talk\, one- and two-step methods for solving nonlin
 ear equations with nondifferentiable operators are proposed. These metho
 ds are based on two approaches: the use of derivatives and the use of di
 vided differences.<br /> The local and semi-local convergence of the pro
 posed methods is studied and the order of their convergence is establish
 ed. We apply our results to the numerical solution of systems of nonline
 ar equations.</p>\n<p>For further information about the seminar\, please
  visit this <a href="t3://page?uid=1115" title="Opens internal link in c
 urrent window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20220325T120000
END:VEVENT
BEGIN:VEVENT
UID:news1335@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20220310T115442
DTSTART;TZID=Europe/Zurich:20220318T110000
SUMMARY:Seminar in Numerical Analysis: Markus Bachmayr (Universität Mainz)
DESCRIPTION:We consider the computational complexity of approximating ellip
 tic PDEs with random coefficients by sparse product polynomial expansions.
  Except for special cases (for instance\, when the spatial discretisation 
 limits the achievable overall convergence rate)\, previous approaches for
  a posteriori selection of polynomial terms and corresponding spatial di
 scretizations do not guarantee optimal complexity in the sense of computat
 ional costs scaling linearly in the number of degrees of freedom. We show 
 that one can achieve optimality of an adaptive Galerkin scheme for discret
 izations by spline wavelets in the spatial variable when a multiscale rep
 resentation of the affinely parameterized random coefficients is used. \\
 r\\nM. Bachmayr and I. Voulis\, An adaptive stochastic Galerkin method ba
 sed on multilevel expansions of random fields: Convergence and optimality\
 , arXiv:2109.09136 [https://arxiv.org/abs/2109.09136]\\r\\nFor further in
 formation about the seminar\, please visit this webpage [t3://page?uid=111
 5].
X-ALT-DESC:<p>We consider the computational complexity of approximating ell
 iptic PDEs with random coefficients by sparse product polynomial expansion
 s. Except for special cases (for instance\, when the spatial discretisatio
 n limits the achievable overall convergence rate)\, previous approaches fo
 r&nbsp\;<em>a posteriori</em>&nbsp\;selection of polynomial terms and corr
 esponding spatial discretizations do not guarantee optimal complexity in t
 he sense of computational costs scaling linearly in the number of degrees 
 of freedom. We show that one can achieve optimality of an adaptive Galerki
 n scheme for discretizations by spline wavelets&nbsp\;in the spatial varia
 ble when a multiscale representation of the affinely parameterized random 
 coefficients is used.&nbsp\;</p>\n<p>M. Bachmayr and I. Voulis\,&nbsp\;<em
 >An adaptive stochastic Galerkin method based on multilevel expansions of 
 random fields: Convergence and optimality</em>\,&nbsp\;<a href="https://ar
 xiv.org/abs/2109.09136">arXiv:2109.09136</a></p>\n<p>For further informati
 on about the seminar\, please visit this <a href="t3://page?uid=1115" titl
 e="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20220318T120000
END:VEVENT
BEGIN:VEVENT
UID:news1277@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20211204T191942
DTSTART;TZID=Europe/Zurich:20211217T110000
SUMMARY:Seminar in Numerical Analysis: Eliane Bécache (POEMS\, CNRS\, INRI
 A\, ENSTA Paris\, Institut Polytechnique de Paris)
DESCRIPTION:The PML method is one of the most widely used methods for th
 e numerical simulation of wave propagation problems set in unbounded do
 mains. However\, difficulties arise when the exterior domain has featur
 es that prevent the use of classical approaches. For instance\, it is w
 ell known that PMLs may be unstable for time-domain elastodynamic waves
  in some anisotropic materials. More recently\, it has also been notice
 d that standard PMLs cannot work in the presence of some dispersive mat
 erials. In some cases\, new stable PMLs have been designed.\\r\\nIn thi
 s talk\, we address the questions of well-posedness\, stability and con
 vergence of standard and new PML models in the context of electromagne
 tic waves for non-dispersive and dispersive materials.\\r\\nFor furthe
 r information about the seminar\, please visit this webpage [t3://page
 ?uid=1115].
X-ALT-DESC:<p>The PML method is one of the most widely used methods for 
 the numerical simulation of wave propagation problems set in unbounded 
 domains. However\, difficulties arise when the exterior domain has feat
 ures that prevent the use of classical approaches. For instance\, it is
  well known that PMLs may be unstable for time-domain elastodynamic wav
 es in some anisotropic materials. More recently\, it has also been noti
 ced that standard PMLs cannot work in the presence of some dispersive m
 aterials. In some cases\, new stable PMLs have been designed.</p>\n<p>I
 n this talk\, we address the questions of well-posedness\, stability an
 d convergence of standard and new PML models in the context of electrom
 agnetic waves for non-dispersive and dispersive materials.</p>\n<p>For 
 further information about the seminar\, please visit this <a href="t3:/
 /page?uid=1115" title="Opens internal link in current window">webpage</
 a>.</p>
DTEND;TZID=Europe/Zurich:20211217T120000
END:VEVENT
BEGIN:VEVENT
UID:news1276@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20211204T143630
DTSTART;TZID=Europe/Zurich:20211210T110000
SUMMARY:Seminar in Numerical Analysis: Mike Botchev (Keldysh Institute of A
 pplied Mathematics)
DESCRIPTION:An efficient Krylov subspace algorithm for computing actions 
 of the phi matrix function for large matrices is proposed. This matrix 
 function is widely used in exponential time integration\, Markov chains
 \, network analysis\, and many other applications. Our algorithm is bas
 ed on a reliable residual-based stopping criterion and a new efficient 
 restarting procedure. We analyze residual convergence and prove\, for m
 atrices with numerical range in the stable complex half-plane\, that th
 e restarted method is guaranteed to converge for any Krylov subspace di
 mension. Numerical tests demonstrate the efficiency of our approach for
  solving large-scale evolution problems resulting from space discretiza
 tion of time-dependent PDEs\, in particular\, diffusion and convection-
 diffusion problems.\\r\\nFor further information about the seminar\, pl
 ease visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>An efficient Krylov subspace algorithm for computing action
 s of the phi matrix function for large matrices is proposed. This matrix
  function is widely used in exponential time integration\, Markov chains
 \, network analysis\, and many other applications. Our algorithm is base
 d on a reliable residual-based stopping criterion and a new efficient re
 starting procedure. We analyze residual convergence and prove\, for matr
 ices with numerical range in the stable complex half-plane\, that the re
 started method is guaranteed to converge for any Krylov subspace dimensi
 on. Numerical tests demonstrate the efficiency of our approach for solvi
 ng large-scale evolution problems resulting from space discretization of
  time-dependent PDEs\, in particular\, diffusion and convection-diffusio
 n problems.</p>\n<p>For further information about the seminar\, please v
 isit this <a href="t3://page?uid=1115" title="Opens internal link in cur
 rent window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20211210T120000
END:VEVENT
BEGIN:VEVENT
UID:news1258@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20211025T123428
DTSTART;TZID=Europe/Zurich:20211203T110000
SUMMARY:Seminar in Numerical Analysis: Larisa Beilina (Chalmers tekniska h
 ögskola)
DESCRIPTION:We will discuss how to apply an adaptive finite element metho
 d (AFEM) for the numerical solution of an electromagnetic volume integr
 al equation. The solution of this equation is formulated as an optimal 
 control problem for the minimization of the Tikhonov regularization fun
 ctional. A posteriori error estimates for the error in the obtained fin
 ite element reconstruction and the error in the Tikhonov functional wil
 l be presented.\\r\\nBased on these estimates\, different adaptive fini
 te element algorithms are formulated. Numerical examples will show the 
 efficiency of the proposed adaptive algorithms in improving the quality
  of the 3D reconstruction of a target during microwave thermometry\, wh
 ich is used in cancer therapies. This is joint work with the Biomedical
  Imaging group at the Department of Electrical Engineering at CTH\, Cha
 lmers.\\r\\nFor further information about the seminar\, please visit th
 is webpage [t3://page?uid=1115].
X-ALT-DESC:<p>We will discuss how to apply an adaptive finite element met
 hod (AFEM) for the numerical solution of an electromagnetic volume integ
 ral equation. The solution of this equation is formulated as an optimal 
 control problem for the minimization of the Tikhonov regularization func
 tional. A posteriori error estimates for the error in the obtained finit
 e element reconstruction and the error in the Tikhonov functional will b
 e presented.</p>\n<p>Based on these estimates\, different adaptive finit
 e element algorithms are formulated. Numerical examples will show the ef
 ficiency of the proposed adaptive algorithms in improving the quality of
  the 3D reconstruction of a target during microwave thermometry\, which 
 is used in cancer therapies. This is joint work with the Biomedical Imag
 ing group at the Department of Electrical Engineering at CTH\, Chalmers.
 </p>\n<p>For further information about the seminar\, please visit this <
 a href="t3://page?uid=1115" title="Opens internal link in current window
 ">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20211203T120000
END:VEVENT
BEGIN:VEVENT
UID:news1260@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20211109T170252
DTSTART;TZID=Europe/Zurich:20211119T110000
SUMMARY:Seminar in Numerical Analysis: Jaap van der Vegt (Universiteit Twen
 te)
DESCRIPTION:In the numerical solution of partial differential equations\, i
 t is frequently necessary to ensure that certain variables\, e.g.\, densit
 y\, pressure\, or probability density distribution\, remain within strict 
 bounds. Strict observation of these bounds is crucial\, otherwise unphysic
 al solutions will be obtained that might result in the failure of the nume
 rical algorithm. Bounds on certain variables are generally ensured in disc
 ontinuous Galerkin (DG) discretizations using positivity preserving limite
 rs\, which locally modify the solution to ensure that the constraints are 
 satisfied\, while preserving higher order accuracy. In practice this appro
 ach is mostly limited to DG discretizations combined with explicit time in
 tegration methods. The combination of (positivity preserving) limiters in 
 DG discretizations and implicit time integration methods results\, however
 \, in serious problems. Many positivity preserving limiters are not easy t
 o apply in time-implicit DG discretizations and have a non-smooth formulat
 ion\, which hampers the use of standard Newton methods to solve the nonlin
 ear algebraic equations resulting from the time-implicit DG discretization
 . This often results in poor convergence.\\r\\nIn this presentation\, we w
 ill discuss a different approach to ensure that a higher order accurate DG
  solution satisfies the positivity constraints. Instead of using a limiter
 \, we impose the positivity constraints directly on the algebraic equation
 s resulting from a higher order accurate time-implicit DG discretization u
 sing techniques from mathematical optimization theory.  This approach ens
 ures that the positivity constraints are satisfied and does not affect the
  higher order accuracy of the time-implicit DG discretization. The resulti
 ng algebraic equations are then solved using a specially designed semi-smo
 oth Newton method that is well suited to deal with the resulting nonlinear
  complementarity problem. We will demonstrate the algorithm on several par
 abolic model problems.\\r\\nFor further information about the seminar\, pl
 ease visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>In the numerical solution of partial differential equations\,
  it is frequently necessary to ensure that certain variables\, e.g.\, dens
 ity\, pressure\, or probability density distribution\, remain within stric
 t bounds. Strict observation of these bounds is crucial\, otherwise unphys
 ical solutions will be obtained that might result in the failure of the nu
 merical algorithm. Bounds on certain variables are generally ensured in di
 scontinuous Galerkin (DG) discretizations using positivity preserving limi
 ters\, which locally modify the solution to ensure that the constraints ar
 e satisfied\, while preserving higher order accuracy. In practice this app
 roach is mostly limited to DG discretizations combined with explicit time 
 integration methods. The combination of (positivity preserving) limiters i
 n DG discretizations and implicit time integration methods results\, howev
 er\, in serious problems. Many positivity preserving limiters are not easy
  to apply in time-implicit DG discretizations and have a non-smooth formul
 ation\, which hampers the use of standard Newton methods to solve the nonl
 inear algebraic equations resulting from the time-implicit DG discretizati
 on. This often results in poor convergence.</p>\n<p>In this presentation\,
  we will discuss a different approach to ensure that a higher order accura
 te DG solution satisfies the positivity constraints. Instead of using a li
 miter\, we impose the positivity constraints directly on the algebraic equ
 ations resulting from a higher order accurate time-implicit DG discretizat
 ion using techniques from mathematical optimization theory. &nbsp\;This ap
 proach ensures that the positivity constraints are satisfied and does not 
 affect the higher order accuracy of the time-implicit DG discretization. T
 he resulting algebraic equations are then solved using a specially designe
 d semi-smooth Newton method that is well suited to deal with the resulting
  nonlinear complementarity problem. We will demonstrate the algorithm on s
 everal parabolic model problems.</p>\n<p>For further information about the
  seminar\, please visit this <a href="t3://page?uid=1115" title="Opens int
 ernal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20211119T120000
END:VEVENT
BEGIN:VEVENT
UID:news1257@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20211025T121842
DTSTART;TZID=Europe/Zurich:20211112T110000
SUMMARY:Seminar in Numerical Analysis: Rolf Krause (Università della Svizz
 era italiana)
DESCRIPTION:Non-convex minimization problems arise in many applications: 
 non-linear elasticity\, phase field models\, fracture propagation\, or 
 the training of neural networks. Traditional multilevel decompositions 
 are the basic ingredient of the most efficient class of solution method
 s for linear systems\, i.e. of multigrid methods\, which make it possib
 le to solve certain classes of linear systems with optimal complexity. 
 The transfer of these concepts to non-linear problems\, however\, is no
 t straightforward\, neither in terms of the design of the multilevel de
 composition nor in terms of convergence properties. In this talk\, we w
 ill discuss multilevel decompositions for convex\, non-convex and possi
 bly non-smooth minimization problems. We will discuss in detail how mul
 tilevel optimization methods can be constructed and analyzed\, and we w
 ill illustrate the sometimes significant gain in performance that can b
 e achieved by multilevel minimization techniques. Examples from mechani
 cs\, geophysics\, and machine learning will illustrate our discussion.\
 \r\\nFor further information about the seminar\, please visit this webp
 age [t3://page?uid=1115].
X-ALT-DESC:<p>Non-convex minimization problems arise in many applications
 : non-linear elasticity\, phase field models\, fracture propagation\, or
  the training of neural networks.<br /><br /> Traditional multilevel dec
 ompositions are the basic ingredient of the most efficient class of solu
 tion methods for linear systems\, i.e. of multigrid methods\, which make
  it possible to solve certain classes of linear systems with optimal com
 plexity. The transfer of these concepts to non-linear problems\, however
 \, is not straightforward\, neither in terms of the design of the multil
 evel decomposition nor in terms of convergence properties. In this talk
 \, we will discuss multilevel decompositions for convex\, non-convex an
 d possibly non-smooth minimization problems. We will discuss in detail 
 how multilevel optimization methods can be constructed and analyzed\, a
 nd we will illustrate the sometimes significant gain in performance tha
 t can be achieved by multilevel minimization techniques. Examples from 
 mechanics\, geophysics\, and machine learning will illustrate our discu
 ssion.</p>\n<p>For further information about the seminar\, please visit
  this <a href="t3://page?uid=1115" title="Opens internal link in curren
 t window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20211112T120000
END:VEVENT
BEGIN:VEVENT
UID:news1256@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20211025T121402
DTSTART;TZID=Europe/Zurich:20211029T110000
SUMMARY:Seminar in Numerical Analysis: Jochen Garcke (Rheinische Friedric
 h-Wilhelms-Universität Bonn)
DESCRIPTION:We present a conceptual framework that helps to bridge the kn
 owledge gap between the two individual communities of machine learning 
 and numerical simulation\, to identify potential combined approaches\, 
 and to promote the development of hybrid systems. We give examples of d
 ifferent types of combinations using exemplary approaches of simulation
 -assisted machine learning and machine-learning-assisted simulation. We
  also discuss an advanced pairing where we see particular further poten
 tial for hybrid systems.\\r\\nFor further information about the seminar
 \, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>We present a conceptual framework that helps to bridge the 
 knowledge gap between the two individual communities of machine learnin
 g and numerical simulation\, to identify potential combined approaches
 \, and to promote the development of hybrid systems.<br /><br /> We giv
 e examples of different types of combinations using exemplary approache
 s of simulation-assisted machine learning and machine-learning-assisted
  simulation. We also discuss an advanced pairing where we see particula
 r further potential for hybrid systems.</p>\n<p>For further information
  about the seminar\, please visit this <a href="t3://page?uid=1115" tit
 le="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20211029T120000
END:VEVENT
BEGIN:VEVENT
UID:news1157@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210627T202913
DTSTART;TZID=Europe/Zurich:20210604T110000
SUMMARY:Seminar in Numerical Analysis: Ivan Dokmanić (Universität Basel)
DESCRIPTION:This talk will be an overview of my group's research at the i
 nterface of deep learning and inverse problems. I will first describe th
 e current (?) st
 ate of the field and then present a medley of our results\, including 1) a
  neural network architecture for wave-based inverse problems derived from 
 Fourier integral operators\; 2) an approach to nonlinear traveltime tomogr
 aphy based on neural priors\; and 3) provably injective neural networks th
 at are universal approximators of probability measures supported on low-di
 mensional manifolds. My secret hope is to spark discussions that could evo
 lve to collaborations.\\r\\nFor further information about the seminar\, pl
 ease visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>This talk will be an overview of my group's research at the
  interface of deep learning and inverse problems. I will first describe 
 the current (?) state of the field and then present a medley of our resu
 lts\, including 1)
  a neural network architecture for wave-based inverse problems derived fro
 m Fourier integral operators\; 2) an approach to nonlinear traveltime tomo
 graphy based on neural priors\; and 3) provably injective neural networks 
 that are universal approximators of probability measures supported on low-
 dimensional manifolds. My secret hope is to spark discussions that could e
 volve to collaborations.</p>\n<p>For further information about the seminar
 \, please visit this <a href="t3://page?uid=1115" title="Opens internal li
 nk in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20210604T120000
END:VEVENT
BEGIN:VEVENT
UID:news1158@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210627T202830
DTSTART;TZID=Europe/Zurich:20210507T110000
SUMMARY:Seminar in Numerical Analysis: Erik Burman (University College Lond
 on)
DESCRIPTION:In many applications both in medical science and in the geoscie
 nces the accurate approximation of solutions to wave equations is an impor
 tant component for optimisation or inverse identification. Examples inclu
 de thermoacoustic imaging or high frequency ultrasound treatments in medic
 ine (HIFU) or fault slip analysis in seismology. These problems have in c
 ommon the need for computational solution of an inverse problem where the 
 forward problem is set in a heterogeneous domain. Indeed typically the sou
 nd speed in the bulk domain jumps over material interfaces. Sometimes ther
 e is even a need for coupling of the acoustic and elastodynamic equations 
 in the presence of liquid inclusions. In this talk we will give a snapshot
  of our ongoing work in these topics\, motivated by two such applications:
  HIFU and the propagation of seismic waves. After a brief introduction of 
 the applications we will first discuss the analysis of some approximation 
 methods for inverse initial value problems subject to the wave equation. W
 e will then consider a hybrid high order method for the approximation of w
 ave propagation in heterogeneous media\, using cut element techniques to a
 void meshing of interfaces. Finally we will discuss some open problems tha
 t remain in order to understand the approximation of the inverse initial v
 alue problem in heterogeneous media using high order methods.\\r\\nFor fur
 ther information about the seminar\, please visit this webpage [t3://page?
 uid=1115].
X-ALT-DESC:<p>In many applications both in medical science and in the geosc
 iences the accurate approximation of solutions to wave equations is an imp
 ortant&nbsp\;component for optimisation or inverse identification. Example
 s include thermoacoustic imaging or high frequency ultrasound treatments i
 n medicine (HIFU)&nbsp\;or fault slip analysis in seismology. These proble
 ms have in common the need for computational solution of an inverse proble
 m where the forward problem is set in a heterogeneous domain. Indeed typic
 ally the sound speed in the bulk domain jumps over material interfaces. So
 metimes there is even a need for coupling of the acoustic and elastodynami
 c equations in the presence of liquid inclusions. In this talk we will giv
 e a snapshot of our ongoing work in these topics\, motivated by two such a
 pplications: HIFU and the propagation of seismic waves. After a brief intr
 oduction of the applications we will first discuss the analysis of some ap
 proximation methods for inverse initial value problems subject to the wave
  equation. We will then consider a hybrid high order method for the approx
 imation of wave propagation in heterogeneous media\, using cut element tec
 hniques to avoid meshing of interfaces. Finally we will discuss some open 
 problems that remain in order to understand the approximation of the inver
 se initial value problem in heterogeneous media using high order methods.<
 /p>\n<p>For further information about the seminar\, please visit this <a h
 ref="t3://page?uid=1115" title="Opens internal link in current window">web
 page</a>.</p>
DTEND;TZID=Europe/Zurich:20210507T120000
END:VEVENT
BEGIN:VEVENT
UID:news1184@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210627T202737
DTSTART;TZID=Europe/Zurich:20210430T110000
SUMMARY:Seminar in Numerical Analysis: Pieter Barendrecht (KAUST)
DESCRIPTION:In this talk\, we're going to take a closer look at the basics 
 of both univariate and bivariate splines\, including Bézier- and B-spline
  curves\, box splines and subdivision surfaces. Next\, we'll shift our foc
 us to applications of smooth spline surfaces of arbitrary manifold topolog
 y within the realm of computer graphics. Finally\, a couple of aspects and
  applications of splines in the context of numerical methods will be discu
 ssed. Expect more illustrations than equations\, and in addition a couple 
 of (interactive) live software demos!\\r\\nFor further information about t
 he seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>In this talk\, we're going to take a closer look at the basic
 s of both univariate and bivariate splines\, including Bézier- and B-spli
 ne curves\, box splines and subdivision surfaces. Next\, we'll shift our f
 ocus to applications of smooth spline surfaces of arbitrary manifold topol
 ogy within the realm of computer graphics. Finally\, a couple of aspects a
 nd applications of splines in the context of numerical methods will be dis
 cussed. Expect more illustrations than equations\, and in addition a coupl
 e of (interactive) live software demos!</p>\n<p>For further information ab
 out the seminar\, please visit this <a href="t3://page?uid=1115" title="Op
 ens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20210430T120000
END:VEVENT
BEGIN:VEVENT
UID:news1167@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210627T202614
DTSTART;TZID=Europe/Zurich:20210423T110000
SUMMARY:Seminar in Numerical Analysis: Markus Melenk (TU Wien)
DESCRIPTION:We consider the Helmholtz equation with piecewise analytic co
 efficients at large wavenumber k > 0. The interface where the coefficie
 nts jump is assumed to be analytic. We develop a k-explicit regularity 
 theory for the solution that takes the form of a decomposition into two
  components: the first component is a piecewise analytic\, but highly o
 scillatory function and the second one has finite regularity but featur
 es wavenumber-independent bounds. This decomposition generalizes earlie
 r decompositions of [MS10\, MS11\, EM11\, MSP12]\, which considered the
  Helmholtz equation with constant coefficients\, to the case of (piecew
 ise) analytic coefficients. This regularity theory makes it possible to
  show for high order Galerkin discretizations (hp-FEM) of the Helmholtz
  equation that quasi-optimality is reached if (a) the approximation ord
 er p is selected as p = O(log k) and (b) the mesh size h is such that k
 h/p is sufficiently small. This extends the results of [MS10\, MS11\, E
 M11\, MSP12] about the onset of quasi-optimality of hp-FEM for the Helm
 holtz equation to the case of the heterogeneous Helmholtz equation.\\r\
 \nJoint work with: Maximilian Bernkopf (TU Wien)\, Théophile Chaumont-
 Frelet (Inria).\\r\\nReferences:\\r\\n[EM11] S. Esterhazy and J.M. Mele
 nk\, On stability of discretizations of the Helmholtz equation\, in: Nu
 merical Analysis of Multiscale Problems\, Graham et al.\, eds.\, Spring
 er\, 2012.\\r\\n[MS10] J.M. Melenk and S. Sauter\, Convergence analysis
  for finite element discretizations of the Helmholtz equation with Diri
 chlet-to-Neumann boundary conditions\, Math. Comp. 79:1871–1914\, 201
 0.\\r\\n[MS11] J.M. Melenk and S. Sauter\, Wavenumber explicit converge
 nce analysis for finite element discretizations of the Helmholtz equati
 on\, SIAM J. Numer. Anal. 49:1210–1243\, 2011.\\r\\n[MSP12] J.M. Mele
 nk\, S. Sauter\, and A. Parsania\, Generalized DG-methods for highly in
 definite Helmholtz problems\, J. Sci. Comput. 57:536–581\, 2013.\\r\\
 nFor further information about the seminar\, please visit this webpage 
 [t3://page?uid=1115].
X-ALT-DESC:<p>We consider the Helmholtz equation with piecewise analytic co
 efficients at large wavenumber k &gt\; 0. The interface where the coeffici
 ents jump is assumed to be analytic. We develop a k-explicit regularity th
 eory for the solution that takes the form of a decomposition into two comp
 onents: the first component is a piecewise analytic\, but highly oscillato
 ry function and the second one has finite regularity but features wavenumb
 er-independent bounds. This decomposition generalizes earlier decompositio
 ns of [MS10\, MS11\, EM11\, MSP12]\, which considered the Helmholtz equati
 on with constant coefficients\, to the case of (piecewise) analytic coeffi
 cients. This regularity theory allows us to show for high order Galerkin disc
 retizations (hp-FEM) of the Helmholtz equation that quasi-optimality is re
 ached if (a) the approximation order p is selected as p = O(log k) and (b)
  the mesh size h is such that kh/p is sufficiently small. This extends the
  results of [MS10\, MS11\, EM11\, MSP12] about the onset of quasi-optimali
 ty of hp-FEM for the Helmholtz equation to the case of the heterogeneous H
 elmholtz equation.</p>\n<p>Joint work with: Maximilian Bernkopf (TU Wien)\
 , Théophile Chaumont-Frelet (Inria).</p>\n<p><strong>References</strong><
 br /> [EM11] S. Esterhazy and J.M. Melenk\, On stabil
 ity of discretizations of the Helmholtz equation\, in: Numerical&nbsp\;Ana
 lysis of Multiscale Problems\, Graham et al.\, eds\, Springer 2012<br /> [
 MS10] J.M. Melenk and S. Sauter\, Convergence Analysis for Finite Element D
 iscretizations of the Helmholtz equation with Dirichlet-to
 -Neumann boundary conditions\, Math. Comp. 79:1871–1914\, 2010<br /> [MS
 11] J.M. Melenk and S. Sauter\, Wavenumber explicit converg
 ence analysis for finite element discretizations of the Helmholtz equation
 \, SIAM J. Numer. Anal.\, 49:1210–1243\, 2011<br /> [MSP12] J.M. Melenk\
 , S. Sauter\, A. Parsania\, Generalized DG-methods for highly indefinite H
 elmholtz problems\, J. Sci. Comp. 57:536–581\, 2013</p>\n<p>For further 
 information about the seminar\, please visit this <a href="t3://page?uid=1
 115" title="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20210423T120000
END:VEVENT
BEGIN:VEVENT
UID:news1154@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210627T202454
DTSTART;TZID=Europe/Zurich:20210416T110000
SUMMARY:Seminar in Numerical Analysis: Guy Gilboa (Technion - Israel Instit
 ute of Technology)
DESCRIPTION:Recent studies on nonlinear eigenvalue problems show surprising
  analogies to harmonic analysis (e.g. Fourier or wavelets). In this talk 
 we first show the total-variation (TV) transform\, based on the TV gradien
 t flow\, and its application in image processing. We then present new resu
 lts on analyzing gradient flows of homogeneous nonlinear operators (such a
 s the p-Laplacian). Our framework allows a thorough investigation of Dynam
 ic-Mode-Decomposition (DMD)\, a central dimensionality reduction method fo
 r time series data. We present analytic solutions of simple nonlinear case
 s\, reveal shortcomings of DMD and propose improved decomposition methods.
 \\r\\nFor further information about the seminar\, please visit this webpag
 e [t3://page?uid=1115].
X-ALT-DESC:<p>Recent studies on nonlinear eigenvalue problems show surprisi
 ng analogies to harmonic analysis (e.g. Fourier or wavelets).&nbsp\;In thi
 s talk we first show the total-variation (TV) transform\, based on the TV 
 gradient flow\, and its application in image processing. We then present n
 ew results on analyzing gradient flows of homogeneous nonlinear operators 
 (such as the p-Laplacian). Our framework allows a thorough investigation o
 f Dynamic-Mode-Decomposition (DMD)\, a central dimensionality reduction me
 thod for time series data. We present analytic solutions of simple nonline
 ar cases\, reveal shortcomings of DMD and propose improved decomposition m
 ethods.</p>\n<p>For further information about the seminar\, please visit t
 his <a href="t3://page?uid=1115" title="Opens internal link in current win
 dow">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20210416T120000
END:VEVENT
BEGIN:VEVENT
UID:news1161@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210627T202314
DTSTART;TZID=Europe/Zurich:20210409T110000
SUMMARY:Seminar in Numerical Analysis: Barbara Kaltenbacher (Universität K
 lagenfurt)
DESCRIPTION:High intensity (focused) ultrasound (HIFU) is used in numerous m
 edical and industrial applications ranging from lithotripsy and thermotherap
 y via ultrasound cleaning and welding to sonochemistry. In this talk\, we
  will highlight two computational aspects related to the relevant nonlinea
 r acoustic phenomena\, namely\\r\\n- absorbing boundary conditions for the t
 reatment of open domain problems\;\\r\\n- optimization tasks for ultrasoun
 d focusing.\\r\\nStrictly speaking\, acoustic sound propagation takes place i
 n full space or at least in a domain that is typically much larger than th
 e region of interest Ω. To restrict attention to a bounded domain Ω\, e.
 g.\, for computational purposes\, artificial reflections on the boundary ∂
 Ω have to be avoided. This can be done by imposing so-called absorbin
 g boundary conditions ABC that induce dissipation of outgoing waves. Here 
 it will turn out to be crucial to take into account nonlinearity of the PD
 E also in these ABC. This is joint work with Igor Shevchenko (Imperial Co
 llege London).\\r\\nIn the context of applications in HIFU\, focusing of nonli
 nearly propagating waves amounts to optimization problems. The design of u
 ltrasound excitation via piezoelectric transducers leads to a boundary con
 trol problem\; focusing high intensity ultrasound by a silicone lens requi
 res shape optimization. For both problem classes\, we will discuss the der
 ivation of gradient information in order to formulate optimality condition
 s and drive numerical optimization methods. This is joint work with Chris
 tian Clason (University of Duisburg-Essen)\, Vanja Nikolić (TU München)
 \, and Gunther Peichl (University of Graz).\\r\\nFinally\, we will provide an ou
 tlook on imaging with nonlinear acoustic waves\, which amounts to identif
 ying spatially varying coefficients (sound speed and/or coefficient of no
 nlinearity) in the Westervelt equation. This is recent joint work with Ma
 sahiro Yamamoto (University of Tokyo) and William Rundell (Texas A&M Uni
 versity).\\r\\nFor further information about the seminar\, please visit th
 is webpage [t3://page?uid=1115].
X-ALT-DESC:<p>High intensity (focused) ultrasound (HIFU) is used in numerous
  medical and industrial applications ranging from lithotripsy and thermother
 apy via ultrasound cleaning and welding to sonochemistry.&nbsp\;In this ta
 lk\, we will highlight two computational aspects related to the relevant n
 onlinear acoustic phenomena\, namely</p>\n<ul><li>absorbing boundary con
 ditions for the treatment of open domain problems\;</li><li>optimization
  tasks for ultrasound focusing.</li></ul>\n<p>Strictly speaking\, acousti
 c sound propagation takes place in full space or at least in a domain that
  is typically much larger than the region of interest Ω. To restrict atte
 ntion to a bounded domain Ω\, e.g.\, for computational purposes\, artifici
 al reflections on the boundary ∂Ω&nbsp\;have to be avoided. This can be
  done by imposing so-called absorbing boundary conditions ABC that induce 
 dissipation of outgoing waves. Here it will turn out to be crucial to take
  into account nonlinearity of the PDE also in these ABC.&nbsp\;This is joi
 nt work with Igor Shevchenko (Imperial College London).<br /><br /> In th
 e context of applications in HIFU\, focusing of nonlinearly propagating wa
 ves amounts to optimization problems. The design of ultrasound excitation 
 via piezoelectric transducers leads to a boundary control problem\; focusi
 ng high intensity ultrasound by a silicone lens requires shape optimizatio
 n. For both problem classes\, we will discuss the derivation of gradient i
 nformation in order to formulate optimality conditions and drive numerical
  optimization methods.&nbsp\;This is joint work with Christian Clason (Uni
 versity of Duisburg-Essen)\, Vanja Nikolić&nbsp\;(TU München)\, and Gunt
 her Peichl (University of Graz).<br /><br /> Finally we will provide an o
 utlook on imaging with nonlinear acoustic waves\, which amounts to ident
 ifying&nbsp\; spatially varying coefficients (sound speed and/or coefficie
 nt of nonlinearity) in the Westervelt equation.&nbsp\;This is recent joint
  work with Masahiro Yamamoto (University of Tokyo) and William Rundell (Te
 xas A&amp\;M University).</p>\n<p>For further information about the semina
 r\, please visit this <a href="t3://page?uid=1115" title="Opens internal l
 ink in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20210409T120000
END:VEVENT
BEGIN:VEVENT
UID:news1127@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210623T193706
DTSTART;TZID=Europe/Zurich:20201211T110000
SUMMARY:Seminar in Numerical Analysis: Heiko Gimperlein (Heriot-Watt Univer
 sity)
DESCRIPTION:Diffusion processes beyond Brownian motion have recently attrac
 ted significant interest from different communities in mathematics\, the p
 hysical and biological sciences. They are described by partial differentia
 l equations involving nonlocal operators with singular non-integrable kern
 els\, such as fractional Laplacians. This talk discusses the challenges of
  their approximation by finite elements and presents our recent results o
 n the a priori analysis of h\, p and hp-versions for the integral fraction
 al Laplacian\, as well as their preconditioning. \\r\\nFor further inform
 ation about the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Diffusion processes beyond Brownian motion have recently attr
 acted significant interest from different communities in mathematics\, the
  physical and biological sciences. They are described by partial different
 ial equations involving nonlocal operators with singular non-integrable ke
 rnels\, such as fractional Laplacians. This talk discusses the challenges 
 of their approximation by finite elements and presents our recent results
  on the a priori analysis of h\, p and hp-versions for the integral fracti
 onal Laplacian\, as well as their preconditioning.&nbsp\;</p>\n<p>For furt
 her information about the seminar\, please visit this <a href="t3://page?u
 id=1115" title="Opens internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20201211T120000
END:VEVENT
BEGIN:VEVENT
UID:news1098@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210623T193612
DTSTART;TZID=Europe/Zurich:20201204T110000
SUMMARY:Seminar in Numerical Analysis: Bastian von Harrach (Goethe-Universi
 tät Frankfurt)
DESCRIPTION:We derive a simple criterion that ensures uniqueness\, Lipschit
 z stability and global convergence of Newton's method for the finite dim
 ensional zero-finding problem of a continuously differentiable\, pointwis
 e convex and monotonic function. Our criterion merely requires evaluatin
 g the directional derivative of the forward function at finitely many eva
 luation points and for finitely many directions.\\r\\nWe then demonstrate 
 that this result can be used to prove uniqueness\, stability and global c
 onvergence for an inverse coefficient problem with finitely many measurem
 ents. We consider the problem of determining an unknown inverse Robin tra
 nsmission coefficient in an elliptic PDE. Using a relation to monotonicit
 y and localized potentials techniques\, we show that a piecewise-constant
  coefficient on an a-priori known partition with a-priori known bounds is
  uniquely determined by finitely many boundary measurements and that it c
 an be uniquely and stably reconstructed by a globally convergent Newton i
 teration. We derive a constructive method to identify these boundary meas
 urements\, calculate the stability constant and give a numerical example.
 \\r\\n For further information about the seminar\, please visit this webpa
 ge [t3://page?uid=1115].
X-ALT-DESC:<p>We derive a simple criterion that ensures uniqueness\, Lipsch
 itz&nbsp\;stability and global convergence of Newton's method for the fini
 te&nbsp\;dimensional zero-finding problem of a continuously differentiable
 \,&nbsp\;pointwise convex and monotonic function. Our criterion merely req
 uires evaluating the directional derivative of the forward function
  at&nbsp\;finitely many evaluation points and for finitely many directions
 .</p>\n<p>We then demonstrate that this result can be used to prove unique
 ness\,&nbsp\;stability and global convergence for an inverse coefficient p
 roblem with&nbsp\;finitely many measurements. We consider the problem of d
 etermining an&nbsp\;unknown inverse Robin transmission coefficient in an e
 lliptic PDE. Using&nbsp\;a relation to monotonicity and localized potentia
 ls techniques\, we show&nbsp\;that a piecewise-constant coefficient on an 
 a-priori known partition&nbsp\;with a-priori known bounds is uniquely dete
 rmined by finitely many&nbsp\;boundary measurements and that it can be uni
 quely and stably&nbsp\;reconstructed by a globally convergent Newton itera
 tion. We derive a&nbsp\;constructive method to identify these boundary mea
 surements\, calculate&nbsp\;the stability constant and give a numerical ex
 ample.</p>\n<p><br /> For further information about the seminar\, please v
 isit this <a href="t3://page?uid=1115" title="Opens internal link in curre
 nt window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20201204T120000
END:VEVENT
BEGIN:VEVENT
UID:news1103@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210623T193501
DTSTART;TZID=Europe/Zurich:20201120T110000
SUMMARY:Seminar in Numerical Analysis: Gabriel Lord (Radboud University Nij
 megen)
DESCRIPTION:We examine how time step adaptivity can be used to control pot
 ential instability arising from non-Lipschitz terms for stochastic p
 artial differential equations (SPDEs). I will give a brief introduction t
 o SPDEs and illustrate the stability issue with the standard uniform step 
 Euler method to motivate the adaptive method. I will present a strong 
 convergence result and outline the steps of the proof. To illustrate the
  method we examine the stochastic Allen-Cahn\, Swift-Hohenberg\, and Kura
 moto-Sivashinsky equations and finally will discuss a potential use of the
  adaptivity for the deterministic system. This is joint work with Stuart C
 ampbell.
X-ALT-DESC:<p>We examine how time step adaptivity can be used to control&nb
 sp\;potential&nbsp\;instability arising from&nbsp\;non-Lipschitz&nbsp\;ter
 ms&nbsp\;for&nbsp\;stochastic&nbsp\;partial differential&nbsp\;equations (
 SPDEs). I will give a brief introduction to SPDEs and illustrate the stabi
 lity issue with the standard uniform step Euler method&nbsp\;to&nbsp\;moti
 vate the adaptive method. I&nbsp\;will present a&nbsp\;strong convergence&
 nbsp\;result and outline the&nbsp\;steps of the proof. To illustrate the&n
 bsp\;method&nbsp\;we&nbsp\;examine the stochastic Allen-Cahn\, Swift-Hohen
 berg\, and Kuramoto-Sivashinsky equations and finally will discuss a po
 tential use of the adaptivity for the deterministic system. This is joint 
 work with Stuart Campbell.</p>
DTEND;TZID=Europe/Zurich:20201120T120000
END:VEVENT
BEGIN:VEVENT
UID:news1102@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210623T193410
DTSTART;TZID=Europe/Zurich:20201113T110000
SUMMARY:Seminar in Numerical Analysis: Gilles Vilmart (Université de Genè
 ve)
DESCRIPTION:We show that the Strang splitting method applied to a diffusion
 -reaction equation with inhomogeneous general oblique boundary conditions
  is of order two when the diffusion equation is solved with the Crank-Nic
 olson method\, while order reduction occurs in general if using other Ru
 nge-Kutta schemes or even the exact flow itself for the diffusion part. We
  also show that this method recovers stationary states in contrast with sp
 litting methods in general. We prove these results when the source term onl
 y depends on the space variable. Numerical experiments suggest that the s
 econd order of convergence persists with general nonlinearities.\\r\\nThi
 s is joint work with Guillaume Bertoli (Université de Genève) and Chris
 tophe Besse (Institut de Mathématiques de Toulouse).\\r\\nFor further inf
 ormation about the seminar\, please visit this webpage [t3://page?uid=111
 5].
X-ALT-DESC:<p>We show that the Strang splitting method applied to a diffusi
 on-reaction&nbsp\;equation with inhomogeneous general oblique boundary con
 ditions is of&nbsp\;order two when the diffusion equation is solved with t
 he Crank-Nicolson&nbsp\;method\, while order reduction occurs in general i
 f using other&nbsp\;Runge-Kutta schemes or even the exact flow itself for 
 the diffusion part. We also show that this method recovers stationary stat
 es in contrast with splitting methods in general. We prove these results wh
 en the source term only depends on the space&nbsp\;variable. Numerical exp
 eriments suggest that the second order of&nbsp\;convergence persists with 
 general nonlinearities.</p>\n<p>This is joint work with Guillaume Bertoli 
 (Université de Genève) and&nbsp\;Christophe Besse (Institut de Mathémat
 iques de Toulouse).</p>\n<p>For further information about the seminar\, pl
 ease visit this&nbsp\;<a href="t3://page?uid=1115" title="Opens internal l
 ink in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20201113T120000
END:VEVENT
BEGIN:VEVENT
UID:news1101@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210623T193336
DTSTART;TZID=Europe/Zurich:20201030T110000
SUMMARY:Seminar in Numerical Analysis: Alfio Borzi (Universität Würzburg)
DESCRIPTION:The Liouville equation is the fundamental building block of mod
 els that govern the evolution of density functions of multi-particle syst
 ems. These models include different Fokker-Planck and Boltzmann equations
  that arise in many application fields ranging from gas dynamics to pedes
 trians' motion where the need arises to control these systems.\\r\\nThis 
 talk provides an introduction to the formulation and solution of optimal 
 control problems governed by the Liouville equation and related models. T
 he purpose of this framework is the design of robust controls to steer th
 e motion of particles\, pedestrians\, etc.\, where these agents are repre
 sented in terms of density functions. For this purpose\, expected-value c
 ost functionals are considered that include attracting potentials and dif
 ferent costs of the controls\, whereas the control mechanism in the gover
 ning models is part of the drift or is included in a collision term.\\r\\
 nIn this talk\, theoretical and numerical results concerning ensemble opt
 imal control problems with Liouville\, Fokker-Planck and linear Boltzmann
  equations are presented.\\r\\n For further information about the seminar\
 , please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>The Liouville equation is the fundamental building block of m
 odels that&nbsp\;govern the evolution of density functions of multi-partic
 le systems. These&nbsp\;models include different Fokker-Planck and Boltzma
 nn equations that arise&nbsp\;in many application fields ranging from gas 
 dynamics to pedestrians' motion&nbsp\;where the need arises to control the
 se systems.</p>\n<p>This talk provides an introduction to the formulation 
 and solution&nbsp\;of optimal control problems governed by the Liouville e
 quation&nbsp\;and related models. The purpose of this framework&nbsp\;is t
 he design of robust controls to steer the motion of particles\, pedestrian
 s\, etc.\,&nbsp\;where these agents are represented in terms of density fu
 nctions.&nbsp\;For this purpose\, expected-value cost functionals are cons
 idered&nbsp\;that include attracting potentials and different costs of the
  controls\, whereas&nbsp\;the control mechanism in the governing models is
  part of the drift&nbsp\;or is included in a collision term.</p>\n<p>In th
 is talk\, theoretical and numerical results concerning ensemble&nbsp\;opti
 mal control problems with Liouville\, Fokker-Planck and linear&nbsp\;Boltz
 mann equations are presented.</p>\n<p><br /> For further information about
  the seminar\, please visit this <a href="t3://page?uid=1115" title="Opens
  internal link in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20201030T120000
END:VEVENT
BEGIN:VEVENT
UID:news1099@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20210623T193201
DTSTART;TZID=Europe/Zurich:20201016T110000
SUMMARY:Seminar in Numerical Analysis: Jürgen Dölz (Universität Bonn)
DESCRIPTION:We propose an efficient algorithm for the treatment of Volterra
  integral operators based on H2-matrix compression techniques. The algorit
 hm is built in an evolutionary manner\, and therefore\, is well suited for
  problems where the right-hand side depends on the solution itself a
 nd is not known for all time steps a priori. The resulting algorithm is of
  linear complexity O(N) w.r.t. the number of time steps\, and requires O(N
 ) active memory. The memory consumption can be reduced to O(log N) for th
 e kernels of convolution type using the Laplace inversion techniques intro
 duced by Lubich et al.\; the connection to the FOCQ algorithm is drawn. W
 e demonstrate the effectiveness of our algorithm on a series of numerical 
 examples.\\r\\n For further information about the seminar\, please visit t
 his webpage [t3://page?uid=1115].
X-ALT-DESC:<p>We propose an efficient algorithm for the treatment of Volter
 ra integral operators based on H2-matrix compression techniques. The algor
 ithm is built in an evolutionary manner\, and therefore\, is well suited f
 or problems where the right-hand side depends on the solution itself
  and is not known for all time steps a priori. The resulting algorithm is 
 of linear complexity O(N) w.r.t. the number of time steps\, and require
 s O(N) active memory. The memory consumption can be reduced to O(log N) fo
 r the kernels of convolution type using the Laplace inversion techniques i
 ntroduced by Lubich et al.\; the connection to the FOCQ algorithm is drawn.
  We demonstrate the effectiveness of our algorithm on a series of numerica
 l examples.</p>\n<p><br /> For further information about the seminar\, ple
 ase visit this&nbsp\;<a href="t3://page?uid=1115" title="Opens internal li
 nk in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20201016T120000
END:VEVENT
BEGIN:VEVENT
UID:news939@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20191125T124030
DTSTART;TZID=Europe/Zurich:20191220T110000
SUMMARY:Seminar in Numerical Analysis: Kristin Kirchner (ETH Zürich)
DESCRIPTION:Many models in spatial statistics are based on Gaussian Matérn
  fields. Motivated by the relation between this class of Gaussian random f
 ields and stochastic partial differential equations (PDEs)\, we consider t
 he numerical solution of fractional-order elliptic stochastic PDEs with ad
 ditive spatial white noise on a bounded Euclidean domain. We propose an app
 roximation supported by a rigorous error analysis which shows different no
 tions of convergence at explicit and sharp rates. We furthermore discuss t
 he computational complexity of the proposed method. Finally\, we present s
 everal numerical experiments\, which attest the theoretical outcomes\, as 
 well as a statistical application where we use the method for inference\, 
 i.e.\, for parameter estimation given data\, and for spatial prediction.\\
 r\\nFor further information about the seminar\, please visit this webpage
  [t3://page?uid=1115].
X-ALT-DESC:<p>Many models in spatial statistics are based on Gaussian Maté
 rn fields. Motivated by the relation between this class of Gaussian random
  fields and stochastic partial differential equations (PDEs)\, we consider
  the numerical solution of fractional-order elliptic stochastic PDEs with 
 additive spatial white noise on a bounded Euclidean domain.<br /><br />We 
 propose an approximation supported by a rigorous error analysis which show
 s different notions of convergence at explicit and sharp rates. We further
 more discuss the computational complexity of the proposed method. Finally\
 , we present several numerical experiments\, which attest the theoretical 
 outcomes\, as well as a statistical application where we use the method fo
 r inference\, i.e.\, for parameter estimation given data\, and for spatial
  prediction.</p>\n<p>For further information about the seminar\, please vi
 sit this&nbsp\;<a href="t3://page?uid=1115" title="Opens internal link in 
 current window" class="internal-link">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20191220T120000
END:VEVENT
BEGIN:VEVENT
UID:news931@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20191210T105146
DTSTART;TZID=Europe/Zurich:20191213T110000
SUMMARY:Seminar in Numerical Analysis: Théophile Chaumont-Frelet (INRIA\, 
 Nice/Sophia Antipolis)
DESCRIPTION:The Helmholtz equation models the propagation of a time-harmoni
 c wave. It has received much attention since it is widely employed in appl
 ications\, but is still challenging to numerically simulate in the high-frequ
 ency regime.  In this seminar\, we focus on acoustic waves for the sake of
  simplicity and consider finite element discretizations. The main goal of 
 the presentation is to highlight the improved performance of high order me
 thods (as compared to linear finite elements) when the frequency is large.
   We will very briefly cover the zero-frequency case\, that corresponds to
  the well-studied Poisson equation. We take advantage of this classical se
 tting to recall central concepts of the finite element theory such as quas
 i-optimality and interpolation error.  The second part of the seminar is d
 evoted to the high-frequency case. We show that without restrictive assump
 tions on the mesh size\, the finite element method is unstable\, and quasi
 -optimality is lost. We provide a detailed analysis\, as well as numerical
  examples\, which show that higher order methods are less affected by these
  phenomena\, and thus more suited to discretize high-frequency problems.  
 Before drawing our main conclusions\, we briefly discuss advanced topics\,
  such as the use of unfitted meshes in highly heterogeneous media and mesh
  refinements around re-entrant corners.\\r\\nFor further information about
  the seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>The Helmholtz equation models the propagation of a time-harmo
 nic wave.<br /> It has received much attention since it is widely employed
  in applications\,<br /> but is still challenging to numerically simulate i
 n the high-frequency regime.<br /><br /> In this seminar\, we focus on acou
 stic waves for the sake of simplicity<br /> and consider finite element di
 scretizations. The main goal of the<br /> presentation is to highlight the
  improved performance of high order<br /> methods (as compared to linear f
 inite elements) when the frequency is large.<br /><br /> We will very bri
 efly cover the zero-frequency case\, that corresponds to the well-studied 
 Poisson equation. We take advantage of this classical setting to recall ce
 ntral concepts of the finite element theory such as quasi-optimality and i
 nterpolation error.<br /><br /> The second part of the seminar is devoted
  to the high-frequency case.<br /> We show that without restrictive assump
 tions on the mesh size\,<br /> the finite element method is unstable\, and
  quasi-optimality is lost.<br /> We provide a detailed analysis\, as well 
 as numerical examples\, which<br /> show that higher order methods are les
 s affected by these phenomena\,<br /> and thus more suited to discretize hi
 gh-frequency problems.<br /><br /> Before drawing our main conclusions\, 
 we briefly discuss advanced topics\,<br /> such as the use of unfitted mes
 hes in highly heterogeneous media<br /> and mesh refinements around re-ent
 rant corners.</p>\n<p>For further information about the seminar\, please v
 isit this&nbsp\;<a href="t3://page?uid=1115" title="Opens internal link in
  current window" class="internal-link">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20191213T120000
END:VEVENT
BEGIN:VEVENT
UID:news926@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20191130T081127
DTSTART;TZID=Europe/Zurich:20191206T110000
SUMMARY:Seminar in Numerical Analysis: Ira Neitzel (Universität Bonn)
DESCRIPTION:Joint work with Dominik Hafemeyer\, Florian Mannel and Boris V
 exler\\r\\nWe consider a convex optimal control problem governed by a par
 tial differential equation in one space dimension which is controlled by
  a right-hand side living in the space of functions with bounded variatio
 n. These functions tend to favor optimal controls that are piecewise con
 stant with often finitely many jump points. We are interested in derivin
 g finite element discretization error estimates for the controls when th
 e state is discretized with usual piecewise linear finite elements and t
 he control is either variationally discrete or piecewise constant. Due t
 o the structure of the objective function\, usual techniques for estimat
 ing the control error cannot be applied. Instead\, these have to be deri
 ved from (suboptimal) error estimates for the state\, which can later be
  improved.\\r\\nFor further information about the seminar\, please visit
  this webpage [t3://page?uid=current].
X-ALT-DESC:<p>Joint work with Dominik Hafemeyer\, Florian Mannel and Bori
 s Vexler</p>\n<p>We consider a convex optimal control problem governed b
 y a partial differential equation in one space dimension which is contro
 lled by a right-hand side living in the space of functions with bounded v
 ariation. These functions tend to favor optimal controls that are piecew
 ise constant with often finitely many jump points. We are interested in d
 eriving finite element discretization error estimates for the controls w
 hen the state is discretized with usual piecewise linear finite elements a
 nd the control is either variationally discrete or piecewise constant. D
 ue to the structure of the objective function\, usual techniques for est
 imating the control error cannot be applied. Instead\, these have to be d
 erived from (suboptimal) error estimates for the state\, which can later b
 e improved.</p>\n<p>For further information about the seminar\, please vi
 sit this&nbsp\;<a href="t3://page?uid=current" title="Opens internal lin
 k in current window">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20191206T120000
END:VEVENT
BEGIN:VEVENT
UID:news930@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20191107T104152
DTSTART;TZID=Europe/Zurich:20191115T110000
SUMMARY:Seminar in Numerical Analysis: Michael Multerer (USI Lugano)
DESCRIPTION:The numerical simulation of physical phenomena is very well und
 erstood given that the input data are given exactly. However\, in practic
 e\, the collection of these data is usually subject to measurement erro
 rs. The goal of uncertainty quantification is to assess those errors and 
 their possible impact on simulation results. In this talk\, we address diff
 erent numerical aspects of uncertainty quantification in elliptic partial
  differential equations on random domains. Starting from the modelling of
  random domains via random vector fields\, wediscuss how the corresponding
  Karhunen-Loève expansion can efficiently becomputed. For the discretisat
 ion of the partial differential equation\, we apply an adaptive Galerkin 
 framework. An a posteriori error estimator is presented\, which allows fo
 r the problem-dependent iterative refinement of all discretisation paramet
 ers and the assessment of the achieved error reduction. The proposed appr
 oach is demonstrated in numerical benchmark problems.\\r\\nFor further in
 formation about the seminar\, please visit this webpage [t3://page?uid=11
 15].
X-ALT-DESC:<p>The numerical simulation of physical phenomena is very well u
 nderstood provided that&nbsp\;the input data are known exactly. However\,
  in practice\, the collection of these&nbsp\;data is usually subject to m
 eas
 urement errors. The goal of uncertainty quantification&nbsp\;is to assess 
 those errors and their possible impact on simulation results. In this ta
 lk\, we address different numerical aspects of uncertainty quantificatio
 n&nbsp\;in elliptic partial differential equations on random domains.&nb
 sp\;Starting from the modelling of random domains via random vector fiel
 ds\, we discuss how the corresponding Karhunen-Loève expansion can be c
 omputed efficiently. For the discretisation of the partial differential
  equation\,&nbsp
 \;we apply an adaptive Galerkin framework. An a posteriori error estimator
  is presented\,&nbsp\;which allows for the problem-dependent iterative ref
 inement of all discretisation parameters&nbsp\;and the assessment of the a
 chieved error reduction. The proposed approach is demonstrated&nbsp\;in nu
 merical benchmark problems.</p>\n<p>For further information about the semi
 nar\, please visit this&nbsp\;<a href="t3://page?uid=1115" title="Opens in
 ternal link in current window" class="internal-link">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20191115T120000
END:VEVENT
BEGIN:VEVENT
UID:news920@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20191104T081039
DTSTART;TZID=Europe/Zurich:20191108T110000
SUMMARY:Seminar in Numerical Analysis: Omar Lakkis (University of Sussex)
DESCRIPTION:A posteriori error estimates provide a rigorous foundation for t
 he derivation of efficient adaptive algorithms for the approximation of so
 lutions of partial differential equations (PDEs).  While the literature i
 s rich with results for the approximation of elliptic and parabolic PDEs\,
  it is much less developed for the hyperbolic equations such as the acoust
 ic or elastic wave equations.  In this talk\, I will review some of the "
 standard" a posteriori results for the scalar linear wave equation\, includ
 ing those of [1] and [2]\, and present recent improvements and further dev
 elopments to lower order Sobolev norms based on Baker’s Trick [3] for ba
 ckward Euler schemes.  Subsequent focus will be given to practically rele
 vant methods such as Verlet\, Cosine\, or Newmark methods\, a popular exam
 ple of which is the Leap-frog method [4].\\r\\nNotes: This is based on joi
 nt work with E.H. Georgoulis\, C. Makridakis and J.M. Virtanen.\\r\\nRefer
 ences:\\r\\n[1] W. Bangerth and R. Rannacher\, J. Comput. Acoust. 9(2):575
 –591\, 2001.\\r\\n[2] C. Bernardi and E. Süli\, Math. Models Methods App
 l. Sci. 15(2):199--225\, 2005.\\r\\n[3] E. H. Georgoulis\, O. Lakkis\, an
 d C. Makridakis. IMA J. Numer. Anal.\, 33(4):1245–1264\, 2013\, http://a
 rxiv.org/abs/1003.3641\\r\\n[4] E. H. Georgoulis\, O. Lakkis\, C. Makrida
 kis\, and J. M. Vir
 tanen. SIAM J. Numer. Anal.\, 54(1)\, 2016\, http://arxiv.org/abs/1411.757
 2 \\r\\nFor further information about the seminar\, please visit this web
 page [t3://page?uid=1115].
X-ALT-DESC:<p>A posteriori error estimates provide a rigorous foundation for
  the derivation of efficient adaptive algorithms for the approximation of 
 solutions of partial differential equations (PDEs). &nbsp\;While the liter
 ature is rich with results for the approximation of elliptic and parabolic
  PDEs\, it is much less developed for the hyperbolic equations such as the
  acoustic or elastic wave equations. &nbsp\;In this talk\, I will review s
 ome of the &quot\;standard&quot\; a posteriori results for the scalar linea
 r wave equation\, including those of [1] and [2]\, and present recent impr
 ovements and further developments to lower order Sobolev norms based on Ba
 ker’s Trick [3] for backward Euler schemes. &nbsp\;Subsequent focus will
  be given to practically relevant methods such as Verlet\, Cosine\, or New
 mark methods\, a popular example of which is the Leap-frog method [4].</p>
 \n<p><br />Notes: This is based on joint work with E.H. Georgoulis\, C. Ma
 kridakis and J.M. Virtanen.<br /></p>\n<p>References:</p>\n<p>[1] W. Bange
 rth and R. Rannacher\, J. Comput. Acoust. 9(2):575–591\, 2001.<br />[2] 
 C. Bernardi and E. Süli\, Math. Models Methods Appl. Sci. 15(2):199--225\
 , 2005.<br />[3] E. H. Georgoulis\, O. Lakkis\, and C. Makridakis. IMA J. 
 Numer. Anal.\, 33(4):1245–1264\, 2013\, http://arxiv.org/abs/1003.3641<b
 r />[4] E. H. Georgoulis\, O. Lakkis\, C. Makridakis\, and J. M. Virtanen.
  SIAM J. Numer. Anal.\, 54(1)\, 2016\, http://arxiv.org/abs/1411.7572 </p>
 \n<p>For further information about the seminar\, please visit this&nbsp\;<
 a href="t3://page?uid=1115" title="Opens internal link in current window" 
 class="internal-link">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20191108T120000
END:VEVENT
BEGIN:VEVENT
UID:news932@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20191023T141150
DTSTART;TZID=Europe/Zurich:20191101T110000
SUMMARY:Seminar in Numerical Analysis: Giacomo De Souza (EPFL)
DESCRIPTION:Traditional explicit Runge--Kutta schemes\, though computationa
 lly inexpensive\, are inefficient for the integration of stiff ordinary di
 fferential equations due to stability issues. Conversely\, implicit scheme
 s are stable but can be overly expensive due to the solution of possibly l
 arge nonlinear systems with Newton-like methods\, whose convergence is no
 t even guaranteed for large time steps. Explicit stabilized schemes such a
 s the Runge--Kutta--Chebyshev method (RKC) represent a viable compromise\,
  as the width of their stability domain grows quadratically with respect t
 o the number of function evaluations\, thus presenting enhanced stability 
 properties with a reasonable computational cost. These methods are particu
 larly efficient for systems arising from the space discretization of parab
 olic partial differential equations (PDEs). The efficiency of these metho
 ds deteriorates as the system becomes stiffer\, even if stiffness is indu
 ced by only a few degrees of freedom. In the framework of discretized par
 abolic
  PDEs\, the number of function evaluations has to be chosen inversely prop
 ortional to the smallest element size in order to achieve stability\, thus
  largely wasting computational resources on locally-refined meshes. We fir
 st tackle this issue by replacing the right hand side of the PDE with an a
 veraged force\, which is obtained by damping the high modes down using the
  dissipative effect of the equation itself and which is cheap to evaluate.
  Combining RKC methods with the averaged force\, we obtain multirate R
 KC schemes\, for which the number of expensive function evaluations is ind
 ependent of the small elements' size.The stability properties of our metho
 d are demonstrated on a model problem and numerical experiments confirm th
 at the stability bottleneck caused by a few fine mesh elements can be o
 vercome without sacrificing accuracy.\\r\\nFor further information about t
 he seminar\, please visit this webpage [t3://page?uid=1115].
X-ALT-DESC:<p>Traditional explicit Runge--Kutta schemes\, though computatio
 nally inexpensive\, are inefficient for the integration of stiff ordinary 
 differential equations due to stability issues. Conversely\, implicit sche
 mes are stable but can be overly expensive due to the solution of possibly
  large nonlinear systems with Newton-like methods\, whose convergence is n
 ot even guaranteed for large time steps. Explicit stabilized schemes such
  as the Runge--Kutta--Chebyshev method (RKC) represent a viable compr
 omise\, as the width of their stability domain grows quadratically with re
 spect to the number of function evaluations\, thus presenting enhanced sta
 bility properties with a reasonable computational cost. These methods are 
 particularly efficient for systems arising from the space discretization o
 f parabolic partial differential equations (PDEs).<br />The efficiency of 
 these methods deteriorates as the system becomes stiffer\, even if stiff
 ness is induced by only a few degrees of freedom. In the framework of di
 screti
 zed parabolic PDEs\, the number of function evaluations has to be chosen i
 nversely proportional to the smallest element size in order to achieve sta
 bility\, thus largely wasting computational resources on locally-refined m
 eshes. We first tackle this issue by replacing the right hand side of the 
 PDE with an averaged force\, which is obtained by damping the high modes d
 own using the dissipative effect of the equation itself and which is cheap
  to evaluate. Combining RKC methods with the averaged force\, we obtain mu
 ltirate RKC schemes\, for which the number of expensive function evalu
 ations is independent of the small elements' size.<br />The stability prop
 erties of our method are demonstrated on a model problem and numerical exp
 eriments confirm that the stability bottleneck caused by a few fine mes
 h elements can be overcome without sacrificing accuracy.</p>\n<p>For furth
 er information about the seminar\, please visit this&nbsp\;<a href="t3://p
 age?uid=1115" title="Opens internal link in current window" class="interna
 l-link">webpage</a>.</p>
DTEND;TZID=Europe/Zurich:20191101T120000
END:VEVENT
BEGIN:VEVENT
UID:news887@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190514T111137
DTSTART;TZID=Europe/Zurich:20190524T110000
SUMMARY:Seminar in Numerical Analysis: Florian Faucher (Université de Pau)
DESCRIPTION:We study the inverse problem associated with the propagation of
  time-harmonic waves. In the seismic context\, the available measurements 
 correspond to partial reflection data\, obtained from one-sided illuminat
 ion (only from the Earth surface). The inverse problem aims at recovering 
 the subsurface Earth medium parameters and we employ the Full Waveform Inv
 ersion (FWI) method\, which relies on an iterative minimization algorithm 
 of the difference between the measurement and simulation. \\r\\nWe investi
 gate the deployment of new devices developed in the acoustic setting: the 
 dual-sensors\, which are able to capture both the pressure field and the v
 ertical velocity of the waves. For solving the inverse problem\, we define
  a new cost function\, adapted to these two types of data and based upon t
 he reciprocity. We first note that the stability of the problem can be sho
 wn to be Lipschitz\, assuming piecewise linear parameters. In addition\, r
 eciprocity waveform inversion allows a separation between the observationa
 l and numerical acquisitions. In fact\, the numerical sources do not have 
 to coincide with the observational ones\, offering new possibilities to cr
 eate adapted computational acquisitions\, consequently reducing the numeri
 cal cost. We illustrate our approach with three-dimensional medium reconst
 ructions\, where we start with minimal information on the target models. W
 e also extend the methodology to elasticity. \\r\\nEventually\, if time a
 llows\, we shall explore the model representation in numerical seismic inv
 ersion\, where the adaptive eigenspace method appears as a promising appr
 oach to balancing the number of unknowns against resolution.\\r\\nReferen
 ces:\\r\\n[1] G. Alessandrini\, M. V. de Hoop\, F.
  Faucher\, R. Gaburro and E. Sincich\, Inverse problem for the Helmholtz e
 quation with Cauchy data: reconstruction with conditional well-posedness d
 riven iterative regularization\, ESAIM: M2AN (2019). \\r\\n[2]  E. Berett
 a\, M. V. De Hoop\, F. Faucher\, and O. Scherzer\, Inverse boundary value 
 problem for the Helmholtz equation: quantitative conditional Lipschitz sta
 bility estimates. SIAM Journal on Mathematical Analysis\, 48(6)\, pp.3962-
 3983 (2016).\\r\\n[3]  M. J. Grote\, M. Kray\, and U. Nahum\, Adaptive ei
 genspace method for inverse scattering problems in the frequency domain. I
 nverse Problems\, 33(2)\, 025006 (2017). \\r\\n[4]  H. Barucq\, F. Fauche
 r\, and O. Scherzer\, Eigenvector Model Descriptors for Solving an Inverse
  Problem of Helmholtz Equation. arXiv preprint arXiv:1903.08991 (2019).\\r
 \\nFor
  further information about the seminar\, please visit this webpage.
X-ALT-DESC:We study the inverse problem associated with the propagation of 
 time-harmonic waves. In the seismic context\, the available measurements c
 orrespond to partial reflection data\, obtained from one-sided illuminati
 on (only from the Earth surface). The inverse problem aims at recovering t
 he subsurface Earth medium parameters and we employ the Full Waveform Inve
 rsion (FWI) method\, which relies on an iterative minimization algorithm o
 f the difference between the measurement and simulation. \nWe investigate 
 the deployment of new devices developed in the acoustic setting: the dual-
 sensors\, which are able to capture both the pressure field and the vertic
 al velocity of the waves. For solving the inverse problem\, we define a ne
 w cost function\, adapted to these two types of data and based upon the re
 ciprocity. We first note that the stability of the problem can be shown to
  be Lipschitz\, assuming piecewise linear parameters. In addition\, recipr
 ocity waveform inversion allows a separation between the observational and
  numerical acquisitions. In fact\, the numerical sources do not have to co
 incide with the observational ones\, offering new possibilities to create 
 adapted computational acquisitions\, consequently reducing the numerical c
 ost. We illustrate our approach with three-dimensional medium reconstructi
 ons\, where we start with minimal information on the target models. We als
 o extend the methodology to elasticity. \nEventually\, if time allows\, w
 e shall explore the model representation in numerical seismic inversion\, 
 where the adaptive eigenspace method appears as a promising approach to b
 alancing the number of unknowns against resolution.<br /><br />Reference
 s:\n[1] G. Alessandrini\, M. V. de Hoop\, F.
  Faucher\, R. Gaburro and E. Sincich\, Inverse problem for the Helmholtz e
 quation with Cauchy data: reconstruction with conditional well-posedness d
 riven iterative regularization\, ESAIM: M2AN (2019). \n[2] &nbsp\;E. Beret
 ta\, M. V. De Hoop\, F. Faucher\, and O. Scherzer\, Inverse boundary value
  problem for the Helmholtz equation: quantitative conditional Lipschitz st
 ability estimates. SIAM Journal on Mathematical Analysis\, 48(6)\, pp.3962
 -3983 (2016).\n[3] &nbsp\;M. J. Grote\, M. Kray\, and U. Nahum\, Adaptive 
 eigenspace method for inverse scattering problems in the frequency domain.
  Inverse Problems\, 33(2)\, 025006 (2017). \n[4] &nbsp\;H. Barucq\, F. Fau
 cher\, and O. Scherzer\, Eigenvector Model Descriptors for Solving an Inve
 rse Problem of Helmholtz Equation. arXiv preprint arXiv:1903.08991 (2019).
 <br /><br />For further information about the seminar\, please visit this 
 <link de/forschung/mathematik/seminar-in-numerical-analysis/ - - "Opens in
 ternal link in current window">webpage</link>.  
DTEND;TZID=Europe/Zurich:20190524T120000
END:VEVENT
BEGIN:VEVENT
UID:news885@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190507T193740
DTSTART;TZID=Europe/Zurich:20190517T110000
SUMMARY:Seminar in Numerical Analysis: Thomas Wihler (Universität Bern)
DESCRIPTION:A wide variety of (fixed-point) iterative methods for the solut
 ion of nonlinear equations (in Hilbert spaces) exists. In many cases\, suc
 h schemes can be interpreted as iterative local linearization methods\, wh
 ich can be obtained by applying a suitable linear preconditioning operator
  to the original (nonlinear) equation. Based on this observation\, we will
  derive a unified abstract framework which recovers some prominent iterati
 ve schemes. Furthermore\, in the context of numerical solution methods fo
 r nonlinear partial differential equations\, we propose a combination of t
 he iterative linearization approach and the classical Galerkin discretizat
 ion method\, thereby giving rise to the so-called iterative linearization 
 Galerkin (ILG) methodology. Moreover\, still on an abstract level\, based 
 on elliptic reconstruction techniques\, we derive a posteriori error estim
 ates which separately take into account the discretization and linearizati
 on errors. Furthermore\, we propose an adaptive algorithm\, which provides
  an efficient interplay between these two effects. In addition\, some iter
 ative methods and numerical computations in the specific context of finite
  element discretizations of quasilinear stationary conservation laws will 
 be presented. For further information about the seminar\, please visit this
  webpage.
X-ALT-DESC: A wide variety of (fixed-point) iterative methods for the solut
 ion of nonlinear equations (in Hilbert spaces) exists. In many cases\, suc
 h schemes can be interpreted as iterative local linearization methods\, wh
 ich can be obtained by applying a suitable linear preconditioning operator
  to the original (nonlinear) equation. Based on this observation\, we will
  derive a unified abstract framework which recovers some prominent iterati
 ve schemes. Furthermore\, in the context of numerical solution methods fo
 r nonlinear partial differential equations\, we propose a combination of t
 he iterative linearization approach and the classical Galerkin discretizat
 ion method\, thereby giving rise to the so-called iterative linearization 
 Galerkin (ILG) methodology. Moreover\, still on an abstract level\, based 
 on elliptic reconstruction techniques\, we derive a posteriori error estim
 ates which separately take into account the discretization and linearizati
 on errors. Furthermore\, we propose an adaptive algorithm\, which provides
  an efficient interplay between these two effects. In addition\, some iter
 ative methods and numerical computations in the specific context of finite
  element discretizations of quasilinear stationary conservation laws will 
 be presented.<br /><br />For further information about the seminar\, pleas
 e visit this <link de/forschung/mathematik/seminar-in-numerical-analysis/ 
 - - "Opens internal link in current window">webpage</link>.  
DTEND;TZID=Europe/Zurich:20190517T120000
END:VEVENT
BEGIN:VEVENT
UID:news861@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190423T105445
DTSTART;TZID=Europe/Zurich:20190503T110000
SUMMARY:Seminar in Numerical Analysis: Rémi Abgrall (Universität Zürich)
DESCRIPTION:Since the work of B. Wendroff and P. Lax\, we know what should 
 be the correct form of the numerical approximation of a conservation law. W
 e also  know\, after Hou and Le Floch\, what kind of problems we are facin
 g when  the flux form is not respected.  However\, this is not the end of 
 the story. All these works use a one-dimensional way of thinking: the mai
 n player is the normal flux across  cell interfaces. In addition there are
  several excellent numerical methods that do not fit the form of the Lax--W
 endroff theorem. In this talk\, I will introduce a more general setting an
 d show that any reasonable scheme for conservation laws can be put in th
 at framework. In  addition\, I will show that an equivalent flux formulati
 on\, with a  suitable definition of what is a flux\,  can be explicitly c
 onstructed  (and computed)\, so that any reasonable scheme can be put in a
  finite  volume form.  I will end the talk by showing some applications: h
 ow to systematically construct entropy stable schemes\, or\, starting fr
 om the non-conservative form of a system (say\, the Euler equations)\, h
 ow to construct a suitable discretisation. And more. This is a joint wor
 k with
  P. Bacigaluppi (now postdoc at ETH) and S.  Tokareva (now at Los Alamos).
  For further information about the seminar\, please visit this webpage.
X-ALT-DESC: Since the work of B. Wendroff and P. Lax\, we know what should 
 be the correct form of the numerical approximation of a conservation law. W
 e also  know\, after Hou and Le Floch\, what kind of problems we are facin
 g when  the flux form is not respected. <br /><br />However\, this is not
  the end of the story. All these works use a one-dimensional way of think
 ing: the main player is the normal flux across  cell interfaces. In additi
 on there are several excellent numerical methods that do not fit the form
  of the Lax--Wendroff theorem. <br /><br />In this talk\, I will introduce
  a more general setting and show that any  reasonable scheme for conservat
 ion laws can be put in that framework. In addition\, I will show that an e
 quivalent flux formulation\, with a  suitable definition of what is a flux
 \,&nbsp\; can be explicitly constructed  (and computed)\, so that any reas
 onable scheme can be put in a finite  volume form. <br /><br />I will end
  the talk by showing some applications: how to systematically construct e
 ntropy stable schemes\, or\, starting from the non-conservative form of a
  system (say\, the Euler equations)\, how to construct a suitable discre
 tisation. And more. <br /><br />This is a joint work with P. Bacigaluppi 
 (now postdoc at ETH) and S.  Tokareva (now at Los Alamos).<br /><br />For 
 further information about the seminar\, please visit this <link de/forschu
 ng/mathematik/seminar-in-numerical-analysis/ - - "Opens internal link in c
 urrent window">webpage</link>.  <br /> 
DTEND;TZID=Europe/Zurich:20190503T120000
END:VEVENT
BEGIN:VEVENT
UID:news857@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190402T151711
DTSTART;TZID=Europe/Zurich:20190412T110000
SUMMARY:Seminar in Numerical Analysis: Chris Stolk (University of Amsterdam
 )
DESCRIPTION:In this talk I discuss a recently developed finite difference d
 iscretisation of the Helmholtz equation and some solution methods for the 
 resulting linear systems. In high-frequency Helmholtz problems\, pollution
  errors\, due to numerical dispersion\, are a main source of error. We wil
 l show that such errors can be strongly reduced compared to other schemes\
 , including high-order finite elements\, by selecting coefficients for the
  discrete system that maximise the accuracy of geometrical optics phases a
 nd amplitudes. Such low dispersion schemes are of interest by themselves\,
  but can also be used to improve the efficiency of multigrid schemes. Comp
 utation times for a solver combining a multigrid method with domain decomp
 osition compare well to those of alternative methods. For further informati
 on about the seminar\, please visit this webpage.
X-ALT-DESC: In this talk I discuss a recently developed finite difference d
 iscretisation of the Helmholtz equation and some solution methods for the 
 resulting linear systems. In high-frequency Helmholtz problems\, pollution
  errors\, due to numerical dispersion\, are a main source of error. We wil
 l show that such errors can be strongly reduced compared to other schemes\
 , including high-order finite elements\, by selecting coefficients for the
  discrete system that maximise the accuracy of geometrical optics phases a
 nd amplitudes. Such low dispersion schemes are of interest by themselves\,
  but can also be used to improve the efficiency of multigrid schemes. Comp
 utation times for a solver combining a multigrid method with domain decomp
 osition compare well to those of alternative methods.<br /><br />For furth
 er information about the seminar\, please visit this <link de/forschung/ma
 thematik/seminar-in-numerical-analysis/ - - "Opens internal link in curren
 t window">webpage</link>.  
DTEND;TZID=Europe/Zurich:20190412T120000
END:VEVENT
BEGIN:VEVENT
UID:news844@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190320T092425
DTSTART;TZID=Europe/Zurich:20190329T110000
SUMMARY:Seminar in Numerical Analysis: Robert Scheichl (Universität Heidel
 berg)
DESCRIPTION:Sample-based multilevel uncertainty quantification tools\, such
  as  multilevel Monte Carlo\, multilevel quasi-Monte Carlo or multilevel  
 stochastic collocation\, have recently gained huge popularity due to  thei
 r potential to efficiently compute robust estimates of quantities of  inte
 rest (QoI) derived from PDE models that are subject to  uncertainties in t
 he input data (coefficients\, boundary conditions\,  geometry\, etc). Espe
 cially for problems with low regularity\, they are  asymptotically optimal
  in that they can provide statistics about such  QoIs at (asymptotically) 
 the same cost as it takes to compute one sample  to the target accuracy. H
 owever\, when the data uncertainty is localised  at random locations\, suc
 h as for manufacturing defects in composite  materials\, the cost per samp
 le can be reduced significantly by adapting  the spatial discretisation in
 dividually for each sample. Moreover\, the  adaptive process typically pro
 duces coarser approximations that can be  used directly for the multilevel
  uncertainty quantification. In this  talk\, I will present two novel deve
 lopments that aim to exploit these  ideas. In the first part I will presen
 t Continuous Level Monte Carlo  (CLMC)\, a generalisation of multilevel Mo
 nte Carlo (MLMC) to a  continuous framework where the level parameter is a
  continuous variable. This provides a natural framework to use a sample-w
 ise adaptive refinement strategy\, with a goal-oriented error estimator a
 s our new level parameter. We introduce a practical CLMC estimator (and alg
 orithm)  and prove a complexity theorem showing the same rate of complexit
 y as  for MLMC. Also\, we show that it is possible to make the CLMC estima
 tor  unbiased with respect to the true quantity of interest. Finally\, we 
  provide two numerical experiments which test the CLMC framework  alongsid
 e a sample-wise adaptive refinement strategy\, showing clear  gains over a
  standard MLMC approach with uniform grid hierarchies. In  the second part
 \, I will show how to extend the sample-adaptive strategy  to multilevel s
 tochastic collocation (MLSC) methods\, providing a complexity estimate and
  numerical experiments for an MLSC method that is fully adaptive in the dim
 ension\, in the polynomial degrees and in the  spatial discretisation. Thi
 s is joint work with Gianluca Detommaso  (Bath)\, Tim Dodwell (Exeter) and
  Jens Lang (Darmstadt). For further information about the seminar\, pleas
 e visit this webpage.
X-ALT-DESC: Sample-based multilevel uncertainty quantification tools\, such
  as  multilevel Monte Carlo\, multilevel quasi-Monte Carlo or multilevel  
 stochastic collocation\, have recently gained huge popularity due to  thei
 r potential to efficiently compute robust estimates of quantities of  inte
 rest (QoI) derived from PDE models that are subject to  uncertainties in t
 he input data (coefficients\, boundary conditions\,  geometry\, etc). Espe
 cially for problems with low regularity\, they are  asymptotically optimal
  in that they can provide statistics about such  QoIs at (asymptotically) 
 the same cost as it takes to compute one sample  to the target accuracy. H
 owever\, when the data uncertainty is localised  at random locations\, suc
 h as for manufacturing defects in composite  materials\, the cost per samp
 le can be reduced significantly by adapting  the spatial discretisation in
 dividually for each sample. Moreover\, the  adaptive process typically pro
 duces coarser approximations that can be  used directly for the multilevel
  uncertainty quantification. In this  talk\, I will present two novel deve
 lopments that aim to exploit these  ideas. In the first part I will presen
 t Continuous Level Monte Carlo  (CLMC)\, a generalisation of multilevel Mo
 nte Carlo (MLMC) to a  continuous framework where the level parameter is a
  continuous variable. This provides a natural framework to use a sample-w
 ise adaptive refinement strategy\, with a goal-oriented error estimator a
 s our new level parameter. We introduce a practical CLMC estimator (and alg
 orithm)  and prove a complexity theorem showing the same rate of complexit
 y as  for MLMC. Also\, we show that it is possible to make the CLMC estima
 tor  unbiased with respect to the true quantity of interest. Finally\, we 
  provide two numerical experiments which test the CLMC framework  alongsid
 e a sample-wise adaptive refinement strategy\, showing clear  gains over a
  standard MLMC approach with uniform grid hierarchies. In  the second part
 \, I will show how to extend the sample-adaptive strategy  to multilevel s
 tochastic collocation (MLSC) methods\, providing a complexity estimate and
  numerical experiments for an MLSC method that is fully adaptive in the dim
 ension\, in the polynomial degrees and in the  spatial discretisation. <br
  />This is joint work with Gianluca Detommaso  (Bath)\, Tim Dodwell (Exete
 r) and Jens Lang (Darmstadt).<br /><br />For further information about the
  seminar\, please visit this <link de/forschung/mathematik/seminar-in-nume
 rical-analysis/ - - "Opens internal link in current window">webpage</link>
 .  
DTEND;TZID=Europe/Zurich:20190329T120000
END:VEVENT
BEGIN:VEVENT
UID:news829@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190225T091300
DTSTART;TZID=Europe/Zurich:20190301T110000
SUMMARY:Seminar in Numerical Analysis: Markus Zimmermann (Technische Univer
 sität München)
DESCRIPTION:Solution spaces are sets of engineering solutions\, i.e.\, desi
 gns that satisfy all engineering requirements. Seeking solution spaces rat
 her than just one possibly optimal solution is numerically challenging\, b
 ut it can significantly simplify the development of systems in the presenc
 e of uncertainty and complexity. For different system components\, solutio
 n spaces are decomposed into independent target regions that enable distri
 buted development work and encompass uncertainty without particular underl
 ying uncertainty model. A basic stochastic algorithm to maximize so-called
  box-shaped solution spaces is presented. Two recent extensions are discus
 sed: first\, representations as Cartesian product of two- and higher-dimen
 sional spaces and\, second\, so-called solution-compensation spaces\, wher
 e design variables are grouped according to the order in which they need t
 o be specified. Applications to vehicle development for crash and driving 
 dynamics are presented. For further information about the seminar\, pleas
 e visit this webpage.
X-ALT-DESC: Solution spaces are sets of engineering solutions\, i.e.\, desi
 gns that satisfy all engineering requirements. Seeking solution spaces rat
 her than just one possibly optimal solution is numerically challenging\, b
 ut it can significantly simplify the development of systems in the presenc
 e of uncertainty and complexity. For different system components\, solutio
 n spaces are decomposed into independent target regions that enable distri
 buted development work and encompass uncertainty without a particular un
 derlying uncertainty model. A basic stochastic algorithm to maximize so-
 called
  box-shaped solution spaces is presented. Two recent extensions are discus
 sed: first\, representations as Cartesian product of two- and higher-dimen
 sional spaces and\, second\, so-called solution-compensation spaces\, wher
 e design variables are grouped according to the order in which they need t
 o be specified. Applications to vehicle development for crash and driving 
 dynamics are presented.<br /><br />For further information about the semin
 ar\, please visit this <a href="https://dmi.unibas.ch/de/forschung/mathem
 atik/seminar-in-numerical-analysis/">webpage</a>.
DTEND;TZID=Europe/Zurich:20190301T120000
END:VEVENT
BEGIN:VEVENT
UID:news823@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190214T222417
DTSTART;TZID=Europe/Zurich:20190222T110000
SUMMARY:Seminar in Numerical Analysis: Edwin Mai (Universität der Bundesw
 ehr München)
DESCRIPTION:With an increasing range of applications\, Shape Optimisation p
 roblems receive more and more interest in the engineering community\, whil
 e solving such problems is still a demanding task. In this talk the exampl
 e of a Stokes channel flow with the objective to reduce the energy dissipa
 tion is considered\, on which an optimise-then-discretize approach shall b
 e applied. Starting with a gradient descent method\, based on the analytic
 al shape derivative and the adjoint approach\, an initial optimisation pro
 cedure is discussed and differences in the shape derivative representation
  and their numerical implications are highlighted. Subsequently a possible
  way to derive shape Hessian information in a so-called tangent-on-reverse
  method\, i.e. combining the adjoint and sensitivity approach\, is introdu
 ced. The shape Hessian is utilised in a reduced SQP method for the equali
 ty-constrained channel flow problem comprising the objective\, PDE and ad
 ditional geometric constraints. In contrast to a one-shot approach the red
 uced approach requires the state and adjoint equations to be solved exactl
 y for each optimisation step. Finally\, some features of the numerical imp
 lementation using the finite element software package FEniCS and the obtai
 ned results are presented to show the superiority of using Hessian inform
 ation. For further information about the seminar\, please visit this webp
 age.
X-ALT-DESC: With an increasing range of applications\, Shape Optimisation p
 roblems receive more and more interest in the engineering community\, whil
 e solving such problems is still a demanding task. In this talk the exampl
 e of a Stokes channel flow with the objective to reduce the energy dissipa
 tion is considered\, on which an optimise-then-discretize approach shall b
 e applied. Starting with a gradient descent method\, based on the analytic
 al shape derivative and the adjoint approach\, an initial optimisation pro
 cedure is discussed and differences in the shape derivative representation
  and their numerical implications are highlighted. Subsequently a possible
  way to derive shape Hessian information in a so-called tangent-on-reverse
  method\, i.e. combining the adjoint and sensitivity approach\, is introdu
 ced. The shape Hessian is utilised in a reduced SQP method for the equali
 ty-constrained channel flow problem comprising the objective\, PDE and ad
 ditional geometric constraints. In contrast to a one-shot approach the red
 uced approach requires the state and adjoint equations to be solved exactl
 y for each optimisation step. Finally\, some features of the numerical imp
 lementation using the finite element software package FEniCS and the obtai
 ned results are presented to show the superiority of using Hessian inform
 ation.<br /><br />For further information about the seminar\, please visi
 t this <a href="https://dmi.unibas.ch/de/forschung/mathematik/seminar-in-
 numerical-analysis/">webpage</a>.
DTEND;TZID=Europe/Zurich:20190222T120000
END:VEVENT
BEGIN:VEVENT
UID:news412@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181206T092622
DTSTART;TZID=Europe/Zurich:20181214T110000
SUMMARY:Seminar in Numerical Analysis: Martin Burger (Universität Erlangen
 )
DESCRIPTION:In this talk we will discuss nonlinear spectral decomposition
 s in Banach spaces\, which shed new light on multiscale methods in imagin
 g and open new possibilities of filtering techniques. We provide a novel
  geometric interpretation of nonlinear eigenvalue problems in Banach spac
 es and provide conditions under which gradient flows for norms or seminor
 ms yield a spectral decomposition. We will see that under these condition
 s standard variational schemes are equivalent to the gradient flows for a
 rbitrarily large time steps\, recovering previous results e.g. for the on
 e-dimensional total variation flow as special cases.\\r\\nFor further inf
 ormation about the seminar\, please visit this webpage.
X-ALT-DESC:In this talk we will discuss nonlinear spectral decompositions
  in Banach spaces\, which shed new light on multiscale methods in imagin
 g and open new possibilities of filtering techniques. We provide a novel
  geometric interpretation of nonlinear eigenvalue problems in Banach spac
 es and provide conditions under which gradient flows for norms or seminor
 ms yield a spectral decomposition. We will see that under these condition
 s standard variational schemes are equivalent to the gradient flows for a
 rbitrarily large time steps\, recovering previous results e.g. for the on
 e-dimensional total variation flow as special cases.\nFor further informa
 tion about the seminar\, please visit this <a href="https://dmi.unibas.ch
 /de/forschung/mathematik/seminar-in-numerical-analysis/">webpage</a>.
DTEND;TZID=Europe/Zurich:20181214T120000
END:VEVENT
BEGIN:VEVENT
UID:news409@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181203T104240
DTSTART;TZID=Europe/Zurich:20181207T110000
SUMMARY:Seminar in Numerical Analysis: Zakaria Belhachmi (LMIA - Universit
 é de Haute-Alsace)
DESCRIPTION:We present some ideas on modelling some PDE-based geometry in
 painting problems with diffusion operators. The objective is to provide
  a closed loop from continuous to discrete models. The loop consists of
  an initial family of simple PDEs depending on some parameters selected
  at the discrete level from a posteriori information. The choice of thes
 e parameters dynamically modifies the system of equations\, and the resu
 lting models converge (in the Gamma-convergence sense) to a continuous li
 mit model that captures the jump set of the restored image. We also discu
 ss compression-based inpainting within this approach. For further informa
 tion about the seminar\, please visit this webpage.
X-ALT-DESC:We present some ideas on modelling some PDE-based geometry inp
 ainting problems with diffusion operators. The objective is to provide a
  closed loop from continuous to discrete models. The loop consists of an
  initial family of simple PDEs depending on some parameters selected at
  the discrete level from a posteriori information. The choice of these p
 arameters dynamically modifies the system of equations\, and the resulti
 ng models converge (in the Gamma-convergence sense) to a continuous limi
 t model that captures the jump set of the restored image. We also discus
 s compression-based inpainting within this approach.<br /><br />For furt
 her information about the seminar\, please visit this <a href="https://d
 mi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-analysis/">web
 page</a>.
DTEND;TZID=Europe/Zurich:20181207T120000
END:VEVENT
BEGIN:VEVENT
UID:news393@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181116T170413
DTSTART;TZID=Europe/Zurich:20181123T110000
SUMMARY:Seminar in Numerical Analysis: Assyr Abdulle (EPFL)
DESCRIPTION:In this talk we discuss several challenges that arise in Bayesi
 an  inference for ordinary and partial differential equations. The numeric
 al  solvers used to compute the forward model of such problems induce a  p
 ropagation of the discretization error into the posterior  measure for the
  parameters of interest. This uncertainty originating  from the numerical 
 approximation error can be accounted for using  probabilistic numerical me
 thods. New probabilistic numerical methods for  ordinary differential equa
 tions that share geometric  properties of the true solution will be presen
 ted in the first part of this talk.\\r\\nIn the second part of the talk\, w
 e will discuss a Bayesian approach for  inverse problems involving ellipti
 c partial differential equations with  multiple scales. Computing repeated
  forward problems in a multiscale context is computationally too expensi
 ve and  we propose a new strategy  based on the use of  "effective" forw
 ard  models originating from homogenization theory. Convergence of the tru
 e  posterior distribution for the parameters of interest towards the  homo
 genized posterior is established via G-convergence  for the Hellinger metr
 ic. A computational approach based on numerical  homogenization and reduce
 d basis methods is proposed for an efficient  evaluation of the forward mo
 del in a Markov Chain Monte-Carlo procedure.\\r\\nReferences:\\r\\nA. Abd
 ulle\, G. Garegnani\, Random time step probabilistic methods for uncertai
 nty quantification in chaotic and geometric numerical integration\, Prepr
 int (2018)\, submitted for publication.\\r\\nA. Abdulle\, A. Di Blasio\,
  Numerical homogenization and model order reduction for multiscale invers
 e problems\, to appear in SIAM MMS.\\r\\nA. Abdulle\, A. Di Blasio\, A Ba
 yesian numerical homogenization method for elliptic multiscale inverse pr
 oblems\, Preprint (2018)\, submitted for publication.\\r\\nFor further in
 formation about the seminar\, please visit this webpage.
X-ALT-DESC: In this talk we discuss several challenges that arise in Bayesi
 an  inference for ordinary and partial differential equations. The numeric
 al  solvers used to compute the forward model of such problems induce a  p
 ropagation of the discretization error into the posterior  measure for the
  parameters of interest. This uncertainty originating  from the numerical 
 approximation error can be accounted for using  probabilistic numerical me
 thods. New probabilistic numerical methods for  ordinary differential equa
 tions that share geometric  properties of the true solution will be presen
 ted in the first part of  this talk.&nbsp\; <br /> In the second part of t
 he talk\, we will discuss a Bayesian approach for  inverse problems involv
 ing elliptic partial differential equations with  multiple scales. Computi
 ng repeated forward problems in a multiscale context is computationally t
 oo expensive and we propose a new strategy &nbsp\;based on the use of&nb
 sp\; &quot\;effective&quot\; forward  models originating from homogenizati
 on theory. Convergence of the true  posterior distribution for the paramet
 ers of interest towards the  homogenized posterior is established via G-co
 nvergence  for the Hellinger metric. A computational approach based on num
 erical  homogenization and reduced basis methods is proposed for an effici
 ent  evaluation of the forward model in a Markov Chain Monte-Carlo  proced
 ure.&nbsp\; <br /><br /> References: <br /> A. Abdulle\, G. Garegnani\, R
 andom time step probabilistic methods for  uncertainty quantification in c
 haotic and geometric numerical  integration\, Preprint (2018)\, submitted 
 for publication. <br /> A. Abdulle\, A. Di Blasio\, Numerical homogenizati
 on and model order  reduction for multiscale inverse problems\, to appear 
 in SIAM MMS. <br /> A. Abdulle\, A. Di Blasio\, A Bayesian numerical homog
 enization method for  elliptic multiscale inverse problems\, Preprint (201
 8)\, submitted for  publication. <br /><br />For further information about
  the seminar\, please visit this <a href="https://dmi.unibas.ch/de/forsch
 ung/mathematik/seminar-in-numerical-analysis/">webpage</a>.
DTEND;TZID=Europe/Zurich:20181123T120000
END:VEVENT
BEGIN:VEVENT
UID:news307@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181024T123048
DTSTART;TZID=Europe/Zurich:20181116T113000
SUMMARY:Seminar in Numerical Analysis: Ludovic Métivier (Université Greno
 ble Alpes)
DESCRIPTION:Full waveform inversion (FWI) is a powerful high resolution sei
 smic imaging method\, used in academia for global and regional scale imag
 ing\, and in the oil & gas industry for exploration purposes. It can be u
 nderstood as a PDE-constrained optimization problem: the misfit betwe
 en recorded seismic data and synthetic seismic  data computed as the solut
 ion of a wave propagation problem is reduced  over a space of parameters c
 ontrolling the wave propagation. One of the main limitations of FWI is it
 s dependency on the accuracy of the initial guess of the solution. This li
 mitation is due  to the non-convexity of the standard least-squares misfit
  function used  to measure the discrepancy between recorded  and synthetic
  data\, and the use of local optimization techniques to  reduce this misfi
 t. In recent studies\, we have investigated the use of a misfit function
  based on an optimal transport distance to mitigate this issue. The
  convexity of this distance  with respect to shifted patterns is the main 
 reason why we are  interested in this distance\, as it can be seen as a pr
 oxy for the  convexity with respect to the wave velocities we want to  re
 construct.  In this talk\, we will give an overview of this work\, startin
 g  by introducing basic concepts on optimal transport\, before detailing  
 the difficulties for using the optimal transport distance in the  framewor
 k of FWI\, and reviewing the solutions we have proposed.\\r\\nFor further 
 information about the seminar\, please visit this webpage.
X-ALT-DESC: Full waveform inversion (FWI) is a powerful high resolution sei
 smic imaging method\, used in academia for global and regional scale imag
 ing\, and in the oil &amp\; gas industry for exploration purposes. It can
  be understood as a PDE-constrained optimization problem: the misfit betw
 een recorded seismic data and synthetic seismic data computed as the solu
 tion of a wave propagation problem is reduced over a space of paramet
 ers controlling the wave propagation. One of the main limitations of FWI
  is its dependency on the accuracy of the initial guess of the solution. Th
 is limitation is due  to the non-convexity of the standard least-squares m
 isfit function used  to measure the discrepancy between recorded  and synt
 hetic data\, and the use of local optimization techniques to  reduce this 
 misfit. In recent studies\, we have investigated the use of a misfit func
 tion based on an optimal transport distance to mitigate this issue
 . The convexity of this distance  with respect to shifted patterns is the 
 main reason why we are  interested in this distance\, as it can be seen as
  a proxy for the  convexity with respect to the wave velocities we want to
  reconstruct. In this talk\, we will give an overview of this work
 \, starting  by introducing basic concepts on optimal transport\, before d
 etailing  the difficulties for using the optimal transport distance in the
   framework of FWI\, and reviewing the solutions we have proposed.\nFor fu
 rther information about the seminar\, please visit this <a href="https://
 dmi.unibas.ch/de/forschung/mathematik/seminar-in-numerical-analysis/">web
 page</a>.
DTEND;TZID=Europe/Zurich:20181116T123000
END:VEVENT
BEGIN:VEVENT
UID:news359@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181031T192501
DTSTART;TZID=Europe/Zurich:20181109T110000
SUMMARY:Seminar in Numerical Analysis: Stefan Sauter (Universität Zürich)
DESCRIPTION:In our talk we consider the Maxwell equations in the frequency 
 domain\, discretized by Nédélec-hp-finite elements. We develop a stabili
 ty and  convergence analysis which is explicit with respect to the wave nu
 mber k\, the mesh  size h\, and the local polynomial degree p. It turns ou
 t that\, for the choice  p>=log(k) \, the discretization does not suffer f
 rom the so-called pollution effect. This is known for high-frequency acous
 tic scattering. However\, the  analysis of Maxwell equations requires the 
 development of twelve additional theoretical tools which we call "the twel
 ve apostles". In our talk\, we explain these "apostles" and how they are n
 eeded to prove the stability and  convergence of our method.\\r\\nThis tal
 k comprises joint work with Prof. Markus Melenk\, TU Wien. \\r\\nFor furth
 er information about the seminar\, please visit this webpage.
X-ALT-DESC: In our talk we consider the Maxwell equations in the frequency 
 domain\, discretized by Nédélec-hp-finite elements. We develop a stabili
 ty and  convergence analysis which is explicit with respect to the wave nu
 mber k\, the mesh  size h\, and the local polynomial degree p. It turns ou
 t that\, for the choice  p&gt\;=log(k) \, the discretization does not suff
 er from the so-called pollution effect. This is known for high-frequency a
 coustic scattering. However\, the  analysis of Maxwell equations requires 
 the development of twelve additional theoretical tools which we call &quot
 \;the twelve apostles&quot\;. In our talk\, we explain these &quot\;apost
 les&quot\; and how they are needed to prove the stability and convergence
  of our method.\nThis talk comprises joint work with Prof. Markus Melenk
 \, TU Wien.\nFor further information about the seminar\, please visit th
 is <a href="https://dmi.unibas.ch/de/forschung/mathematik/seminar-in-num
 erical-analysis/">webpage</a>.
DTEND;TZID=Europe/Zurich:20181109T120000
END:VEVENT
BEGIN:VEVENT
UID:news350@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181023T144703
DTSTART;TZID=Europe/Zurich:20181102T110000
SUMMARY:Seminar in Numerical Analysis: Maryna Kachanovska (ENSTA ParisTech)
DESCRIPTION:In this work we consider the problem of sound propagation in
  a bronchial network. Asymptotically\, this phenomenon can be modelled b
 y a weighted wave equation posed on a fractal (i.e. self-similar) 1D tre
 e.  The  principal difficulty for the numerical resolution of the problem 
 is the  'infiniteness' of the geometry. To deal with this issue\, we will 
  present  transparent boundary conditions\, used to truncate the computati
 onal  domain to a finite subtree.\\r\\nThe  construction of such transpare
 nt conditions relies on the approximation  of the Dirichlet-to-Neumann (Dt
 N) operator\, whose symbol is a  meromorphic function that  satisfies a ce
 rtain non-linear functional  equation. We present two approaches to approx
 imate the DtN in the time  domain\, alternative to the low-order absorbing
  boundary conditions\, which appear inefficient in this case.\\r\\n The  f
 irst approach stems from the use of the convolution quadrature (cf.  [Lubi
 ch 1988]\, [Banjai\, Lubich\, Sayas 2016])\, which consists in  constructi
 ng an exact DtN for a semi-discretized in time problem. In  this case the 
 combination of the explicit leapfrog method for the volume terms and the
  implicit trapezoid rule for the boundary terms  leads to a second-order s
 cheme stable under the classical CFL condition.\\r\\nThe  second approach 
 is motivated by the Engquist-Majda ABCs (cf. [Engquist\,  Majda 1977])\, a
 nd consists in approximating the DtN by local operators\, obtained from th
 e truncation of the  meromorphic series which represents the symbol of the
  DtN. We show how  the respective error can be controlled and provide some
  complexity  estimates.\\r\\nThis is a joint work with Patrick Joly (INRIA
 \, France) and Adrien Semin (TU Darmstadt\, Germany). \\r\\nFor further in
 formation about the seminar\, please visit this webpage.
X-ALT-DESC:In this work we consider the problem of sound propagation in a
  bronchial network. Asymptotically\, this phenomenon can be modelled by
  a weighted wave equation posed on a fractal (i.e. self-similar) 1D tre
 e.  The  principal difficulty for the numerical resolution of the problem 
 is the  'infiniteness' of the geometry. To deal with this issue\, we will 
  present  transparent boundary conditions\, used to truncate the computati
 onal  domain to a finite subtree.\nThe  construction of such transparent c
 onditions relies on the approximation  of the Dirichlet-to-Neumann (DtN) o
 perator\, whose symbol is a  meromorphic function that  satisfies a certai
 n non-linear functional  equation. We present two approaches to approximat
 e the DtN in the time  domain\, alternative to the low-order absorbing bou
 ndary conditions\, which appear inefficient in this case.\n The  first app
 roach stems from the use of the convolution quadrature (cf.  [Lubich 1988]
 \, [Banjai\, Lubich\, Sayas 2016])\, which consists in  constructing an ex
 act DtN for a semi-discretized in time problem. In  this case the combinat
 ion of the explicit leapfrog method for the volume terms and the implici
 t trapezoid rule for the boundary terms  leads to a second-order scheme st
 able under the classical CFL condition.\nThe  second approach is motivated
  by the Engquist-Majda ABCs (cf. [Engquist\,  Majda 1977])\, and consists 
 in approximating the DtN by local operators\, obtained from the truncation
  of the  meromorphic series which represents the symbol of the DtN. We sho
 w how  the respective error can be controlled and provide some complexity 
  estimates.\nThis is a joint work with Patrick Joly (INRIA\, France) and A
 drien Semin (TU Darmstadt\, Germany). \nFor further information about the 
 seminar\, please visit this <a href="https://dmi.unibas.ch/de/forschung/m
 athematik/seminar-in-numerical-analysis/">webpage</a>.
DTEND;TZID=Europe/Zurich:20181102T120000
END:VEVENT
BEGIN:VEVENT
UID:news344@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181012T175433
DTSTART;TZID=Europe/Zurich:20181019T110000
SUMMARY:Seminar in Numerical Analysis: Martin Rumpf (Universität Bonn)
DESCRIPTION:We investigate a generalization of cubic splines to Riemannian 
 manifolds. Spline curves are defined as minimizers of the spline energy - 
 a combination of the Riemannian path energy and the time integral of the s
 quared covariant derivative of the path velocity - under suitable interpol
 ation conditions. A variational time discretization for the spline energy 
 leads to a constrained optimization problem over discrete paths on the man
 ifold. Existence of continuous and discrete spline curves is established u
 sing the direct method in the calculus of variations. Furthermore\, the co
 nvergence of discrete spline paths to a continuous spline curve follows fr
 om the Γ-convergence of the discrete to the continuous spline energy. Fin
 ally\, selected example settings are discussed\, including splines on embe
 dded finite-dimensional manifolds\, on a high-dimensional manifold of disc
 rete shells with applications in surface processing\, and on the infinite-
 dimensional shape manifold of viscous rods. This is based on joint work wi
 th Behrend Heeren and Benedikt Wirth.\\r\\nFor further information about t
 he seminar\, please visit this webpage.
X-ALT-DESC:We investigate a generalization of cubic splines to Riemannian m
 anifolds. Spline curves are defined as minimizers of the spline energy - a
  combination of the Riemannian path energy and the time integral of the sq
 uared covariant derivative of the path velocity - under suitable interpola
 tion conditions. A variational time discretization for the spline energy l
 eads to a constrained optimization problem over discrete paths on the mani
 fold. Existence of continuous and discrete spline curves is established us
 ing the direct method in the calculus of variations. Furthermore\, the con
 vergence of discrete spline paths to a continuous spline curve follows fro
 m the Γ-convergence of the discrete to the continuous spline energy. Fina
 lly\, selected example settings are discussed\, including splines on embed
 ded finite-dimensional manifolds\, on a high-dimensional manifold of discr
 ete shells with applications in surface processing\, and on the infinite-d
 imensional shape manifold of viscous rods. This is based on joint work wit
 h Behrend Heeren and Benedikt Wirth.\nFor further information about the se
 minar\, please visit this <a href="https://dmi.unibas.ch/de/forschung/mat
 hematik/seminar-in-numerical-analysis/">webpage</a>.
DTEND;TZID=Europe/Zurich:20181019T120000
END:VEVENT
BEGIN:VEVENT
UID:news223@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T181333
DTSTART;TZID=Europe/Zurich:20180601T110000
SUMMARY:Seminar in Numerical Analysis: Kenneth Duru (Ludwig-Maximilians-Uni
 versität München)
DESCRIPTION:High order accurate and explicit time-stable solvers are well s
 uited for hyperbolic wave propagation problems.  However\, because of the 
 complexities of real geometries\, internal interfaces\, nonlinear boundary
 /interface conditions and the presence of disparate spatial and temporal s
 cales present in real media and sources\, discontinuities and sharp wave f
 ronts become fundamental features of the solutions. Thus\, high order accu
 racy\, geometrically flexible and adaptive numerical algorithms are critic
 al for high fidelity and efficient simulations of wave phenomena in many a
 pplications. I will present a physics-based numerical flux suitable for in
 ter-element and boundary conditions in discontinuous Galerkin approximatio
 ns of first order hyperbolic PDEs. Using this physics-based numerical pena
 lty-flux\, we will develop a provably energy-stable discontinuous Galerkin
  approximation of the elastic waves in complex and discontinuous media. B
 y construction the numerical flux is upwind and yields a discrete energy
  estimate analogous to the continuous energy estimate. The discrete ener
 gy estimates hold for conforming and non-conforming curvilinear elements
 . The
  ability to handle non-conforming curvilinear meshes allows for flexible a
 daptive mesh refinement strategies. The numerical scheme has been impleme
 nted in ExaHyPE\, a simulation engine for hyperbolic PDEs on adaptive stru
 ctured meshes\, for exa-scale supercomputers. I will show 3D numerical exp
 eriments demonstrating stability and high order accuracy. Finally\, we pre
 sent a large scale geophysical regional wave propagation problem in a hete
 rogeneous Earth model with geologically constrained media heterogeneity an
 d geometrically complex free-surface topography.
X-ALT-DESC:High order accurate and explicit time-stable solvers are well su
 ited for hyperbolic wave propagation problems.  However\, because of the c
 omplexities of real geometries\, internal interfaces\, nonlinear boundary/
 interface conditions and the presence of disparate spatial and temporal sc
 ales present in real media and sources\, discontinuities and sharp wave fr
 onts become fundamental features of the solutions. Thus\, high order accur
 acy\, geometrically flexible and adaptive numerical algorithms are critica
 l for high fidelity and efficient simulations of wave phenomena in many ap
 plications. I will present a physics-based numerical flux suitable for int
 er-element and boundary conditions in discontinuous Galerkin approximation
 s of first order hyperbolic PDEs. Using this physics-based numerical penal
 ty-flux\, we will develop a provably energy-stable discontinuous Galerkin 
 approximation of the elastic waves in complex and discontinuous media. B
 y construction the numerical flux is upwind and yields a discrete energy e
 stimate analogous to the continuous energy estimate. The discrete energy e
 stimates hold for conforming and non-conforming curvilinear elements. The 
 ability to handle non-conforming curvilinear meshes allows for flexible ad
 aptive mesh refinement strategies. The numerical scheme has been implemen
 ted in ExaHyPE\, a simulation engine for hyperbolic PDEs on adaptive struc
 tured meshes\, for exa-scale supercomputers. I will show 3D numerical expe
 riments demonstrating stability and high order accuracy. Finally\, we pres
 ent a large scale geophysical regional wave propagation problem in a heter
 ogeneous Earth model with geologically constrained media heterogeneity and
  geometrically complex free-surface topography. 
DTEND;TZID=Europe/Zurich:20180601T120000
END:VEVENT
BEGIN:VEVENT
UID:news222@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T180942
DTSTART;TZID=Europe/Zurich:20180525T113000
SUMMARY:Seminar in Numerical Analysis: Ludovic Métivier (Université Greno
 ble Alpes)
DESCRIPTION:Full waveform inversion (FWI) is a powerful high resolution sei
 smic imaging method\, used in academia for global and regional scale imag
 ing\, and in the oil & gas industry for exploration purposes. It can be u
 nderstood as a PDE-constrained optimization problem: the misfit betwe
 en recorded seismic data and synthetic seismic data computed as the  solut
 ion of a wave propagation problem is reduced over a space of  parameters c
 ontrolling the wave propagation. One of the main limitations of FWI is its
  dependency on the accuracy of the initial guess of the  solution. This li
 mitation is due to the non-convexity of the standard  least-squares misfit
  function used to measure the discrepancy between  recorded and synthetic 
 data\, and the use of local optimization  techniques to reduce this misfit
 . In recent studies\, we have investigated the use of a misfit fun
 ction based on an optimal transport  distance to mitigate this issue. The 
 convexity of this distance with  respect to shifted patterns is the main r
 eason why we are interested in  this distance\, as it can be seen as a pro
 xy for the convexity with  respect to the wave velocities we want to  reco
 nstruct. In this talk\, we  will give an overview of this work\, starting 
 by introducing basic  concepts on optimal transport\, before detailing the
  difficulties for  using the optimal transport distance in the framework o
 f FWI\, and  reviewing the solutions we have proposed.
X-ALT-DESC:Full waveform inversion (FWI) is a powerful high resolution seis
 mic imaging method\, used in academia for global and regional scale imagi
 ng\, and in the oil &amp\; gas industry for exploration purposes. It can
  be understood as a PDE-constrained optimization problem: the misfit b
 etween recorded seismic data and synthetic seismic data computed as the  s
 olution of a wave propagation problem is reduced over a space of  paramete
 rs controlling the wave propagation. One of the main limitations of FWI is
  its dependency on the accuracy of the initial guess of the  solution. Thi
 s limitation is due to the non-convexity of the standard  least-squares mi
 sfit function used to measure the discrepancy between  recorded and synthe
 tic data\, and the use of local optimization  techniques to reduce this mi
 sfit. In recent studies\, we have investigated the use of a misfit
  function based on an optimal transport  distance to mitigate this issue. 
 The convexity of this distance with  respect to shifted patterns is the ma
 in reason why we are interested in  this distance\, as it can be seen as a
  proxy for the convexity with  respect to the wave velocities we want to  
 reconstruct. In this talk\, we  will give an overview of this work\, start
 ing by introducing basic  concepts on optimal transport\, before detailing
  the difficulties for  using the optimal transport distance in the framewo
 rk of FWI\, and  reviewing the solutions we have proposed. 
DTEND;TZID=Europe/Zurich:20180525T123000
END:VEVENT
BEGIN:VEVENT
UID:news221@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T175831
DTSTART;TZID=Europe/Zurich:20180518T110000
SUMMARY:Seminar in Numerical Analysis: Holger Fröning (Universität Heidel
 berg)
DESCRIPTION:We are observing a continuous increase in concurrency and heter
 ogeneity for computing systems of any scale\, ranging from small mobile de
 vices to huge datacenters\, and driven by a steady demand for more computi
 ng power. One of the prime examples for an application with virtually unli
 mited computational requirements is machine learning\, in particular deep 
 neural networks (DNN). At the level of data-centers\, DNN training has alr
 eady led to a ubiquitous use of graphics processing units (GPUs)\, forming
  a prime example for specialization for computational improvement. Still\,
  this application is strongly hindered by insufficient compute power and b
 y scalability limitations. In contrast\, mobile architectures for DNN i
 nference are still nascent\, and a large number of proposals have been
  published in recent years. Both applications\, training and inference\
 , can fur
 thermore benefit a lot from algorithmic optimizations to reduce the comput
 ational requirements. This talk presents a short introduction to the appli
 cation\, a summary of our observations\, and our own research on reduced p
 recision by extreme forms of quantizations. Finally\, this talk will offer
  some opinions on anticipated research problems.
X-ALT-DESC:We are observing a continuous increase in concurrency and hetero
 geneity for computing systems of any scale\, ranging from small mobile dev
 ices to huge datacenters\, and driven by a steady demand for more computin
 g power. One of the prime examples for an application with virtually unlim
 ited computational requirements is machine learning\, in particular deep n
 eural networks (DNN). At the level of data-centers\, DNN training has alre
 ady led to a ubiquitous use of graphics processing units (GPUs)\, forming 
 a prime example for specialization for computational improvement. Still\, 
 this application is strongly hindered by insufficient compute power and by
  scalability limitations. In contrast\, mobile architectures for DNN in
 ference are still nascent\, and a large number of proposals have been p
 ublished in recent years. Both applications\, training and inference\,
  can furt
 hermore benefit a lot from algorithmic optimizations to reduce the computa
 tional requirements. This talk presents a short introduction to the applic
 ation\, a summary of our observations\, and our own research on reduced pr
 ecision by extreme forms of quantizations. Finally\, this talk will offer 
 some opinions on anticipated research problems. 
DTEND;TZID=Europe/Zurich:20180518T120000
END:VEVENT
BEGIN:VEVENT
UID:news220@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T175646
DTSTART;TZID=Europe/Zurich:20180504T110000
SUMMARY:Seminar in Numerical Analysis: Jan Hamaekers (Fraunhofer SCAI)
DESCRIPTION:In this talk\, we introduce a new scheme for the efficient nume
 rical treatment of the electronic Schrödinger equation for molecules. It 
 is based on the combination of a many-body expansion\, which corresponds t
 o the so-called bond order dissection ANOVA approach\, with a hierarchy of
  basis sets of increasing order. Here\, the energy is represented as a fin
 ite sum of contributions associated to subsets of nuclei and basis sets in
  a telescoping sum like fashion. Under the assumption of data locality of 
 the electronic density (nearsightedness of electronic matter)\, the terms 
 of this expansion decay rapidly and higher terms may be neglected. We furt
 her extend the approach in a dimension-adaptive fashion to generate quasi-
 optimal approximations\, i.e. a specific truncation of the hierarchical se
 ries such that the total benefit is maximized for a fixed amount of costs.
  This way\, we are able to achieve substantial speed up factors compared t
 o conventional first principles methods depending on the molecular system 
 under consideration. In particular\, the method can deal efficiently with 
 molecular systems which include only a small active part that needs to be 
 described by accurate but expensive models. Finally\, we discuss applyi
 ng such a multi-level many-body decomposition in the context of machine
  learn
 ing for many-body systems.
X-ALT-DESC:In this talk\, we introduce a new scheme for the efficient numer
 ical treatment of the electronic Schrödinger equation for molecules. It i
 s based on the combination of a many-body expansion\, which corresponds to
  the so-called bond order dissection ANOVA approach\, with a hierarchy
  of basis sets of increasing order. Here\, the energy is represented as
  a fini
 te sum of contributions associated to subsets of nuclei and basis sets in 
 a telescoping sum like fashion. Under the assumption of data locality of t
 he electronic density (nearsightedness of electronic matter)\, the terms o
 f this expansion decay rapidly and higher terms may be neglected. We furth
 er extend the approach in a dimension-adaptive fashion to generate quasi-o
 ptimal approximations\, i.e. a specific truncation of the hierarchical ser
 ies such that the total benefit is maximized for a fixed amount of costs. 
 This way\, we are able to achieve substantial speed up factors compared to
  conventional first principles methods depending on the molecular system u
 nder consideration. In particular\, the method can deal efficiently with m
 olecular systems which include only a small active part that needs to be d
 escribed by accurate but expensive models. Finally\, we discuss applyin
 g such a multi-level many-body decomposition in the context of machine
  learni
 ng for many-body systems. 
DTEND;TZID=Europe/Zurich:20180504T120000
END:VEVENT
BEGIN:VEVENT
UID:news219@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T175418
DTSTART;TZID=Europe/Zurich:20180427T110000
SUMMARY:Seminar in Numerical Analysis: Pierre-Henri Tournier (UPMC - Univer
 sity Pierre and Marie Curie)
DESCRIPTION:This work deals with preconditioning the time-harmonic Maxwell 
 equations with absorption\, where the preconditioner is constructed using 
 two-level overlapping Additive Schwarz Domain Decomposition\, and the PDE 
 is discretised using finite-element methods of fixed\, arbitrary order. Th
 e theory shows that if the absorption is large enough\, and if the subdoma
 in and coarse mesh diameters are chosen appropriately\, then classical two
 -level overlapping Additive Schwarz Domain Decomposition preconditioning p
 erforms optimally – in the sense that GMRES converges in a wavenumber-in
 dependent number of iterations – for the problem with absorption. This w
 ork is an extension of the theory proposed in [1] for the Helmholtz equati
 on. Numerical experiments illustrate this theoretical result and also (i) 
 explore replacing the PEC boundary conditions on the subdomains by impedan
 ce boundary conditions\, and (ii) show that the preconditioner for the pro
 blem with absorption is also an effective preconditioner for the problem w
 ith no absorption. The numerical results include examples arising from app
 lications: a problem with absorption arising from medical imaging shows th
 e robustness of the preconditioner against heterogeneity\, and a scatterin
 g problem by the COBRA cavity shows good scalability of the preconditioner
  with up to 3000 processors. Finally\, additional numerical results for th
 e elastic wave equation are presented for benchmarks in seismic inversion.
 \\r\\n[1] I. G. Graham\, E. A. Spence\, and E. Vainikko. Domain decomposit
 ion preconditioning for high-frequency Helmholtz problems with absorption.
  Mathematics of Computation\, 86(307):2089–2127\, 2017.
X-ALT-DESC:This work deals with preconditioning the time-harmonic Maxwell e
 quations with absorption\, where the preconditioner is constructed using t
 wo-level overlapping Additive Schwarz Domain Decomposition\, and the PDE i
 s discretised using finite-element methods of fixed\, arbitrary order. The
  theory shows that if the absorption is large enough\, and if the subdomai
 n and coarse mesh diameters are chosen appropriately\, then classical two-
 level overlapping Additive Schwarz Domain Decomposition preconditioning pe
 rforms optimally – in the sense that GMRES converges in a wavenumber-ind
 ependent number of iterations – for the problem with absorption. This wo
 rk is an extension of the theory proposed in [1] for the Helmholtz equatio
 n. Numerical experiments illustrate this theoretical result and also (i) e
 xplore replacing the PEC boundary conditions on the subdomains by impedanc
 e boundary conditions\, and (ii) show that the preconditioner for the prob
 lem with absorption is also an effective preconditioner for the problem wi
 th no absorption. The numerical results include examples arising from appl
 ications: a problem with absorption arising from medical imaging shows the
  robustness of the preconditioner against heterogeneity\, and a scattering
  problem by the COBRA cavity shows good scalability of the preconditioner 
 with up to 3000 processors. Finally\, additional numerical results for the
  elastic wave equation are presented for benchmarks in seismic inversion.\
 n[1] I. G. Graham\, E. A. Spence\, and E. Vainikko. Domain decomposition p
 reconditioning for high-frequency Helmholtz problems with absorption. Math
 ematics of Computation\, 86(307):2089–2127\, 2017. 
DTEND;TZID=Europe/Zurich:20180427T120000
END:VEVENT
BEGIN:VEVENT
UID:news218@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T180010
DTSTART;TZID=Europe/Zurich:20180413T110000
SUMMARY:Seminar in Numerical Analysis: Philipp Morgenstern (Leibniz Univers
 ität Hannover)
DESCRIPTION:We introduce a mesh refinement algorithm for the Adaptive Isoge
 ometric Method using multivariate T-splines. We investigate linear indepen
 dence of the T-splines\, nestedness of the T-spline spaces\, and linear co
 mplexity in the sense of a uniform upper bound on the ratio of generated a
 nd marked elements\, which is crucial for a later proof of rate-optimality
  of the method. Altogether\, this work paves the way for a provably rate-o
 ptimal Adaptive Isogeometric Method with T-splines in any space dimension.
 \\r\\nAs an outlook to future work\, we outline an approach for the handli
 ng of zero knot intervals and multiple lines in the interior of the domain
 \, which are used in CAD applications for controlling the continuity of th
 e spline functions\, and we also sketch basic ideas for the local refineme
 nt of two-dimensional meshes that do not have tensor-product structure.
X-ALT-DESC:We introduce a mesh refinement algorithm for the Adaptive Isogeo
 metric Method using multivariate T-splines. We investigate linear independ
 ence of the T-splines\, nestedness of the T-spline spaces\, and linear com
 plexity in the sense of a uniform upper bound on the ratio of generated an
 d marked elements\, which is crucial for a later proof of rate-optimality 
 of the method. Altogether\, this work paves the way for a provably rate-op
 timal Adaptive Isogeometric Method with T-splines in any space dimension.\
 nAs an outlook to future work\, we outline an approach for the handling of
  zero knot intervals and multiple lines in the interior of the domain\, wh
 ich are used in CAD applications for controlling the continuity of the spl
 ine functions\, and we also sketch basic ideas for the local refinement of
  two-dimensional meshes that do not have tensor-product structure.
DTEND;TZID=Europe/Zurich:20180413T120000
END:VEVENT
BEGIN:VEVENT
UID:news217@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T174831
DTSTART;TZID=Europe/Zurich:20180323T110000
SUMMARY:Seminar in Numerical Analysis: Gregor Gantner (TU Wien)
DESCRIPTION:Since the advent of isogeometric analysis (IGA) in 2005\, the f
 inite element method (FEM) and the boundary element method (BEM) with spli
 nes have become an active field of research. The central idea of IGA is to
  use the same functions for the approximation of the solution of the consi
 dered partial differential equation (PDE) as for the representation of the
  problem geometry in computer aided design (CAD). Usually\, CAD is based o
 n tensor-product splines. To allow for adaptive refinement\, several exten
 sions of these have emerged\, e.g.\, hierarchical splines\, T-splines\, an
 d LR-splines. In view of geometry induced generic singularities and the fa
 ct that isogeometric methods employ higher-order ansatz functions\, the ga
 in of adaptive refinement (resp. loss for uniform refinement) is huge.\\r\
 \nIn this talk\, we first consider an adaptive FEM with hierarchical splin
 es of arbitrary degree for linear elliptic PDE systems of second order wit
 h Dirichlet boundary condition for arbitrary dimension d≥2. We assume th
 at the problem geometry can be parametrized over the d-dimensional unit cu
 be. We propose a refinement strategy to generate a sequence of locally ref
 ined meshes and corresponding discrete solutions. Adaptivity is driven by 
 some weighted-residual a posteriori error estimator. In [1]\, we proved li
 near convergence of the error estimator with optimal algebraic rate.\\r\\n
 Next\, we consider an adaptive BEM with hierarchical splines of arbitrary 
 degree for weakly-singular integral equations of the first kind that arise
  from the solution of linear elliptic PDE systems of second order with con
 stant coefficients and Dirichlet boundary condition. We assume that the bo
 undary of the geometry is the union of surfaces that can be parametrized o
 ver the unit square. Again\, we propose a refinement strategy to generate 
 a sequence of locally refined meshes and corresponding discrete solutions\
 , where adaptivity is driven by some weighted-residual a posteriori error 
 estimator. In [2]\, we proved linear convergence of the error estimator wi
 th optimal algebraic rate. In contrast to prior works\, which are restrict
 ed to the Laplace model problem\, our analysis allows for arbitrary ellipt
 ic PDE operators of second order with constant coefficients.\\r\\nFinally\
 , for one-dimensional boundaries\, we investigate an adaptive BEM with sta
 ndard splines instead of hierarchical splines. We modify the corresponding
  algorithm so that it additionally uses knot multiplicity increase which r
 esults in local smoothness reduction of the ansatz space. In [3]\, we prov
 ed linear convergence of the employed weighted-residual error estimator wi
 th optimal algebraic rate.\\r\\nREFERENCES\\r\\n[1] G. Gantner\, D. Hab
 erlik\, and D. Praetorius. Adaptive IGAFEM with optimal convergence rat
 es: Hierarchical B-splines. Math. Mod. Meth. in Appl. S.\, Vol. 27\, 20
 17.\\r\\n[2] G. Gantner. Optimal adaptivity for splines in finite and b
 oundary element methods. PhD thesis\, TU Wien\, 2017.\\r\\n[3] M. Feisc
 hl\, G. Gantner\, A. Haberl\, and D. Praetorius. Adaptive 2D IGA bounda
 ry element methods. Eng. Anal. Bound. Elem.\, Vol. 62\, 2016.
X-ALT-DESC:Since the advent of isogeometric analysis (IGA) in 2005\, the fi
 nite element method (FEM) and the boundary element method (BEM) with splin
 es have become an active field of research. The central idea of IGA is to 
 use the same functions for the approximation of the solution of the consid
 ered partial differential equation (PDE) as for the representation of the 
 problem geometry in computer aided design (CAD). Usually\, CAD is based on
  tensor-product splines. To allow for adaptive refinement\, several extens
 ions of these have emerged\, e.g.\, hierarchical splines\, T-splines\, and
  LR-splines. In view of geometry induced generic singularities and the fac
 t that isogeometric methods employ higher-order ansatz functions\, the gai
 n of adaptive refinement (resp. loss for uniform refinement) is huge.\nIn 
 this talk\, we first consider an adaptive FEM with hierarchical splines of
  arbitrary degree for linear elliptic PDE systems of second order with Dir
 ichlet boundary condition for arbitrary dimension d≥2. We assume that th
 e problem geometry can be parametrized over the d-dimensional unit cube. W
 e propose a refinement strategy to generate a sequence of locally refined 
 meshes and corresponding discrete solutions. Adaptivity is driven by some 
 weighted-residual a posteriori error estimator. In [1]\, we proved linear 
 convergence of the error estimator with optimal algebraic rate.\nNext\, we
  consider an adaptive BEM with hierarchical splines of arbitrary degree fo
 r weakly-singular integral equations of the first kind that arise from the
  solution of linear elliptic PDE systems of second order with constant coe
 fficients and Dirichlet boundary condition. We assume that the boundary of
  the geometry is the union of surfaces that can be parametrized over the u
 nit square. Again\, we propose a refinement strategy to generate a sequenc
 e of locally refined meshes and corresponding discrete solutions\, where a
 daptivity is driven by some weighted-residual a posteriori error estimator
 . In [2]\, we proved linear convergence of the error estimator with optima
 l algebraic rate. In contrast to prior works\, which are restricted to the
  Laplace model problem\, our analysis allows for arbitrary elliptic PDE op
 erators of second order with constant coefficients.\nFinally\, for one-dim
 ensional boundaries\, we investigate an adaptive BEM with standard splines
  instead of hierarchical splines. We modify the corresponding algorithm so
  that it additionally uses knot multiplicity increase which results in loc
 al smoothness reduction of the ansatz space. In [3]\, we proved linear con
 vergence of the employed weighted-residual error estimator with optimal al
 gebraic rate.\n<h6>REFERENCES</h6>\n[1] G. Gantner\, D. Haberlik\, and
  D. Praetorius. Adaptive IGAFEM with optimal convergence rates: Hierarc
 hical B-splines. Math. Mod. Meth. in Appl. S.\, Vol. 27\, 2017.\n[2] G.
  Gantner. Optimal adaptivity for splines in finite and boundary element
  methods. PhD thesis\, TU Wien\, 2017.\n[3] M. Feischl\, G. Gantner\, A
 . Haberl\, and D. Praetorius. Adaptive 2D IGA boundary element methods.
  Eng. Anal. Bound. Elem.\, Vol. 62\, 2016.
DTEND;TZID=Europe/Zurich:20180323T120000
END:VEVENT
BEGIN:VEVENT
UID:news224@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T181506
DTSTART;TZID=Europe/Zurich:20171110T110000
SUMMARY:Seminar in Numerical Analysis: Martin Eigel (WIAS Berlin)
DESCRIPTION:The Stochastic Galerkin FEM (SGFEM) is a common method to numer
 ically  solve PDEs with random data with the aim to obtain a functional  r
 epresentation of the stochastic solution. As with any spectral method\,  t
 he curse of dimensionality renders the approach very challenging  whenever
  the randomness depends on a large or even infinite set of  parameters. Th
 is makes function space adaptation and model reduction  strategies a neces
 sity. We review adaptive SGFEM based on reliable a  posteriori error estim
 ators for the affine and the lognormal cases. As  an alternative to a spar
 se discretisation\, the representation in a  hierarchical tensor format is
  examined. Moreover\, as an application of  the result\, we present an ada
 ptive method for explicit sampling-free  Bayesian inversion.
X-ALT-DESC:The Stochastic Galerkin FEM (SGFEM) is a common method to numeri
 cally  solve PDEs with random data with the aim to obtain a functional  re
 presentation of the stochastic solution. As with any spectral method\,  th
 e curse of dimensionality renders the approach very challenging  whenever 
 the randomness depends on a large or even infinite set of  parameters. Thi
 s makes function space adaptation and model reduction  strategies a necess
 ity. We review adaptive SGFEM based on reliable a  posteriori error estima
 tors for the affine and the lognormal cases. As  an alternative to a spars
 e discretisation\, the representation in a  hierarchical tensor format is 
 examined. Moreover\, as an application of  the result\, we present an adap
 tive method for explicit sampling-free  Bayesian inversion.
DTEND;TZID=Europe/Zurich:20171110T120000
END:VEVENT
BEGIN:VEVENT
UID:news225@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T181648
DTSTART;TZID=Europe/Zurich:20171103T110000
SUMMARY:Seminar in Numerical Analysis: Herbert Egger (TU Darmstadt)
DESCRIPTION:We consider the simulation of gas transport through pipe networ
 ks. An appropriate mixed variational formulation is proposed to take into 
 account the coupling conditions at pipe junctions automatically. This allo
 ws to obtain energy stable and mass conserving discretization schemes by G
 alerkin projection. A mixed finite element method is briefly discussed as 
 a particular choice. We also discuss the preservation of further structura
 l properties\, like uniform exponential stability and the correct approxim
 ation of asymptotic regimes.
X-ALT-DESC:We consider the simulation of gas transport through pipe network
 s. An appropriate mixed variational formulation is proposed to take into a
 ccount the coupling conditions at pipe junctions automatically. This al
 lows us to obtain energy stable and mass conserving discretization sche
 mes by Ga
 lerkin projection. A mixed finite element method is briefly discussed as a
  particular choice. We also discuss the preservation of further structural
  properties\, like uniform exponential stability and the correct approxima
 tion of asymptotic regimes. 
DTEND;TZID=Europe/Zurich:20171103T120000
END:VEVENT
BEGIN:VEVENT
UID:news226@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T181833
DTSTART;TZID=Europe/Zurich:20171006T110000
SUMMARY:Seminar in Numerical Analysis: Stephan Schmidt (Universität Würzb
 urg)
DESCRIPTION:Many PDE constrained optimization problems fall into the catego
 ry of  shape optimization\, meaning the geometry of the domain is the unkn
 own to  be found. Most natural applications are drag minimization in fluid
   dynamics\, but many tomography and image reconstruction problems also  f
 all into this category.\\r\\nThe talk introduces shape optimization  as a 
 special sub-class of PDE constrained optimization problems. The main fo
 cus here will be on generating Newton-like methods for large scale appl
 icati
 ons. The key for this endeavor is the derivation of the shape  Hessian\, t
 hat is the second directional derivative of a cost functional  with respec
 t to geometry changes in a weak form based on material  derivatives instea
 d of classical local shape derivatives. To avoid human  errors\, a compute
 r aided derivation system is also introduced.\\r\\nThe methodologies are t
 ested on problems from fluid dynamics and geometric inverse problems.
X-ALT-DESC:Many PDE constrained optimization problems fall into the categor
 y of  shape optimization\, meaning the geometry of the domain is the unkno
 wn to  be found. Most natural applications are drag minimization in fluid 
  dynamics\, but many tomography and image reconstruction problems also  fa
 ll into this category.\nThe talk introduces shape optimization  as a speci
 al sub-class of PDE constrained optimization problems. The main focus here
  will be on generating Newton-like methods for large scale  applications. 
 The key for this endeavor is the derivation of the shape  Hessian\, that i
 s the second directional derivative of a cost functional  with respect to 
 geometry changes in a weak form based on material  derivatives instead of 
 classical local shape derivatives. To avoid human  errors\, a computer aid
 ed derivation system is also introduced.\nThe methodologies are tested on 
 problems from fluid dynamics and geometric inverse problems.
DTEND;TZID=Europe/Zurich:20171006T120000
END:VEVENT
BEGIN:VEVENT
UID:news227@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T202427
DTSTART;TZID=Europe/Zurich:20170512T110000
SUMMARY:Seminar in Numerical Analysis: Anatole von Lilienfeld (Universität
  Basel)
DESCRIPTION:Many of the most relevant chemical properties of matter depend 
  explicitly on atomistic and electronic details\, rendering a first  princ
 iples approach to chemistry mandatory. Alas\, even when using  high-perfor
 mance computers\, brute force high-throughput screening of  compounds is b
 eyond any capacity for all but the simplest systems and  properties due to
  the combinatorial nature of chemical space\, i.e. all  compositional\, co
 nstitutional\, and conformational isomers. Consequently\,  efficient explo
 ration algorithms need to exploit all implicit  redundancies present in ch
 emical space. I will discuss recently  developed statistical learning appr
 oaches for interpolating quantum  mechanical observables in compositional 
 and constitutional space.
X-ALT-DESC:Many of the most relevant chemical properties of matter depend  
 explicitly on atomistic and electronic details\, rendering a first  princi
 ples approach to chemistry mandatory. Alas\, even when using  high-perform
 ance computers\, brute force high-throughput screening of  compounds is be
 yond any capacity for all but the simplest systems and  properties due to 
 the combinatorial nature of chemical space\, i.e. all  compositional\, con
 stitutional\, and conformational isomers. Consequently\,  efficient explor
 ation algorithms need to exploit all implicit  redundancies present in che
 mical space. I will discuss recently  developed statistical learning appro
 aches for interpolating quantum  mechanical observables in compositional a
 nd constitutional space. 
DTEND;TZID=Europe/Zurich:20170512T120000
END:VEVENT
BEGIN:VEVENT
UID:news228@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T202644
DTSTART;TZID=Europe/Zurich:20170505T110000
SUMMARY:Seminar in Numerical Analysis: Sébastien Imperiale (INRIA)
DESCRIPTION:In this talk we present an approach for the local space-time
  discretisation of linear wave equations. Assuming that high order disc
 ontinuous Galerkin methods or spectral finite elements are used\, we pr
 opose (high order) time discretisations that can be chosen independentl
 y in each region of the domain of interest. Each time discretisation is
  adapted to the mesh or physics constraints. The different schemes obta
 ined are then coupled in a stable way by writing transmission condition
 s. A convergence analysis\, based upon energy arguments\, will be prese
 nted.
X-ALT-DESC:In this talk we present an approach for the local space-time
  discretisation of linear wave equations. Assuming that high order disc
 ontinuous Galerkin methods or spectral finite elements are used\, we pr
 opose (high order) time discretisations that can be chosen independentl
 y in each region of the domain of interest. Each time discretisation is
  adapted to the mesh or physics constraints. The different schemes obta
 ined are then coupled in a stable way by writing transmission condition
 s. A convergence analysis\, based upon energy arguments\, will be prese
 nted.
DTEND;TZID=Europe/Zurich:20170505T120000
END:VEVENT
BEGIN:VEVENT
UID:news229@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T202938
DTSTART;TZID=Europe/Zurich:20170421T110000
SUMMARY:Seminar in Numerical Analysis: Martin Hanke-Bourgeois (Universität
  Mainz)
DESCRIPTION:We reconsider the impact of small volume perturbations of the c
 onductivity coefficient of second order elliptic equations in divergence f
 orm. The asymptotic expansion of the associated Neumann-Dirichlet operator
 s on bounded domains allows the development and analysis of sophisticated 
 algorithms to solve corresponding inverse boundary value problems of imped
 ance tomography. Examples of such algorithms are the MUSIC scheme and the 
 topological derivative. Novel applications include the incorporation of di
 screte electrode models and the exploitation of multiple driving frequenci
 es.
X-ALT-DESC:We reconsider the impact of small volume perturbations of the co
 nductivity coefficient of second order elliptic equations in divergence fo
 rm. The asymptotic expansion of the associated Neumann-Dirichlet operators
  on bounded domains allows the development and analysis of sophisticated a
 lgorithms to solve corresponding inverse boundary value problems of impeda
 nce tomography. Examples of such algorithms are the MUSIC scheme and the t
 opological derivative. Novel applications include the incorporation of dis
 crete electrode models and the exploitation of multiple driving frequencie
 s. 
DTEND;TZID=Europe/Zurich:20170421T120000
END:VEVENT
BEGIN:VEVENT
UID:news230@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T203249
DTSTART;TZID=Europe/Zurich:20170407T110000
SUMMARY:Seminar in Numerical Analysis: Stefan Kurz (TU Darmstadt)
DESCRIPTION:Superconducting cavities are standard components of particle  a
 ccelerators. Their design is typically described by parametrized  ellipses
  and determined by mathematical optimization. The simulation  model is sub
 ject to demanding requirements\, such as a relative accuracy  of 10−9 fo
 r the resonance frequency of the accelerating mode.  Since the geometry an
 d the electromagnetic fields are smooth\, an approach in the spirit of isog
 eometric analysis (IGA) suggests itself. The  geometry is modeled by a NUR
 BS mapping\, while the electromagnetic  fields are discretized by the B-sp
 line de Rham complex [2]. An IGA finite element method (FEM) for the Maxw
 ell eigenvalue problem was  investigated and showed promising results [3].
  For the same accuracy\,  the number of required degrees of freedom was re
 duced by a factor of 3 to 9 compared to classical FEM. However\, CAD syst
 ems feature surface  descriptions only\, so the volumetric spline model ha
 d to be created  manually.\\r\\nTo live up to the promises of IGA\, namely
  closing the gap between design and analysis\, we suggest an IGA boundary
  element method (BEM). We  will review the state-of-the-art of all relevan
 t building blocks. We  will address the B-spline de Rham complex on a boun
 dary manifold\, the  Galerkin discretization of the electric field integra
 l equation\, and  present a convergence result. We will discuss a recent c
 ontour integral  method [1] to solve the resulting non-linear eigenvalue p
 roblem. Aspects  of integrating so-called ”fast methods” will also be 
 presented\, in  particular Adaptive Cross Approximation [5] and Calderón 
 preconditioning  [4].\\r\\n[1] W.-J. Beyn. An integral method for solving 
 nonlinear eigenvalue problems. Linear Algebra Appl\, 436(10):3839–3863\,
  2012.\\r\\n[2] A. Buffa\, G. Sangalli\, and R. Vázquez. Isogeometric ana
 lysis in  electromagnetics: B-splines approximation. Comput Method Appl M\
 ,  199:1143–1152\, 2010.\\r\\n[3] J. Corno\, C. de Falco\, H. De Gersem\
 , and S. Schöps. Isogeometric  simulation of Lorentz detuning in supercon
 ducting accelerator cavities.  Comput Phys Commun\, 201:1–7\, February 2
 016.\\r\\n[4] J. Li\, D. Dault\, B. Liu\, Y. Tong\, and B. Shanker. Subdiv
 ision  based isogeometric analysis technique for electric field integral  
 equations for simply connected structures. J Comput Phys\, 319:145–162\,
   2016.\\r\\n[5] B. Marussig\, J. Zechner\, G. Beer\, and T.-P. Fries. Fas
 t  isogeometric boundary element method based on independent field  approx
 imation. Comput Method Appl M\, 284:458–488\, 2015.\\r\\nThe work of Ste
 fan Kurz is supported by the ‘Excellence Initiative’  of the German Fe
 deral and State Governments and the Graduate School of  Computational Engi
 neering at Technische Universität Darmstadt.
X-ALT-DESC:Superconducting cavities are standard components of particle  ac
 celerators. Their design is typically described by parametrized  ellipses 
 and determined by mathematical optimization. The simulation  model is subj
 ect to demanding requirements\, such as a relative accuracy  of 10<sup>−
 9</sup> for the resonance frequency of the accelerating mode.  Since the g
 eometry and the electromagnetic fields are smooth\, an  approach in the sp
 irit of isogeometric analysis (IGA) suggests itself. The  geometry is modele
 d by a NURBS mapping\, while the electromagnetic  fields are discretized b
 y the B-spline de Rham complex [2]. An IGA  finite element method (FEM) fo
 r the Maxwell eigenvalue problem was  investigated and showed promising re
 sults [3]. For the same accuracy\,  the number of required degrees of free
 dom was reduced by a factor of 3 to 9 compared to classical FEM. However\
 , CAD systems feature surface  descriptions only\, so the volumetric splin
 e model had to be created  manually.\nTo live up to the promises of IGA\, 
 namely closing the gap between  design and analysis\, we suggest an IGA bo
 undary element method (BEM). We  will review the state-of-the-art of all r
 elevant building blocks. We  will address the B-spline de Rham complex on 
 a boundary manifold\, the  Galerkin discretization of the electric field i
 ntegral equation\, and  present a convergence result. We will discuss a re
 cent contour integral  method [1] to solve the resulting non-linear eigenv
 alue problem. Aspects  of integrating so-called “fast methods” will al
 so be presented\, in  particular Adaptive Cross Approximation [5] and Cald
 erón preconditioning  [4].\n[1] W.-J. Beyn. An integral method for solvin
 g nonlinear eigenvalue problems. Linear Algebra Appl\, 436(10):3839–3863
 \, 2012.\n[2] A. Buffa\, G. Sangalli\, and R. Vázquez. Isogeometric analy
 sis in  electromagnetics: B-splines approximation. Comput Method Appl M\, 
  199:1143–1152\, 2010.\n[3] J. Corno\, C. de Falco\, H. De Gersem\, and 
 S. Schöps. Isogeometric  simulation of Lorentz detuning in superconductin
 g accelerator cavities.  Comput Phys Commun\, 201:1–7\, February 2016.\n
 [4] J. Li\, D. Dault\, B. Liu\, Y. Tong\, and B. Shanker. Subdivision  bas
 ed isogeometric analysis technique for electric field integral  equations 
 for simply connected structures. J Comput Phys\, 319:145–162\,  2016.\n[
 5] B. Marussig\, J. Zechner\, G. Beer\, and T.-P. Fries. Fast  isogeometri
 c boundary element method based on independent field  approximation. Compu
 t Method Appl M\, 284:458–488\, 2015.\nThe work of Stefan Kurz is suppor
 ted by the ‘Excellence Initiative’  of the German Federal and State Go
 vernments and the Graduate School of  Computational Engineering at Technis
 che Universität Darmstadt. 
DTEND;TZID=Europe/Zurich:20170407T120000
END:VEVENT
BEGIN:VEVENT
UID:news231@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T203426
DTSTART;TZID=Europe/Zurich:20170324T110000
SUMMARY:Seminar in Numerical Analysis: Jens Oettershagen (Universität Bonn
 )
DESCRIPTION:In this talk\, we discuss algorithms for multivariate integrati
 on in  reproducing kernel Hilbert spaces (RKHS). Here\, we study optimally
   weighted Monte Carlo integration in Sobolev spaces with dominating mixed
   smoothness. Moreover\, we consider integration in spaces of analytic  fu
 nctions. To this end\, we construct optimally weighted univariate  quadrat
 ure rules with carefully selected points and employ them within a  generali
 zed sparse grid to the problem of multivariate integration.  Applications 
 are given in econometrics and parametric differential  equations with affi
 ne linear diffusion coefficients.
X-ALT-DESC:In this talk\, we discuss algorithms for multivariate integratio
 n in  reproducing kernel Hilbert spaces (RKHS). Here\, we study optimally 
  weighted Monte Carlo integration in Sobolev spaces with dominating mixed 
  smoothness. Moreover\, we consider integration in spaces of analytic  fun
 ctions. To this end\, we construct optimally weighted univariate  quadratu
 re rules with carefully selected points and employ them within a  generaliz
 ed sparse grid to the problem of multivariate integration.  Applications a
 re given in econometrics and parametric differential  equations with affin
 e linear diffusion coefficients. 
DTEND;TZID=Europe/Zurich:20170324T120000
END:VEVENT
BEGIN:VEVENT
UID:news232@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T203725
DTSTART;TZID=Europe/Zurich:20161216T110000
SUMMARY:Seminar in Numerical Analysis: Christian Rieger (Universität Bonn)
DESCRIPTION:In this talk\, we will briefly discuss a general methodology of
   approximation algorithms based on reproducing kernels and their  associa
 ted Hilbert spaces. We will outline how reproducing kernels  naturally ari
 se in many reconstruction problems.\\r\\nFurthermore\, we will present a d
 eterministic a priori (often  exponential) convergence analysis via sampli
 ng inequalities which can be  employed to analyze a large class of regular
 ized reconstruction  schemes.\\r\\nSuch an analysis enables us to derive a
  priori couplings of  various discretization and regularization parameters
 . Such parameters  can range from iteration numbers in numerical linear al
 gebra\, numerical  evaluation of input parameters to rounding errors.\\r\\
 nAn important issue is the choice of the reproducing kernel. We  will disc
 uss some implications of such choices and address the problem  of approxim
 ating the solution of a parametric partial differential  equation using pr
 oblem adapted kernels.\\r\\nThis is partly based on joint work with M. Gri
 ebel and B. Zwicknagl (both Bonn University).
X-ALT-DESC:In this talk\, we will briefly discuss a general methodology of 
  approximation algorithms based on reproducing kernels and their  associat
 ed Hilbert spaces. We will outline how reproducing kernels  naturally aris
 e in many reconstruction problems.\nFurthermore\, we will present a determ
 inistic a priori (often  exponential) convergence analysis via sampling in
 equalities which can be  employed to analyze a large class of regularized 
 reconstruction  schemes.\nSuch an analysis enables us to derive a priori c
 ouplings of  various discretization and regularization parameters. Such pa
 rameters  can range from iteration numbers in numerical linear algebra\, n
 umerical  evaluation of input parameters to rounding errors.\nAn important
  issue is the choice of the reproducing kernel. We  will discuss some impl
 ications of such choices and address the problem  of approximating the sol
 ution of a parametric partial differential  equation using problem adapted
  kernels.\nThis is partly based on joint work with M. Griebel and B. Zwick
 nagl (both Bonn University). 
DTEND;TZID=Europe/Zurich:20161216T120000
END:VEVENT
BEGIN:VEVENT
UID:news233@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T203922
DTSTART;TZID=Europe/Zurich:20161209T110000
SUMMARY:Seminar in Numerical Analysis: Daniel Peterseim (Universität Bonn)
DESCRIPTION:This talk presents a variational approach for the numerical  ho
 mogenization of elliptic partial differential equations with arbitrary  ro
 ugh diffusion coefficients. The trial and test spaces in this  (Petrov-)Gal
 erkin method are derived from linear finite elements on a  coarse mesh of 
 width H by local fine-scale correction. The correction is  based on the pr
 e-computation of cell problems on patches of diameter H  log(1/H). The mod
 erate overlap of the patches suffices to prove O(H)  convergence of the me
 thod without any pre-asymptotic effects. The key  step in the error analys
 is is the proof of the exponential decay of the  so-called fine-scale Gree
 n's function\, i.e.\, the impulse response of the  variational equation in
  the absence of coarse-scale finite element  functions. The method allows 
 the characterization of effective  coefficients on a given target scale of
  numerical resolution. Among  further applications of the approach are pol
 lution-free high-frequency  scattering and explicit time stepping on spati
 ally adaptive meshes.
X-ALT-DESC:This talk presents a variational approach for the numerical  hom
 ogenization of elliptic partial differential equations with arbitrary  rou
 gh diffusion coefficients. The trial and test spaces in this  (Petrov-)Gale
 rkin method are derived from linear finite elements on a  coarse mesh of w
 idth H by local fine-scale correction. The correction is  based on the pre
 -computation of cell problems on patches of diameter H  log(1/H). The mode
 rate overlap of the patches suffices to prove O(H)  convergence of the met
 hod without any pre-asymptotic effects. The key  step in the error analysi
 s is the proof of the exponential decay of the  so-called fine-scale Green
 's function\, i.e.\, the impulse response of the  variational equation in 
 the absence of coarse-scale finite element  functions. The method allows t
 he characterization of effective  coefficients on a given target scale of 
 numerical resolution. Among  further applications of the approach are poll
 ution-free high-frequency  scattering and explicit time stepping on spatia
 lly adaptive meshes. 
DTEND;TZID=Europe/Zurich:20161209T120000
END:VEVENT
BEGIN:VEVENT
UID:news234@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T204059
DTSTART;TZID=Europe/Zurich:20161202T110000
SUMMARY:Seminar in Numerical Analysis: Xavier Antoine (Université de Lorra
 ine)
DESCRIPTION:The aim of this talk is to introduce nonoverlapping Schwarz dom
 ain  decomposition methods for time-harmonic waves (acoustics\,  electroma
 gnetism). In particular\, we will focus on the construction of rational
  Padé transmission boundary conditions to get fast converging  solvers fo
 r prospecting high frequency problems. Some numerical  simulations will il
 lustrate the theoretical developments. The methods  have been implemented 
 in a freely available software called GetDDM.
X-ALT-DESC:The aim of this talk is to introduce nonoverlapping Schwarz doma
 in  decomposition methods for time-harmonic waves (acoustics\,  electromag
 netism). In particular\, we will focus on the construction of rati
 onal Padé transmission boundary conditions to get fast converging  solver
 s for prospecting high frequency problems. Some numerical  simulations wil
 l illustrate the theoretical developments. The methods  have been implemen
 ted in a freely available software called GetDDM. 
DTEND;TZID=Europe/Zurich:20161202T120000
END:VEVENT
BEGIN:VEVENT
UID:news235@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T204408
DTSTART;TZID=Europe/Zurich:20161118T110000
SUMMARY:Seminar in Numerical Analysis: Peter Zaspel (Universität Heidelber
 g / HITS)
DESCRIPTION:Hierarchical matrices approximate specific types of dense matri
 ces\,  e.g.\, from discretized integral equations\, kernel-based approxima
 tion  and Gaussian process regression\, leading to log-linear time complex
 ity  in dense matrix-vector products. To be able to solve large-scale  app
 lications\, H-matrix algorithms have to be parallelized. A special  kind o
 f parallel hardware is many-core processors\, e.g. graphics  processing u
 nits (GPUs). The parallelization of H-matrices on many-core  processors is
  difficult due to the complex nature of the underlying  algorithms that ne
 ed to be mapped to rather simple parallel operations.\\r\\nWe are interest
 ed in using these many-core processors for the full H-matrix construction a
 nd application process. A motivation for this  interest lies in the well-k
 nown claim that future standard processors  will evolve towards many-core 
 hardware\, anyway. In order to be prepared  for this development\, we want
  to discuss many-core parallel formulations  of classical H-matrix algorit
 hms and adaptive cross approximations.\\r\\nIn the presentation\, the use 
 of H-matrices is motivated by the  model application of kernel-based appro
 ximation for the solution of  parametric PDEs\, e.g. PDEs with stochastic 
 coefficients. The main part  of the talk will be dedicated to the challeng
 es of H-matrix  parallelizations on many-core hardware with the specific m
 odel hardware  of GPUs. We propose a set of parallelization strategies whi
 ch overcome  most of these challenges. Benchmarks of our implementation ar
 e used to  explain the effect of different parallel formulations of the al
 gorithms.
X-ALT-DESC:Hierarchical matrices approximate specific types of dense matric
 es\,  e.g.\, from discretized integral equations\, kernel-based approximat
 ion  and Gaussian process regression\, leading to log-linear time complexi
 ty  in dense matrix-vector products. To be able to solve large-scale  appl
 ications\, H-matrix algorithms have to be parallelized. A special  kind of
  parallel hardware is many-core processors\, e.g. graphics  processing un
 its (GPUs). The parallelization of H-matrices on many-core  processors is 
 difficult due to the complex nature of the underlying  algorithms that nee
 d to be mapped to rather simple parallel operations.\nWe are interested in
  using these many-core processors for the full H-matrix construction and ap
 plication process. A motivation for this  interest lies in the well-known 
 claim that future standard processors  will evolve towards many-core hardw
 are\, anyway. In order to be prepared  for this development\, we want to d
 iscuss many-core parallel formulations  of classical H-matrix algorithms a
 nd adaptive cross approximations.\nIn the presentation\, the use of H-matr
 ices is motivated by the  model application of kernel-based approximation 
 for the solution of  parametric PDEs\, e.g. PDEs with stochastic coefficie
 nts. The main part  of the talk will be dedicated to the challenges of H-m
 atrix  parallelizations on many-core hardware with the specific model hard
 ware  of GPUs. We propose a set of parallelization strategies which overco
 me  most of these challenges. Benchmarks of our implementation are used to
   explain the effect of different parallel formulations of the algorithms.
  
DTEND;TZID=Europe/Zurich:20161118T120000
END:VEVENT
BEGIN:VEVENT
UID:news236@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T204738
DTSTART;TZID=Europe/Zurich:20161104T110000
SUMMARY:Seminar in Numerical Analysis: Thorsten Hohage (Universität Götti
 ngen)
DESCRIPTION:Inverse problems usually consist in finding causes for observed
   effects. The essential difficulty in solving inverse problems is  ill-po
 sedness: causes typically do not depend continuously on their  effects alt
 hough vice versa effects typically depend continuously on  causes. To avoi
 d infinite amplification of measurement errors\,  regularization methods h
 ave to be employed to solve inverse problems  numerically. The aim of regu
 larization theory is to analyze the  convergence and speed of convergence 
 of such methods as the noise level  tends to 0.\\r\\nOver the last years V
 ariational Source Conditions (VSCs) have become a  standard assumption for
  the analysis of these methods. Compared to  spectral source conditions th
 ey have a number of advantages: They can be  used for general nonquadratic
  penalty and data fidelity terms\, lead to  simpler proofs\, are often not
  only sufficient\, but even necessary for  certain convergence rates\, and
  they do not involve the derivative of the  forward operator (and hence do
  not require restrictive assumptions such  as a tangential cone condition)
 . However\, so far only a few sufficient  conditions for VSCs for specific i
 nverse problems are known.\\r\\nTo overcome this drawback\, we propose a g
 eneral strategy for the  verification of VSCs\, which consists of two suff
 icient conditions: One  of them describes the smoothness of the solution\,
  and the other one the  degree of ill-posedness of the operator. For a num
 ber of important  linear inverse problems this leads to equivalent charact
 erizations of  VSCs in terms of Besov spaces and necessary and sufficient 
 conditions  for rates of convergence. We also discuss the application of o
 ur  strategy to nonlinear parameter identification and inverse medium  sca
 ttering problems where it provides sufficient conditions for VSCs in  term
 s of standard function spaces.
X-ALT-DESC:Inverse problems usually consist in finding causes for observed 
  effects. The essential difficulty in solving inverse problems is  ill-pos
 edness: causes typically do not depend continuously on their  effects alth
 ough vice versa effects typically depend continuously on  causes. To avoid
  infinite amplification of measurement errors\,  regularization methods ha
 ve to be employed to solve inverse problems  numerically. The aim of regul
 arization theory is to analyze the  convergence and speed of convergence o
 f such methods as the noise level  tends to 0.\nOver the last years Variat
 ional Source Conditions (VSCs) have become a  standard assumption for the 
 analysis of these methods. Compared to  spectral source conditions they ha
 ve a number of advantages: They can be  used for general nonquadratic pena
 lty and data fidelity terms\, lead to  simpler proofs\, are often not only
  sufficient\, but even necessary for  certain convergence rates\, and they
  do not involve the derivative of the  forward operator (and hence do not 
 require restrictive assumptions such  as a tangential cone condition). How
 ever\, so far only a few sufficient  conditions for VSCs for specific invers
 e problems are known.\nTo overcome this drawback\, we propose a general st
 rategy for the  verification of VSCs\, which consists of two sufficient co
 nditions: One  of them describes the smoothness of the solution\, and the 
 other one the  degree of ill-posedness of the operator. For a number of im
 portant  linear inverse problems this leads to equivalent characterization
 s of  VSCs in terms of Besov spaces and necessary and sufficient condition
 s  for rates of convergence. We also discuss the application of our  strat
 egy to nonlinear parameter identification and inverse medium  scattering p
 roblems where it provides sufficient conditions for VSCs in  terms of stan
 dard function spaces. 
DTEND;TZID=Europe/Zurich:20161104T120000
END:VEVENT
BEGIN:VEVENT
UID:news237@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T204932
DTSTART;TZID=Europe/Zurich:20161021T110000
SUMMARY:Seminar in Numerical Analysis: Steffen Börm (Universität Kiel)
DESCRIPTION:In the context of stochastic partial differential equations\, we
  are  frequently faced with equations in high-dimensional domains. In orde
 r to  obtain efficient numerical methods for these equations\, we have to 
 take  local regularity properties of the solution into account\, e.g.\, by
   using locally refined finite element meshes. Extending standard meshing 
  algorithms to higher dimensions poses a significant challenge.\\r\\nWe pr
 opose an alternative: the Galerkin trial space is  constructed using a par
 tition of unity. By multiplying local cut-off  functions with polynomials\
 , we can obtain discretizations of arbitrary  order\, and local grid refin
 ement can be realized by reducing the  supports of the cut-off functions. 
 The main challenge lies in the  construction of the corresponding system m
 atrix\, since even determining  the sparsity pattern involves interactions
  between cut-off functions on  different levels of the mesh hierarchy.\\r\
 \nOur approach leads to a sparse system matrix\, the basis functions  are 
 convenient tensor products of functions on lower-dimensional  domains\, an
 d local regularity can be exploited by variable-order  interpolation in or
 der to obtain close to optimal complexity.
X-ALT-DESC:In the context of stochastic partial differential equations\,
  we are frequently faced with equations in high-dimensional domains. In
  order
  to  obtain efficient numerical methods for these equations\, we have to t
 ake  local regularity properties of the solution into account\, e.g.\, by 
  using locally refined finite element meshes. Extending standard meshing  
 algorithms to higher dimensions poses a significant challenge.\nWe propose
  an alternative: the Galerkin trial space is  constructed using a partitio
 n of unity. By multiplying local cut-off  functions with polynomials\, we 
 can obtain discretizations of arbitrary  order\, and local grid refinement
  can be realized by reducing the  supports of the cut-off functions. The m
 ain challenge lies in the  construction of the corresponding system matrix
 \, since even determining  the sparsity pattern involves interactions betw
 een cut-off functions on  different levels of the mesh hierarchy.\nOur app
 roach leads to a sparse system matrix\, the basis functions  are convenien
 t tensor products of functions on lower-dimensional  domains\, and local r
 egularity can be exploited by variable-order  interpolation in order to ob
 tain close to optimal complexity. 
DTEND;TZID=Europe/Zurich:20161021T120000
END:VEVENT
BEGIN:VEVENT
UID:news238@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T205111
DTSTART;TZID=Europe/Zurich:20161007T110000
SUMMARY:Seminar in Numerical Analysis: Fabio Baruffa (Leibniz-Rechenzentrum
 \, München)
DESCRIPTION:In the framework of the Intel Parallel Computing Centre at Leib
 niz  Supercomputing Centre (LRZ)\, Fabio Baruffa will present recent resul
 ts  on the performance optimization of Gadget-3 on multi- and many-core  co
 mputer architectures\, including the new Intel Xeon Phi processor of  seco
 nd generation\, codenamed Knights Landing (KNL). An overview of  results f
 or node-level scalability\, vector efficiency and performance  is present
 ed here. Our work is based on an isolated\, representative  code kernel\, 
 where threading parallelism\, data locality and  vectorization efficiency 
 was improved. The node-level parallel  efficiency improved by factors rang
 ing from 5x to 16x on Haswell and KNL  nodes\, respectively. Moreover\, a 
 vectorization efficiency of 80% (6.6x)  on a prototypical target loop of t
 he code is obtained without  programming using intrinsic instructions.
X-ALT-DESC:In the framework of the Intel Parallel Computing Centre at Leibn
 iz  Supercomputing Centre (LRZ)\, Fabio Baruffa will present recent result
 s  on the performance optimization of Gadget-3 on multi- and many-core  com
 puter architectures\, including the new Intel Xeon Phi processor of  secon
 d generation\, codenamed Knights Landing (KNL). An overview of  results fo
 r node-level scalability\, vector efficiency and performance  are presente
 d here. Our work is based on an isolated\, representative  code kernel\, w
 here threading parallelism\, data locality and  vectorization efficiency w
 as improved. The node-level parallel  efficiency improved by factors rangi
 ng from 5x to 16x on Haswell and KNL  nodes\, respectively. Moreover\, a v
 ectorization efficiency of 80% (6.6x)  on a prototypical target loop of th
 e code is obtained without  programming using intrinsic instructions.
DTEND;TZID=Europe/Zurich:20161007T120000
END:VEVENT
BEGIN:VEVENT
UID:news239@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T205454
DTSTART;TZID=Europe/Zurich:20160527T110000
SUMMARY:Seminar in Numerical Analysis: Ana Djurdjevac (FU Berlin)
DESCRIPTION:Sometimes the partial differential equations with random coeffi
 cients can be better formulated on moving domains\, especially in biologic
 al applications. We will introduce and analyse the advection-diffusion equ
 ations with random coefficients on moving hypersurfaces. Under suitable re
 gularity assumptions\, using the Banach-Necas-Babuska theorem\, we will
  prove existence and uniqueness of the weak solution and also give some r
 egularity results about the solution. For discretization in space\, we wil
 l apply the evolving surface finite element method. In order to deal with 
 uncertainty\, we will use the Monte Carlo method. Furthermore\, we plan to dis
 cuss the case when the velocity of the hypersurface is random. This is a j
 oint work with Charles M. Elliott (University of Warwick\, UK)\, Ralf Korn
 huber (Free University Berlin\, Germany) and Thomas Ranner (University of 
 Leeds\, UK).
X-ALT-DESC:Sometimes the partial differential equations with random coeffic
 ients can be better formulated on moving domains\, especially in biologica
 l applications. We will introduce and analyse the advection-diffusion equa
 tions with random coefficients on moving hypersurfaces. Under suitable reg
 ularity assumptions\, using the Banach-Necas-Babuska theorem\, we will prove e
 xistence and uniqueness of the weak solution and also we will give some re
 gularity results about the solution. For discretization in space\, we will
  apply the evolving surface finite element method. In order to deal with u
 ncertainty\, we will use the Monte Carlo method. Furthermore\, we plan to disc
 uss the case when the velocity of the hypersurface is random.<br /><br />
 This is a joint work with Charles M. Elliott (University of Warwick\, UK)\
 , Ralf Kornhuber (Free University Berlin\, Germany) and Thomas Ranner (Uni
 versity of Leeds\, UK). 
DTEND;TZID=Europe/Zurich:20160527T120000
END:VEVENT
BEGIN:VEVENT
UID:news240@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T205652
DTSTART;TZID=Europe/Zurich:20160520T110000
SUMMARY:Seminar in Numerical Analysis: Julien Diaz (INRIA)
DESCRIPTION:Seismic imaging techniques such as Reverse Time Migration (RTM)
  or Full  Waveform Inversion (FWI) can be applied in time domain or in fre
 quency domain. After having recalled the principle of RTM and discussed
  the advantages and drawbacks of both approaches\, we focus on the frequ
 ency domain. We show the usefulness of Discontinuous Galerkin methods fo
 r solving the acoustic and elastodynamic wave equations\, and we apply t
 he Interior Penalty Discont
 inuous Galerkin method (IPDG) to the modelling of elasto/acoustic  couplin
 g. We then present an alternative method\, the Hybridizable  Discontinuous
  Galerkin method (HDG)\, which reduces the number of  unknowns of the glob
 al linear system thanks to the introduction of a  Lagrange multiplier defi
 ned only on the faces of the cells of the mesh.  We illustrate the efficie
 ncy of HDG with respect to IPDG thanks to  comparisons on academic and ind
 ustrial test cases.
X-ALT-DESC:Seismic imaging techniques such as Reverse Time Migration (RTM) 
 or Full  Waveform Inversion (FWI) can be applied in time domain or in freq
 uency domain. After having recalled the principle of RTM and discussed
  the advantages and drawbacks of both approaches\, we focus on the freq
 uency domain. We show the usefulness of Discontinuous Galerkin methods
  for solving the acoustic and elastodynamic wave equations\, and we app
 ly the Interior Penalty Dis
 continuous Galerkin method (IPDG) to the modelling of elasto/acoustic  cou
 pling. We then present an alternative method\, the Hybridizable  Discontin
 uous Galerkin method (HDG)\, which reduces the number of  unknowns of the 
 global linear system thanks to the introduction of a  Lagrange multiplier 
 defined only on the faces of the cells of the mesh.  We illustrate the eff
 iciency of HDG with respect to IPDG thanks to  comparisons on academic and
  industrial test cases. 
DTEND;TZID=Europe/Zurich:20160520T120000
END:VEVENT
BEGIN:VEVENT
UID:news241@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T205837
DTSTART;TZID=Europe/Zurich:20160513T110000
SUMMARY:Seminar in Numerical Analysis: Frédéric Nataf (CNRS Paris 6)
DESCRIPTION:Optimized Schwarz methods (OSM) are very popular methods which 
 were  introduced by P.L. Lions for elliptic problems and B. Despres for  p
 ropagative wave phenomena. One drawback is the lack of theoretical  result
 s for variable-coefficient problems and overlapping  decompositions. We b
 uild here a coarse space for which the convergence  rate of the two-level 
 method is guaranteed regardless of the regularity  of the coefficients. We
  do this by introducing a symmetrized variant of  the ORAS (Optimized Rest
 ricted Additive Schwarz) algorithm. Numerical  results on nearly incompres
 sible elasticity and Stokes system are shown  for systems with hundreds
  of millions of degrees of freedom on high-performance computers.
X-ALT-DESC:Optimized Schwarz methods (OSM) are very popular methods which w
 ere  introduced by P.L. Lions for elliptic problems and B. Despres for  pr
 opagative wave phenomena. One drawback is the lack of theoretical  results
  for variable-coefficient problems and overlapping  decompositions. We bu
 ild here a coarse space for which the convergence  rate of the two-level m
 ethod is guaranteed regardless of the regularity  of the coefficients. We 
 do this by introducing a symmetrized variant of  the ORAS (Optimized Restr
 icted Additive Schwarz) algorithm. Numerical  results on nearly incompress
 ible elasticity and Stokes system are shown  for systems with hundreds
  of millions of degrees of freedom on high-performance computers.
DTEND;TZID=Europe/Zurich:20160513T120000
END:VEVENT
BEGIN:VEVENT
UID:news242@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T210058
DTSTART;TZID=Europe/Zurich:20160429T110000
SUMMARY:Seminar in Numerical Analysis: Andreas Rieder (Karlsruhe Institute 
 of Technology)
DESCRIPTION:Electrical impedance tomography is a non-invasive method for im
 aging the electrical conductivity of an object from voltage measurements o
 n its surface. This inverse problem suffers threefold: it is highly nonlin
 ear\, severely ill-posed\, and highly under-determined. To obtain yet reas
 onable reconstructions\, maximal information needs to be extracted from th
 e data. We will present and analyze a holistic Newton-type method which ad
 dresses all these challenges. Finally\, we demonstrate the performance of 
 this concept numerically for simulated and measured data.
X-ALT-DESC:Electrical impedance tomography is a non-invasive method for ima
 ging the electrical conductivity of an object from voltage measurements on
  its surface. This inverse problem suffers threefold: it is highly nonline
 ar\, severely ill-posed\, and highly under-determined. To obtain yet reaso
 nable reconstructions\, maximal information needs to be extracted from the
  data. We will present and analyze a holistic Newton-type method which add
 resses all these challenges. Finally\, we demonstrate the performance of t
 his concept numerically for simulated and measured data. 
DTEND;TZID=Europe/Zurich:20160429T120000
END:VEVENT
BEGIN:VEVENT
UID:news246@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T211540
DTSTART;TZID=Europe/Zurich:20151204T110000
SUMMARY:Seminar in Numerical Analysis: Sebastian Ullmann (TU Darmstadt)
DESCRIPTION:Surrogate models can be used to decrease  the computational cos
 t for uncertainty quantification in the context of  parabolic PDEs with st
 ochastic data. Projection-based reduced-order  modeling provides surrogate
 s which inherit the spatial structure of the  solution as well as the unde
 rlying physics. In my talk I focus on the  type of models that is derived 
 by a Galerkin projection onto a proper  orthogonal decomposition (POD) of 
 snapshots of the solution.\\r\\nStandard  techniques assume that all snaps
 hots use one and the same spatial mesh.  I present a generalization for un
 steady adaptive finite elements\, where  the mesh can change from time ste
 p to time step and\, in the case of  stochastic sampling\, from realizatio
 n to realization. I will answer the  following questions: How can the codi
 ng effort for creating such a  reduced-order model be minimized? How can t
 he union of all snapshot  meshes be avoided? What is the main difference b
 etween static and  adaptive snapshots in the error analysis of Galerkin re
 duced-order  models?\\r\\nAs a numerical test case I consider a two-dimens
 ional  viscous Burgers equation with smooth initial data multiplied by a  
 normally distributed random variable. The results illustrate the  converge
 nce properties with respect to the number of POD basis functions  and indi
 cate possible savings of computation time.
X-ALT-DESC:Surrogate models can be used to decrease the computational
  cost for uncertainty quantification in the context of parabolic PDEs
  with stochastic data. Projection-based reduced-order modeling provides
  surrogates which inherit the spatial structure of the solution as well
  as the underlying physics. In my talk I focus on the type of models
  that is derived by a Galerkin projection onto a proper orthogonal
  decomposition (POD) of snapshots of the solution.\nStandard techniques
  assume that all snapshots use one and the same spatial mesh. I present
  a generalization for unsteady adaptive finite elements\, where the
  mesh can change from time step to time step and\, in the case of
  stochastic sampling\, from realization to realization. I will answer
  the following questions: How can the coding effort for creating such a
  reduced-order model be minimized? How can the union of all snapshot
  meshes be avoided? What is the main difference between static and
  adaptive snapshots in the error analysis of Galerkin reduced-order
  models?\nAs a numerical test case I consider a two-dimensional viscous
  Burgers equation with smooth initial data multiplied by a normally
  distributed random variable. The results illustrate the convergence
  properties with respect to the number of POD basis functions and
  indicate possible savings of computation time.
DTEND;TZID=Europe/Zurich:20151204T120000
END:VEVENT
BEGIN:VEVENT
UID:news243@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T210827
DTSTART;TZID=Europe/Zurich:20151201T121000
SUMMARY:Seminar CCCS: Prof. M. Griebel (University of Bonn)
DESCRIPTION:Polymeric viscoelastic fluids can be modelled using the
  Navier-Stokes equations on the macroscopic scale with an additional
  stress tensor and a higher-dimensional Fokker-Planck equation or a
  corresponding stochastic PDE on the microscopic scale. Here\, the
  dimension of the microscopic problem is 3N\, where N+1 is the number
  of beads in the underlying spring-bead model for viscoelasticity. For
  the numerical treatment of the overall system\, we couple the
  stochastic Brownian configuration field method with our fully
  parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. Due to
  the microscopic problem\, we directly encounter the curse of
  dimensionality. To this end\, we suggest the so-called
  dimension-adaptive sparse grid approach\, which allows one to deal
  with moderate-sized subproblems in an adaptive fashion. Furthermore\,
  all arising subproblems can be treated fully in parallel. This way\,
  reliable multiscale simulations of viscoelastic flow problems for
  microscopic models with N>1 become possible for the first time. This
  is joint work with Alexander Rüttgers from Bonn.
X-ALT-DESC:Polymeric viscoelastic fluids can be modelled using the
  Navier-Stokes equations on the macroscopic scale with an additional
  stress tensor and a higher-dimensional Fokker-Planck equation or a
  corresponding stochastic PDE on the microscopic scale. Here\, the
  dimension of the microscopic problem is 3N\, where N+1 is the number
  of beads in the underlying spring-bead model for viscoelasticity. For
  the numerical treatment of the overall system\, we couple the
  stochastic Brownian configuration field method with our fully
  parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. Due to
  the microscopic problem\, we directly encounter the curse of
  dimensionality. To this end\, we suggest the so-called
  dimension-adaptive sparse grid approach\, which allows one to deal
  with moderate-sized subproblems in an adaptive fashion. Furthermore\,
  all arising subproblems can be treated fully in parallel. This way\,
  reliable multiscale simulations of viscoelastic flow problems for
  microscopic models with N&gt\;1 become possible for the first time.
  This is joint work with Alexander Rüttgers from Bonn.
DTEND;TZID=Europe/Zurich:20151201T131500
END:VEVENT
BEGIN:VEVENT
UID:news244@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T211049
DTSTART;TZID=Europe/Zurich:20151120T110000
SUMMARY:Seminar in Numerical Analysis: Stefan Sauter (University of Zürich
 )
DESCRIPTION:In this talk we consider an intrinsic approach for the direct c
 omputation of the fluxes for problems in potential theory. We present a ge
 neral method for the derivation of intrinsic conforming and non-conforming
  finite element spaces and appropriate lifting operators for the evaluatio
 n of the right-hand side from abstract theoretical principles related to t
 he second Strang Lemma. This intrinsic finite element method is analyzed a
 nd convergence with optimal order is proved.
X-ALT-DESC:In this talk we consider an intrinsic approach for the direct co
 mputation of the fluxes for problems in potential theory. We present a gen
 eral method for the derivation of intrinsic conforming and non-conforming 
 finite element spaces and appropriate lifting operators for the evaluation
  of the right-hand side from abstract theoretical principles related to th
 e second Strang Lemma. This intrinsic finite element method is analyzed an
 d convergence with optimal order is proved. 
DTEND;TZID=Europe/Zurich:20151120T120000
END:VEVENT
BEGIN:VEVENT
UID:news245@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T211238
DTSTART;TZID=Europe/Zurich:20151106T110000
SUMMARY:Seminar in Numerical Analysis: Sanna Mönkölä (University of Jyv
 äskylä)
DESCRIPTION:A wide range of numerical methods has been used for solving
  time-harmonic wave equations. Typically\, the methods are based on
  complex-valued formulations leading to large-scale indefinite linear
  equations. An alternative is to simulate time-dependent equations in
  time\, until the time-harmonic solution is reached. However\, this
  approach suffers from poor convergence\, particularly in the case of
  large wavenumbers and complicated domains. We accelerate the
  convergence rate by employing a controllability method. The problem is
  formulated as a least-squares optimization problem\, which is solved
  by the conjugate gradient algorithm. The efficiency of the method
  relies on smart discretizations. For spatial discretization we use the
  spectral element method or the discrete exterior calculus\, and for
  time evolution we consider leap-frog style discretization with
  non-uniform timesteps or higher-order schemes. For constructing
  spatially isotropic grids for complex geometries\, we use non-uniform
  polygonal structures imitating the close packing in crystal lattices.
X-ALT-DESC:A wide range of numerical methods has been used for solving
  time-harmonic wave equations. Typically\, the methods are based on
  complex-valued formulations leading to large-scale indefinite linear
  equations. An alternative is to simulate time-dependent equations in
  time\, until the time-harmonic solution is reached. However\, this
  approach suffers from poor convergence\, particularly in the case of
  large wavenumbers and complicated domains. We accelerate the
  convergence rate by employing a controllability method. The problem is
  formulated as a least-squares optimization problem\, which is solved
  by the conjugate gradient algorithm. The efficiency of the method
  relies on smart discretizations. For spatial discretization we use the
  spectral element method or the discrete exterior calculus\, and for
  time evolution we consider leap-frog style discretization with
  non-uniform timesteps or higher-order schemes. For constructing
  spatially isotropic grids for complex geometries\, we use non-uniform
  polygonal structures imitating the close packing in crystal lattices.
DTEND;TZID=Europe/Zurich:20151106T120000
END:VEVENT
BEGIN:VEVENT
UID:news247@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T211947
DTSTART;TZID=Europe/Zurich:20151016T110000
SUMMARY:Seminar in Numerical Analysis: Wolfgang Hornfeck (DLR Köln)
DESCRIPTION:Crystallography\, as seen from a mathematician's viewpoint\,
  is mainly concerned with "Sphere Packings\, Lattices and Groups" (as
  is the title of a famous book by Conway and Sloane). It is clear\,
  however\, that there are many other connections between
  crystallography and mathematics\, ranging from more general
  applications of graph theory to more special ones such as differential
  geometry. In my talk I want to present some explorations into
  applications of uniform distribution theory within a crystallographic
  context.
X-ALT-DESC:Crystallography\, as seen from a mathematician's viewpoint\,
  is mainly concerned with &quot\;Sphere Packings\, Lattices and
  Groups&quot\; (as is the title of a famous book by Conway and Sloane).
  It is clear\, however\, that there are many other connections between
  crystallography and mathematics\, ranging from more general
  applications of graph theory to more special ones such as differential
  geometry. In my talk I want to present some explorations into
  applications of uniform distribution theory within a crystallographic
  context.
DTEND;TZID=Europe/Zurich:20151016T120000
END:VEVENT
BEGIN:VEVENT
UID:news248@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T212124
DTSTART;TZID=Europe/Zurich:20151009T110000
SUMMARY:Seminar in Numerical Analysis: Andrea Barth (University of Stuttgar
 t)
DESCRIPTION:Multilevel Monte Carlo methods were introduced to lower the
  computational complexity of the calculation of\, for instance\, the
  expectation of a random quantity. More precisely\, in comparison to
  standard Monte Carlo methods\, the computational complexity is
  (asymptotically) equal to the calculation of one sample of the problem
  on the finest grid used. The price to pay for this increase in
  efficiency is that the problem needs to be solved not only on one
  (fine) grid\, but on a hierarchy of discretizations. This implies\,
  first\, that the solution has to be represented on all grids and\,
  second\, that the variance of the detail (the difference of
  approximate solutions on two consecutive grids) converges with the
  refinement of the grid.\\r\\nIn this talk\, I will give an
  introduction to multilevel Monte Carlo methods in the case when the
  variance of the detail does not converge uniformly. The idea is
  illustrated by the calculation of the expectation for an elliptic
  problem with a random multiscale coefficient and then extended to
  approximations of statistical solutions to the Navier-Stokes
  equations.
X-ALT-DESC:Multilevel Monte Carlo methods were introduced to lower the
  computational complexity of the calculation of\, for instance\, the
  expectation of a random quantity. More precisely\, in comparison to
  standard Monte Carlo methods\, the computational complexity is
  (asymptotically) equal to the calculation of one sample of the problem
  on the finest grid used. The price to pay for this increase in
  efficiency is that the problem needs to be solved not only on one
  (fine) grid\, but on a hierarchy of discretizations. This implies\,
  first\, that the solution has to be represented on all grids and\,
  second\, that the variance of the detail (the difference of
  approximate solutions on two consecutive grids) converges with the
  refinement of the grid.\nIn this talk\, I will give an introduction to
  multilevel Monte Carlo methods in the case when the variance of the
  detail does not converge uniformly. The idea is illustrated by the
  calculation of the expectation for an elliptic problem with a random
  multiscale coefficient and then extended to approximations of
  statistical solutions to the Navier-Stokes equations.
DTEND;TZID=Europe/Zurich:20151009T120000
END:VEVENT
BEGIN:VEVENT
UID:news249@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T212633
DTSTART;TZID=Europe/Zurich:20150925T110000
SUMMARY:Seminar in Numerical Analysis: Francois Bouchut (Université Paris-
 Est)
DESCRIPTION:We study approximations by conforming methods of the solution t
 o variational inequalities which arise in the context of inviscid incompre
 ssible Bingham type non-Newtonian fluid flows and of the total variation f
 low problem.\\r\\nIn the general context of a convex lower semi-continuous
  functional on a Hilbert space\, we prove the convergence of time implicit
  space conforming approximations\, without viscosity and for non-smooth da
 ta. Then we introduce a general class of total variation functionals\, for
  which we can apply the regularization method. We consider the time implic
 it regularized\, linearized or not\, algorithms\, and prove their converge
 nce for general total variation functionals.
X-ALT-DESC:We study approximations by conforming methods of the solution to
  variational inequalities which arise in the context of inviscid incompres
 sible Bingham type non-Newtonian fluid flows and of the total variation fl
 ow problem.\nIn the general context of a convex lower semi-continuous func
 tional on a Hilbert space\, we prove the convergence of time implicit spac
 e conforming approximations\, without viscosity and for non-smooth data. T
 hen we introduce a general class of total variation functionals\, for whic
 h we can apply the regularization method. We consider the time implicit re
 gularized\, linearized or not\, algorithms\, and prove their convergence f
 or general total variation functionals. 
DTEND;TZID=Europe/Zurich:20150925T120000
END:VEVENT
BEGIN:VEVENT
UID:news250@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T222212
DTSTART;TZID=Europe/Zurich:20150901T110000
SUMMARY:Seminar in Numerical Analysis: Abdul-Lateef Haji-Ali (King Abdullah
  University)
DESCRIPTION:I discuss using single-level and multilevel Monte Carlo
  methods to compute quantities of interest of a stochastic particle
  system in the mean-field. In this context\, the stochastic particles
  follow a coupled system of Ito stochastic differential equations
  (SDEs). Moreover\, this stochastic particle system converges to a
  stochastic mean-field limit as the number of particles tends to
  infinity.\\r\\nIn 2012\, my Master's thesis developed different
  versions of Multilevel Monte Carlo (MLMC) for particle systems\, both
  with respect to time steps and number of particles\, and proposed
  using particle antithetic estimators for MLMC. In that thesis\, I
  showed moderate savings of MLMC compared to Monte Carlo. In this
  talk\, I recall and expand on these results\, emphasizing the
  importance of antithetic estimators in stochastic particle systems. I
  will finally conclude by proposing the use of our recent Multi-index
  Monte Carlo method to obtain improved convergence rates.
X-ALT-DESC:I discuss using single-level and multilevel Monte Carlo
  methods to compute quantities of interest of a stochastic particle
  system in the mean-field. In this context\, the stochastic particles
  follow a coupled system of Ito stochastic differential equations
  (SDEs). Moreover\, this stochastic particle system converges to a
  stochastic mean-field limit as the number of particles tends to
  infinity.\nIn 2012\, my Master's thesis developed different versions
  of Multilevel Monte Carlo (MLMC) for particle systems\, both with
  respect to time steps and number of particles\, and proposed using
  particle antithetic estimators for MLMC. In that thesis\, I showed
  moderate savings of MLMC compared to Monte Carlo. In this talk\, I
  recall and expand on these results\, emphasizing the importance of
  antithetic estimators in stochastic particle systems. I will finally
  conclude by proposing the use of our recent Multi-index Monte Carlo
  method to obtain improved convergence rates.
DTEND;TZID=Europe/Zurich:20150901T120000
END:VEVENT
BEGIN:VEVENT
UID:news251@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T213252
DTSTART;TZID=Europe/Zurich:20150529T110000
SUMMARY:Seminar in Numerical Analysis: Victorita Dolean (University of Nice
 )
DESCRIPTION:For linear problems\, domain decomposition methods can be
  used directly as iterative solvers\, but also as preconditioners for
  Krylov methods. In practice\, Krylov acceleration is almost always
  used\, since the Krylov method finds a much better residual polynomial
  than the stationary iteration\, and thus converges much faster. We
  show in this work that also for non-linear problems\, domain
  decomposition methods can either be used directly as iterative
  solvers\, or one can use them as preconditioners for Newton’s
  method. For the concrete case of the parallel Schwarz method\, we show
  that we obtain a preconditioner we call RASPEN (Restricted Additive
  Schwarz Preconditioned Exact Newton)\, which is similar to ASPIN
  (Additive Schwarz Preconditioned Inexact Newton)\, but with all
  components directly defined by the iterative method. This has the
  advantage that RASPEN already converges when used as an iterative
  solver\, in contrast to ASPIN\, and we thus get a substantially better
  preconditioner for Newton’s method. We illustrate our findings with
  numerical results on the Forchheimer equation and a non-linear
  diffusion problem.
X-ALT-DESC:For linear problems\, domain decomposition methods can be
  used directly as iterative solvers\, but also as preconditioners for
  Krylov methods. In practice\, Krylov acceleration is almost always
  used\, since the Krylov method finds a much better residual polynomial
  than the stationary iteration\, and thus converges much faster. We
  show in this work that also for non-linear problems\, domain
  decomposition methods can either be used directly as iterative
  solvers\, or one can use them as preconditioners for Newton’s
  method. For the concrete case of the parallel Schwarz method\, we show
  that we obtain a preconditioner we call RASPEN (Restricted Additive
  Schwarz Preconditioned Exact Newton)\, which is similar to ASPIN
  (Additive Schwarz Preconditioned Inexact Newton)\, but with all
  components directly defined by the iterative method. This has the
  advantage that RASPEN already converges when used as an iterative
  solver\, in contrast to ASPIN\, and we thus get a substantially better
  preconditioner for Newton’s method. We illustrate our findings with
  numerical results on the Forchheimer equation and a non-linear
  diffusion problem.
DTEND;TZID=Europe/Zurich:20150529T120000
END:VEVENT
BEGIN:VEVENT
UID:news252@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T213451
DTSTART;TZID=Europe/Zurich:20150522T110000
SUMMARY:Seminar in Numerical Analysis: Stéphane Lanteri (Inria Sophia Anti
 polis)
DESCRIPTION:We present a discontinuous finite element (discontinuous
  Galerkin) time-domain solver for the numerical simulation of the
  interaction of light with nanometer-scale structures. The method
  relies on a compact-stencil high-order interpolation of the
  electromagnetic field components within each cell of an unstructured
  tetrahedral mesh. This piecewise polynomial numerical approximation is
  allowed to be discontinuous from one mesh cell to another\, and the
  consistency of the global approximation is obtained thanks to the
  definition of appropriate numerical traces of the fields on a face
  shared by two neighboring cells. Time integration is achieved using an
  explicit scheme\, and no global mass matrix inversion is required to
  advance the solution at each time step. Moreover\, the resulting
  time-domain solver is particularly well adapted to parallel computing.
  The proposed method is an extension of the method that we initially
  proposed in [1] for the simulation of electromagnetic wave propagation
  in nondispersive heterogeneous media at microwave
  frequencies.\\r\\nThis is joint work with Claire Scheid and Jonathan
  Viquerat.\\r\\n[1] Fezoui\, L.\, S. Lanteri\, S. Lohrengel\, and S.
  Piperno. Convergence and stability of a discontinuous Galerkin
  time-domain method for the 3D heterogeneous Maxwell equations on
  unstructured meshes. ESAIM: Math. Model. Numer. Anal.\, Vol. 39\, No.
  6\, 1149-1176\, 2005.
X-ALT-DESC:We present a discontinuous finite element (discontinuous
  Galerkin) time-domain solver for the numerical simulation of the
  interaction of light with nanometer-scale structures. The method
  relies on a compact-stencil high-order interpolation of the
  electromagnetic field components within each cell of an unstructured
  tetrahedral mesh. This piecewise polynomial numerical approximation is
  allowed to be discontinuous from one mesh cell to another\, and the
  consistency of the global approximation is obtained thanks to the
  definition of appropriate numerical traces of the fields on a face
  shared by two neighboring cells. Time integration is achieved using an
  explicit scheme\, and no global mass matrix inversion is required to
  advance the solution at each time step. Moreover\, the resulting
  time-domain solver is particularly well adapted to parallel computing.
  The proposed method is an extension of the method that we initially
  proposed in [1] for the simulation of electromagnetic wave propagation
  in nondispersive heterogeneous media at microwave frequencies.\nThis
  is joint work with Claire Scheid and Jonathan Viquerat.\n[1] Fezoui\,
  L.\, S. Lanteri\, S. Lohrengel\, and S. Piperno. Convergence and
  stability of a discontinuous Galerkin time-domain method for the 3D
  heterogeneous Maxwell equations on unstructured meshes. ESAIM: Math.
  Model. Numer. Anal.\, Vol. 39\, No. 6\, 1149-1176\, 2005.
DTEND;TZID=Europe/Zurich:20150522T120000
END:VEVENT
BEGIN:VEVENT
UID:news253@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T213628
DTSTART;TZID=Europe/Zurich:20150508T110000
SUMMARY:Seminar in Numerical Analysis: Christian Stohrer (ENSTA ParisTech)
DESCRIPTION:Electromagnetic phenomena can be modeled using Maxwell's
  equations. In particular\, we are interested in harmonic
  electromagnetic waves propagating through a highly oscillatory
  material such as fiber-reinforced plastic. The permittivity and the
  permeability of such materials vary on a microscopic length scale.
  Standard edge finite elements are of limited use\, since the
  microscopic structure requires very refined meshes to provide
  satisfying approximations. This may easily result in computational
  costs difficult to manage. However\, if one is only interested in the
  effective behavior of the solution and not in the microscopic
  details\, homogenization techniques can be used to overcome these
  difficulties. In this talk we first review analytical homogenization
  results for Maxwell's equations. The goal of this theory is to replace
  the oscillatory material with an effective one\, such that the overall
  behavior of the solution remains unchanged. The arising equations can
  be solved with standard numerical methods because the effective
  material no longer depends on the micro scale. In the second part of
  the talk we propose a multiscale scheme following the framework of the
  finite element heterogeneous multiscale method (FE-HMM). Contrary to
  the discretization of the analytically homogenized equation\, no
  effective coefficient must be precomputed beforehand. We prove that
  the FE-HMM solution converges to the homogenized one for periodic
  materials and show some numerical experiments.\\r\\nThis is joint work
  with Sonia Fliss and Patrick Ciarlet.
X-ALT-DESC:Electromagnetic phenomena can be modeled using Maxwell's
  equations. In particular\, we are interested in harmonic
  electromagnetic waves propagating through a highly oscillatory
  material such as fiber-reinforced plastic. The permittivity and the
  permeability of such materials vary on a microscopic length scale.
  Standard edge finite elements are of limited use\, since the
  microscopic structure requires very refined meshes to provide
  satisfying approximations. This may easily result in computational
  costs difficult to manage. However\, if one is only interested in the
  effective behavior of the solution and not in the microscopic
  details\, homogenization techniques can be used to overcome these
  difficulties. In this talk we first review analytical homogenization
  results for Maxwell's equations. The goal of this theory is to replace
  the oscillatory material with an effective one\, such that the overall
  behavior of the solution remains unchanged. The arising equations can
  be solved with standard numerical methods because the effective
  material no longer depends on the micro scale. In the second part of
  the talk we propose a multiscale scheme following the framework of the
  finite element heterogeneous multiscale method (FE-HMM). Contrary to
  the discretization of the analytically homogenized equation\, no
  effective coefficient must be precomputed beforehand. We prove that
  the FE-HMM solution converges to the homogenized one for periodic
  materials and show some numerical experiments.\nThis is joint work
  with Sonia Fliss and Patrick Ciarlet.
DTEND;TZID=Europe/Zurich:20150508T120000
END:VEVENT
BEGIN:VEVENT
UID:news254@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T213831
DTSTART;TZID=Europe/Zurich:20150424T110000
SUMMARY:Seminar in Numerical Analysis: David Cohen (Umeå University)
DESCRIPTION:A fully discrete approximation of one-dimensional nonlinear sto
 chastic wave equations driven by multiplicative noise is presented. A stan
 dard finite difference approximation is used in space and a stochastic tri
 gonometric method for the temporal approximation. This explicit time integ
 rator allows for error bounds uniformly in time and space. Moreover\, unif
 orm almost sure convergence of the numerical solution is proved.\\r\\nThis
  is a joint work with Lluís Quer-Sardanyons\, Universitat Autònoma de Ba
 rcelona.
X-ALT-DESC:A fully discrete approximation of one-dimensional nonlinear stoc
 hastic wave equations driven by multiplicative noise is presented. A stand
 ard finite difference approximation is used in space and a stochastic trig
 onometric method for the temporal approximation. This explicit time integr
 ator allows for error bounds uniformly in time and space. Moreover\, unifo
 rm almost sure convergence of the numerical solution is proved.\nThis is a
  joint work with Lluís Quer-Sardanyons\, Universitat Autònoma de Barcelo
 na. 
DTEND;TZID=Europe/Zurich:20150424T120000
END:VEVENT
BEGIN:VEVENT
UID:news255@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T214015
DTSTART;TZID=Europe/Zurich:20150320T110000
SUMMARY:Seminar in Numerical Analysis: Roland Griesmaier (Universität Wür
 zburg)
DESCRIPTION:One of the main themes of inverse scattering theory for
  time-harmonic acoustic or electromagnetic waves is to determine
  information about unknown objects or inhomogeneous media from
  observations of scattered waves away from these objects or outside
  these media. Such inverse problems are typically non-linear and
  severely ill-posed.\\r\\nBesides standard regularization methods\,
  which are often iterative\, a completely different methodology -
  so-called qualitative reconstruction methods - has attracted a lot of
  interest recently. These algorithms recover specific qualitative
  properties of scattering objects or anomalies inside a medium in a
  reliable and fast way. They avoid the simulation of forward models and
  need no a priori information on physical or topological properties of
  the unknown objects or inhomogeneities to be reconstructed. One of the
  drawbacks of currently available qualitative reconstruction methods is
  the large amount of data required by most of these algorithms. It is
  usually assumed that measurement data of waves scattered by the
  unknown objects corresponding to infinitely many primary waves are
  given - at least theoretically.\\r\\nWe consider the inverse source
  problem for the Helmholtz equation as a means to provide a qualitative
  inversion algorithm for inverse scattering problems for acoustic or
  electromagnetic waves with a single excitation only. Probing an
  ensemble of obstacles by just one primary wave at a fixed frequency
  and measuring the far field of the corresponding scattered wave\, the
  inverse scattering problem that we are interested in consists in
  reconstructing the support of the scatterers. To this end we rewrite
  the scattering problem as a source problem and apply two recently
  developed algorithms - the inverse Radon approximation and the convex
  scattering support - to recover information on the support of the
  corresponding source. The first method builds upon a windowed Fourier
  transform of the far field data followed by a filtered
  backprojection\, and although this procedure yields a rather blurry
  reconstruction\, it can be applied to identify the number and the
  positions of well separated source components. This information is
  then utilized to split the far field into individual far field
  patterns radiated by each of the well separated source components
  using a Galerkin scheme. Finally\, we compute the convex scattering
  supports associated to the individual source components as a
  reconstruction of the individual scatterers. We discuss this algorithm
  and present numerical results.
X-ALT-DESC:One of the main themes of inverse scattering theory for
  time-harmonic acoustic or electromagnetic waves is to determine
  information about unknown objects or inhomogeneous media from
  observations of scattered waves away from these objects or outside
  these media. Such inverse problems are typically non-linear and
  severely ill-posed.\nBesides standard regularization methods\, which
  are often iterative\, a completely different methodology - so-called
  qualitative reconstruction methods - has attracted a lot of interest
  recently. These algorithms recover specific qualitative properties of
  scattering objects or anomalies inside a medium in a reliable and fast
  way. They avoid the simulation of forward models and need no a priori
  information on physical or topological properties of the unknown
  objects or inhomogeneities to be reconstructed. One of the drawbacks
  of currently available qualitative reconstruction methods is the large
  amount of data required by most of these algorithms. It is usually
  assumed that measurement data of waves scattered by the unknown
  objects corresponding to infinitely many primary waves are given - at
  least theoretically.\nWe consider the inverse source problem for the
  Helmholtz equation as a means to provide a qualitative inversion
  algorithm for inverse scattering problems for acoustic or
  electromagnetic waves with a single excitation only. Probing an
  ensemble of obstacles by just one primary wave at a fixed frequency
  and measuring the far field of the corresponding scattered wave\, the
  inverse scattering problem that we are interested in consists in
  reconstructing the support of the scatterers. To this end we rewrite
  the scattering problem as a source problem and apply two recently
  developed algorithms - the inverse Radon approximation and the convex
  scattering support - to recover information on the support of the
  corresponding source. The first method builds upon a windowed Fourier
  transform of the far field data followed by a filtered
  backprojection\, and although this procedure yields a rather blurry
  reconstruction\, it can be applied to identify the number and the
  positions of well separated source components. This information is
  then utilized to split the far field into individual far field
  patterns radiated by each of the well separated source components
  using a Galerkin scheme. Finally\, we compute the convex scattering
  supports associated to the individual source components as a
  reconstruction of the individual scatterers. We discuss this algorithm
  and present numerical results.
DTEND;TZID=Europe/Zurich:20150320T120000
END:VEVENT
BEGIN:VEVENT
UID:news256@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T214209
DTSTART;TZID=Europe/Zurich:20150220T110000
SUMMARY:Seminar in Numerical Analysis: Timo Betcke (University College Lond
 on)
DESCRIPTION:The BEM++ boundary element library is a software project that w
 as  started in 2010 at University College London to provide an open-source
   general purpose BEM library for a variety of application areas. In this 
  talk we introduce the underlying design concepts of the library and  disc
 uss several applications\, including high-frequency preconditioning  for u
 ltrasound applications\, the solution of time-domain problems via  convolu
 tion quadrature\, light-scattering from ice crystals\, and the  solution o
 f coupled FEM/BEM problems with FEniCS and BEM++.
X-ALT-DESC:The BEM++ boundary element library is a software project that wa
 s  started in 2010 at University College London to provide an open-source 
  general purpose BEM library for a variety of application areas. In this  
 talk we introduce the underlying design concepts of the library and  discu
 ss several applications\, including high-frequency preconditioning  for ul
 trasound applications\, the solution of time-domain problems via  convolut
 ion quadrature\, light-scattering from ice crystals\, and the  solution of
  coupled FEM/BEM problems with FEniCS and BEM++. 
DTEND;TZID=Europe/Zurich:20150220T120000
END:VEVENT
BEGIN:VEVENT
UID:news257@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215147
DTSTART;TZID=Europe/Zurich:20141219T110000
SUMMARY:Seminar in Numerical Analysis: Andrea Barth (Universität Stuttgart
 )
DESCRIPTION:Multilevel Monte Carlo methods for multiscale problems
X-ALT-DESC:Multilevel Monte Carlo methods for multiscale problems
DTEND;TZID=Europe/Zurich:20141219T120000
END:VEVENT
BEGIN:VEVENT
UID:news258@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215328
DTSTART;TZID=Europe/Zurich:20141212T110000
SUMMARY:Seminar in Numerical Analysis: Olaf Schenk (Università della Svizze
 ra italiana)
DESCRIPTION:We will review the state-of-the-art techniques in the parallel d
 irect solution of linear systems of equations and present several recent
  new  research directions. This includes (i) fast methods for evaluating  
 certain selected elements of a matrix function that can be used for  solvi
 ng the Kohn-Sham equation without explicit diagonalization and (ii) stocha
 stic optimization problems under uncertainty from electrical power grid sy
 stems. Several algorithmic and performance engineering advances are discus
 sed to solve the underlying sparse linear alge
 bra problems. The new developments include novel  incomplete augmented mul
 ticore sparse factorizations\, multicore- and  GPU-based dense matrix impl
 ementations\, and communication-avoiding  Krylov solvers. We also improve 
 the interprocess communication on Cray  systems to solve e.g. 24-hour hori
 zon problems from electrical power grid systems of realistic s
 ize with up to 1.95 billion  decision variables and 1.94 billion constrain
 ts.  Full-scale results are  reported on Cray XC30 and BG/Q\, where we ob
 serve very good parallel efficiencies and solution times within an opera
 tionally defined time  interval. To our knowledge\, "real-time"-compatible
  performance on a  broad range of architectures for this class of problems
  has not been  possible prior to present work.
X-ALT-DESC:We will review the state-of-the-art techniques in the parallel d
 irect  solution of linear systems of equations and present several recent 
 new  research directions. This includes (i) fast methods for evaluating  c
 ertain selected elements of a matrix function that can be used for  solvin
 g the Kohn-Sham-equation without explicit diagonalization and (ii)  stocha
 stic optimization problems under uncertainty from power grid  problems fro
 m electrical power grid systems. Several algorithmic and  performance engi
 neering advances are discussed to sove the underlying  sparse linear algeb
 ra problems. The new developments include novel  incomplete augmented mult
 icore sparse factorizations\, multicore- and  GPU-based dense matrix imple
 mentations\, and communication-avoiding  Krylov solvers. We also improve t
 he interprocess communication on Cray  systems to solve e.g. 24-hour horiz
 on problems from electrical power grid systems of realistic si
 ze with up to 1.95 billion  decision variables and 1.94 billion constraint
 s.&nbsp\; Full-scale results are  reported on Cray XC30 and BG/Q\, where w
 e observe very&nbsp\; good parallel  efficiencies and solution times withi
 n an operationally defined time interval. To our knowledge\, &quot\;real-t
 ime&quot\;-compatible performance on a  broad range of architectures for t
 his class of problems has not been  possible prior to present work. 
DTEND;TZID=Europe/Zurich:20141212T120000
END:VEVENT
BEGIN:VEVENT
UID:news259@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215544
DTSTART;TZID=Europe/Zurich:20141205T110000
SUMMARY:Seminar in Numerical Analysis: Nakul Chitnis (Swiss Tropical and P
 ublic Health Institute\, Basel)
DESCRIPTION:Malaria is an infectious disease\, spread through mosquito bite
 s\, that is  responsible for substantial morbidity and mortality around th
 e world.  In the last decade\, through increased funding and a global scal
 e up of  control interventions that target mosquitoes\, significant reduct
 ions in  transmission and disease burden have been achieved. However\, the
 se gains  in public health are faced with the twin threat of a decrease in
   funding for malaria control and the development of resistance  (physiolo
 gical and behavioural) in mosquitoes.\\r\\nMathematical models  can help t
 o determine more efficient combinations of existing and new  interventions
  in reducing malaria transmission and delaying the spread  of resistance. 
 We present difference equation models of mosquito  population dynamics and
  malaria in mosquitoes\; and ordinary differential  equation models of mos
 quito movement and population dynamics. We  analyse these models to provid
 e threshold conditions for the survival of  mosquitoes and show the existe
 nce of invariant positive states\; and run  numerical simulations to provi
 de quantitative comparisons of  interventions that target mosquitoes with 
 varying levels of resistance.
X-ALT-DESC:Malaria is an infectious disease\, spread through mosquito bites
 \, that is  responsible for substantial morbidity and mortality around the
  world.  In the last decade\, through increased funding and a global scale
  up of  control interventions that target mosquitoes\, significant reducti
 ons in  transmission and disease burden have been achieved. However\, thes
 e gains  in public health are faced with the twin threat of a decrease in 
  funding for malaria control and the development of resistance  (physiolog
 ical and behavioural) in mosquitoes.\nMathematical models  can help to det
 ermine more efficient combinations of existing and new  interventions in r
 educing malaria transmission and delaying the spread  of resistance. We pr
 esent difference equation models of mosquito  population dynamics and mala
 ria in mosquitoes\; and ordinary differential  equation models of mosquito
  movement and population dynamics. We  analyse these models to provide thr
 eshold conditions for the survival of  mosquitoes and show the existence o
 f invariant positive states\; and run  numerical simulations to provide qu
 antitative comparisons of  interventions that target mosquitoes with varyi
 ng levels of resistance. 
DTEND;TZID=Europe/Zurich:20141205T120000
END:VEVENT
BEGIN:VEVENT
UID:news260@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215716
DTSTART;TZID=Europe/Zurich:20141121T110000
SUMMARY:Seminar in Numerical Analysis: Armin Iske (Universität Hamburg)
DESCRIPTION:This talk discusses the utility of meshfree kernel techniques i
 n  adaptive finite volume particle methods (FVPM). To this end\, we give t
 en  good reasons in favour of using kernel-based reconstructions in the  r
 ecovery step of FVPM\, where our discussion addresses relevant  computatio
 nal aspects concerning numerical stability and accuracy\, as well as more s
 pecific points concerning efficient implementation. Special emphasis is fin
 ally placed on more recent advances in the construction of adaptive FVPM\, w
 here WENO reconstructions by polyharmonic spline kernels are used in
  combination with ADER flux  evaluations to obtain high order methods for 
 hyperbolic problems.
X-ALT-DESC:This talk discusses the utility of meshfree kernel techniques in
   adaptive finite volume particle methods (FVPM). To this end\, we give te
 n  good reasons in favour of using kernel-based reconstructions in the  re
 covery step of FVPM\, where our discussion addresses relevant  computation
 al aspects concerning numerical stability and accuracy\, as  well as more 
 specific points concerning efficient implementation.  Special emphasis is 
 finally placed on more recent advances in the construction of adaptive FVP
 M\, where WENO reconstructions by polyharmonic spline kernels are used in c
 ombination with ADER flux evaluations to obtain high order methods for h
 yperbolic problems. 
DTEND;TZID=Europe/Zurich:20141121T120000
END:VEVENT
BEGIN:VEVENT
UID:news261@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T215846
DTSTART;TZID=Europe/Zurich:20141114T110000
SUMMARY:Seminar in Numerical Analysis: Wolfgang Wendland (Universität Stut
 tgart)
DESCRIPTION:The minimal energy problem for nonnegative charges on a closed 
 surface Γ  in R^3 goes back to C.F. Gauss in 1839. The corresponding Ries
 z kernel  is then weakly singular on Γ. If one considers double layer pot
 entials  with dipole charges on Γ\, the minimal energy problem then is ba
 sed on  hypersingular Riesz potentials in the form of Hadamard’s partie 
 finie  integral operators defining pseudodifferential operators of positiv
 e  degree on smooth Γ. Existence and uniqueness results for the minimal  
 energy problem and a corresponding boundary element method will be  presen
 ted.
X-ALT-DESC:The minimal energy problem for nonnegative charges on a closed s
 urface Γ  in R^3 goes back to C.F. Gauss in 1839. The corresponding Riesz
  kernel  is then weakly singular on Γ. If one considers double layer pote
 ntials  with dipole charges on Γ\, the minimal energy problem then is bas
 ed on  hypersingular Riesz potentials in the form of Hadamard’s partie f
 inie  integral operators defining pseudodifferential operators of positive
   degree on smooth Γ. Existence and uniqueness results for the minimal  e
 nergy problem and a corresponding boundary element method will be  present
 ed. 
DTEND;TZID=Europe/Zurich:20141114T120000
END:VEVENT
BEGIN:VEVENT
UID:news262@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T220058
DTSTART;TZID=Europe/Zurich:20141107T110000
SUMMARY:Seminar in Numerical Analysis: Frédéric Hecht (Université Pierre
 -et-Marie Curie Paris 6)
DESCRIPTION:FreeFem++ is free software for the numerical solution of parti
 al differential equations using the finite element method. After a short p
 resentation of the capabilities of the software\, we will see through exam
 ples how to approach PDEs with mesh adaptation and parallel computing. The
 se examples include:\\r\\n- Piezoelectric problems\\r\\n- Thermal problems wit
 h thermal resistances\\r\\n- Elasticity problems\\r\\n- Problems of fluid mech
 anics like incompressible Navier-Stokes\\r\\n- Melting and/or solidificatio
 n of ice (Boussinesq with specific heat)
X-ALT-DESC:FreeFem++ is free software for the numerical solution of partia
 l differential equations using the finite element method. After a short pr
 esentation of the capabilities of the software\, we will see through examp
 les how to approach PDEs with mesh adaptation and parallel computing. Thes
 e examples include:\n<ul><li><p>Piezoelectric problems</p></li><li><p>Ther
 mal problems with thermal resistances</p></li><li><p>Elasticity problems</
 p></li><li><p>Problems of fluid mechanics like incompressible Navier-Stok
 es</p></li><li><p>Melting and/or solidification of ice (Boussinesq with s
 pecific heat)</p></li></ul>
DTEND;TZID=Europe/Zurich:20141107T120000
END:VEVENT
BEGIN:VEVENT
UID:news263@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T220342
DTSTART;TZID=Europe/Zurich:20141024T110000
SUMMARY:Seminar in Numerical Analysis: Juliette Chabassier (INRIA-Bordeaux)
DESCRIPTION:A family of fourth order coupled implicit-explicit schemes is p
 resented  as a special case of fourth order implicit-implicit schemes for 
 linear wave equations. The domain of interest is decomposed into several r
 egions where different fourth order time discretizations are used\, chos
 en among a family of implicit or explicit fourth order schemes. The  coupl
 ing is based on a Lagrangian formulation on the boundaries between  the se
 veral potentially non conforming meshes of the regions. A global  discrete
  energy is shown to be preserved and leads to global fourth  order consist
 ency. Numerical results in 1d and 2d illustrate the good  behavior of the 
 schemes and their potential for realistic highly  heterogeneous cases or s
 trongly refined geometries\, for which using  everywhere an explicit schem
 e can be extremely penalizing because the  time step must respect the stab
 ility condition adapted to the smallest  element or the highest velocities
 . Accuracy up to fourth order reduces  the numerical dispersion inherent t
 o implicit methods used with a large  time step\, and makes this family of
  schemes attractive compared to  second order accurate methods in time. Th
 e presented technique could be  an alternative to local time stepping prov
 ided that some limitations are overcome in the future: treatment of diss
 ipative terms\, non trivial  boundary conditions\, coupling with a PML reg
 ion\, fluid structure  coupling...
X-ALT-DESC:A family of fourth order coupled implicit-explicit schemes is pr
 esented  as a special case of fourth order implicit-implicit schemes for l
 inear  wave equations. The domain of interest is decomposed into several  
 regions where different fourth order time discretizations are used\, chose
 n among a family of implicit or explicit fourth order schemes. The  coupli
 ng is based on a Lagrangian formulation on the boundaries between  the sev
 eral potentially non conforming meshes of the regions. A global  discrete 
 energy is shown to be preserved and leads to global fourth  order consiste
 ncy. Numerical results in 1d and 2d illustrate the good  behavior of the s
 chemes and their potential for realistic highly  heterogeneous cases or st
 rongly refined geometries\, for which using  everywhere an explicit scheme
  can be extremely penalizing because the  time step must respect the stabi
 lity condition adapted to the smallest  element or the highest velocities.
  Accuracy up to fourth order reduces  the numerical dispersion inherent to
  implicit methods used with a large  time step\, and makes this family of 
 schemes attractive compared to  second order accurate methods in time. The
  presented technique could be  an alternative to local time stepping provi
 ded that some limitations are overcome in the future: treatment of dissi
 pative terms\, non trivial  boundary conditions\, coupling with a PML regi
 on\, fluid structure  coupling...  
DTEND;TZID=Europe/Zurich:20141024T120000
END:VEVENT
BEGIN:VEVENT
UID:news264@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T220655
DTSTART;TZID=Europe/Zurich:20140704T110000
SUMMARY:Seminar in Numerical Analysis: Luis Garcia Naranjo (UNAM\, Mexico C
 ity)
DESCRIPTION:In mechanics\, constraints that restrict the possible configura
 tions of the system are termed holonomic. A simple example is the fixed le
 ngth of the rod of a pendulum. Mechanical systems with constraints on the 
 velocities that do not arise as constraints on positions are called nonhol
 onomic. These often arise in rolling systems\, like a sphere rotating with
 out slipping on a table.\\r\\nThe study of nonholonomic mechanical systems i
 s challenging because the equations of motion are not Hamiltonian. The
  dynamics of the system can however be described in terms of a bracket of 
 functions that fails to satisfy the Jacobi identity. One now speaks of an 
 almost Poisson bracket.\\r\\nThe failure of the Jacobi identity leads to p
 henomena that are not shared by usual Hamiltonian systems. Open questions 
 in nonholonomic mechanics that have received attention in recent years inc
 lude determining general conditions for measure preservation\, existence o
 f asymptotic equilibria\, relationship between symmetries and conservati
 on laws\, reduction\, and integrability.\\r\\nIn the first part of this ta
 lk I will present a basic introduction to nonholonomic mechanics. I will t
 hen present my recent work with Y. Fedorov and J. C. Marrero in which we s
 tudy the problem of measure preservation for nonholonomic systems possessi
 ng symmetries in a systematic manner. Our method allows us to identify spe
 cific parameter values for which there exists a preserved measure for conc
 rete mechanical examples.
X-ALT-DESC:In mechanics\, constraints that restrict the possible configurat
 ions of the system are termed holonomic. A simple example is the fixed len
 gth of the rod of a pendulum. Mechanical systems with constraints on the v
 elocities that do not arise as constraints on positions are called nonholo
 nomic. These often arise in rolling systems\, like a sphere rotating witho
 ut slipping on a table.\nThe study of nonholonomic mechanical systems is c
 hallenging because the equations of motion are not Hamiltonian. The dyna
 mics of the system can however be described in terms of a bracket of funct
 ions that fails to satisfy the Jacobi identity. One now speaks of an almos
 t Poisson bracket.\nThe failure of the Jacobi identity leads to phenomena 
 that are not shared by usual Hamiltonian systems. Open questions in nonhol
 onomic mechanics that have received attention in recent years include dete
 rmining general conditions for measure preservation\, existence of asympto
 tic equilibria\, relationship between symmetries and conservation laws\,
  reduction\, and integrability.\nIn the first part of this talk I will pre
 sent a basic introduction to nonholonomic mechanics. I will then present m
 y recent work with Y. Fedorov and J. C. Marrero in which we study the prob
 lem of measure preservation for nonholonomic systems possessing symmetries
  in a systematic manner. Our method allows us to identify specific paramet
 er values for which there exists a preserved measure for concrete mechanic
 al examples. 
DTEND;TZID=Europe/Zurich:20140704T120000
END:VEVENT
BEGIN:VEVENT
UID:news265@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T220851
DTSTART;TZID=Europe/Zurich:20140523T110000
SUMMARY:Seminar in Numerical Analysis: Kersten Schmidt (TU Berlin)
DESCRIPTION:Shielding sheets are commonly used in the protection of electro
 nic  devices. With their large aspect ratios they become a serious issue f
 or  the direct application of the boundary element method (BEM) due to the
   occuring almost singular integrals.\\r\\nWith impedance transmission  co
 nditions (ITCs) we can propose boundary element formulations on the  sheet
  mid-line (or mid-surface) only. In the beginning of the talk we  give a m
 otivation of meaningful impedance transmission conditions\, which  relate 
 jumps and mean values of the Dirichlet and Neumann traces on the  mid-line
 . This relation may involve surface differential operators\, as for bounda
 ry conditions of Wentzell's type\, and depend on frequency\,  conductivity
 \, sheet thickness and sheet geometry (e.g. curvature). These parameters m
 ay take small or large values and may lead to singularly perturbed boundar
 y integral equations.\\r\\nWe will introduce related boundary element met
 hods in two and three dimensions and analyse  well-posedness and discretis
 ation error depending on the model  parameters. Numerical experiments conf
 irm the convergence order of the  discretisation error of the proposed BEM
  and that the discretisation  error behaves for smooth enough sheets equiv
 alent to the exact solution  when varying the model parameters. The result
 s obtained for the eddy  current model\, for which a Poisson equation has 
 to be solved outside the mid-line\, can be transferred to the Helmholtz eq
 uation and to transmission conditions arising from other models.
X-ALT-DESC:Shielding sheets are commonly used in the protection of electron
 ic  devices. With their large aspect ratios they become a serious issue fo
 r  the direct application of the boundary element method (BEM) due to the 
 occurring almost singular integrals.\nWith impedance transmission  conditi
 ons (ITCs) we can propose boundary element formulations on the  sheet mid-
 line (or mid-surface) only. In the beginning of the talk we  give a motiva
 tion of meaningful impedance transmission conditions\, which  relate jumps
  and mean values of the Dirichlet and Neumann traces on the  mid-line. Thi
 s relation may involve surface differential operators\, as for bound
 ary conditions of Wentzell's type\, and depend on frequency\,  conductivit
 y\, sheet thickness and sheet geometry (e.g. curvature). These parameters m
 ay take small or large values and may lead to singularly perturbed boundar
 y integral equations.\nWe will introduce related boundary element methods i
 n two and three dimensions and analyse well-posedness and discre
 tisation error depending on the model  parameters. Numerical experiments c
 onfirm the convergence order of the  discretisation error of the proposed 
 BEM and that the discretisation  error behaves for smooth enough sheets eq
 uivalent to the exact solution  when varying the model parameters. The res
 ults obtained for the eddy  current model\, for which a Poisson equation h
 as to be solved outside the mid-line\, can be transferred to the Helmholtz e
 quation and to transmission conditions arising from other models.
DTEND;TZID=Europe/Zurich:20140523T120000
END:VEVENT
BEGIN:VEVENT
UID:news266@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T221031
DTSTART;TZID=Europe/Zurich:20140516T110000
SUMMARY:Seminar in Numerical Analysis: Valeriu Savcenco (Shell Global Solut
 ions/ TU Eindhoven)
DESCRIPTION:Multirate methods are highly efficient for large-scale ODE and 
 PDE  problems with widely different time scales. Multirate methods enable 
 one  to use large time steps for slowly varying spatial regions\, and smal
 l  steps for rapidly varying spatial regions. Multirate schemes for  conse
 rvation laws seem to come in two flavors: schemes that are locally  incons
 istent\, and schemes that lack mass-conservation. In this  presentation th
 ese two defects will be discussed for one-dimensional  conservation laws. 
 Particular attention will be given to monotonicity  properties of the mult
 irate schemes\, such as maximum principles and the  total variation dimini
 shing (TVD) property. The study of these  properties will be done within t
 he framework of partitioned Runge-Kutta  methods. It will also be seen tha
 t the incompatibility of consistency  and mass-conservation holds for genu
 ine multirate schemes\, but not for  general partitioned methods.
X-ALT-DESC:Multirate methods are highly efficient for large-scale ODE and P
 DE  problems with widely different time scales. Multirate methods enable o
 ne  to use large time steps for slowly varying spatial regions\, and small
   steps for rapidly varying spatial regions. Multirate schemes for  conser
 vation laws seem to come in two flavors: schemes that are locally  inconsi
 stent\, and schemes that lack mass-conservation. In this  presentation the
 se two defects will be discussed for one-dimensional  conservation laws. P
 articular attention will be given to monotonicity  properties of the multi
 rate schemes\, such as maximum principles and the  total variation diminis
 hing (TVD) property. The study of these  properties will be done within th
 e framework of partitioned Runge-Kutta  methods. It will also be seen that
  the incompatibility of consistency  and mass-conservation holds for genui
 ne multirate schemes\, but not for  general partitioned methods. 
DTEND;TZID=Europe/Zurich:20140516T120000
END:VEVENT
BEGIN:VEVENT
UID:news267@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T221204
DTSTART;TZID=Europe/Zurich:20140509T110000
SUMMARY:Seminar in Numerical Analysis: Tucker Carrington (Queen's Universit
 y Kingston)
DESCRIPTION:To compute the vibrational spectrum of a molecule without negle
 cting  coupling and anharmonicity one must calculate eigenvalues and  eige
 nvectors of a large matrix representing the Hamiltonian in an  appropriate
  basis. Iterative algorithms (e.g. Lanczos\, Davidson\, Filter  Diagonalis
 ation) enable one to compute eigenvalues and eigenvectors. It  is easy to 
 efficiently implement iterative algorithms when a direct  product basis is
  used. However\, for a molecule with more than four  atoms\, a direct prod
 uct basis set is large and it is better to reduce  the number of basis fun
 ctions required to obtain converged eigenvalues  by pruning. This is done 
 without jeopardizing the efficiency of the  matrix-vector products require
 d by all iterative algorithms. In this  talk\, I shall present new basis-s
 ize reduction ideas that are compatible  with efficient matrix-vector prod
 ucts. The basis is designed to include  the product basis functions couple
 d by the largest terms in the  potential and important for computing low-l
 ying vibrational levels. To  solve the vibrational Schrödinger equation 
 without approximating the  potential\, one must use quadrature to compute 
 potential matrix elements.  When using iterative methods in conjunction wi
 th quadrature\, it is  important to evaluate matrix-vector products by doi
 ng sums sequentially.  This is only possible if both the basis and the gri
 d have structure.  Although it is designed to include only functions coupl
 ed by the largest  terms in the potential\, the basis we use and also the 
 (Smolyak-type)  quadrature for doing integrals with the basis have enough 
 structure to  make efficient matrix-vector products possible. Using the qu
 adrature methods described here\, we evaluate the accuracy of calculations
  made by  making multimode approximations.
X-ALT-DESC:To compute the vibrational spectrum of a molecule without neglec
 ting  coupling and anharmonicity one must calculate eigenvalues and  eigen
 vectors of a large matrix representing the Hamiltonian in an  appropriate 
 basis. Iterative algorithms (e.g. Lanczos\, Davidson\, Filter  Diagonalisa
 tion) enable one to compute eigenvalues and eigenvectors. It  is easy to e
 fficiently implement iterative algorithms when a direct  product basis is 
 used. However\, for a molecule with more than four  atoms\, a direct produ
 ct basis set is large and it is better to reduce  the number of basis func
 tions required to obtain converged eigenvalues  by pruning. This is done w
 ithout jeopardizing the efficiency of the  matrix-vector products required
  by all iterative algorithms. In this  talk\, I shall present new basis-si
 ze reduction ideas that are compatible  with efficient matrix-vector produ
 cts. The basis is designed to include  the product basis functions coupled
  by the largest terms in the  potential and important for computing low-ly
 ing vibrational levels. To  solve the vibrational Schrödinger equation w
 ithout approximating the  potential\, one must use quadrature to compute p
 otential matrix elements.  When using iterative methods in conjunction wit
 h quadrature\, it is  important to evaluate matrix-vector products by doin
 g sums sequentially.  This is only possible if both the basis and the grid
  have structure.  Although it is designed to include only functions couple
 d by the largest  terms in the potential\, the basis we use and also the (
 Smolyak-type)  quadrature for doing integrals with the basis have enough s
 tructure to  make efficient matrix-vector products possible. Using the qua
 drature methods described here\, we evaluate the accuracy of calculations 
 made by  making multimode approximations. 
DTEND;TZID=Europe/Zurich:20140509T120000
END:VEVENT
BEGIN:VEVENT
UID:news268@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T221403
DTSTART;TZID=Europe/Zurich:20140307T110000
SUMMARY:Seminar in Numerical Analysis: Fabio Nobile (EPF Lausanne)
DESCRIPTION:We consider the Darcy equation to describe the flow in a satura
 ted porous medium. The permeability of the medium is described as a log-no
 rmal random field\, possibly conditioned to available direct measurement
 s\, to account for its relatively large uncertainty and heterogeneity.\\r\
 \nWe consider perturbation methods based on Taylor expansion of the soluti
 on of the PDE around the nominal permeability value. Successive higher ord
 er corrections to the statistical moments such as pointwise mean and covar
 iance of the solution can be obtained recursively from the computation of 
 high order correlation functions which\, in turn\, solve high dimens
 ional problems. To overcome the curse of dimensionality in computing and s
 toring such high order correlations\, we adopt a low-rank format\, namely 
 the so called tensor-train (TT) format.\\r\\nWe show that\, on the one han
 d\, the Taylor series does not converge globally\, so that it only makes s
 ense to compute corrections up to a maximum critical order\, beyond which t
 he accuracy of the solution deteriorates instead of improving. On the othe
 r hand\, we show on some numerical test cases\, the effectiveness of the p
 roposed approach in case of a moderately small variance of the log-normal 
 permeability field.
X-ALT-DESC:We consider the Darcy equation to describe the flow in a saturat
 ed porous medium. The permeability of the medium is described as a log-nor
 mal random field\, possibly conditioned on available direct measurements
 \, to account for its relatively large uncertainty and heterogeneity.\nWe 
 consider perturbation methods based on Taylor expansion of the solution of
  the PDE around the nominal permeability value. Successive higher order co
 rrections to the statistical moments such as pointwise mean and covariance
  of the solution can be obtained recursively from the computation of high 
 order correlation functions which\, in turn\, solve high dimensional
  problems. To overcome the curse of dimensionality in computing and storin
 g such high order correlations\, we adopt a low-rank format\, namely the s
 o called tensor-train (TT) format.\nWe show that\, on the one hand\, the T
 aylor series does not converge globally\, so that it only makes sense to c
 ompute corrections up to a maximum critical order\, beyond which the accur
 acy of the solution deteriorates instead of improving. On the other hand\,
  we show on some numerical test cases the effectiveness of the proposed a
 pproach in the case of a moderately small variance of the log-normal permeabil
 ity field. 
DTEND;TZID=Europe/Zurich:20140307T120000
END:VEVENT
BEGIN:VEVENT
UID:news269@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T222002
DTSTART;TZID=Europe/Zurich:20140221T110000
SUMMARY:Seminar in Numerical Analysis: Marc Dambrine (Université de Pau)
DESCRIPTION:I am interested in the influence of small geometrical perturbat
 ions on  the solution of elliptic problems. The cases of a single inclusio
 n or several well-separated inclusions have been deeply studied. I will 
 first recall the techniques to construct an asymptotic expansion 
 in that case. Then I will consider moderately close inclusions\, i.e. the 
  distance between the inclusions tends to zero more slowly than their  cha
 racteristic size and provide a complete asymptotic description of the  sol
 ution of the Laplace equation. I will also present numerical simulations base
 d on the multiscale superposition method derived from the first  order exp
 ansion.\\r\\nI will explain how some mathematical questions  about the los
 s of coercivity arise from the computation of the profiles  appearing in t
 he expansion. Ventcel boundary conditions are second order  differential c
 onditions that appear when looking for a transparent boundary conditio
 n for an exterior boundary value problem in planar  linear elasticity. The
  goal is to bound the infinite domain by a large  “box” to make numeri
 cal approximations possible. Like Robin boundary  conditions\, they lead t
 o wellposed variational problems under a sign condition on a coefficient.
  Nevertheless\, situations where this condition is violated appeared in sev
 eral works. The wellposedness of such problems was still open. I will pr
 esent\, in the generic case\, an existence and uniqueness result for the so
 lution of the Ventcel boundary value problem without the sign condition.
  Then\, I will consider perforated  geometries and give conditions to remo
 ve the genericity restriction.
X-ALT-DESC:I am interested in the influence of small geometrical perturbati
 ons on  the solution of elliptic problems. The cases of a single inclusion
  or  several well-separated inclusions have been deeply studied.&nbsp\; I 
 will first recall the techniques to construct an asymptotic expansi
 on  in that case. Then I will consider moderately close inclusions\, i.e. 
 the  distance between the inclusions tends to zero more slowly than their 
  characteristic size and provide a complete asymptotic description of the 
  solution of the Laplace equation. I will also present numerical simulations 
 based on the multiscale superposition method derived from the first  order
  expansion.\nI will explain how some mathematical questions  about the los
 s of coercivity arise from the computation of the profiles  appearing in t
 he expansion. Ventcel boundary conditions are second order  differential c
 onditions that appear when looking for a transparent boundary con
 dition for an exterior boundary value problem in planar  linear elasticity
 . The goal is to bound the infinite domain by a large  “box” to make n
 umerical approximations possible. Like Robin boundary  conditions\, they l
 ead to wellposed variational problems under a sign condition on a coeffic
 ient. Nevertheless\, situations where this condition is violated appeared i
 n several works. The wellposedness of such problems was still open. I will 
 present\, in the generic case\, an existence and uniqueness result for the 
 solution of the Ventcel boundary value problem without the 
 sign condition. Then\, I will consider perforated  geometries and give con
 ditions to remove the genericity restriction. 
DTEND;TZID=Europe/Zurich:20140221T120000
END:VEVENT
BEGIN:VEVENT
UID:news270@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T230659
DTSTART;TZID=Europe/Zurich:20131206T110000
SUMMARY:Seminar in Numerical Analysis: Mario S. Mommer (Universität Heidel
 berg)
DESCRIPTION:Optimum experimental design (OED) is the problem of finding set
 ups for an experiment in such a way that the collected data allows for opt
 imally accurate estimation of the parameters of interest - taking into acc
 ount an experimental budget. In practice\, the parameters are only approxi
 mately known as a matter of course\, while at the same time\, solving an O
 ED problem is in a way equivalent to magnifying the dependence of the syst
 em response on these quantities.  As a consequence\, designs computed on 
 the basis of a "good guess" of the parameters may underperform dramaticall
 y in practice\, especially for problems involving nonlinear models.\\r\\nI
 n this talk\, we consider robust formulations for optimum experimental des
 ign that work under significant uncertainty. Our focus is on problem setti
 ngs in which the model is described by differential equations of some type
  that are solved numerically. Our approach is based on a semi-infinite pro
 gramming formulation in which we exploit additional problem structure\, to
 gether with sparse grids\, to ensure tractability. The talk includes numer
 ical experiments to illustrate and compare the effectiveness of the approa
 ches.
X-ALT-DESC:Optimum experimental design (OED) is the problem of finding setu
 ps for an experiment in such a way that the collected data allows for opti
 mally accurate estimation of the parameters of interest - taking into acco
 unt an experimental budget. In practice\, the parameters are only approxim
 ately known as a matter of course\, while at the same time\, solving an OE
 D problem is in a way equivalent to magnifying the dependence of the syste
 m response on these quantities.&nbsp\; As a consequence\, designs computed
  on the basis of a &quot\;good guess&quot\; of the parameters may underper
 form dramatically in practice\, especially for problems involving nonlinea
 r models.\nIn this talk\, we consider robust formulations for optimum expe
 rimental design that work under significant uncertainty. Our focus is on p
 roblem settings in which the model is described by differential equations 
 of some type that are solved numerically. Our approach is based on a semi-
 infinite programming formulation in which we exploit additional problem st
 ructure\, together with sparse grids\, to ensure tractability. The talk in
 cludes numerical experiments to illustrate and compare the effectiveness o
 f the approaches. 
DTEND;TZID=Europe/Zurich:20131206T120000
END:VEVENT
BEGIN:VEVENT
UID:news271@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T230846
DTSTART;TZID=Europe/Zurich:20131122T110000
SUMMARY:Seminar in Numerical Analysis: Armin Lechleiter (Universität Breme
 n)
DESCRIPTION:It is well-known that the interior eigenvalues of the Laplacian
  in a  bounded domain share connections to scattering problems in the exte
 rior  of this domain. For instance\, certain boundary integral equations f
 or  exterior scattering problems fail at interior eigenvalues.\\r\\nSimila
 r  connections also exist for inverse exterior scattering problems - for  
 instance\, if zero is an eigenvalue of the far-field operator at a fixed  
 wave number\, then the squared wave number is an interior eigenvalue. Alth
 ough it is in general false that interior eigenvalues correspond to zero 
 being an eigenvalue of the far-field operator\, one can prove a fairly di
 rect characterization of interior eigenvalues via the behavior  of the pha
 ses of the eigenvalues of the far-field operator.\\r\\nIn  this talk\, we 
 present this characterization and sketch its proof for  Dirichlet\, Neuman
 n\, and Robin boundary conditions. Then we extend this  theory to impenetr
 able scattering objects and show via a couple of  numerical examples that 
 one can indeed use this characterization to  compute interior eigenvalues 
 of unknown scattering objects from the  spectrum of their far-field operat
 ors.\\r\\nOur motivation to study  this so-called inside-outside duality c
 omes from a paper by Eckmann and  Pillet (1995). This is joint work with A
 ndreas Kirsch (KIT) and Stefan  Peters (University of Bremen).
X-ALT-DESC:It is well-known that the interior eigenvalues of the Laplacian 
 in a  bounded domain share connections to scattering problems in the exter
 ior  of this domain. For instance\, certain boundary integral equations fo
 r  exterior scattering problems fail at interior eigenvalues.\nSimilar  co
 nnections also exist for inverse exterior scattering problems - for  insta
 nce\, if zero is an eigenvalue of the far-field operator at a fixed  wave 
 number\, then the squared wave number is an interior eigenvalue. Although 
 it is in general false that interior eigenvalues correspond to zero being
  an eigenvalue of the far-field operator\, one can prove a fairly direct 
 characterization of interior eigenvalues via the behavior  of the phases o
 f the eigenvalues of the far-field operator.\nIn  this talk\, we present t
 his characterization and sketch its proof for  Dirichlet\, Neumann\, and R
 obin boundary conditions. Then we extend this  theory to impenetrable scat
 tering objects and show via a couple of  numerical examples that one can i
 ndeed use this characterization to  compute interior eigenvalues of unknow
 n scattering objects from the  spectrum of their far-field operators.\nOur
  motivation to study  this so-called inside-outside duality comes from a p
 aper by Eckmann and  Pillet (1995). This is joint work with Andreas Kirsch
  (KIT) and Stefan  Peters (University of Bremen). 
DTEND;TZID=Europe/Zurich:20131122T120000
END:VEVENT
BEGIN:VEVENT
UID:news272@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231105
DTSTART;TZID=Europe/Zurich:20131115T110000
SUMMARY:Seminar in Numerical Analysis: Wim Vanroose (University of Antwerpe
 n)
DESCRIPTION:Many imaging systems such as phase contrast tomography\, internal 
 reflection  microscopy or reaction microscopes measure the far or near fie
 ld of  the scattered wave. We present an efficient and scalable method to 
  calculate the far- and near field of a Helmholtz equation describing a  g
 iven object. The far and near field solution can be written as an  integra
 l of the Greens function multiplied by the solution of the  Helmholtz equa
 tion with absorbing boundary conditions. By deforming the  contour of the 
 integral we only require the numerical solution of the  Helmholtz equation
  along a complex-valued contour. We show that the Helmholtz equation along 
 this contour is equivalent to a complex-shifted Laplacian that can be sol
 ved efficiently by multigrid. This results in a scalable method to calcu
 late the far- and near-field integral. We discuss this numerical method\,
  show its applicability and scalability\, and discuss its limitations.
X-ALT-DESC:Many imaging systems such as phase contrast tomography\, internal r
 eflection  microscopy or reaction microscopes measure the far or near fiel
 d of  the scattered wave. We present an efficient and scalable method to  
 calculate the far- and near field of a Helmholtz equation describing a  gi
 ven object. The far and near field solution can be written as an  integral
  of the Greens function multiplied by the solution of the  Helmholtz equat
 ion with absorbing boundary conditions. By deforming the  contour of the i
 ntegral we only require the numerical solution of the  Helmholtz equation 
 along a complex-valued contour. We show that the Helmholtz equation al
 ong this contour is equivalent to a complex-shifted Laplacian that can be
  solved efficiently by multigrid. This results in a scalable method to c
 alculate the far- and near-field integral. We discuss this numerical meth
 od\, show its applicability and scalability\, and discuss its limitations. 
DTEND;TZID=Europe/Zurich:20131115T120000
END:VEVENT
BEGIN:VEVENT
UID:news273@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231305
DTSTART;TZID=Europe/Zurich:20131025T110000
SUMMARY:Seminar in Numerical Analysis: Ludovic Métivier (Université de Gr
 enoble)
DESCRIPTION:Full Waveform Inversion is an efficient seismic imaging techniq
 ue for  quantitative estimation of subsurface parameters such as the P-wav
 e and  S-wave velocities\, density\, attenuation and anisotropy parameters
 . The  method is based on the iterative minimization of the misfit between
   observed and calculated data. During the past ten years\, the method has
   been successfully applied to real data in 2D acoustic and elastic  confi
 guration\, as well as in 3D acoustic configuration. The inverse  Hessian o
 perator plays an important role in the reconstruction process.  Particular
 ly\, this operator should correct for illumination deficits\,  frequency b
 andlimited effects\, and help to restore the correct amplitude  of less il
 luminated parameters. In this presentation\, we will focus on  the methods
  available to approximate this operator\, from preconditioned gradient-base
 d methods\, to quasi-Newton methods (l-BFGS) and truncated  Newton methods
 . We will present results obtained on 2D synthetic and  real data for the 
 reconstruction of P-wave velocity which illustrate  the importance of the 
 approximation of this operator. We will also  present a simple illustratio
 n of the inverse Hessian operator effect in a  multi-parameter framework. 
 In this context\, the operator helps to  mitigate the trade-off between di
 fferent classes of parameters.
X-ALT-DESC:Full Waveform Inversion is an efficient seismic imaging techniqu
 e for  quantitative estimation of subsurface parameters such as the P-wave
  and  S-wave velocities\, density\, attenuation and anisotropy parameters.
  The  method is based on the iterative minimization of the misfit between 
  observed and calculated data. During the past ten years\, the method has 
  been successfully applied to real data in 2D acoustic and elastic  config
 uration\, as well as in 3D acoustic configuration. The inverse  Hessian op
 erator plays an important role in the reconstruction process.  Particularl
 y\, this operator should correct for illumination deficits\,  frequency ba
 ndlimited effects\, and help to restore the correct amplitude  of less ill
 uminated parameters. In this presentation\, we will focus on  the methods 
 available to approximate this operator\, from preconditioned gradient-based
  methods\, to quasi-Newton methods (l-BFGS) and truncated  Newton methods.
  We will present results obtained on 2D synthetic and  real data for the r
 econstruction of P-wave velocity which illustrate  the importance of the a
 pproximation of this operator. We will also  present a simple illustration
  of the inverse Hessian operator effect in a  multi-parameter framework. I
 n this context\, the operator helps to  mitigate the trade-off between dif
 ferent classes of parameters. 
DTEND;TZID=Europe/Zurich:20131025T120000
END:VEVENT
BEGIN:VEVENT
UID:news274@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231504
DTSTART;TZID=Europe/Zurich:20131011T110000
SUMMARY:Seminar in Numerical Analysis: Maya de Buhan (Université Paris Des
 cartes)
DESCRIPTION:In this talk\, we propose a new method to solve the following i
 nverse  problem: we aim at reconstructing\, from boundary measurements\, t
 he  location\, the shape and the wave propagation speed of an unknown  obs
 tacle surrounded by a medium whose properties are known. Our strategy  com
 bines two methods recently developed by the authors:\\r\\n1 - the  Time-Re
 versed Absorbing Condition method: It combines time reversal  techniques a
 nd absorbing boundary conditions to reconstruct and  regularize the signal
  in a truncated domain that encloses the obstacle.  This enables us to red
 uce the size of the computational domain where we  solve the inverse probl
 em\, now from virtual internal measurements.\\r\\n2  - the Adaptive Invers
 ion method: It is an inversion method which looks  for the value of the un
 known wave propagation speed in a basis composed of eigenvectors of an el
 liptic operator. Then\, it uses an iterative  process to adapt the mesh an
 d the basis and improve the reconstruction.\\r\\nWe  present several numer
 ical examples in two dimensions to illustrate the  efficiency of the combi
 nation of both methods. In particular\, our  strategy allows (a) to reduce
  the computational cost\, (b) to stabilize  the inverse problem and (c) to
  improve the precision of the results.
X-ALT-DESC:In this talk\, we propose a new method to solve the following in
 verse  problem: we aim at reconstructing\, from boundary measurements\, th
 e  location\, the shape and the wave propagation speed of an unknown  obst
 acle surrounded by a medium whose properties are known. Our strategy  comb
 ines two methods recently developed by the authors:\n1 - the  Time-Reverse
 d Absorbing Condition method: It combines time reversal  techniques and ab
 sorbing boundary conditions to reconstruct and  regularize the signal in a
  truncated domain that encloses the obstacle.  This enables us to reduce t
 he size of the computational domain where we  solve the inverse problem\, 
 now from virtual internal measurements.\n2  - the Adaptive Inversion metho
 d: It is an inversion method which looks  for the value of the unknown wav
 e propagation speed in a basis composed of eigenvectors of an elliptic op
 erator. Then\, it uses an iterative  process to adapt the mesh and the bas
 is and improve the reconstruction.\nWe  present several numerical examples
  in two dimensions to illustrate the  efficiency of the combination of bot
 h methods. In particular\, our  strategy allows (a) to reduce the computat
 ional cost\, (b) to stabilize  the inverse problem and (c) to improve the 
 precision of the results. 
DTEND;TZID=Europe/Zurich:20131011T120000
END:VEVENT
BEGIN:VEVENT
UID:news275@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231654
DTSTART;TZID=Europe/Zurich:20130524T110000
SUMMARY:Seminar in Numerical Analysis: Angela Kunoth (Universität Paderbor
 n)
DESCRIPTION:Optimization problems constrained by linear parabolic evolution
  PDEs are  challenging from a computational point of view\, as they requir
 e to  solve a system of PDEs coupled globally in time and space.  For the
 ir  solution\, conventional time-stepping methods quickly reach their  lim
 itations due to the enormous demand for storage.  For such a coupled  PDE
  system\, adaptive methods which aim at distributing the available  degree
 s of freedom in an a-posteriori-fashion to capture singularities  in the d
 ata or domain\, with respect to both space and time\, appear to be  most p
 romising. Employing wavelet schemes for full weak space-time  formulations
  of the parabolic PDEs\, we can prove convergence and optimal  complexity.
  \\r\\nYet another challenge is posed by control problems constrained by e
 volution PDEs involving stochastic or countably infinitely many parametri
 c coefficients: for each instance of the parameters\,  this requires the s
 olution of the complete control problem. \\r\\nOur  method of attack is ba
 sed on the following new theoretical paradigm.  It  is first shown for co
 ntrol problems constrained by evolution PDEs\,  formulated in full weak sp
 ace-time form\, that state\, costate and control  are analytic as function
 s depending on these parameters. Moreover\, we  establish that these funct
 ions allow expansions in terms of sparse  tensorized generalized polynomia
 l chaos (gpc) bases.  Their sparsity is  quantified in terms of p-summabi
 lity of the coefficient sequences for  some 0 < p <= 1. Resulting a-priori
  estimates establish  the  existence of an index set\, allowing for concu
 rrent approximations of  state\, co-state and control for which the gpc ap
 proximations attain  rates of best N-term approximation. These findings se
 rve as the  analytical foundation for the development of corresponding spa
 rse  realizations in terms of deterministic adaptive Galerkin approximatio
 ns  of state\, co-state and control on the entire\, possibly  infinite-dim
 ensional parameter space.\\r\\nThe results were obtained with Max Gunzburg
 er (Florida State University) and with Christoph Schwab (ETH Zuerich).
X-ALT-DESC:Optimization problems constrained by linear parabolic evolution 
 PDEs are  challenging from a computational point of view\, as they require
  to  solve a system of PDEs coupled globally in time and space.&nbsp\; For
  their  solution\, conventional time-stepping methods quickly reach their 
  limitations due to the enormous demand for storage.&nbsp\; For such a cou
 pled  PDE system\, adaptive methods which aim at distributing the availabl
 e  degrees of freedom in an a-posteriori-fashion to capture singularities 
  in the data or domain\, with respect to both space and time\, appear to b
 e  most promising. Employing wavelet schemes for full weak space-time  for
 mulations of the parabolic PDEs\, we can prove convergence and optimal  co
 mplexity. \nYet another challenge is posed by control problems constrain
 ed by evolution PDEs involving stochastic or countably infinitely many para
 metric coefficients: for each instance of the parameters\,  this requires 
 the solution of the complete control problem. \nOur  method of attack is b
 ased on the following new theoretical paradigm.&nbsp\; It  is first shown 
 for control problems constrained by evolution PDEs\,  formulated in full w
 eak space-time form\, that state\, costate and control  are analytic as fu
 nctions depending on these parameters. Moreover\, we  establish that these
  functions allow expansions in terms of sparse  tensorized generalized pol
 ynomial chaos (gpc) bases.&nbsp\; Their sparsity is  quantified in terms o
 f p-summability of the coefficient sequences for  some 0 &lt\; p &lt\;= 1.
  Resulting a-priori estimates establish&nbsp\; the  existence of an index 
 set\, allowing for concurrent approximations of  state\, co-state and cont
 rol for which the gpc approximations attain  rates of best N-term approxim
 ation. These findings serve as the  analytical foundation for the developm
 ent of corresponding sparse  realizations in terms of deterministic adapti
 ve Galerkin approximations  of state\, co-state and control on the entire\
 , possibly  infinite-dimensional parameter space.\nThe results were obtain
 ed with Max Gunzburger (Florida State University) and with Christoph Schwa
 b (ETH Zuerich). 
DTEND;TZID=Europe/Zurich:20130524T120000
END:VEVENT
BEGIN:VEVENT
UID:news276@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T231855
DTSTART;TZID=Europe/Zurich:20130503T110000
SUMMARY:Seminar in Numerical Analysis: Rüdiger Schultz (Universität Duisb
 urg-Essen)
DESCRIPTION:This talk aims at demonstrating how concepts and techniques whi
 ch are  well-established in operations research may serve as blueprints fo
 r  approaching shape optimization with linearized  elasticity and  stocha
 stic loading. Stochastic shape optimization problems are  considered from 
 a two-stage viewpoint: In a first stage\, without  anticipation of the ran
 dom loading\, the shape has to be fixed. After  realization  of the load\
 , the displacement obtained from solving the  elasticity boundary value pr
 oblem then may be seen as a second-stage (or  recourse) action\, and the v
 ariational problem of the weak formulation  as a second-stage optimization
  problem.\\r\\nAt this point\, there is  a perfect match with two-stage st
 ochastic programming: after having  taken a non-anticipative decision in 
  the first stage\, and having  observed the random data\, a well-defined 
 second-stage problem remains  and is solved to optimality. Suitable object
 ive functions complete the  formal descriptions of the models\, for instan
 ce\, costs in the  stochastic-programming setting and compliance or tracki
 ng functionals in  shape optimization.\\r\\nStochastic programming now off
 ers a wide  collection of models to address shape optimization under uncer
 tainty.  This starts with risk neutral models\, is continued by mean-risk 
   optimization involving different risk measures\, and will finally lead 
  to analogues in shape optimization of decision problems with  stochastic-
 order (or dominance) constraints.\\r\\nIn the talk we will present these m
 odels\, discuss solution methods\, and report some computational tests.
X-ALT-DESC:This talk aims at demonstrating how concepts and techniques whic
 h are  well-established in operations research may serve as blueprints for
   approaching shape optimization with linearized &nbsp\;elasticity and  st
 ochastic loading. Stochastic shape optimization problems are  considered f
 rom a two-stage viewpoint: In a first stage\, without  anticipation of the
  random loading\, the shape has to be fixed. After  realization &nbsp\;of 
 the load\, the displacement obtained from solving the  elasticity boundary
  value problem then may be seen as a second-stage (or  recourse) action\, 
 and the variational problem of the weak formulation  as a second-stage opt
 imization problem.\nAt this point\, there is  a perfect match with two-sta
 ge stochastic programming: after having  taken a non-anticipative decision
  in &nbsp\;the first stage\, and having  observed the random data\, a well
 -defined second-stage problem remains  and is solved to optimality. Suitab
 le objective functions complete the  formal descriptions of the models\, f
 or instance\, costs in the  stochastic-programming setting and compliance 
 or tracking functionals in  shape optimization.\nStochastic programming no
 w offers a wide  collection of models to address shape optimization under 
 uncertainty.  This starts with risk neutral models\, is continued by mean-
 risk  &nbsp\;optimization involving different risk measures\, and will fin
 ally lead  to analogues in shape optimization of decision problems with  s
 tochastic-order (or dominance) constraints.\nIn the talk we will present t
 hese models\, discuss solution methods\, and report some computational tes
 ts. 
DTEND;TZID=Europe/Zurich:20130503T120000
END:VEVENT
BEGIN:VEVENT
UID:news277@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232119
DTSTART;TZID=Europe/Zurich:20130419T110000
SUMMARY:Seminar in Numerical Analysis: Wolfgang Wendland (Universität Stut
 tgart)
DESCRIPTION:As a special case of nonlinear Riemann--Hilbert problems with cl
 osed  boundary data in multiply connected domains\, here a doubly connecte
 d  domain like an annulus is considered.\\r\\nThe nonlinear boundary  cond
 itions for the desired holomorphic solutions lead to nonlinear  singular i
 ntegral equations on the boundary which belong to the class of  quasiruled
  Fredholm maps defined on quasicylindrical domains in  appropriate separab
 le Banach spaces.\\r\\nThe closed boundary data  give a priori estimates f
 or the modulus of solutions which in turn  implies a priori estimates in t
 he Sobolev spaces considered here. For  this class of problems\, the Shnir
 elman--Efendiev degree of mappings can be defined\, which allows one to investi
 gate the existence of solutions if the  boundary conditions satisfy some t
 opological assumptions.\\r\\nThe  lifting of the boundary  value problem 
 via holomorphic transformation  onto the universal covering of the unit di
 sc allows one to construct a homotopic deformation of the lifted nonlinear si
 ngular integral equations to a uniquely solvable case\, which implies tha
 t the degree of mapping is 1 and the existence of (in fact at least two) solu
 tions  follows.\\r\\nIf the nonlinear integral equations on the boundary a
 re approximated by trigonometric point collocation\, then the theory also i
 mplies that approximate solutions exist and converge asymptotically.
X-ALT-DESC:As a special case of nonlinear Riemann--Hilbert problems with clo
 sed  boundary data in multiply connected domains\, here a doubly connected
   domain like an annulus is considered.\nThe nonlinear boundary  condition
 s for the desired holomorphic solutions lead to nonlinear  singular integr
 al equations on the boundary which belong to the class of  quasiruled Fred
 holm maps defined on quasicylindrical domains in  appropriate separable Ba
 nach spaces.\nThe closed boundary data  give a priori estimates for the mo
 dulus of solutions which in turn  implies a priori estimates in the Sobole
 v spaces considered here. For  this class of problems\, the Shnirelman--Ef
 endiev degree of mappings can be defined\, which allows one to investigate the 
 existence of solutions if the  boundary conditions satisfy some topologica
 l assumptions.\nThe lifting of the boundary value problem via holo
 morphic transformation  onto the universal covering of the unit disc allow
 s one to construct a homotopic deformation of the lifted nonlinear singular i
 ntegral equations to a uniquely solvable case\, which implies that t
 he degree of mapping is 1 and the existence of (in fact at least two) solutio
 ns follows.\nIf the nonlinear integral equations on the boundary are app
 roximated by trigonometric point collocation\, then the theory also implies 
 that approximate solutions exist and converge asymptotically. 
DTEND;TZID=Europe/Zurich:20130419T120000
END:VEVENT
BEGIN:VEVENT
UID:news278@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232307
DTSTART;TZID=Europe/Zurich:20130412T110000
SUMMARY:Seminar in Numerical Analysis: Florian Loos (Universität der Bunde
 swehr München)
DESCRIPTION:The number of electrical devices in modern cars supplied by hig
 h  currents grows continuously. In order to avoid hot spot generation and 
  overheating on the one hand\, but to save weight and material on the  oth
 er hand\, electrical connecting structures have to be dimensioned  appropr
 iately. The heat transfer in current-carrying multicables with considerat
 ion of the rise of electrical resistivity for higher  temperatures is desc
 ribed by a system of semilinear equations with  discontinuous coefficients
 . The effects of convection and radiation are  taken into account by a non
 linear boundary condition.\\r\\nSimulation  results and experimental studi
 es show that the positioning of the single cables has an important influence
  on the maximum temperatures. In order to  find an optimal cable design\, 
 i.e. to arrange the single cables with  fixed cross section and current su
 ch that the maximum temperature is  minimized\, a shape optimization probl
 em is formulated. We derive an  adjoint system and the shape gradient usin
 g the formal Lagrange  approach. The effect of the discontinuity of some c
 oefficients on the  shape gradient is shown. By application of different (
 nonlinear)  optimizers combined with the finite element solver COMSOL Mult
 iphysics\, a  solution is obtained numerically. In this talk\, we present 
 the modeling  of the problem\, the derivation of the shape gradient and nu
 merical  results.\\r\\nThis is joint work with Helmut Harbrecht and Thomas
  Apel.
X-ALT-DESC:The number of electrical devices in modern cars supplied by high
   currents grows continuously. In order to avoid hot spot generation and  
 overheating on the one hand\, and to save weight and material on the  othe
 r hand\, electrical connecting structures have to be dimensioned  appropri
 ately. The heat transfer in current carrying multicables with  considerati
 on of the rise of electrical resistivity for higher  temperatures is descr
 ibed by a system of semilinear equations with  discontinuous coefficients.
  The effects of convection and radiation are  taken into account by a nonl
 inear boundary condition.\nSimulation  results and experimental studies sh
 ow that the positioning of the single cables has an important influence
  on the maximum temperatures. In order to find an optimal cable design\, i.e. 
 to arrange the single cables with  fixed cross section and current such th
 at the maximum temperature is  minimized\, a shape optimization problem is
  formulated. We derive an  adjoint system and the shape gradient using the
  formal Lagrange  approach. The effect of the discontinuity of some coeffi
 cients on the  shape gradient is shown. By application of different (nonli
 near)  optimizers combined with the finite element solver COMSOL Multiphys
 ics\, a  solution is obtained numerically. In this talk\, we present the m
 odeling  of the problem\, the derivation of the shape gradient and numeric
 al  results.\nThis is joint work with Helmut Harbrecht and Thomas Apel. 
DTEND;TZID=Europe/Zurich:20130412T120000
END:VEVENT
BEGIN:VEVENT
UID:news279@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232556
DTSTART;TZID=Europe/Zurich:20121214T110000
SUMMARY:Seminar in Numerical Analysis: Ulrich Römer (TU Darmstadt)
DESCRIPTION:Simulation results of magnetic devices with ferromagnetic mater
 ials are  highly sensitive to the nonlinear material law and the geometry.
  Due to  uncertainties inherent to measurements and the fabrication proces
 s\, the  precise knowledge of model input data cannot be assumed to be giv
 en.  Therefore\, in recent years\, methods of uncertainty quantification h
 ave become increasingly important. In this talk\, a short overview of ap
 plication examples (magnets\, machines) will be given and the PDEs for t
 he magnetic fields will be discussed. Under some simplifications these  ar
 e 2D nonlinear elliptic interface equations of monotone type. We will  int
 roduce a stochastic collocation method based on generalized  polynomial ch
 aos to quantify the uncertainties. Furthermore\, we will  discuss a worst-
 case scenario analysis to cover cases where the  statistics of the inputs 
 is not available. Since for the worst-case  analysis gradient information 
 is especially important\, sensitivity  analysis techniques\, e.g.\, adjoin
 t equations and shape calculus are  required.\\r\\nJoint work with Sebasti
 an Schöps and Thomas Weiland.
X-ALT-DESC:Simulation results of magnetic devices with ferromagnetic materi
 als are  highly sensitive to the nonlinear material law and the geometry. 
 Due to  uncertainties inherent to measurements and the fabrication process
 \, the  precise knowledge of model input data cannot be assumed to be give
 n.  Therefore\, in recent years\, methods of uncertainty quantification ha
 ve become increasingly important. In this talk\, a short overview of app
 lication examples (magnets\, machines) will be given and the PDEs for th
 e magnetic fields will be discussed. Under some simplifications these  are
  2D nonlinear elliptic interface equations of monotone type. We will  intr
 oduce a stochastic collocation method based on generalized  polynomial cha
 os to quantify the uncertainties. Furthermore\, we will  discuss a worst-c
 ase scenario analysis to cover cases where the  statistics of the inputs i
 s not available. Since for the worst-case  analysis gradient information i
 s especially important\, sensitivity  analysis techniques\, e.g.\, adjoint
  equations and shape calculus are  required.\nJoint work with Sebastian Sc
 höps and Thomas Weiland. 
DTEND;TZID=Europe/Zurich:20121214T120000
END:VEVENT
BEGIN:VEVENT
UID:news280@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232736
DTSTART;TZID=Europe/Zurich:20121207T110000
SUMMARY:Seminar in Numerical Analysis: Mike Botchev (University of Twente)
DESCRIPTION:We review some recent advances in Krylov subspace methods to co
 mpute an  action of the matrix exponential on a given vector.  In particu
 lar\, we  briefly discuss residual-based and shift-and-invert Krylov subsp
 ace  methods in the context of space-discretized 3D Maxwell's equations. 
  In our limited experience\, conventional time stepping\, where actions o
 f the matrix exponential must be recomputed at every time step\, is usu
 ally inefficient. We therefore discuss an alternative app
 roach\, based on block Krylov subspaces\, where just a couple of evaluations
  of the matrix exponential suffice to solve the problem for  the whole tim
 e interval.
X-ALT-DESC:We review some recent advances in Krylov subspace methods to com
 pute an  action of the matrix exponential on a given vector.&nbsp\; In par
 ticular\, we  briefly discuss residual-based and shift-and-invert Krylov s
 ubspace  methods in the context of space-discretized 3D Maxwell's equation
 s.&nbsp\; In our limited experience\, conventional time stepping\, where
  actions of the matrix exponential must be recomputed at every time st
 ep\, is usually inefficient.&nbsp\; We therefore discuss an al
 ternative approach\, based on block Krylov subspaces\, where just a cou
 ple of evaluations of the matrix exponential suffice to solve the pro
 blem for
  the whole time interval. 
DTEND;TZID=Europe/Zurich:20121207T120000
END:VEVENT
BEGIN:VEVENT
UID:news281@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T232939
DTSTART;TZID=Europe/Zurich:20121109T110000
SUMMARY:Seminar in Numerical Analysis: Sören Bartels (Universität Freibur
 g)
DESCRIPTION:The mathematical description of the elastic deformation of thin
  plates  can be derived by a dimension reduction from three-dimensional  e
 lasticity and leads to the minimization of an energy functional that  invo
 lves the second fundamental form of the deformation and is subject  to the
  constraint that the deformation is an isometry. We discuss two  approache
 s to the discretization of the second order derivatives and the  treatment
  of the isometry constraint. The first one relaxes the second  order deriv
 atives via a Reissner-Mindlin approximation and the second  one employs di
 screte Kirchhoff triangles that define a nonconforming  second order deriv
 ative. In both cases the deformation is decoupled from  the deformation gr
 adient and this enables us to employ techniques  developed for the approxi
 mation of harmonic maps to impose the  constraint on the deformation gradi
 ent at the nodes of a triangulation.  The solution of the nonlinear discre
 te schemes is carried out by appropriate gradient flows\, and we demon
 strate their energy-decreasing behaviour under mild conditions on the s
 tep sizes. Numeri
 cal experiments show that the  proposed schemes provide accurate approxima
 tions for large vertical  loads as well as compressive boundary conditions
 .
X-ALT-DESC:The mathematical description of the elastic deformation of thin 
 plates  can be derived by a dimension reduction from three-dimensional  el
 asticity and leads to the minimization of an energy functional that  invol
 ves the second fundamental form of the deformation and is subject  to the 
 constraint that the deformation is an isometry. We discuss two  approaches
  to the discretization of the second order derivatives and the  treatment 
 of the isometry constraint. The first one relaxes the second  order deriva
 tives via a Reissner-Mindlin approximation and the second  one employs dis
 crete Kirchhoff triangles that define a nonconforming  second order deriva
 tive. In both cases the deformation is decoupled from  the deformation gra
 dient and this enables us to employ techniques  developed for the approxim
 ation of harmonic maps to impose the  constraint on the deformation gradie
 nt at the nodes of a triangulation.  The solution of the nonlinear discret
 e schemes is carried out by appropriate gradient flows\, and we demonst
 rate their energy-decreasing behaviour under mild conditions on the st
 ep sizes. Numeric
 al experiments show that the  proposed schemes provide accurate approximat
 ions for large vertical  loads as well as compressive boundary conditions.
  
DTEND;TZID=Europe/Zurich:20121109T120000
END:VEVENT
BEGIN:VEVENT
UID:news282@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T233431
DTSTART;TZID=Europe/Zurich:20121102T110000
SUMMARY:Seminar in Numerical Analysis: Dominik Schötzau (University of Bri
 tish Columbia)
DESCRIPTION:We introduce and analyze a new mixed finite element method for 
  the  spatial discretization of an incompressible magnetohydrodynamics   p
 roblem. It is based on divergence-conforming elements for the fluid   velo
 cities and on curl-conforming elements for the magnetic unknowns.  The tan
 gential continuity of the velocities is enforced by a DG   approach. Centr
 al features of the resulting method are that it produces   exactly diverge
 nce-free velocity approximations and is provably   energy-stable\, and tha
 t it correctly captures the strongest magnetic  singularities in non-smoot
 h domains. We carry out the error analysis of  the method\, and present a 
 comprehensive set of numerical tests in two  and three dimensions. We also
  discuss some recent ideas regarding the  design of efficient solvers for 
 the matrix systems.
X-ALT-DESC:We introduce and analyze a new mixed finite element method for  
 the  spatial discretization of an incompressible magnetohydrodynamics   pr
 oblem. It is based on divergence-conforming elements for the fluid   veloc
 ities and on curl-conforming elements for the magnetic unknowns.  The tang
 ential continuity of the velocities is enforced by a DG   approach. Centra
 l features of the resulting method are that it produces   exactly divergen
 ce-free velocity approximations and is provably   energy-stable\, and that
  it correctly captures the strongest magnetic  singularities in non-smooth
  domains. We carry out the error analysis of  the method\, and present a c
 omprehensive set of numerical tests in two  and three dimensions. We also 
 discuss some recent ideas regarding the  design of efficient solvers for t
 he matrix systems.  
DTEND;TZID=Europe/Zurich:20121102T120000
END:VEVENT
BEGIN:VEVENT
UID:news283@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T233556
DTSTART;TZID=Europe/Zurich:20121026T110000
SUMMARY:Seminar in Numerical Analysis: Drosos Kourounis (Università della Svi
 zzera italiana)
DESCRIPTION:Adjoint-based gradients form an important ingredient of fast  o
 ptimization algorithms for computer-assisted history matching and  life-cy
 cle production optimization. Large-scale applications of  adjoint-based re
 servoir optimization reported so far concern relatively  simple physics\, 
 in particular two-phase (oil-water) or three-phase  (oil-gas-water) applic
 ations. In contrast\, compositional simulation has  the added complexity o
 f frequent flash calculations and high  compressibilities which potentiall
 y complicate both the adjoint  computation and gradient-based optimization
 \, especially in the presence  of complex constraints. These aspects are i
 nvestigated using a new  adjoint implementation in a research reservoir si
 mulator designed on top  of an automatic differentiation framework coupled
  to a standard  large-scale nonlinear optimization package.  Based on sev
 eral examples  of increasing complexity we conclude that the AD-based adjo
 int  implementation is capable of accurately and efficiently computing  gr
 adients for multi-component reservoir flow. However\, optimization of  str
 ongly compressible flow with constraints on well rates or pressures  leads
  to potentially poor performance in conjunction with an external  optimiza
 tion package. We present a pragmatic but effective strategy to  overcome t
 his issue.
X-ALT-DESC:Adjoint-based gradients form an important ingredient of fast  op
 timization algorithms for computer-assisted history matching and  life-cyc
 le production optimization. Large-scale applications of  adjoint-based res
 ervoir optimization reported so far concern relatively  simple physics\, i
 n particular two-phase (oil-water) or three-phase  (oil-gas-water) applica
 tions. In contrast\, compositional simulation has  the added complexity of
  frequent flash calculations and high  compressibilities which potentially
  complicate both the adjoint  computation and gradient-based optimization\
 , especially in the presence  of complex constraints. These aspects are in
 vestigated using a new  adjoint implementation in a research reservoir sim
 ulator designed on top  of an automatic differentiation framework coupled 
 to a standard  large-scale nonlinear optimization package.&nbsp\; Based on
  several examples  of increasing complexity we conclude that the AD-based 
 adjoint  implementation is capable of accurately and efficiently computing
   gradients for multi-component reservoir flow. However\, optimization of 
  strongly compressible flow with constraints on well rates or pressures  l
 eads to potentially poor performance in conjunction with an external  opti
 mization package. We present a pragmatic but effective strategy to  overco
 me this issue. 
DTEND;TZID=Europe/Zurich:20121026T120000
END:VEVENT
BEGIN:VEVENT
UID:news284@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T233803
DTSTART;TZID=Europe/Zurich:20121012T110000
SUMMARY:Seminar in Numerical Analysis: Stefan Volkwein (Universität Konsta
 nz)
DESCRIPTION:We consider the following problem of error estimation for the o
 ptimal  control of nonlinear partial differential equations: Let an arbitr
 ary  admissible control function be given. How far is it from the next  lo
 cally optimal control? Under natural assumptions including a  second-order
  sufficient optimality condition for the (unknown) locally  optimal contro
 l\, we estimate the distance between the two controls. To  do this\, we ne
 ed some information on the lowest eigenvalue of the  reduced Hessian. We a
 pply this technique to a model reduced optimal  control problem obtained b
 y proper orthogonal decomposition (POD). The  distance between a local sol
 ution of the reduced problem and a local solution of the original problem i
 s estimated.
X-ALT-DESC:We consider the following problem of error estimation for the op
 timal  control of nonlinear partial differential equations: Let an arbitra
 ry  admissible control function be given. How far is it from the next  loc
 ally optimal control? Under natural assumptions including a  second-order 
 sufficient optimality condition for the (unknown) locally  optimal control
 \, we estimate the distance between the two controls. To  do this\, we nee
 d some information on the lowest eigenvalue of the  reduced Hessian. We ap
 ply this technique to a model reduced optimal  control problem obtained by
  proper orthogonal decomposition (POD). The  distance between a local solu
 tion of the reduced problem and a local solution of the original problem is
  estimated. 
DTEND;TZID=Europe/Zurich:20121012T120000
END:VEVENT
BEGIN:VEVENT
UID:news285@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T233949
DTSTART;TZID=Europe/Zurich:20120928T110000
SUMMARY:Seminar in Numerical Analysis: Frédéric Nataf (Université Pierre
  et Marie Curie)
DESCRIPTION:We introduce the time reversed absorbing conditions (TRAC) in t
 ime reversal methods. They make it possible to "recreate the past" wit
 hout knowing the source that emitted the back-propagated signals. Two a
 pplications to inverse problems are given in both the full and partial
  aperture case.
X-ALT-DESC:We introduce the time reversed absorbing conditions (TRAC) in ti
 me reversal methods. They make it possible to &quot\;recreate the past&
 quot\; without knowing the source that emitted the back-propagated sign
 als. Two applications to inverse problems are given in both the full an
 d partial aperture case.
DTEND;TZID=Europe/Zurich:20120928T120000
END:VEVENT
BEGIN:VEVENT
UID:news286@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T234139
DTSTART;TZID=Europe/Zurich:20120601T090000
SUMMARY:Seminar in Numerical Analysis: Walter Gautschi (Purdue University)
DESCRIPTION:Algorithms are developed for computing the coefficients in the 
  three-term recurrence relation of repeatedly modified orthogonal  polynom
 ials\, the modifications involving division of the orthogonality  measure 
 by a linear function with real or complex coefficient. The  respective Gau
 ssian quadrature rules can be used to account for simple  or multiple pole
 s that may be present in the integrand. Several examples  are given to ill
 ustrate this.
X-ALT-DESC:Algorithms are developed for computing the coefficients in the  
 three-term recurrence relation of repeatedly modified orthogonal  polynomi
 als\, the modifications involving division of the orthogonality  measure b
 y a linear function with real or complex coefficient. The  respective Gaus
 sian quadrature rules can be used to account for simple  or multiple poles
  that may be present in the integrand. Several examples  are given to illu
 strate this. 
DTEND;TZID=Europe/Zurich:20120601T100000
END:VEVENT
BEGIN:VEVENT
UID:news287@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T234452
DTSTART;TZID=Europe/Zurich:20120525T090000
SUMMARY:Seminar in Numerical Analysis: Wolfgang Bangerth (Texas A&M Univers
 ity)
DESCRIPTION:In many of the modern biomedical imaging modalities\, the measu
 rable  signal can be described as the solution of a partial differential  
 equation that depends nonlinearly on the tissue properties (the  "paramete
 rs") one would like to image. Consequently\, there are typically  no expli
 cit solution formulas for these so-called "inverse problems"  that can rec
 over the parameters from the measurements\, and the only way  to generate 
 body images from measurements is through numerical  approximation.\\r\\nTh
 e resulting parameter estimation schemes have  the underlying partial diff
 erential equations as side-constraints\, and  the solution of these optimi
 zation problems often requires solving the  partial differential equation 
 thousands or hundreds of thousands of times. The development of effic
 ient schemes is therefore of great interest for the practical use of s
 uch imag
 ing modalities in clinical  settings. In this talk\, the formulation and e
 fficient solution  strategies for such inverse problems will be discussed\
 , and we will demonstrate their efficacy using examples from our work on Op
 tical  Tomography\, a novel way of imaging tumors in humans and animals. T
 he  talk will conclude with an outlook to even more complex problems that 
  attempt to automatically optimize experimental setups to obtain better  i
 mages.
X-ALT-DESC:In many of the modern biomedical imaging modalities\, the measur
 able  signal can be described as the solution of a partial differential  e
 quation that depends nonlinearly on the tissue properties (the  &quot\;par
 ameters&quot\;) one would like to image. Consequently\, there are typicall
 y  no explicit solution formulas for these so-called &quot\;inverse proble
 ms&quot\;  that can recover the parameters from the measurements\, and the
  only way  to generate body images from measurements is through numerical 
  approximation.\nThe resulting parameter estimation schemes have  the unde
 rlying partial differential equations as side-constraints\, and  the solut
 ion of these optimization problems often requires solving the  partial dif
 ferential equation thousands or hundreds of thousands of times. The develo
 pment of efficient schemes is therefore of great  interest for the practic
 al use of such imaging modalities in clinical  settings. In this talk\, th
 e formulation and efficient solution  strategies for such inverse problems
 will be discussed\, and we will demonstrate their efficacy using examples 
 from our work on Optical  Tomography\, a novel way of imaging tumors in hu
 mans and animals. The  talk will conclude with an outlook to even more com
 plex problems that  attempt to automatically optimize experimental setups 
 to obtain better  images. 
DTEND;TZID=Europe/Zurich:20120525T100000
END:VEVENT
BEGIN:VEVENT
UID:news288@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T234712
DTSTART;TZID=Europe/Zurich:20120427T090000
SUMMARY:Seminar in Numerical Analysis: Daniel Kressner (EPFL)
DESCRIPTION:Verifying the stability of a matrix A under perturbations can b
 e a   challenging task\, especially when additional structure is imposed o
 n  the set of admissible perturbations. In the unstructured case\, a new  
 class of algorithms has recently been proposed by Guglielmi\, Lubich\, and
   Overton to efficiently compute extremal points (such as the right-most  
 point) of the pseudospectrum. In this talk\, we discuss two extensions of 
  these algorithms. First\, we show how subspace acceleration can be used  
 to significantly speed up convergence\, yielding a quadratically  converge
 nt subspace method. Second\, an extension to certain structured  pseudospe
 ctra is provided. This makes it possible to address structures (real Ha
 miltonian\, block diagonal\, ...) that have so far been inaccessible
  by existing techniques.\\r\\nThis talk is based on joint work with Nicola
  Guglielmi\, Christian Lubich\, and Bart Vandereycken.
X-ALT-DESC:Verifying the stability of a matrix <i>A</i> under perturbations
  can be a   challenging task\, especially when additional structure is imp
 osed on  the set of admissible perturbations. In the unstructured case\, a
  new  class of algorithms has recently been proposed by Guglielmi\, Lubich
 \, and  Overton to efficiently compute extremal points (such as the right-
 most  point) of the pseudospectrum. In this talk\, we discuss two extensio
 ns of  these algorithms. First\, we show how subspace acceleration can be 
 used  to significantly speed up convergence\, yielding a quadratically  co
 nvergent subspace method. Second\, an extension to certain structured  pse
 udospectra is provided. This makes it possible to address structures (r
 eal Hamiltonian\, block diagonal\, ...) that have so far been i
 naccessible by existing techniques.\nThis talk is based on joint work with
  Nicola Guglielmi\, Christian Lubich\, and Bart Vandereycken. 
DTEND;TZID=Europe/Zurich:20120427T100000
END:VEVENT
BEGIN:VEVENT
UID:news289@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T234846
DTSTART;TZID=Europe/Zurich:20120420T090000
SUMMARY:Seminar in Numerical Analysis: Ilario Mazzieri (MOX - Politecnico d
 i Milano)
DESCRIPTION:The study and development of spectral element (SE) methods for 
  simulating elastic wave propagation in seismic regions has undergone t
 remendous growth over the past ten years. SE methods are
  based on high-order Lagrangian interpolants sampled at the  Gauss-Legendr
 e-Lobatto quadrature points\, and combine the flexibility of  finite eleme
 nts with the accuracy of spectral techniques. Since they  are based on the
  weak formulation of the elastodynamics equations\, they  handle naturally
  both interface continuity and free boundary conditions\,  allowing very a
 ccurate resolutions of evanescent interface and surface  waves. Moreover\,
  SE methods retain a high level parallel structure\, thus  are well suited
  for massively parallel computations. The main drawback  of SE methods is 
 that they usually require a uniform polynomial order on  the whole computa
 tional domain\, and this can lead to an unreasonably  large computational 
 effort\, in particular in regions where a fine mesh  grid is needed alread
 y to describe accurately the domain geometry.\\r\\nHere\, we consider Dis
 continuous Galerkin (DGSE) and Mortar (MSE) spectral element methods c
 oupled with the leap-frog time integration scheme to simulate seismic w
 ave propagation in two- and three-dimensional heterogeneous media. The
  main advantage over conforming discretizations\, such as the SE metho
 d\, is that DGSE and MSE discretizations can accommodate discontinuiti
 es\, not
  only in the parameters\, but also in the  wavefield\, while preserving th
 e energy. The domain of interest Ω is assumed to be a union of polygon
 al subdomains Ωi. We allow this subdomain decomposition to be geometri
 cally no
 n-conforming. Inside each subdomain Ωi\, a conforming high order finite e
 lement space associated to a partition Thi(Ωi)  is introduced. We conside
 r different polynomial approximation degrees  within different subdomains.
  To handle non-conforming meshes and  non-uniform polynomial degrees acros
 s ∂Ωi \, a DG or a Mortar discretization is considered.\\r\\nApplicatio
 ns of the DGSE and MSE methods to simulate realistic seismic wave propagat
 ion problems are presented.\\r\\nJoint work with: P.F. Antonietti\, A. Qua
 rteroni and F. Rapetti.
X-ALT-DESC:The study and development of spectral element (SE) methods for  
 simulating elastic wave propagation in seismic regions has undergone tr
 emendous growth over the past ten years. SE methods are 
 based on high-order Lagrangian interpolants sampled at the  Gauss-Legendre
 -Lobatto quadrature points\, and combine the flexibility of  finite elemen
 ts with the accuracy of spectral techniques. Since they  are based on the 
 weak formulation of the elastodynamics equations\, they  handle naturally 
 both interface continuity and free boundary conditions\,  allowing very ac
 curate resolutions of evanescent interface and surface  waves. Moreover\, 
 SE methods retain a high level parallel structure\, thus  are well suited 
 for massively parallel computations. The main drawback  of SE methods is t
 hat they usually require a uniform polynomial order on  the whole computat
 ional domain\, and this can lead to an unreasonably  large computational e
 ffort\, in particular in regions where a fine mesh  grid is needed already
  to describe accurately the domain geometry.\nHere\, we consider Discont
 inuous Galerkin (DGSE) and Mortar (MSE) spectral element methods coupl
 ed with the leap-frog time integration scheme to simulate seismic wav
 e propagation in two- and three-dimensional heterogeneous media. The m
 ain advantage over conforming discretizations\, such as the SE method
 \, is that DGSE and MSE discretizations can accommodate discontinuiti
 es\, not only
  in the parameters\, but also in the  wavefield\, while preserving the ene
 rgy. The domain of interest Ω is assumed to be a union of polygonal su
 bdomains Ω<sub>i</sub>. We allow this subdomain decomposition to be g
 eometrica
 lly non-conforming. Inside each subdomain Ω<sub>i</sub>\, a conforming hi
 gh order finite element space associated to a partition <i>T</i><sub>hi</s
 ub>(Ω<sub>i</sub>)  is introduced. We consider different polynomial appro
 ximation degrees  within different subdomains. To handle non-conforming me
 shes and  non-uniform polynomial degrees across ∂Ω<sub>i</sub> \, a DG 
 or a Mortar discretization is considered.\nApplications of the DGSE and MS
 E methods to simulate realistic seismic wave propagation problems are pres
 ented.\nJoint work with: P.F. Antonietti\, A. Quarteroni and F. Rapetti. 
DTEND;TZID=Europe/Zurich:20120420T100000
END:VEVENT
BEGIN:VEVENT
UID:news290@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T235028
DTSTART;TZID=Europe/Zurich:20120323T090000
SUMMARY:Seminar in Numerical Analysis: Reinhold Schneider (TU Berlin)
DESCRIPTION:The DMRG algorithm (density matrix renormalization group algori
 thm)  introduced by S. White provides a powerful  tool for the numerical 
  treatment of spin systems. The DMRG version for the electronic  Schrödin
 ger equation\, the QDMRG (quantum chemistry density matrix renormaliza
 tion group) algorithm is less well known\, although it provides an app
 roximation of the full CI solution within polynomial complexity. Conc
 epts known from spin systems\, e.g. matrix product states and tree te
 nsor networks\, have recently been rediscovered in tensor product app
 roximation from a different perspective\, as the hierarchical Tucker r
 epresentation introduced by Hackbusch and coworkers and as TT-tensors
  (tensor trains) by Oseledets & Tyrtishnikov\, offering a promising a
 pproach for the numerical treatment of high-dimensional differential e
 quations. We have shown that under a full-rank condition TT tensors f
 orm a manifold\, and we characterize its tangent space\, e.g. to appl
 y the Dirac-Frenkel variational principle. We propose an alternating l
 inear scheme (ALS) approach for optimization in the TT format. A modi
 fied alternating 
 linear scheme  (MALS) applied to the electronic Schrödinger equation rese
 mbles exactly  the density matrix renormalization group algorithm (QDMRG).
  Identifying  the discretized Fock space with the tensor product space ⊗
 R^2 (⊗C^2)\, the formalism of second quantization is directly implemente
 d in the tensor treatment for numerical computation.\\r\\nJoint work with 
 Th. Rohwedder and S. Holtz
X-ALT-DESC:The DMRG algorithm (density matrix renormalization group algorit
 hm)  introduced by S. White provides a powerful&nbsp\; tool for the numeri
 cal  treatment of spin systems. The DMRG version for the electronic  Schr
 ödinger equation\, the QDMRG (quantum chemistry density matrix renorm
 alization group) algorithm is less well known\, although it provides a
 n approximation of the full CI solution within polynomial complexity.
  Concepts known from spin systems\, e.g. matrix product states and tr
 ee tensor networks\, have recently been rediscovered in tensor produc
 t approximation from a different perspective\, as the hierarchical Tu
 cker representation introduced by Hackbusch and coworkers and as TT-t
 ensors (tensor trains) by Oseledets &amp\; Tyrtishnikov\, offering a p
 romising approach for the numerical treatment of high-dimensional dif
 ferential equations. We have shown that under a full-rank condition T
 T tensors form a manifold\, and we characterize its tangent space\, e
 .g. to apply the Dirac-Frenkel variational principle. We propose an a
 lternating linear scheme (ALS) approach for optimization in the TT fo
 rmat. A modified alternating linear sc
 heme  (MALS) applied to the electronic Schrödinger equation resembles exa
 ctly  the density matrix renormalization group algorithm (QDMRG). Identify
 ing  the discretized Fock space with the tensor product space ⊗<i>R</i><
 sup>2</sup> (⊗<i>C</i><sup>2</sup>)\,&nbsp\; the formalism of second qua
 ntization is directly implemented in the tensor treatment for numerical co
 mputation.\nJoint work with Th. Rohwedder and S. Holtz 
DTEND;TZID=Europe/Zurich:20120323T100000
END:VEVENT
BEGIN:VEVENT
UID:news291@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T235154
DTSTART;TZID=Europe/Zurich:20120309T090000
SUMMARY:Seminar in Numerical Analysis: Luca Frediani (University of Tromsø
 )
DESCRIPTION:Most modern molecular electronic structure calculations are bas
 ed on  Density Functional Theory (DFT) due to the very convenient balance 
  between accuracy and required computational resources. The accuracy of  t
 he result is then dependent on the quality of the functional and the  chos
 en basis set. By replacing traditional Gaussian Type Orbitals (GTOs)  with
  Multiwavelets\, the basis set can be made practically complete\, leavin
 g the lack of the "exact" functional as the only source of error. We hav
 e implemented a Multiwavelet-based DFT code\, which makes use  of the inte
 gral formulation of the Kohn-Sham equations of DFT.  Different strategies 
 to optimize the density have been attempted and  will be presented. The ma
 in limitation of the present approach is the  large memory demand of the s
 oftware compared to traditional methods. In  order to overcome such a limi
 tation a massively parallel implementation  has been developed for the ker
 nel of the code: the application of the  Green's operator represented in t
 he so called Non-Standard form.
X-ALT-DESC:Most modern molecular electronic structure calculations are base
 d on  Density Functional Theory (DFT) due to the very convenient balance  
 between accuracy and required computational resources. The accuracy of  th
 e result is then dependent on the quality of the functional and the  chose
 n basis set. By replacing traditional Gaussian Type Orbitals (GTOs)  with 
 Multiwavelets\, the basis set can be made practically complete\, leaving
  the lack of the &quot\;exact&quot\; functional as the only source of er
 ror. We have implemented a Multiwavelet-based DFT code\, which makes use  
 of the integral formulation of the Kohn-Sham equations of DFT.  Different 
 strategies to optimize the density have been attempted and  will be presen
 ted. The main limitation of the present approach is the  large memory dema
 nd of the software compared to traditional methods. In  order to overcome 
 such a limitation a massively parallel implementation  has been developed 
 for the kernel of the code: the application of the  Green's operator repre
 sented in the so called Non-Standard form. 
DTEND;TZID=Europe/Zurich:20120309T100000
END:VEVENT
BEGIN:VEVENT
UID:news292@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T235420
DTSTART;TZID=Europe/Zurich:20111216T090000
SUMMARY:Seminar in Numerical Analysis: Bernard Haasdonk (Universität Stutt
 gart)
DESCRIPTION:In this presentation we introduce a method for rapid and certif
 ied  parameter optimization of problems with PDE constraints given by  evo
 lution equations. We make use of an RB-formulation for a general  class of
  evolution problems covering standard iterative time-stepping  schemes. Th
 e reduced spaces for such problems are beneficially  constructed by the PO
 D-Greedy procedure\, for which we recently provided  theoretical foundatio
 n by convergence rate proofs. Extensions of this  procedure involve parame
 ter- and time-partitioning approaches. We will  demonstrate\, how these in
 gredients can be used in iterative direct  parameter optimization problems
 . In addition to approximate surrogate  optimization results\, we provide 
 rigorous a-posteriori error bounds for  solutions\, outputs\, sensitivitie
 s and optimal parameters.
X-ALT-DESC:In this presentation we introduce a method for rapid and certifi
 ed  parameter optimization of problems with PDE constraints given by  evol
 ution equations. We make use of an RB-formulation for a general  class of 
 evolution problems covering standard iterative time-stepping  schemes. The
  reduced spaces for such problems are beneficially  constructed by the POD
 -Greedy procedure\, for which we recently provided  theoretical foundation
  by convergence rate proofs. Extensions of this  procedure involve paramet
 er- and time-partitioning approaches. We will  demonstrate how these ing
 redients can be used in iterative direct  parameter optimization problems.
  In addition to approximate surrogate  optimization results\, we provide r
 igorous a-posteriori error bounds for  solutions\, outputs\, sensitivities
  and optimal parameters. 
DTEND;TZID=Europe/Zurich:20111216T100000
END:VEVENT
BEGIN:VEVENT
UID:news293@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T235602
DTSTART;TZID=Europe/Zurich:20111209T090000
SUMMARY:Seminar in Numerical Analysis: Daniel Weiss (Universität Tübingen
 )
DESCRIPTION:There are many systems which exhibit different scales which act
  in the  following way: A slow dynamic of interest is driven or affected b
 y  highly-oscillatory components of the system. A simple example is an  o
 ld-fashioned alarm clock\, which moves on a table driven by the fast  move
 ments of the clapper.\\r\\nWe will explain the Heterogeneous  Multiscale M
 ethod (HMM)[1]\, which is believed to provide a numerical  method for all 
 kinds of multiscale systems to overcome the difficulties  of numerical inte
 gration generated by highly-oscillatory components. We  will formulate the
  HMM for highly-oscillatory Hamiltonian systems with  solution-dependent f
 requencies\, more precisely for the double spring  pendulum with very stiff 
 springs. Finally we will discuss the drawbacks  of this method in case of 
 solution-dependent frequencies. \\r\\n[1] E\, W.\; Engquist\, B.: The hete
 rogeneous multi-scale method\, Comm. Math. Sci.\, 1\, 87--133\, 2003.
X-ALT-DESC:There are many systems which exhibit different scales which act 
 in the  following way: A slow dynamic of interest is driven or affected by
   highly-oscillatory components of the system. A simple example is an  ol
 d-fashioned alarm clock\, which moves on a table driven by the fast  movem
 ents of the clapper.\nWe will explain the Heterogeneous  Multiscale Method
  (HMM)[1]\, which is believed to provide a numerical  method for all kinds 
 of multiscale systems to overcome the difficulties  of numerical integrati
 on generated by highly-oscillatory components. We  will formulate the HMM 
 for highly-oscillatory Hamiltonian systems with  solution-dependent freque
 ncies\, more precisely for the double spring  pendulum with very stiff sprin
 gs. Finally we will discuss the drawbacks  of this method in case of solut
 ion-dependent frequencies. \n[1] E\, W.\; Engquist\, B.: The heterogeneous
  multi-scale method\, <i>Comm. Math. Sci.</i>\, <b>1</b>\, 87--133\, 2003.
  
DTEND;TZID=Europe/Zurich:20111209T100000
END:VEVENT
BEGIN:VEVENT
UID:news294@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T235800
DTSTART;TZID=Europe/Zurich:20111202T090000
SUMMARY:Seminar in Numerical Analysis: Andreas Stock (Universität Stuttgar
 t)
DESCRIPTION:In the Particle-In-Cell (PIC) method particles and electromagne
 tic  fields are fully self-consistently coupled to solve the Vlasov equation
   describing the behavior of rarefied plasma flows without collisions. In 
  the underlying physical model of the PIC method different time-scales  oc
 cur\, i.e. the electromagnetic fields propagating with the speed of  ligh
 t and the particles moving with speeds much lower than the  electromagneti
 c fields. Due to the CFL condition for explicit time  integration schemes 
 the time step is restricted to the largest  propagating speed\, yielding a
 n inefficient time discretization for the  slower particles. We developed 
 a time domain decomposition based on a  multirate multistep technique to t
 reat each component on its specific  time-scale\, leading to an efficient t
 ime-integration algorithm.
X-ALT-DESC:In the Particle-In-Cell (PIC) method particles and electromagnet
 ic  fields are fully self-consistently coupled to solve the Vlasov equation 
  describing the behavior of rarefied plasma flows without collisions. In  
 the underlying physical model of the PIC method different time-scales  occ
 ur\, i.e. the electromagnetic fields propagating with the speed of  light
  and the particles moving with speeds much lower than the  electromagnetic
  fields. Due to the CFL condition for explicit time  integration schemes t
 he time step is restricted to the largest  propagating speed\, yielding an
  inefficient time discretization for the  slower particles. We developed a
  time domain decomposition based on a  multirate multistep technique to tr
 eat each component on its specific  time-scale\, leading to an efficient ti
 me-integration algorithm. 
DTEND;TZID=Europe/Zurich:20111202T100000
END:VEVENT
BEGIN:VEVENT
UID:news295@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180716T235940
DTSTART;TZID=Europe/Zurich:20111118T090000
SUMMARY:Seminar in Numerical Analysis: Matthias Bollhöfer (Technische Univ
 ersität Braunschweig)
DESCRIPTION:Hierarchical matrix approximations have become an attractive nu
 merical  approach in solving partial differential equations whenever the a
 nalytic  solution can be represented by a kernel function that allows for
   approximate local separable representations. The philosophy of a  hierar
 chical matrix approximation consists of borrowing a matrix  partition from
  an admissibility condition of the underlying analytic  model and working 
 with blocks that are expected to be of low rank. While  the existence of h
 ierarchical matrix approximations is relatively well  understood\, the con
 crete way of numerically computing a suitable  approximation still raises 
 some open questions such as the h-independent convergence of the computed 
 approximation.\\r\\nIn  this talk we present a new technique to locally pr
 eserve constraints  inside the hierarchical matrix approximation. Numerica
 l experiments  indicate that imposing these local constraints leads to a con
 stant number  of iteration steps when solving elliptic partial differentia
 l equations  of second order while without preserving these constraints th
 e number of  iteration steps grows as h → 0. We will further discuss this
  approach from the theoretical point of view and will sketch why our appro
 ximate hierarchical LU decomposition leads to a spectrally equivalent approx
 imation.\\r\\nThis is joint work with Mario Bebendorf and Michael Bratsch 
 from the University of Bonn.
X-ALT-DESC:Hierarchical matrix approximations have become an attractive num
 erical  approach in solving partial differential equations whenever the an
 alytic  solution can be represented by a kernel function that allows for 
  approximate local separable representations. The philosophy of a  hierarc
 hical matrix approximation consists of borrowing a matrix  partition from 
 an admissibility condition of the underlying analytic  model and working w
 ith blocks that are expected to be of low rank. While  the existence of hi
 erarchical matrix approximations is relatively well  understood\, the conc
 rete way of numerically computing a suitable  approximation still raises s
 ome open questions such as the <i>h</i>-independent convergence of the com
 puted approximation.\nIn  this talk we present a new technique to locally 
 preserve constraints  inside the hierarchical matrix approximation. Numeri
 cal experiments  indicate that imposing these local constraints leads to a c
 onstant number  of iteration steps when solving elliptic partial different
 ial equations  of second order while without preserving these constraints 
 the number of  iteration steps grows as <i>h</i> → 0. We will further dis
 cuss this approach from the theoretical point of view and will sketch why 
 our approximate hierarchical <i>LU</i> decomposition leads to a spectrally e
 quivalent approximation.\nThis is joint work with Mario Bebendorf and Mich
 ael Bratsch from the University of Bonn. 
DTEND;TZID=Europe/Zurich:20111118T100000
END:VEVENT
BEGIN:VEVENT
UID:news296@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180717T000114
DTSTART;TZID=Europe/Zurich:20111104T090000
SUMMARY:Seminar in Numerical Analysis: Sara Minisini (Shell Global Solution
 s International)
DESCRIPTION:Efficient and accurate modeling of the wave equation is importa
 nt for  seismic exploration. We compare the performance of explicit time  
 stepping with the discontinuous Galerkin method and with continuous  augme
 nted third and fourth-order mass-lumped elements for tetrahedra.  There ar
 e two choices for the latter\, one with a more favorable CFL  number. Numeri
 cal experiments illustrate the accuracy\, usefulness\, and  versatility of
  these methods when solving 3D problems in inhomogeneous  media.
X-ALT-DESC:Efficient and accurate modeling of the wave equation is importan
 t for  seismic exploration. We compare the performance of explicit time  s
 tepping with the discontinuous Galerkin method and with continuous  augmen
 ted third and fourth-order mass-lumped elements for tetrahedra.  There are
  two choices for the latter\, one with a more favorable CFL  number. Numeric
 al experiments illustrate the accuracy\, usefulness\, and  versatility of 
 these methods when solving 3D problems in inhomogeneous  media. 
DTEND;TZID=Europe/Zurich:20111104T100000
END:VEVENT
BEGIN:VEVENT
UID:news297@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180717T000250
DTSTART;TZID=Europe/Zurich:20111028T090000
SUMMARY:Seminar in Numerical Analysis: Oliver Ernst (Technische Universitä
 t Bergakademie Freiberg)
DESCRIPTION:We present a case study for probabilistic uncertainty quantific
 ation  (UQ) applied to groundwater flow in the context of site assessment 
 for  radioactive waste disposal based on data from the Waste Isolation Pil
 ot  Plant (WIPP) in Carlsbad\, New Mexico. In this context\, the primary  
 quantity of interest is the time it takes for a particle of  radioactivity
  to be transported with the groundwater from the repository  to man's envi
 ronment. The mathematical model consists of a stationary  diffusion equati
 on for the hydraulic head in which the hydraulic  conductivity coefficient
  is a random field. Once the (stochastic)  hydraulic head is computed\, co
 ntaminant transport can be modelled by  particle tracing in the associated
  velocity field.\\r\\nWe compare two approaches: Gaussian process emulator
 s and stochastic  collocation combined with geostatistical techniques for 
 determining the  parameters of the input random field's probability law. T
 he second  approach involves the numerical solution of the PDE with random
  data as a  parametrized deterministic system. The calculation of the stat
 istics of  the travel time from the solution of the stochastic model is fo
 rmulated  for each of the methods being studied and the results compared.
X-ALT-DESC:We present a case study for probabilistic uncertainty quantifica
 tion  (UQ) applied to groundwater flow in the context of site assessment f
 or  radioactive waste disposal based on data from the Waste Isolation Pilo
 t  Plant (WIPP) in Carlsbad\, New Mexico. In this context\, the primary  q
 uantity of interest is the time it takes for a particle of  radioactivity 
 to be transported with the groundwater from the repository  to man's envir
 onment. The mathematical model consists of a stationary  diffusion equatio
 n for the hydraulic head in which the hydraulic  conductivity coefficient 
 is a random field. Once the (stochastic)  hydraulic head is computed\, con
 taminant transport can be modelled by  particle tracing in the associated 
 velocity field.\nWe compare two approaches: Gaussian process emulators and
  stochastic  collocation combined with geostatistical techniques for deter
 mining the  parameters of the input random field's probability law. The se
 cond  approach involves the numerical solution of the PDE with random data
  as a  parametrized deterministic system. The calculation of the statistic
 s of  the travel time from the solution of the stochastic model is formula
 ted  for each of the methods being studied and the results compared. 
DTEND;TZID=Europe/Zurich:20111028T100000
END:VEVENT
BEGIN:VEVENT
UID:news298@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180717T000533
DTSTART;TZID=Europe/Zurich:20111021T090000
SUMMARY:Seminar in Numerical Analysis: Robert C. Dalang (EPFL)
DESCRIPTION:I will present an introduction to stochastic partial differenti
 al equations\, with an emphasis on the stochastic heat and wave equations.
  These equations describe the motion of a medium subject to random excitat
 ion\, which is usually taken to be Gaussian space-time white noise. This p
 robabilistic object will be presented through several examples as well as 
 defined mathematically. Some important questions and results obtained will
  also be discussed.
X-ALT-DESC:I will present an introduction to stochastic partial differentia
 l equations\, with an emphasis on the stochastic heat and wave equations. 
 These equations describe the motion of a medium subject to random excitati
 on\, which is usually taken to be Gaussian space-time white noise. This pr
 obabilistic object will be presented through several examples as well as d
 efined mathematically. Some important questions and results obtained will 
 also be discussed. 
DTEND;TZID=Europe/Zurich:20111021T100000
END:VEVENT
BEGIN:VEVENT
UID:news299@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20180717T000852
DTSTART;TZID=Europe/Zurich:20111014T090000
SUMMARY:Seminar in Numerical Analysis: Annika Lang (ETHZ)
DESCRIPTION:We analyze the convergence and complexity of Multi-Level Monte 
 Carlo  (MLMC) discretizations of a class of abstract stochastic\, paraboli
 c  equations driven by square integrable martingales. We show\, under  reg
 ularity assumptions on the solution that are minimal  under certain  crit
 eria\, that the combination of piecewise linear\, continuous  multi-level 
 Finite Element discretizations in space and  Euler--Maruyama  discretizat
 ions in time yields mean square convergence of order one in  space and of 
 order 1/2 in time to the expected value of the mild  solution. The complex
 ity of the multi-level estimator is shown to scale  log-linearly with resp
 ect to the corresponding work to generate a  single solution path on the f
 inest mesh\, resp. of the corresponding  deterministic parabolic problem o
 n the finest mesh. Examples are  provided for Lévy-driven SPDEs as well as
  equations for randomly forced  surface diffusions.
X-ALT-DESC:We analyze the convergence and complexity of Multi-Level Monte C
 arlo  (MLMC) discretizations of a class of abstract stochastic\, parabolic
   equations driven by square integrable martingales. We show\, under  regu
 larity assumptions on the solution that are minimal&nbsp\; under certain  
 criteria\, that the combination of piecewise linear\, continuous  multi-le
 vel Finite Element discretizations in space and&nbsp\; Euler--Maruyama  di
 scretizations in time yields mean square convergence of order one in  spac
 e and of order 1/2 in time to the expected value of the mild  solution. Th
 e complexity of the multi-level estimator is shown to scale  log-linearly 
 with respect to the corresponding work to generate a  single solution path
  on the finest mesh\, resp. of the corresponding  deterministic parabolic 
 problem on the finest mesh. Examples are  provided for Lévy-driven SPDEs a
 s well as equations for randomly forced  surface diffusions. 
DTEND;TZID=Europe/Zurich:20111014T100000
END:VEVENT
END:VCALENDAR
