Many problems in shape optimization involve constraints in the form of one or more partial differential equations. In practice, the material properties of the underlying shape on which a PDE is defined are not known exactly; it is natural to model them by a probability distribution based on empirical measurements and to incorporate this information when designing an optimal shape. Additionally, one might wish to obtain a shape whose response is robust to certain external inputs, such as forces. It is helpful to view shape optimization problems subject to uncertainty through the lens of stochastic optimization, where a wealth of theory and algorithms already exists for finite-dimensional problems. The focus will be on the algorithmic handling of these problems in the case of a high stochastic dimension. Stochastic approximation, which dynamically samples from the stochastic space over the course of the iterations, is favored in this case, and we show how such methods can be applied to shape optimization. We study the classical stochastic gradient method, which was introduced in 1951 by Robbins and Monro and is widely used in machine learning. In particular, we investigate its application to infinite-dimensional shape manifolds. Further, we present numerical examples showing the performance of the method, including in combination with the augmented Lagrangian method for problems with geometric constraints. Joint work with: Kathrin Welker, Estefania Loayza-Romero, Tim Suchan
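To illustrate the Robbins-Monro iteration mentioned above, the following is a minimal sketch of the classical stochastic gradient method on a toy one-dimensional problem (not the shape optimization setting of the talk): step sizes t_n = c/(n+1) satisfy the usual Robbins-Monro conditions (their sum diverges while the sum of their squares is finite). All names and the toy objective here are illustrative assumptions.

```python
import random

def stochastic_gradient(grad_sample, x0, steps=2000, c=1.0):
    """Robbins-Monro iteration x_{n+1} = x_n - t_n * g(x_n, xi_n),
    where g is a sampled (stochastic) gradient and the step sizes
    t_n = c/(n+1) satisfy sum t_n = inf and sum t_n^2 < inf."""
    x = x0
    for n in range(steps):
        t = c / (n + 1)  # diminishing step size
        x = x - t * grad_sample(x)
    return x

# Toy example: minimize E[(x - xi)^2 / 2] with xi ~ Uniform(-1, 3).
# The minimizer is E[xi] = 1, and a sampled gradient is (x - xi).
random.seed(0)
x_star = stochastic_gradient(lambda x: x - random.uniform(-1.0, 3.0), x0=0.0)
```

With these step sizes the iterate converges to the true minimizer in expectation; in the talk's setting the Euclidean update is replaced by a retraction step along a (stochastic) shape gradient on an infinite-dimensional shape manifold.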

For further information about the seminar, please visit this webpage.
