TY - JOUR

T1 - A machine-learning minimal-residual (ML-MRes) framework for goal-oriented finite element discretizations

AU - Brevis, Ignacio

AU - Muga, Ignacio

AU - van der Zee, Kristoffer G.

N1 - Publisher Copyright:
© 2020 Elsevier Ltd

PY - 2021/8/1

Y1 - 2021/8/1

N2 - We introduce the concept of machine-learning minimal-residual (ML-MRes) finite element discretizations of partial differential equations (PDEs), which resolve quantities of interest with striking accuracy, regardless of the underlying mesh size. The methods are obtained within a machine-learning framework during which the parameters defining the method are tuned against available training data. In particular, we use a provably stable parametric Petrov–Galerkin method that is equivalent to a minimal-residual formulation using a weighted norm. While the trial space is a standard finite element space, the test space has parameters that are tuned in an off-line stage. Finding the optimal test space therefore amounts to obtaining a goal-oriented discretization that is completely tailored towards the quantity of interest. We use an artificial neural network to define the parametric family of test spaces. Using numerical examples for the Laplacian and advection equation in one and two dimensions, we demonstrate that the ML-MRes finite element method has superior approximation of quantities of interest even on very coarse meshes.

KW - Data-driven algorithms

KW - Goal-oriented finite elements

KW - Machine-learning acceleration

KW - Petrov–Galerkin method

KW - Residual minimization

KW - Weighted inner-products

UR - http://www.scopus.com/inward/record.url?scp=85090484698&partnerID=8YFLogxK

U2 - 10.1016/j.camwa.2020.08.012

DO - 10.1016/j.camwa.2020.08.012

M3 - Article

AN - SCOPUS:85090484698

SN - 0898-1221

VL - 95

SP - 186

EP - 199

JO - Computers and Mathematics with Applications

JF - Computers and Mathematics with Applications

ER -