TY - JOUR
T1 - Neural control of discrete weak formulations
T2 - Galerkin, least squares & minimal-residual methods with quasi-optimal weights
AU - Brevis, Ignacio
AU - Muga, Ignacio
AU - van der Zee, Kristoffer G.
N1 - Funding Information:
The authors are grateful to the reviewers for their helpful suggestions. KvdZ acknowledges helpful discussions with Dante Kalise, Hamd Alsobhi, Anne Boschman and Andrew Stuart. This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 777778 (MATHROCKS). IM acknowledges support from the project DI Investigación Innovadora Interdisciplinaria PUCV 2021 No 039.409/2021: Nanoiónica: Un enfoque interdisciplinario. The work by IB was partially supported by ANID FONDECYT/Postdoctorado No 3200827. The research by KvdZ was supported by the Engineering and Physical Sciences Research Council (EPSRC), UK under Grants EP/T005157/1 and EP/W010011/1.
Publisher Copyright:
© 2022 The Authors
PY - 2022
Y1 - 2022
N2 - There is tremendous potential in using neural networks to optimize numerical methods. In this paper, we introduce and analyze a framework for the neural optimization of discrete weak formulations, suitable for finite element methods. The main idea of the framework is to include a neural-network function acting as a control variable in the weak form. Finding the neural control that (quasi-)minimizes a suitable cost (or loss) functional then yields a numerical approximation with desirable attributes. In particular, the framework naturally allows the incorporation of known data of the exact solution, or of stabilization mechanisms (e.g., to remove spurious oscillations). The main result of our analysis pertains to the well-posedness and convergence of the associated constrained-optimization problem. In particular, we prove, under certain conditions, that the discrete weak forms are stable, that quasi-minimizing neural controls exist, and that these controls converge quasi-optimally. We specialize the analysis results to Galerkin, least-squares and minimal-residual formulations, where the neural-network dependence appears in the form of suitable weights. Elementary numerical experiments support our findings and demonstrate the potential of the framework.
AB - There is tremendous potential in using neural networks to optimize numerical methods. In this paper, we introduce and analyze a framework for the neural optimization of discrete weak formulations, suitable for finite element methods. The main idea of the framework is to include a neural-network function acting as a control variable in the weak form. Finding the neural control that (quasi-)minimizes a suitable cost (or loss) functional then yields a numerical approximation with desirable attributes. In particular, the framework naturally allows the incorporation of known data of the exact solution, or of stabilization mechanisms (e.g., to remove spurious oscillations). The main result of our analysis pertains to the well-posedness and convergence of the associated constrained-optimization problem. In particular, we prove, under certain conditions, that the discrete weak forms are stable, that quasi-minimizing neural controls exist, and that these controls converge quasi-optimally. We specialize the analysis results to Galerkin, least-squares and minimal-residual formulations, where the neural-network dependence appears in the form of suitable weights. Elementary numerical experiments support our findings and demonstrate the potential of the framework.
KW - Artificial neural networks
KW - Data-driven discretization
KW - Optimal neural control
KW - Quasi-minimization
KW - Quasi-optimal convergence
KW - Weighted finite element methods
UR - http://www.scopus.com/inward/record.url?scp=85141408017&partnerID=8YFLogxK
U2 - 10.1016/j.cma.2022.115716
DO - 10.1016/j.cma.2022.115716
M3 - Article
AN - SCOPUS:85141408017
JO - Computer Methods in Applied Mechanics and Engineering
JF - Computer Methods in Applied Mechanics and Engineering
SN - 0045-7825
M1 - 115716
ER -
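Editor's note: the abstract above describes weighted Galerkin, least-squares and minimal-residual formulations in which the weights are produced by a neural network acting as a control variable. The following is a purely illustrative LaTeX sketch of such a neurally weighted least-squares formulation; the operator L, weight w_theta, source f, discrete space U_h and cost J are assumed notation for illustration only, not taken from the paper.

% Hedged sketch: a weighted least-squares discretization whose weight
% w_theta is a neural network, with the control theta chosen to
% (quasi-)minimize a cost J, e.g. a misfit against known solution data.
\[
  u_h(\theta) \;=\; \operatorname*{arg\,min}_{v_h \in U_h}
  \int_{\Omega} w_\theta(x)\,\bigl|\mathcal{L} v_h(x) - f(x)\bigr|^{2}\,\mathrm{d}x,
  \qquad
  \theta^{\ast} \;\in\; \operatorname*{arg\,min}_{\theta}\; J\bigl(u_h(\theta)\bigr).
\]

Stability of the inner discrete problem for each admissible weight, together with existence and quasi-optimal convergence of quasi-minimizing controls, is the type of result the abstract refers to.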