(542c) A Data-Driven Approach to Determining Problem Well-Posedness

Authors 

Kevrekidis, I. G. - Presenter, Princeton University
Bertalan, T., Johns Hopkins University
Rebrova, E., Princeton University
Kevrekidis, G., University of Massachusetts at Amherst
In recent years, physics-informed neural networks (PINNs) [1,2] have (re)emerged as an alternative to direct numerical simulation (DNS) for producing solutions to differential equations (DEs). The key idea is that easy access to derivatives of the approximant can be used to create a loss function based on the residuals of the DE at all discretization locations in the (space-)time domain. This residual term is combined with a second important loss term: the solution should match provided values or derivatives at particular supervised points, conditions analogous to the traditional initial and boundary conditions of DNS. A major strength of the PINN approach is that the supervisory data need not be prescribed at the domain locations typically used for initial and boundary conditions, but can instead be opportunistically sampled throughout the domain of interest ("internal" as opposed to "boundary" conditions).
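
To make the two-term loss concrete, here is a minimal sketch in JAX (our own illustration, not the authors' implementation; the tiny network, the 1-D advection equation u_t + c u_x = 0, and all point sets are assumptions made purely for the example):

import jax
import jax.numpy as jnp

def u_theta(params, x, t):
    # Tiny fully connected approximant with one hidden tanh layer.
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ jnp.array([x, t]) + b1)
    return (W2 @ h + b2)[0]

def pde_residual(params, x, t, c=1.0):
    # Automatic differentiation gives easy access to the approximant's derivatives.
    u_x = jax.grad(u_theta, argnums=1)(params, x, t)
    u_t = jax.grad(u_theta, argnums=2)(params, x, t)
    return u_t + c * u_x

def pinn_loss(params, colloc, xs, ts, us):
    # Term 1: DE residual enforced at all collocation points (x, t).
    res = jax.vmap(lambda p: pde_residual(params, p[0], p[1]))(colloc)
    # Term 2: match supplied values at scattered "internal" supervised points,
    # playing the role of initial/boundary conditions.
    pred = jax.vmap(lambda x, t: u_theta(params, x, t))(xs, ts)
    return jnp.mean(res**2) + jnp.mean((pred - us)**2)

key = jax.random.PRNGKey(0)
params = [0.1 * jax.random.normal(key, (16, 2)), jnp.zeros(16),
          0.1 * jax.random.normal(key, (1, 16)), jnp.zeros(1)]
colloc = jax.random.uniform(key, (64, 2))          # (x, t) collocation points
xs, ts = jnp.array([0.2, 0.5, 0.8]), jnp.array([0.1, 0.4, 0.7])
us = jnp.zeros(3)                                  # illustrative supervised values
print(pinn_loss(params, colloc, xs, ts, us))

Training would then minimize pinn_loss over params with any gradient-based optimizer, the gradient itself coming from jax.grad(pinn_loss).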

Another apparent advantage is that the variational solution of the PINN problem will return results even if the original problem is not well posed (whether through overspecification or underspecification of the constraints). However, although the method will produce an approximant at all domain locations, and a well-trained PINN will extrapolate reasonably to all locations where the DE-residual term is enforced, this does not mean that the usual issues of well-posedness, so critical for DNS, are absent for PINNs. Here, we explore (for both the under-constrained and the over-constrained case) how the lack of well-posedness affects the approximant produced by PINN training.
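
A toy linear-algebra analogue (our illustration, not taken from the abstract) makes this behavior concrete: a least-squares, i.e. variational, formulation happily returns a minimizer whether the system is over-determined (absorbing the incompatibility into a nonzero residual) or under-determined (silently selecting one of infinitely many minimizers):

import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
A_over = jax.random.normal(key, (8, 4))    # more constraints than unknowns
A_under = jax.random.normal(key, (4, 8))   # fewer constraints than unknowns

for A in (A_over, A_under):
    b = jax.random.normal(key, (A.shape[0],))
    x, _, rank, sv = jnp.linalg.lstsq(A, b)
    # Over-determined case: a nonzero residual remains. Under-determined case:
    # infinitely many minimizers exist; lstsq returns the minimum-norm one.
    print(A.shape, "rank:", rank, "residual norm:", jnp.linalg.norm(A @ x - b))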

In particular, we relate the richness of the variational solutions to the number of prescribed characteristics in wave-type problems, and we draw analogies to developments in randomized linear algebra.

We pursue this variational approach not only for PINNs but also for more traditional numerical discretizations, and we conclude with some recommendations on metrics for diagnosing over- and under-determination in PINN training.
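
One plausible diagnostic in this spirit, sketched below purely as an assumption on our part (the abstract does not spell out the recommended metrics), is the singular-value spectrum of the Jacobian of the stacked residuals with respect to the network parameters: a tail of near-zero singular values signals under-determination, while a loss that plateaus well above zero signals over-determination.

import jax
import jax.numpy as jnp

def stacked_residuals(theta, pts):
    # Placeholder residual map r(theta); in a real PINN this would stack the
    # DE residuals at the collocation points and the supervised-data mismatches.
    return jnp.tanh(pts @ theta[:3]) + theta[3]

theta = jnp.ones(4)
pts = jax.random.normal(jax.random.PRNGKey(1), (10, 3))
J = jax.jacfwd(stacked_residuals)(theta, pts)    # shape (n_residuals, n_params)
sv = jnp.linalg.svd(J, compute_uv=False)
print("singular values:", sv)                    # near-zero tail => rank deficiency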

[1]: Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations." Journal of Computational Physics 378 (2019): 686-707.

[2]: Lagaris, Isaac E., Aristidis Likas, and Dimitrios I. Fotiadis. "Artificial neural networks for solving ordinary and partial differential equations." IEEE Transactions on Neural Networks 9.5 (1998): 987-1000.