
Physics-Constrained Deep Learning of Unmodeled Physics in Systems Governed By Stochastic Differential Equations

  • Pricing (Individuals):
    AIChE Member Credits: 0.5
    AIChE Members: $19.00
    AIChE Graduate Student Members: Free
    AIChE Undergraduate Student Members: Free
    Non-Members: $29.00
  • Type:
    Conference Presentation
  • Conference Type:
    AIChE Annual Meeting
  • Presentation Date:
    November 8, 2021
  • Duration:
    15 minutes
  • Skill Level:
    Intermediate
  • PDHs:
    0.50


The recent works of [1-4] demonstrate how constraining loss functions with partial differential equations that reflect underlying system physics (e.g., conservation of momentum) can enhance the robustness and efficiency of training deep learning models. These physics-constrained neural networks (also referred to as physics-informed neural networks) can enable discovery of governing equations and reduced-order models, along with prediction of complex dynamics from incomplete models and data. Even more recently, the physics-constrained neural network framework has been extended with adversarial networks for uncertainty propagation through, and solution of, stochastic differential equations (SDEs) [5-7]. SDEs can be used to model the dynamics of a wide variety of complex systems, including those involving electrical and cell signal processing, colloidal/molecular self-assembly, and chemical reactions. Learning the unmodeled physics within these SDEs (e.g., drift and diffusion coefficients) is crucial for developing a fundamental understanding of these systems’ stochastic and nonlinear behavior [8].
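As a minimal, hypothetical sketch of the physics-constrained loss idea (not the implementation in [1-4]), consider fitting a one-parameter trial solution u(x; w) = exp(w·x) to the ODE du/dx + u = 0: a data-fit term and an ODE-residual penalty at collocation points are minimized jointly, and gradient descent recovers w ≈ -1:

```python
import numpy as np

# Hypothetical sketch of a physics-constrained loss: the trial solution
# u(x; w) = exp(w * x) is trained so that the residual of the governing
# equation du/dx + u = 0 vanishes at collocation points, alongside a
# data term enforcing the observation u(0) = 1.
x_col = np.linspace(0.0, 1.0, 20)   # collocation points for the residual

def loss(w):
    u = np.exp(w * x_col)
    dudx = w * u                         # exact derivative of the trial solution
    physics = np.mean((dudx + u) ** 2)   # ODE residual penalty
    data = (np.exp(w * 0.0) - 1.0) ** 2  # data-fit term (zero here by construction)
    return physics + data

w = 0.5                                  # arbitrary initial guess
for _ in range(500):
    g = (loss(w + 1e-5) - loss(w - 1e-5)) / 2e-5  # finite-difference gradient
    w -= 0.05 * g

# w converges toward -1, recovering the true solution u(x) = exp(-x)
```

In the cited works the trial solution is a deep network and gradients come from automatic differentiation rather than finite differences, but the loss structure (data term plus physics-residual term) is the same.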

We propose a moment-matching strategy for training deep neural networks to learn constitutive equations that represent unmodeled physics in SDEs. The first step is to collect state trajectory data over time under various input profiles using either an experimental system or a high-fidelity simulator. Since the system evolution is inherently stochastic, the “experiment” must be repeated multiple times over a finite horizon so that moment trajectories can be estimated. Using the known structure of the SDE, we can apply established uncertainty propagation methods (e.g., the unscented transform) to predict the moment trajectories over time for fixed neural network parameters; the unknown weight and bias values represent the unmodeled physics. To train these unknown neural network parameters, we construct a loss function from the predicted and measured moments and develop an efficient training algorithm that leverages recent advances in automatic differentiation and stochastic gradient descent (SGD) [9-10].
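The steps above can be illustrated on a deliberately simple, hypothetical case (not the authors' code): a scalar Ornstein-Uhlenbeck SDE dX = -θX dt + σ dW, where the unknown drift rate θ stands in for the unmodeled physics. "Experimental" moment trajectories are estimated from a repeated Euler-Maruyama ensemble with the true θ; predicted moments come from propagating the mean/variance recursions for a candidate θ; and the mismatch is minimized by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, n_rep = 0.01, 100, 2000
sigma, theta_true, x0 = 0.5, 1.0, 2.0

# Step 1: repeat the "experiment" many times and estimate moment trajectories.
x = np.full(n_rep, x0)
mean_data, var_data = [x0], [0.0]
for _ in range(n_steps):
    x = x - theta_true * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_rep)
    mean_data.append(x.mean())
    var_data.append(x.var())
mean_data, var_data = np.array(mean_data), np.array(var_data)

# Step 2: propagate moments for a candidate theta (exact recursions for this
# linear SDE; a nonlinear drift would need e.g. the unscented transform):
#   m[k+1] = m[k] - theta*m[k]*dt,  v[k+1] = v[k] + (-2*theta*v[k] + sigma^2)*dt
def predicted_moments(theta):
    m, v = np.empty(n_steps + 1), np.empty(n_steps + 1)
    m[0], v[0] = x0, 0.0
    for k in range(n_steps):
        m[k + 1] = m[k] - theta * m[k] * dt
        v[k + 1] = v[k] + (-2.0 * theta * v[k] + sigma**2) * dt
    return m, v

# Step 3: moment-matching loss, minimized over the unknown parameter.
def loss(theta):
    m, v = predicted_moments(theta)
    return np.mean((m - mean_data) ** 2) + np.mean((v - var_data) ** 2)

theta = 0.2                              # poor initial guess for the drift rate
for _ in range(300):
    g = (loss(theta + 1e-4) - loss(theta - 1e-4)) / 2e-4
    theta -= 0.5 * g

print(round(theta, 2))  # close to the true drift rate theta_true = 1.0
```

In the proposed framework the scalar θ is replaced by the weights and biases of a neural network embedded in the SDE, and the finite-difference gradient is replaced by automatic differentiation with SGD [9-10].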

We demonstrate the efficacy of the proposed framework on an in-silico three-dimensional system of self-assembling DNA-functionalized colloids that have shown enormous promise for sensing and photonics applications [11]. This particular system is especially prone to kinetic arrest due to the complexity of its competing energetic driving forces, which include repulsive interactions among the core silica particles, repulsive interactions due to single-stranded DNA (ssDNA) chain overlap, and attractive interactions due to ssDNA hybridization. Specifically, we use our previously reported autoencoder-based dimensionality reduction framework to discover a set of order parameters that describe the self-assembly system state [12]. We next apply the proposed neural network-based moment-matching strategy to learn the free energy and diffusion landscapes within a low-dimensional Langevin equation that describes the self-assembly dynamics [13]. We finally use these landscapes to analyze the relative importance of various kinetic traps and demonstrate how changes in external conditions can be used to avoid these kinetic traps and reach target structures.
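As a generic, hypothetical illustration of this kind of low-dimensional model (not the system-specific landscapes learned in this work), an overdamped Langevin equation over a scalar order parameter x with a double-well free energy F(x) = (x² - 1)² and constant diffusion coefficient D can be simulated directly; the barrier at x = 0 plays the role of a kinetic trap separating a metastable state from the target structure:

```python
import numpy as np

rng = np.random.default_rng(1)
D, kT, dt, n_steps = 0.1, 1.0, 1e-3, 20000

# Hypothetical double-well free energy: two "structures" at x = -1 and x = +1
# separated by a barrier at x = 0 (a stand-in for a kinetic trap).
def dF(x):
    return 4.0 * x**3 - 4.0 * x        # derivative of F(x) = (x^2 - 1)^2

x = -1.0                               # start in the left (trapped) basin
traj = np.empty(n_steps)
for k in range(n_steps):
    # Euler-Maruyama step of the overdamped Langevin equation:
    #   dx = -(D/kT) * F'(x) dt + sqrt(2 D) dW
    x += -(D / kT) * dF(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    traj[k] = x

# Basin occupancy indicates how strongly the barrier traps the system;
# changing external conditions (here, kT) alters the escape frequency.
frac_left = np.mean(traj < 0.0)
```

In the actual framework, F(x) and D(x) are neural-network landscapes over the autoencoder-derived order parameters, learned via the moment-matching strategy rather than specified by hand.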

References

(1) Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686-707.

(2) Raissi, M., & Karniadakis, G. E. (2018). Hidden physics models: Machine learning of nonlinear partial differential equations. Journal of Computational Physics, 357, 125-141.

(3) Raissi, M. (2018). Deep hidden physics models: Deep learning of nonlinear partial differential equations. The Journal of Machine Learning Research, 19(1), 932-955.

(4) Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2018). Multistep neural networks for data-driven discovery of nonlinear dynamical systems. arXiv preprint arXiv:1801.01236.

(5) Yang, Y., & Perdikaris, P. (2019). Adversarial uncertainty quantification in physics-informed neural networks. Journal of Computational Physics, 394, 136-152.

(6) Zhang, D., Lu, L., Guo, L., & Karniadakis, G. E. (2019). Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems. Journal of Computational Physics, 397, 108850.

(7) Yang, L., Meng, X., & Karniadakis, G. E. (2021). B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. Journal of Computational Physics, 425, 109913.

(8) Van Kampen, N. G. (1976). Stochastic differential equations. Physics Reports, 24(3), 171-228.

(9) Baydin, A. G., Pearlmutter, B. A., Radul, A. A., & Siskind, J. M. (2018). Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18.

(10) Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... & Zheng, X. (2016). TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (pp. 265-283).

(11) Pretti, E., Zerze, H., Song, M., Ding, Y., Mahynski, N. A., Hatch, H. W., ... & Mittal, J. (2018). Assembly of three-dimensional binary superlattices from multi-flavored particles. Soft Matter, 14(30), 6303-6312.

(12) O’Leary, J., Mao, R., Pretti, E. J., Paulson, J. A., Mittal, J., & Mesbah, A. (2021). Deep learning for characterizing the self-assembly of three-dimensional colloidal systems. Soft Matter, 17(4), 989-999.

(13) Tang, X., Rupp, B., Yang, Y., Edwards, T. D., Grover, M. A., & Bevan, M. A. (2016). Optimal feedback controlled assembly of perfect crystals. ACS Nano, 10(7), 6791-6798.
