Deep learning and PDE: a quick and practical introduction
- Thursday, September 17, 2020 from 3:00pm to 4:00pm
- Webex Meeting number: 120 556 7048 Password: applied
Abstract: In this talk I will try to provide a very quick introduction, from
generic artificial neural networks (ANN) to their application to numerically solving PDE. This is less a research talk and more a tutorial, practical in nature.
In the first part, I will describe a simple model of a feed-forward artificial neural network, starting from the biologically motivated perceptron as its core unit. If technology permits, I will include a MATLAB demonstration on some simple regression and classification problems, and mention the universal approximation theorem.
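To make the core unit concrete, here is a minimal sketch of a single perceptron trained with the classic perceptron learning rule on the AND problem. This is an illustrative example of my own (the talk's demos are in MATLAB), not code from the talk.

```python
import numpy as np

# A single perceptron: a weighted sum of inputs followed by a step activation.
def step(z):
    return 1 if z >= 0 else 0

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND targets

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights
b = 0.0                  # bias
lr = 0.1                 # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = step(w @ xi + b)
        err = target - pred
        w += lr * err * xi   # perceptron update rule
        b += lr * err

preds = [step(w @ xi + b) for xi in X]
print(preds)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this training loop reaches a separating weight vector in finitely many updates.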
In the intermission, I will provide a superficial overview of more specialized network architectures currently in use, such as convolutional NN, recurrent NN, auto-encoder networks, and generative adversarial networks (GAN). I will also hint at the difference between supervised and unsupervised (or
self-supervised) training, and touch on a menu of activation functions and network regularizers.
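As a reference for that menu, here is a small sketch of a few common activation functions (my own selection, not necessarily the ones covered in the talk):

```python
import numpy as np

# A small "menu" of common activation functions, applied elementwise.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                  # squashes to (-1, 1)

def relu(z):
    return np.maximum(0.0, z)          # zeroes out negative inputs

def leaky_relu(z, alpha=0.01):
    return np.where(z >= 0, z, alpha * z)  # small slope for negatives

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))
print(relu(z))
```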
In the second part, we will focus on a relatively recent approach that uses "deep learning" to numerically tackle (certain) PDE. We will specifically look at the core ideas of the 2018 paper on the "Deep Galerkin Method". There, we construct an ANN to represent the unknown function u: for arbitrary input values x, the network evaluates to an approximation of u(x). This DGM network is trained using a loss function defined from the terms of the PDE, so that it converges to the solution of the PDE. Again, if technology permits, I will provide a live demo of a few simple examples.
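The key idea, a loss built from the PDE itself, can be sketched without any network at all. In this toy example of my own choosing, the model problem is u''(x) = -pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0 (exact solution sin(pi x)); the network is replaced by closed-form candidate functions, and u'' is approximated by a central finite difference. The actual DGM of the paper instead trains a network by stochastic sampling and automatic differentiation.

```python
import numpy as np

def dgm_loss(u, n=200, h=1e-4):
    """PDE-residual loss for u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0."""
    x = np.linspace(0.01, 0.99, n)                    # interior collocation points
    u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h**2    # central finite difference
    residual = u_xx + np.pi**2 * np.sin(np.pi * x)    # PDE residual at each point
    interior = np.mean(residual**2)
    boundary = u(0.0)**2 + u(1.0)**2                  # penalize boundary mismatch
    return interior + boundary

exact = lambda x: np.sin(np.pi * x)   # solves the PDE: loss near zero
wrong = lambda x: x * (1 - x)         # satisfies the boundary, not the PDE

print(dgm_loss(exact))
print(dgm_loss(wrong))
```

Minimizing such a loss over a family of network functions drives the PDE residual and boundary mismatch to zero simultaneously, which is what the training in the talk's second part accomplishes.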
Sirignano and Spiliopoulos, "DGM: A deep learning algorithm for solving partial differential equations", Journal of Computational Physics 375 (2018): 1339-1364.
- Department of Mathematical Sciences