Presentation #1
Speaker: Maosheng Guo, ASU SoMSS
Title: Stable Approximations from Equispaced Samples via Polynomial Frames
Abstract: We are interested in the problem of approximating analytic functions on general compact domains in higher dimensions. As a starting point, we will consider the case of approximating analytic functions on the interval [-1,1] from their values at a set of equispaced nodes. A result of Platte, Trefethen & Kuijlaars states that fast and stable approximation from equispaced samples is generally impossible: any method that converges exponentially fast must also be exponentially ill-conditioned. In this talk, we will explore a positive counterpart to this result, recently proposed by Adcock and Shadrin. Using polynomial frame approximations on an extended interval [-gamma,gamma] with gamma>1, one can formulate a well-conditioned method whose error decays exponentially down to a finite but user-controlled tolerance epsilon>0. We will also explore two implementations of the polynomial frame approximation: the SVD-regularized least-squares fit described by Adcock and Shadrin, and a column and row selection method that leverages QR factorizations to reduce the data needed in the approximation.
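As a rough illustration of the SVD-regularized least-squares idea, the sketch below fits equispaced samples on [-1,1] with a Legendre frame built on an extended interval and discards small singular values. The choices of gamma, the tolerance, the node and frame sizes, and the test function are all illustrative, not taken from the talk or the paper.

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative parameters (not from the paper): extension factor gamma,
# truncation tolerance eps, M equispaced samples, N frame functions.
gamma, eps = 2.0, 1e-8
M, N = 200, 80
x = np.linspace(-1.0, 1.0, M)      # equispaced nodes on [-1, 1]
f = 1.0 / (1.0 + 25.0 * x**2)      # Runge function: analytic on [-1, 1]

# Frame: Legendre polynomials orthogonal on the extended interval
# [-gamma, gamma], but evaluated only on [-1, 1] (hence a frame, not a basis).
A = np.column_stack([legendre.Legendre.basis(n, domain=[-gamma, gamma])(x)
                     for n in range(N)])

# Truncated-SVD least squares: drop singular values below eps (relative).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > eps * s[0]
c = Vt[keep].T @ ((U[:, keep].T @ f) / s[keep])

approx = A @ c
print("max error on the nodes:", np.abs(approx - f).max())
```

The truncation step is what keeps the fit well-conditioned: the frame matrix A has exponentially small singular values, and solving the raw least-squares problem would amplify noise by their reciprocals.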
Presentation #2
Speaker: Qijia Judy Yun, ASU SoMSS
Title: Simulation-based Classification of Hand-Written Characters
Abstract: This project takes a Gaussian process approach to soft classification of handwritten digits. We collected 200 training examples of handwritten digits with ordered timestamps and trained a Gaussian process model on them. We then generated large amounts of synthetic data from the Gaussian process and converted the data into grayscale bitmap pixel format. Lastly, we compared the results of running a convolutional neural network (CNN) on the original training data versus the synthetic data we generated. The results of this project can potentially be applied to signature fraud detection.
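A minimal sketch of the generative step, assuming one time-stamped pen coordinate is modeled by a GP: condition on observed stroke data, then sample fresh trajectories from the posterior. The kernel, its hyperparameters, and the sine-curve stand-in for real stroke data are all illustrative, not the project's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one time-stamped pen coordinate of a handwritten digit
# (the real project uses collected stroke data; this curve is illustrative).
t_train = np.linspace(0.0, 1.0, 20)
x_train = np.sin(2 * np.pi * t_train) + 0.05 * rng.standard_normal(20)

def rbf(a, b, length=0.2, var=1.0):
    """Squared-exponential kernel k(a, b) = var * exp(-(a-b)^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# GP posterior at a finer grid of timestamps, with small observation noise.
t_new = np.linspace(0.0, 1.0, 100)
K = rbf(t_train, t_train) + 1e-4 * np.eye(20)
Ks = rbf(t_new, t_train)
Kss = rbf(t_new, t_new)
mean = Ks @ np.linalg.solve(K, x_train)
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)

# Each posterior sample is a new plausible trajectory; in the project these
# would be rasterized to grayscale bitmaps as CNN training data.
samples = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(100), size=5)
print(samples.shape)   # (5, 100)
```

Sampling from the posterior (rather than just using the mean) is what produces the variability across synthetic examples that the CNN comparison relies on.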
Presentation #3
Speaker: Guangting Yu, ASU SoMSS
Title: Physics-Informed Neural Networks for Solving PDEs Numerically
Abstract: We use a Physics-Informed Neural Network (PINN) to numerically solve PDEs that are first-order linear in time but nonlinear and higher-order in the other variables (e.g., nonlinear heat and Schrödinger equations); after discretization in space, these are equivalent to initial value problems for systems of ODEs. The PINN uses the initial value as training data and an implicit Runge-Kutta (iRK) scheme as the training criterion; unlike classical iRK, it avoids solving nonlinear systems of equations while retaining iRK's unconditional stability. As a result, the PINN outperforms explicit RK methods in stability, so the time step in the discretization can be large. A few examples will be shown to illustrate the high order of convergence of the PINN. This work follows the PINN approach of Maziar Raissi: https://github.com/maziarraissi/PINNs.
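The stability contrast behind the abstract's claim can be seen already with the simplest members of each RK family, explicit versus implicit Euler, on a stiff linear test problem; the equation and step size below are illustrative, not from the talk.

```python
# Stiff test problem y' = lam * y, y(0) = 1, with exact solution exp(lam*t).
lam, y0, T = -50.0, 1.0, 1.0
h = 0.1                      # large step: h*|lam| = 5, far outside
n = int(T / h)               #   explicit Euler's stability region

y_exp, y_imp = y0, y0
for _ in range(n):
    y_exp = y_exp + h * lam * y_exp     # explicit Euler: blows up
    y_imp = y_imp / (1.0 - h * lam)     # implicit Euler: decays, as it
                                        #   is unconditionally stable
print(f"explicit Euler: {y_exp:.3e}")   # grows like (-4)^10
print(f"implicit Euler: {y_imp:.3e}")   # stays bounded, tends to 0
```

Classical implicit schemes pay for this stability by solving a (generally nonlinear) system at every step; the point of the iRK-trained PINN is to keep the stability while replacing that solve with network training.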
RTG Seminar
Monday, April 25
1:30 pm
Zoom meeting room link: https://asu.zoom.us/j/6871076660
Note: These presentations will be via Zoom.
ASU Students