Date of Graduation
5-2024
Document Type
Dissertation
Degree Name
Doctor of Philosophy in Mathematics (PhD)
Degree Level
Graduate
Department
Mathematical Sciences
Advisor/Mentor
Padgett, Joshua L.
Committee Member
Nakarmi, Ukash
Second Committee Member
Chen, Jiahui
Third Committee Member
Kaman, Tulin
Keywords
Artificial neural networks; Multi-Level Picard; Feynman-Kac; Monte Carlo method
Abstract
This dissertation seeks to explore a certain calculus for artificial neural networks. Specifically, we will be looking at versions of the heat equation and exploring strategies for approximating them. Our strategy at the beginning will be to take a technique called Multi-Level Picard (MLP) and present a simplified version of it, showing that it converges to a solution of the equation

$$\left(\frac{\partial u_d}{\partial t}\right)(t,x) = \left(\nabla_x^2 u_d\right)(t,x).$$

We will then take a small detour exploring the viscosity super-solution properties of solutions to such equations. It is here that we will first encounter Feynman-Kac and see that solutions to these equations can be expressed as the expected value of a certain stochastic integral.

The final part of the dissertation will be dedicated to expanding a certain neural network framework. We will build on this framework by introducing new operations, namely raising to a power, and use this to build out neural network polynomials. This opens the gateway to approximating transcendental functions such as $\exp(x)$, $\sin(x)$, and $\cos(x)$. This, coupled with a trapezoidal rule mechanism for integration, allows us to approximate expressions of the form $\exp\left(\int_a^b \square \, dt\right)$.

We will, in the last chapter, look at how the neural network technology developed in the previous two chapters works towards approximating the expression that Feynman-Kac asserts must be the solution to these modified heat equations. We will then end by giving approximate bounds for the error in the Monte Carlo method. All the while, we will maintain that the parameter estimates and depth estimates remain polynomial in $\frac{1}{\varepsilon}$.

As an added bonus, we will also look at the simplified MLP technique from the previous chapters of this dissertation and show that yes, it can indeed be approximated with artificial neural networks, and that yes, this can be done with neural networks whose parameter and depth counts grow only polynomially in $\frac{1}{\varepsilon}$.

Our appendix will contain code listings of these neural network operations, some of the architectures, and some small-scale simulation results.
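As a rough illustration of the Feynman-Kac representation mentioned in the abstract (this is a minimal sketch, not the dissertation's code), the plain heat equation $\frac{\partial u}{\partial t} = \nabla_x^2 u$ with initial condition $g$ has the probabilistic solution $u(t,x) = \mathbb{E}[g(x + \sqrt{2t}\,Z)]$ with $Z \sim \mathcal{N}(0, I_d)$, which a Monte Carlo average can estimate directly. The names heat_mc and g below are illustrative choices, not identifiers from the dissertation.

```python
import numpy as np

def heat_mc(g, t, x, n_samples=100_000, seed=0):
    """Monte Carlo Feynman-Kac estimate of u(t, x) for the heat equation
    du/dt = Laplacian_x u with u(0, .) = g, using
    u(t, x) = E[g(x + sqrt(2 t) Z)],  Z ~ N(0, I_d)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, x.size))
    vals = g(x + np.sqrt(2.0 * t) * z)
    # Return the estimate and its Monte Carlo standard error.
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_samples)

# Example with a closed form: g(y) = exp(-|y|^2) gives u(t, 0) = (1 + 4t)^(-d/2).
g = lambda y: np.exp(-np.sum(y ** 2, axis=-1))
est, se = heat_mc(g, t=0.25, x=np.zeros(2))
print(est, "+/-", se, "exact:", (1.0 + 4 * 0.25) ** -1.0)
```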
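Similarly, the exp-of-integral expressions $\exp\left(\int_a^b \square \, dt\right)$ that the abstract builds neural networks for have a classical analogue: composite trapezoidal quadrature followed by the exponential. The sketch below shows only that classical computation, as a point of reference for what the network constructions approximate; exp_trapezoid is a hypothetical name.

```python
import numpy as np

def exp_trapezoid(f, a, b, n=64):
    """Approximate exp(integral_a^b f(t) dt) with the composite
    trapezoidal rule on n subintervals, then exponentiate."""
    t = np.linspace(a, b, n + 1)
    v = f(t)
    h = (b - a) / n
    integral = h * (0.5 * v[0] + v[1:-1].sum() + 0.5 * v[-1])
    return np.exp(integral)

# Sanity check: exp(integral_0^1 2t dt) = exp(1), and the trapezoidal
# rule is exact for the linear integrand 2t.
print(exp_trapezoid(lambda t: 2.0 * t, 0.0, 1.0), np.e)
```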
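For context on the Monte Carlo error bounds the abstract mentions, the textbook root-mean-square rate (a standard fact, not the dissertation's specific bound) is:

```latex
% For i.i.d. samples X_1, ..., X_N with mean mu and variance sigma^2,
% the sample-mean estimator converges at rate N^{-1/2}:
\left( \mathbb{E}\!\left[ \Bigl| \tfrac{1}{N} \textstyle\sum_{i=1}^{N} X_i - \mu \Bigr|^2 \right] \right)^{1/2}
  = \frac{\sigma}{\sqrt{N}},
% so accuracy eps requires on the order of eps^{-2} samples.
```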
Citation
Rafi, S. A. (2024). Analysis and Construction of Artificial Neural Networks for the Heat Equations, and Their Associated Parameters, Depths, and Accuracies. Graduate Theses and Dissertations. Retrieved from https://scholarworks.uark.edu/etd/5277