Date of Graduation
12-2022
Document Type
Thesis
Degree Name
Master of Science in Industrial Engineering (MSIE)
Degree Level
Graduate
Department
Industrial Engineering
Advisor/Mentor
Pohl, Edward A.
Committee Member
Liao, Haitao
Second Committee Member
Sullivan, Kelly M.
Keywords
Machine learning; Network reliability; Reinforcement learning; Resource allocation
Abstract
Networks provide a variety of critical services to society (e.g., power grids, telecommunications, water, transportation) but are prone to disruption. With this motivation, we study a sequential decision problem in which an initial network is improved over time (e.g., by adding edges or increasing the reliability of existing edges) and rewards accrue as a function of the network's all-terminal reliability. Actions in each time period are limited by the availability of resources such as time, money, or labor. To solve this problem, we use a Deep Reinforcement Learning (DRL) approach implemented within OpenAI Gym using Stable Baselines. A Proximal Policy Optimization (PPO) agent identifies which edge to improve, or which new edge to add, based on the current state of the network and the available budget. The network's all-terminal reliability is computed with a reliability polynomial. To understand how the model behaves under a variety of conditions, we explore numerous network configurations with different initial link reliabilities, added link reliabilities, numbers of nodes, and budget structures. We conclude with a discussion of insights gained from our designed experiments.
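To make the setup described in the abstract concrete, the following is a minimal sketch, not the thesis's implementation: it uses Gymnasium and Stable-Baselines3 (stand-ins for the OpenAI Gym and Stable Baselines tooling named above), computes all-terminal reliability by enumerating edge up/down states (a term-by-term evaluation of the reliability polynomial, practical only for small networks), and treats the topology, reliability increment, and budget as illustrative placeholders rather than the thesis's experimental settings.

```python
import itertools
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


def all_terminal_reliability(n_nodes, edges, rel):
    """Exact all-terminal reliability by enumerating every edge up/down state
    (term-by-term evaluation of the reliability polynomial; small networks only)."""
    total = 0.0
    for states in itertools.product([0, 1], repeat=len(edges)):
        parent = list(range(n_nodes))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        prob = 1.0
        for (u, v), up, p in zip(edges, states, rel):
            prob *= p if up else (1.0 - p)
            if up:
                parent[find(u)] = find(v)
        # Add this state's probability if the surviving edges connect all nodes.
        if prob > 0 and len({find(i) for i in range(n_nodes)}) == 1:
            total += prob
    return total


class NetworkImprovementEnv(gym.Env):
    """Toy sequential-improvement environment: each step spends one budget unit
    to raise one edge's reliability; the reward is the resulting all-terminal
    reliability. Topology, boost size, and budget are hypothetical placeholders."""

    def __init__(self, n_nodes=4, edges=None, init_rel=0.7, boost=0.1, budget=5):
        super().__init__()
        self.n_nodes = n_nodes
        self.edges = edges or [(0, 1), (1, 2), (2, 3), (3, 0)]
        self.init_rel, self.boost, self.budget = init_rel, boost, budget
        # Action: index of the edge whose reliability is improved this period.
        self.action_space = spaces.Discrete(len(self.edges))
        # Observation: current edge reliabilities plus remaining budget.
        self.observation_space = spaces.Box(
            low=0.0, high=float(max(1.0, budget)),
            shape=(len(self.edges) + 1,), dtype=np.float32,
        )

    def _obs(self):
        return np.array(self.rel + [self.remaining], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.rel = [self.init_rel] * len(self.edges)
        self.remaining = self.budget
        return self._obs(), {}

    def step(self, action):
        self.rel[action] = min(1.0, self.rel[action] + self.boost)
        self.remaining -= 1
        reward = all_terminal_reliability(self.n_nodes, self.edges, self.rel)
        terminated = self.remaining <= 0  # episode ends when the budget is spent
        return self._obs(), reward, terminated, False, {}


if __name__ == "__main__":
    env = NetworkImprovementEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=5_000)
```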
Citation
Wells, H. (2022). Using Reinforcement Learning to Improve Network Reliability through Optimal Resource Allocation. Graduate Theses and Dissertations. Retrieved from https://scholarworks.uark.edu/etd/4714
Included in
Industrial Engineering Commons, Industrial Technology Commons, Systems Engineering Commons