Date of Graduation

8-2018

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Engineering (PhD)

Degree Level

Graduate

Department

Industrial Engineering

Advisor

Kelly Sullivan

Committee Member

Chase Rainwater

Second Committee Member

Shengfan Zhang

Third Committee Member

Kenneth Mitchell

Keywords

Approximate Dynamic Program, Inland Dredging, Maintenance, Markov Decision Process, Optimization, Transportation

Abstract

My dissertation research focuses on developing and applying methodologies for optimizing the maintenance of complex systems. The main goal of these methodologies is to minimize maintenance and failure costs while maximizing the economic output of the systems. In Chapter 1, we develop a stochastic programming model to select a budget-limited subset of maintenance dredging projects that maximizes the expected commodity tonnage transported through the inland waterway system. The inland navigation system provides cost-effective marine transportation throughout many portions of the country and is therefore vital to the U.S. economy. Uncertainty in dredging requirements creates a challenging problem in budgeting for and selecting inland maintenance dredging projects. Our model incorporates uncertainty in the amount of budget required for emergency maintenance dredging. The problem is modeled as a two-stage stochastic program, and a heuristic algorithm is developed as a solution approach. Both the model and the heuristic are implemented using real data for the U.S. inland waterway network.
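The Chapter 1 setup can be sketched in miniature. The toy below is not the dissertation's model or heuristic: the project data, scenario probabilities, budget, cancellation penalty, and greedy recourse rule are all illustrative assumptions, and exhaustive enumeration stands in for the paper's heuristic, which only a tiny instance permits. First-stage: commit to a portfolio of projects within the nominal budget; second-stage: an uncertain emergency-dredging draw reduces the funds actually available, and committed projects that go unfunded incur a penalty.

```python
from itertools import combinations

# Illustrative stand-in data (not from the dissertation): each project has a
# dredging cost and an expected commodity-tonnage gain if completed.
projects = {"A": (4, 10), "B": (3, 7), "C": (5, 12), "D": (2, 4)}
# Scenarios for the uncertain emergency-dredging draw on the budget.
scenarios = [(0, 0.5), (2, 0.3), (4, 0.2)]  # (emergency cost, probability)
BUDGET = 10
PENALTY = 3  # hypothetical tonnage-equivalent loss per committed project left unfunded

def expected_value(selection):
    """Second stage: emergency costs hit the budget first; committed projects
    are then funded greedily by tonnage-per-cost, and any committed project
    left unfunded is penalized."""
    by_ratio = sorted(selection, key=lambda p: projects[p][1] / projects[p][0],
                      reverse=True)
    value = 0.0
    for emergency, prob in scenarios:
        remaining, tonnage = BUDGET - emergency, 0
        for name in by_ratio:
            cost, gain = projects[name]
            if cost <= remaining:
                remaining -= cost
                tonnage += gain
            else:
                tonnage -= PENALTY
        value += prob * tonnage
    return value

# First stage: enumerate portfolios whose nominal cost fits the budget
# (exhaustive search replaces the heuristic on this toy instance).
feasible = (s for r in range(len(projects) + 1)
            for s in combinations(projects, r)
            if sum(projects[p][0] for p in s) <= BUDGET)
best = max(feasible, key=expected_value)
print(best, round(expected_value(best), 2))  # → ('B', 'C', 'D') 17.5
```

Note the trade-off the second stage creates: committing to more projects raises tonnage in good scenarios but risks penalties when the emergency draw is large, which is exactly what makes the selection nontrivial.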

In Chapter 2, we model a selective maintenance problem for a multi-component series-parallel system over a finite planning horizon. Uncertainty due to random component failures leads us to formulate the problem as a Markov decision process. However, as the number of components grows, producing a maintenance plan in reasonable time requires approximate strategies. We therefore develop an approximate dynamic programming (ADP) model with a heuristic called Multiple Weighted Objective (MWO), which solves the embedded redundancy allocation problem to select budget-limited maintenance actions for large systems over time. Chapter 3 extends the work of Chapter 2 by allowing components of the series-parallel system to have increasing, rather than constant, failure rates. A Markov decision process is developed to model the problem. Introducing component age increases the dimension of the state vector, creating significant challenges beyond those encountered in Chapter 2. To address this source of complexity, the ADP algorithm is combined with the MWO heuristic to solve larger instances in reasonable time.
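The finite-horizon maintenance MDP underlying Chapters 2 and 3 can be illustrated on the smallest possible instance: one component, two states, two actions, solved by exact backward induction. All numbers (failure probability, repair cost, per-period reward, horizon) are illustrative assumptions, and exact enumeration is only viable because the state space is tiny; the dissertation's ADP/MWO machinery is precisely what replaces this enumeration for large series-parallel systems.

```python
# Minimal finite-horizon maintenance MDP, solved by exact backward induction.
# All parameters are illustrative, not taken from the dissertation.
STATES = ["working", "failed"]
ACTIONS = ["nothing", "repair"]
HORIZON = 5
FAIL_PROB = 0.2     # chance a working component fails during a period
REPAIR_COST = 2.0   # cost of the repair action
REWARD = 5.0        # per-period economic output of a working component

def step(state, action):
    """Return ([(next_state, probability), ...], immediate reward)."""
    cost = REPAIR_COST if action == "repair" else 0.0
    if action == "repair" or state == "working":
        # Component is (made) working, earns output, then may fail.
        return [("working", 1 - FAIL_PROB), ("failed", FAIL_PROB)], REWARD - cost
    return [("failed", 1.0)], 0.0  # failed and left alone: no output

# Backward induction: V_t(s) = max_a [ r(s,a) + E[V_{t+1}(s')] ].
V = {s: 0.0 for s in STATES}  # terminal values
policy = []
for t in reversed(range(HORIZON)):
    newV, decisions = {}, {}
    for s in STATES:
        def q(a):  # action-value under the value function one period ahead
            trans, r = step(s, a)
            return r + sum(p * V[ns] for ns, p in trans)
        best = max(ACTIONS, key=q)
        newV[s], decisions[s] = q(best), best
    V, policy = newV, [decisions] + policy

print(policy[0], round(V["working"], 2))
# → {'working': 'nothing', 'failed': 'repair'} 23.4
```

As expected for these parameters, the optimal policy repairs a failed component and leaves a working one alone; the aging components of Chapter 3 would enlarge `STATES` to (status, age) pairs, which is the dimensionality growth the ADP approximation targets.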
