Date of Graduation

8-2016

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Engineering (PhD)

Degree Level

Graduate

Department

Industrial Engineering

Advisor/Mentor

Edward A. Pohl

Committee Member

Chase E. Rainwater

Second Committee Member

Shengfan Zhang

Third Committee Member

Raymond R. Hill

Keywords

Applied sciences, Heuristic approaches, Markov decision process, Reliability, Reliability growth

Abstract

This research proposes novel solution techniques in the realm of reliability and reliability growth. We first consider a redundancy allocation problem: designing a complex series-parallel system, composed of components with deterministic reliabilities, so that system reliability is maximized. We propose a new meta-heuristic, inspired by the behavior of bats hunting prey, to find component allocations and redundancy levels that provide optimal or near-optimal system reliability. Each component alternative has an associated cost and weight, and the system is constrained by overall cost and weight budgets. We allow for component mixing within a subsystem, with a pre-defined maximum level of component redundancy per subsystem, which adds to problem complexity and prevents an optimal solution from being derived analytically.
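To make the problem structure concrete, the sketch below evaluates one candidate solution of such a redundancy allocation problem: a series of subsystems, each holding a mix of redundant components, subject to cost and weight budgets. This is a minimal illustrative evaluator under assumed data, not the dissertation's formulation or its bat-inspired meta-heuristic; all names and numbers are hypothetical.

```python
from math import prod

def system_reliability(allocation, components):
    """Reliability of a series-parallel system with component mixing.
    allocation[s] lists the component alternatives placed in subsystem s;
    components[c] maps an alternative to (reliability, cost, weight)."""
    r_sys = 1.0
    for subsystem in allocation:
        # a subsystem fails only if every redundant component in it fails
        p_fail = prod(1.0 - components[c][0] for c in subsystem)
        r_sys *= 1.0 - p_fail  # subsystems are in series
    return r_sys

def feasible(allocation, components, max_cost, max_weight, max_redundancy):
    """Check the cost, weight, and per-subsystem redundancy constraints."""
    cost = sum(components[c][1] for sub in allocation for c in sub)
    weight = sum(components[c][2] for sub in allocation for c in sub)
    return (cost <= max_cost and weight <= max_weight
            and all(1 <= len(sub) <= max_redundancy for sub in allocation))

# two subsystems in series; mixing alternatives A and B in the first
components = {"A": (0.90, 3, 2), "B": (0.85, 2, 1), "C": (0.95, 5, 3)}
allocation = [["A", "B"], ["C", "C"]]
print(system_reliability(allocation, components))
print(feasible(allocation, components, max_cost=20, max_weight=10, max_redundancy=2))
```

A meta-heuristic such as the proposed bat-inspired algorithm searches over `allocation` candidates, scoring each with an evaluator of this kind and discarding infeasible ones.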

The second problem of interest involves modeling a system's reliability growth as it undergoes testing and minimizing deviation from planned growth. We propose a Grey Model, GM(1,1), for modeling reliability growth on complex systems when failure data are sparse. The GM(1,1) model's performance is benchmarked against the Army Materiel Systems Analysis Activity (AMSAA) model, the standard within the reliability growth modeling community. For both continuous and discrete (one-shot) testing, the GM(1,1) model proves superior to the AMSAA model when modeling reliability growth with small failure data sets.
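The standard GM(1,1) recipe (accumulate the series, fit the grey differential equation by least squares, then difference the exponential prediction back) can be sketched as follows. This is a generic textbook implementation under an assumed illustrative data series, not the dissertation's code or its test data.

```python
import numpy as np

def gm11(x0, horizon=0):
    """Fit a GM(1,1) grey model to a short positive series and return
    fitted values plus `horizon` extrapolated points."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                        # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # dx1/dt + a*x1 = b
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO

# hypothetical MTBF estimates from a small test program
series = [2.87, 3.28, 3.34, 3.62, 3.92]
print(gm11(series, horizon=2))
```

Because the least-squares fit needs only four or five observations, this style of model remains usable exactly where sparse failure data would leave a conventional growth model poorly estimated.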

Finally, to ensure the reliability growth planning curve is followed as closely as possible, we determine the best level of corrective action to apply to a discovered failure mode, where corrective action levels vary with the amount of resources allocated for failure mode improvement. We propose a Markov Decision Process (MDP) approach to handle the stochasticity of failure data and the corresponding system reliability estimate. By minimizing a weighted deviation from the planning curve, systems will ideally meet the reliability milestones specified by the planning curve while avoiding over-development and unnecessary resource expenditure for over-correction of failure modes.
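The shape of such an MDP can be illustrated with a toy value-iteration sketch: states index how far current reliability lags the planning curve, actions are corrective-action levels with resource costs, and the stage cost is a weighted deviation plus the resources spent. Every state, transition probability, and cost below is an assumed placeholder, not the dissertation's model.

```python
import numpy as np

n_states, gamma = 5, 0.95           # state s = deviation bin (0 = on track)
actions = {0: 0.0, 1: 1.0, 2: 2.5}  # corrective-action level -> resource cost

# assumed transition kernels: no fix lets the lag grow; stronger fixes
# pull the system back toward the planning curve
P = {a: np.zeros((n_states, n_states)) for a in actions}
for s in range(n_states):
    P[0][s][min(s + 1, n_states - 1)] += 1.0
    P[1][s][max(s - 1, 0)] += 0.6
    P[1][s][s] += 0.4
    P[2][s][max(s - 2, 0)] += 0.7
    P[2][s][max(s - 1, 0)] += 0.3

def cost(s, a, w=1.0):
    """Weighted deviation from the planning curve plus resources spent."""
    return w * s + actions[a]

V = np.zeros(n_states)
for _ in range(500):                # value iteration to a fixed point
    V = np.array([min(cost(s, a) + gamma * P[a][s] @ V for a in actions)
                  for s in range(n_states)])
policy = [min(actions, key=lambda a: cost(s, a) + gamma * P[a][s] @ V)
          for s in range(n_states)]
print(policy)
```

The resulting policy reserves expensive corrective action for states far below the curve, which is the over-correction-avoidance behavior the abstract describes.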
