Date of Graduation

7-2021

Document Type

Thesis

Degree Name

Master of Science in Statistics and Analytics (MS)

Degree Level

Graduate

Department

Statistics and Analytics

Advisor/Mentor

John Tipton

Committee Member

Qingyang Zhang

Second Committee Member

Avishek Chakraborty

Keywords

Gibbs sampler, Markov chain Monte Carlo, MCMC, MCMC algorithm, Metropolis-Hastings, Monte Carlo Markov Chain, Simulation studies

Abstract

Markov chain Monte Carlo (MCMC) is a simulation technique that produces a Markov chain designed to converge to a stationary distribution. In Bayesian statistics, MCMC is used to draw samples from a posterior distribution for inference. To ensure the accuracy of estimates based on MCMC samples, the convergence of an MCMC algorithm to its stationary distribution must be checked. Because computation time is a limited resource, optimizing the efficiency of an MCMC algorithm in terms of effective sample size (ESS) per unit of time is an important goal for statisticians. In this thesis, we use simulation studies to demonstrate how the Gibbs sampler and the Metropolis-Hastings algorithm work and how MCMC diagnostic tests are used to check for convergence. We then investigate and compare the efficiency of different MCMC algorithms fit to a linear model and a spatial model. Our results show that the Gibbs sampler and the Metropolis-Hastings algorithm give estimates similar to the maximum likelihood estimates, validating the accuracy of MCMC. The results also show that the efficiency of an MCMC algorithm depends on several factors. In particular, a model with more parameters can still be more efficient in terms of ESS per unit of time, and for large datasets, algorithms whose computation divides a large matrix into smaller matrices can be more efficient than algorithms that operate on the entire matrix.
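The abstract references the Metropolis-Hastings algorithm. As an illustrative sketch only (not the thesis's actual implementation, whose models and code are not shown here), a minimal random-walk Metropolis-Hastings sampler targeting a standard normal distribution might look like this; the function name and parameters are assumptions for illustration:

```python
import math
import random

def metropolis_hastings(log_target, x0, proposal_sd, n_iter, seed=0):
    """Random-walk Metropolis-Hastings sampler (illustrative sketch).

    log_target: log density of the target distribution, up to a constant.
    x0: starting value of the chain.
    proposal_sd: standard deviation of the Gaussian random-walk proposal.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    accepted = 0
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, proposal_sd)
        # The proposal is symmetric, so the Hastings ratio reduces
        # to the ratio of target densities at the proposal and current state.
        log_alpha = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal
            accepted += 1
        samples.append(x)
    return samples, accepted / n_iter

# Target: standard normal, log density up to an additive constant.
log_target = lambda x: -0.5 * x * x

samples, acc_rate = metropolis_hastings(log_target, x0=0.0,
                                        proposal_sd=1.0, n_iter=20000)
mean = sum(samples) / len(samples)  # should be near 0 after convergence
```

Because successive draws are autocorrelated, the effective sample size of such a chain is smaller than `n_iter`, which is why the abstract measures efficiency in ESS per unit of time rather than raw iterations.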
