Author ORCID Identifier
Date of Graduation
8-2025
Document Type
Dissertation
Degree Name
Doctor of Philosophy in Mathematics (PhD)
Degree Level
Graduate
Department
Mathematical Sciences
Advisor/Mentor
Robinson, Samantha
Committee Member
Giovanni Petris
Second Committee Member
Qingyang Zhang
Keywords
Asymptotic Bayes Optimality; Exploratory Factor Analysis; Gaussian Graphical Models; Gaussian Graphical Networks; Horseshoe Prior; Sparsity
Abstract
High-dimensional data analysis frequently involves extracting meaningful structure from noisy, sparse signals. In recent years, Bayesian shrinkage priors, particularly global-local shrinkage priors, have emerged as powerful tools for inducing sparsity while preserving signal fidelity. Among these, the Horseshoe prior has gained notable attention for its capacity to simultaneously shrink irrelevant parameters and retain substantial signals. This dissertation explores the Horseshoe prior as a unified framework for sparse Bayesian inference across theory, simulation, and real-world application.

The first component develops new theoretical results establishing the asymptotic Bayes optimality of the Horseshoe prior in Gaussian graphical models (GGMs). We consider sparse precision matrix estimation in which structure is inferred via shrinkage on the off-diagonal entries. Under suitable sparsity and regularity conditions, we show that the posterior distribution concentrates around the true precision matrix at minimax-optimal rates.

The second component investigates the performance of the Horseshoe prior in Bayesian Exploratory Factor Analysis (EFA), a method commonly used for dimension reduction and latent structure identification. Through a carefully designed simulation study, we evaluate the Horseshoe prior's ability to recover sparse factor loading matrices under varying levels of noise, dimensionality, and factor sparsity. We compare its performance to that of other continuous shrinkage priors and assess model recovery using posterior summaries and uncertainty quantification. Our findings indicate that the Horseshoe prior offers a favorable balance among sparsity, interpretability, and estimation accuracy, even in challenging settings.

The third and final component applies Horseshoe-based models to a real-world high-dimensional dataset. We consider an application in psychometrics where both interpretability and statistical regularization are crucial. We implement a Horseshoe prior within a suitable Bayesian model, either a sparse graphical model, a factor model, or a hybrid of the two, and perform inference via a Gibbs sampler. The analysis highlights the practical advantages of the Horseshoe prior, including effective feature selection, robust parameter estimation, and clear interpretability of the underlying latent structure.

Taken together, this dissertation offers a comprehensive investigation into the theoretical properties, empirical behavior, and applied utility of the Horseshoe prior in sparse Bayesian inference. The results not only reinforce its theoretical foundations but also support its practical relevance in diverse scientific domains. The work contributes to the broader goal of developing principled, scalable, and interpretable methods for high-dimensional Bayesian analysis.
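For reference, the Horseshoe prior discussed throughout is typically specified by the hierarchical form of Carvalho, Polson, and Scott (2010); a minimal sketch is given below. The exact hyperprior and scaling choices used in the dissertation's graphical and factor models may differ from this canonical version.

\begin{align*}
  \theta_j \mid \lambda_j, \tau &\sim \mathcal{N}\!\left(0,\ \lambda_j^{2}\tau^{2}\right), \\
  \lambda_j &\sim \mathrm{C}^{+}(0,1), \qquad j = 1, \dots, p, \\
  \tau &\sim \mathrm{C}^{+}(0,1),
\end{align*}

where $\mathrm{C}^{+}(0,1)$ denotes the standard half-Cauchy distribution, the $\lambda_j$ are local shrinkage parameters, and $\tau$ is the global shrinkage parameter. In the GGM setting described above, one common (though not necessarily the author's exact) formulation places this prior on the off-diagonal entries $\omega_{ij}$, $i < j$, of the precision matrix, with a separate prior on the diagonal, so that sparsity in the conditional dependence structure is induced through shrinkage of those off-diagonal entries.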
Citation
Roddy, J. T. (2025). Utilizing the Horseshoe Prior in Exploratory Factor Analysis and Gaussian Graphical Networks. Graduate Theses and Dissertations. Retrieved from https://scholarworks.uark.edu/etd/5870