Master of Science in Statistics and Analytics (MS)
The Lasso and the Horseshoe, gold standards in the frequentist and Bayesian paradigms, critically depend on learning the error variance. This causes a lack of scale invariance and of adaptability to heavy-tailed data. The √Lasso [Belloni et al., 2011] attempts to correct this by using the ℓ1 norm for both the likelihood and the penalty in the objective function. In contrast, there are essentially no methods for uncertainty quantification or automatic parameter tuning via a formal Bayesian treatment of an unknown error distribution. On the other hand, Bayesian shrinkage priors lacking a local shrinkage term fail to adapt to large signals embedded in noise. In this thesis, I propose a fully Bayesian method, √DL, that achieves scale invariance and robustness to heavy tails while maintaining computational efficiency. The classical √Lasso estimate is recovered as the posterior mode under an appropriate modification of the local shrinkage prior. The Bayesian √DL enables uncertainty quantification by yielding standard error estimates and credible sets for the underlying parameters. Furthermore, the hierarchical model leads to automatic tuning of the penalty parameter via a full Bayes or empirical Bayes approach, avoiding any ad hoc choice over a grid. We provide an efficient Gibbs sampling scheme based on the normal scale-mixture representation of Laplace densities. Performance on real and simulated data exhibits excellent small-sample properties, and we establish some theoretical guarantees.
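The normal scale-mixture representation of the Laplace density mentioned above is the standard Andrews–Mallows identity: if τ ~ Exponential(rate = λ²/2) and β | τ ~ N(0, τ), then marginally β ~ Laplace(rate = λ). This is what makes conditionally conjugate Gibbs updates possible. A minimal simulation sketch (not the thesis's sampler, just an illustration of the identity; variable names are my own) checks the mixture draws against direct Laplace draws:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5        # Laplace rate parameter (illustrative choice)
n = 200_000

# Hierarchical draw: tau ~ Exp(rate = lam^2 / 2), i.e. scale = 2 / lam^2,
# then beta | tau ~ N(0, tau)
tau = rng.exponential(scale=2.0 / lam**2, size=n)
beta_mix = rng.normal(0.0, np.sqrt(tau))

# Direct draws from Laplace with the same rate (numpy uses scale = 1 / rate)
beta_lap = rng.laplace(scale=1.0 / lam, size=n)

# Both samples should have variance close to the Laplace variance 2 / lam^2
print(beta_mix.var(), beta_lap.var(), 2.0 / lam**2)
```

In a Gibbs sampler built on this representation, the conditional for β given the local scales τ is Gaussian and the conditionals for the scales are standard (inverse-Gaussian for the Lasso-type prior), which is what yields the efficient scheme referred to in the abstract.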
Abba, M. (2018). Adapting to Sparsity and Heavy Tailed Data. Theses and Dissertations. Retrieved from https://scholarworks.uark.edu/etd/2907
Available for download on Sunday, August 02, 2020