Date of Graduation

8-2024

Document Type

Thesis

Degree Name

Master of Science in Statistics and Analytics (MS)

Degree Level

Graduate

Department

Statistics and Analytics

Advisor/Mentor

Kaman, Tulin

Committee Member

Zhang, Qingyang

Second Committee Member

Plummer, Sean

Keywords

Deep neural networks; Parameters; Pruning; Sparsity

Abstract

Over the past decade, the widespread adoption of deep neural networks has been a breakthrough driven by significant computational advancements. The number of parameters in these models has grown exponentially as they are applied to more complex tasks and pushed toward better performance. In most practical settings, however, the number of parameters is constrained by limited storage and computational resources. Network pruning offers a solution to this problem. In this thesis, I present evidence supporting the hypothesis that higher sparsity leads to better performance for convolution-based neural networks. I conduct performance studies to investigate the effect of sparsity level on neural networks such as the residual neural network (ResNet) and the convolutional U-Net. Using pre-iterative and iterative pruning, I train an encoder-based architecture (ResNet) and an encoder-decoder architecture (U-Net) on the CIFAR-10 and Electron Microscopy datasets with equal parameter sizes at different sparsity levels. The study concludes that, under the constraint of a fixed parameter count, the sparser architecture performs better. A linear regression model provides the statistical evidence for this conclusion.
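
For illustration only (not taken from the thesis): a minimal sketch of iterative magnitude pruning in PyTorch, assuming a small placeholder convolutional classifier and the torch.nn.utils.prune utilities. The model, data, and hyperparameters below are stand-ins, not the thesis's actual ResNet/U-Net setup or datasets.

# Illustrative sketch (not from the thesis): iterative magnitude pruning.
# Model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small stand-in convolutional classifier (placeholder for ResNet/U-Net).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

# Collect all conv/linear weight tensors for global pruning.
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))]

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

def sparsity(mods):
    # Fraction of zeroed entries across the pruned weight tensors.
    zeros = sum(int((m.weight == 0).sum()) for m, _ in mods)
    total = sum(m.weight.numel() for m, _ in mods)
    return zeros / total

for round_idx in range(5):
    # Apply global L1-magnitude pruning; masks accumulate across rounds,
    # so overall sparsity increases with each round.
    prune.global_unstructured(
        to_prune, pruning_method=prune.L1Unstructured, amount=0.2
    )

    # Brief fine-tuning on random stand-in data
    # (replace with CIFAR-10 / Electron Microscopy loaders in practice).
    for _ in range(10):
        x = torch.randn(8, 3, 32, 32)
        y = torch.randint(0, 10, (8,))
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()

    print(f"round {round_idx}: sparsity = {sparsity(to_prune):.2%}")

# Make the pruning permanent by folding the masks into the weights.
for m, name in to_prune:
    prune.remove(m, name)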
