Date of Graduation

5-2022

Document Type

Thesis

Degree Name

Bachelor of Science

Degree Level

Undergraduate

Department

Computer Science and Computer Engineering

Advisor/Mentor

Zhan, Justin

Committee Member/Reader

Gauch, John

Committee Member/Second Reader

Streeter, Lora

Abstract

The state of the art for pruning neural networks is ambiguous due to poor experimental practices in the field. Newly developed approaches rarely compare to each other, and when they do, the comparisons are lackluster or contain errors. In the interest of stabilizing the field of pruning, this paper begins an effort to reproduce prominent pruning algorithms across several architectures and datasets. As a first step toward this goal, this paper presents results for foresight weight pruning across 6 baseline pruning strategies, 5 modern pruning strategies, random pruning, and one legacy method (Optimal Brain Damage). All strategies are evaluated on 3 different architectures and 3 different datasets. These results reveal several findings for foresight pruning. First, magnitude-based methods are ill-advised and perform worse than random pruning in a large percentage of test cases. Second, Hessian-based methods commonly under-prune by approximately 50%, rendering them slightly worse than competing methods that prune right at the given compression ratio. Third, Single-shot Network Pruning (SNIP) is the most consistent method for foresight pruning, followed by GraSP, Layer-adaptive Magnitude Pruning (LAMP), and the gradient-magnitude baselines. Finally, the toughest issue for researchers developing new foresight pruning methods will be preventing layer-collapse without significant cost to task accuracy.
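
To make the compression-ratio terminology concrete, below is a minimal sketch (in PyTorch, not the code evaluated in this thesis) of global magnitude-based foresight pruning that keeps only the largest-magnitude weights at a fixed compression ratio. The model, layer sizes, the 10x ratio, and the helper name magnitude_prune_masks are illustrative assumptions.

import torch
import torch.nn as nn

def magnitude_prune_masks(model: nn.Module, compression_ratio: float) -> dict:
    # Return per-parameter binary masks keeping the largest-magnitude weights.
    # compression_ratio is the factor by which the weight count is reduced,
    # e.g. 10.0 keeps roughly 10% of the prunable weights.
    weights = {name: p.detach() for name, p in model.named_parameters()
               if p.dim() > 1}  # prune weight tensors, not biases
    all_scores = torch.cat([w.abs().flatten() for w in weights.values()])
    keep_fraction = 1.0 / compression_ratio
    num_keep = max(1, int(keep_fraction * all_scores.numel()))
    # Global threshold: the smallest magnitude among the weights we keep.
    threshold = torch.topk(all_scores, num_keep, largest=True).values.min()
    return {name: (w.abs() >= threshold).float() for name, w in weights.items()}

if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
    masks = magnitude_prune_masks(model, compression_ratio=10.0)
    kept = sum(int(m.sum()) for m in masks.values())
    total = sum(m.numel() for m in masks.values())
    print(f"kept {kept}/{total} weights ({kept / total:.1%})")

A method that "prunes right at the given compression ratio" removes exactly the targeted fraction of weights before training, as in this sketch; under-pruning means the method removes fewer weights than the ratio requests.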

Keywords

Supervised learning, computer vision, image classification, unstructured pruning, foresight pruning
