Date of Graduation
Bachelor of Science
Computer Science and Computer Engineering
Thompson, Craig W.
Committee Member/Second Reader
In recent years, deep neural networks have become the state of the art in many machine learning domains. Despite many advances, these networks remain extremely prone to overfitting. In neural networks, a main cause of overfitting is coadaptation of neurons, which allows noise in the data to be interpreted as meaningful features. Dropout is a technique that mitigates coadaptation of neurons and thus curbs overfitting. In this paper, we present data suggesting that dropout is not universally applicable. In particular, we show that dropout is useful when the ratio of network complexity to training data is very high; otherwise, traditional weight decay is more effective.
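The two regularizers compared in this thesis can be sketched in a few lines. The following is an illustrative NumPy sketch, not the thesis's implementation; the function names and hyperparameter values are hypothetical, chosen only to show the mechanics of inverted dropout and L2 weight decay.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during
    training, scaling survivors by 1/(1-p) so the expected
    activation matches test time (when no units are dropped)."""
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

def weight_decay_step(weights, grad, lr=0.01, lam=1e-4):
    """One SGD step with L2 weight decay: the penalty (lam/2)*||w||^2
    adds lam*w to the gradient, shrinking weights toward zero."""
    return weights - lr * (grad + lam * weights)

h = np.ones(4)
h_train = dropout(h, p=0.5, training=True)    # each unit is 0 or 2
h_test = dropout(h, p=0.5, training=False)    # unchanged at test time

w = np.array([1.0, -2.0])
g = np.zeros(2)                               # zero data gradient isolates the decay term
w_new = weight_decay_step(w, g, lr=0.1, lam=0.01)
```

Dropout injects noise only at training time, breaking coadaptation by preventing units from relying on specific partners, while weight decay continuously penalizes large weights regardless of the data seen.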
Slatton, Thomas Grant, "A comparison of dropout and weight decay for regularizing deep neural networks" (2014). Computer Science and Computer Engineering Undergraduate Honors Theses. 29.