Date of Graduation

5-2014

Document Type

Thesis

Degree Name

Bachelor of Science

Degree Level

Undergraduate

Department

Computer Science and Computer Engineering

Advisor/Mentor

Gashler, Michael

Committee Member/Reader

Thompson, Craig W.

Committee Member/Second Reader

Beavers, Gordon

Abstract

In recent years, deep neural networks have become the state of the art in many machine learning domains. Despite many advances, these networks remain highly prone to overfitting. In neural networks, a primary cause of overfitting is coadaptation of neurons, which allows noise in the data to be interpreted as meaningful features. Dropout is a technique that mitigates coadaptation of neurons and thus curbs overfitting. In this paper, we present data suggesting that dropout is not universally applicable. In particular, we show that dropout is useful when the ratio of network complexity to training data is very high; otherwise, traditional weight decay is more effective.
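As an illustration of the two regularizers compared in the abstract, the following is a minimal NumPy sketch, not the thesis's implementation; the function names, dropout rate, and decay coefficient are hypothetical. It shows inverted dropout (randomly zeroing units during training) and a gradient step with L2 weight decay.

```python
import numpy as np

def dropout(activations, p_drop, rng):
    """Inverted dropout: zero each unit with probability p_drop and
    rescale the survivors so the expected activation is unchanged."""
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

def weight_decay_step(weights, grad, lr, decay):
    """One gradient-descent step with L2 weight decay,
    which shrinks weights toward zero each update."""
    return weights - lr * (grad + decay * weights)

rng = np.random.default_rng(0)
h = np.array([0.5, 1.2, -0.3, 0.8])
print(dropout(h, p_drop=0.5, rng=rng))          # about half the units zeroed, rest doubled
print(weight_decay_step(h, grad=np.ones(4), lr=0.1, decay=0.01))
```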
