A comparison of dropout and weight decay for regularizing deep neural networks

UARKive Repository

dc.contributor.advisor Gashler, Michael
dc.creator Slatton, Thomas Grant, 1992-
dc.date.accessioned 2014-06-11T18:08:24Z
dc.date.available 2014-06-11T18:08:24Z
dc.date.issued 2014-05
dc.date.submitted 2014-05-21
dc.description.abstract In recent years, deep neural networks have become the state of the art in many machine learning domains. Despite many advances, these networks remain extremely prone to overfitting. In neural networks, a principal cause of overfitting is co-adaptation of neurons, which allows noise in the data to be interpreted as meaningful features. Dropout is a technique for mitigating co-adaptation of neurons, and thus for curbing overfitting. In this paper, we present data suggesting that dropout is not universally applicable. In particular, we show that dropout is useful when the ratio of network complexity to the amount of training data is very high; otherwise, traditional weight decay is more effective.
dc.format.mimetype application/pdf
dc.subject Computer Science
dc.title A comparison of dropout and weight decay for regularizing deep neural networks
dc.type Thesis
thesis.degree.name Bachelor of Science
thesis.degree.level Undergraduate
thesis.degree.grantor University of Arkansas, Fayetteville
thesis.degree.discipline Computer Science
thesis.degree.department Computer Science and Computer Engineering
dc.type.material text
dc.contributor.committeeMember Thompson, Craig W
dc.contributor.committeeMember Beavers, Gordon
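The abstract contrasts the two regularizers compared in the thesis. As a minimal illustrative sketch (not code from the thesis itself; the function names and hyperparameter values here are invented for illustration), inverted dropout randomly zeroes activations during training and rescales the survivors, while L2 weight decay shrinks weights toward zero at every gradient step:

```python
import numpy as np

def dropout(activations, p, rng):
    """Inverted dropout: zero each unit with probability p and rescale
    survivors by 1/(1-p) so the expected activation is unchanged."""
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

def sgd_step_weight_decay(w, grad, lr=0.1, decay=1e-3):
    """Plain SGD update with L2 weight decay: the decay term pulls
    every weight toward zero in addition to following the gradient."""
    return w - lr * (grad + decay * w)

rng = np.random.default_rng(0)

# Dropout at p=0.5: roughly half the units are zeroed, the rest doubled.
a = np.ones(1000)
dropped = dropout(a, p=0.5, rng=rng)

# Weight decay with a zero gradient: weights shrink slightly each step.
w = np.array([1.0, -2.0])
w_next = sgd_step_weight_decay(w, np.zeros(2))
```

Dropout acts only at training time and is stochastic per example, which is what breaks up co-adapting groups of neurons; weight decay is a deterministic penalty applied on every update regardless of the data.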
