A comparison of dropout and weight decay for regularizing deep neural networks


dc.contributor.advisor Gashler, Michael
dc.creator Slatton, Thomas Grant 1992-
dc.date.accessioned 2014-06-11T18:08:24Z
dc.date.available 2014-06-11T18:08:24Z
dc.date.created 2014-05
dc.date.issued 2014-05-21
dc.date.submitted May 2014
dc.identifier.uri http://hdl.handle.net/10826/990
dc.description.abstract In recent years, deep neural networks have become the state of the art in many machine learning domains. Despite many advances, these networks remain highly prone to overfitting. In neural networks, a principal cause of overfitting is co-adaptation of neurons, which allows noise in the data to be interpreted as meaningful features. Dropout is a technique for mitigating co-adaptation of neurons and thereby curbing overfitting. In this paper, we present data suggesting that dropout is not universally applicable. In particular, we show that dropout is useful when the ratio of network complexity to training data is very high; otherwise, traditional weight decay is more effective.
dc.format.mimetype application/pdf
dc.subject Computer Science
dc.title A comparison of dropout and weight decay for regularizing deep neural networks
dc.type Thesis
dc.date.updated 2014-06-11T18:08:24Z
thesis.degree.name Bachelor of Science
thesis.degree.level Undergraduate
thesis.degree.grantor University of Arkansas, Fayetteville
thesis.degree.discipline Computer Science
thesis.degree.department Computer Science and Computer Engineering
dc.type.material text
dc.contributor.committeeMember Thompson, Craig W
dc.contributor.committeeMember Beavers, Gordon
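
The abstract above contrasts dropout with weight decay as regularizers. The following is a minimal illustrative sketch (in PyTorch) of how each is typically applied; the architecture, dropout rate, and decay coefficient are assumed for the example and are not taken from the thesis.

    # Illustrative sketch only: a small fully-connected network trained twice,
    # once with dropout and once with weight decay (an L2 penalty applied by
    # the optimizer). All hyperparameters here are assumptions for illustration.
    import torch
    import torch.nn as nn

    def make_net(use_dropout: bool) -> nn.Sequential:
        layers = [nn.Linear(64, 128), nn.ReLU()]
        if use_dropout:
            layers.append(nn.Dropout(p=0.5))  # randomly zero activations each step
        layers.append(nn.Linear(128, 10))
        return nn.Sequential(*layers)

    # Dropout variant: regularization happens inside the forward pass.
    dropout_net = make_net(use_dropout=True)
    opt_dropout = torch.optim.SGD(dropout_net.parameters(), lr=0.01)

    # Weight-decay variant: regularization happens in the update rule,
    # shrinking the weights toward zero at every step.
    decay_net = make_net(use_dropout=False)
    opt_decay = torch.optim.SGD(decay_net.parameters(), lr=0.01, weight_decay=1e-4)

    loss_fn = nn.CrossEntropyLoss()
    x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))  # stand-in batch

    for net, opt in [(dropout_net, opt_dropout), (decay_net, opt_decay)]:
        net.train()                      # train() mode enables dropout
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()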