Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?
Lorenzo Rosasco
2016-01-01
Abstract
The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.
File | Size | Format | Access
---|---|---|---
Theory I Why and When Can Deep Networks Avoid the Curse of Dimensionality.pdf | 2.81 MB | Adobe PDF | Open access (editorial version)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.