Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review

Lorenzo Rosasco
2017-01-01

Abstract

The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.
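The quantitative comparison behind the abstract's "exponentially better" claim can be summarized in one pair of bounds (a sketch based on the reviewed paper's main theorems; here d is the input dimension, m the smoothness of the target and of its constituent functions, \epsilon the approximation accuracy, and N the number of network units):

    N_{\text{shallow}} = O\!\left(\epsilon^{-d/m}\right), \qquad N_{\text{deep}} = O\!\left((d-1)\,\epsilon^{-2/m}\right)

The shallow bound exhibits the curse of dimensionality, while the deep bound applies when the target is compositional, for example a binary tree of two-variable constituent functions whose graph the deep architecture mirrors.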
Files in this record:

11567-888539.pdf

Open access

Description: Main article
Type: Publisher's version
Size: 1.72 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/888539
Citations
  • PMC: N/A
  • Scopus: 321
  • Web of Science (ISI): 180