Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: a Review

Lorenzo Rosasco
2017-01-01

Abstract

The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.
Files in this record:
Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality a Review.pdf (open access; type: post-print; size: 2.51 MB; format: Adobe PDF)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/888631
Citations
  • PMC: not available
  • Scopus: not available
  • Web of Science: not available