The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems, and conjectures.
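As a hedged illustration of the kind of condition the abstract alludes to (a sketch based on the paper's compositionality results, not text quoted from this record): the exponential gap arises for compositional functions, for which the approximation bounds take roughly the following form.

```latex
% Sketch of the compositionality condition; the constituent-function
% names h_{ij} are illustrative, not the paper's notation.
% A compositional function of d = 8 variables with binary-tree structure:
\[
  f(x_1,\dots,x_8) = h_3\Bigl(h_{21}\bigl(h_{11}(x_1,x_2),\, h_{12}(x_3,x_4)\bigr),\;
                              h_{22}\bigl(h_{13}(x_5,x_6),\, h_{14}(x_7,x_8)\bigr)\Bigr),
\]
% where each constituent function depends on only two variables.
% For constituents of smoothness m and target accuracy \varepsilon,
% the complexity bounds are roughly
\[
  N_{\text{shallow}} = O\!\bigl(\varepsilon^{-d/m}\bigr), \qquad
  N_{\text{deep}} = O\!\bigl((d-1)\,\varepsilon^{-2/m}\bigr),
\]
% i.e., exponential in the dimension d for shallow networks but linear
% in d for deep networks whose graph matches the compositional structure.
```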
Title: Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
Publication date: 2017
Appears in collections: 01.01 - Journal article