Approximation of classifiers by deep perceptron networks
Sanguineti, Marcello
2023-01-01
Abstract
We employ properties of high-dimensional geometry to obtain some insights into the capabilities of deep perceptron networks to classify large data sets. We derive conditions on network depths, types of activation functions, and numbers of parameters that imply that approximation errors behave almost deterministically. We illustrate the general results with concrete cases of popular activation functions: Heaviside, ramp sigmoid, rectified linear, and rectified power. Our probabilistic bounds on approximation errors are derived using concentration-of-measure inequalities (the method of bounded differences) and concepts from statistical learning theory.

© 2023 Elsevier Ltd. All rights reserved.
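For reference, below is a minimal sketch of the four activation functions the abstract names, using their standard textbook definitions; these definitions and the function names are assumptions for illustration, not taken from the paper itself. (The "method of bounded differences" the abstract mentions refers to McDiarmid-type concentration inequalities.)

```python
import numpy as np

# Standard definitions of the activations named in the abstract
# (assumed here; the paper's exact parameterizations may differ).

def heaviside(t):
    """Heaviside step: 1 for t >= 0, else 0."""
    return np.where(t >= 0.0, 1.0, 0.0)

def ramp_sigmoid(t):
    """Ramp (clipped linear) sigmoid: linear on [0, 1], saturating outside."""
    return np.clip(t, 0.0, 1.0)

def relu(t):
    """Rectified linear unit: max(0, t)."""
    return np.maximum(t, 0.0)

def repu(t, p=2):
    """Rectified power unit: max(0, t)**p; p = 1 recovers the ReLU."""
    return np.maximum(t, 0.0) ** p
```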
File | Size | Format
---|---|---
NN23.pdf (open access; published version) | 408.78 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.