We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed. In particular, we consider the square loss and show that, for a universal step-size choice, the number of passes acts as a regularization parameter, and optimal finite-sample bounds can be achieved by early stopping. Moreover, we show that larger step-sizes are allowed when considering mini-batches. Our analysis is based on a unifying approach, encompassing both batch and stochastic gradient methods as special cases.
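To make the early-stopping mechanism concrete, here is a minimal Python sketch of multi-pass mini-batch SGD for least squares, where the number of passes is selected on a held-out set. The function name, the fixed step-size, and all constants are illustrative assumptions for a toy demo, not the paper's universal step-size choice or its theoretical stopping rule.

```python
import numpy as np

def minibatch_sgd_least_squares(X, y, X_val, y_val,
                                batch_size=16, step_size=0.25,
                                max_passes=50, seed=0):
    """Multi-pass mini-batch SGD for the square loss, with the number
    of passes chosen by early stopping on a held-out set.
    All names and constants are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    best_w, best_err, best_pass = w.copy(), np.inf, 0
    for t in range(1, max_passes + 1):          # one pass = one epoch
        order = rng.permutation(n)
        for idx in np.array_split(order, max(1, n // batch_size)):
            Xb, yb = X[idx], y[idx]
            # Gradient of the average square loss on the mini-batch.
            grad = Xb.T @ (Xb @ w - yb) / len(idx)
            w -= step_size * grad
        val_err = np.mean((X_val @ w - y_val) ** 2)
        if val_err < best_err:                  # early stopping: keep best pass
            best_err, best_w, best_pass = val_err, w.copy(), t
    return best_w, best_pass

# Synthetic demo: validation error typically falls and then rises with
# the number of passes, so the stopping time acts as regularization.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
w_true = rng.standard_normal(20)
y = X @ w_true + 0.5 * rng.standard_normal(200)
w_hat, n_passes = minibatch_sgd_least_squares(X[:150], y[:150], X[150:], y[150:])
print(f"early stopping selected {n_passes} passes")
```

In the paper's analysis, larger mini-batches permit larger step-sizes; in this sketch that would correspond to scaling `step_size` up with `batch_size`, which is kept fixed here for simplicity.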
Title: Optimal Learning for Multi-pass Stochastic Gradient Methods
Authors:
Publication date: 2016
Journal:
Handle: http://hdl.handle.net/11567/888639
Appears in collections: 01.01 - Journal article
Files in this item:

File | Description | Type
---|---|---
Optimal Learning.pdf | Post-print document | Open Access