On the convergence of EKF-based parameters optimization for neural networks

Alessandri, Angelo; Sanguineti, Marcello
2003-01-01

Abstract

An algorithm based on the Extended Kalman Filter (EKF) for the optimization of parameters in neural networks is presented, and a convergence analysis of the estimated parameter values toward the optimal ones is carried out. By using results on the stochastic stability of the EKF in filtering for discrete-time nonlinear systems, it is proved that the approximation error of the proposed learning method is locally exponentially bounded in mean square. Such training can also be performed in batch mode and outperforms well-known training methods, as shown by simulation results.
Year: 2003
ISBN: 0-7803-7924-1
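
To make the idea in the abstract concrete, the following is a minimal illustrative sketch (in Python with NumPy) of EKF-based weight estimation for a small feedforward network. It is not the authors' exact algorithm: the network architecture, the noise covariances (q, r, p0), and the use of a numerical Jacobian are assumptions introduced only for this example. The weights are treated as the state of a static system, and each training pair is processed as a noisy measurement of the network output.

import numpy as np

def mlp(w, x, n_hidden):
    """Single-input, single-output MLP with one tanh hidden layer.
    Assumed weight layout: [W1 (n_hidden), b1 (n_hidden), W2 (n_hidden), b2 (1)]."""
    W1 = w[:n_hidden]
    b1 = w[n_hidden:2 * n_hidden]
    W2 = w[2 * n_hidden:3 * n_hidden]
    b2 = w[3 * n_hidden]
    h = np.tanh(W1 * x + b1)
    return float(W2 @ h + b2)

def jacobian(w, x, n_hidden, eps=1e-6):
    """Numerical Jacobian of the network output w.r.t. the weights."""
    J = np.zeros_like(w)
    for i in range(w.size):
        wp = w.copy(); wp[i] += eps
        wm = w.copy(); wm[i] -= eps
        J[i] = (mlp(wp, x, n_hidden) - mlp(wm, x, n_hidden)) / (2 * eps)
    return J

def ekf_train(xs, ys, n_hidden=5, q=1e-6, r=1e-2, p0=1e-1, seed=0):
    """One sequential pass of EKF weight estimation over the training pairs
    (illustrative choices of q, r, p0; not taken from the paper)."""
    rng = np.random.default_rng(seed)
    n_w = 3 * n_hidden + 1
    w = 0.1 * rng.standard_normal(n_w)   # weight estimate (state)
    P = p0 * np.eye(n_w)                 # weight covariance
    Q = q * np.eye(n_w)                  # artificial process noise
    for x, y in zip(xs, ys):
        P = P + Q                        # prediction step (static weights)
        H = jacobian(w, x, n_hidden)     # linearized measurement map
        S = H @ P @ H + r                # innovation variance (scalar output)
        K = (P @ H) / S                  # Kalman gain
        w = w + K * (y - mlp(w, x, n_hidden))
        P = P - np.outer(K, H @ P)       # covariance update
    return w

# Usage example: fit a noisy sine on [-2, 2].
xs = np.linspace(-2, 2, 200)
ys = np.sin(np.pi * xs) + 0.05 * np.random.default_rng(1).standard_normal(xs.size)
w = ekf_train(xs, ys)
print("MSE:", np.mean([(mlp(w, x, 5) - y) ** 2 for x, y in zip(xs, ys)]))

The abstract also mentions batch-mode training; in such a variant the outputs over the whole training set would be stacked into a single measurement vector, whereas the sketch above shows only the sequential (per-sample) form.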

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/259396
Citations
  • PMC: n/a
  • Scopus: 11
  • Web of Science (ISI): 7