
Effects of weight discretization on the back propagation learning method: Algorithm design and hardware realization

Caviglia, Daniele; Valle, Maurizio; Bisio, Giacomo
1990

Abstract

The straightforward silicon implementation of learning neural networks requires a proper circuit technique for varying and maintaining the weights of the synaptic connections. Since a truly analog memory is not conceivable, discretized weight values must be introduced. In terms of silicon area and power dissipation, the most efficient implementation is the analog approach suggested by Mead. However, the effects of discretization on the functionality of the neural algorithm must also be assessed. In this framework, the paper (1) illustrates an architectural configuration for the Back Propagation (BP) algorithm, (2) presents circuit solutions for the basic blocks, and (3) critically analyses the effect of weight discretization on the BP algorithm. It is demonstrated by simulation that 24 levels plus sign can be sufficient (and even 16 in some cases), provided that a proper learning technique is adopted to escape the spurious minima introduced by discretization.
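
As an illustration of the kind of discretization the abstract discusses, the following minimal sketch trains a small network with back propagation while the forward pass only ever sees weights snapped to 24 magnitude levels plus sign. This is not the paper's circuit-level model or its learning technique: the 2-4-1 topology on XOR, the weight range of [-1, 1], and the use of continuous "shadow" weights to accumulate sub-level updates (one plausible way to avoid getting trapped in minima created by the discrete grid) are illustrative assumptions.

```python
# Minimal sketch of weight-discretized back propagation.
# Assumptions (not from the paper): 2-4-1 network on XOR, weight
# range [-1, 1], round-to-nearest quantization, shadow weights.

import numpy as np

LEVELS = 24          # magnitude levels per sign, as in the abstract
W_MAX = 1.0          # assumed weight range; the paper does not state one
STEP = W_MAX / LEVELS

def quantize(w):
    """Snap weights to the discrete grid: 24 levels plus sign (and zero)."""
    return np.clip(np.round(w / STEP), -LEVELS, LEVELS) * STEP

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Continuous "shadow" weights accumulate small gradient steps; the
# quantized copies are what the forward pass (the "hardware") sees.
W1 = rng.uniform(-W_MAX, W_MAX, (2, 4))
W2 = rng.uniform(-W_MAX, W_MAX, (4, 1))

lr = 0.5
for epoch in range(20000):
    # Forward pass through the discretized weights.
    q1, q2 = quantize(W1), quantize(W2)
    h = sigmoid(X @ q1)
    out = sigmoid(h @ q2)

    # Standard BP deltas, computed against the quantized weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ q2.T) * h * (1 - h)

    # Updates accumulate in the continuous shadow weights, so steps
    # smaller than one quantization level are not lost outright.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print("outputs:", out.ravel().round(3))
```

Running the sketch shows the network solving XOR even though every forward pass uses only the 49 available weight values; dropping LEVELS toward 16 makes convergence increasingly dependent on how sub-level updates are handled, which is the sensitivity the paper's simulations examine.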

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/841271
Citations
  • PMC: n/a
  • Scopus: 14
  • Web of Science: 0