The straightforward silicon implementation of learning neural networks requires a suitable circuit technique for varying and maintaining the weights of the synaptic connections. Since a truly analog memory is not conceivable, discretized weight values must be introduced. A highly efficient implementation in terms of silicon area and power dissipation is the analog approach suggested by Mead. However, the effects of discretization on the functionality of the neural algorithm must also be assessed. In this framework, the paper (1) illustrates an architectural configuration for the Back Propagation (BP) algorithm, (2) presents circuit solutions for the basic blocks, and (3) critically analyses the effect of weight discretization on the BP algorithm. It is demonstrated by simulation that 24 levels plus sign can be sufficient (and even 16 in some cases), provided a proper learning technique is adopted to escape the spurious minima introduced by discretization.
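Purely as an illustration of the idea (not the authors' circuit-level method or their anti-stalling learning technique), the following minimal sketch trains a small network with BP while keeping every weight on a discrete grid of 24 magnitude levels plus sign. The quantize-after-update policy, the weight range `w_max`, the XOR task, and all network sizes are assumptions made for the example.

```python
import numpy as np

def quantize(w, n_levels=24, w_max=4.0):
    """Snap weights onto the grid {-w_max, ..., -step, 0, +step, ..., +w_max},
    i.e. 24 magnitude levels plus sign (plus zero), as in the abstract."""
    step = w_max / n_levels
    return np.clip(np.round(w / step), -n_levels, n_levels) * step

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy XOR training set (an assumption for the example).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = quantize(rng.uniform(-1, 1, (2, 4)))
b1 = quantize(rng.uniform(-1, 1, (1, 4)))
W2 = quantize(rng.uniform(-1, 1, (4, 1)))
b2 = quantize(rng.uniform(-1, 1, (1, 1)))

lr = 0.5
for epoch in range(20000):
    # Forward pass through a 2-4-1 sigmoid network.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Standard BP deltas for a mean-squared-error loss.
    dy = (y - T) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    # Update, then snap every weight back onto the discrete grid.
    # Updates smaller than half a grid step round away to nothing,
    # which can stall learning: the discretization-induced spurious
    # minima the paper analyses.
    W2 = quantize(W2 - lr * h.T @ dy)
    b2 = quantize(b2 - lr * dy.sum(axis=0, keepdims=True))
    W1 = quantize(W1 - lr * X.T @ dh)
    b1 = quantize(b1 - lr * dh.sum(axis=0, keepdims=True))

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

Whether such a run converges depends on the level count, weight range, and learning rate, which is exactly why the paper stresses a learning technique for escaping the spurious minima added by discretization.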
|Title:|Effects of weight discretization on the back propagation learning method: Algorithm design and hardware realization|
|---|---|
|Publication date:|1990|
|Appears in collections:|04.01 - Conference proceedings contribution|