The paper addresses the role of randomization in the training process of a learning machine, and analyses the affinities between two well-known schemes, namely, Extreme Learning Machines (ELMs) and the learning framework using similarity functions. These paradigms share a common approach to inductive learning, which combines an explicit remapping of data with a linear separator; however, they seem to exploit different strategies in the design of the mapping layer. The paper shows that, in fact, the theory of learning with similarity functions can stimulate a novel interpretation of the ELM paradigm, thus leading to a common framework. New insights into the ELM model are obtained, and the ELM strategy for the setup of the neuron parameters can be significantly improved. Experimental results confirm that the novel method improves over conventional approaches, especially in the trade-off between classification accuracy and machine complexity (i.e., the dimensionality of the remapped space). This, in turn, supports the reliability of the unified framework envisioned in this paper.
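For context, the conventional ELM pipeline that the abstract refers to (an explicit random remapping of the data followed by a linear separator) can be sketched as follows. This is only a minimal illustration of the standard ELM baseline, not the paper's proposed similarity-based design; the function names, the Gaussian weight initialization, and the ridge regularizer are illustrative assumptions.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, reg=1e-6, rng=None):
    """Train a basic ELM: random hidden layer + linear least-squares readout."""
    rng = np.random.default_rng(rng)
    # Hidden-layer parameters are drawn at random and never trained
    # (this is the conventional ELM setup the paper improves upon).
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # explicit remapping into an n_hidden-dim space
    # Only the linear separator is fitted, via ridge-regularized least squares.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The dimensionality of the remapped space (`n_hidden`) is exactly the "machine complexity" axis of the accuracy/complexity trade-off discussed in the abstract.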
|Title:||Learning with similarity functions: A novel design for the extreme learning machine|
|Publication date:||2017|
|Appears in categories:||01.01 - Journal article|