Conventional Machine Learning (ML) algorithms do not take computational constraints into account when learning models: when targeting implementation on embedded devices, restrictions arise from, for example, the limited bit depth of the arithmetic unit, memory availability, or battery capacity. We propose a new learning framework, Algorithmic Risk Minimization (ARM), which relies on the notion of stability of a learning algorithm and includes computational constraints in the learning process. ARM makes it possible to train resource-sparing models and to efficiently implement the next generation of ML methods on smart embedded systems. Its advantages are shown on a case study of Human Activity Recognition on Smartphones, in which we show that effective and computationally lightweight models can be trained from data and implemented on the target devices.
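The core idea of weighing hardware resources alongside empirical risk can be sketched as follows. This is a hedged illustration, not the paper's actual ARM formulation: the quantization scheme, the penalty form, and all names are assumptions made for the example. It trades the classification error of a linear model against the bit depth of the arithmetic unit that must store and multiply its weights.

```python
# Illustrative sketch (NOT the paper's ARM algorithm): fold a hardware
# cost into model selection by minimizing empirical error plus a penalty
# proportional to the bit depth of the weights.

def quantize(w, bits):
    """Uniform quantization of weights in [-1, 1] to 2**bits levels."""
    step = 2.0 / (2 ** bits)
    return [round(x / step) * step for x in w]

def zero_one_error(w, data):
    """0-1 loss of the linear classifier sign(w . x)."""
    wrong = 0
    for x, y in data:
        s = sum(wi * xi for wi, xi in zip(w, x))
        wrong += (1 if s >= 0 else -1) != y
    return wrong / len(data)

# Toy dataset labelled by a known linear rule w* = (0.5, -0.25).
w_star = (0.5, -0.25)
grid = (-1.0, -0.5, 0.5, 1.0)
data = []
for x1 in grid:
    for x2 in grid:
        margin = w_star[0] * x1 + w_star[1] * x2
        if margin != 0:  # drop ambiguous points on the decision boundary
            data.append(((x1, x2), 1 if margin > 0 else -1))

# "Risk + resource" objective: empirical error plus lam * bit depth,
# the latter standing in as a proxy for the hardware cost.
lam = 0.01
best_bits = min(
    range(1, 9),
    key=lambda b: zero_one_error(quantize(w_star, b), data) + lam * b,
)
best_error = zero_one_error(quantize(w_star, best_bits), data)
print(best_bits, best_error)
```

On this toy problem the objective selects a low bit depth that still separates the data, illustrating how a resource term steers learning toward models that fit the embedded target.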
|Title:||Learning hardware friendly classifiers through algorithmic risk minimization|
|Publication date:||2016|
|Appears in type:||04.01 - Contribution in conference proceedings|