The recognition of complex actions is still a challenging task in Computer Vision, especially in daily-living scenarios, where problems such as occlusion and a limited field of view are very common. Recognition of Activities of Daily Living (ADL) could improve quality of life and support independent and healthy living for older and/or impaired people by using information and communication technologies at home, at the workplace, and in public spaces. This paper proposes to embed spatio-temporal information into ontology models to improve action recognition based on visual words. Actions detected by visual words are implemented as Primitive States in the scenario and then used as Components of Composite States, merging them with the spatio-temporal patterns that people display while performing ADLs. On a challenging dataset such as SmartHome, which exhibits high intra-class variance and low inter-class variance, recognition results for some actions improve in precision and recall thanks to the spatial information.
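The abstract describes composing detected actions (Primitive States) with spatio-temporal constraints into Composite States. The sketch below illustrates that idea under stated assumptions: the state names, the `zone` attribute, and the `composite_cooking` rule are hypothetical examples, not the paper's actual ontology.

```python
from dataclasses import dataclass

@dataclass
class PrimitiveState:
    """An action hypothesis produced by a visual-word classifier (hypothetical schema)."""
    label: str    # e.g. "stirring"
    start: float  # start time (seconds)
    end: float    # end time (seconds)
    zone: str     # spatial zone where the action was observed

def overlaps(a: PrimitiveState, b: PrimitiveState) -> bool:
    """True if two primitive states overlap in time."""
    return a.start < b.end and b.start < a.end

def composite_cooking(states: list[PrimitiveState]) -> bool:
    """Hypothetical composite state: it holds when a 'stirring' and a
    'pouring' primitive overlap in time and both occur in the kitchen
    zone (the spatial constraint the paper adds to temporal patterns)."""
    stir = [s for s in states if s.label == "stirring" and s.zone == "kitchen"]
    pour = [s for s in states if s.label == "pouring" and s.zone == "kitchen"]
    return any(overlaps(a, b) for a in stir for b in pour)

obs = [
    PrimitiveState("stirring", 10.0, 25.0, "kitchen"),
    PrimitiveState("pouring", 18.0, 22.0, "kitchen"),
]
print(composite_cooking(obs))  # True: both actions overlap inside the kitchen zone
```

The spatial filter (`zone == "kitchen"`) is what disambiguates visually similar actions, which is the mechanism the abstract credits for the precision and recall gains.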
|Title:||Recognition of Daily Activities by embedding hand-crafted features within a semantic analysis|
|Publication date:||2019|
|Appears in the categories:||04.01 - Contribution in conference proceedings|
Files in this product:
|Recognition-of-Daily-Activities-by-embedding-handcrafted-features-within-a-semantic-analysis2019.pdf||Published version|