
Learning probabilistic interaction models

BAYDOUN, MOHAMAD
2020-02-26

Abstract

We live in a multi-modal world; it therefore comes as no surprise that the human brain is tailored to the integration of multi-sensory input. Inspired by the human brain, multi-sensory data are used in Artificial Intelligence (AI) to teach different concepts to computers. Autonomous Agents (AAs) are AI systems that sense and act autonomously in complex dynamic environments. Such agents can build up Self-Awareness (SA) by describing their experiences through multi-sensorial information with appropriate models and correlating them incrementally with the currently perceived situation to continuously expand their knowledge. This thesis proposes methods to learn such awareness models for AAs. These models include SA and situational awareness models that allow an agent to perceive and understand itself (self variables) and its surrounding environment (external variables) at the same time. An agent is considered self-aware when it can dynamically observe and understand itself and its surroundings through different proprioceptive and exteroceptive sensors, which facilitates learning and maintaining a contextual representation by processing the observed multi-sensorial data. We propose a probabilistic framework of generative and descriptive dynamic models that can lead to a computationally efficient SA system. In general, generative models facilitate the prediction of future states, while descriptive models enable the selection of the representation that best fits the current observation. The proposed framework employs Probabilistic Graphical Models (PGMs), such as Dynamic Bayesian Networks (DBNs), which represent a set of variables and their conditional dependencies. This probabilistic representation allows the agent to model interactions between itself, as observed through proprioceptive sensors, and the environment, as observed through exteroceptive sensors. To develop an awareness system, an agent needs not only to recognize normal states and perform predictions accordingly, but also to detect abnormal states with respect to its previously learned knowledge. Therefore, there is a need to measure anomalies or irregularities in an observed situation. In such cases, the agent should be aware that an abnormality (i.e., a non-stationary condition never experienced before) is currently present. Because our representation makes it possible to model multi-sensorial data in a uniform interaction model, the proposed work not only improves predictions of future events but can also potentially enable a transfer learning process in which information related to the learned model is transferred to and interpreted by another body.
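The abstract stays at a high level, but the two model roles it names (generative prediction of future states, and an anomaly measure against previously learned knowledge) can be illustrated with one of the simplest DBNs: a linear-Gaussian state-space model with Kalman-filter inference. The sketch below is only a minimal illustration under that assumption, not the framework developed in the thesis; the class name, dynamics matrices, observation model, and threshold are all hypothetical.

```python
# Minimal sketch, assuming a linear-Gaussian DBN (Kalman filter):
# the generative step predicts the next state, and the squared
# Mahalanobis distance of the innovation serves as an anomaly score.
# All names and parameter values here are illustrative, not the thesis's.
import numpy as np

class LinearGaussianDBN:
    """One-slice linear-Gaussian DBN: x_t = A x_{t-1} + w,  z_t = H x_t + v."""

    def __init__(self, A, H, Q, R, x0, P0):
        self.A, self.H, self.Q, self.R = A, H, Q, R
        self.x, self.P = x0, P0

    def predict(self):
        # Generative step: propagate the belief about the next state.
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x

    def update(self, z):
        # Descriptive step: score how well the observation fits the
        # prediction, then correct the belief (Kalman update).
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        # Large values flag observations the learned model cannot
        # explain: a candidate abnormality.
        anomaly = float(innovation @ np.linalg.inv(S) @ innovation)
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return anomaly

# Hypothetical 1-D constant-velocity agent observed through a position sensor.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
dbn = LinearGaussianDBN(A, H, Q, R, x0=np.zeros(2), P0=np.eye(2))

for t in range(50):
    dbn.predict()
    # Normal slow drift, then an abrupt, never-experienced jump at t = 40.
    z = np.array([0.5 * t * dt]) if t < 40 else np.array([5.0])
    score = dbn.update(z)
    if score > 9.0:  # roughly a 3-sigma threshold, chosen arbitrarily here
        print(f"t={t}: anomaly score {score:.1f} exceeds threshold")
```

In this toy setting the filter explains the drifting observations well (low scores) until the jump, at which point the innovation becomes improbable under the learned model and the score spikes, mirroring the abstract's notion of detecting a non-stationary condition.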
Files in this record:

File: phdunige_3184808.pdf
Access: open access
Type: Doctoral thesis
Size: 10.14 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/997450