Ensuring safe behaviors, i.e., minimizing the probability that a control strategy yields undesirable effects, becomes crucial when robots interact with humans in semi-structured environments through adaptive control strategies. In previous papers, we proposed an approach that (i) computes control policies through reinforcement learning, (ii) verifies them against safety requirements with probabilistic model checking, and (iii) repairs them with greedy local methods until the requirements are met. This learn-verify-repair workflow was shown to be effective in some relatively simple and confined test cases. In this paper, we frame human-robot interaction in light of these previous contributions, and we test the effectiveness of the learn-verify-repair approach in a more realistic factory-to-home deployment scenario. The purpose of our test is to assess whether we can verify that interaction patterns are carried out with negligible human-to-robot collision probability and whether, in the presence of user tuning, strategies that determine offending behaviors can be effectively repaired.
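The learn-verify-repair loop described above can be sketched in miniature. The sketch below is purely illustrative and not the paper's implementation: a toy "policy" picks one of two actions per state, verification is replaced by an analytic collision-probability computation (standing in for probabilistic model checking), and repair greedily flips the worst offending action until the safety requirement holds. All names (`COLLISION_P`, `learn_policy`, `verify`, `repair`) are hypothetical.

```python
# Toy model: in each of five states the policy picks an action; the
# "fast" action carries a per-state collision probability, "slow" none.
# These numbers are invented for illustration only.
COLLISION_P = {0: 0.0, 1: 0.05, 2: 0.20, 3: 0.0, 4: 0.10}

def learn_policy(states):
    # Stand-in for reinforcement learning: reward favours the faster
    # action everywhere, so the learned policy is "fast" in all states.
    return {s: "fast" for s in states}

def verify(policy, threshold):
    # Stand-in for probabilistic model checking: probability that at
    # least one collision occurs along a run visiting each state once.
    p_safe = 1.0
    for s, a in policy.items():
        if a == "fast":
            p_safe *= 1.0 - COLLISION_P[s]
    p_collision = 1.0 - p_safe
    return p_collision <= threshold, p_collision

def repair(policy, threshold):
    # Greedy local repair: switch the single worst offending action to
    # "slow" and re-verify, until the safety requirement is satisfied.
    policy = dict(policy)
    while True:
        ok, p = verify(policy, threshold)
        if ok:
            return policy, p
        worst = max((s for s, a in policy.items() if a == "fast"),
                    key=lambda s: COLLISION_P[s])
        policy[worst] = "slow"

policy = learn_policy(range(5))
repaired, p_collision = repair(policy, threshold=0.06)
```

With the invented numbers above, the learned policy is rejected (collision probability about 0.32), and the repair step slows the robot down in the two riskiest states, bringing the probability under the requirement while leaving the other states untouched.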
|Title:||Testing a learn-verify-repair approach for safe human-robot interaction|
|Publication date:||2015|
|Appears in type:||04.01 - Contribution in conference proceedings|