
Is verification a requisite for safe adaptive robots?

Pathak, Shashank; Metta, Giorgio; Tacchella, Armando
2014-01-01

Abstract

This paper argues in favour of using formal methods to ensure the safety of stochastic policies learned by robots deployed in unstructured environments. It has been demonstrated that multi-objective learning alone is not sufficient to ensure globally safe behaviours in such robots, whereas learning-specific methods yield deterministic policies that are less flexible or less effective in practice. Under certain restrictions on the state space, modelling safety using probabilistic computational tree logic (PCTL) and ensuring such safety via automated repair can overcome these shortcomings. Promising results are obtained on a realistic setup, and the pros and cons of the method are discussed.
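To make the abstract's central idea concrete, the following is a minimal illustrative sketch (not taken from the paper) of what checking a PCTL-style safety bound looks like: a stochastic policy induces a Markov chain over states, and the property "the probability of eventually reaching an unsafe state is at most p", written P<=p [ F unsafe ], is verified by computing reachability probabilities. The transition matrix, state labels, and threshold below are all invented for the example.

```python
# Illustrative sketch: checking a PCTL-style bound P<=0.5 [ F unsafe ]
# on the Markov chain induced by a (hypothetical) stochastic policy.
# All numbers here are invented for the example.
import numpy as np

# 4-state chain: 0 = start, 1 = intermediate, 2 = goal (absorbing, safe),
# 3 = unsafe (absorbing). Rows are current state, columns are next state.
P = np.array([
    [0.7, 0.2, 0.05, 0.05],
    [0.1, 0.6, 0.25, 0.05],
    [0.0, 0.0, 1.0,  0.0 ],
    [0.0, 0.0, 0.0,  1.0 ],
])
unsafe = 3

def reach_prob(P, target, iters=10000, tol=1e-12):
    """Probability of eventually reaching `target` from each state,
    computed by fixed-point iteration of x = P @ x with x[target] = 1."""
    x = np.zeros(len(P))
    x[target] = 1.0
    for _ in range(iters):
        nxt = P @ x
        nxt[target] = 1.0
        if np.max(np.abs(nxt - x)) < tol:
            break
        x = nxt
    return x

probs = reach_prob(P, unsafe)
# PCTL safety check from the initial state: P<=0.5 [ F unsafe ]?
print(probs[0] <= 0.5)  # prints True (probs[0] is 0.3 for this chain)
```

In the paper's setting, when such a check fails, "automated repair" would adjust the policy's action probabilities until the bound holds; tools such as the PRISM model checker perform this kind of verification on much larger models.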
Files associated with this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/863728
Citations
  • PubMed Central: ND
  • Scopus: 2
  • Web of Science (ISI): 1