Observation Decoding with Sensor Models: Recognition Tasks via Classical Planning

Diego Aineto, Sergio Jimenez, Eva Onaindia

PosterID: 3
Observation decoding aims at discovering the underlying state trajectory of an acting agent from a sequence of observations. This task is at the core of various recognition activities that exploit planning as the resolution method, but there is a general lack of formal approaches that reason about the partial information received by the observer or leverage the distribution of the observations emitted by the sensors. In this paper, we formalize the observation decoding task using a probabilistic sensor model to build more accurate hypotheses about the behaviour of the acting agent. Our proposal extends the expressiveness of former recognition approaches by accepting observation sequences in which a single observation can represent the reading of more than one variable, thus enabling simultaneous observations over actions and partially observable states. We compile the probability distribution of the observations perceived when the agent performs an action or visits a state into a classical planning task with costs that is solved with an optimal planner. The experiments show that exploiting a sensor model increases the accuracy of predicting the agent's behaviour in four different contexts.
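
A common way to connect maximum-likelihood decoding with minimum-cost planning is to charge each (true value, observation) pair its negative log-likelihood under the sensor model, so that the cheapest plan corresponds to the most probable hypothesis. The sketch below illustrates only that generic mapping; the names `SensorModel` and `observation_cost` are hypothetical and the paper's actual compilation may differ.

```python
# Hypothetical sketch: turning sensor-model probabilities into additive costs,
# so that a minimum-cost plan corresponds to a maximum-likelihood hypothesis.
# SensorModel and observation_cost are illustrative names, not from the paper.
import math


class SensorModel:
    """Maps a true value (an action or a state variable) to a distribution over readings."""

    def __init__(self, emission_probs):
        # emission_probs[true_value][observed_value] = P(observation | true value)
        self.emission_probs = emission_probs

    def prob(self, true_value, observation):
        return self.emission_probs.get(true_value, {}).get(observation, 0.0)


def observation_cost(sensor, true_value, observation):
    """Negative log-likelihood: likely readings are cheap, unlikely ones expensive.
    Summing these costs along a candidate trajectory and minimizing the total
    with an optimal planner selects the most probable underlying trajectory."""
    p = sensor.prob(true_value, observation)
    return math.inf if p == 0.0 else -math.log(p)


# Example: a noisy location sensor that reports the correct cell 80% of the time.
sensor = SensorModel({
    "cell-1": {"cell-1": 0.8, "cell-2": 0.2},
    "cell-2": {"cell-2": 0.8, "cell-1": 0.2},
})
print(observation_cost(sensor, "cell-1", "cell-1"))  # ~0.22 (likely reading)
print(observation_cost(sensor, "cell-1", "cell-2"))  # ~1.61 (unlikely reading)
```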

Session E5: Robotics & Embedded Applications
Canb: 10/27/2020, 17:00 – 18:00 and 10/31/2020, 00:00 – 01:00
Paris: 10/27/2020, 07:00 – 08:00 and 10/30/2020, 14:00 – 15:00
NYC: 10/27/2020, 02:00 – 03:00 and 10/30/2020, 09:00 – 10:00
LA: 10/26/2020, 23:00 – 00:00 and 10/30/2020, 06:00 – 07:00