Integrating Acting, Planning, and Learning in Hierarchical Operational Models

Sunandita Patra, James Mason, Amit Kumar, Malik Ghallab, Paolo Traverso, Dana Nau

Poster ID: 71
We present new planning and learning algorithms for use with RAE, the Refinement Acting Engine (Ghallab et al., 2016). RAE uses hierarchical operational models to perform tasks in dynamically changing environments. Our planning algorithm, UPOM, does a UCT-like search in the space of operational models to tell RAE which operational model to use for each task. Our learning strategies acquire, from online acting experiences and/or simulated planning results, both a mapping from decision contexts to method instances and a heuristic function to guide UPOM. Our experimental results show that UPOM and our learning strategies significantly improve RAE's performance in four test domains on two metrics: efficiency and success ratio.
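A UCT-like search, as the abstract describes, repeatedly chooses among candidate method instances for a task, balancing exploration of untried alternatives against exploitation of the best-looking one. As a rough illustration only (not the paper's actual code), the standard UCB1 selection rule underlying UCT searches can be sketched as follows; the names `ucb_select`, `stats`, and the constant `c` are our own, not from the paper:

```python
import math

def ucb_select(stats, c=math.sqrt(2)):
    """Pick the choice (e.g. a method instance for a task) that maximizes
    the UCB1 value: mean reward + c * sqrt(ln(total visits) / visits).
    `stats` maps each choice to a (visit_count, total_reward) pair."""
    total = sum(n for n, _ in stats.values())
    best, best_val = None, float("-inf")
    for choice, (n, reward) in stats.items():
        if n == 0:
            return choice  # always try unvisited choices first
        val = reward / n + c * math.sqrt(math.log(total) / n)
        if val > best_val:
            best, best_val = choice, val
    return best
```

For example, with `{"m1": (10, 7.0), "m2": (2, 0.2)}`, `m2`'s exploration bonus outweighs `m1`'s higher mean reward, so `m2` is selected; as visit counts grow, selection shifts toward the method with the better average outcome.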

Session Am4: Planning & Learning
Canberra: 10/29/2020, 10:00 – 11:00 and 10/31/2020, 03:00 – 04:00
Paris: 10/29/2020, 00:00 – 01:00 and 10/30/2020, 17:00 – 18:00
New York: 10/28/2020, 19:00 – 20:00 and 10/30/2020, 12:00 – 13:00
Los Angeles: 10/28/2020, 16:00 – 17:00 and 10/30/2020, 09:00 – 10:00