Utilising Uncertainty for Efficient Learning of Likely-Admissible Heuristics

Ofir Marom, Benjamin Rosman

PosterID: 5
Likely-admissible heuristics have previously been introduced as heuristics that are admissible with some probability. While such heuristics only produce likely-optimal plans, they have the advantage that it is more feasible to learn them from training data with machine learning algorithms. Ideally this training data would consist of optimal plans, but such data is prohibitively expensive to produce. To overcome this, previous work introduced a bootstrap procedure that generates training data through random task generation, incrementally learning on more complex tasks. However, 1) random task generation is inefficient, and 2) the procedure trains on non-optimal plans, which causes errors to compound as learning progresses and results in high suboptimality. In this paper we introduce a framework that utilises uncertainty to overcome the shortcomings of previous approaches. In particular, we show that we can use uncertainty to efficiently explore task-space when generating training tasks, and then learn likely-admissible heuristics that produce low suboptimality. We illustrate the advantages of our approach on the 15-puzzle, 24-puzzle, 24-pancake and 15-blocksworld domains, using Bayesian neural networks to model uncertainty.
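The core idea of likely-admissibility can be illustrated with a minimal sketch: if a Bayesian model yields posterior samples of the true cost-to-go h*(s) for a state, taking a lower quantile of that predictive distribution gives an estimate that underestimates the true cost (i.e. is admissible) with approximately the chosen probability. The sampling setup below is purely hypothetical and stands in for the paper's Bayesian neural network; it is not the authors' exact procedure.

```python
import numpy as np

def likely_admissible_estimate(samples, alpha=0.95):
    """Given posterior samples of h*(s) from a Bayesian model,
    return a heuristic value that is admissible (below the true
    cost) with probability ~alpha, assuming calibration."""
    # The (1 - alpha) lower quantile of the predictive distribution
    # falls below the true cost roughly alpha of the time.
    return np.quantile(samples, 1.0 - alpha)

# Hypothetical predictive samples for one state, e.g. collected
# from repeated stochastic forward passes of a Bayesian network.
rng = np.random.default_rng(0)
samples = rng.normal(loc=20.0, scale=2.0, size=1000)

h = likely_admissible_estimate(samples, alpha=0.95)
```

Raising alpha makes the heuristic more conservative (more likely admissible) at the cost of being less informed, which is the trade-off between likely-optimality and search effort.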

Session E1: Learning for Deterministic Planning
Canberra: 10/27/2020, 18:00 – 19:00 and 10/31/2020, 01:00 – 02:00
Paris: 10/27/2020, 08:00 – 09:00 and 10/30/2020, 15:00 – 16:00
NYC: 10/27/2020, 03:00 – 04:00 and 10/30/2020, 10:00 – 11:00
LA: 10/27/2020, 00:00 – 01:00 and 10/30/2020, 07:00 – 08:00