Solving K-MDPs

Jonathan Ferrer-Mestres, Thomas G. Dietterich, Olivier Buffet, Iadine Chadès

Poster ID: 37
Markov Decision Processes (MDPs) are employed to model sequential decision-making problems under uncertainty. Traditionally, algorithms for solving MDPs have focused on scaling to large state or action spaces. With the increasing application of MDPs to human-operated domains such as biodiversity conservation and health, developing easy-to-interpret solutions is of paramount importance to increasing the uptake of MDP policies. Here, we define the problem of solving $K$-MDPs: given an original MDP and a constraint $K$ on the number of states, generate a reduced-state-space MDP that minimizes the difference between the optimal value function of the original MDP and that of the reduced $K$-MDP. Building on existing non-transitive and transitive approximate state abstraction functions, we propose a family of three algorithms based on binary search, with sub-optimality bounded polynomially in a precision parameter: $K$-ILP, $Q^*_d$ and $a^*_d$. We compare these algorithms to a greedy algorithm (Greedy) and a clustering approach (k-means). On randomly generated MDPs and two computational sustainability MDPs, $a^*_d$ outperformed all other algorithms whenever it could find a feasible solution. While numerous state abstraction problems have been proposed in the literature, we believe this is the first time that the general problem of solving $K$-MDPs has been posed. We hope that our work will generate future research aimed at increasing the interpretability of MDP policies in human-operated domains.
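To make the binary-search idea concrete, the sketch below is an illustrative reading of it, not the paper's exact algorithms: assuming a $Q^*$-based aggregation rule in the spirit of the $Q^*_d$ abstraction (states whose $Q^*$ vectors agree within a precision $d$ share an abstract state), one can binary-search for the smallest $d$ whose induced abstraction uses at most $K$ abstract states. The greedy clustering rule, function names, and the toy MDP are all hypothetical choices made for this example.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Compute Q* for a tabular MDP.
    P: (A, S, S) transition tensor, R: (S, A) rewards."""
    A, S, _ = P.shape
    Q = np.zeros((S, A))
    while True:
        V = Q.max(axis=1)
        Q_new = R + gamma * np.stack([P[a] @ V for a in range(A)], axis=1)
        if np.max(np.abs(Q_new - Q)) < tol:
            return Q_new
        Q = Q_new

def abstraction_size(Q, d):
    """Greedy Q*-based aggregation (illustrative, order-dependent):
    a state joins an existing cluster if its Q* row is within d (sup
    norm) of the cluster representative. Returns the cluster count."""
    reps = []
    for q in Q:
        if not any(np.max(np.abs(q - r)) <= d for r in reps):
            reps.append(q)
    return len(reps)

def smallest_d_for_K(Q, K, iters=50):
    """Binary search for the smallest precision d whose abstraction has
    at most K abstract states, treating the cluster count as
    (approximately) monotone non-increasing in d."""
    lo, hi = 0.0, np.max(Q) - np.min(Q)  # d = hi collapses all states
    for _ in range(iters):
        mid = (lo + hi) / 2
        if abstraction_size(Q, mid) <= K:
            hi = mid
        else:
            lo = mid
    return hi

# Toy usage on a random 20-state, 3-action MDP (illustrative only).
rng = np.random.default_rng(0)
S, A, K = 20, 3, 5
P = rng.random((A, S, S)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((S, A))
Q = value_iteration(P, R)
d = smallest_d_for_K(Q, K)
print(f"smallest d: {d:.4f}, abstract states: {abstraction_size(Q, d)}")
```

Under the $Q^*_d$-style abstraction, the gap between the original and abstract optimal value functions is bounded as a function of $d$, which is why searching for the smallest feasible $d$ also controls the sub-optimality of the resulting $K$-MDP policy.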

Session Aus3+Aus5: Probabilistic Planning & Learning
Canberra: 10/28/2020, 11:00 – 12:15 and 10/29/2020, 20:00 – 21:15
Paris: 10/28/2020, 01:00 – 02:15 and 10/29/2020, 10:00 – 11:15
NYC: 10/27/2020, 20:00 – 21:15 and 10/29/2020, 05:00 – 06:15
LA: 10/27/2020, 17:00 – 18:15 and 10/29/2020, 02:00 – 03:15