Krishnendu Chatterjee, Martin Chmelík, Deep Karkhanis, Petr Novotný, Amélie Royer
PosterID: 62
Multiple-environment Markov decision processes (MEMDPs) are MDPs equipped with not one but multiple probabilistic transition functions, which represent the various possible unknown environments. While previous research on MEMDPs focused on theoretical properties of the long-run average payoff, we study them under the discounted-sum payoff and focus on their practical advantages and applications. MEMDPs can be viewed as a special case of partially observable and mixed-observability MDPs: the state of the system is perfectly observable, but the environment is not. We show that the specific structure of MEMDPs allows for more efficient algorithmic analysis, in particular faster belief updates. We experimentally demonstrate the applicability of MEMDPs in several domains, including contextual recommendation systems and parameterized Markov decision processes.
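As a reading aid, here is a minimal sketch of the belief update the abstract alludes to. Because the state of an MEMDP is fully observable and only the environment is hidden, the belief is a distribution over the k environments rather than over states, so a single Bayes step over the environment-specific transition probabilities suffices. The function name, array layout, and toy numbers below are illustrative assumptions, not taken from the paper.

import numpy as np

def memdp_belief_update(belief, transitions, s, a, s_next):
    # belief:      shape (k,)         -- current distribution over the k environments
    # transitions: shape (k, S, A, S) -- transitions[i, s, a, s2] = P_i(s2 | s, a)
    # With the state observed, Bayes' rule reduces to
    #   b'(i) proportional to b(i) * P_i(s_next | s, a).
    likelihood = transitions[:, s, a, s_next]
    unnormalized = belief * likelihood
    total = unnormalized.sum()
    if total == 0.0:
        raise ValueError("transition impossible under every environment in the belief support")
    return unnormalized / total

# Toy usage (hypothetical numbers): 2 environments, 2 states, 1 action.
P = np.zeros((2, 2, 1, 2))
P[0, 0, 0] = [0.9, 0.1]   # environment 0: the action mostly keeps the agent in state 0
P[1, 0, 0] = [0.2, 0.8]   # environment 1: the action mostly moves the agent to state 1
P[:, 1, 0, 1] = 1.0       # state 1 is absorbing in both environments

b = np.array([0.5, 0.5])
print(memdp_belief_update(b, P, s=0, a=0, s_next=1))   # approx. [0.111, 0.889]

Note that this update touches only k entries, whereas treating the MEMDP as a generic POMDP would maintain a belief over all state–environment pairs; this is presumably the kind of structural saving behind the faster belief updates mentioned above.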
Location | Session 1 | Session 2
Canberra | 10/29/2020, 01:00 – 02:00 | 10/30/2020, 21:00 – 22:00
Paris | 10/28/2020, 15:00 – 16:00 | 10/30/2020, 11:00 – 12:00
NYC | 10/28/2020, 10:00 – 11:00 | 10/30/2020, 06:00 – 07:00
LA | 10/28/2020, 07:00 – 08:00 | 10/30/2020, 03:00 – 04:00
Guidelines for Action Space Definition in Reinforcement Learning-Based Traffic Signal Control Systems
Maxime Treca, Julian Garbiso, Dominique Barth
Multiple-Environment Markov Decision Processes: Efficient Analysis and Applications
Krishnendu Chatterjee, Martin Chmelík, Deep Karkhanis, Petr Novotný, Amélie Royer
Probabilistic planning with formal guarantees for mobile service robots
Bruno Lacerda, Fatma Faruq, David Parker, Nick Hawes
A correctness result for synthesizing plans with loops in stochastic domains
Laszlo Treszkai, Vaishak Belle