
Temporal Difference Learning for Model Predictive Control

EasyChair Preprint no. 9576

20 pages
Date: January 15, 2023

Abstract

Data-driven model predictive control has two key advantages over model-free methods: a potential for improved sample efficiency through model learning, and better performance as the computational budget for planning increases. However, it is both costly to plan over long horizons and challenging to obtain an accurate model of the environment. In this work, we combine the strengths of model-free and model-based methods. We use a learned task-oriented latent dynamics model for local trajectory optimization over a short horizon, and a learned terminal value function to estimate the long-term return, both of which are learned jointly by temporal difference learning. Our method, TD-MPC, achieves superior sample efficiency and asymptotic performance over prior work on both state- and image-based continuous control tasks from DMControl and Meta-World. Code and videos are available at https://nicklashansen.github.io/td-mpc.
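To make the planning scheme described above concrete, the sketch below (not the authors' implementation) shows how a learned latent dynamics model, a reward model, and a terminal value function could be combined to score a short candidate action sequence: rewards are summed over the planning horizon and the value function bootstraps the remaining return. The names dynamics, reward, and q_value are hypothetical stand-ins for the learned networks; the toy usage at the end uses random placeholders only to show the call pattern.

import numpy as np

def estimate_return(z0, actions, dynamics, reward, q_value, gamma=0.99):
    """Roll the latent model forward over a short horizon and bootstrap the
    remaining long-term return with a discounted terminal value estimate."""
    z, total, discount = z0, 0.0, 1.0
    for a in actions:                      # planning horizon H = len(actions)
        total += discount * reward(z, a)   # predicted one-step reward in latent space
        z = dynamics(z, a)                 # predicted next latent state
        discount *= gamma
    return total + discount * q_value(z)   # terminal value covers the return beyond H

# Toy usage with random stand-ins for the learned components (illustrative only).
rng = np.random.default_rng(0)
score = estimate_return(
    z0=rng.normal(size=8),
    actions=rng.normal(size=(5, 2)),       # H = 5 candidate actions
    dynamics=lambda z, a: z + 0.1 * rng.normal(size=z.shape),
    reward=lambda z, a: float(np.tanh(z.mean() + a.mean())),
    q_value=lambda z: float(z.sum()),
)
print(score)

In a planner such as the one the abstract describes, many candidate action sequences would be scored this way and the optimization would keep those with the highest estimated return.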

Keyphrases: model predictive control, model-based reinforcement learning, reinforcement learning

BibTeX entry
BibTeX does not have a dedicated entry type for preprints; the following is a workaround that produces the correct reference:
@Booklet{EasyChair:9576,
  author       = {Nicklas Hansen and Xiaolong Wang and Hao Su},
  title        = {Temporal Difference Learning for Model Predictive Control},
  howpublished = {EasyChair Preprint no. 9576},
  year         = {EasyChair, 2023}}