Publications

A work-in-progress report that connects the idea of value-equivalent model learning with insights from the representation learning community to build robust, value-aware models on meaningful sets of value functions.

Value Gradient weighted Model-Based Reinforcement Learning (VaGraM) is a novel loss function for learning accurate models in model-based RL. It uses the gradient of the value function to weight a squared model loss, so that model errors the RL agent is most sensitive to are penalized most heavily.
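
To illustrate the idea, here is a minimal PyTorch sketch of a value-gradient-weighted squared loss. The function name and the simple inner-product weighting are illustrative assumptions for this summary, not the exact objective or code from the paper.

```python
import torch

def vagram_style_loss(value_fn, pred_next_state, true_next_state):
    """Sketch of a value-gradient-weighted squared error (illustrative,
    not the paper's exact objective): model errors are projected onto the
    value gradient, so dimensions the value function is sensitive to
    dominate the loss."""
    # Evaluate the value gradient at the observed next state.
    s = true_next_state.detach().clone().requires_grad_(True)
    v = value_fn(s).sum()
    value_grad = torch.autograd.grad(v, s)[0].detach()
    # Weight the per-dimension model error by the value gradient and square it.
    weighted_err = (value_grad * (pred_next_state - true_next_state)).sum(dim=-1)
    return (weighted_err ** 2).mean()

# Toy usage with a small value network and a stand-in model prediction.
value_fn = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
s_true = torch.randn(8, 4)                       # observed next states
s_pred = torch.randn(8, 4, requires_grad=True)   # stand-in for a model's prediction
loss = vagram_style_loss(value_fn, s_pred, s_true)
loss.backward()                                  # gradients flow to the model prediction
```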

When humans observe a physical system, they can easily locate objects, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions. For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem. In this paper, we present STOVE, a novel state-space model for videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in a compositional manner and improves on previous work by reusing the dynamics model for inference, accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over hundreds of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample-efficient model-based control in a task with heavily interacting objects.
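
The sketch below illustrates the compositional structure described above: an image model maps frames to latent object states, a dynamics model predicts the next latent state, and the same dynamics model is reused during inference. The class and method names are hypothetical, and the simple averaging in the inference step is a stand-in for the paper's variational posterior, not the released STOVE code.

```python
import torch
import torch.nn as nn

class StateSpaceVideoModel(nn.Module):
    """Illustrative sketch of a compositional state-space video model."""

    def __init__(self, image_model: nn.Module, dynamics_model: nn.Module, decoder: nn.Module):
        super().__init__()
        self.image_model = image_model  # frame -> latent object states (positions, velocities)
        self.dynamics = dynamics_model  # latent state -> next latent state
        self.decoder = decoder          # latent state -> reconstructed frame

    def rollout(self, first_frame: torch.Tensor, horizon: int) -> torch.Tensor:
        """Predict future frames purely from the dynamics model (simulation)."""
        z = self.image_model(first_frame)
        frames = []
        for _ in range(horizon):
            z = self.dynamics(z)            # physics-style latent transition
            frames.append(self.decoder(z))
        return torch.stack(frames, dim=1)

    def filter_step(self, frame: torch.Tensor, z_prev: torch.Tensor) -> torch.Tensor:
        """Inference: reuse the dynamics prediction as the prior and combine
        it with the image model's observation of the current frame."""
        z_prior = self.dynamics(z_prev)
        z_obs = self.image_model(frame)
        return 0.5 * (z_prior + z_obs)      # simple fusion as a stand-in for the posterior
```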

DeepNotebooks is a framework for automated data analysis built on Jupyter Notebooks and Sum-Product Networks.