A guide to MBIRL is available at https://analyticsindiamag.com/guide-to-mbirl-model-based-inverse-reinforcement-learning/
This repository contains code for:
- ML3: Meta-Learning via Learned Losses, presented at ICPR 2020, where it received a best student award (pdf)
- MBIRL: Model-Based Inverse Reinforcement Learning from Visual Demonstrations, presented at CoRL 2020 (pdf)
In the LearningToLearn folder, run:
```bash
conda create -n l2l python=3.7
conda activate l2l
python setup.py develop
```
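As a quick sanity check that the editable install worked, the two packages should now be importable. A minimal check, assuming setup.py exposes them under the names ml3 and mbirl (adjust if the package layout differs):

```python
# Sanity check after "python setup.py develop".
# Assumption: the packages are importable as ml3 and mbirl, matching the folder names.
import ml3
import mbirl

print("ml3 and mbirl imported successfully")
```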
To reproduce the results of the ML3 paper, follow the README instructions in the ml3 folder.
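For orientation, the core idea in ML3 is to meta-learn the loss function itself: a small network maps predictions and targets to a loss value, and its parameters are updated so that a gradient step taken with this learned loss reduces the true task objective. Below is a minimal, self-contained PyTorch sketch of that bi-level structure; it is not the repository's implementation, and the architecture, task, and hyperparameters are purely illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(1, 1)                       # task model for a toy regression task
learned_loss = nn.Sequential(                 # meta-network that outputs a loss value
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus()
)
meta_opt = torch.optim.Adam(learned_loss.parameters(), lr=1e-3)

for meta_step in range(1000):
    x = torch.randn(64, 1)
    y = 3.0 * x + 0.5                         # ground-truth targets for the toy task

    # Inner step: update the task model using the *learned* loss.
    pred = model(x)
    inner_loss = learned_loss(torch.cat([pred, y], dim=1)).mean()
    grads = torch.autograd.grad(inner_loss, list(model.parameters()), create_graph=True)
    weight, bias = [p - 1e-2 * g for p, g in zip(model.parameters(), grads)]

    # Evaluate the updated parameters with the true task loss (functional forward pass).
    task_loss = ((x @ weight.t() + bias - y) ** 2).mean()

    # Outer step: backpropagate through the inner update into the learned loss.
    meta_opt.zero_grad()
    task_loss.backward()
    meta_opt.step()
```

See the ml3 README for the full training setups used in the paper.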
To cite ML3:

```
@inproceedings{ml3,
  author    = {Sarah Bechtle and Artem Molchanov and Yevgen Chebotar and Edward Grefenstette and Ludovic Righetti and Gaurav Sukhatme and Franziska Meier},
  title     = {Meta Learning via Learned Loss},
  booktitle = {International Conference on Pattern Recognition, {ICPR}, Italy, January 10-15, 2021},
  year      = {2021}
}
```
To test our MBIRL algorithm, follow the README instructions in the mbirl folder.
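At a high level, MBIRL learns a cost function from demonstrations by differentiating through gradient-based action optimization: an inner loop optimizes actions against the current cost using a dynamics model, and an outer loop updates the cost parameters so that the optimized trajectory matches the demonstration. The snippet below is a toy point-mass sketch of that structure, not the repository's implementation; the dynamics, cost parameterization, and all names are illustrative.

```python
import torch

torch.manual_seed(0)

def dynamics(state, action):
    # Toy point-mass dynamics; MBIRL uses a learned keypoint dynamics model instead.
    return state + action

goal = torch.tensor([2.0, 1.0])
demo = torch.stack([t / 5.0 * goal for t in range(1, 6)])    # demonstrated states

# Learnable, time-dependent weights of a squared distance-to-goal cost.
cost_weights = torch.ones(5, 2, requires_grad=True)
outer_opt = torch.optim.Adam([cost_weights], lr=1e-2)

def rollout(actions):
    state, states = torch.zeros(2), []
    for a in actions:
        state = dynamics(state, a)
        states.append(state)
    return torch.stack(states)

for outer_step in range(200):
    actions = torch.zeros(5, 2, requires_grad=True)

    # Inner loop: gradient-based action optimization on the current learned cost.
    for _ in range(10):
        traj = rollout(actions)
        cost = ((traj - goal) ** 2 * cost_weights).sum()
        (grad,) = torch.autograd.grad(cost, actions, create_graph=True)
        actions = actions - 0.1 * grad

    # Outer loop: update the cost so the optimized trajectory matches the demo.
    irl_loss = ((rollout(actions) - demo) ** 2).mean()
    outer_opt.zero_grad()
    irl_loss.backward()
    outer_opt.step()
```

In the paper, the states are visual keypoints propagated by a learned dynamics model; see the mbirl README for the actual implementation.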
To cite MBIRL:

```
@inproceedings{mbirl,
  author    = {Neha Das and Sarah Bechtle and Todor Davchev and Dinesh Jayaraman and Akshara Rai and Franziska Meier},
  title     = {Model-Based Inverse Reinforcement Learning from Visual Demonstrations},
  booktitle = {Conference on Robot Learning (CoRL)},
  year      = {2020},
  video     = {https://www.youtube.com/watch?v=sRrNhtLk12M&t=52s}
}
```
LearningToLearn is released under the MIT license. See LICENSE for additional details.
See also our Terms of Use and Privacy Policy.