Important links
We have prerecorded all talks and uploaded them to YouTube. The talks will NOT be replayed during the workshop. We encourage all participants to watch them ahead of time so that we can make the panel discussions with the speakers more engaging and insightful.
Abstract
Applying machine learning to real-world systems such as robots has been an important part of the NeurIPS community in recent years. Progress in machine learning has enabled robots to demonstrate strong performance in helping humans with household and care-taking tasks, in manufacturing and logistics, in transportation and monitoring, and in many other unstructured, human-centric environments. While these results are promising, access to high-quality, task-relevant data remains one of the largest bottlenecks for the successful deployment of such technologies in the real world.
Lifelong learning, transfer, and continuous improvement during deployment are the most likely path to breaking that barrier. However, accessing these data sources comes with fundamental challenges, including safety, stability, and the daunting issue of providing supervision for learning while the robot is in operation. Today, unique new opportunities are presenting themselves in this quest for robust, continuous learning: large-scale, self-supervised, and multimodal approaches to learning are matching and often exceeding state-of-the-art supervised learning approaches; reinforcement and imitation learning are becoming more stable and data-efficient in real-world settings; and new approaches combining strong, principled safety and stability guarantees with the expressive power of machine learning are emerging.
This workshop aims to discuss how these emerging trends in self-supervision and lifelong learning can best be utilized in real-world robotic systems. We bring together experts with diverse perspectives on this topic to highlight the ways current successes in the field are changing the conversation around lifelong learning, and how this will affect the future of robotics, machine learning, and our ability to deploy intelligent, self-improving agents that enhance people’s lives.
Scope of contributions:
- Challenges in real-world application of machine learning in robot perception and decision-making
- Lifelong learning and adaptation for robots
- Data-efficiency via transfer, multitask, and meta learning
- Understanding, quantifying, and bridging the simulation-to-reality gap
- Uncertainty, robustness, and safety
- Self-supervised and semi-supervised representation learning
- Predictive coding
- Environment prediction
- Occlusion inference
- Long-horizon task learning
- Demonstration-based and goal-oriented policy learning
- Reward specification or learning
- Online or active learning for system identification and adaptation to changing dynamics
- Self-supervised skill acquisition via self-play and student-teacher approaches
- Transfer learning across robot morphologies
- Architectures for open-ended learning
- Active perception
- Scene interpretation
Invited Speakers
- Razvan Pascanu (DeepMind) with Michał Zając (Jagiellonian University)
- Tamim Asfour (Karlsruhe Institute of Technology)
- Danica Kragic (KTH Royal Institute of Technology in Stockholm) with Ioanna Mitsioni (KTH) and Petra Poklukar (KTH)
- Anca Dragan (UC Berkeley)
- Dorsa Sadigh with Sidd Karamcheti (Stanford)
- Eric Eaton with Jorge Mendez (Uni. of Pennsylvania)
- Chad Jenkins with Anthony Opipari and Zhen Zeng (Uni. of Michigan)
- Nathan Ratliff (NVIDIA) with Anqi Li (Uni. of Washington) and Mandy Xie (Georgia Tech)
- Jeannette Bohg with Rika Antonova (Stanford)
- Sonia Chernova with Weiyu Liu (Georgia Tech)
- Claudia Clopath with Tudor Berariu (Imperial College London)
- Jean-Baptiste Mouret (INRIA) with Konstantinos Chatzilygeroudis (Uni. of Patras)
- Beomjoon Kim (KAIST)
- Roberto Calandra (Facebook AI Research)
- Hao Su (UC San Diego) with Kaichun Mo (Stanford) and Fanbo Xiang (UC San Diego)
- Jan Peters (TU Darmstadt)
Schedule
In Pacific Time (San Francisco Time)
07:00 - 07:15 | Opening Remarks |
07:15 - 07:30 | Contributed Talk 1: Continual Learning of Semantic Segmentation using Complementary 2D-3D Data Representations (Best paper runner-up) |
07:30 - 08:15 | Panel: Learning from and Interacting with Humans (Q&A 1), with Anca Dragan (UC Berkeley), Dorsa Sadigh (Stanford), Chad Jenkins (Uni. of Michigan), Sonia Chernova (Georgia Tech), and Tamim Asfour (Karlsruhe Institute of Technology). Pre-recorded talks to watch before the panel. |
08:15 - 08:45 | Break |
08:45 - 09:45 | Poster Session 1 |
09:45 - 10:30 | Panel: Domains and Applications (Q&A 2), with Danica Kragic (KTH Royal Institute of Technology in Stockholm), Razvan Pascanu (DeepMind), Jean-Baptiste Mouret (INRIA), and Jan Peters (TU Darmstadt). Pre-recorded talks to watch before the panel. |
10:30 - 15:15 | Break |
15:15 - 16:15 | Panel: End2End and Modular Systems (Q&A 3), with Beomjoon Kim (KAIST), Eric Eaton (UPenn), Nathan Ratliff (NVIDIA), Jeannette Bohg (Stanford), Hao Su (UC San Diego), Claudia Clopath (Imperial College London), and Roberto Calandra (Facebook AI Research). Pre-recorded talks to watch before the panel. |
16:15 - 16:30 | Contributed Talk 2: Lifelong Robotic Reinforcement Learning by Retaining Experiences (Best paper award) |
16:30 - 17:00 | Break |
17:00 - 18:00 | Poster Session 2 |
18:00 - 19:00 | Debate: Scale (more data/compute) vs. smarts (intelligent control/perception algorithms), with Nathan Ratliff (NVIDIA), Eric Eaton (Uni. of Pennsylvania), Dorsa Sadigh (Stanford), Roberto Calandra (Facebook AI Research), Chad Jenkins (Uni. of Michigan), and Hao Su (UC San Diego). |
19:00 - 19:15 | Closing Remarks |
Poster Session
Accepted Papers
Sequential learning theory and practice
- A1: Sample-Efficient Policy Search with a Trajectory Autoencoder [paper]
- A2: Learning Design and Construction with Varying-Sized Materials via Prioritized Memory Resets [paper]
- A3: panda-gym: Open-source Goal-conditioned Environments for Robotic Learning [paper]
- A4: Versatile Inverse Reinforcement Learning via Cumulative Rewards [paper]
- A5: Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration [paper]
- A6: Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets [paper]
Lifelong and causal models
- B1: Open-Access Physical Robotics Environment for Real-World Reinforcement Learning Benchmark and Research [paper]
- B2: Guiding Evolutionary Strategies by Differentiable Robot Simulators [paper]
- B3: Maximum Likelihood Constraint Inference on Continuous State Spaces [paper]
- B4: Lifelong Robotic Reinforcement Learning by Retaining Experiences [paper]
- B5: Task-Independent Causal State Abstraction [paper]
- B6: Variational Inference MPC for Robot Motion with Normalizing Flows [paper]
Robotic perception
- C1: Solving Occlusion in Terrain Mapping using Neural Networks [paper]
- C2: Visual Affordance-guided Policy Optimization [paper]
- C3: Using Dense Object Descriptors for Picking Cluttered General Objects with Reinforcement Learning [paper]
- C4: Object Representations Guided By Optical Flow [paper]
- C5: Continual Learning of Semantic Segmentation using Complementary 2D-3D Data Representations [paper]
- C6: IL-flOw: Imitation Learning from Observation using Normalizing Flows [paper]
Demonstration and Imitation
- D1: Assistive Tele-op: Leveraging Transformers to Collect Robotic Task Demonstrations [paper]
- D2: Hybrid Imitative Planning with Geometric and Predictive Costs in Offroad Environments [paper]
- D3: Demonstration-Guided Q-Learning [paper]
- D4: ADHERENT: Learning Human-like Trajectory Generators for Whole-body Control of Humanoid Robots [paper]
- D5: Simultaneous Human Action and Motion Prediction [paper]
- D6: What Would the Expert do()?: Causal Imitation Learning [paper]
Organizers
- Alex Bewley (Google Research, Zurich)
- Igor Gilitschenski (University of Toronto)
- Masha Itkina (Stanford University)
- Hamidreza Kasaei (University of Groningen, Netherlands)
- Jens Kober (TU Delft, Netherlands)
- Nathan Lambert (University of California, Berkeley)
- Julien Perez (Naver Labs Europe)
- Ransalu Senanayake (Stanford University)
- Vincent Vanhoucke (Google Research, Mountain View)
- Markus Wulfmeier (Google DeepMind, London)
Program Committee
We would like to thank the program committee for shaping the excellent technical program. They are: Ben Moran, Boris Ivanovic, Jayesh Gupta, Johannes Betz, Johannes, Laura Graesser, Linda van, Lionel Ott, Nemanja Rakicevic, Saeid Amiri, Ted Xiao, Tuomas Haarnoja, Vincent Vanhoucke, Zlatan Ajanovic, Anthony Francis, Dushyant Rao, Edward Johns, Giovanni Franzese, Hermann Blum, Jacob Varley, Jie Tan, Krishan Rana, Michelle Lee, Nishan Srishankar, Peter Karkus, Spencer Richards, Thomas Power, Tomi Silander, Yevgen Chebotar, Yizhe Wu, Yunzhu Li, Abhishek Cauligi, Ashvin Nair, Bernard Lange, Carlos Celemin, Christina Kazantzidou, David Rendleman, Esen Yel, Jianwei Yang, Jihong Zhu, Kunal Menda, Lukas Schmid, Mao Shan, Martina Zambelli, Michel Breyer, Minttu Alakuijala, Preston Culbertson, Sarah Bechtle, Siddharth Karamcheti, Steven Bohez, Thomas Lew, Todor Davchev, Weixuan Zhang
Important dates
Financial Support for NeurIPS Attendees - Dec 1 deadline
- Submission deadline: 30 September 2021 (Anywhere on Earth)
- Notification: 23 October 2021 (Anywhere on Earth)
- Optional poster video due: 01 November 2021 (Anywhere on Earth)
- Camera-ready due: 19 November 2021 (Anywhere on Earth)
- Workshop: 14 December 2021
Submission Instructions
Submissions should use the NeurIPS Workshop template available here and be 4 pages long (plus as many pages as necessary for references). The reviewing process will be double-blind, so please anonymize your submission by using ‘\usepackage{neurips_wrl2021}’ in your main tex file.
Accepted papers and any supplementary material will be made available on the workshop website. However, this does not constitute an archival publication, and no formal workshop proceedings will be produced, meaning contributors are free to publish their work in archival journals or conferences.
Submissions can be made at https://cmt3.research.microsoft.com/NEURIPSWRL2021/.
Poster and Camera-Ready Submission Instructions
Camera-ready paper deadline (Nov 19, 2021 AOE)
- Make sure to include your name and affiliations. This can be done by adding the “final” option, i.e. \usepackage[final]{neurips_wrl2021}, in your tex file (see the snippet after this list).
- The camera-ready version is still 4 pages + references.
- This file should be uploaded by clicking Create Camera Ready Submission at https://cmt3.research.microsoft.com/NEURIPSWRL2021/.
- If you have supplementary materials as a PDF, you can add them as extra pages to the camera-ready paper (after the main 4 pages and the references) and submit everything as a single PDF.
- If you have videos, code, or any other non-PDF materials, unfortunately, we cannot accept them. However, you are encouraged to upload them separately to your own website, Google Drive, Dropbox, GitHub, YouTube, etc., and include the links in the paper.
Poster deadline (Nov 19, 2021 AOE)
- You will be given access to gather.town to interact with the workshop attendees and present your work. We will provide more details about gather.town in early December.
- In the meantime, you are required to create a poster (PNG file, at most 5120 pixels wide by 2880 pixels high, and no more than 10 MB) and upload it before Nov 19, 2021 AOE. Please follow the instructions given at https://neurips.cc/PosterUpload.
FAQ
- There are two poster sessions in the schedule. Which should I attend?
  To accommodate multiple timezones in this virtual workshop format, all posters will be on display on our GatherTown poster floor throughout the duration of the workshop. If feasible, authors should attend both scheduled sessions to present their poster; if that is not possible, please present at the session that works best for you.
- Can supplementary material be added beyond the 4-page limit, and are there any restrictions on it?
  Yes, you may include additional supplementary material, but we ask that it be limited to a reasonable amount (max 10 pages in addition to the main submission) and that it follow the same NeurIPS format as the paper. References do not count towards the limit of 4 pages.
- Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
  We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
- Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
  We will not be accepting such submissions unless they have been adapted to contain significantly new results (novelty is one of the qualities reviewers will be asked to evaluate). However, we will accept submissions that are under review at the time of submission to our workshop. For instance, papers that have been submitted to the International Conference on Robotics and Automation (ICRA) 2022 can be submitted to our workshop.
- My real-robot experiments are affected by Covid-19. Can I include simulation results instead?
  Yes. If your paper requires conducting experiments on physical robots and access to the experimental platform is limited due to Covid-19 workplace access restrictions, you may validate your methods through simulation whenever possible.
Contacts
For any further questions, you can contact us at neuripswrl2021@robot-learning.ml
Sponsors
We are very thankful to our corporate sponsors, Naver Labs Europe, Google Brain, and DeepMind, for enabling us to provide best paper awards and to cover student registration fees.