The growing capabilities of learning-based methods in control and robotics have precipitated a shift in the design of software for autonomous systems. Recent successes fuel the hope that robots will increasingly perform a variety of tasks while working alongside humans in complex, dynamic environments. However, the application of learning approaches to real-world robotic systems has been limited, because real-world scenarios introduce challenges that do not arise in simulation.
In this workshop, we aim to identify and tackle the main challenges to learning on real robotic systems. First, many current machine learning methods rely on large quantities of labeled data. While raw sensor data is available at high rates, the required variety is hard to obtain, and the human effort to annotate data or design reward functions is an even larger burden. Second, algorithms must guarantee some measure of safety and robustness to be deployed in real systems that interact with property and people. Instantaneous reset mechanisms, which are common in simulation for recovering from even critical failures, are a great challenge for real robots. Third, the real world is significantly more complex and varied than curated datasets and simulations. Successful approaches must scale to this complexity, adapt to novel situations, and recover from mistakes.
As a community, we are exploring a wide range of solutions to each of these challenges. To probe the limits of different directions, we aim to address, through a panel discussion as well as the invited presentations, questions about the trade-offs and potential necessity of particular design aspects, including:
- Transfer learning (sim2real, multi-task, cross-domain, etc.)
- Explicit methods for planning, prediction, and uncertainty modelling
The primary focus of submissions should lie on tackling the challenges that result from operation in the real world. We encourage submissions that experiment on physical systems, and specifically those that consider algorithmic developments aimed at the challenges physical systems present. We believe this focus on real-world application will bring together a cross-section of researchers from different areas, including our invited speakers, for a fruitful exchange of ideas.
Important dates
- Submission deadline (extended): 13 September 2019 (Anywhere on Earth)
- Notification: 01 October 2019
- Camera ready: 01 December 2019
- Workshop: 14 December 2019
Invited Speakers
- Marc Deisenroth (University College London)
Useful Models for Robot Learning
Abstract: In robot learning, we face the challenge of data-efficient learning. In this talk, we will make the case for three types of models that come in handy in robot learning: probabilistic models, hierarchical models, and models that allow us to incorporate the underlying physics. We will briefly outline strong use cases for these three model types in the context of model-based reinforcement learning, meta learning, and system identification.
- Nima Fazeli (University of Michigan, Ann Arbor)
Toward Topic Models of Robotic Manipulation and the JengaBot
Abstract: Hierarchical representations are a promising tool for robotic manipulation tasks. In this talk, we present a two-step approach to learning a class of hierarchical predictive models with application to physics modeling and robotic manipulation. These models facilitate learning spatio-temporal abstractions over physics and are amenable to probabilistic inference, controls, and planning. To build intuition for the learned abstractions, we provide an analogy between these models and their natural language processing counterparts. Further, we show how these models can be used for sample-efficient control. We end with a discussion of when we expect these approaches not to work well.
- Raia Hadsell (DeepMind)
TBD
Abstract: TBD
- Edward Johns (Imperial College London)
Finding the Right Balance of Modelling and Learning for Vision-based Robot Manipulation
Abstract: In recent years, a wide range of learning-based approaches have been developed for robot control. However, classical robot control, using model-based approaches, has significant potential which has often been overlooked, due to the popularity of end-to-end systems. With a specific focus on vision-based robot manipulation, I will argue that finding the right balance of these two ideologies presents our best chance of making the first real breakthroughs for robotics in unstructured environments.
- Takayuki Osa (Kyushu Institute of Technology)
How should we design a robot learning system?
Abstract: When we develop a robot learning system, we have to consider many design choices. In this talk, I will share some of my experiences with industrial partners and discuss problems in the design of a robot learning system.
- Angela Schoellig (University of Toronto)
Machine Learning for Mobile Robots: Safety, Data Efficiency, and Fast Adaptation
Abstract: The ultimate promise of robotics is to design devices that can physically interact with the world. To date, robots have been primarily deployed in highly structured and predictable environments. However, we envision the next generation of robots (ranging from self-driving and -flying vehicles to robot assistants) to operate in unpredictable and generally unknown environments alongside humans. This challenges current robot algorithms, which have been largely based on a priori knowledge about the system and its environment. While research has shown that robots are able to learn new skills from experience and adapt to unknown situations, these results have been mostly limited to learning single tasks, and demonstrated in simulation or structured lab settings. The next challenge is to enable robot learning in real-world application scenarios. This will require versatile, data-efficient and online learning algorithms that guarantee safety when placed in a closed-loop system architecture. It will also require answering the fundamental question of how to design learning architectures for dynamic and interactive agents. This talk will highlight our recent progress in combining learning methods with formal results from control theory. By combining models with data, our algorithms achieve adaptation to changing conditions during long-term operation, data-efficient multi-robot, multi-task transfer learning, and safe reinforcement learning. We demonstrate our algorithms in vision-based off-road driving and drone flight experiments, as well as on mobile manipulators.
Organizers
- Sanket Kamthe (Imperial College London)
- Kate Rakelly (University of California, Berkeley)
- Markus Wulfmeier (DeepMind)
- Roberto Calandra (Facebook AI Research)
- Danica Kragic (Royal Institute of Technology, KTH)
- Stefan Schaal (Google)
Schedule
09:00 | Introduction and opening remarks |
09:15 | Invited talk - Marc Deisenroth |
09:45 | Coffee break |
10:30 | Poster session 1 |
11:15 | Contributed talk - Laura Smith presenting AVID: Translating Human Demonstrations for Automated Learning |
11:30 | Invited talk - Takayuki Osa |
12:00 | Lunch break |
13:30 | Invited talk - Raia Hadsell |
14:00 | Invited talk - Nima Fazeli |
14:30 | Poster session 2 |
15:30 | Coffee break |
16:00 | Contributed talk - Michelle A. Lee and Carlos Florensa presenting Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning |
16:15 | Invited talk - Angela Schoellig |
16:45 | Invited talk - Edward Johns |
17:15 | Panel discussion |
18:00 | End |
Panel Session
Submit your questions for our panel session here!
Accepted Papers
Accepted papers are listed in alphabetical order. All papers will be presented in poster format during both poster sessions.
- Deep Reinforcement Learning for Biomimetic Touch: Learning to Type Braille
  Alex Church, John Lloyd, Raia Hadsell, Nathan Lepora
- Improving Model-Based Reinforcement Learning via Model-Augmented Pathwise Derivative
  Ignasi Clavera, Yao (Violet) Fu, Pieter Abbeel
- Mutual Information Maximization for Robust Plannable Representations
  Ignasi Clavera, Yiming Ding, Pieter Abbeel
- Active Robot Imitation Learning with Autoencoders and Imagined Rollouts
  Norman Di Palo, Edward Johns
- Self-Supervised Correspondence in Visuomotor Policy Learning
  Peter R Florence, Lucas Manuelli, Russ Tedrake
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning
  Michelle A. Lee, Carlos Florensa, Jonathan Tremblay, Nathan Ratliff, Animesh Garg, Fabio Ramos, Dieter Fox
- Zero-Shot Reinforcement Learning with Deep Attention Convolutional Neural Networks
  Sahika Genc, Sunil Mallya, Sravan Babu Bodapati, Tao Sun, Yunzhe Tao
- H∞ Model-free Reinforcement Learning with Robust Stability Guarantee
  Minghao Han, Lixian Zhang, Yuan Tian, Jun Wang, Wei Pan
- Towards Object Detection from Motion
  Rico Jonschkowski, Austin Stone
- Towards More Sample Efficiency in Reinforcement Learning with Data Augmentation
  Yijiong Lin, Jiancong Huang, Matthieu Zimmer, Yisheng Guan, Juan Rojas, Paul Weng
- Hierarchical Foresight: Self-Supervised Learning of Long-Horizon Tasks via Visual Subgoal Generation
  Suraj Nair, Chelsea Finn
- AVID: Translating Human Demonstrations for Automated Learning
  Laura M Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine
- Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration
  Simon B Stepputtis, Joseph Campbell, Mariano Phielipp, Chitta Baral, Heni Ben Amor
- VILD: Variational Imitation Learning with Diverse-quality Demonstrations
  Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama
- Human-Robot Collaboration via Deep Reinforcement Learning of Real-World Interactions
  Jonas Tjomsland, Ali Shafti, Aldo Faisal
- Thinking While Moving: Deep Reinforcement Learning in Concurrent Environments
  Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog
- Morphology-Agnostic Visual Robotic Control
  Brian Yang, Dinesh Jayaraman, Glen Berseth, Alexei A Efros, Sergey Levine
- Enhanced Adversarial Strategically-Timed Attacks on Deep Reinforcement Learning
  Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, I-Te Hung, Yi Ouyang, Xiaoli Ma
- SwarmNet: Towards Imitation Learning of Multi-Robot Behavior with Graph Neural Networks
  Siyu Zhou, Mariano Phielipp, Jorge Sefair, Sara Walker, Heni Ben Amor
Program Committee
We would like to thank the program committee for shaping the excellent technical program. In alphabetical order they are:
Abbas Abdolmaleki, Hany Abdulsamad, Andrea Bajcsy, Feryal Behbahani, Djalel Benbouzid, Michael Bloesch, Caterina Buizza, Roberto Calandra, Nutan Chen, Misha Denil, Coline Devin, Marco Ewerton, Walter Goodwin, Tuomas Haarnoja, Roland Hafner, James Harrison, Karol Hausman, Edward Johns, Ashvin Nair, Takayuki Osa, Simone Parisi, Akshara Rai, Nemanja Rakicevic, Dushyant Rao, Siddharth Reddy, Apoorva Sharma, Johannes A. Stork, Li Sun, Filipe Veiga, Ruohan Wang, Rob Weston, Yizhe Wu
Contacts
For any further questions, you can contact us at neuripswrl2019@robot-learning.ml
Sponsors
We are very thankful to our corporate sponsors for enabling us to provide student travel grants!