Note
Registered participants and authors can access everything they need during the workshop at https://neurips.cc/virtual/2022/workshop/49997. Once you visit the webpage, click on the arrow next to your name at the top right corner and select the appropriate TIME ZONE. Please use the Zoom link with the password at the top of the same page to join panels and talks, the Topia links to join poster sessions, and RocketChat to post your questions. For any further questions, you can contact us at neuripswrl2022@robot-learning.ml.
We have prerecorded all talks and uploaded them to YouTube. The talks will NOT be replayed during the workshop. We encourage all participants to watch them ahead of time so that we can make the panel discussions with the speakers more engaging and insightful.
NeurIPS 2022 will be held from Nov 28 through Dec 9, 2022. It will be a hybrid conference with a physical component in New Orleans during the first week and a virtual component during the second week. Please visit https://neurips.cc/Conferences/2022/ for the latest updates about the venue and dates. This workshop will be fully virtual and will be held on 9 December 2022.
Abstract
Machine learning (ML) has been one of the premier drivers of recent advances in robotics research and has begun to impact several real-world robotic applications in unstructured and human-centric environments, such as transportation, healthcare, and manufacturing. At the same time, robotics has been a key motivation for numerous research problems in artificial intelligence, from efficient algorithms to robust generalization of decision models. However, there are still considerable obstacles to fully leveraging state-of-the-art ML in real-world robotics applications. For capable robots equipped with ML models, guarantees on the robustness and additional analysis of the social implications of these models are required before they can be used in real-world robotic domains that interface with humans (e.g., autonomous vehicles and tele-operated or assistive robots).
To support the development of robots that are safely deployable among humans, the field must consider trustworthiness as a central aspect in the development of real-world robot learning systems. Unlike many other applications of ML, the combined complexity of physical robotic platforms and learning-based perception-action loops presents unique technical challenges. These challenges include concrete technical problems such as very high performance requirements, explainability, predictability, verification, uncertainty quantification, and robust operation in dynamically distributed, open-set domains. Since robots are developed for use in human environments, in addition to these technical challenges, we must also consider the social aspects of robotics such as privacy, transparency, fairness, and algorithmic bias. Both technical and social challenges also present opportunities for robotics and ML researchers alike. Contributing to advances in the aforementioned sub-fields promises to have an important impact on real-world robot deployment in human environments, building towards robots that use human feedback, indicate when their model is uncertain, and are safe to operate autonomously in safety-critical settings such as healthcare and transportation.
This year’s robot learning workshop aims at discussing unique research challenges from the lens of trustworthy robotics. We adopt a broad definition of trustworthiness that highlights different application domains and the responsibility of the robotics and ML research communities to develop “robots for social good.” Bringing together experts with diverse backgrounds from the ML and robotics communities, the workshop will offer new perspectives on trust in the context of ML-driven robot systems.
Scope of contributions:
Specific areas of interest include but are not limited to:
- epistemic uncertainty estimation in robotics;
- explainable robot learning;
- domain adaptation and distribution shift in robot learning;
- multi-modal trustworthy sensing and sensor fusion;
- safe deployment for applications such as agriculture, space, science, and healthcare;
- privacy-aware robotic perception;
- information system security in robot learning;
- learning from offline data and safe online learning;
- simulation-to-reality transfer for safe deployment;
- robustness and safety evaluation;
- certifiability and performance guarantees;
- robotics for social good;
- safe robot learning with humans in the loop;
- algorithmic bias in robot learning;
- ethical robotics.
Invited Speakers
- Karol Hausman (Google Brain) with Brian Ichter (Google Brain)
- Georgia Chalvatzaki (TU Darmstadt) with Puze Liu (TU Darmstadt)
- Shuran Song (Columbia University)
- Katherine Driggs-Campbell (UIUC) with Zhe Huang (UIUC)
- Luca Carlone (MIT) with Rajat Talak (MIT)
- Stefanie Tellex (Brown University)
- Been Kim (Google Research)
- Sarah Dean (Cornell University) with Andrew J. Taylor (Caltech)
- Haoran Tang (UC Berkeley) with Anusha Nagabandi (Covariant)
- Matthew Johnson-Roberson (CMU) with S. R. Manikandasriram (U. Michigan)
- Scott Reed, Gabriel Barth-Maron, and Jackie Kay (DeepMind)
- Leila Takayama (UCSC)
- Andy Zeng (Google Brain) with Jacky Liang (CMU)
- Animesh Garg (University of Toronto)
Debate
- Katherine Driggs-Campbell (UIUC)
- Sarah Dean (Cornell University)
- Luca Carlone (MIT)
- Animesh Garg (University of Toronto)
- Matthew Johnson-Roberson (CMU)
- Igor Gilitschenski (University of Toronto)
Schedule
Tentative. All times in Pacific Standard Time (San Francisco time).
07:00 - 07:15 | Opening Remarks |
07:15 - 07:30 | Contributed Talk: DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics |
07:30 - 08:15 | Panel: Uncertainty-Aware Machine Learning for Robotics (Q&A 1) Pre-recorded talks to watch before the panel |
08:15 - 08:30 | Break |
08:30 - 09:15 | Panel: Scaling & Models (Q&A 2) Pre-recorded talks to watch before the panel |
09:15 - 10:15 | Poster Session 1 |
10:15 - 11:00 | Panel: Safety and Verification for Decision-Making Systems (Q&A 3) Pre-recorded talks to watch before the panel |
11:00 - 15:00 | Break |
15:00 - 16:00 | Debate Session: Robotics for Good |
16:00 - 16:15 | Contributed Talk: Robust Forecasting for Robotic Control: A Game-Theoretic Approach |
16:15 - 16:30 | Contributed Talk: Certifiably-correct Control Policies for Safe Learning and Adaptation in Assistive Robotics |
16:30 - 17:00 | Break |
17:00 - 18:00 | Poster Session 2 |
18:00 - 18:45 | Panel: Explainability/Predictability in Robotics (Q&A 4) Pre-recorded talks to watch before the panel |
18:45 - 19:00 | Closing Remarks |
Accepted Papers
- Visual Backtracking Teleoperation: A Data Collection Protocol for Offline Image-Based RL [paper]
- Conformal Semantic Keypoint Detection with Statistical Guarantees [paper]
- A Contextual Bandit Approach for Learning to Plan in Environments with Probabilistic Goal Configurations [paper]
- Imitating careful experts to avoid catastrophic events [paper]
- Infrastructure-based End-to-End Learning and Prevention of Driver Failure [paper]
- Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty [paper]
- A Benchmark for Out of Distribution Detection in Point Cloud 3D Semantic Segmentation [paper]
- VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training [paper]
- Learning Certifiably Robust Controllers Using Fragile Perception [paper]
- PARTNR: Pick and place Ambiguity Resolving by Trustworthy iNteractive leaRning [paper]
- Robust Forecasting for Robotic Control: A Game-Theoretic Approach [paper]
- DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics [paper]
- MAEA: Multimodal Attribution Framework for Embodied AI [paper]
- Safety-Guaranteed Skill Discovery for Robot Manipulation Tasks [paper]
- Insights towards Sim2Real Contact-Rich Manipulation [paper]
- Train Offline, Test Online: A Real Robot Learning Benchmark [paper]
- Learning a Meta-Controller for Dynamic Grasping [paper]
- Real World Offline Reinforcement Learning with Realistic Data Source [paper]
- Interactive Language: Talking to Robots in Real Time [paper]
- Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models [paper]
- Certifiably-correct Control Policies for Safe Learning and Adaptation in Assistive Robotics [paper]
- Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [paper]
Organizers
- Alex Bewley (Google Research, Zurich)
- Anca Dragan (UC Berkeley)
- Igor Gilitschenski (University of Toronto)
- Emily Hannigan (Columbia University)
- Masha Itkina (Toyota Research Institute)
- Hamidreza Kasaei (University of Groningen, Netherlands)
- Nathan Lambert (University of California, Berkeley)
- Julien Perez (Naver Labs Europe)
- Ransalu Senanayake (Stanford University)
- Jonathan Tompson (Google Research, Mountain View)
- Markus Wulfmeier (Google DeepMind, London)
Advisory Board
- Roberto Calandra (Facebook AI Research)
- Jens Kober (TU Delft, Netherlands)
- Danica Kragic (KTH)
- Fabio Ramos (NVIDIA, University of Sydney)
- Vincent Vanhoucke (Google Research, Mountain View)
Important dates
- Submission deadline: Friday, 30 September 2022 (Anywhere on Earth), extended from 22 September 2022
- Notification: 20 October 2022 (Anywhere on Earth)
- Optional poster video due: TBD (Anywhere on Earth)
- Camera-ready due: 18 November 2022 (Anywhere on Earth)
- Poster due: 30 November 2022 (Anywhere on Earth)
- Workshop: 9 December 2022
Submission Instructions
Submissions should use the NeurIPS Workshop template available here and be limited to 4 pages (plus as many pages as necessary for references). The reviewing process will be double blind, so please submit anonymously by using ‘\usepackage{neurips_wrl2022}’ in your main tex file.
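For reference, a minimal preamble consistent with the instructions above might look like the sketch below. This is only an illustration: the title and section names are placeholders, and the exact options supported by the `neurips_wrl2022` style file are those defined in the official template, which you should consult directly.

```latex
\documentclass{article}

% Workshop style file from the template; loading it without options
% is assumed here to produce the anonymous submission format.
\usepackage{neurips_wrl2022}

\title{Placeholder Title of Your Submission}

\begin{document}
\maketitle

\section{Introduction}
% Main content: up to 4 pages; references do not count toward the limit.

\bibliographystyle{plainnat}
\bibliography{references}

\end{document}
```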
Accepted papers and any supplementary material will be made available on the workshop website. However, this does not constitute an archival publication, and no formal workshop proceedings will be published, so contributors are free to publish their work in archival journals or conferences.
Submissions can be made at https://cmt3.research.microsoft.com/RLW2022.
FAQ
- There are two poster sessions in the schedule. Which should I attend?
To accommodate multiple time zones in this virtual workshop format, all posters will be on display on our GatherTown poster floor throughout the duration of the workshop. If feasible, authors should attend both scheduled sessions to present their poster. If that is not possible, please present at the session that works best for you.
- Can supplementary material be added beyond the 4-page limit, and are there any restrictions on it?
Yes, you may include additional supplementary material, but we ask that it be limited to a reasonable amount (max 10 pages in addition to the main submission) and that it follow the same NeurIPS format as the paper. References do not count towards the limit of 4 pages.
- Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
- Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
We will not be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate). However, we will accept submissions that are under review at the time of submission to our workshop. For instance, papers that have been submitted to the International Conference on Robotics and Automation (ICRA) 2023 or the International Conference on Learning Representations (ICLR) 2023 can be submitted to our workshop.
- My real-robot experiments are affected by Covid-19. Can I include simulation results instead?
If your paper requires conducting experiments on physical robots and access to the experimental platform is limited due to Covid-19 workplace access restrictions, whenever possible, you may validate your methods through simulation.
Contacts
For any further questions, you can contact us at neuripswrl2022@robot-learning.ml.
Sponsors
We are very thankful to our corporate sponsors for enabling us to provide best paper awards and cover student registration fees.