May 20 (08:00-12:30) in Room 27
08:00 - 08:10 Introduction
08:10 - 08:30 Organizers: "Nature vs. Nurture in Robotics" (slides)
08:35 - 08:55 Marc Toussaint: "Without Nature no Nurture!" (slides)
09:00 - 09:20 Leslie P. Kaelbling: "Doing for Our Robots What Evolution Did for Us" (slides)
09:25 - 09:55 Jonas Buchli: "It's All Just Optimization..." (slides)
10:00 - 10:10 One-minute poster spotlights (slides)
10:10 - 10:40 Coffee break and poster session
10:40 - 11:00 Sergey Levine: "Deep Robotic Learning" (slides)
11:05 - 11:25 George Konidaris: "Avoiding Learning by Exploiting Structure" (slides)
11:30 - 12:30 Panel discussion
Olov Andersson and Patrick Doherty: "Scalable Probabilistic Inference Could Bridge the Gap Between Learning and Robotics"
Minoru Asada: "Robot nature via robot nurture: from cognitive developmental robotics to constructive developmental science"
Laurie Dickstein-Fischer: "Who's Shaping Who: Symbiotic Reinforcement Learning for Robot-Assisted Autism Therapy"
Karlo Griparic, Tomislav Haus, Marko Car, Tamara Petrovic, Matko Orsag, Damjan Miklic, and Stjepan Bogdan: "ASSISIbf: a story of bees, fish and robots"
Aleksandar Jevtić, Adrià Colomé, Guillem Alenyà, and Carme Torras: "Learning Robot Motion through User Intervention and Policy Search"
Rico Jonschkowski and Oliver Brock: "Learning with Robotic Priors"
Peter Karkus, Andras Kupcsik, David Hsu, and Wee Sun Lee: "Integrated Planning and Learning in a Hierarchy"
Alex Kendall: "Nature vs. Nurture (or Engineering vs. Machine Learning) in Robotics"
Anh Nguyen, Dimitrios Kanoulas, Darwin G. Caldwell, and Nikos G. Tsagarakis: "Towards Scene Understanding for Autonomous Manipulation"
Marc Toussaint: "Without Nature no Nurture!"
Abstract: Learning is all good. But. Consider an average ICRA paper and the real-world robot demo included. What did the students(?) have to do to get this running? Probably use ROS, a PCL pipeline for table-top segmentation, MoveIt, SMACH, perhaps some SLAM, control or path planning methods, and then, somewhere hidden in the zillion lines of code, there is a little learning algorithm which was prominently proposed in the paper. This tells us that the 'application scope' of learning methods in today's systems is typically narrow. Maybe the reason is not that ML is limited, but that the things that make up a robot system are not really understood, not coherently represented in a way suitable for ML, in vector spaces, as a relational database, or as "generalized relational semi-DecPOMDPs" or so. But these things are the Nature of robotics. We don't understand the Nature of robotics---Nature in the workshop's title sense. And without that, no Nurture. The discussion of learning on the system level is far from novel; I will mention previous views. And I will mention my (current) personal fetish for going about this: formulating the integrated problem in optimization terms, and thinking of novel languages to be able to do so. I mention Logic-Geometric Programming, physical exploration, and RL for manipulation learning as shy examples of this endeavour.
Leslie P. Kaelbling: "Doing for Our Robots What Evolution Did for Us"
Abstract: It is tempting to imagine advances in machine learning that will remove the need for any human engineering whatsoever. And, if we were able to operate on an evolutionary time-scale, that might be possible. However, we would like to build robots that can learn to be useful in their (and our!) lifetimes. So, we need to reduce the hypothesis space for the learning in a way that gives as much leverage to the learning as possible (by reducing sample complexity and difficulty of the optimization problem) while retaining the ability to learn to behave effectively in highly complex and variable environments. I don't actually know how to do this, but I'll speculate about it anyway.
Jonas Buchli: "It's All Just Optimization..."
Abstract: I will discuss the topic of 'Nature vs. Nurture in Robotics' from an optimization point of view and make a possibly contentious statement: as all (!) design and programming of our robots can be subsumed under an optimization perspective, the nature vs. nurture debate in robotics is just a matter of limited computational resources (or a statement thereof). Otherwise stated, the border between what is nature (i.e. a human-engineered solution) and what is nurture (machine learning, optimal design, ...) will constantly shift towards more nurture and less nature as our computational abilities increase. However, that does not mean this is a simple and natural evolution of things, as it requires more than just waiting for faster processors; we might have to rethink some more basic computational paradigms to push the boundary further. Looking at the same line of thought from a different angle, we can ask ourselves why in nature we see a relatively clear delineation between nature and nurture. Is it an implementation detail and a limit of mother nature? Or is it because of fundamental rules (just as fundamental as the laws of thermodynamics and conservation of information)? In the first case my statement will be shown true at some point; in the second we will run into (i.e. discover) this fundamental limit sooner or later.
Sergey Levine: "Deep Robotic Learning"
George Konidaris: "Avoiding Learning by Exploiting Structure"
Abstract: Learning is hard, and should be avoided where possible. We can view the nature vs. nurture question as a formal question of the extent to which the structure in an agent's environment allows us to construct the agent (nature) so as to maximally avoid learning (nurture). I will formalize this as a question about how many degrees of freedom a learning agent should have, and propose that it can be related to the distribution of problems it expects to face over its lifetime. In particular, the effective "degrees of freedom" necessary to specify intelligent behavior is the size of the sufficient statistic describing the agent's problem distribution: for any specific problem it faces, the agent should do no more learning than identifying the specific value of that statistic.