Upcoming Talks

Date: 23.10.2014, Time: 13:00-13:30, Location: S202 E302
Dorothea Koert, Research Talk: Inverse Kinematics for Optimal Human-robot Collaboration

Date: 23.10.2014, Time: 13:30-14:00, Location: S202 E302
Janine Hoelscher, B.Sc. Final Thesis Presentation: Tactile Exploration of Material Properties

Date: 30.10.2014, Time: 13:00-13:20, Location: S202 E302
Jan Mundo, M.Sc. Final Presentation: Structure Learning with Movement Primitives

Autonomous Systems at TU Darmstadt

Welcome to the Computational Learning for Autonomous Systems Group and the Intelligent Autonomous Systems Group of the Computer Science Department of the Technische Universitaet Darmstadt. Our research centers on the goal of bringing advanced motor skills to robotics using techniques from machine learning and control. Please check out our research or contact any of our lab members. As we originated from the Robot Learning Lab in the Department for Empirical Inference and Machine Learning at the Max Planck Institute for Intelligent Systems, we also have a few members in Tuebingen.

Creating autonomous robots that can learn to assist humans in situations of daily life is a fascinating challenge for machine learning. While this aim has been a long-standing vision of artificial intelligence and the cognitive sciences, we have yet to achieve the first step of creating robots that can learn to accomplish many different tasks triggered by environmental context or higher-level instruction. The goal of our robot learning laboratory is the realization of a general approach to motor skill learning that moves us closer to human-like performance in robotics. We focus on solving fundamental problems in robotics while developing machine-learning methods.

Computational Learning for Autonomous Systems (CLAS)

The new Computational Learning for Autonomous Systems (CLAS) group is headed by Gerhard Neumann, who has been an Assistant Professor at TU Darmstadt since September 2014. The main focus of the CLAS group is to investigate computational learning algorithms that allow artificial agents to autonomously learn new skills from interaction with the environment, humans, or other agents. We believe that such autonomously learning agents will have a great impact on many areas of everyday life, for example, autonomous robots that help in the household, care for the elderly, or dispose of dangerous goods.

An autonomously learning agent has to acquire a rich set of different behaviours to achieve a variety of goals. It has to learn autonomously how to explore its environment and determine which features are important for making a decision. It has to identify relevant behaviours and determine when to learn new ones. Furthermore, it needs to learn which goals are relevant and how to re-use behaviours to achieve new goals. To meet these objectives, our research concentrates on hierarchical and structured learning of robot control policies, information-theoretic methods for policy search, imitation learning and autonomous exploration, learning forward models for long-term predictions, autonomous cooperative and multi-agent systems, and the biological aspects of autonomous learning systems.
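
To make the information-theoretic policy-search direction more concrete, the following is a minimal sketch of an episodic, KL-bounded parameter search in the spirit of REPS (Relative Entropy Policy Search). The quadratic return function, the sample size, and the KL bound epsilon are illustrative placeholders, not the group's actual implementation.

    # Minimal sketch of episodic, KL-bounded policy search (REPS-style).
    # The quadratic "return" function is a stand-in for a real robot rollout.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def episode_return(theta):
        # Placeholder objective: reward peaks at theta = [1, -1].
        return -np.sum((theta - np.array([1.0, -1.0])) ** 2)

    mean, cov = np.zeros(2), np.eye(2)   # search distribution over policy parameters
    epsilon = 1.0                        # KL bound between successive distributions

    for it in range(20):
        thetas = np.random.multivariate_normal(mean, cov, size=50)
        returns = np.array([episode_return(t) for t in thetas])

        # Dual of the KL-constrained problem, minimized over the temperature eta.
        r_max = returns.max()
        def dual(eta):
            return eta * epsilon + eta * np.log(np.mean(np.exp((returns - r_max) / eta))) + r_max
        eta = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded").x

        # Exponential reweighting followed by a weighted maximum-likelihood update.
        w = np.exp((returns - r_max) / eta)
        w /= w.sum()
        mean = w @ thetas
        diff = thetas - mean
        cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(2)  # regularized for stability

    print("learned mean parameters:", mean)

The KL bound epsilon limits how far each new search distribution may move away from the previous one, which reflects the information-theoretic idea behind these methods: do not change the policy further than the sampled data supports.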

Intelligent Autonomous Systems (IAS)

In the Intelligent Autonomous Systems Group, headed by Jan Peters since July 2014 at TU Darmstadt and since May 2007 at the Max Planck Institute, we develop methods for learning models and control policies in real time; see, e.g., learning models for control and learning operational space control. We are particularly interested in reinforcement learning, where we try to push the state of the art further and have received tremendous support from the RL community. Much of our research relies on learning motor primitives that can be used to learn both elementary tasks and complex applications such as grasping or sports.
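
For readers unfamiliar with motor primitives, the sketch below rolls out a single-degree-of-freedom dynamic movement primitive (DMP), one common motor-primitive representation. The gains, the number of basis functions, and the zero forcing-term weights are illustrative assumptions; in practice the weights are fit to a demonstration, e.g. by regression.

    # Minimal sketch of a single-DoF dynamic movement primitive (DMP) rollout.
    # Basis-function weights would normally be fit to a demonstration; here they
    # are zeros, so the primitive simply converges to the goal.
    import numpy as np

    alpha_z, beta_z, alpha_s, tau = 25.0, 6.25, 3.0, 1.0
    dt, T = 0.01, 1.0
    x0, g = 0.0, 1.0

    centers = np.exp(-alpha_s * np.linspace(0, T, 10))          # basis centers in phase space
    widths = 1.0 / (np.diff(centers, append=centers[-1]) ** 2 + 1e-6)
    weights = np.zeros(10)                                       # placeholder, learned in practice

    x, xd, s = x0, 0.0, 1.0
    trajectory = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)               # Gaussian basis activations
        forcing = (psi @ weights) / (psi.sum() + 1e-10) * s * (g - x0)
        xdd = (alpha_z * (beta_z * (g - x) - xd) + forcing) / tau
        xd += xdd * dt
        x += xd * dt
        s += (-alpha_s * s / tau) * dt                           # canonical (phase) system
        trajectory.append(x)

    print("final position:", trajectory[-1])                     # approaches the goal g

Because the forcing term is scaled by the decaying phase variable s, the primitive converges to the goal g regardless of the learned shape, which is what makes such primitives convenient building blocks for elementary tasks.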

Some more information on us for the general public can be found in a long article in the Max Planck Research magazine, in short pieces in New Scientist and Der Spiegel, as well as on the IEEE Blog on Robotics and Engadget.

Directions and Open Positions

If you are looking for our address or for directions to our lab, please see our contact information. We always have thesis opportunities for enthusiastic and driven Master's/Bachelor's students (please contact Jan Peters or Gerhard Neumann). Check out the currently offered theses (Abschlussarbeiten) or suggest one yourself, drop us a line by email, or simply drop by! We also occasionally have open Ph.D. or Post-Doc positions; see OpenPositions.

News

  • Four papers accepted at HUMANOIDS 2014:
  1. Brandl, S.; Kroemer, O.; Peters, J. (2014). Generalizing Manipulations Between Objects using Warped Parameters, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS). [Details] [PDF] [BibTeX]
  2. Ivaldi, S.; Peters, J.; Padois, V.; Nori, F. (2014). Tools for simulating humanoid robot dynamics: a survey based on user feedback, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS). [Details] [PDF] [BibTeX]
  3. Maeda, G.J.; Ewerton, M.; Lioutikov, R.; Amor, H.B.; Peters, J.; Neumann, G. (2014). Learning Interaction for Collaborative Tasks with Probabilistic Movement Primitives, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS). [Details] [PDF] [BibTeX]
  4. Rueckert, E.; Mindt, M.; Peters, J.; Neumann, G. (2014). Robust Policy Updates for Stochastic Optimal Control, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS). [Details] [PDF] [BibTeX]
  • Four papers accepted at IROS 2014:
  1. Chebotar, Y.; Kroemer, O.; Peters, J. (2014). Learning Robot Tactile Sensing for Object Manipulation, Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS). [Details] [PDF] [BibTeX]
  2. Kroemer, O.; Peters, J. (2014). Predicting Object Interactions from Contact Distributions, Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS). [Details] [PDF] [BibTeX]
  3. Luck, K.S.; Neumann, G.; Berger, E.; Peters, J.; Ben Amor, H. (2014). Latent Space Policy Search for Robotics, Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS). [Details] [PDF] [BibTeX]
  4. Manschitz, S.; Kober, J.; Gienger, M.; Peters, J. (2014). Learning to Sequence Movement Primitives from Demonstrations, Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS). [Details] [PDF] [BibTeX]
  • Jan Peters will be Area Chair at Advances in Neural Information Processing Systems (NIPS 2014).
  • Several cool journal papers have been accepted:
  1. Kupcsik, A.G.; Deisenroth, M.P.; Peters, J.; Ai Poh, L.; Vadakkepat, V.; Neumann, G. (conditionally accepted). Model-based Contextual Policy Search for Data-Efficient Generalization of Robot Skills, Artificial Intelligence. [Details] [BibTeX]
  2. Dann, C.; Neumann, G.; Peters, J. (2014). Policy Evaluation with Temporal Differences: A Survey and Comparison, Journal of Machine Learning Research, 15, March, pp.809-883. [Details] [PDF] [BibTeX]
  3. Meyer, T.; Peters, J.; Zander, T.O.; Schoelkopf, B.; Grosse-Wentrup, M. (2014). Predicting Motor Learning Performance from Electroencephalographic Data, Journal of Neuroengineering and Rehabilitation, 11, 1. [Details] [PDF] [BibTeX]
  4. Muelling, K.; Boularias, A.; Mohler, B.; Schoelkopf, B.; Peters, J. (2014). Learning Strategies in Table Tennis using Inverse Reinforcement Learning, Biological Cybernetics. [Details] [PDF] [BibTeX]
  5. Neumann, G.; Daniel, C.; Paraschos, A.; Kupcsik, A.; Peters, J. (2014). Learning Modular Policies for Robotics, Frontiers in Computational Neuroscience. [Details] [PDF] [BibTeX]
  6. Wierstra, D.; Schaul, T.; Glasmachers, T.; Sun, Y.; Peters, J.; Schmidhuber, J. (2014). Natural Evolution Strategies, Journal of Machine Learning Research, 15, March, pp.949-980. [Details] [PDF] [BibTeX]
  7. Lioutikov, R.; Paraschos, A.; Peters, J.; Neumann, G. (accepted). Generalizing Movements with Information Theoretic Stochastic Optimal Control, Journal of Aerospace Information Systems. [Details] [BibTeX]
  • Six ICRA 2014 papers accepted (100% acceptance rate for our team):
  1. Kroemer, O.; van Hoof, H.; Neumann, G.; Peters, J. (2014). Learning to Predict Phases of Manipulation Tasks as Hidden States, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  2. Ben Amor, H.; Neumann, G.; Kamthe, S.; Kroemer, O.; Peters, J. (2014). Interaction Primitives for Human-Robot Cooperation Tasks, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  3. Calandra, R.; Seyfarth, A.; Peters, J.; Deisenroth, M.P. (2014). An Experimental Comparison of Bayesian Optimization for Bipedal Locomotion, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  4. Deisenroth, M.P.; Englert, P.; Peters, J.; Fox, D. (2014). Multi-Task Policy Search for Robotics, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  5. Lioutikov, R.; Paraschos, A.; Peters, J.; Neumann, G. (2014). Sample-Based Information-Theoretic Stochastic Optimal Control, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  6. Bischoff, B.; Nguyen-Tuong, D.; van Hoof, H.; McHutchon, A.; Rasmussen, C.E.; Knoll, A.; Peters, J.; Deisenroth, M.P. (2014). Policy Search For Learning Robot Control Using Sparse Data, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  • Jan Peters will be Editor at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014).
  • Jan Peters will be Area Chair at the Seventeenth International Conference on Artificial Intelligence and Statistics (AISTATS 2014).

Past News

  
