Dear colleagues,
We are starting a new monthly reinforcement learning seminar series hosted by the Belgium-Netherlands Reinforcement Learning (BeNeRL) research community: https://www.benerl.org/seminar-series. The seminars are scheduled for the second Thursday of every month, 16:00-17:00 (CET). All seminars are held online to make them easily accessible, and they are of course open to researchers outside the region.
Our aim for the seminar series is to invite RL researchers (mostly advanced PhD candidates or early postdocs) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments and their approach to getting these to work, since this practical knowledge is often not shared in talks and papers.
First seminar (Oct 12)
Speaker: Benjamin Eysenbach (https://ben-eysenbach.github.io/), assistant professor at Princeton.
Title: Connections between Reinforcement Learning and Representation Learning
Date: October 12, 16:00-17:00 (CET)
Please find full details about the talk below this email and on the website of the seminar series: https://www.benerl.org/seminar-series.
We would be very glad if you forwarded this invitation within your group and to other colleagues who might be interested (also outside the BeNeRL region). Hope to see you on October 12!
Kind regards,
Zhao Yang & Thomas Moerland
Leiden University
------------------------------------------
Upcoming talk:
Date: October 12, 16:00-17:00 (CET)
Speaker: Benjamin Eysenbach (https://ben-eysenbach.github.io/)
Title: Connections between Reinforcement Learning and Representation Learning
Zoom: https://universiteitleiden.zoom.us/j/65411016557?pwd=MzlqcVhzVzUyZlJKTEE0Nk5uQkpEUT09
Abstract: In reinforcement learning (RL), it is easier to solve a task if given a good representation. Deep RL promises to simultaneously solve an RL problem and a representation learning problem; it promises simpler methods with fewer objective functions and fewer hyperparameters. However, prior work often finds that these end-to-end approaches tend to be unstable, and instead addresses the representation learning problem with additional machinery (e.g., auxiliary losses, data augmentation). How can we design RL algorithms that directly acquire good representations?
Bio: Benjamin Eysenbach is an Assistant Professor of Computer Science at Princeton University. His research aims to develop principled reinforcement learning (RL) algorithms that obtain state-of-the-art performance with a higher degree of simplicity, scalability, and robustness than current methods. Much of his work uses ideas from probabilistic inference to make progress on important problems in RL (e.g., long-horizon and high-dimensional reasoning, robustness, exploration). Benjamin did his PhD in machine learning at CMU, advised by Ruslan Salakhutdinov and Sergey Levine and supported by the NSF GRFP and the Hertz Fellowship.