Dear colleagues,

Our next BeNeRL Reinforcement Learning Seminar (Feb 8) is coming up:
Speaker: Pierluca D'Oro (https://proceduralia.github.io), PhD student at Mila. 
Title: On building World Models better than reality
Date: February 8, 16.00-17.00 (CET)
Please find full details about the talk below this email and on the website of the seminar series: https://www.benerl.org/seminar-series

The goal of the online BeNeRL seminar series is to invite RL researchers (mostly advanced PhD students or early-career postdocs) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments and their style/approach for getting these to work.

We would be very glad if you forwarded this invitation within your group and to other colleagues who might be interested (also outside the BeNeRL region). Hope to see you on February 8!

Kind regards,
Zhao Yang & Thomas Moerland
Leiden University

-----------------------------


Upcoming talk: 

Date: February 8, 16.00-17.00 (CET)
Speaker: Pierluca D'Oro (https://proceduralia.github.io)
Title: On building World Models better than reality
Zoom: https://universiteitleiden.zoom.us/j/65545185867?pwd=VWNXQ2FYUXFXbSsvVy9tTE82eDRtZz09
Abstract: Can a world model lead to better policies than the real world when used for reinforcement learning? The talk examines this question, dissecting the features a world model should have to be useful for policy optimization and discussing scalable techniques for learning world models that lead to successful policies.
Bio: Pierluca D'Oro is a final-year PhD student at Mila, supervised by Pierre-Luc Bacon and Marc G. Bellemare, and a Visiting Researcher at Meta in Montreal. He works on the science of AI agents.