Dear colleagues,
 
Our next BeNeRL Reinforcement Learning Seminar (Dec. 19) is coming: 
Speaker: Hojoon Lee (https://joonleesky.github.io), PhD student at KAIST AI.

Title: Designing Neural Network Architecture for Deep Reinforcement Learning
Date: December 19, 16.00-17.00 (CET)
Please find full details about the talk below this email and on the website of the seminar series: https://www.benerl.org/seminar-series
 
The goal of the online BeNeRL seminar series is to invite RL researchers (mostly advanced PhD students or early-career postgraduates) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments and their style/approach for getting these to work.
 
We would be very glad if you forwarded this invitation within your group and to other colleagues who would be interested (also outside the BeNeRL region). Hope to see you on December 19!
 
Kind regards,
Zhao Yang & Thomas Moerland
VU Amsterdam & Leiden University
 
——————————————————————
 
 
Upcoming talk: 
 
Date: December 19, 16.00-17.00 (CET)
Speaker: Hojoon Lee (https://joonleesky.github.io)

Title: Designing Neural Network Architecture for Deep Reinforcement Learning
Zoom: https://universiteitleiden.zoom.us/j/65411016557?pwd=MzlqcVhzVzUyZlJKTEE0Nk5uQkpEUT09
Abstract: While scaling laws have accelerated breakthroughs in computer vision and language modeling, their effects are less predictable in reinforcement learning (RL), where simply “scaling up” data, parameters, and compute rarely guarantees better results. In this talk, I will explore the barriers that make scaling challenging in RL and introduce new architectural designs that alleviate these challenges. I will also discuss future research opportunities for improving scaling laws in RL.
Bio: Hojoon is a PhD student at KAIST AI, advised by Professor Jaegul Choo. He previously received his M.S. from KAIST and his B.S. from Korea University. In 2024, he interned with the Gran Turismo team at Sony AI, mentored by Takuma Seno and Professor Peter Stone. In 2022, he interned with the RL team at Kakao, developing a new RL framework, Jordly. His research focuses on designing network architectures and algorithms for RL that can continually learn, adapt, and generalize in dynamic environments.