Forwarding on behalf of Daniel Dadush:
-----Original Message-----
From: Daniel <D.N.Dadush(a)cwi.nl>
To: dutch-optimization-seminar <dutch-optimization-seminar(a)cwi.nl>;
neo-seminar-list <neo-seminar-list(a)cwi.nl>
Cc: stefje <stefje(a)csail.mit.edu>
Date: Tuesday, 3 October 2023 11:50 AM PDT
Subject: [dutch-optimization-seminar] Stefanie Jegelka, Thursday 19
October, 4pm CET
Dear all,
I am pleased to announce the next international speaker at the Dutch
Seminar on Optimization:
Speaker: Stefanie Jegelka (MIT)
Title: Machine Learning for discrete optimization: Graph Neural
Networks, generalization under shifts, and loss functions
Date: Thursday 19 October, 4pm CET
The meeting will take place here:
https://cwi-nl.zoom.us/j/84909645595?pwd=b1M4QnNKVzNMdmNSVFNaZUJmR1kvUT09
(Meeting ID: 849 0964 5595, Passcode: 772448)
(the link will stay the same for all upcoming meetings)
Please find the talk abstract below.
Feel free to forward the talk announcement to whoever might be interested!
Hope to see you all there!
Best regards,
Daniel
(On behalf of the Organization Committee)
============
Dutch Seminar on Optimization
https://event.cwi.nl/dutch-optimization-seminar
Speaker: Stefanie Jegelka (MIT)
Title: Machine Learning for discrete optimization: Graph Neural
Networks, generalization under shifts, and loss functions
Date: Thursday 19 October, 4pm CET
Abstract:
Graph Neural Networks (GNNs) have become a popular tool for learning
algorithmic tasks, in particular related to combinatorial optimization.
In this talk, we will focus on the “algorithmic reasoning” task of
learning a full algorithm. Instead of competing on empirical benchmarks,
we will aim to get a better understanding of the model's behavior and
generalization properties, i.e., the performance on hold-out data, which
is an important question in learning-supported optimization too. In
particular, we will try to understand out-of-distribution generalization
in widely used message-passing GNNs, with an eye on applications in
learning for optimization: what may be an appropriate metric for
measuring shift in the data? Under what conditions will a GNN generalize
to larger graphs? In the last part of the talk, we will take a brief
look at objective (loss) functions for learning with discrete objects,
beyond GNNs.
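For readers less familiar with the model class, here is a minimal
illustrative sketch in Python/NumPy of the message-passing scheme the
abstract refers to (generic background, not code from the talk; all
names are invented): in each layer, every node aggregates its
neighbors' feature vectors and combines them with its own.

import numpy as np

def message_passing_layer(adj, h, W_self, W_neigh):
    # adj     : (n, n) 0/1 adjacency matrix of the input graph
    # h       : (n, d) current node feature vectors
    # W_self  : (d, d) weights applied to a node's own features
    # W_neigh : (d, d) weights applied to aggregated neighbor messages
    messages = adj @ h  # sum-aggregate each node's neighbor features
    return np.maximum(0.0, h @ W_self + messages @ W_neigh)  # ReLU update

# Toy usage: two rounds of message passing on a 4-node path graph.
rng = np.random.default_rng(0)
n, d = 4, 8
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
h = rng.standard_normal((n, d))
W_self = rng.standard_normal((d, d)) / np.sqrt(d)
W_neigh = rng.standard_normal((d, d)) / np.sqrt(d)
for _ in range(2):
    h = message_passing_layer(adj, h, W_self, W_neigh)
print(h.shape)  # (4, 8): one updated feature vector per node

One intuition for why the size-generalization question above is subtle:
with sum aggregation as in this sketch, the magnitude of the aggregated
messages grows with node degree, so larger or denser test graphs can
drive the network into activation ranges never seen during training.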
This talk is based on joint work with Ching-Yao Chuang, Keyulu Xu,
Joshua Robinson, Nikos Karalias, Jingling Li, Mozhi Zhang, Simon S. Du,
Ken-ichi Kawarabayashi and Andreas Loukas.
_______________________________________________
dutch-optimization-seminar mailing list
dutch-optimization-seminar(a)cwi.nl
https://lists.cwi.nl/mailman/listinfo/dutch-optimization-seminar
Dear colleagues,
We are starting a new monthly reinforcement learning seminar series hosted by the Belgium-Netherlands Reinforcement Learning (BeNeRL) research community: https://www.benerl.org/seminar-series. The seminars are scheduled for the second Thursday of every month, 16:00-17:00 (CET). All seminars are held online to make them easily accessible, and they are of course open to researchers outside the region.
Our aim for the seminar is to invite RL researchers (mostly advanced PhD candidates or early postdocs) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments and their style/approach for getting these to work, since this practical knowledge is often not shared in talks and papers.
First seminar (Oct 12)
Speaker: Benjamin Eysenbach (https://ben-eysenbach.github.io/), assistant professor at Princeton.
Title: Connections between Reinforcement Learning and Representation Learning
Date: October 12, 16.00-17.00 (CET)
Please find full details about the talk below this email and on the website of the seminar series: https://www.benerl.org/seminar-series.
We would be very glad if you forwarded this invitation within your group and to other colleagues who might be interested (also outside the BeNeRL region). Hope to see you on October 12!
Kind regards,
Zhao Yang & Thomas Moerland
Leiden University
-------------------------------------------
Upcoming talk:
Date: October 12, 16.00-17.00 (CET)
Speaker: Benjamin Eysenbach (https://ben-eysenbach.github.io/)
Title: Connections between Reinforcement Learning and Representation Learning
Zoom: https://universiteitleiden.zoom.us/j/65411016557?pwd=MzlqcVhzVzUyZlJKTEE0Nk…
Abstract: In reinforcement learning (RL), it is easier to solve a task if given a good representation. Deep RL promises to simultaneously solve an RL problem and a representation learning problem; it promises simpler methods with fewer objective functions and fewer hyperparameters. However, prior work often finds that these end-to-end approaches tend to be unstable, and instead addresses the representation learning problem with additional machinery (e.g., auxiliary losses, data augmentation). How can we design RL algorithms that directly acquire good representations?
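As a concrete illustration of the "additional machinery" mentioned in the abstract, here is a minimal sketch in Python/PyTorch (not Eysenbach's method; the architecture, the next-observation-prediction head, and the aux_weight hyperparameter are all invented for illustration) of a Q-learning loss augmented with an auxiliary representation-learning loss:

import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, REP_DIM, N_ACTIONS = 8, 16, 4

# Shared encoder: maps observations to the learned representation.
encoder = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(),
                        nn.Linear(32, REP_DIM))
q_head = nn.Linear(REP_DIM, N_ACTIONS)   # Q-values, one per discrete action
predictor = nn.Linear(REP_DIM, OBS_DIM)  # auxiliary head: predict next obs

def combined_loss(obs, action, reward, next_obs, gamma=0.99, aux_weight=0.1):
    z = encoder(obs)
    # TD loss: the actual RL objective.
    q = q_head(z).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # bootstrapped target, no gradient
        target = reward + gamma * q_head(encoder(next_obs)).max(dim=1).values
    td_loss = F.mse_loss(q, target)
    # Auxiliary loss: extra pressure on the shared representation.
    aux_loss = F.mse_loss(predictor(z), next_obs)
    return td_loss + aux_weight * aux_loss

# Toy usage on a random batch of transitions.
obs = torch.randn(32, OBS_DIM)
action = torch.randint(0, N_ACTIONS, (32,))
reward = torch.randn(32)
next_obs = torch.randn(32, OBS_DIM)
print(combined_loss(obs, action, reward, next_obs))

Note that the auxiliary term brings an extra objective and an extra hyperparameter (aux_weight) to tune, which is precisely the kind of added machinery that end-to-end deep RL promised to avoid, and which motivates the question at the end of the abstract.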
Bio: Benjamin Eysenbach is an Assistant Professor of Computer Science at Princeton University. His research aims to develop principled reinforcement learning (RL) algorithms that obtain state-of-the-art performance with a higher degree of simplicity, scalability, and robustness than current methods. Much of his work uses ideas from probabilistic inference to make progress on important problems in RL (e.g., long-horizon and high-dimensional reasoning, robustness, exploration). Benjamin did his PhD in machine learning at CMU, advised by Ruslan Salakhutdinov and Sergey Levine, and was supported by the NSF GRFP and the Hertz Fellowship.