Forwarding on behalf of Daniel Dadush:
-----Original Message-----
From: Daniel Dadush <D.N.Dadush@cwi.nl>
To: dutch-optimization-seminar <dutch-optimization-seminar@cwi.nl>; neo-seminar-list <neo-seminar-list@cwi.nl>
Cc: stefje <stefje@csail.mit.edu>
Date: Tuesday, 3 October 2023 11:50 AM PDT
Subject: [dutch-optimization-seminar] Stefanie Jegelka, Thursday 19 October, 4pm CET
Dear all,
I am pleased to announce the next international speaker at the Dutch Seminar on Optimization:
Speaker: Stefanie Jegelka (MIT)
Title: Machine Learning for discrete optimization: Graph Neural Networks, generalization under shifts, and loss functions
Date: Thursday 19 October, 4pm CET
The meeting will take place here: https://cwi-nl.zoom.us/j/84909645595?pwd=b1M4QnNKVzNMdmNSVFNaZUJmR1kvUT09 (Meeting ID: 849 0964 5595, Passcode: 772448)
(the link will stay the same for all upcoming meetings)
Please find the talk abstract below. Feel free to forward the talk announcement to whoever might be interested!
Hope to see you all there!
Best regards, Daniel
(On behalf of the Organization Committee)
============
Dutch Seminar on Optimization https://event.cwi.nl/dutch-optimization-seminar
Speaker: Stefanie Jegelka (MIT)
Title: Machine Learning for discrete optimization: Graph Neural Networks, generalization under shifts, and loss functions
Date: Thursday 19 October, 4pm CET
Abstract: Graph Neural Networks (GNNs) have become a popular tool for learning algorithmic tasks, in particular those related to combinatorial optimization. In this talk, we will focus on the “algorithmic reasoning” task of learning a full algorithm. Instead of competing on empirical benchmarks, we aim for a better understanding of the model's behavior and generalization properties, i.e., its performance on held-out data, which is also an important question in learning-supported optimization. In particular, we will try to understand out-of-distribution generalization in widely used message passing GNNs, with an eye on applications in learning for optimization: what is an appropriate metric for measuring shift in the data? Under what conditions will a GNN generalize to larger graphs? In the last part of the talk, we will take a brief look at objective (loss) functions for learning with discrete objects, beyond GNNs. This talk is based on joint work with Ching-Yao Chuang, Keyulu Xu, Joshua Robinson, Nikos Karalias, Jingling Li, Mozhi Zhang, Simon S. Du, Kenichi Kawarabayashi and Andreas Loukas.
_______________________________________________
dutch-optimization-seminar mailing list
dutch-optimization-seminar@cwi.nl
https://lists.cwi.nl/mailman/listinfo/dutch-optimization-seminar