Dear Colleagues,
The YES workshop "Optimal Transport, Statistics, Machine Learning and moving in between" will take place 5-9 September 2022 at Eurandom, Eindhoven. This workshop is part of the series of Young European Statistician workshops, but has a broader scope than usual. We are very happy to feature tutorial talks by world experts Marco Cuturi (CREST-ENSAE, Apple ML Research), Jonathan Niles-Weed (NYU) and Yoav Zemel (University of Cambridge). In addition, there will be talks by several invited speakers. Young researchers will be given the opportunity to present their work in the format of contributed talks or posters (see the call for abstracts on the website below).
For more information and registration, please visit the conference website:
https://www.eurandom.tue.nl/event/workshop-yes-optimal-transport-statistics…
Best wishes,
The organizers (Rui Castro, Augusto Gerolin, Johannes Schmidt-Hieber, Oliver Tse)
Dear all,
Gentle reminder: the talk by Julia Olkhovskaya in the thematic seminar
is today!
*Julia Olkhovskaya* (Vrije Universiteit,
https://sites.google.com/view/julia-olkhovskaya/home)
*Friday June 10*, 16h00-17h00
Online on Zoom: https://uva-live.zoom.us/j/89796690874
Meeting ID: 897 9669 0874
*Lifting the Information Ratio: An Information-Theoretic Analysis of
Thompson Sampling for Contextual Bandits*
We study the Bayesian regret of the renowned Thompson Sampling algorithm
in contextual bandits with binary losses and adversarially selected
contexts. We adapt the information-theoretic perspective of Russo and
Van Roy [2016] to the contextual setting by introducing a new concept of
information ratio based on the mutual information between the unknown
model parameter and the observed loss. This allows us to bound the
regret in terms of the entropy of the prior distribution through a
remarkably simple proof, and with no structural assumptions on the
likelihood or the prior. We also extend our results to priors with
infinite entropy under a Lipschitz assumption on the log-likelihood. An
interesting special case is that of logistic bandits with d-dimensional
parameters, K actions, and Lipschitz logits.
This is joint work with Gergely Neu, Matteo Papini and Ludovic Schwartz.
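As background for readers new to the algorithm: Thompson Sampling maintains a posterior over the unknown parameters and, in each round, plays the action that is best under a single posterior sample. Below is a minimal, purely illustrative sketch for the classical non-contextual Bernoulli bandit with Beta priors (not the contextual setting of the talk, and not the speakers' code).

```python
# Illustrative Thompson Sampling for a Bernoulli multi-armed bandit
# (classical non-contextual special case, Beta(1, 1) priors).
import random

def thompson_sampling(true_means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(true_means)
    successes = [1] * k  # Beta posterior parameters per arm
    failures = [1] * k
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample per arm from its Beta posterior and act greedily on it.
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward
```

With two arms of means 0.2 and 0.8, the posterior concentrates on the better arm and the average per-round reward should approach 0.8.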
Seminar organizers:
Tim van Erven
Botond Szabo
https://mschauer.github.io/StructuresSeminar/
--
Tim van Erven <tim(a)timvanerven.nl>
www.timvanerven.nl
Dear all,
We are excited to announce that the 38th Conference on Uncertainty in
Artificial Intelligence (UAI 2022; https://www.auai.org/uai2022) will be
held in a hybrid format in Eindhoven, The Netherlands, on August 1-5, 2022.
We will have tutorials on August 1, the main conference on August 2-4, and
workshops on August 5. The main conference is single-track, with 36 papers
accepted for oral presentation and 194 for poster presentation. Below we
would like to share some updates:
1. Registration is open at https://www.auai.org/uai2022/registration. The
early bird deadline is June 21st.
2. Students are encouraged to apply for student scholarships. The
deadline for this is June 15. For more information, please see
https://www.auai.org/uai2022/student_scholarships.
3. We are delighted to have Danilo J. Rezende, Eric P. Xing, Finale
Doshi-Velez, Mihaela van der Schaar, Peter Spirtes, and Zeynep Akata as
keynote speakers.
4. The UAI 2022 competition (
https://www.auai.org/uai2022/uai2022_competition) will be starting soon.
We hope to see many of you at UAI this year, either in-person or online!
Best regards,
James Cussens & Kun Zhang
UAI 2022 Program Chairs
and
Cassio de Campos & Marloes Maathuis
UAI 2022 General Chairs
Dear colleagues,
*Best Paper Award* in Special Issue "Women in Robotics" (500CHF)
https://www.mdpi.com/journal/robotics/awards/1658
Deadline: 31 August 2022
You are welcome to submit articles with at least one female co-author to
the Special Issue "Women in Robotics":
https://www.mdpi.com/journal/robotics/special_issues/Women_Robotics
Guest editors: Prof. Dr. Naira Hovakimyan, Prof. Dr. Anna Esposito and
Prof. Dr. Sanaz Mostaghim
Deadline for manuscript submissions: 15 June 2022
Each award nominee will be assessed on her paper's originality, quality,
and contribution to the field by the Evaluation Committee. The winner
will receive a certificate, an award of 500 CHF, and an opportunity to
publish her next submission in Robotics free of charge.
If you are interested in this project, please feel free to contact us
via email charlene.dong(a)mdpi.com or robotics(a)mdpi.com within two weeks.
Thank you for your support; we look forward to hearing from you.
Best regards,
Ms. Charlene Dong
Managing Editor
Email: charlene.dong(a)mdpi.com
Robotics (http://www.mdpi.com/journal/robotics/)
Dear colleagues,
You are cordially invited to attend the next lecture in the KdVI General Mathematics Colloquium on Friday, April 29 at 16.00.
Max Welling (UvA) will speak about "How GNNs and Symmetries can help to solve PDEs" (see abstract below).
The lecture will be given in room C1.110 (Science Park 904) and can also be joined via Zoom: https://uva-live.zoom.us/j/84560653677
If you are interested in receiving the announcements for future talks at the KdVI General Mathematics Colloquium, please subscribe to our mailing list https://list.uva.nl/mailman/listinfo/kdvi-math-colloq.
We hope to see you then!
Best wishes,
Eni
(on behalf of the colloquium organizers)
-----------------------------------------------------
Abstract: Deep learning has seen amazing advances over the past years, completely replacing traditional methods in fields such as speech recognition, natural language processing, image and video analysis, and so on. A particularly versatile deep architecture that has gained much traction lately is the graph neural network (GNN), of which transformers represent a special case. GNNs have the desirable property that they can process graph-structured data while respecting permutation symmetry. Recently, GNNs have found new applications in scientific computation, for instance to predict the properties of molecules or to predict the forces that act on atoms when they evolve (e.g. fold). In these applications it is also key that geometric symmetries, such as translation and rotation symmetries, are taken into consideration. Professor Max Welling will report on yet another exciting application: using GNNs to solve partial differential equations (PDEs). It turns out that GNNs are an excellent tool for developing neural PDE integrators. Moreover, PDEs are full of surprising symmetries that can be leveraged to train neural integrators with less data. He will discuss this exciting new chapter in deep learning, and will end with a discussion of whether, conversely, PDEs can also serve as a model for new deep architectures.
Joint work with Johannes Brandstetter and Daniel Worrall.
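To illustrate the permutation symmetry mentioned in the abstract, here is a hypothetical sketch (not the speaker's model) of a single message-passing step in which each node adds the mean of its neighbours' features to its own state. Relabelling the nodes and edges consistently permutes the output rows in exactly the same way, which is the equivariance property GNNs exploit.

```python
# Hypothetical permutation-equivariant message-passing step (illustrative only).
def message_passing_step(h, edges):
    """h: list of feature vectors (one per node); edges: list of (src, dst) pairs.
    Each node receives the mean of its in-neighbours' features as a residual update."""
    n, d = len(h), len(h[0])
    agg = [[0.0] * d for _ in range(n)]
    deg = [0] * n
    for src, dst in edges:
        for j in range(d):
            agg[dst][j] += h[src][j]
        deg[dst] += 1
    return [
        [h[i][j] + (agg[i][j] / deg[i] if deg[i] else 0.0) for j in range(d)]
        for i in range(n)
    ]
```

Because the update only aggregates over each node's neighbourhood, it makes no reference to node ordering, so the layer commutes with any relabelling of the nodes.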
Dear all,
Today, April 22, we have Tor Lattimore from DeepMind speaking in the
thematic seminar.
*Tor Lattimore* (DeepMind, http://tor-lattimore.com)
*Friday April 22*, 16h00-17h00
Online on Zoom: *https://uva-live.zoom.us/j/88233925917*
Meeting ID: 882 3392 5917
*Minimax Regret for Partial Monitoring: Infinite Outcomes and
Rustichini's Regret*
The information ratio developed by Russo and Van Roy (2014) is a
powerful tool that was recently used to derive upper bounds on the
regret for challenging sequential decision-making problems. I will talk
about how a generalised version of this machinery can be used to derive
lower bounds and give an application showing that a version of mirror
descent is minimax optimal for partial monitoring using Rustichini's
definition of regret.
Seminar organizers:
Tim van Erven
Botond Szabo
https://mschauer.github.io/StructuresSeminar/
*Upcoming talks:*
Jun. 10, *Julia Olkhovskaya* <https://sites.google.com/view/julia-olkhovskaya/home>, Vrije Universiteit
--
Tim van Erven <tim(a)timvanerven.nl>
www.timvanerven.nl
(Apologies for cross-posting.)
At Delft University of Technology, we have a vacancy for a
3 year PostDoc on Reinforcement Learning in the Real World
This is in the context of the Mercury Machine Learning Lab, jointly with
the University of Amsterdam and booking.com. We will focus on
fundamental techniques in reinforcement learning, motivated by
real-world problems. Possible directions of interest are:
Bayesian reinforcement learning
Multiagent / concurrent reinforcement learning
Causal reinforcement learning
Full vacancy text can be found here:
https://www.tudelft.nl/over-tu-delft/werken-bij-tu-delft/vacatures/details?…
Please forward to potential candidates, and contact Matthijs Spaan or
myself in case of questions.
--
_______________________________________________
Dr. Frans Oliehoek
Associate Professor
Delft University of Technology
E-mail: f.a.oliehoek(a)tudelft.nl
www.fransoliehoek.net