The Conference on Uncertainty in Artificial Intelligence (UAI
<https://auai.org/>) is one of the premier international conferences on
research related to learning and reasoning in the presence of uncertainty.
The conference has been held every year since 1985. The upcoming 39th
edition (UAI 2023 <https://auai.org/uai2023/>) will be an in-person
conference with virtual elements, taking place in Pittsburgh, Pennsylvania,
USA, from 31 July to 4 August 2023.
We invite papers that describe novel theory, methodology and applications
related to artificial intelligence, machine learning and statistics. Papers
will be assessed in a rigorous double-blind peer-review process, based on
the criteria of technical correctness, novelty, clarity of writing, and
potential impact. Authors are strongly encouraged to make code and data
available.
All accepted papers will be presented in poster sessions and spotlight
presentations (physically or remotely). Selected papers will have longer
presentations and an assigned discussant to foster debate. All accepted
papers will be published in a volume of Proceedings of Machine Learning
Research (PMLR) <http://proceedings.mlr.press/>.
Deadlines and other relevant dates can be found under important dates
<https://auai.org/uai2023/dates>.
Important dates for authors:
- 17 February 2023 (23:59 Anywhere on Earth, AoE): Paper submission
deadline
- 11-20 April 2023: Author response and discussion period
- 8 May 2023: Author notification
Papers should be submitted on OpenReview at
https://openreview.net/group?id=auai.org/UAI/2023/Conference. Please
see Submission
Instructions <https://auai.org/uai2023/submission_instructions> for more
details on how your manuscript should be formatted.
We are looking forward to building an exciting program and we aim to make
the most of the advantages that a hybrid conference can create. If you have
any particular positive or negative experiences that you would like to
share with us, please do not hesitate to email us.
If you are interested in giving a tutorial or organising a workshop at UAI
2023, please contact the tutorial chairs (uai2023chairs+tutorials@gmail.com)
or workshop chairs (uai2023chairs+workshop@gmail.com) by 10 March 2023.
Relevant dates:
- 31 July: Tutorials
- 1-3 August: Main conference
- 4 August: Workshops
Robin Evans and Ilya Shpitser
UAI 2023 Program Chairs
uai2023programchairs@gmail.com
Dear all,
We are writing to you to promote the third "*Conference of Young Applied
Mathematicians*", which will be held between the 18th and the 22nd of
September 2023 in Siena (SI), Italy.
The conference aims to gather young researchers (PhD students and postdocs)
working in the fields of Numerical Analysis, Numerical Modelling,
Statistics, and Machine Learning.
Each conference day will open with a plenary talk given by a senior
lecturer. The rest of the day is dedicated to the participants, who are
strongly encouraged to present their research projects or activities. To
allow everyone to speak, two parallel sessions will be organized: one
devoted to Numerical Analysis and Numerical Modelling and one devoted to
Machine Learning and Statistics.
For further details, please visit the website:
http://www.yamc.it/
If you are interested in taking part, please use the application form
(deadline: 13 May):
https://forms.gle/pC1dhvyHo6EG1Kir5
Please keep in mind that we have limited spots. It will still be possible
to apply after the early registration closes (with an increased fee) until
all the spots are filled.
We kindly ask you to help us publicize the event.
For any questions, please contact us at info@yamc.it
Thank you.
Best regards,
The organizers
G. Auricchio, G. A. D'Inverno, C. Graziani, V. Lachi, F. Locatelli, G.
Loli, L. Zambon
I have an opening for a fully funded 4-year PhD position in mathematical statistics. The aim of the project is to develop theory for hypothesis testing with e-values, in particular for multiple testing, while it is also possible to work on related topics such as Bayesian learning and bandits. For more information, see:
https://workingat.vu.nl/ad/phd-position-in-mathematical-statistics/dx6kg8
Please help me spread the word!
The deadline is April 1st.
Best regards,
Rianne de Heide (r.de.heide@vu.nl)
Dear young researchers,
Centrum Wiskunde & Informatica (CWI) kindly invites you to a Boot Camp
on Machine Learning Theory on February 14 and 15. This event is
primarily targeted at PhD students working on theoretical aspects of
Machine Learning. It includes 8 tutorials by researchers, two keynote
lectures, a poster session and a group dinner. All in all, the Boot Camp
will provide a whirlwind tour of Machine Learning Theory in a friendly
atmosphere with plenty of interaction.
If you would like to join, please register here.
<https://www.cwi.nl/en/events/cwi-research-semester-programs/machine-learnin…>
The tutorials are on these subjects:
- Statistical Theory for Deep Learning — Johannes Schmidt-Hieber
- Reinforcement Learning — Frans Oliehoek
- Time Series — Jaron Sanders
- Explainability — Tim van Erven
- Statistical Testing in High Dimensions — Rui Castro
- Equivariant/Geometric Deep Learning — Gabriele Cesa
- Quantum Learning Theory — Ronald de Wolf
- Learning and Games — Matthias Staudigl
And the two keynote lectures, which are open to all:
- Foundations of Machine Learning Systems — Bob Williamson, University of Tübingen
- A Tale of Two Non-parametric Bandit Problems — Emilie Kaufmann, CNRS, Univ. Lille
For more details, see the website
<https://www.cwi.nl/~wmkoolen/MLT_Sem23/bootcamp.html>.
The event takes place in the CWI building, Science Park 123, Amsterdam.
The Boot Camp is part of the Machine Learning Theory Semester Programme
<https://www.cwi.nl/~wmkoolen/MLT_Sem23/index.html>, which runs in
Spring 2023.
Best regards on behalf of CWI from the program committee,
Wouter Koolen
Dear colleagues,
Centrum Wiskunde & Informatica (CWI) kindly invites you to a lecture
afternoon focused on the theory of machine learning, with two distinguished
speakers who will illuminate philosophical and technical aspects of
machine learning.
The program on 14 February:
14.00 Bob Williamson <https://fm.ls/> (University of Tübingen):
/Foundations of Machine Learning Systems/
15.00 Break
15.30 Emilie Kaufmann <https://emiliekaufmann.github.io/> (CNRS, Univ.
Lille): /A Tale of Two Non-parametric Bandit Problems/
16.30 Reception
For further details, please check the abstracts below and the website
<https://www.cwi.nl/en/events/cwi-research-semester-programs/launch-lecture-…>.
These research-level lectures are intended for a non-specialist
audience, and are freely accessible. Please register here
<https://www.cwi.nl/en/events/cwi-research-semester-programs/launch-lecture-…>.
The event takes place in the Amsterdam Science Park Congress Centre.
This "Launch Lecture", kicks off the Machine Learning Theory Semester
Programme <https://www.cwi.nl/~wmkoolen/MLT_Sem23/index.html>, which
runs in Spring 2023. Young researchers (PhD students, ....) in Machine
Learning Theory may also be interested in the encompassing Boot Camp
<https://www.cwi.nl/~wmkoolen/MLT_Sem23/bootcamp.html> on 14 and 15
February.
Best regards on behalf of CWI from the program committee,
Wouter Koolen
Bob Williamson
/Foundations of Machine Learning Systems/
*Abstract:* I will present some new insights into foundational
assumptions about machine learning systems, including why we might want
to replace the expectation in our definition of generalisation error,
why independence is intrinsically relative and how it is intimately
related to fairness, why the data we ingest might not even have a
probability distribution, and what one might do in such cases, and how
we have been (perhaps unwittingly) working with these more exotic
notions for some time already.
Emilie Kaufmann
/A Tale of Two Non-parametric Bandit Problems/
*Abstract:* In a bandit model an agent sequentially collects samples
(rewards) from different distributions, called arms, in order to achieve
some objective related to learning or playing the best arms. Depending
on the application, different assumptions can be made on these
distributions, from Bernoulli (e.g., to model success/failure of a
treatment) to complex multi-modal distributions (e.g. to model the yield
of a crop in agriculture). In this talk, we will present non-parametric
algorithms which adapt optimally to the actual distributions of the
arms, assuming that they are bounded. We will first show the robustness
of a Non-Parametric Thompson Sampling strategy to a risk-averse
performance metric. Then, we will discuss how the algorithm can be
modified to tackle pure exploration objectives, bringing new insights on
so-called Top Two algorithms.
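As a toy illustration of the bounded-reward setting described in the abstract, here is a minimal Python sketch of a non-parametric Thompson sampling index in the spirit of the strategy mentioned above: each arm's observed rewards, padded with the known upper bound, are re-weighted with Dirichlet weights, and the arm with the largest re-weighted mean is played. The Bernoulli bandit instance, horizon, and seed are hypothetical choices for the example, and the actual algorithm discussed in the talk may differ in its details.

import numpy as np

rng = np.random.default_rng(0)

def npts_index(rewards, upper_bound=1.0):
    # Dirichlet re-weighting of one arm's observed rewards, padded with
    # the known upper bound as an optimistic extra observation.
    obs = np.append(rewards, upper_bound)
    weights = rng.dirichlet(np.ones(len(obs)))
    return float(weights @ obs)

def run_bandit(arm_means, horizon=2000):
    # Play a Bernoulli bandit (a hypothetical instance, for illustration only).
    histories = [[] for _ in arm_means]
    total_reward = 0.0
    for _ in range(horizon):
        arm = int(np.argmax([npts_index(h) for h in histories]))
        reward = float(rng.random() < arm_means[arm])  # bounded reward in [0, 1]
        histories[arm].append(reward)
        total_reward += reward
    return total_reward

print(run_bandit([0.3, 0.5, 0.6]))  # most pulls should go to the 0.6 arm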
Vacancy: Postdoctoral Researcher in Causal Inference (University of Amsterdam)
==============================================================================
We are looking for an enthusiastic postdoc who enjoys working on statistical
problems in causal inference and domain adaptation.
We have a vacancy in the recently established Mercury Machine Learning Lab
(MMLL). In this lab, researchers from the University of Amsterdam (UvA) and
Delft University of Technology (TU Delft) will be working together with data
scientists from Booking.com to develop the statistical and machine learning
foundations for a new generation of recommendation systems. Motivated by
real-world problems faced in industry that involve domain adaptation and
optimization, we will investigate fundamental scientific problems regarding
generalization and bias removal from a causal perspective.
The successful candidate will be based at the Korteweg-de Vries Institute for
Mathematics of the University of Amsterdam, the Netherlands, under the
supervision of prof. dr. Joris Mooij.
Application closing date: January 31, 2023
Preferred starting date: ASAP
Duration: 3-4 years
For further information, including how to apply, see the official job advertisement at:
https://vacatures.uva.nl/UvA/job/Postdoctoral-Researcher-in-Causal-Inferenc…
-------------------------------------------------------------
Joris Mooij
Professor in Mathematical Statistics
University of Amsterdam
http://www.jorismooij.nl/
Forwarding on behalf of Frank van der Meulen:
-------- Forwarded Message --------
Subject: Mailing list
Date: Fri, 13 Jan 2023 11:52:57 +0000
From: Meulen, F.H. van der (Frank) <f.h.van.der.meulen@vu.nl>
To: Tim van Erven <tim(a)timvanerven.nl>
———
The *Department of Mathematics of Vrije Universiteit Amsterdam* welcomes
applications for a *three-year Postdoctoral position in Statistics* with
emphasis on statistical inference for stochastic processes, graphical
models, computational statistics and Bayesian computation. Good
programming skills and the ability to connect to existing research
strengths in the department are assets.
The preferred starting date is 1 September 2023 or earlier.
Full information on the position can be found at
https://workingat.vu.nl/ad/postdoctoral-position-in-statistics/riljhf
<https://urldefense.com/v3/__https://workingat.vu.nl/ad/postdoctoral-positio…>
We have an open postdoc position to come work with us on machine learning models for heterogeneous effect estimation, with applications in cognitive health, at Radboud University in Nijmegen, The Netherlands. If you are interested in the intersection of machine learning and causal inference, please consider applying (deadline: 8 January 2023). More details and a link to the application form: https://www.ru.nl/en/working-at/job-opportunities/postdoctoral-researcher-f…
-------- Forwarded Message --------
Subject: Posting a vacancy
Date: Mon, 19 Dec 2022 10:11:33 +0000
From: Wiel, M.A. van de (Mark) <mark.vdwiel@amsterdamumc.nl>
To: machine-learning-nederland-owner@list.uva.nl
Hi, can I post this vacancy on the ml_ned mailing list?
Cheers, Mark van de Wiel
Postdoc vacancy at the department of epidemiology and data science,
Amsterdam UMC
Interested in improving tree-based learners in a medical context? Please
check out this vacancy:
https://werkenbij.amsterdamumc.org/en/vacatures/research/postdoc-position-o…
Dear all,
Heads up: Umut Şimşekli's in-person talk at the UvA is today:
*Umut Şimşekli* (INRIA/École Normale Supérieure,
https://www.di.ens.fr/~simsekli/)
*Monday November 14*, 16h00-17h00
In person, at the University of Amsterdam
Location: Science Park 904, Room A1.24
*Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms*
Understanding generalization in deep learning has been one of the major
challenges in statistical learning theory over the last decade. While
recent work has illustrated that the dataset and the training algorithm
must be taken into account in order to obtain meaningful generalization
bounds, it is still theoretically not clear which properties of the data
and the algorithm determine the generalization performance. In this
talk, I will approach this problem from a dynamical systems theory
perspective and represent stochastic optimization algorithms as random
iterated function systems (IFS). Well studied in the dynamical systems
literature, under mild assumptions, such IFSs can be shown to be ergodic
with an invariant measure that is often supported on sets with a fractal
structure. We will prove that the generalization error of a stochastic
optimization algorithm can be bounded based on the ‘complexity’ of the
fractal structure that underlies its invariant measure. Leveraging
results from dynamical systems theory, we will show that the
generalization error can be explicitly linked to the choice of the
algorithm (e.g., stochastic gradient descent – SGD), algorithm
hyperparameters (e.g., step-size, batch-size), and the geometry of the
problem (e.g., Hessian of the loss). We will further specialize our
results to specific problems (e.g., linear/logistic regression, one
hidden-layered neural networks) and algorithms (e.g., SGD and
preconditioned variants), and obtain analytical estimates for our bound.
For modern neural networks, we will develop an efficient algorithm to
compute the developed bound and support our theory with various
experiments on neural networks.
The talk is based on the following publication:
Camuto, A., Deligiannidis, G., Erdogdu, M. A., Gurbuzbalaban, M.,
Simsekli, U., & Zhu, L. (2021). Fractal structure and generalization
properties of stochastic optimization algorithms. Advances in Neural
Information Processing Systems, 34, 18774-18788.
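To make the random iterated function system (IFS) view from the abstract concrete: for least-squares linear regression, each minibatch SGD update is an affine map of the parameter vector, and which map is applied at each step is chosen at random by the minibatch draw. The Python sketch below illustrates only this viewpoint; the synthetic data, step size, and batch size are hypothetical values and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic regression data, for illustration only.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

eta, batch_size = 0.05, 10

def sgd_step(w, idx):
    # One minibatch SGD step on squared loss. For this loss the update
    #   w -> (I - eta * Xb.T @ Xb / b) @ w + eta * Xb.T @ yb / b
    # is affine in w, so running SGD composes randomly chosen affine maps:
    # a random iterated function system.
    Xb, yb = X[idx], y[idx]
    grad = Xb.T @ (Xb @ w - yb) / len(idx)
    return w - eta * grad

w = np.zeros(d)
for _ in range(1000):
    idx = rng.choice(n, size=batch_size, replace=False)  # random choice of map
    w = sgd_step(w, idx)

print(np.linalg.norm(w - w_true))  # iterates concentrate near the least-squares solution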
Seminar organizers:
Tim van Erven
Botond Szabo
https://mschauer.github.io/StructuresSeminar/
--
Tim van Erven <tim@timvanerven.nl>
www.timvanerven.nl