Dear Colleagues,
There are two open PhD positions at Eindhoven University of Technology, part of the recently funded AICrowd project: AI-Based Pedestrian Crowd Modelling and Management. I would appreciate it if you could forward this announcement to any potential candidates.
Are you eager to work on a pioneering PhD project at the interface between the physics of flowing matter, artificial intelligence, system identification, and statistics? Do you enjoy collaborating with researchers from different fields, and combining dynamical systems modelling, optimization, and statistical learning theory? Would you like to see your work make an immediate societal impact? Then one of these positions might be for you.
AICrowd project - AI-Based Pedestrian Crowd Modelling and Management vacancies:
- AI & System identification for crowd flow modeling: https://jobs.tue.nl/en/vacancy/phd-in-artificial-intelligence-for-quantitat…
- Statistics and AI for quantitative crowd dynamics modeling: https://jobs.tue.nl/en/vacancy/phd-in-statistics-and-ai-for-quantitative-cr…
With best regards,
Rui Castro
—
Dept. of Mathematics and Computer Science, TU/e
http://www.win.tue.nl/~rmcastro | +31 40 247 2499
Dear researchers,
Centrum Wiskunde & Informatica (CWI) kindly invites you to the fifth
Seminar++ meeting on Machine Learning Theory, taking place on Wednesday
May 10 from 15:00 - 17:00. These Seminar++ meetings consist of a
one-hour lecture building up to an open problem, followed by an hour of
brainstorming time. The meeting is intended for interested researchers
including PhD students. These meetings are freely accessible without
registration. Cookies, coffee and tea will be provided in the half-time
break.
The meeting of 10 May will be hosted by:
Rianne de Heide <https://riannedeheide.github.io/> (Assistant Professor
at the Vrije Universiteit Amsterdam <https://vu.nl/>)
*Multiple testing with e-values: overview and open problems*
Abstract: Researchers in genomics and neuroimaging often perform
hundreds of thousands of hypothesis tests simultaneously. The scale of
these multiple hypothesis testing problems is enormous, and with extreme
dimensionality comes extreme risk for false positives. The field of
multiple testing addresses this problem in various ways. Recently, the
new theory of hypothesis testing with e-values has been brought to the
field of multiple testing. In this talk I will give an overview of the
most important frameworks in multiple testing and recent developments in
multiple testing with e-values. Finally, I will open the discussion for
open problems in this area, focusing on FDP estimation and confidence
with e-values. This will create a framework for fully interactive
multiple testing.
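As one concrete illustration of e-value-based multiple testing (not necessarily the exact procedures covered in the talk), the e-BH procedure of Wang and Ramdas rejects the hypotheses with the k* largest e-values, where k* is the largest k whose k-th largest e-value is at least m/(k·alpha); this controls the FDR at level alpha under arbitrary dependence between the e-values. A minimal sketch:

```python
import numpy as np

def e_bh(e_values, alpha=0.05):
    """e-BH procedure: reject the hypotheses with the k* largest e-values,
    where k* = max{k : k-th largest e-value >= m / (k * alpha)}.
    Controls FDR at level alpha under arbitrary dependence."""
    e = np.asarray(e_values, dtype=float)
    m = len(e)
    order = np.argsort(-e)             # indices sorted by decreasing e-value
    sorted_e = e[order]
    thresholds = m / (alpha * np.arange(1, m + 1))
    passing = np.nonzero(sorted_e >= thresholds)[0]
    if len(passing) == 0:
        return np.array([], dtype=int)
    k_star = passing.max() + 1         # largest k satisfying the condition
    return np.sort(order[:k_star])     # indices of rejected hypotheses

# Example: two strong signals among mostly-null e-values.
rejections = e_bh([50.0, 1.2, 0.8, 0.5, 200.0], alpha=0.1)
```

Here `e_bh` and the toy e-values are purely illustrative names and numbers, not from the talk.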
The event takes place in room L016 in the CWI building, Science Park
123, Amsterdam.
The Seminar++ Meetings are part of the Machine Learning Theory Semester
Programme
<https://www.cwi.nl/en/events/cwi-research-semester-programs/research-progra…>,
which runs in Spring 2023.
Best regards on behalf of CWI from the program committee,
Wouter Koolen
Dear researchers,
Centrum Wiskunde & Informatica (CWI) kindly invites you to the fourth
Seminar++ meeting on Machine Learning Theory, taking place on Wednesday
April 19 from 15:00 - 17:00. These Seminar++ meetings consist of a
one-hour lecture building up to an open problem, followed by an hour of
brainstorming time. The meeting is intended for interested researchers
including PhD students. These meetings are freely accessible without
registration. Cookies, coffee and tea will be provided in the half-time
break.
The meeting of 19 April will be hosted by:
Dirk van der Hoeven <http://dirkvanderhoeven.com/about> (Postdoc at the
University of Amsterdam <https://www.uva.nl/>)
*High-Probability Risk Bounds via Sequential Predictors*
*Abstract:* Online learning methods yield sequential regret bounds under
minimal assumptions and provide in-expectation results in statistical
learning. However, despite the seeming advantage of online guarantees
over their statistical counterparts, recent findings indicate that in
many important cases, regret bounds may not guarantee high probability
risk bounds. In this work we show that online to batch conversions
applied to general online learning algorithms are more powerful than
previously thought. Via a new second-order correction to the loss
function, we obtain sharp high-probability risk bounds for many
classical statistical problems, such as model selection aggregation,
linear regression, logistic regression, and—more generally—conditional
density estimation. Our analysis relies on the fact that many online
learning algorithms are improper, as they are not restricted to use
predictors from a given reference class. The improper nature enables
significant improvements in the dependencies on various problem
parameters. In the context of statistical aggregation of finite
families, we provide a simple estimator achieving the optimal rate of
aggregation in the model selection aggregation setup with general
exp-concave losses.
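For context, the classic online-to-batch conversion (without the second-order loss correction introduced in this work) runs an online learner once over the sample and outputs the average of its iterates; the averaged predictor's expected excess risk is then bounded by the regret divided by T. A minimal sketch with online gradient descent on squared loss, where all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ogd_online_to_batch(X, y, lr=0.05):
    """Classic online-to-batch conversion: run online gradient descent over
    the sample once (predict, then update), then return the average of the
    iterates. Expected excess risk of the average <= regret / T."""
    T, d = X.shape
    w = np.zeros(d)
    iterates = []
    for t in range(T):
        iterates.append(w.copy())            # predictor used at round t
        grad = (w @ X[t] - y[t]) * X[t]      # gradient of 0.5*(w.x - y)^2
        w = w - lr * grad
    return np.mean(iterates, axis=0)         # averaged predictor

# Toy linear-regression data: y = <w*, x> + noise.
w_star = np.array([1.0, -2.0])
X = rng.normal(size=(500, 2))
y = X @ w_star + 0.1 * rng.normal(size=500)
w_hat = ogd_online_to_batch(X, y)
```

The averaged predictor recovers the true weights up to the averaging bias from the early iterates and the observation noise.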
*First open problem:* This open problem will be the main focus of the
seminar. The result for logistic regression is nearly optimal and the
algorithm is computationally efficient in the sense that the runtime is
polynomial in the relevant problem parameters. However, it is a
polynomial with a very high degree, making the algorithm practically not
very useful for reasonable problem parameters. For in-expectation
guarantees it is known how to reduce the runtime to /d^2 T/ at the cost
of a slightly worse excess risk bound, where /d/ is the dimension of the
problem and /T/ is the number of datapoints. Unfortunately, it is not
immediately clear how to use the ideas from the faster algorithm to
obtain a high-probability excess risk bound with a /d^2 T/-runtime
algorithm. The open problem asks: can we obtain a reasonable
high-probability excess risk bound in /d^2 T/ runtime for logistic
regression?
*Second open problem:* Our results rely heavily on a particular
inequality for exp-concave losses. I would like to extend our ideas to a
different class of loss functions, namely self-concordant losses.
Given previous results in statistical learning literature (see
https://arxiv.org/pdf/2105.08866.pdf), I expect this to be possible.
The event takes place in room L016 in the CWI building, Science Park
123, Amsterdam.
The Seminar++ Meetings are part of the Machine Learning Theory Semester
Programme <https://www.cwi.nl/~wmkoolen/MLT_Sem23/index.html>, which
runs in Spring 2023.
Best regards on behalf of CWI from the program committee,
Wouter Koolen
Dear colleagues,
Centrum Wiskunde & Informatica (CWI) kindly invites you to a lecture
afternoon focused on the theory of machine learning, with two
distinguished speakers who will address philosophical and technical
challenges in learning to model and explore unknown environments.
The program on 12 April:
14.00 Jonas Peters <http://web.math.ku.dk/~peters/> (University of
Copenhagen): /Recent Advances in Causal Inference/
15.00 Break
15.30 Tor Lattimore <https://tor-lattimore.com/> (DeepMind): /The Curse
and Blessing of Curvature for Zeroth-order Convex Optimisation/
16.30 Reception
The first talk will be delivered remotely, the second live. For further
details, please check the abstracts below and the website
<https://www.cwi.nl/en/events/cwi-research-semester-programs/mid-semester-le…>.
These research-level lectures are intended for a non-specialist
audience, and are freely accessible.
The event takes place in the Amsterdam Science Park Congress Centre.
This "Mid Semester Lecture" marks the halfway point of the Machine
Learning Theory Semester Programme
<https://www.cwi.nl/en/events/cwi-research-semester-programs/research-progra…>,
which runs in Spring 2023. Researchers in Machine Learning Theory may
also be interested in the ongoing Seminar++ meetings
<https://www.cwi.nl/en/events/cwi-research-semester-programs/seminar-part-4-…>
every other Wednesday.
Best regards on behalf of CWI from the program committee,
Wouter Koolen
Tor Lattimore
/The Curse and Blessing of Curvature for Zeroth-order Convex Optimisation/
*Abstract:* Zeroth-order convex optimisation is still quite poorly
understood. I will tell a story about how to use gradient-based methods
without access to gradients and explain how curvature of the loss
function plays the roles of both devil and angel. The main result is a
near-optimal sample-complexity analysis of a simple and computationally
efficient second-order method applied to a quadratic surrogate loss.
The talk is based on a recent paper <https://arxiv.org/abs/2302.05371>
with András György.
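As generic background on gradient-based methods without gradient access (not the second-order method of the paper), a standard two-point zeroth-order estimator probes the loss at x ± delta·u along a random unit direction u and uses the dimension-scaled finite difference along u as an unbiased gradient proxy. A minimal sketch, with illustrative names and parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def zeroth_order_gd(f, x0, steps=2000, lr=0.05, delta=1e-4):
    """Gradient descent with a two-point zeroth-order gradient estimate:
    sample a random unit direction u, query f at x +/- delta*u, and use
    d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u as the gradient
    proxy (unbiased for the true gradient as delta -> 0)."""
    x = np.array(x0, dtype=float)
    d = len(x)
    for _ in range(steps):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                              # unit direction
        g = d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
        x = x - lr * g
    return x

# Minimise a simple quadratic f(x) = ||x - c||^2 using only function values.
c = np.array([0.5, -1.5, 2.0])
f = lambda x: np.sum((x - c) ** 2)
x_min = zeroth_order_gd(f, x0=np.zeros(3))
```

On this well-curved quadratic the iterates contract geometrically toward the minimiser, hinting at why curvature can be a blessing for zeroth-order methods.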
Jonas Peters
/Recent Advances in Causal Inference/
*Abstract:* see website
<https://www.cwi.nl/en/events/cwi-research-semester-programs/mid-semester-le…>