Dear all,
On Friday July 7 we have Nikita Zhivotovskiy speaking in the Statistics
and Machine Learning Thematic Seminar. Nikita will be visiting us July
3-7. He has done excellent work both in statistics and machine learning,
so it is highly recommended to come meet him.
*Nikita Zhivotovskiy* (UC Berkeley, Department of Statistics,
https://sites.google.com/view/nikitazhivotovskiy/)
*Friday July 7*, 15h00-16h00
In person, at the University of Amsterdam
Location: Science Park 904, Room A1.04
*Sharper Risk Bounds for Statistical Aggregation*
In this talk, we take a fresh look at the classical results in the
theory of statistical aggregation, focusing on the transition from
global complexity to a more manageable local one. The goal of
aggregation is to combine several base predictors to achieve a
prediction nearly as accurate as the best one. This flexible approach
operates without any assumptions on the structure of the class or the
nature of the target. Though aggregation is studied in both sequential
and statistical settings, each with its own particularities, both
settings traditionally use the same "global" complexity measure. Our discussion
will highlight the lesser-known PAC-Bayes localization method used in
our proofs, allowing us to prove a localized version of a classical
bound for the exponential weights estimator by Leung and Barron, and a
deviation-optimal localized bound for the Q-aggregation estimator.
Furthermore, we will explore the links between our work and ridge
regression. Joint work with Jaouad Mourtada and Tomas Vaškevičius.
Seminar organizers:
Tim van Erven
Botond Szabo
https://mschauer.github.io/StructuresSeminar/
--
Tim van Erven <tim(a)timvanerven.nl>
www.timvanerven.nl
Dear researchers,
Centrum Wiskunde & Informatica (CWI) kindly invites you to the eighth
Seminar++ meeting on Machine Learning Theory, taking place on Wednesday
June 21 from 15:00 - 17:00. These Seminar++ meetings consist of a
one-hour lecture building up to an open problem, followed by an hour of
brainstorming time. The meeting is intended for interested researchers
including PhD students. These meetings are freely accessible without
registration. Cookies, coffee and tea will be provided in the half-time
break.
The meeting of 21 June will be hosted by:
Alexander Ly <https://www.alexander-ly.com/>, Postdoc at the Centrum
Wiskunde & Informatica <https://www.cwi.nl/>.
On constructing e-values for statistical practice
*Abstract:* The safe anytime-valid inference framework based on e-values
allows practitioners to adaptively design their experiments and draw
more reliable conclusions compared to conventional p-value-based
approaches. The presentation begins with a concise overview of the
recently developed general theory of e-values and the various procedures
to construct them. In order to distinguish among the different e-values,
Grünwald, de Heide and Koolen (2019) introduced the GROW criterion along
with a general procedure specifically designed to construct the optimal
e-value according to this criterion. Practical considerations and the
context of the statistical problem itself might lead us to deviate from
recommending this so-called GROW e-value. We shed light on the choices
we made when constructing e-values for fundamental classical inference
problems, such as z-tests, t-tests, one-way ANOVAs, and (generalised)
linear models. Our objective is to investigate the potential
generalisability of these context-specific solutions to a broader range
of inference problems, in the hope of expanding the practicality and
versatility of safe anytime-valid inference.
The event takes place in room L016 in the CWI building, Science Park
123, Amsterdam.
The Seminar++ Meetings are part of the Machine Learning Theory Semester
Programme
<https://www.cwi.nl/en/events/cwi-research-semester-programs/research-progra…>,
which runs in Spring 2023.
Best regards on behalf of CWI from the program committee,
Wouter Koolen
We organize the 7th international workshop on Interactive Adaptive Learning,
to be held at ECML PKDD 2023 on September 22nd in Torino, Italy:
Interactive Adaptive Learning Workshop @ ECML PKDD
link : https://www.activeml.net/ial2023/
Key dates (Anywhere on Earth):
Abstract Registration: Monday, June 12th 2023
Paper Submission: Wednesday, June 21st 2023
Notification: Monday, July 24th 2023
Camera Ready: Sunday, August 13th 2023
Workshop & Tutorial: Friday, September 22nd, 2023, Torino, Italy
Topics of interest include:
Novel Techniques for Active, Semi-Supervised, Transfer, or Weakly Supervised Learning
Innovative Use and Applications of Active, Semi-Supervised, Transfer, or Weakly Supervised Learning
Techniques for Combined Interactive Adaptive Learning
We welcome submissions of original regular (8-16 pages) and
short papers/extended abstracts (2-4 pages).
Each submission will be double-blind peer-reviewed;
accepted papers will be published at ceur-ws.org,
which is indexed by Google Scholar, DBLP, and Scopus,
and will be presented and discussed at the workshop.
As short papers, works-in-progress and open challenges in research
or industrial applications that initiate discussions and collaborations
are welcome.
Please format your papers according to the CEUR-WS format
and submit them via EasyChair:
https://easychair.org/conferences/?conf=ial2023
We look forward to your contributions and participation at the workshop,
the organisers,
Mirko Bunse, Barbara Hammer, Georg Krempl,
Vincent Lemaire, Alaa Othman, and Amal Saadallah.