We warmly invite you to submit a paper and participate in our Causal
Representation Learning workshop (https://crl-workshop.github.io/) that
will be held *December 15 or 16, 2023* at NeurIPS 2023, New Orleans, USA.
Causal Representation Learning is an exciting intersection of machine
learning and causality that aims to learn low-dimensional, high-level
causal variables, along with their causal relations, directly from raw,
unstructured data, e.g., images.
The submission deadline is *September 29, 2023, 23:59 AoE* and the
submission link is
https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/CRL.
More information below.
***MOTIVATION AND TOPICS***
Current machine learning systems have rapidly increased in performance by
leveraging ever-larger models and datasets. Despite astonishing abilities
and impressive demos, these models fundamentally *only learn from
statistical correlations* and struggle with tasks such as *domain
generalisation, adversarial examples, or planning*, which require
higher-order cognition. This sole reliance on capturing correlations sits
at the core of current debates about making AI systems "truly" understand.
One promising and so far underexplored approach for obtaining visual
systems that can go *beyond correlations* is integrating ideas from
causality into representation learning.
Causal inference aims to reason about the effect of interventions or
external manipulations on a system, as well as about hypothetical
counterfactual scenarios. Similar to classic approaches to AI, it typically
assumes that the causal variables of interest are given from the outset.
However, real-world data often comprises high-dimensional, low-level
observations (e.g., RGB pixels in a video) and is thus usually not
structured into such meaningful causal units.
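To make the distinction concrete, here is a minimal sketch in Python (our
illustration, not part of the call; all names and numbers are made up) of a
two-variable structural causal model, contrasting conditioning on X with
intervening on X via do(X = x):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Illustrative structural causal model with graph X -> Y:
    #   X := U_x,          U_x ~ N(0, 1)
    #   Y := 2 * X + U_y,  U_y ~ N(0, 1)
    x = rng.normal(size=n)
    y = 2 * x + rng.normal(size=n)

    # Intervention do(X = 1): replace X's mechanism, keep Y's intact.
    x_do = np.full(n, 1.0)
    y_do = 2 * x_do + rng.normal(size=n)

    # Because X causes Y, conditioning and intervening agree here (~2.0):
    print(y[np.abs(x - 1.0) < 0.05].mean())
    print(y_do.mean())

    # Intervening on Y instead, do(Y = 1), would leave X unchanged,
    # an asymmetry that correlations alone cannot reveal.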
To address this, the emerging field of causal representation learning (CRL)
combines the strengths of ML and causality. CRL aims to learn
low-dimensional, high-level causal variables, along with their causal
relations, directly from raw, unstructured data, leading to representations
that support notions such as causal factors, interventions, reasoning, and
planning. In this sense, CRL aligns with the general goal of modern ML to
learn meaningful representations of data that are more robust, explainable,
and performant, and in our workshop we want to catalyze research in this
direction.
This workshop brings together researchers from the emerging CRL community,
as well as from the more classical causality and representation learning
communities, who are interested in learning causal, robust, interpretable
and transferable representations. Our goal is to foster discussion and
cross-fertilization between causality, representation learning and other
fields, as well as to engage the community in identifying application
domains for this emerging field. To encourage discussion, we welcome
submissions related to any aspect of CRL, including but not limited to:
- Causal representation learning, including self-supervised, multi-modal
or multi-environment CRL, either in time series or in an atemporal setting,
observational or interventional,
- Causality-inspired representation learning, including learning
representations that are only *approximately* causal, but still useful
in terms of generalization or transfer learning,
- Abstractions of causal models or, in general, multi-level causal systems,
- Connecting CRL with system identification, learning differential
equations from data or sequences of images, or, in general, connections to
dynamical systems,
- Theoretical works on identifiability in representation learning broadly,
- Real-world applications of CRL, e.g. in biology, healthcare, (medical)
imaging or robotics, including new benchmarks or datasets, or addressing
the gap from theory to practice.
***IMPORTANT DATES***
Paper submission deadline: *September 29, 2023, 23:59 AoE*
Notification to authors: October 27, 2023, 23:59 AoE
Camera-ready version and videos: December 1, 2023, 23:59 AoE
Workshop Date: December 15 or 16, 2023 at NeurIPS
***SUBMISSION INSTRUCTIONS***
As for all NeurIPS workshops, submissions should contain original and
previously unpublished research, and they should be formatted using the
NeurIPS LaTeX style. Papers should be submitted as a PDF file of at most
6 pages, including all main results, figures, and tables. Appendices
containing additional details are allowed, but reviewers are not expected
to take them into account.
The workshop will not have proceedings (in other words, it is
non-archival), so you can submit the same or extended work for publication
at other venues after the workshop. We also accept (shortened versions of)
submissions to other venues, as long as they are not published before the
workshop date in December.
Submission site:
https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/CRL
***ORGANIZERS***
Sara Magliacane, University of Amsterdam and MIT-IBM Watson AI Lab
Atalanti Mastakouri, Amazon
Yuki Asano, University of Amsterdam and Qualcomm Research
Claudia Shi, Columbia University and FAR AI
Cian Eastwood, University of Edinburgh and Max Planck Institute Tübingen
Sébastien Lachapelle, Mila and Samsung’s SAIT AI Lab (SAIL)
Bernhard Schölkopf, Max Planck Institute Tübingen
Caroline Uhler, MIT and Broad Institute
Dear all,
This is a gentle reminder that tomorrow Nikita Zhivotovskiy is speaking
in the Statistics and Machine Learning Thematic Seminar. Nikita has done
excellent work in both statistics and machine learning, so you are highly
encouraged to come and meet him.
*Nikita Zhivotovskiy *(UC Berkeley, Department of Statistics,
https://sites.google.com/view/nikitazhivotovskiy/)
*Friday July 7*, 15h00-16h00
In person, at the University of Amsterdam
Location: Science Park 904, Room A1.04
*Sharper Risk Bounds for Statistical Aggregation*
In this talk, we take a fresh look at the classical results in the
theory of statistical aggregation, focusing on the transition from
global complexity to a more manageable local one. The goal of
aggregation is to combine several base predictors to achieve a
prediction nearly as accurate as the best one. This flexible approach
operates without any assumptions on the structure of the class or the
nature of the target. Though aggregation is studied in both sequential
and statistical settings, each with their unique differences, they both
traditionally use the same "global" complexity measure. Our discussion
will highlight the lesser-known PAC-Bayes localization method used in
our proofs, allowing us to prove a localized version of a classical
bound for the exponential weights estimator by Leung and Barron, and a
deviation-optimal localized bound for the Q-aggregation estimator.
Furthermore, we will explore the links between our work and ridge
regression. Joint work with Jaouad Mourtada and Tomas Vaškevičius.
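As background for readers new to the setting, here is a minimal sketch in
Python (our illustration, not the estimators analyzed in the talk) of
exponential-weights aggregation of a finite set of base predictors under
squared loss; the temperature beta is a free parameter:

    import numpy as np

    def exponential_weights_aggregate(preds, y, beta):
        """Combine base predictors by exponentially weighting their
        empirical squared risks (illustrative sketch).

        preds: (M, n) array, row j = predictions of base predictor j
        y:     (n,) array of observed responses
        beta:  temperature; larger beta concentrates on low-risk predictors
        """
        risks = np.mean((preds - y) ** 2, axis=1)  # empirical risk per predictor
        logw = -beta * risks
        logw -= logw.max()                         # stabilize before exponentiating
        w = np.exp(logw)
        w /= w.sum()                               # normalized exponential weights
        return w @ preds                           # convex combination of predictors

    # Toy usage: three base predictors of a noisy signal.
    rng = np.random.default_rng(1)
    n = 200
    y = np.sin(np.linspace(0, 3, n)) + 0.1 * rng.normal(size=n)
    preds = np.stack([np.sin(np.linspace(0, 3, n)),  # good predictor
                      np.zeros(n),                   # poor predictor
                      0.5 * np.ones(n)])             # poor predictor
    agg = exponential_weights_aggregate(preds, y, beta=50.0)
    print(np.mean((agg - y) ** 2))  # close to the best single predictor's risk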
Seminar organizers:
Tim van Erven
Botond Szabo
https://mschauer.github.io/StructuresSeminar/
--
Tim van Erven <tim(a)timvanerven.nl>
www.timvanerven.nl