Uncertainty in Artificial Intelligence (UAI) 2024<https://www.auai.org/uai2024/> registration is now open!
It takes place July 15-19 at Universitat Pompeu Fabra in Barcelona.
A (non-exhaustive!) list of topics is below. Visit the website<https://auai.org/uai2024/registration> to learn more and register.
Scholarship applications for those without funding to attend are also open<https://www.auai.org/uai2024/scholarships>.
Early bird registration ends June 2nd.
Applications
Causal inference
Computer vision and image analysis
Evaluation, benchmarks, synthetic data
Graphical models
Learning and optimization
Reinforcement learning
Missing data
Humans and AI
Natural language and speech processing, large language models
Security and privacy, adversarial ML
Statistical theory and methods
…and more!
Dear all,
The next speaker in our seminar for machine learning and UQ in scientific computing will be our new postdoc Marius Kurz, formerly at the University of Stuttgart. His talk will concern GPU-based simulation codes and the integration of machine learning methods for stable large eddy simulations; see the abstract below. The talk will take place on April 30th at 11h00 CET. For those at CWI, the location is L017; for online attendees, the Zoom link is posted below. Feel free to share it. More upcoming talks can be seen here: https://www.cwi.nl/en/groups/scientific-computing/uq-seminar/seminar-ml-uq-….
Best regards,
Wouter Edeling
Join Zoom Meeting
https://cwi-nl.zoom.us/j/88489988455?pwd=NUZwRXN1alU1ZGJTbVhJc2o3L000dz09
Meeting ID: 884 8998 8455
Passcode: 303356
30 April 2024 11h00 CET: Marius Kurz (Centrum Wiskunde & Informatica): Learning to Flow: Machine Learning and Exascale Computing for Next-Generation Fluid Dynamics
The computational sciences have become an essential driver for understanding the dynamics of complex, nonlinear systems, ranging from the Earth's climate to a patient's characteristic blood flow, which informs personalized approaches in medical therapy. These advances can be ascribed, on the one hand, to the exponential increase in available computing power, which has allowed the simulation of increasingly large and complex problems and has led to the emerging generation of exascale systems in high-performance computing (HPC). On the other hand, methodological advances in discretization methods and in the modeling of turbulent flow have significantly increased the fidelity of simulations in fluid dynamics. Here, recent advances in machine learning (ML) have opened up a whole field of novel, promising modeling approaches.
This talk will first introduce the potential of GPU-based simulation codes in terms of energy-to-solution using the novel GALÆXI code. Next, the integration of machine learning methods for large eddy simulation will be discussed, with emphasis on their a posteriori performance, the stability of the simulation, and the interaction between the turbulence model and the discretization. Based on this, Relexi is introduced as a potent tool that allows HPC simulations to be employed as training environments for reinforcement learning models at scale, thus converging HPC and ML.
Dear all,
Tomorrow, on Thursday April 25, we have Sebastien Bordt speaking in the
Statistics and Machine Learning Thematic Seminar.
*Sebastien Bordt* (University of Tübingen, https://sbordt.github.io/)
*Thursday April 25*, 14h00-15h00
In person, at the University of Amsterdam
Location: Korteweg-de Vries Institute for Mathematics, Science Park 107,
Room 320
Directions: https://kdvi.uva.nl/contact/contact.html
*Representation and Interpretation*
Researchers have proposed different conditions under which a function
is deemed "interpretable". This includes classes of functions that are
considered a priori interpretable (small decision trees, GAMs), and
post-hoc introspection methods that allow one to "interpret" arbitrary
learned functions. In this talk, we discuss a perspective on
interpretability that highlights the connections between the
representation and interpretation of a function. As a concrete example,
we derive the connections between the Shapley Values of a function and
its representation as a Generalized Additive Model. We then ask whether
similar connections hold for other classes of functions, and conclude
with a discussion of different approaches in interpretable machine
learning.
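For intuition on the Shapley/GAM connection mentioned in the abstract, here is a small self-contained sketch (my own toy illustration, not taken from the talk): for a purely additive model f(x) = f_1(x_1) + ... + f_d(x_d), with the baseline taken as the average over a background dataset, the exact interventional Shapley value of feature i reduces to f_i(x_i) minus the background average of f_i. The brute-force coalition enumeration below confirms this on a tiny additive model.

```python
import itertools
import math
import random

# A tiny additive model ("GAM"): f(x) = f1(x1) + f2(x2) + f3(x3)
components = [
    lambda v: 2.0 * v,        # f1
    lambda v: v ** 2,         # f2
    lambda v: math.sin(v),    # f3
]

def f(x):
    return sum(g(v) for g, v in zip(components, x))

def shapley_brute_force(x, background):
    """Exact interventional Shapley values by enumerating all coalitions.

    The value of a coalition S is the background average of f, with the
    features in S fixed to x and the rest taken from the background row.
    """
    d = len(x)

    def coalition_value(S):
        total = 0.0
        for b in background:
            z = [x[i] if i in S else b[i] for i in range(d)]
            total += f(z)
        return total / len(background)

    phis = []
    for i in range(d):
        others = [j for j in range(d) if j != i]
        phi = 0.0
        for k in range(d):
            for S in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                phi += w * (coalition_value(set(S) | {i}) - coalition_value(set(S)))
        phis.append(phi)
    return phis

random.seed(0)
background = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
x = [0.5, -0.3, 1.2]

brute = shapley_brute_force(x, background)
# For an additive model, the Shapley value collapses to a per-feature formula:
closed_form = [
    g(x[i]) - sum(g(b[i]) for b in background) / len(background)
    for i, g in enumerate(components)
]
for a, b in zip(brute, closed_form):
    assert abs(a - b) < 1e-9
```

Because each feature's marginal contribution is independent of the coalition it joins, the weighted sum over coalitions collapses to a single per-feature term, which is exactly the GAM component recentered at the background mean.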
Seminar organizers:
Tim van Erven
https://mschauer.github.io/StructuresSeminar/
--
Tim van Erven<tim(a)timvanerven.nl>
www.timvanerven.nl
Dear all,
The next speaker in our Seminar for machine learning and UQ in scientific computing will be Victorita Dolean, recently appointed as professor at Eindhoven University of Technology. The title and abstract are appended below. The talk will take place Thursday 11 April at 10AM CET. For those at CWI, the location will be L120. For online attendees, here's the Zoom link:
Join Zoom Meeting: https://cwi-nl.zoom.us/j/83063963402?pwd=cWN3RjlEOVk1SkJIYWk2MEJmSzJMdz09
Meeting ID: 830 6396 3402
Passcode: 067208
For an overview of earlier and upcoming talks, see https://www.cwi.nl/en/groups/scientific-computing/uq-seminar/seminar-ml-uq-….
Victorita Dolean (Eindhoven University of Technology): Parallelization approaches for neural network-based collocation methods
We consider neural network solvers for differential equation-based problems based on the pioneering collocation approach introduced by Lagaris et al. These methods are very versatile: they do not require an explicit mesh, allow for the solution of parameter identification problems, and are well-suited for high-dimensional problems. However, the training of these neural network models is generally not very robust and may require a lot of hyperparameter tuning. In particular, due to the so-called spectral bias, training is notoriously difficult when scaling up to large computational domains as well as for multiscale problems. In this work, we first give an overview of methods from the literature and then focus on two overlapping domain decomposition-based techniques, namely finite basis physics-informed neural networks (FBPINNs) and deep domain decomposition (Deep-DDM) methods. Whereas the former introduces the domain decomposition via a partition of unity within the classical gradient-based optimization, the latter employs a classical outer Schwarz iteration. In order to obtain scalability and robustness for multiscale problems, we consider a multi-level framework for both approaches.
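As a rough illustration of the partition-of-unity idea underlying FBPINN-type methods (a toy sketch, not the speaker's implementation): the domain is covered by overlapping subdomains with smooth window functions that sum to one everywhere, and the global solution is the window-weighted sum of local networks, u(x) = sum_j w_j(x) u_j(x). The snippet below constructs such normalized windows in 1D; the bump shape and overlap parameter are illustrative choices.

```python
import numpy as np

# Partition-of-unity windows for an overlapping 1D domain decomposition.
# Each subdomain j gets a smooth, compactly supported bump w_j(x); after
# normalisation the windows sum to one, so a window-weighted combination
# of local networks defines a single global function.

def make_windows(n_sub, overlap, x):
    """Overlapping smooth bump windows on [0, 1], normalised to sum to 1."""
    centers = np.linspace(0.0, 1.0, n_sub)
    width = (1.0 / (n_sub - 1)) * (1.0 + overlap)  # overlap > 0 widens supports
    raw = []
    for c in centers:
        t = np.clip((x - c) / width, -1.0, 1.0)
        raw.append(np.cos(0.5 * np.pi * t) ** 2)   # smooth bump, zero at |t| = 1
    raw = np.stack(raw)                 # shape (n_sub, n_points)
    return raw / raw.sum(axis=0)        # normalise -> partition of unity

x = np.linspace(0.0, 1.0, 201)
w = make_windows(n_sub=4, overlap=0.5, x=x)
assert np.allclose(w.sum(axis=0), 1.0)
```

With enough overlap, every point is covered by at least one bump, so the normalisation never divides by zero; in an FBPINN-style solver, each window would multiply the output of a separate small network trained on its subdomain.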
Dear colleagues,
Our next BeNeRL Reinforcement Learning Seminar (April 11) is coming:
Speaker: Minqi Jiang (https://minch.co), research scientist at Google DeepMind.
Title: Learning Curricula in Open-Ended Worlds
Date: April 11, 16.00-17.00 (CET)
Please find full details about the talk below this email and on the website of the seminar series: https://www.benerl.org/seminar-series
The goal of the online BeNeRL seminar series is to invite RL researchers (mostly advanced PhD students or early-career researchers) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments and their style/approach to getting these to work.
We would be very glad if you forwarded this invitation within your group and to other colleagues who would be interested (also outside the BeNeRL region). Hope to see you on April 11!
Kind regards,
Zhao Yang & Thomas Moerland
Leiden University
——————————————————————
Upcoming talk:
Date: April 11, 16.00-17.00 (CET)
Speaker: Minqi Jiang (https://minch.co)
Title: Learning Curricula in Open-Ended Worlds
Zoom: https://universiteitleiden.zoom.us/j/65411016557?pwd=MzlqcVhzVzUyZlJKTEE0Nk…
Abstract: Deep reinforcement learning (RL) agents commonly overfit to their training environments, performing poorly when the environment is even mildly perturbed. Such overfitting can be mitigated by conducting domain randomization (DR) over various aspects of the training environment in simulation. However, depending on implementation, DR makes potentially arbitrary assumptions about the distribution over environment instances. In larger environment design spaces, DR can become combinatorially less likely to sample specific environment instances that may be especially useful for learning. Unsupervised Environment Design (UED) improves upon these shortcomings by directly considering the problem of automatically generating a sequence or curriculum of environment instances presented to the agent for training, in order to maximize the agent's final robustness and generality. UED methods have been shown, in both theory and practice, to produce emergent training curricula that result in deep RL agents with improved transfer performance to out-of-distribution environment instances. Such autocurricula are promising paths toward open-ended learning systems that become increasingly capable by continually generating and mastering additional challenges of their own design. This talk provides a tour of recent algorithmic developments leading to successively more powerful UED methods, followed by a discussion of key challenges and potential paths to unlocking their full potential in practice.
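For a flavour of how such an autocurriculum can be driven in practice, here is a toy sketch of rank-based level prioritization in the spirit of Prioritized Level Replay (the scoring values, temperature parameter, and function names below are illustrative assumptions, not the speaker's exact method): levels with higher estimated learning potential, e.g. a regret proxy such as recent TD-error magnitude, are sampled more often for training.

```python
import random

# Toy rank-based curriculum sampler: levels with higher estimated
# "learning potential" (a regret proxy) get larger sampling weight,
# with weight 1/rank**(1/temperature) over the descending score ranks.

def rank_priorities(scores, temperature=1.0):
    """Return a sampling distribution over levels from their scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank                      # rank 1 = highest score
    weights = [1.0 / r ** (1.0 / temperature) for r in ranks]
    total = sum(weights)
    return [w / total for w in weights]

def sample_level(scores, rng, temperature=1.0):
    probs = rank_priorities(scores, temperature)
    return rng.choices(range(len(scores)), weights=probs, k=1)[0]

rng = random.Random(0)
scores = [0.1, 2.5, 0.7, 1.8]   # hypothetical regret estimates per level
probs = rank_priorities(scores)
assert probs[1] == max(probs)   # highest-regret level is sampled most often
level = sample_level(scores, rng)
assert 0 <= level < len(scores)
```

In a full UED loop, the scores would be updated after each episode from the agent's observed performance, and a generator would keep adding new candidate levels to the pool.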
Bio: Minqi Jiang is a research scientist in the Autonomous Assistants group at Google DeepMind. His PhD research developed several scalable approaches for generating autocurricula that produce more robust deep RL agents in potentially open-ended environments. He is especially interested in problems at the intersection of generalization, human-AI coordination, and open-ended systems.