Dear all,
I am forwarding the announcement below on behalf of Ciara Pike-Burke.
NB: this is an excellent opportunity to get advice from leading
international researchers in learning theory.
Best,
Tim
———————————————————————————
Hi all,
We are pleased to invite you to the 6th Learning Theory Alliance
Mentorship workshop <https://let-all.com/spring24.html>, to be held on
*June 4-5, 2024*. The workshop is *free and fully virtual*.
The workshop is intended for upper-level undergraduate and all-level
graduate students as well as postdoctoral researchers who are interested
in theoretical computer science and machine learning. No prior research
experience in the field is expected, and some sessions may be of
interest to researchers in adjacent fields. We have several planned
events including:
* A “how-to” talk on how to be a good collaborator (discussing what
healthy collaborations do and don’t look like, setting expectations,
transitioning from junior to senior collaborator roles).
* A “how-to” talk on how to do theory research (covering topics such
as formulating research questions and theory problems, breaking a
larger problem into smaller toy problems, and day-to-day best
practices).
* A panel discussion on time management (for example, maintaining a
balance between learning and solving, work-life balance, deciding
how many projects to work on).
* A social hour with mentoring tables.
Our lineup includes Shuchi Chawla (UT), Adam Groce (Reed College), Zhiyi
Huang (University of Hong Kong), Varun Kanade (Oxford), Po-Ling Loh
(University of Cambridge), Audra McMillan (Apple), Ankur Moitra (MIT),
Devi Parikh (Georgia Tech), Aaditya Ramdas (CMU), and Steven Wu (CMU).
A short application form <https://forms.gle/ACZBLto6MweLd9dc6> is
required to participate; the application deadline is *Tuesday, May
28, 2024*. Students with backgrounds that are underrepresented or
underserved in related fields are especially encouraged to apply. We are
trying our best to accommodate all time zones. More information
(including the schedule) can be found on the event’s website:
https://let-all.com/spring24.html.
This workshop is part of our broader community-building initiative
called the Learning Theory Alliance. Check out http://let-all.com/ for
more details.
To connect with fellow participants and stay in touch for more
announcements, we encourage everyone to join the LeT-All Slack:
<https://join.slack.com/t/learningtheor-cui5258/shared_invite/zt-2421d3wfl-o…>
Best,
Ciara Pike-Burke, Vatsal Sharan, Ellen Vitercik, and Lydia Zakynthinou
LeT-All’s Mentoring Workshop Committee
Dear Colleagues,
It is our pleasure to invite you to join the 2nd International
Electronic Conference on Machines and Applications (IECMA 2024),
following the strong success of last year's online conference. IECMA
2024 will be held online from 18 to 20 June 2024.
https://sciforum.net/event/IECMA2024
The aim of this online conference is to bring together well-known
international experts currently working on machinery and engineering
and to provide an online forum for presenting and discussing new results.
We would like to invite you to register for IECMA 2024 free of
charge, where you can engage in cutting-edge discussions and
networking opportunities.
Please visit the following link to complete your registration:
https://sciforum.net/event/IECMA2024?section=#registration.
If you encounter any difficulties during the registration process,
please do not hesitate to contact us. If you have already registered,
kindly disregard this email.
Best regards,
Your IECMA 2024 Organizing Team
iecma2024(a)mdpi.com
Dear all,
On May 27 at 11h CET, Beatriz Moya from CNRS@CREATE will speak in our seminar for machine learning and UQ in scientific computing. She will talk about geometric deep learning for model order reduction, see the abstract below. For those at CWI, the location will be L120, and for online attendees the zoom link is posted below.
Kind regards,
Wouter Edeling
27 May 2024 11h00 CET: Beatriz Moya (CNRS@CREATE): Exploring the role of geometric and learning biases in Model Order Reduction and Data-Driven simulation
This talk highlights the practical application and synergistic use of geometric and learning biases in interpretable and consistent deep learning for complex problems. We propose the use of Geometric Deep Learning for Model Order Reduction. Its high generalizability, even with limited data, facilitates real-time evaluation of partial differential equations (PDEs) for complex behaviors and changing domains. Additionally, we showcase the application of Thermodynamics-Informed Machine Learning as an alternative when the physics of the system under study is not fully known. This algorithm results in a cognitive digital twin capable of self-correction for adapting to changing environments when only partial evaluations of the dynamical state are available. Finally, the integration of Geometric Deep Learning and Thermodynamics-Informed Machine Learning produces an enhanced combined effect with high applicability in real-world domains.
Join Zoom Meeting
https://cwi-nl.zoom.us/j/84808038602?pwd=K3VuMHZvZmI4L0U0ckJrYUlrUmNSZz09
Meeting ID: 848 0803 8602
Passcode: 469712
Dear colleagues,
Our next BeNeRL Reinforcement Learning Seminar (May 16) is coming:
Speaker: Edward Hu (https://edwardshu.com/), PhD student at the University of Pennsylvania.
Title: The Sensory Needs of Robot Learners
Date: May 16, 16.00-17.00 (CET)
Please find full details about the talk below this email and on the website of the seminar series: https://www.benerl.org/seminar-series
The goal of the online BeNeRL seminar series is to invite RL researchers (mostly advanced PhD students or early postdocs) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments and their style/approach to getting these to work.
We would be very glad if you forwarded this invitation within your group and to other colleagues who might be interested (also outside the BeNeRL region). Hope to see you on May 16!
Kind regards,
Zhao Yang & Thomas Moerland
Leiden University
——————————————————————
Upcoming talk:
Date: May 16, 16.00-17.00 (CET)
Speaker: Edward Hu (https://edwardshu.com/)
Title: The Sensory Needs of Robot Learners
Zoom: https://universiteitleiden.zoom.us/j/65411016557?pwd=MzlqcVhzVzUyZlJKTEE0Nk…
Abstract: What information does a robot need from the environment to efficiently learn new behaviors? Sensory streams and policy learning are intimately entangled. From current observation, agents learn models and compute feedback for improvement. Then, agents influence future observations through environmental interaction. I will present our recent findings in investigating the close-knit relationship between sensing and learning for robotics. This talk will cover a model-based RL approach that exploits additional sensors to improve policy search and an RL agent that learns interactive perception behavior to better estimate rewards. Overall, we find that paying careful attention to the sensory input streams of the RL process leads to large gains in performance.
Bio: Edward Hu is a PhD student at the University of Pennsylvania and the GRASP lab, advised by Dinesh Jayaraman. Edward is broadly interested in artificial intelligence, ranging from virtual agents to physical robots. As a result, his research spans reinforcement learning, perception, and robotics. His research has received multiple distinctions at robotics and machine learning venues, including a Best Paper Award at CoRL 2022 and spotlights at ICLR 2023 and ICLR 2024.
Uncertainty in Artificial Intelligence 2024<https://www.auai.org/uai2024/> registration is now open!
It takes place July 15-19 at Universitat Pompeu Fabra in Barcelona.
A (non-exhaustive!) list of topics is below. Visit the website<https://auai.org/uai2024/registration> to learn more and register.
Scholarship applications for those without funding to attend are also open<https://www.auai.org/uai2024/scholarships>.
Early bird registration ends June 2nd.
Applications
Causal inference
Computer vision and image analysis
Evaluation, benchmarks, synthetic data
Graphical models
Learning and optimization
Reinforcement learning
Missing data
Humans and AI
Natural language and speech processing, large language models
Security and privacy, adversarial ML
Statistical theory and methods
…and more!
Dear all,
The next speaker in our seminar for machine learning and UQ in scientific computing will be our new postdoc Marius Kurz, formerly at the University of Stuttgart. His talk will concern GPU-based simulation codes and the integration of machine learning methods for stable large eddy simulations; see the abstract below. The talk will take place on April 30th at 11h00 CET. For those at CWI, the location is L017, and for online attendees the zoom link is posted below. Feel free to share it. More upcoming talks can be seen here: https://www.cwi.nl/en/groups/scientific-computing/uq-seminar/seminar-ml-uq-….
Best regards,
Wouter Edeling
Join Zoom Meeting
https://cwi-nl.zoom.us/j/88489988455?pwd=NUZwRXN1alU1ZGJTbVhJc2o3L000dz09
Meeting ID: 884 8998 8455
Passcode: 303356
30 April 2024 11h00 CET: Marius Kurz (Centrum Wiskunde & Informatica): Learning to Flow: Machine Learning and Exascale Computing for Next-Generation Fluid Dynamics
The computational sciences have become an essential driver for understanding the dynamics of complex, nonlinear systems ranging from the dynamics of the earth’s climate to obtaining information about a patient’s characteristic blood flow to derive personalized approaches in medical therapy. These advances can be ascribed on one hand to the exponential increase in available computing power, which has allowed the simulation of increasingly large and complex problems and has led to the emerging generation of exascale systems in high-performance computing (HPC). On the other hand, methodological advances in discretization methods and the modeling of turbulent flow have increased the fidelity of simulations in fluid dynamics significantly. Here, the recent advances in machine learning (ML) have opened a whole field of novel, promising modeling approaches.
This talk will first introduce the potential of GPU-based simulation codes in terms of energy-to-solution using the novel GALÆXI code. Next, the integration of machine learning methods for large eddy simulation will be discussed with emphasis on their a posteriori performance, the stability of the simulation, and the interaction between the turbulence model and the discretization. Based on this, Relexi is introduced as a potent tool that allows employing HPC simulations as training environments for reinforcement learning models at scale, thus converging HPC and ML.
Dear all,
Tomorrow, on Thursday April 25, we have Sebastien Bordt speaking in the
Statistics and Machine Learning Thematic Seminar.
*Sebastien Bordt* (University of Tübingen, https://sbordt.github.io/)
*Thursday April 25*, 14h00-15h00
In person, at the University of Amsterdam
Location: Korteweg-de Vries Institute for Mathematics, Science Park 107,
Room 320
Directions: https://kdvi.uva.nl/contact/contact.html
*Representation and Interpretation*
Researchers have proposed different conditions under which a function
is deemed "interpretable". This includes classes of functions that are
considered a priori interpretable (small decision trees, GAMs), and
post-hoc introspection methods that allow one to "interpret" arbitrary
learned functions. In this talk, we discuss a perspective on
interpretability that highlights the connections between the
representation and interpretation of a function. As a concrete example,
we derive the connections between the Shapley Values of a function and
its representation as a Generalized Additive Model. We then ask whether
similar connections hold for other classes of functions, and conclude
with a discussion of different approaches in interpretable machine
learning.
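The Shapley/GAM connection mentioned in the abstract can be made concrete: for a purely additive model with independent features, the (interventional) Shapley value of feature i reduces to its centered component, f_i(x_i) minus the average of f_i over the data. A minimal illustrative sketch (the toy model, data, and names are mine, not the speaker's):

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy additive model (a GAM without link function): f(x) = f1(x1) + f2(x2) + f3(x3).
components = [lambda x: 2.0 * x, lambda x: x ** 2, np.sin]

X = rng.normal(size=(500, 3))   # background sample for the expectations
x = np.array([0.5, -1.0, 2.0])  # point to explain

means = [g(X[:, i]).mean() for i, g in enumerate(components)]

def value(S):
    """v(S): model output with features in S fixed to x, the rest marginalized
    under the (assumed independent) background distribution."""
    return sum(g(x[i]) if i in S else means[i] for i, g in enumerate(components))

def shapley(i, n=3):
    """Exact Shapley value of feature i by enumerating all coalitions."""
    others = [j for j in range(n) if j != i]
    total = 0.0
    for r in range(n):
        for S in itertools.combinations(others, r):
            w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
            total += w * (value(set(S) | {i}) - value(set(S)))
    return total

exact = [shapley(i) for i in range(3)]
centered = [g(x[i]) - means[i] for i, g in enumerate(components)]
print(np.allclose(exact, centered))  # True: Shapley values equal centered components
```

Because the value function is additive, every marginal contribution of feature i is the same regardless of the coalition, so the exact enumeration and the closed form agree to floating-point precision.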
Seminar organizers:
Tim van Erven
https://mschauer.github.io/StructuresSeminar/
--
Tim van Erven<tim(a)timvanerven.nl>
www.timvanerven.nl
Dear all,
On Thursday April 25 we have Sebastien Bordt speaking in the Statistics
and Machine Learning Thematic Seminar.
*Sebastien Bordt* (University of Tübingen, https://sbordt.github.io/)
*Thursday April 25*, 14h00-15h00
In person, at the University of Amsterdam
Location: Korteweg-de Vries Institute for Mathematics, Science Park 907,
Room TBA
Directions: https://kdvi.uva.nl/contact/contact.html
*Representation and Interpretation*
Researchers have proposed different conditions under which a function
is deemed "interpretable". This includes classes of functions that are
considered a priori interpretable (small decision trees, GAMs), and
post-hoc introspection methods that allow one to "interpret" arbitrary
learned functions. In this talk, we discuss a perspective on
interpretability that highlights the connections between the
representation and interpretation of a function. As a concrete example,
we derive the connections between the Shapley Values of a function and
its representation as a Generalized Additive Model. We then ask whether
similar connections hold for other classes of functions, and conclude
with a discussion of different approaches in interpretable machine
learning.
Seminar organizers:
Tim van Erven
https://mschauer.github.io/StructuresSeminar/
--
Tim van Erven<tim(a)timvanerven.nl>
www.timvanerven.nl
Dear all,
The next speaker in our Seminar for machine learning and UQ in scientific computing will be Victorita Dolean, recently appointed as professor at Eindhoven University of Technology. The title and abstract are appended below. The talk will take place Thursday 11 April at 10AM CET. For those at CWI, the location will be L120. For online attendees, here's the Zoom link:
Join Zoom Meeting: https://cwi-nl.zoom.us/j/83063963402?pwd=cWN3RjlEOVk1SkJIYWk2MEJmSzJMdz09
Meeting ID: 830 6396 3402
Passcode: 067208
For an overview of earlier and upcoming talks, see https://www.cwi.nl/en/groups/scientific-computing/uq-seminar/seminar-ml-uq-….
Victorita Dolean (Eindhoven University of Technology): Parallelization approaches for neural network-based collocation methods
We consider neural network solvers for differential equation-based problems based on the pioneering collocation approach introduced by Lagaris et al. These methods are very versatile: they may not require an explicit mesh, allow for the solution of parameter identification problems, and are well suited for high-dimensional problems. However, the training of these neural network models is generally not very robust and may require a lot of hyperparameter tuning. In particular, due to the so-called spectral bias, the training is notoriously difficult when scaling up to large computational domains as well as for multiscale problems. In this work, we give an overview of the methods from the literature and then focus on two overlapping domain decomposition-based techniques, namely finite basis physics-informed neural networks (FBPINNs) and deep domain decomposition (Deep-DDM) methods. Whereas the former introduces the domain decomposition via a partition of unity within the classical gradient-based optimization, the latter employs a classical outer Schwarz iteration. In order to obtain scalability and robustness for multiscale problems, we consider a multi-level framework for both approaches.
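The Lagaris-style collocation idea underlying these solvers can be sketched in a few lines. In this illustrative toy (my own, not the FBPINN or Deep-DDM methods from the talk), a polynomial ansatz stands in for the neural network, which turns the residual minimization into a linear least-squares problem:

```python
import numpy as np

# Solve u'(t) = -u(t), u(0) = 1 on [0, 1] by collocation. The ansatz
# u(t) = 1 + sum_k c_k t^{k+1} enforces the initial condition exactly,
# the "1 + t * (...)" trick from Lagaris et al.
deg = 6
t = np.linspace(0.0, 1.0, 50)  # collocation points

# Residual u'(t) + u(t) = 0 at each collocation point gives the linear
# system A c = b with A[j, k] = (k+1) t_j^k + t_j^{k+1} and b = -1.
ks = np.arange(deg)
A = (ks + 1) * t[:, None] ** ks + t[:, None] ** (ks + 1)
b = -np.ones_like(t)
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u = 1.0 + (t[:, None] ** (ks + 1)) @ c
err = np.max(np.abs(u - np.exp(-t)))  # maximum error vs. exact solution exp(-t)
print(err)
```

With a neural network in place of the polynomial the system becomes nonlinear and must be trained by gradient descent, which is where the robustness and spectral-bias issues mentioned in the abstract arise.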
Dear colleagues,
Our next BeNeRL Reinforcement Learning Seminar (April 11) is coming:
Speaker: Minqi Jiang (https://minch.co/), research scientist at Google DeepMind.
Title: Learning Curricula in Open-Ended Worlds
Date: April 11, 16.00-17.00 (CET)
Please find full details about the talk below this email and on the website of the seminar series: https://www.benerl.org/seminar-series
The goal of the online BeNeRL seminar series is to invite RL researchers (mostly advanced PhD students or early postdocs) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments and their style/approach to getting these to work.
We would be very glad if you forwarded this invitation within your group and to other colleagues who might be interested (also outside the BeNeRL region). Hope to see you on April 11!
Kind regards,
Zhao Yang & Thomas Moerland
Leiden University
——————————————————————
Upcoming talk:
Date: April 11, 16.00-17.00 (CET)
Speaker: Minqi Jiang (https://minch.co/)
Title: Learning Curricula in Open-Ended Worlds
Zoom: https://universiteitleiden.zoom.us/j/65411016557?pwd=MzlqcVhzVzUyZlJKTEE0Nk…
Abstract: Deep reinforcement learning (RL) agents commonly overfit to their training environments, performing poorly when the environment is even mildly perturbed. Such overfitting can be mitigated by conducting domain randomization (DR) over various aspects of the training environment in simulation. However, depending on implementation, DR makes potentially arbitrary assumptions about the distribution over environment instances. In larger environment design spaces, DR can become combinatorially less likely to sample specific environment instances that may be especially useful for learning. Unsupervised Environment Design (UED) improves upon these shortcomings by directly considering the problem of automatically generating a sequence or curriculum of environment instances presented to the agent for training, in order to maximize the agent's final robustness and generality. UED methods have been shown, in both theory and practice, to produce emergent training curricula that result in deep RL agents with improved transfer performance to out-of-distribution environment instances. Such autocurricula are promising paths toward open-ended learning systems that become increasingly capable by continually generating and mastering additional challenges of their own design. This talk provides a tour of recent algorithmic developments leading to successively more powerful UED methods, followed by a discussion of key challenges and potential paths to unlocking their full potential in practice.
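The contrast between domain randomization and a learned curriculum can be illustrated with a toy sampler in the spirit of regret-based UED methods such as Prioritized Level Replay (an illustrative sketch of my own; the class, parameters, and toy regret signal are not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

class CurriculumSampler:
    """Toy level sampler mixing uniform domain randomization with
    sampling biased toward levels with a high regret estimate, i.e.
    levels where the agent seems to have the most left to learn."""

    def __init__(self, n_levels, mix=0.5, temperature=1.0):
        self.scores = np.zeros(n_levels)  # per-level regret proxy
        self.mix = mix                    # probability of plain uniform (DR) sampling
        self.temperature = temperature

    def sample(self):
        n = len(self.scores)
        if rng.random() < self.mix or self.scores.sum() == 0:
            return int(rng.integers(n))            # domain randomization
        p = np.exp(self.scores / self.temperature)
        return int(rng.choice(n, p=p / p.sum()))   # curriculum sampling

    def update(self, level, regret_estimate):
        # e.g. a TD-error or value-loss proxy averaged over the episode
        self.scores[level] = 0.9 * self.scores[level] + 0.1 * regret_estimate

sampler = CurriculumSampler(n_levels=100)
for _ in range(1000):
    lvl = sampler.sample()
    # Toy regret signal: higher-index levels are "harder" for the agent.
    sampler.update(lvl, regret_estimate=rng.random() * (lvl / 100))
```

Under this toy signal, the sampler's scores concentrate on the harder (high-index) levels, so curriculum draws revisit them more often than uniform DR would, which is the qualitative behavior UED methods aim for.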
Bio: Minqi Jiang is a research scientist in the Autonomous Assistants group at Google DeepMind. His PhD research developed several scalable approaches for generating autocurricula that produce more robust deep RL agents in potentially open-ended environments. He is especially interested in problems at the intersection of generalization, human-AI coordination, and open-ended systems.