Forwarding on behalf of Benjamin Sanderse:
-------- Forwarded Message --------
Subject: Announcement SciML workshop on AIM list
Date: Fri, 27 Oct 2023 07:55:52 +0200
From: Benjamin Sanderse <b.sanderse(a)cwi.nl>
To: Tristan van Leeuwen <t.van.leeuwen(a)cwi.nl>
CC: Brune, Christoph (UT-EEMCS) <c.brune(a)utwente.nl>,
tim(a)timvanerven.nl, Muthsam, O. [Olivia] <o.muthsam(a)nwo.nl>
Announcement: Symposium on the Applications of Scientific Machine Learning
On Thursday 23 November 2023 a full-day symposium on the applications of
Scientific Machine Learning (SciML) will take place at CWI, Amsterdam.
SciML is a rapidly emerging field in which conventional computational
modeling is combined with machine learning techniques. Speakers will
talk about real-world applications of SciML (and related methods) in
diverse fields, ranging from weather forecasting to financial risk
modeling and from computational chemistry to power grid control. Lunch
and drinks will be provided.
What: Symposium on the Applications of Scientific Machine Learning
When: Thursday 23 November 2023, 9:30-18:00
Where: Turing room, Science Park 125, Amsterdam, next to the entrance of
Centrum Wiskunde & Informatica
Registration: Free but mandatory (due to limited capacity), deadline 16
November.
Speaker list: Maurice Schmeits (KNMI), Judith Dijk (TNO), Drona Kandhai
(ING and University of Amsterdam), Gabrio Rizzuti (Shearwater
Geoservices), Jan Viebahn (TenneT), Jonathan Nuttall (Deltares), Rianne
van den Berg (Microsoft Research)
The full program and the registration link can be found at
https://www.cwi.nl/en/events/cwi-research-semester-programs/sciml-symposium/
The symposium is part of the Research Semester Programme on Scientific
Machine Learning organized at CWI during the fall of 2023.
Dear all,
It is our pleasure to announce that the next Amsterdam Causality Meeting
will take place on *Thursday November 23rd, 14.30-17.30*. The schedule:
14.30-15.30: *Johannes Textor* (RU, RUMC): Are DAGs really useful for causal reasoning in complex systems?
15.30-16.30: *Julia Kowalska* (AUMC): Regression discontinuity designs from a Bayesian perspective: opportunities and challenges.
16.30-17.30: drinks
The location will be the New University building on the *VU campus* (De Boelelaan 1111, Amsterdam), room *NU-5A47*.
More information, including the abstracts for the talks, is available on https://amscausality.github.io/upcoming/
Best wishes,
Sara, Joris and Stéphanie
Dear all,
Leiden is looking for a postdoc in Reinforcement Learning for Sustainable Energy.
There are still a few more days to apply!
LIACS
Thomas Moerland & Aske Plaat
-- Aske
Dear all,
I am forwarding the mentorship workshop announcement below at the
request of Lydia Zakynthinou. This is a great opportunity for early
career researchers to learn more about research communication skills in
machine learning theory. The organizers are famous learning theory
researchers, so they know what they are talking about.
Best,
Tim
PS The organizers are mostly based in the US, so when signing up, have a
look at the program and consider the time zone difference. They are
trying to accommodate all time zones.
——————————————————————————————————————
Hi all,
We are pleased to invite you to the 5th Learning Theory Alliance
Mentorship workshop <https://let-all.com/fall23.html>, to be held on
*October 26-27, 2023*. The workshop is *free and fully virtual*.
The workshop is intended for upper-level undergraduate and all-level
graduate students as well as postdoctoral researchers. No prior research
experience in the field is expected. We have several planned events
including:
* A “how-to” talk on how to communicate your research effectively
through talks (discussing strategies on how to prepare talks
addressed to a broad or specialized audience, or of varying length).
* A “discussion” that focuses on how to communicate your research
effectively in conversations (such as elevator pitches, poster
presentations, interviews).
* A social hour with mentoring tables.
Our lineup includes: Sanjoy Dasgupta (UCSD), Maryam Fazel (UW), Daniel
Hsu (Columbia University), Ashwin Pananjady (GeorgiaTech), Madeleine
Udell (Stanford), Claire Vernade (University of Tuebingen).
A short application form <https://forms.gle/8aBvaG1hRsvzCQB16> is
required to participate with an application deadline of *Friday, October
20, 2023*. Students with backgrounds that are underrepresented or
underserved in related fields are especially encouraged to apply. We are
trying our best to accommodate all time zones. More information
(including the schedule) can be found on the event’s website:
https://let-all.com/fall23.html.
This workshop is part of our broader community-building initiative
called the Learning Theory Alliance (founded by Surbhi Goel, Nika
Haghtalab, and Ellen Vitercik; advised by Peter Bartlett, Avrim Blum,
Stefanie Jegelka, Po-Ling Loh, and Jenn Wortman Vaughan). Check out
http://let-all.com/ for more details.
To connect with fellow participants and stay in touch for more
announcements, we encourage everyone to join
<https://join.slack.com/t/learningtheor-cui5258/shared_invite/zt-2421d3wfl-o…> the
LeT-All slack.
Best,
Surbhi Goel, Thodoris Lykouris, Vidya Muthukumar, Vatsal Sharan, and
Lydia Zakynthinou
LeT-All’s Mentoring Workshop Committee
--
Lydia Zakynthinou
FODSI Postdoctoral Researcher, UC Berkeley
===========================================================
PhD position in Causal Time-series Analysis for Ecology (1.0 FTE, 4 years)
Institution : Radboud University Nijmegen, Netherlands
Keywords : causal discovery, ecological modelling, machine learning, time-series analysis
Application deadline : 27 October 2023
Website : https://www.ru.nl/en/working-at/job-opportunities/phd-candidate-novel-metho…
===========================================================
Summary
Are you interested in applying new machine learning methods and process-based models to understand how humans are impacting biodiversity? Come work with us to develop and apply new causal methods to ecological data to uncover interactions between animal movement, vegetation, climate and human activities, with implications for ecosystem processes.
Description
We are currently experiencing a biodiversity crisis and one of the main drivers is human activities. As human activities expand, animal behaviour is being altered. One behaviour that is drastically affected is animal movement. Animal movement is an important process determining the fate of individuals and shaping the structure and dynamics of populations and ecosystem processes. Therefore, changes in movement will have wide-ranging ecological consequences.
In this project you will apply modern causal discovery algorithms for time-series data and process-based ecological modelling to examine the mechanisms of animal movement and explore ecosystem consequences of altered animal behaviour due to human pressures such as agricultural land conversion. This will involve the analysis of empirical animal movement data, environmental data, human pressure data, and species traits to explore interactions between animals and their surroundings.
We are:
The project is a collaboration between the Environmental Science cluster at the Radboud Institute for Biological and Environmental Sciences (RIBES), and the Data Science group of the Institute for Computing and Information Sciences (iCIS), both part of the Faculty of Science at the Radboud University. You will be working in both groups, at the interface of ecology and machine learning.
The mission of the Environmental Science research cluster of Radboud University is to provide high-quality scientific knowledge that can be used to help people move towards a more sustainable society. To achieve this, we aim to understand, project and address the impact of anthropogenic pressures on ecosystems and humans, from the landscape to the global scale.
The Data Science group within iCIS aims to develop theory and methods for scalable machine learning and information retrieval to address challenging problems in science and society. The group's main research foci are: the design and understanding of deep / causal machine learning methods, modern information retrieval / big data, and computational immunology, each with a keen eye on applications in other scientific domains as well as industry.
Both groups strongly promote an open, inclusive and supportive work environment.
What we expect from you:
You have an MSc degree in natural science, ecology, computer science, or a related discipline. You are open-minded, with a strong interest in multidisciplinary research, especially on the interface between machine learning and ecology. You are highly motivated to perform scientific research and obtain a PhD degree. As you will be working in two different research groups, you need to be flexible, communicative and able to work in a multidisciplinary team.
For more information about this vacancy and details on how to apply, see the website (above), or contact:
* Dr Marlee Tucker, e-mail: marlee.tucker(a)ru.nl (RIBES)
* Dr Tom Claassen, tel: +31 24 3652019, e-mail: tom.claassen(a)ru.nl (iCIS)
===========================================================
We would like to draw your attention to the following AIM event:
Save the Date: AIM PhD networking event on November 9, 2023
On November 9th, 13:00 - 17:30, AIM is hosting another networking event in Utrecht. The meeting aims to bring together the AIM community and to provide a platform for PhD students working on mathematical research in AI to present their work. You can register via the registration form.<https://docs.google.com/forms/d/e/1FAIpQLSekbbmwE6pFPu53NMlVhQORjcmOPUxUzL0…>
Preliminary program:
● 13:00 - 13:30: Walk-in with coffee/tea
● 13:30 - 14:30: Opening and keynote presentation by Booking.com
● 14:30 - 15:00: Coffee break
● 15:00 - 17:00: Presentations and poster session
● From 17:00: Drinks & networking
Call for presentations
Interested PhD students who would like to present their research at the community event can register via the registration form<https://docs.google.com/forms/d/e/1FAIpQLSekbbmwE6pFPu53NMlVhQORjcmOPUxUzL0…>. The deadline is Tuesday, October 17, 2023. Shortly after the deadline, applicants will be informed whether they can present their work at the event. Please share this call for presentations with any interested PhD candidates.
If you have questions, please contact us via aim(a)nwo.nl.
Dear all,
We are delighted to announce our newly created seminar series, the
Amsterdam Causality Meeting. The aim of the seminar series is to bring
together researchers in causal inference from the VU, UvA, Amsterdam UMC
and CWI, but it is open to everyone.
We plan to organize four events per year, where each event consists of two
scientific talks and a networking event with drinks afterwards, rotating
across the different institutions in Amsterdam.
We will inaugurate our seminar series with a first event on October 9 at
UvA Science Park; here are the details:
*Date:* Monday October 9th 15:00-18:00
*Location:* UvA Science Park room D1.114 (first floor of D building in the
Faculty of Science complex)
*Program (abstracts below):*
15.00-16.00: Nan van Geloven (LUMC) - Prediction under hypothetical
interventions: evaluation of counterfactual performance using longitudinal
observational data
16.00-17.00: Sander Beckers (UvA) - Moral responsibility for AI systems
17.00-18.00: Drinks
If you're interested in this event or in the seminar series, please check
our website <https://amscausality.github.io/index>.
For announcements regarding upcoming meetings, you can also register
to our Google
group <amscausality(a)googlegroups.com>.
This meeting is financially supported by the ELLIS unit Amsterdam
<https://ellis.eu/units/amsterdam> and the Big Statistics
<https://www.bigstatistics.nl/> group.
Cheers,
Sara Magliacane
Joris Mooij
Stéphanie van der Pas
============================================================
Abstracts:
Nan van Geloven (LUMC) - *Prediction under hypothetical interventions:
evaluation of counterfactual performance using longitudinal observational
data*
Predictions under hypothetical interventions are estimates of what a
person's risk of an outcome would be if they were to follow a particular
treatment strategy, given their individual characteristics. Such
predictions can give important input to medical decision making. However,
evaluating predictive performance of interventional predictions is
challenging. Standard ways of evaluating predictive performance do not
apply when using observational data, because prediction under interventions
involves obtaining predictions of the outcome under conditions that are
different to those that are observed for a subset of individuals in the
validation dataset. This work describes methods for evaluating
counterfactual predictive performance of predictions under interventions
for time-to-event outcomes. This means we aim to assess how well
predictions would match the validation data if all individuals had followed
the treatment strategy under which predictions are made. We focus on
counterfactual performance evaluation using longitudinal observational
data, and under treatment strategies that involve sustaining a particular
treatment regime over time. We introduce an estimation approach using
artificial censoring and inverse probability weighting which involves
creating a validation dataset that mimics the treatment strategy under
which predictions are made. We extend measures of calibration,
discrimination (c-index and cumulative/dynamic AUC) and overall prediction
error (Brier score) to allow assessment of counterfactual performance. The
methods are evaluated using a simulation study, including scenarios in
which the methods should detect poor performance. Applying our methods in
the context of liver transplantation shows that our procedure allows
quantification of the performance of predictions supporting crucial
decisions on organ allocation.
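For readers less familiar with inverse probability weighting, the core evaluation step described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the function name and inputs are hypothetical, and a real time-to-event validation would additionally handle censoring times and time-dependent weights.

```python
import numpy as np

def ipw_brier_score(y_event, y_pred, adherent, p_adhere):
    """Toy inverse-probability-weighted Brier score.

    y_event:  observed binary outcomes at the horizon of interest
    y_pred:   predicted risks under the treatment strategy
    adherent: 1 if the individual followed the strategy, 0 if they
              deviated (and are therefore artificially censored)
    p_adhere: estimated probability of following the strategy
    """
    y_event = np.asarray(y_event, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Artificially censored individuals get weight 0; adherent individuals
    # are up-weighted so the weighted sample mimics the counterfactual
    # population in which everyone followed the strategy.
    w = np.asarray(adherent, dtype=float) / np.asarray(p_adhere, dtype=float)
    return float(np.sum(w * (y_event - y_pred) ** 2) / np.sum(w))
```

For example, with `adherent = [1, 1, 0, 1]` and `p_adhere = 0.5` for everyone, the third individual is dropped and the other three each receive weight 2, so the score is the ordinary Brier score over the adherent subset.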
Sander Beckers (UvA) - *Moral responsibility for AI systems*
As more and more decisions that have a significant ethical dimension are
being outsourced to AI systems, it is important to have a definition of
moral responsibility that can be applied to AI systems. Moral
responsibility for an outcome of an agent who performs some action is
commonly taken to involve both a causal condition and an epistemic
condition: the action should cause the outcome, and the agent should have
been aware – in some form or other – of the possible moral consequences of
their action. This paper presents a formal definition of both conditions
within the framework of causal models. I compare my approach to the
existing approaches of Braham and van Hees (BvH) and of Halpern and
Kleiman-Weiner (HK). I then generalize the definition into a degree of
responsibility.
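As a toy illustration of the causal condition mentioned in the abstract (not the BvH or HK definitions, and not the paper's formal account), one can encode a single structural equation and test but-for causation by intervening on the action; all names here are hypothetical:

```python
def outcome(action, enabling_condition):
    """Toy structural equation: the harmful outcome occurs only if the
    agent acts while the enabling condition holds."""
    return action and enabling_condition

def is_but_for_cause(action, enabling_condition):
    """But-for test: does intervening to flip the action change the outcome?"""
    actual = outcome(action, enabling_condition)
    counterfactual = outcome(not action, enabling_condition)
    return actual != counterfactual
```

Under this model the action is a but-for cause of the outcome only when the enabling condition holds; the epistemic condition (awareness of possible moral consequences) is a separate requirement that this sketch does not model.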