Dear all,
We are delighted to announce our newly created seminar series, the
Amsterdam Causality Meeting. The aim of the seminar series is to bring
together researchers in causal inference from the VU, UvA, Amsterdam UMC
and CWI, but it is open to everyone.
We plan to organize four events per year, each consisting of two
scientific talks followed by networking drinks, rotating across the
participating institutions in Amsterdam.
We will inaugurate the seminar series with a first event on October 9 at
UvA Science Park. Here are the details:
*Date:* Monday, October 9th, 15:00-18:00
*Location:* UvA Science Park room D1.114 (first floor of D building in the
Faculty of Science complex)
*Program (abstracts below):*
15.00-16.00: Nan van Geloven (LUMC) - Prediction under hypothetical
interventions: evaluation of counterfactual performance using longitudinal
observational data
16.00-17.00: Sander Beckers (UvA) - Moral responsibility for AI systems
17.00-18.00: Drinks
If you're interested in this event or in the seminar series, please check
our website <https://amscausality.github.io/index>.
For announcements about upcoming meetings, you can also subscribe to our
Google group <amscausality@googlegroups.com>.
This meeting is financially supported by the ELLIS unit Amsterdam
<https://ellis.eu/units/amsterdam> and the Big Statistics
<https://www.bigstatistics.nl/> group.
Cheers,
Sara Magliacane
Joris Mooij
Stéphanie van der Pas
============================================================
Abstracts:
Nan van Geloven (LUMC) - *Prediction under hypothetical interventions:
evaluation of counterfactual performance using longitudinal observational
data*
Predictions under hypothetical interventions are estimates of what a
person's risk of an outcome would be if they were to follow a particular
treatment strategy, given their individual characteristics. Such
predictions can give important input to medical decision making. However,
evaluating predictive performance of interventional predictions is
challenging. Standard ways of evaluating predictive performance do not
apply when using observational data, because prediction under interventions
involves obtaining predictions of the outcome under conditions different
from those observed for a subset of individuals in the
validation dataset. This work describes methods for evaluating
counterfactual predictive performance of predictions under interventions
for time-to-event outcomes. This means we aim to assess how well
predictions would match the validation data if all individuals had followed
the treatment strategy under which predictions are made. We focus on
counterfactual performance evaluation using longitudinal observational
data, and under treatment strategies that involve sustaining a particular
treatment regime over time. We introduce an estimation approach using
artificial censoring and inverse probability weighting which involves
creating a validation dataset that mimics the treatment strategy under
which predictions are made. We extend measures of calibration,
discrimination (c-index and cumulative/dynamic AUC) and overall prediction
error (Brier score) to allow assessment of counterfactual performance. The
methods are evaluated using a simulation study, including scenarios in
which the methods should detect poor performance. Applying our methods in
the context of liver transplantation shows that our procedure allows
quantification of the performance of predictions supporting crucial
decisions on organ allocation.
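As a rough illustration of the core idea (a minimal sketch, not the
speaker's actual method): individuals who deviate from the treatment
strategy are artificially censored, and those who remain are reweighted by
the inverse probability of following the strategy, so that a weighted error
measure estimates counterfactual performance. The sketch below simplifies
to a binary outcome at a single fixed horizon; all variable names, the
simulated data, and the logistic adherence model are illustrative
assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Illustrative validation data at one fixed time horizon (simulated):
    # y_pred  - a model's predicted risk under the "always treat" strategy
    # y_obs   - observed binary outcome by the horizon
    # adhered - 1 if the individual actually followed the strategy, else 0
    # X       - covariates that predict adherence (potential confounders)
    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 3))
    p_adhere_true = 1 / (1 + np.exp(-(0.3 * X[:, 0] - 0.5 * X[:, 1])))
    adhered = rng.binomial(1, p_adhere_true)
    y_obs = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 2] - 0.5))))
    y_pred = 1 / (1 + np.exp(-(X[:, 2] - 0.4)))

    # Step 1: artificial censoring -- keep only individuals who followed
    # the treatment strategy under which predictions are made.
    mask = adhered == 1

    # Step 2: model adherence given covariates and form inverse-probability
    # weights for the uncensored individuals.
    adherence_model = LogisticRegression().fit(X, adhered)
    p_adhere = adherence_model.predict_proba(X)[:, 1]
    weights = 1.0 / p_adhere[mask]

    # Step 3: a weighted Brier score estimates the error we would observe
    # had everyone followed the strategy (counterfactual performance).
    brier_cf = np.average((y_pred[mask] - y_obs[mask]) ** 2, weights=weights)
    print(f"IPW-weighted (counterfactual) Brier score: {brier_cf:.3f}")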
Sander Beckers (UvA) - *Moral responsibility for AI systems*
As more and more decisions that have a significant ethical dimension are
being outsourced to AI systems, it is important to have a definition of
moral responsibility that can be applied to AI systems. Moral
responsibility for an outcome of an agent who performs some action is
commonly taken to involve both a causal condition and an epistemic
condition: the action should cause the outcome, and the agent should have
been aware – in some form or other – of the possible moral consequences of
their action. This paper presents a formal definition of both conditions
within the framework of causal models. I compare my approach to the
existing approaches of Braham and van Hees (BvH) and of Halpern and
Kleiman-Weiner (HK). I then generalize my definition to a degree of
responsibility.
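As a toy illustration of the causal condition (a minimal sketch only; the
formal definitions in the talk are richer): an action is a but-for cause of
an outcome if intervening to change the action changes the outcome. The
two-party model below is an invented example, not taken from the paper.

    # Structural equation of a toy causal model: the outcome occurs
    # if either the agent or a bystander acts.
    def outcome(agent_acts: bool, bystander_acts: bool) -> bool:
        return agent_acts or bystander_acts

    # Actual world: the agent acts, the bystander does not.
    actual = outcome(agent_acts=True, bystander_acts=False)

    # Intervention: set the agent's action to False, all else held fixed.
    counterfactual = outcome(agent_acts=False, bystander_acts=False)

    # But-for test: the agent's action is a cause if the outcome flips.
    print("Agent's action is a but-for cause:", actual != counterfactual)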
On behalf of Myriam Tami:
-------- Forwarded Message --------
Subject: Postdoctoral Researcher offer at University Paris Saclay - 12
months - "Causal Inference for learning and intervening on welding
quality prediction"
Date: Thu, 31 Aug 2023 15:46:52 +0200
From: myriam.tami@centralesupelec.fr
To: machine-learning-nederland-owner@list.uva.nl
Dear all,
We are looking for candidates for a funded one-year postdoc position
(available immediately) in causal inference at the University of
Paris-Saclay, more specifically in the MICS
(http://mics.centralesupelec.fr/en/) and Sinclair
(https://sinclair-lab.com/) research laboratories.
The detailed offer is attached.
Could you circulate it to AI, machine learning, and applied mathematics
research laboratories?
Best regards,
Myriam TAMI
--
Myriam TAMI, Ph.D. - Associate Professor
Web site: https://myriamtami.github.io/
Tel: (+33) (0)175316895
Lab: MICS, CentraleSupélec, Paris-Saclay, France