Dear all,
We are delighted to announce our newly created seminar series, the Amsterdam Causality Meeting. The aim of the seminar series is to bring together researchers in causal inference from the VU, UvA, Amsterdam UMC and CWI, but it is open to everyone.
We plan to organize four events per year, each consisting of two scientific talks followed by drinks and networking, rotating across the participating institutions in Amsterdam.
We will inaugurate the seminar series with a first event on October 9 at UvA Science Park; here are the details:
*Date:* Monday, October 9th, 15:00-18:00
*Location:* UvA Science Park, room D1.114 (first floor of the D building in the Faculty of Science complex)
*Program (abstracts below):*
15:00-16:00: Nan van Geloven (LUMC) - Prediction under hypothetical interventions: evaluation of counterfactual performance using longitudinal observational data
16:00-17:00: Sander Beckers (UvA) - Moral responsibility for AI systems
17:00-18:00: Drinks
If you're interested in this event or in the seminar series, please check our website https://amscausality.github.io/index. For announcements regarding upcoming meetings, you can also subscribe to our Google group amscausality@googlegroups.com.
This meeting is financially supported by the ELLIS unit Amsterdam (https://ellis.eu/units/amsterdam) and the Big Statistics group (https://www.bigstatistics.nl/).
Cheers,
Sara Magliacane, Joris Mooij, Stéphanie van der Pas
============================================================
Abstracts:
Nan van Geloven (LUMC) - *Prediction under hypothetical interventions: evaluation of counterfactual performance using longitudinal observational data*
Predictions under hypothetical interventions are estimates of what a person's risk of an outcome would be if they were to follow a particular treatment strategy, given their individual characteristics. Such predictions can give important input to medical decision making. However, evaluating the predictive performance of interventional predictions is challenging. Standard ways of evaluating predictive performance do not apply when using observational data, because prediction under interventions involves obtaining predictions of the outcome under conditions that are different from those observed for a subset of individuals in the validation dataset. This work describes methods for evaluating counterfactual predictive performance of predictions under interventions for time-to-event outcomes. This means we aim to assess how well predictions would match the validation data if all individuals had followed the treatment strategy under which predictions are made. We focus on counterfactual performance evaluation using longitudinal observational data, and under treatment strategies that involve sustaining a particular treatment regime over time. We introduce an estimation approach using artificial censoring and inverse probability weighting, which involves creating a validation dataset that mimics the treatment strategy under which predictions are made. We extend measures of calibration, discrimination (c-index and cumulative/dynamic AUC) and overall prediction error (Brier score) to allow assessment of counterfactual performance. The methods are evaluated using a simulation study, including scenarios in which the methods should detect poor performance. Applying our methods in the context of liver transplantation shows that our procedure allows quantification of the performance of predictions supporting crucial decisions on organ allocation.
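For readers who want a concrete picture of the weighting step, below is a minimal, illustrative Python sketch of a counterfactual Brier score at a fixed horizon using artificial censoring and inverse probability weights. This is not the speaker's implementation, and all column names (followed_strategy, p_follow, event_by_tstar, pred_risk_tstar) are hypothetical placeholders.

# Illustrative sketch only (assumed setup, not the authors' code):
# evaluate a counterfactual Brier score at a fixed horizon t* under a
# sustained-treatment strategy, using artificial censoring and inverse
# probability weighting.
import numpy as np
import pandas as pd

def counterfactual_brier(df: pd.DataFrame) -> float:
    """Weighted Brier score restricted to individuals whose observed data
    remain compatible with the treatment strategy under evaluation.

    Hypothetical columns:
      followed_strategy : bool, observed treatment consistent with the
                          strategy up to t* (others are artificially censored)
      p_follow          : estimated probability of remaining consistent with
                          the strategy up to t*, given covariate history
      event_by_tstar    : 1 if the event occurred by t*, else 0
      pred_risk_tstar   : predicted risk of the event by t* under the strategy
    """
    # Artificial censoring: keep only records consistent with the strategy.
    kept = df[df["followed_strategy"]].copy()
    # Inverse probability weights compensate for the artificial censoring,
    # re-creating the population that would have followed the strategy.
    w = 1.0 / kept["p_follow"].clip(lower=1e-3)
    sq_err = (kept["event_by_tstar"] - kept["pred_risk_tstar"]) ** 2
    return float(np.average(sq_err, weights=w))

The same weighting idea would carry over to the calibration and discrimination measures mentioned in the abstract; the talk itself covers the full time-to-event treatment.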
Sander Beckers (UvA) - *Moral responsibility for AI systems*
As more and more decisions that have a significant ethical dimension are being outsourced to AI systems, it is important to have a definition of moral responsibility that can be applied to AI systems. Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a causal condition and an epistemic condition: the action should cause the outcome, and the agent should have been aware – in some form or other – of the possible moral consequences of their action. This paper presents a formal definition of both conditions within the framework of causal models. I compare my approach to the existing approaches of Braham and van Hees (BvH) and of Halpern and Kleiman-Weiner (HK). I then generalize my definition into a degree of responsibility.
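As a toy illustration of the causal condition only (not the definition presented in the talk), the following Python snippet checks but-for dependence of an outcome on an agent's action in a two-variable structural model; the structural equation and variable names are hypothetical.

# Toy sketch of the causal condition in the spirit of structural causal
# models: does the agent's action make a but-for difference to the outcome?
# This is an illustration, not Beckers' definition, which also formalizes
# the epistemic condition and a graded degree of responsibility.

def outcome(action: bool, backup: bool) -> bool:
    """Hypothetical structural equation: the harmful outcome occurs if the
    agent acts or an independent backup mechanism triggers."""
    return action or backup

def but_for_cause(action: bool, backup: bool) -> bool:
    """The action is a but-for cause if flipping it changes the outcome,
    holding the rest of the model fixed."""
    return outcome(action, backup) != outcome(not action, backup)

print(but_for_cause(action=True, backup=False))  # True: the action decides
print(but_for_cause(action=True, backup=True))   # False: backup preempts it

The backup case shows why plain but-for dependence is too coarse, which is part of the motivation for the more refined and graded notions of responsibility discussed in the talk.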
Hi Sara,
Nice initiative! Did you also invite the causality team at Qualcomm?
Best,
Max
Hi all,
I just wanted to remind you that our first Amsterdam Causality Meeting is happening *this Monday, October 9th*. The aim of the seminar is to bring together researchers in causal inference from the VU, UvA, Amsterdam UMC and CWI, but it is open to everyone.
Here are the details in short:
*Date:* Monday, October 9th, 15:00-18:00
*Location:* UvA Science Park, room D1.114 (first floor of the D building in the Faculty of Science complex)
*Program (abstracts below):*
15:00-16:00: Nan van Geloven (LUMC) - Prediction under hypothetical interventions: evaluation of counterfactual performance using longitudinal observational data
16:00-17:00: Sander Beckers (UvA) - Moral responsibility for AI systems
17:00-18:00: Drinks
If you're interested in this event or in the seminar series, please check our website https://amscausality.github.io/index. For announcements regarding upcoming meetings, you can also subscribe to our Google group amscausality@googlegroups.com.
This meeting is financially supported by the ELLIS unit Amsterdam (https://ellis.eu/units/amsterdam) and the Big Statistics group (https://www.bigstatistics.nl/).
Cheers,
Sara Magliacane, Joris Mooij, Stéphanie van der Pas