Dear all,
Announcing a talk by Debabrota Basu (INRIA Lille France, https://debabrota-basu.github.io/) as below:
Title: When Privacy meets Partial Information: Refining the Differential Privacy Definitions, Lower Bounds, and Algorithm Designs for Sequential Learning
Room: L120, CWI, Science Park 123, Amsterdam
Time: 10:30 AM - 11:30 AM
Abstract: Bandits act as an archetypal model of sequential learning, where one has limited information regarding the utilities of a set of decisions and can learn more about the utility of a decision only by choosing it. The goal of a bandit algorithm is either (a) to maximise the total accumulated utility over a given number of interactions, or (b) to find the decision with maximal utility through a minimal number of interactions. As bandits are progressively used for data-sensitive applications, such as designing adaptive clinical trials, tuning hyper-parameters, and recommender systems, it is imperative to ensure the data privacy of these algorithms. Motivated by this, we study the impact of preserving Differential Privacy in bandits with different goals (both (a) and (b)). We answer three questions:
i. How to define Differential Privacy in bandits, as both the input and output are generated progressively through past data-driven interactions?
ii. What are the changes in the fundamental hardness of bandit problems (both (a) and (b)) if we ensure ε-Differential Privacy?
iii. How to modify existing bandit algorithms (both (a) and (b)) to simultaneously ensure ε-Differential Privacy and achieve optimal performance?
Our study yields new information-theoretic quantities and a generic algorithm demonstrating that, in most cases, ε-Differential Privacy can be achieved almost for free in bandits.
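For readers new to the topic, here is a minimal sketch of goal (a) under ε-Differential Privacy: a UCB-style bandit whose empirical means are perturbed with Laplace noise, the generic mechanism behind many ε-DP schemes. This is an illustrative assumption for intuition only, not the algorithm from the papers below; the function names and the noise calibration are my own.

```python
import math
import random

def laplace(scale):
    # Sample Laplace(0, scale) noise via inverse-CDF (symmetric, so sign convention is immaterial).
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_ucb(arm_means, horizon, epsilon, seed=0):
    """Illustrative UCB bandit with Laplace-perturbed empirical means.
    Rewards are Bernoulli(arm_means[i]); returns the total accumulated
    reward over `horizon` pulls (goal (a))."""
    random.seed(seed)
    k = len(arm_means)
    counts = [0] * k       # number of pulls per arm
    sums = [0.0] * k       # accumulated reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1    # pull each arm once to initialise
        else:
            # Noisy empirical mean + UCB exploration bonus; noise scale
            # 1/(epsilon * n_i) follows the Laplace mechanism applied to
            # a sum of rewards bounded in [0, 1].
            arm = max(range(k), key=lambda i:
                      sums[i] / counts[i]
                      + laplace(1.0 / (epsilon * counts[i]))
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if random.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total
```

Note that the added Laplace noise shrinks as an arm is pulled more often, which is why the privacy overhead can become negligible at long horizons, in the spirit of the "almost for free" claim above.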
The talk is based on the works: https://arxiv.org/abs/2209.02570 and https://arxiv.org/abs/2309.02202.
Please email me if you would like to meet with the speaker. He is visiting CWI from 6-10 Nov.
Best,
Aditya.
Dear all,
Reminder for the talk today at 10:30 AM in L120, CWI, as below.
Also on Zoom: https://cwi-nl.zoom.us/j/84241959575?pwd=b1ZaOTNaNE1GU1NneG1XQ0diZVJmdz09
Best,
Aditya.
-----Original Message-----
From: Aditya <Aditya.Gilra@cwi.nl>
To: machine-learning-nederland <machine-learning-nederland@list.uva.nl>
Date: Wednesday, 1 November 2023 6:20 PM CET
Subject: Talk: 8 Nov 10:30 @ L120 CWI, Amsterdam by Debabrota Basu (INRIA Lille) -- "When Privacy meets Partial Information: Refining the Differential Privacy Definitions, Lower Bounds, and Algorithm Designs for Sequential Learning"