Dear all,
The talk by Samory Kpotufe is tomorrow:
Our seminar speaker this Friday in the thematic seminar is Samory
Kpotufe. Further below, there is also a list of upcoming talks that are
scheduled for the second semester.
*Samory Kpotufe* (Department of Statistics, Columbia University,
http://www.columbia.edu/~skk2175/)
Samory works on the intersection between statistics and machine
learning, with an interest in adaptive methods, and was one of the
chairs for last year's COLT learning theory conference.
*Friday November 26*, 16.00-17.00
Online on Zoom:
https://uva-live.zoom.us/j/85155421740
Meeting ID: 851 5542 1740
Please also join for online drinks after the talk.
*Some Recent Insights on Transfer and Multitask Learning*
A common situation in Machine Learning is one where training data is not
fully representative of a target population due to bias in the sampling
mechanism or due to prohibitive target sampling costs. In such
situations, we aim to 'transfer' relevant information from the training
data (a.k.a. source data) to the target application. How much
information is in the source data about the target application? Would
some amount of target data improve transfer? These are all practical
questions that depend crucially on 'how far' the source domain is from
the target. However, how to properly measure 'distance' between source
and target domains remains largely unclear. In this talk we will argue
that many of the traditional notions of 'distance' (e.g. KL-divergence,
extensions of TV such as the D_A discrepancy, density ratios, Wasserstein
distance) can yield an over-pessimistic picture of transferability.
Instead, we show that some asymmetric notions of 'relatedness' between
source and target (which we simply term 'transfer-exponents') capture a
continuum from easy to hard transfer. Transfer-exponents uncover a rich
set of situations where transfer is possible even at fast rates; they
encode relative benefits of source and target samples, and have
interesting implications for related problems such as 'multi-task or
multi-source learning'. In particular, in the case of transfer from
multiple sources, we will discuss (if time permits) a strange phenomenon:
no procedure can guarantee a rate better than that of having a single
data source, even in seemingly mild situations where multiple sources
are informative about the target. The talk is based on earlier work with
Guillaume Martinet and ongoing work with Steve Hanneke.
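As a toy illustration of the point about over-pessimistic distances (this is our own sketch, not an example from the talk): two Gaussian marginals three standard deviations apart have a large KL divergence, yet a simple threshold classifier fit on source data transfers almost perfectly when the labeling rule is shared across domains. The setup below (the distributions, the sign(x) labeling rule, and the midpoint-threshold "learner") is entirely hypothetical.

```python
import numpy as np

# Toy illustration: marginal KL divergence between source and target can be
# large while transfer is easy, because the labeling rule sign(x) is shared.

def kl_equal_var_gaussians(mu_p, mu_q, sigma=1.0):
    # Closed form for KL( N(mu_p, sigma^2) || N(mu_q, sigma^2) ).
    return (mu_p - mu_q) ** 2 / (2.0 * sigma ** 2)

rng = np.random.default_rng(0)

# Source: x ~ N(0,1); target: x ~ N(3,1); in both, the true label is sign(x).
xs = rng.normal(0.0, 1.0, 5000)
ys = np.sign(xs)
xt = rng.normal(3.0, 1.0, 5000)
yt = np.sign(xt)

# "Train" on source only: place the threshold midway between the class means.
threshold = 0.5 * (xs[ys > 0].mean() + xs[ys < 0].mean())
target_error = np.mean(np.sign(xt - threshold) != yt)

print(kl_equal_var_gaussians(0.0, 3.0))  # 4.5: a large divergence...
print(target_error)                      # ...yet near-zero target error
```

The divergence between the marginals grows quadratically with the mean shift, while the target error of the source-trained classifier stays near zero; a symmetric distance between the domains says nothing about this asymmetry, which is the kind of gap the talk's transfer-exponents are meant to capture.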
Seminar organizers:
Tim van Erven
Botond Szabo
https://mschauer.github.io/StructuresSeminar/
*Upcoming talks:*
Mar. 11, 2022, *Tomer Koren <https://mschauer.github.io/StructuresSeminar/#Koren>*, Tel Aviv University
Mar. 25, 2022, *Nicolò Cesa-Bianchi <https://mschauer.github.io/StructuresSeminar/#CesaBianchi>*, Università degli Studi di Milano
Apr. 8, 2022, *Julia Olkhovskaya <https://sites.google.com/view/julia-olkhovskaya/home>*, Vrije Universiteit
Apr. 22, 2022, *Tor Lattimore <https://mschauer.github.io/StructuresSeminar/#Lattimore>*, DeepMind
--
Tim van Erven <tim(a)timvanerven.nl>
www.timvanerven.nl
Dear all,
With things moving online again, the good news is that we have an
interesting speaker lined up in the thematic seminar on Friday next week:
*Samory Kpotufe* (Department of Statistics, Columbia University,
http://www.columbia.edu/~skk2175/)
Samory works on the intersection between statistics and machine
learning, with an interest in adaptive methods, and was one of the
chairs for last year's COLT learning theory conference.
*Friday November 26*, 16.00-17.00
Online on Zoom:
https://uva-live.zoom.us/j/85155421740
Meeting ID: 851 5542 1740
Please also join for online drinks after the talk.
Title and abstract: TBA
Seminar organizers:
Tim van Erven
Botond Szabo
https://mschauer.github.io/StructuresSeminar/
--
Tim van Erven <tim(a)timvanerven.nl>
www.timvanerven.nl
Dear colleagues,
A "Women in Robotics Award" will be granted to the best paper published
in the Special Issue "Women in Robotics":
https://www.mdpi.com/journal/robotics/special_issues/Women_Robotics
Deadline: 25 December 2021
Each award nominee will be assessed on her paper's originality, quality,
and contribution to the field by the Evaluation Committee. The winner
will receive a certificate, an award of 500 CHF, and an opportunity to
publish her next submission in Robotics free of charge.
If you are interested, please feel free to contact us by email at
charlene.dong(a)mdpi.com or robotics(a)mdpi.com within two weeks.
We look forward to hearing from you.
Best regards,
Ms. Charlene Dong
Managing Editor
Email: charlene.dong(a)mdpi.com
Robotics (http://www.mdpi.com/journal/robotics/)
LinkedIn: https://www.linkedin.com/company/74926083/admin/
Twitter: @RoboticsMDPI
The 2020 CiteScore of Robotics has been released: it increased to 3.5
(ranking Q2 in the Control and Optimization category).
Update:
2021–2022 Robotics Travel Awards (800 CHF)
Application Deadline: 30 June 2022
https://www.mdpi.com/journal/robotics/awards