Dear all,
Unsupervised learning is a major branch of machine
learning, and k-means clustering is one of its core
problems. However, the clusterings it produces often
lack interpretability. Can we make them interpretable?
Apparently we can.
On this topic, our next speaker in the Theory
of Interpretable AI Seminar, Sanjoy Dasgupta,
will review recent progress on interpretable
k-means clustering.
Speaker: Sanjoy Dasgupta
Date: Thursday, July 11, 15.00 Central European Summer
Time (CEST) / 9.00 am Eastern Daylight Time (EDT)
Zoom link:
https://uva-live.zoom.us/j/87120549999
Title: Recent progress on interpretable clustering
The widely used k-means procedure returns k clusters with
arbitrary convex shapes. In high dimensions, such a
clustering may not be easy to understand. A more
interpretable alternative is to constrain the clusters to be
the leaves of a decision tree with axis-parallel splits; then
each cluster is a hyper-rectangle given by a small number of
features. Is it always possible to find clusterings that are
interpretable in this sense and yet have k-means cost that is
close to the unconstrained optimum? A recent line of work has
answered this in the affirmative and moreover shown that these
interpretable clusterings are easy to construct. I will give a
survey of these results: algorithms, methods of analysis, and
open problems.
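To make the idea in the abstract concrete, here is a minimal, illustrative sketch (not the specific algorithms surveyed in the talk): run standard k-means, then greedily build an axis-parallel threshold tree whose leaves are the clusters, choosing each split to minimize the number of points separated from their k-means center. All function names and the toy data are our own illustration.

```python
# Hedged sketch: approximate a k-means clustering with a threshold tree.
# Each leaf of the tree is one cluster, so every cluster is described by
# the small set of axis-parallel feature cuts on its root-to-leaf path.

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Plain Lloyd's algorithm; initialized from the first k points
    (adequate for this toy example, not in general)."""
    centers = [list(points[i]) for i in range(k)]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: sq_dist(p, centers[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return centers, labels

def build_tree(points, labels, centers, ids):
    """Recursively cut until each leaf holds a single center.
    Each cut is chosen to minimize 'mistakes': points that end up
    on the opposite side from their assigned center."""
    if len(ids) == 1:
        return ids[0]                       # leaf: a cluster id
    best = None                             # (mistakes, feature, threshold)
    for d in range(len(centers[0])):
        coords = sorted(centers[i][d] for i in ids)
        for a, b in zip(coords, coords[1:]):
            if a == b:
                continue
            t = (a + b) / 2                 # cut separating two centers
            mistakes = sum((p[d] <= t) != (centers[l][d] <= t)
                           for p, l in zip(points, labels))
            if best is None or mistakes < best[0]:
                best = (mistakes, d, t)
    _, d, t = best
    left = [(p, l) for p, l in zip(points, labels) if p[d] <= t]
    right = [(p, l) for p, l in zip(points, labels) if p[d] > t]
    return (d, t,
            build_tree([p for p, _ in left], [l for _, l in left],
                       centers, [i for i in ids if centers[i][d] <= t]),
            build_tree([p for p, _ in right], [l for _, l in right],
                       centers, [i for i in ids if centers[i][d] > t]))

def predict(tree, p):
    while not isinstance(tree, int):
        d, t, lo, hi = tree
        tree = lo if p[d] <= t else hi
    return tree

# Usage: three well-separated 2-D blobs; the tree reproduces k-means exactly.
blobs = [(0, 0), (10, 0), (0, 10)]
offsets = [(0, 0), (1, 0), (0, 1), (1, 1)]
points = [(bx + ox, by + oy) for (ox, oy) in offsets for (bx, by) in blobs]
centers, labels = kmeans(points, 3)
tree = build_tree(points, labels, centers, list(range(3)))
```

On well-separated data every greedy cut incurs zero mistakes, so the tree's leaf assignment coincides with the unconstrained k-means assignment; the interesting regime the talk addresses is how much k-means cost such trees must sacrifice in general.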
General Seminar info:
Web:
tverven.github.io/tiai-seminar/
Google calendar
Upcoming speakers:
- September 5: Lesia Semenova
- October 10: Ulrike von Luxburg
Best Regards,
Michal Moshkovitz
Also on behalf of TIAI co-organizers Suraj Srinivas and Tim
van Erven