Dear all,
I'm forwarding an announcement of an online talk at the UvA by my PhD student Dirk van der Hoeven that might be of more general interest.
Best regards, Tim
-------- Forwarded Message --------
Subject: Next SPIP talk: Dirk van der Hoeven
Date: Fri, 30 Oct 2020 19:00:47 +0100
From: SPIP Meetings <spip.meetings@gmail.com>
To: [...]
Dear All,
We are happy to invite you to our next SPIP talk on *Friday, 6th November* from *16:00-17:00*. Our speaker is Dirk van der Hoeven, and he will talk about '_Exploiting the Surrogate Gap in Online Multiclass Classification_'. Dirk is a PhD student at Leiden University with Tim van Erven, and he will join Nicolò Cesa-Bianchi's group as a postdoc in December.
_Zoom Details:_

Topic: SPIP - Dirk van der Hoeven
Time: Nov 6, 2020 04:00 PM Amsterdam, Berlin, Rome, Stockholm, Vienna
Join Zoom Meeting: https://uva-live.zoom.us/j/82915951912
Meeting ID: 829 1595 1912

_Abstract:_ We present Gaptron, a randomized first-order algorithm for online multiclass classification. In the full information setting we show expected mistake bounds with respect to the logistic loss, hinge loss, and the smooth hinge loss with constant regret, where the expectation is with respect to the learner's randomness. In the bandit classification setting we show that Gaptron is the first linear time algorithm with O(K sqrt(T)) expected regret, where K is the number of classes. Additionally, the expected mistake bound of Gaptron does not depend on the dimension of the feature vector, contrary to previous algorithms with O(K sqrt(T)) regret in the bandit classification setting. We present a new proof technique that exploits the gap between the zero-one loss and surrogate losses rather than exploiting properties such as exp-concavity or mixability, which are traditionally used to prove logarithmic or constant regret bounds.
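For those less familiar with the setting: below is a minimal sketch of the full-information online multiclass protocol the abstract refers to. The learner shown is a standard multiclass perceptron used purely as a stand-in; it is not Gaptron, whose randomized update is the subject of the talk, and all names here are illustrative.

    import numpy as np

    def online_multiclass(stream, K, d, lr=1.0):
        """Run a multiclass perceptron over a stream of (x, y) pairs,
        with x in R^d and labels y in {0, ..., K-1}."""
        W = np.zeros((K, d))                 # one weight vector per class
        mistakes = 0
        for x, y in stream:
            y_hat = int(np.argmax(W @ x))    # predict the highest-scoring class
            if y_hat != y:                   # full information: true label is revealed
                mistakes += 1
                W[y] += lr * x               # perceptron-style update on mistakes
                W[y_hat] -= lr * x
        return W, mistakes

In the bandit setting discussed in the abstract, the learner would instead observe only whether y_hat was correct, not the true label y, which is what makes O(K sqrt(T)) regret in linear time notable.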
The organizing team
machine-learning-nederland@list.uva.nl