Dear all,
Before the summer really starts, we have a very interesting invited speaker in the thematic seminar on Friday next week:
*Gergely Neu* (Universitat Pompeu Fabra, http://cs.bme.hu/~gergo/)
*Friday June 25*, 16.00-17.00
Online on Zoom: https://uva-live.zoom.us/j/81805477265 (Meeting ID: 818 0547 7265)
Please also join for online drinks after the talk.
*Information-Theoretic Generalization Bounds for Stochastic Gradient Descent*
We study the generalization properties of the popular stochastic gradient descent method for optimizing general non-convex loss functions. Our main contribution is to provide upper bounds on the generalization error that depend on local statistics of the stochastic gradients evaluated along the path of iterates computed by SGD. The key factors our bounds depend on are the variance of the gradients (with respect to the data distribution), the local smoothness of the objective function along the SGD path, and the sensitivity of the loss function to perturbations of the final output. Our key technical tool is a combination of the information-theoretic generalization bounds previously used for analyzing randomized variants of SGD with a perturbation analysis of the iterates.
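As background (not part of the speaker's abstract): the information-theoretic generalization bounds referred to above are typically of the Xu-Raginsky type, which control the expected generalization gap via the mutual information between the learned weights and the training sample. A standard statement of such a bound, shown here purely for illustration and not as the result of the talk, is

\[
\Big|\, \mathbb{E}\big[ L_\mu(W) - L_S(W) \big] \,\Big|
\;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W;S)},
\]

where $S=(Z_1,\dots,Z_n)$ is the training sample of size $n$, $W$ is the (possibly randomized) output of the learning algorithm, $L_S$ and $L_\mu$ denote the empirical and population risks, $I(W;S)$ is the mutual information between output and sample, and the loss $\ell(w,Z)$ is assumed to be $\sigma$-subgaussian for every $w$. The talk builds on refinements of bounds of this kind, combined with a perturbation analysis of the SGD iterates.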
Seminar organizers: Tim van Erven and Botond Szabo
Dear all,
This is just to remind you of Gergely's talk tomorrow, with online drinks afterwards; all details are in the announcement above.
--
Tim van Erven
tim@timvanerven.nl
www.timvanerven.nl