« Adaptive high order stochastic descent algorithms » at the NANMAT 2022 conference

This is a talk presented at the Numerical Analysis, Numerical Modeling, Approximation Theory (NA-NM-AT 2022) conference, Cluj-Napoca, Romania, October 26-28, 2022.

Talk materials: the slides of the presentation.

Abstract: Motivated by statistical learning applications, stochastic descent optimization algorithms are widely used today to tackle difficult numerical problems. One of the best known among them, Stochastic Gradient Descent (SGD), has been extended in various ways, resulting in Adam, Nesterov acceleration, momentum methods, etc. After a brief introduction to this framework, we introduce in this talk a new approach, called SGD-G2, which is a high order Runge-Kutta stochastic descent algorithm; the procedure allows for step adaptation in order to strike an optimal balance between convergence speed and stability. Numerical tests on standard machine learning datasets are also presented, together with further theoretical extensions.
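To make the "Runge-Kutta descent" idea concrete, here is a minimal sketch, assuming a Heun (second order Runge-Kutta) discretization of the gradient flow ODE dθ/dt = -∇f(θ) driven by a stochastic gradient oracle; the names rk2_sgd_step, noisy_grad, and h are illustrative, and this is not the SGD-G2 algorithm itself, which in addition adapts the step size.

```python
import numpy as np

def rk2_sgd_step(theta, grad, h):
    """One Heun (second order Runge-Kutta) step for the gradient flow
    ODE d(theta)/dt = -grad(theta), fed with a stochastic gradient oracle.
    Illustrative only: SGD-G2 additionally adapts the step h."""
    g1 = grad(theta)                     # slope at the current point (Euler slope)
    g2 = grad(theta - h * g1)            # slope at the Euler predictor
    return theta - 0.5 * h * (g1 + g2)   # average the two slopes (Heun update)

# Toy usage: minimize f(theta) = ||theta||^2 from a noisy gradient.
rng = np.random.default_rng(0)
noisy_grad = lambda t: 2.0 * t + 0.01 * rng.standard_normal(t.shape)
theta = np.ones(3)
for _ in range(200):
    theta = rk2_sgd_step(theta, noisy_grad, h=0.1)
print(theta)  # approaches the minimizer at the origin
```

In the spirit of classical adaptive Runge-Kutta schemes, the gap between the Euler predictor and the Heun update can serve as a local error estimate for tuning h; the precise adaptation rule used by SGD-G2 is the subject of the talk.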
