« Adaptive high order stochastic descent algorithms » at the NA-NM-AT 2022 conference

This is a talk presented at the Numerical Analysis, Numerical Modeling, Approximation Theory (NA-NM-AT 2022) conference, Cluj-Napoca, Romania, October 26-28, 2022.

Talk materials: the slides of the presentation.

Abstract: Motivated by statistical learning applications, stochastic descent optimization algorithms are widely used today to tackle difficult numerical problems. One of the best known among them, Stochastic Gradient Descent (SGD), has been extended in various ways, resulting in Adam, Nesterov acceleration, momentum methods, etc. After a brief introduction to this framework, we introduce in this talk a new approach, called SGD-G2, a high-order Runge-Kutta stochastic descent algorithm; the procedure allows for step adaptation in order to strike an optimal balance between convergence speed and stability. Numerical tests on standard machine learning datasets are also presented, together with further theoretical extensions.
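To give a flavor of the idea, here is a minimal Python sketch of a second-order Runge-Kutta descent with step adaptation, in the spirit of the talk: the gap between the plain SGD (Euler) step and a Heun-type second-order step serves as a local error estimate that drives the learning-rate update. The function names, tolerance, and growth/shrink factors are illustrative assumptions, not the published SGD-G2 scheme.

```python
import numpy as np

def rk2_adaptive_descent(grad, w, lr=0.1, n_steps=100,
                         tol=1e-2, shrink=0.9, grow=1.1):
    """Heun-type second-order descent with step adaptation (illustrative)."""
    for _ in range(n_steps):
        g1 = grad(w)                      # slope at the current point
        w_euler = w - lr * g1             # first-order trial step (plain SGD)
        g2 = grad(w_euler)                # slope at the trial point
        w_rk2 = w - lr * 0.5 * (g1 + g2)  # second-order (Heun) update
        # the gap between the two updates acts as a local error estimate
        err = np.linalg.norm(w_rk2 - w_euler)
        # shrink the step when the estimate is large, grow it otherwise
        lr = lr * shrink if err > tol else lr * grow
        w = w_rk2                         # advance with the higher-order update
    return w, lr
```

For instance, on a quadratic loss with `grad = lambda w: A @ w - b`, the step size shrinks automatically when the iterates start to oscillate and grows again once the trajectory stabilizes, which is the speed/stability balance the abstract refers to.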

« Algorithms that get old: the case of generative deep neural networks », LOD 2022 conference

This is a talk presented at the 8th International Online & Onsite Conference on Machine Learning, Optimization, and Data Science (LOD 2022), September 18-22, 2022, Certosa di Pontignano, Siena, Tuscany, Italy.

Talk materials: the slides of the presentation.