Round table at the Dauphine Digital Days

Contribution to the round table « Tools, issues and current practice in media boards » at the Dauphine Digital Days, held Nov 21-23, 2022 at the Université Paris Dauphine – PSL, Paris, France.

Video (YouTube)

Executive summary: software in general, and AI in particular, is used for many repetitive tasks in media (grammar correction, translation, data search, article writing when the format is known, e.g. a ‘trading day report’ or an ‘election report’). But the same techniques can also be used for more creative tasks, cf. craiyon (try with « windy day in Paris »), an AI-generated singer, the Midjourney gallery (article on a prize won by an AI-generated image).

This opens the way to « deep fake » creation (e.g. this YouTube deepfake), i.e. the creation of objects that are fake but pretend to be true. Deep fakes can be, and have been, used to do harm, and we cannot ignore this. Note that fake objects can still impact the real world (rumors can affect people, and even the stock market, banks, etc.). But how to distinguish a ‘real’ object from a ‘fake’ one? This is a difficult task, and it is not certain that technology alone can solve it entirely. Some regulation is necessary, see our deep fakes report. But ultimately this is within our hands and, as always, can be tackled with an ounce of good will.

Conference badge 🙂

« Adaptive high order stochastic descent algorithms » at the NANMAT 2022 conference

This is a talk presented at the Numerical Analysis, Numerical Modeling, Approximation Theory (NA-NM-AT 2022) conference, Cluj-Napoca, Romania, Oct 26-28, 2022.

Talk materials: the slides of the presentation.

Abstract: Motivated by statistical learning applications, stochastic descent optimization algorithms are widely used today to tackle difficult numerical problems. One of the best known among them, the Stochastic Gradient Descent (SGD), has been extended in various ways, resulting in Adam, Nesterov, momentum, etc. After a brief introduction to this framework, we introduce in this talk a new approach, called SGD-G2, which is a high order Runge-Kutta stochastic descent algorithm; the procedure allows for step adaptation in order to strike an optimal balance between convergence speed and stability. Numerical tests on standard datasets in machine learning are also presented, together with further theoretical extensions.
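To give a rough flavor of the idea (this is a minimal sketch, not the actual SGD-G2 procedure from the talk), the snippet below runs a second-order Runge-Kutta (Heun-type) stochastic descent step on a toy least-squares problem, with a crude step-size adaptation based on the discrepancy between the first- and second-order proposals. All names, the toy problem, and the adaptation rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize ||A x - b||^2 using mini-batches.
A = rng.normal(size=(1000, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.1 * rng.normal(size=1000)

def grad(x, idx):
    """Mini-batch gradient of the least-squares loss."""
    Ai, bi = A[idx], b[idx]
    return 2.0 * Ai.T @ (Ai @ x - bi) / len(idx)

def heun_sgd(x0, h=0.05, n_iter=500, batch=32, tol=1e-2):
    """Second-order (Heun) stochastic descent with a simple adaptive step.

    The step h is shrunk when the Euler and Heun proposals disagree too much
    (a proxy for instability) and gently increased otherwise. This adaptation
    rule is an illustrative assumption, not the SGD-G2 rule from the talk.
    """
    x = x0.copy()
    for _ in range(n_iter):
        idx = rng.choice(len(b), size=batch, replace=False)
        g1 = grad(x, idx)
        x_euler = x - h * g1              # first-order (plain SGD) proposal
        g2 = grad(x_euler, idx)           # gradient re-evaluated at the proposal
        x_heun = x - 0.5 * h * (g1 + g2)  # second-order (Heun) update
        err = np.linalg.norm(x_heun - x_euler)  # local error estimate
        h = 0.5 * h if err > tol else 1.05 * h
        x = x_heun
    return x

x_hat = heun_sgd(np.zeros(20))
print("distance to x_true:", np.linalg.norm(x_hat - x_true))
```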

« Algorithms that get old: the case of generative deep neural networks », LOD 2022 conference

This is a talk presented at the 8th International Online & Onsite Conference on Machine Learning, Optimization, and Data Science (LOD 2022), September 18-22, 2022, Certosa di Pontignano, Siena, Tuscany, Italy.

Talk materials: the slides of the presentation.

Workshop on « Models, Human Behaviour and Infectious Diseases », Institut Pasteur, Paris, May 23rd 2022

Workshop on « Models, Human Behaviour and Infectious Diseases » of the Coordinated Action on Modelling of Infectious Diseases, Institut Pasteur, Paris, May 23rd, 2022.

Slides of the talk « From vaccination to lock-down compliance: Mean Field Games approaches to behavioral epidemiology »

« Convergence dynamics of Generative Adversarial Networks: the dual metric flows » at the CADL workshop (ICPR 2020 conference)

This is a talk presented at the CADL (Computational Aspects of Deep Learning) workshop of the 25th ICPR (ICPR 2020) conference, held virtually in Milano, Italy, Jan 10-15, 2021.

Talk materials: