Talk at the conference: slides
4th J.P. Morgan Global Machine Learning Conference, Paris, Nov. 29, 2022
Invited joint talk « A few key issues in Finance that Machine Learning is Helping Solve », presented with Pierre Brugiere at the 4th J.P. Morgan Global Machine Learning Conference, Paris, Nov. 29, 2022.
Talk materials: slides, link to the associated paper.
Round table at the Dauphine Digital Days
Participation in the round table « Tools, issues and current practice in media boards » at the Dauphine Digital Days, held Nov 21-23, 2022 at Université Paris Dauphine – PSL, Paris, France.
Executive summary: software in general, and AI in particular, is used for many repetitive tasks in media (grammar correction, translation, data search, article writing when the format is known, such as a ‘trading day report’ or an ‘election report’). But the same techniques can also be used for more creative tasks, cf. craiyon (try with « windy day in Paris »), singer, Midjourney gallery (paper on a prize won).
This opens the way to « deep fake » creation, e.g. youtube deepfake: the creation of objects that are fake but pretend to be true. Deep fakes can be, and have been, used to do harm, and we cannot ignore this. Note that fake objects can still impact the real world (rumors can affect people, and even the stock market and banks, etc.). But how do we distinguish a ‘real’ object from a ‘fake’ one? This is a difficult task, and it is not certain that technology can solve it entirely. Some regulation is necessary, see our deep fakes report. But ultimately this is in our hands and, as always, can be tackled with an ounce of good will.
Conference badge 🙂
Adaptive high order stochastic descent algorithms, NANMAT 2022 conference
This is a talk presented at the Numerical Analysis, Numerical Modeling, Approximation Theory (NA-NM-AT 2022) conference, Cluj-Napoca, Romania, Oct 26-28 2022
Talk materials: the slides of the presentation.
Abstract: Motivated by statistical learning applications, stochastic descent optimization algorithms are widely used today to tackle difficult numerical problems. One of the best known among them, Stochastic Gradient Descent (SGD), has been extended in various ways, resulting in Adam, Nesterov, momentum, etc. After a brief introduction to this framework, we introduce in this talk a new approach, called SGD-G2, which is a high-order Runge-Kutta stochastic descent algorithm; the procedure allows for step adaptation in order to strike an optimal balance between convergence speed and stability. Numerical tests on standard machine learning datasets are also presented, together with further theoretical extensions.
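To give a flavor of the idea (this is a minimal illustrative sketch, not the exact SGD-G2 procedure from the talk), one can compare a plain SGD step (explicit Euler) with a second-order Runge-Kutta (Heun) step on the same gradient and use the discrepancy between the two as a local error estimate that drives the learning-rate adaptation; the function names and the step-size controller below are hypothetical choices for illustration:

```python
import numpy as np

def heun_adaptive_step(w, grad, h, tol=1e-3, h_max=0.5):
    """One illustrative adaptive descent step.

    Compares an Euler (plain SGD) step with a second-order Heun step
    and shrinks or grows the learning rate h from their discrepancy.
    Sketch only -- not the exact SGD-G2 algorithm of the talk.
    """
    g1 = grad(w)
    w_euler = w - h * g1                    # plain SGD (explicit Euler) step
    g2 = grad(w_euler)
    w_heun = w - h * (g1 + g2) / 2.0        # second-order Runge-Kutta (Heun) step
    err = np.linalg.norm(w_heun - w_euler)  # local error estimate
    # simple step-size controller: shrink when the error is too large,
    # grow cautiously (up to h_max) otherwise
    h = 0.5 * h if err > tol else min(1.1 * h, h_max)
    return w_heun, h

# usage on a toy quadratic loss f(w) = 0.5 * ||w||^2, whose gradient is w
w, h = np.array([1.0, -2.0]), 0.1
for _ in range(100):
    w, h = heun_adaptive_step(w, lambda v: v, h)
print(np.linalg.norm(w))  # the iterates approach the minimum at 0
```

On this deterministic toy problem the error estimate makes the step size oscillate around a stable value while the iterates contract toward the minimizer; in the stochastic setting the same comparison is made on mini-batch gradients.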
Algorithms that get old: the case of generative deep neural networks, LOD 2022 conference
Mathematical models in immunology: talk at UMI-MSE Seminar, Jan 22
This is a talk titled « Mathematical models in immunology: weakly neutralizing antibodies, antibody dependent enhancement and reinfection », presented at the UMI-MSE Seminar, January 2022.
Talk materials: the slides of the presentation and the VIDEO RECORDING HERE.
Measure compression in generative and unsupervised learning: talk at the CPAM 2021 conference
This is a talk presented at the 4th Current Trends in Applied Mathematics conference (held virtually in Iasi at the Romanian Academy of Sciences, Nov 20, 2021).
Talk materials: the slides of the presentation and the video here.
Convergence dynamics of Generative Adversarial Networks: the dual metric flows — talk at the CADL workshop (ICPR 2020 conference)
This is a talk presented at the CADL (Computational Aspects of Deep Learning) workshop held during the 25th ICPR conference (held virtually in Milano, IT, Jan 10-15 2021)
Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent: talk at the ICPR 2020 conference
This is a talk on a joint work with Imen Ayadi presented at the 25th ICPR conference (held virtually in Milano, IT, Jan 10-15 2021)
Talk materials: the YouTube video of the presentation and the slides of the presentation.
Contact : Gabriel Turinici
CEREMADE, Université Paris Dauphine
Place du Maréchal de Lattre de Tassigny, 75016 Paris, France
Email: Gabriel.Turinici_AT_dauphine.fr (_AT_ = @)