Fake news sites: generative AI left unchecked

In a recent interview with Alexandre Boero from Clubic, we discuss how recent technologies have enabled a growing network of fake online media sites and entirely AI-generated journalists, designed to appear credible and to manipulate audiences and advertisers. This raises serious concerns about misinformation and the erosion of trust in digital content.

« Physics Informed Neural Networks for coupled radiation transport equations » at CM3P 2025 conference

This joint work with Laetitia LAGUZET has been presented at the 5th Computational Methods for Multi-scale, Multi-uncertainty and Multi-physics Problems Conference held in Porto, 1-4 July 2025.

Slides: HERE.

Abstract: Physics-Informed Neural Networks (PINNs) are a type of neural network designed to incorporate physical laws directly into their learning process. These networks can model and predict solutions for complex physical systems, even with limited or incomplete data, often using a mathematical formulation of a state equation supplemented with other information.
Introduced by Raissi et al. (2019), PINNs find applications in fields like physics, engineering, and fluid mechanics, particularly for solving partial differential equations (PDEs) and other dynamic systems. In this contribution we explore an adaptation of PINNs to multi-physics numerical simulation involving radiation transport equations; these equations describe the propagation of a Marshak-type wave in a temperature-dependent opaque medium and are considered a good benchmark for difficult multi-regime computations.
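To give the flavor of the PINN idea mentioned above, here is a minimal sketch, not the method of the talk: the loss combines the residual of a toy ODE u'(t) = -u(t) with an initial-condition penalty. The candidate function, the finite-difference derivative, and all names are illustrative stand-ins for a trained network and autodiff.

```python
import numpy as np

def candidate(t):
    # Stand-in for a trained network's output; here the exact
    # solution u(t) = exp(-t) of the toy ODE u'(t) = -u(t), u(0) = 1.
    return np.exp(-t)

def pinn_loss(u, t, h=1e-4):
    # PINN-style loss: mean squared equation residual at collocation
    # points plus an initial-condition penalty. Derivatives are taken
    # by central finite differences (a real PINN would use autodiff).
    du = (u(t + h) - u(t - h)) / (2 * h)
    residual = du + u(t)                      # residual of u' + u = 0
    ic = float((u(np.array([0.0])) - 1.0) ** 2)  # penalize u(0) != 1
    return np.mean(residual ** 2) + ic

t = np.linspace(0.0, 2.0, 50)   # collocation points in time
loss = pinn_loss(candidate, t)
```

Since the candidate is the exact solution, the loss is near zero; training a PINN amounts to driving this quantity down by adjusting network weights.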

« Transformer for Time Series: An Application to the S&P500 » at FICC 2025

This joint work with Pierre Brugière has been presented at the 8th Future of Information and Communication Conference 2025, held in Berlin, 28-29 April 2025.

Talk materials:

Abstract: Transformer models have been used extensively, with good results, in a wide range of machine learning applications, including Large Language Models and image generation. Here, we inquire into the applicability of this approach to financial time series. We first describe the dataset construction for two prototypical situations: a mean-reverting synthetic Ornstein-Uhlenbeck process on one hand and real S&P500 data on the other. We then present the proposed Transformer architecture in detail and finally discuss some encouraging results. For the synthetic data we predict the next move rather accurately, and for the S&P500 we obtain some interesting results related to quadratic variation and volatility prediction.
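The synthetic part of the dataset construction described above can be sketched as follows; this is an illustrative guess at the pipeline, not the paper's exact code. An Ornstein-Uhlenbeck path is simulated by Euler-Maruyama and cut into sliding windows of past values, each paired with the next value as target. All parameter values are assumptions.

```python
import numpy as np

def simulate_ou(n_steps, theta=1.0, mu=0.0, sigma=0.3, dt=0.01, x0=1.0, seed=0):
    # Euler-Maruyama discretization of dX = theta*(mu - X) dt + sigma dW:
    # the drift pulls the path back toward mu (mean reversion).
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        x[i] = x[i - 1] + theta * (mu - x[i - 1]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def make_windows(series, context=32):
    # Sliding windows: each input row holds `context` past values,
    # the target is the value that follows the window.
    X = np.stack([series[i:i + context] for i in range(len(series) - context)])
    y = series[context:]
    return X, y

path = simulate_ou(1000)
X, y = make_windows(path, context=32)   # X: (968, 32), y: (968,)
```

A sequence model such as a Transformer is then trained to map each window to its next value; the real S&P500 data would go through the same windowing step.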

« Convergence of a L2 regularized Policy Gradient Algorithm for the Multi Armed Bandit » at ICPR 2024

This joint work with Stefana-Lucia ANITA has been presented at the 27th International Conference on Pattern Recognition (ICPR) 2024, held in Kolkata, India, December 1-5, 2024.

Talk materials:

Abstract: Although Multi Armed Bandits (MAB) on one hand and the policy gradient approach on the other are among the most used frameworks of Reinforcement Learning, the theoretical properties of the policy gradient algorithm applied to MAB have not received enough attention. In this work we investigate the convergence of such a procedure when an L2 regularization term is present jointly with the ‘softmax’ parametrization. We prove convergence under appropriate technical hypotheses and test the procedure numerically, including in situations beyond the theoretical setting. The tests show that a time-dependent regularized procedure can improve over the canonical approach, especially when the initial guess is far from the solution.
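A minimal sketch of the algorithm class studied above, under assumed parameter values and with Bernoulli rewards (not the paper's experimental setup): a REINFORCE-style gradient ascent on softmax parameters, with an L2 penalty that shrinks the parameters at each step.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())     # shift for numerical stability
    return e / e.sum()

def train_bandit(means, steps=3000, lr=0.1, lam=0.01, seed=0):
    # Policy gradient on a multi-armed bandit with softmax
    # parametrization; lam is the weight of the L2 regularization
    # term, which appears as a -lam*theta shrinkage in the update.
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(means))
    for _ in range(steps):
        p = softmax(theta)
        a = rng.choice(len(means), p=p)        # sample an arm
        r = float(rng.random() < means[a])     # Bernoulli reward
        grad_log = -p
        grad_log[a] += 1.0                     # grad of log pi(a)
        theta += lr * (r * grad_log - lam * theta)
    return softmax(theta)

probs = train_bandit([0.2, 0.5, 0.8])   # arm 2 has the best mean reward
```

The L2 term keeps theta bounded, which is what makes the convergence analysis tractable; with lam = 0 the parameters can diverge as the policy becomes deterministic.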

« Optimal time sampling in physics-informed neural networks » at ICPR 2024

This talk has been presented at the 27th International Conference on Pattern Recognition (ICPR) 2024, held in Kolkata, India, December 1-5, 2024.

Talk materials:

Abstract: Physics-informed neural networks (PINN) are an extremely powerful paradigm used to solve equations encountered in scientific computing applications. An important part of the procedure is the minimization of the equation residual, which includes, when the equation is time-dependent, a time sampling. It has been argued in the literature that the sampling need not be uniform but should overweight initial time instants, yet no rigorous explanation was provided for this choice. In the present work we take some prototypical examples and, under standard hypotheses concerning the neural network convergence, show that the optimal time sampling follows a (truncated) exponential distribution. In particular, we explain when it is best to use uniform time sampling and when one should not. The findings are illustrated with numerical examples on a linear equation, Burgers’ equation and the Lorenz system.
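Sampling collocation times from a truncated exponential distribution, as in the result above, can be done by inverse-CDF sampling; the rate parameter below is an illustrative choice, not a value from the talk.

```python
import numpy as np

def truncated_exp_times(n, lam=2.0, T=1.0, seed=0):
    # Inverse-CDF sampling from the density proportional to
    # exp(-lam * t) on [0, T]: early time instants are drawn more
    # often than late ones, as the optimal-sampling result suggests.
    u = np.random.default_rng(seed).random(n)
    return -np.log(1.0 - u * (1.0 - np.exp(-lam * T))) / lam

t = truncated_exp_times(10000)   # all samples lie in [0, T]
```

As lam tends to 0 the density flattens and the sampling becomes uniform on [0, T], recovering the canonical choice.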

General chair of the conference FAAI24 « Foundations and applications of artificial intelligence », Iasi, October 28-30, 2024

General chair, with C. Lefter and A. Zalinescu, of the conference FAAI24 « Foundations and applications of artificial intelligence », Iasi, October 28-30, 2024. At the conference I also served as a tutorial presenter.

LLM and time series at the « 6th J.P. Morgan Global Machine Learning Conference », Paris, Oct 18th, 2024

Invited joint talk « Using LLMs techniques for time series prediction », with Pierre Brugière, presented at the 6th J.P. Morgan Global Machine Learning Conference held in Paris, October 18th, 2024.

Talk materials: slides (click here) and a link to the associated paper here.