AI4-MED: Personalized Medicine in the Era of Artificial Intelligence

 

I recently took part in a round table on AI in medicine at the AI4-MED conference. Several subjects were discussed, including concerns, safeguards, and trust in complex situations. A more detailed account of the discussion will follow on another outlet.

 

Deep hedging at FAAI 2025

During the FAAI 2025 conference I presented recent work with Pierre Brugière on deep hedging. See the paper here (arXiv version) and the slides here.

Executive summary: we introduce a deep-learning framework for hedging derivatives in markets with discrete trading and transaction costs, without assuming a specific stochastic model for the underlying asset. Unlike traditional approaches such as the Black–Scholes or Leland models, which rely on strong modeling assumptions and continuous-time approximations, the proposed method learns effective hedging strategies directly from data. A key contribution is its ability to perform well with very limited training data—using as few as 256 simulated price trajectories—while outperforming classical hedging schemes in numerical experiments under a geometric Brownian motion setting. This makes the approach both robust and practical for real-world applications where data and model certainty are limited.
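To make the objective concrete, here is a minimal numpy sketch, not the paper's implementation: it simulates 256 geometric Brownian motion price paths, runs a classical Black-Scholes delta hedge with proportional transaction costs, and reports the terminal hedging P&L statistics that a learned policy would be trained to improve. All parameter values are illustrative.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
S0, K, sigma, r, T = 100.0, 100.0, 0.2, 0.0, 0.25
n_paths, n_steps, cost = 256, 50, 1e-3      # 256 paths, proportional cost rate
dt = T / n_steps

# simulate geometric Brownian motion price paths
Z = rng.standard_normal((n_paths, n_steps))
increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)]))

def norm_cdf(x):
    # standard normal CDF, vectorized via math.erf
    return 0.5 * (1.0 + np.vectorize(erf)(x / np.sqrt(2.0)))

def bs_delta(s, t):
    # Black-Scholes call delta, used here as the classical benchmark strategy
    tau = max(T - t, 1e-12)
    d1 = (np.log(s / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm_cdf(d1)

# terminal hedging P&L of the delta strategy, including transaction costs;
# a deep-hedging network is trained to improve a risk measure of this P&L
cash, pos = np.zeros(n_paths), np.zeros(n_paths)
for k in range(n_steps):
    new_pos = bs_delta(S[:, k], k * dt)
    trade = new_pos - pos
    cash -= trade * S[:, k] + cost * np.abs(trade) * S[:, k]
    pos = new_pos
pnl = cash + pos * S[:, -1] - np.maximum(S[:, -1] - K, 0.0)
print(f"mean P&L {pnl.mean():.3f}, std {pnl.std():.3f}")
```

The mean P&L is close to minus the option price (the hedge is not financed by a premium here), and the standard deviation measures the residual risk from discrete trading and costs.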

Fake news sites: generative AI left unchecked

In a recent interview with Alexandre Boero from Clubic, we discuss how recent technologies have made possible a growing network of fake online media sites and journalists entirely generated by AI, designed to appear credible and to manipulate audiences and advertisers. This raises serious concerns about misinformation and the erosion of trust in digital content.

 

« Physics Informed Neural Networks for coupled radiation transport equations » at CM3P 2025 conference

This joint work with Laetitia LAGUZET has been presented at the 5th Computational Methods for Multi-scale, Multi-uncertainty and Multi-physics Problems Conference held in Porto, 1-4 July 2025.

Slides: HERE.

Abstract Physics-Informed Neural Networks (PINNs) are a type of neural network designed to incorporate physical laws directly into their learning process. These networks can model and predict solutions for complex physical systems, even with limited or incomplete data, often using a mathematical formulation of a state equation supplemented with other information.
Introduced by Raissi et al. (2019), PINNs find applications in fields like physics, engineering, and fluid mechanics, particularly for solving partial differential equations (PDEs) and other dynamic systems. In this contribution we explore a modification of PINNs for multi-physics numerical simulation involving radiation transport equations; these equations describe the propagation of a Marshak-type wave in a temperature-dependent opaque medium and are considered a good benchmark for difficult multi-regime computations.
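For intuition, here is a minimal numpy sketch of a PINN-style loss on a toy problem, u'(x) = -u(x) with u(0) = 1, not the radiation-transport system from the talk: the equation residual is penalized at collocation points together with a boundary term. A real PINN replaces the one-parameter ansatz with a neural network and uses automatic differentiation instead of finite differences.

```python
import numpy as np

# collocation points in [0, 2]
x = np.linspace(0.0, 2.0, 101)

def ansatz(x, a):
    # one-parameter trial solution; a real PINN uses a neural network here
    return np.exp(a * x)

def pinn_loss(a, h=1e-5):
    # residual of u'(x) + u(x) = 0, derivative by central finite differences
    u = ansatz(x, a)
    du = (ansatz(x + h, a) - ansatz(x - h, a)) / (2.0 * h)
    residual = du + u
    bc = (ansatz(0.0, a) - 1.0) ** 2            # boundary condition u(0) = 1
    return np.mean(residual**2) + bc

# the loss (nearly) vanishes at the exact solution a = -1
print(pinn_loss(-1.0), pinn_loss(-0.5))
```

Minimizing this loss over the network parameters is what "incorporating the physical law into the learning process" means in practice.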

« Transformer for Time Series: An Application to the S&P500 » at FICC 2025

This joint work with Pierre Brugière was presented at the 8th Future of Information and Communication Conference 2025, held in Berlin, 28-29 April 2025.

Talk materials:

Abstract: Transformer models have been used extensively, with good results, in a wide range of machine learning applications, including Large Language Models and image generation. Here, we investigate the applicability of this approach to financial time series. We first describe the dataset construction for two prototypical situations: a mean-reverting synthetic Ornstein-Uhlenbeck process on the one hand and real S&P500 data on the other. Then we present the proposed Transformer architecture in detail, and finally we discuss some encouraging results. For the synthetic data we predict the next move rather accurately, and for the S&P500 we obtain some interesting results related to quadratic variation and volatility prediction.
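For intuition only, here is a minimal numpy sketch of the core building block, single-head scaled dot-product self-attention applied to a window of returns. The weights are random and untrained, and this is not the architecture from the paper; it only shows the data flow from a return window to a scalar readout.

```python
import numpy as np

rng = np.random.default_rng(1)

def self_attention(X, Wq, Wk, Wv):
    # single-head scaled dot-product self-attention
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ V

# toy encoder pass: window of 32 daily returns, embedded in dimension d = 8
T, d = 32, 8
returns = rng.normal(0.0, 0.01, size=T)
X = returns[:, None] @ rng.normal(size=(1, d))          # linear "embedding"
X += np.sin(np.arange(T)[:, None] / 10.0)               # crude positional encoding
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)
prediction = float(H[-1] @ rng.normal(size=d))          # readout for the next move
print(H.shape, prediction)
```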

Portfolio management, risk management, statistics and dynamics of financial derivatives, M2 ISF cl+App, 2019+

Teacher: Gabriel TURINICI

Content

  • Classical portfolio management under the historical probability measure: optimal portfolio, arbitrage, APT, beta
  • Financial derivatives valuation and risk neutral probability measure
  • Volatility trading
  • Portfolio insurance: stop-loss, options, CPPI, Constant-Mix
  • Hidden or exotic options: ETFs, shorts
  • Deep learning and portfolio strategies

Documents

NOTA BENE: All documents are copyrighted and may not be copied, printed, or distributed in any way without prior WRITTEN consent from the author.

Classical portfolio management (historical measure)
  • Theoretical part: slides
  • Implementation: Python data: CSV format and PICKLE; other data: shorter CSV (30/40); program: statistical normality tests (to fill in); program: optimal portfolio vs. random portfolios, backtest (to fill in); full program: here
  • Results: optimalCAC40 30_p5, optimalCAC40 30_p15, optimalCAC40 30_p30

Financial derivatives and risk-neutral probability
  • Theoretical part: BOOK M1 « Mouvement Brownien et évaluation d’actifs dérivés »; slides: reminders on financial derivatives
  • Implementation: code: Brownian and log-normal scenario generation, Euler-Maruyama version to correct + MC computation; Monte Carlo option price; codes: price & delta of vanilla call and put options, log-normal (= Black-Scholes) model; delta hedging: initial code (notebook or python), final version (notebook or python), Bachelier model version

Volatility trading
  • Theoretical part: pdf document
  • Implementation: code: vol trading (another version here)
  • Results: results

Portfolio insurance: stop-loss, options, CPPI, Constant-Mix
  • Theoretical part: slides; section 6.2 of the M1 course textbook; written notes; YouTube CPPI video: part 1/2, part 2/2; beta slippage: presentation
  • Implementation: code: stop-loss, CPPI, CPPI v2, Constant-Mix; data: dataC40
  • Results: stop-loss, CPPI, constant-mix

Deep learning for option pricing: basic price interpolation, advanced deep hedging
  • Implementation: simple price interpolation code to fill in: python notebook or pure python (rename *.txt to *.py); corrected code: python notebook; advanced deep hedging: see the article https://arxiv.org/abs/2505.22836

Tools
  • code; example: download Yahoo! Finance data

Misc
  • Project (old version)
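As a companion to the Monte Carlo items above, here is a minimal sketch (illustrative parameters, not the course code) comparing a Monte Carlo vanilla call price under the log-normal model with the closed-form Black-Scholes price:

```python
import numpy as np
from math import erf, exp, log, sqrt

S0, K, r, sigma, T = 100.0, 105.0, 0.01, 0.2, 1.0
rng = np.random.default_rng(42)

# Monte Carlo under the log-normal (Black-Scholes) model: exact terminal sampling
n = 200_000
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
mc_price = exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# closed-form Black-Scholes call price for comparison
def N(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * N(d1) - K * exp(-r * T) * N(d2)

print(f"MC price {mc_price:.4f}, Black-Scholes price {bs_price:.4f}")
```

With 200,000 samples the two prices agree up to a few cents of Monte Carlo error.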

Historical note: the 2019/21 course name was « Approches déterministes et stochastiques pour la valuation d’options ».

Statistical Learning, M1 Math 2024+

Instructor: Gabriel TURINICI

Preamble: this course is only an introduction, in a limited amount of time, to statistical and machine learning. It prepares for next year’s courses (some of them on my www page, cf. « Deep Learning » and « Reinforcement Learning »).


1/ Introduction to statistical learning: supervised, unsupervised and reinforcement learning; the general learning procedure; model evaluation; under- and overfitting

2/ K-nearest neighbors and the « curse of dimensionality »

3/ Regression in high dimensions, variable selection and model regularization (ridge, lasso)

4/ Stochastic gradient descent, mini-batch

5/ Neural networks: introduction, operator, datasets, training, examples, implementations

6/ K-means clustering
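Topics 3/ and 4/ above combine naturally. Here is a minimal numpy sketch (synthetic data, illustrative hyperparameters, not course material) of mini-batch stochastic gradient descent on a ridge-regression objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic regression data: y = X w_true + noise
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# mini-batch SGD on the ridge objective (1/|b|)||X_b w - y_b||^2 + lam ||w||^2
lam, lr, batch, epochs = 0.01, 0.05, 32, 200
w = np.zeros(d)
for _ in range(epochs):
    idx = rng.permutation(n)                   # reshuffle each epoch
    for start in range(0, n, batch):
        b = idx[start:start + batch]
        grad = 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b) + 2.0 * lam * w
        w -= lr * grad

err = np.linalg.norm(w - w_true)
print(f"recovery error {err:.4f}")
```

The estimate converges to a slightly shrunk version of the true weights; increasing `lam` increases the shrinkage, illustrating the regularization/bias trade-off.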


Main document for the theoretical presentations (no distribution authorized without WRITTEN consent from the author): see your « teams » group.

Exercises, implementations: see the « teams » group.


Analyse numérique: évolution (M1 Math, Université Paris Dauphine – PSL, 2005-11, 2019-2025)

Course coordinator: Gabriel TURINICI
Content:
1 Introduction
2 ODEs
3 Derivative computation and control
4 SDEs
Bibliography: distributed lecture notes

Course handouts and other documents

NOTA BENE: All documents are copyrighted and may not be distributed without prior WRITTEN consent from the author.

Course material: book in English, « Numerical simulations of time-dependent problems: applied to epidemiology, artificial intelligence and finance »

Lab (TP) implementations:

ODEs: exercise on accuracy, exercise on stability, SIR (EE+H+RK4), order for the EE/H schemes on SIR

SIR (control version, adjoint / backward); (2023 version here)

SDEs, 2025 version; implement:

1/ Brownian motion simulation

2/ computation of the integral of W with respect to W

3/ Euler-Maruyama computation for the Ornstein-Uhlenbeck equation

4/ weak Euler-Maruyama computation for the log-normal (Black-Scholes) model
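Items 1/ and 3/ above can be sketched as follows (illustrative parameters, not the official lab solution); the terminal sample mean is checked against the exact Ornstein-Uhlenbeck expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 1000, 2000
dt = T / n_steps

# 1/ Brownian motion increments: independent N(0, dt) samples
dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))

# 3/ Euler-Maruyama for the Ornstein-Uhlenbeck SDE dX = theta*(mu - X) dt + sigma dW
theta, mu, sigma, X0 = 2.0, 1.0, 0.3, 0.0
X = np.full(n_paths, X0)
for k in range(n_steps):
    X = X + theta * (mu - X) * dt + sigma * dW[:, k]

# exact mean E[X_T] = mu + (X0 - mu) * exp(-theta * T) for comparison
exact_mean = mu + (X0 - mu) * np.exp(-theta * T)
print(f"empirical mean {X.mean():.4f}, exact mean {exact_mean:.4f}")
```

The full Brownian paths, if needed for item 2/, are obtained with `np.cumsum(dW, axis=1)`.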

Older versions

2022/23:

2020/21: