21-22 Nov. 2024, Paris (France)
Neural network approach for inference of nonlinear mixed effect models based on ordinary differential equations
Zhe Li 1, Quentin Clairon 2, Rodolphe Thiébaut 3
1 : University of Bordeaux, Department of Public Health, Inserm Bordeaux Population Health Research Centre, Inria SISTM
Univ. Bordeaux; Inserm U1219 BPH; Inria SISTM team; VRI
2 : University of Bordeaux, Department of Public Health, Inserm Bordeaux Population Health Research Centre, Inria SISTM, Bordeaux, France
Univ. Bordeaux; Inserm U1219 BPH; Inria SISTM team; VRI
3 : University of Bordeaux, Department of Public Health, Inserm Bordeaux Population Health Research Centre, Inria SISTM, Bordeaux, France
Inserm Bordeaux Population Health Research Center UMR 1219, Inria centre of Bordeaux University, team SISTM, F-33000 Bordeaux, France

Background: We propose a neural-network-based method for the estimation of parameters within non-linear mixed effect models based on ordinary differential equations (NLME-ODEs) from population data, i.e. data harvested from different subjects. For this problem, the SAEM algorithm, which relies on a Markov Chain Monte Carlo (MCMC) procedure to reconstruct the subject-specific parameter posterior distributions, gives accurate estimates for models and data of reasonable dimensions. However, when the model/data dimension increases and the available data per subject become sparse, such inference procedures can degrade due to MCMC convergence issues. Thus, alternative methods need to be proposed to regularize the inverse problem of parameter estimation in such complex settings, which are increasingly considered for real data analysis, in clinical trials for example.

Method: We rely on a Bayesian variational approach to construct an approximation (a lower bound) of the data likelihood, giving rise to the so-called Evidence Lower Bound (ELBO) of the marginal likelihood as the new objective function. Such approaches replace the true posterior distribution of the subject-specific parameters (playing here the role of latent variables) by an approximation chosen from a family of candidate distributions. In our deep-learning framework, this family is composed of parameterized neural networks, leading to the so-called variational auto-encoder setting. As is often done for regularization purposes, the ELBO cost function can be seen as a trade-off between the data fidelity permitted by the approximated model and fidelity to the original one. The model is trained by computing the total ELBO over all subjects at all available time points. The idea is to transform a sampling-based inference procedure, which can be intractable, into an optimization problem that can be solved with gradient-based methods, among others. In this way, we bypass the sampling step of MCMC-based approaches in order to scale up to high-dimensional/complex settings without excessive computational cost or concerns about inaccurate estimation. Once the point-wise estimation is done, we proceed to variance estimation via an Observed Fisher Information approximation.
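The ELBO trade-off described above can be illustrated on a deliberately simple toy model (a one-dimensional linear-Gaussian model with a Gaussian variational posterior, not the authors' NLME-ODE setting or network architecture; all numerical values here are hypothetical). The sketch estimates ELBO = E_q[log p(y|z)] + E_q[log p(z)] - E_q[log q(z)] by Monte Carlo, using the reparameterization trick that makes the bound amenable to gradient-based optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative only): one subject with latent parameter
# z ~ N(0, 1) (prior) and observations y | z ~ N(z, sigma_y^2).
sigma_y = 0.5
y = np.array([0.9, 1.1, 1.0])  # hypothetical observations


def log_normal(x, mu, sigma):
    """Log-density of N(mu, sigma^2) evaluated at x."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)


def elbo(mu_q, sigma_q, n_samples=10_000):
    """Monte Carlo ELBO for a Gaussian variational posterior q(z) = N(mu_q, sigma_q^2).

    ELBO = E_q[log p(y | z)] + E_q[log p(z)] - E_q[log q(z)],
    estimated with reparameterized samples z = mu_q + sigma_q * eps, eps ~ N(0, 1).
    """
    eps = rng.standard_normal(n_samples)
    z = mu_q + sigma_q * eps
    log_lik = log_normal(y[:, None], z[None, :], sigma_y).sum(axis=0)
    log_prior = log_normal(z, 0.0, 1.0)
    log_q = log_normal(z, mu_q, sigma_q)
    return (log_lik + log_prior - log_q).mean()


# A variational posterior close to the true posterior (here roughly N(0.92, 0.28^2))
# yields a tighter (larger) bound than a badly mismatched one.
good = elbo(0.9, 0.3)
poor = elbo(-2.0, 0.3)
assert good > poor
```

Maximizing this bound over (mu_q, sigma_q), or over neural-network parameters in the variational auto-encoder setting, is what replaces MCMC sampling with an optimization problem.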
To validate our strategy, we consider a partially observed two-compartment pharmacokinetic model with first-order absorption and elimination. We generated 100 simulation replicates with 100 participants, with observations corrupted by measurement error. We use the relative bias in percent (RB%), the empirical variance and the relative root mean squared error (RRMSE) to evaluate the performance of our method. We evaluate the variance of the estimated parameters as well as the coverage rate of the derived 95% confidence intervals. Moreover, we check that our approximated method (which requires the introduction of additional nuisance parameters) does not come at the price of additional identifiability issues, which are highly critical for this inverse problem since its usefulness relies on the interpretability of the estimated parameters.
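A simulation of this kind can be sketched as follows (a minimal version with assumed rate constants, volume, dose and noise level; the actual parameter values, sampling times and inter-individual variability used in the study are not reproduced here). A standard two-compartment model with first-order absorption and elimination is integrated with a fixed-step RK4 scheme, and only the noisy central-compartment concentration is observed:

```python
import numpy as np

# Hypothetical rate constants, volume of distribution and dose (not the study's values).
ka, ke, k12, k21, V = 1.0, 0.15, 0.5, 0.3, 10.0
DOSE = 100.0


def rhs(state):
    """Amounts in (depot, central, peripheral) compartments."""
    a_depot, a_cent, a_peri = state
    return np.array([
        -ka * a_depot,                                       # first-order absorption
        ka * a_depot - (ke + k12) * a_cent + k21 * a_peri,   # central: in/out + exchange
        k12 * a_cent - k21 * a_peri,                         # peripheral exchange
    ])


def simulate(dose=DOSE, t_end=24.0, dt=0.01):
    """Fixed-step RK4 integration of the compartment amounts."""
    n = int(t_end / dt)
    state = np.array([dose, 0.0, 0.0])
    traj = np.empty((n + 1, 3))
    traj[0] = state
    for i in range(n):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = state
    return traj


traj = simulate()
rng = np.random.default_rng(1)
conc = traj[::100, 1] / V                          # partial observation: central conc. only
y_obs = conc + rng.normal(0.0, 0.05, conc.shape)   # additive measurement error (assumed)

# Sanity check: the total amount in the system never exceeds the dose,
# since elimination only removes drug.
assert traj.sum(axis=1).max() <= DOSE + 1e-6
```

In a population setting, each participant would receive its own parameter vector drawn from the population distribution before this forward simulation, and the metrics above (RB%, empirical variance, RRMSE) would then be computed over replicates of the fitted estimates.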

Results & Discussion: In terms of accuracy, our approach gives performance similar to the SAEM algorithm's estimates. Moreover, the practical coverage rate of the confidence intervals meets the theoretical expected value. We also ensured that our approach does not lead to new identifiability issues. The performance of the resulting algorithm is encouraging enough to move to more complex configurations, with stronger nonlinearities and modeling errors.

Keywords: Neural network, non-linear mixed effect, variational inference, ODE

 

