(288g) Optimal Therapy for a Pathogenic Disease: A Stochastic Optimal Control Approach

Authors: 
Diwekar, U., Vishwamitra Research Institute, Center for Uncertain Systems: Tools for Optimization and Management
Rico-Ramirez, Sr., V., Instituto Tecnologico de Celaya
Gonzalez-Alatorre, G., Instituto Tecnologico de Celaya
Ramirez-Enriquez, O., Instituto Tecnologico de Celaya


Mathematical modeling as a tool for the treatment of pathogenic diseases has been widely proposed in the literature. Most modeling approaches represent the immune system dynamics as deterministic optimal control problems. In such problems, the model constraints describe the evolution of the disease, which is characterized by a nonlinear set of ordinary differential equations. Constraints represent, for instance, the concentrations of pathogens, plasma cells, and antibodies, as well as a numerical indication of patient health. The dynamic equations are controlled by therapeutic agents that affect the rate of change of the system variables. Objective functions are generally integral equations that model the trade-off between pathogen concentration, organ health, and use of therapeutics. Deterministic approaches, however, do not consider uncertainties in the model parameters or variability among individuals. In practice, significant variability of the relevant parameters among patients, and within a given patient during the course of the disease, has been reported. The success of an optimal control method depends on the accuracy of the model, and omitting these uncertainties can lead to significant performance degradation. Therefore, the uncertainties inherent in the patient need to be addressed.

To include such uncertainties in the formulation, the aim of this paper is to use stochastic optimal control theory to develop protocols for the treatment of human diseases. Following the so-called Real Options Theory, we model time-dependent uncertainties as Ito processes. This results in an optimal control problem in which the constraints are stochastic differential equations and the objective function is an integral equation. The optimality conditions of the problem are obtained through the stochastic maximum principle, which leads to a boundary value problem.
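As a minimal sketch of what modeling a disease variable as an Ito process entails (the drift, diffusion, and parameter values below are illustrative assumptions, not the model used in this work), a pathogen concentration under a constant drug dose can be simulated with an Euler-Maruyama discretization:

```python
import math
import random

def simulate_pathogen(x0=1.0, u=0.5, T=10.0, n=1000, seed=0):
    """Euler-Maruyama simulation of an illustrative Ito process
        dx = (g*x - k*u*x) dt + sigma*x dW,
    where x is the pathogen concentration, u a constant drug dose,
    g a growth rate, k a drug efficacy, and sigma the intensity of
    the parametric uncertainty (all values are illustrative)."""
    g, k, sigma = 0.3, 0.8, 0.1
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment
        x += (g * x - k * u * x) * dt + sigma * x * dW
        x = max(x, 0.0)  # a concentration cannot go negative
    return x

# With a sufficient dose (k*u > g) the pathogen decays in expectation,
# while the dW term produces a different realization for each seed.
print(simulate_pathogen(u=0.5))
```

Each realization of the Wiener increments gives a different trajectory, which is precisely the patient-to-patient (and within-patient) variability that the deterministic formulation ignores.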
The boundary value problem is solved iteratively by combining the gradient method with a stochastic version of the Runge-Kutta method derived in this work. As an illustration of the proposed approach, we first solve a mathematical model for the evolution of a generic disease and obtain regimens for applying drugs in a manner that maximizes efficacy while minimizing side effects; a practical application to HIV treatment is also reported. We show that stochastic optimal control theory can indeed help develop clinical insight into monitoring and treating illness under uncertainties in the model parameters.
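The gradient-method iteration on the boundary value problem can be sketched as a forward-backward sweep. The sketch below uses a deterministic (expected-value) simplification with illustrative dynamics, cost, and parameter values of our own choosing, not the stochastic model or the stochastic Runge-Kutta scheme derived in the paper:

```python
def forward_backward_sweep(T=10.0, n=200, iters=300, alpha=0.05, u_max=2.0):
    """Gradient-method solution of the two-point boundary value problem
    from the maximum principle, on an illustrative simplified model:
        minimize   J = integral of (x^2 + r*u^2) dt
        subject to dx/dt = g*x - k*u*x,   x(0) = x0,
    with the drug dose bounded, 0 <= u <= u_max.
    Adjoint equation:  dlam/dt = -(2*x + lam*(g - k*u)),  lam(T) = 0.
    Control update:    u <- clip(u - alpha * dH/du, 0, u_max),
    where the Hamiltonian gradient is dH/du = 2*r*u - lam*k*x.
    All parameter values are illustrative."""
    g, k, r, x0 = 0.3, 0.8, 0.5, 1.0
    dt = T / n
    u = [0.0] * (n + 1)
    for _ in range(iters):
        # forward sweep: integrate the state under the current control
        x = [x0] * (n + 1)
        for i in range(n):
            x[i + 1] = x[i] + dt * (g - k * u[i]) * x[i]
        # backward sweep: integrate the adjoint from lam(T) = 0
        lam = [0.0] * (n + 1)
        for i in range(n, 0, -1):
            lam[i - 1] = lam[i] + dt * (2 * x[i] + lam[i] * (g - k * u[i]))
        # projected gradient step on the Hamiltonian's control derivative
        for i in range(n + 1):
            step = u[i] - alpha * (2 * r * u[i] - lam[i] * k * x[i])
            u[i] = min(max(step, 0.0), u_max)
    return x, u
```

The structure mirrors the iterative scheme described above: the state equation is integrated forward, the adjoint equation backward, and the control is corrected along the gradient until the optimality condition is approximately satisfied. In the stochastic setting, the forward integration is replaced by the stochastic Runge-Kutta scheme and the sweeps operate on the Ito-process constraints.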