Efficient assessment of uncertainty in treatment outcome for ODE models

The current paradigm in the clinic is that the maximum therapeutic benefit is obtained by killing the greatest possible number of cancer cells. The premise is that the larger the induced cell kill, the lower the risk of developing drug resistance – an analogy drawn from the experience of fighting bacteria with antibiotics. That is why chemotherapy (as well as other cytotoxic drugs) is generally administered in the maximum tolerated dose (MTD) regime. Obviously, the MTD assumption limits the amount of patient-specific information that can be utilized in a treatment protocol, because cancer-specific MTDs are established in large clinical trials. At the same time, however, it simplifies the treatment optimization problem that needs to be solved on a per-patient basis, because the only adjustable parameter is the interval between drug doses. This simplicity is important in the clinic, because in most cases there are no robust frameworks to handle the additional complexity of a drug dose optimization problem.

In recent years, some theoretical inroads have been made to design treatment protocols that depart from the MTD paradigm and hold the promise of being more effective in increasing patient survival, such as adaptive therapy. The concept of adaptive therapy is currently being investigated using various mathematical frameworks, which are necessary to establish robust adaptive drug dosage protocols. In many cases the mathematical formulation of the problem consists of ordinary differential equations (ODEs), as they offer some analytical tractability and there are many existing numerical solvers that can be utilized. The search for the optimal treatment is based either on analytical approaches, such as optimal control theory, or on brute-force exploration of the space of possible treatment options. The latter is obviously easier to implement, but is burdened with a high computational cost. In a typical scenario the optimal treatment schedule is searched for using average (nominal) values of the model parameters, and no uncertainties in the patient-specific parameters are considered. It is conceivable, however, that two different treatment protocols result in the same tumor burden at a specified time point for a given set of parameters, but one of them is more sensitive to parameter perturbations. Thus, a formal assessment of the uncertainty in treatment outcome under uncertainty in parameter values should be a part of any treatment exploration study. In this post I will show how to increase computational speed when attempting to assess the distribution of treatment outcomes related to uncertainty in ODE model parameters. The presented code is written in MATLAB, but the underlying idea is valid for any other programming language.

Let us consider a simple ODE model that could be used for adaptive therapy investigations. We describe the temporal evolution of two populations (N_1 and N_2) with different growth rates (r_1 > r_2) that compete for a limited amount of space (K) and respond differently to treatment (d_1 > d_2):

\frac{dN_1}{dt}=r_1N_1\left(1-\frac{N_1+N_2}{K}\right)-d_1u(t)N_1,

\frac{dN_2}{dt}=r_2N_2\left(1-\frac{N_1+N_2}{K}\right)-d_2u(t)N_2,

where u(t) describes the drug concentration, which under the usual pharmacokinetic assumptions is expressed as

u(t) = \sum_{t_i < t} D_i \exp(-c(t-t_i))

where D_i is the i-th drug dose, t_i is the moment of the i-th drug administration, and c is the clearance rate of the drug. In the code below we assume a constant dose, D_i = D.
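As a quick illustration, the concentration profile can be evaluated directly from this formula. Below is a minimal sketch using the dose, clearance rate, and administration schedule that appear later in this post:

%evaluating u(t) on a time grid for a given dose schedule
D = 0.25; %drug dose (assumed constant across administrations)
c = 0.2; %clearance rate
ti = [1 3 5 7 9 11 13]; %administration moments t_i
t = linspace(0,15,1501); %time grid
u = zeros(size(t));
for k = 1:length(ti)
    u = u + D*exp(-c*(t-ti(k))).*(t>ti(k)); %each dose contributes only for t > t_i
end
plot(t,u), xlabel('time'), ylabel('u(t)')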

Let us assume that we have already established the optimal drug administration protocol and we want to assess how the treatment will perform under different perturbations of the parameter values.

First we need a function that, for a given set of parameters, returns the total size of the population (N_1+N_2) at the simulation endpoint:

function PopEnd = solveModel( init, params, treatment, Tmax )
%%INPUT
%init - 2x1 vector defining initial sizes of both populations [N_1; N_2]
%params - structure with model parameters
%treatment - structure with the moments at which the drug is applied (t_i)
%Tmax - simulation time
%%OUTPUT
%PopEnd - final population size (N_1+N_2)

PopEnd = init;
T = [0 treatment.t Tmax];
for i = 2:length(T) %solve in each inter-dose interval
    sol = ode45(@model,[T(i-1) T(i)],PopEnd);
    PopEnd = sol.y(:,end); %take the last known population size as the next initial condition
end

PopEnd = sum(PopEnd); %N_1+N_2

%definition of model equations
    function y = model(t,x)
        y = zeros(2,1);
        %current drug concentration: u(t) = D*sum_{t_i<t} exp(-c(t-t_i))
        u = params.D*exp(-params.clr*t)*sum(exp(params.clr*(treatment.t(treatment.t<t))));
        %evaluating the right-hand side
        y(1) = params.r1*x(1)*(1-(x(1)+x(2))/params.K)-params.d1*u*x(1);
        y(2) = params.r2*x(2)*(1-(x(1)+x(2))/params.K)-params.d2*u*x(2);
    end

end
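For example, a single run with the nominal parameter values used throughout this post could look as follows:

%nominal parameter values
params.r1 = 0.17; params.r2 = 0.12; %growth rates (r_1 > r_2)
params.d1 = 0.54; params.d2 = 0.24; %drug sensitivities (d_1 > d_2)
params.K = 10^7; %carrying capacity
params.D = 0.25; %drug dose
params.clr = 0.2; %clearance rate
treatment.t = [1 3 5 7 9 11 13]; %drug administration moments
PopEnd = solveModel( [10^5; 10^3], params, treatment, 15 )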

We will use the above function to calculate the population size after the end of treatment for a large set of randomly perturbed nominal parameter values. In the following examples we will perturb the parameter values uniformly by up to 10%.

Basic for loop approach

The most basic approach is to solve the model N times in a for loop, each time with randomly generated parameters:

N = 1000; %number of trials

treatment.t = [1 3 5 7 9 11 13]; %treatment schedule

init = [10^5; 10^3]; %initial condition
Tmax = 15; %simulation endpoint

PopEnd = zeros(1,N); %vector with final population sizes
for i = 1:N
    %perturbing parameters by up to 10%
    params.r1 = 0.17*(1+(rand()-0.5)/5);
    params.r2 = 0.12*(1+(rand()-0.5)/5);
    params.d1 = 0.54*(1+(rand()-0.5)/5);
    params.d2 = 0.24*(1+(rand()-0.5)/5);
    params.K = 10^7*(1+(rand()-0.5)/5);
    params.D = 0.25; %dose is kept fixed
    params.clr = 0.2*(1+(rand()-0.5)/5);

    %solving the model
    PopEnd(i) = solveModel( init, params, treatment, Tmax );
end

In the generated histogram of the final population size (see the plot below) we see a substantial amount of variation in the treatment outcome.
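The histogram itself can be generated with a few lines, e.g. (bin settings are arbitrary):

figure
histogram(PopEnd) %or hist(PopEnd,30) in older MATLAB releases
xlabel('final population size (N_1+N_2)')
ylabel('number of trials')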

Calculating the above histogram for N = 1000 trials took about 7 seconds, i.e. about 140 solutions per second.

The first idea to speed up the computation for larger N would be to use multiple CPUs and spread the for loop iterations among them. There is, however, a better way – one that properly utilizes the CPU architecture.

Using a single ODE solver invocation

Modern CPUs can perform operations on whole arrays and thus execute many arithmetic operations simultaneously. For small (low-dimensional) ODE systems numerical solvers can't utilize that feature effectively, as the computation of the next step involves only a few variables at a time. In our case, however, we can simply recast the problem of solving a low-dimensional ODE system many times as a single solution of one large ODE system. Namely, we write the function calculating the right-hand side in such a way that the i-th set of randomly generated parameters corresponds to equations 2i-1 and 2i of the large system; for N = 2 parameter sets, for instance, the state vector is [N_1^{(1)}; N_2^{(1)}; N_1^{(2)}; N_2^{(2)}]. In other words, we feed the solver with all N sets of parameters and generate a set of 2N equations to be solved simultaneously.

function PopEnd = solveModelMult( init, params, treatment, Tmax )
%%INPUT
%init - 2Nx1 vector of initial population sizes, [N_1; N_2] stacked N times
%params - structure with Nx1 vectors of model parameters
%treatment - structure with the moments at which the drug is applied (t_i)
%Tmax - simulation time
%%OUTPUT
%PopEnd - Nx1 vector of final population sizes (N_1+N_2)

PopEnd = init; %initial population
T = [0 treatment.t Tmax];
for i = 2:length(T) %solve in each inter-dose interval
    sol = ode45(@model,[T(i-1) T(i)],PopEnd);
    PopEnd = sol.y(:,end);
end

PopEnd = sum(reshape(PopEnd,2,[]))'; %N_1+N_2 for each parameter set

%definition of model equations
    function y = model(t,x)
        y = zeros(size(x));
        %current drug concentration for each parameter set (Nx1 vector)
        if any(treatment.t<t)
            u = params.D.*exp(-params.clr*t).*sum(exp(bsxfun(@times,params.clr,treatment.t(treatment.t<t))),2);
        else
            u = 0;
        end
        %odd entries of the state vector correspond to N_1, even entries to N_2
        y(1:2:end) = params.r1.*x(1:2:end).*(1-(x(1:2:end)+x(2:2:end))./params.K)-params.d1.*u.*x(1:2:end);
        y(2:2:end) = params.r2.*x(2:2:end).*(1-(x(1:2:end)+x(2:2:end))./params.K)-params.d2.*u.*x(2:2:end);
    end

end

Thus, in the main script we no longer need a for loop – we just generate all N sets of random parameters at once.

%initial condition (N copies of [10^5; 10^3] stacked into one vector)
init = repmat([10^5; 10^3],N,1);
Tmax = 15; %simulation endpoint

tic
%perturbing parameters by up to 10%, all N sets at once
params.r1 = 0.17*(1+(rand(N,1)-0.5)/5);
params.r2 = 0.12*(1+(rand(N,1)-0.5)/5);
params.d1 = 0.54*(1+(rand(N,1)-0.5)/5);
params.d2 = 0.24*(1+(rand(N,1)-0.5)/5);
params.K = 10^7*(1+(rand(N,1)-0.5)/5);
params.D = 0.25; %dose is kept fixed
params.clr = 0.2*(1+(rand(N,1)-0.5)/5);

PopEnd = solveModelMult( init, params, treatment, Tmax );
t = toc;

The above code calculated N = 1000 solutions in about 0.2 seconds, which gives roughly a 35x speed-up compared to the basic for loop approach. To check the validity of the single solver invocation approach we can compare the resulting histograms.
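A minimal sketch of such a comparison (it assumes the for loop result was kept in a separate variable, say PopEndLoop – the variable name is mine):

figure
hold on
histogram(PopEndLoop,'Normalization','probability')
histogram(PopEnd,'Normalization','probability')
legend('for loop','single solver invocation')
xlabel('final population size (N_1+N_2)')
ylabel('fraction of trials')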

For larger values of N we can obtain a speed-up of up to 140 times when using the single solver invocation approach instead of for loops, see the plot below. Of course, both approaches can be parallelized to utilize all CPUs present in the system, as shown in the sketch below. However, parallelization of the single solver approach makes sense only for very large values of N, because for smaller N the communication overhead becomes a major factor compromising speed.
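One way to parallelize the single solver approach is to split the N parameter sets into one chunk per worker (the chunking scheme below is my illustration, not the setup used for the timings above):

nWorkers = 4; %number of workers in the pool, e.g. after parpool(4)
chunk = reshape(1:N,[],nWorkers); %assumes N is divisible by nWorkers
PopEndPar = cell(1,nWorkers);
parfor w = 1:nWorkers
    idx = chunk(:,w);
    p = params; %local copy of the broadcast parameter structure
    p.r1 = params.r1(idx); p.r2 = params.r2(idx);
    p.d1 = params.d1(idx); p.d2 = params.d2(idx);
    p.K = params.K(idx); p.clr = params.clr(idx);
    initW = repmat([10^5; 10^3],length(idx),1);
    PopEndPar{w} = solveModelMult( initW, p, treatment, Tmax );
end
PopEndPar = vertcat(PopEndPar{:}); %Nx1 vector of final population sizes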


Quick parallel implementation of a local sensitivity analysis procedure for an agent-based tumor growth model

In the last couple of posts I've shown how to implement an agent-based model of cancer stem cell driven tumor growth, both in MATLAB and in C++. Having the implementations, we can go one step further and perform some analysis of the tumor growth characteristics predicted by the model. We will start with local sensitivity analysis, i.e. we will try to answer the question of how perturbing the parameter values impacts the growth dynamics. Typically this is done by perturbing one of the parameters by a fixed amount and analyzing the response of the model to that change. In our case the response will be the percentage change in the average tumor size after 3 months of growth. Sounds fairly simple, but…

We have 5 different parameters in the model: proliferation capacity, probability of division, probability of spontaneous death, probability of symmetric division, and probability of migration. Moreover, let us assume that we would like to investigate 3 different values of the perturbation magnitude (5%, 10% and 20%). To analyze the change in the average size we need a decent estimator of it, i.e. a sufficient number of simulated stochastic trajectories – let us assume that 100 simulations give a good estimate of the "true" average. So in order to perform the sensitivity analysis we need 100 + 3*5*100 = 1600 simulations (remembering the runs for the nominal parameter values). Even if a single simulation typically takes 30 seconds, we will wait more than 13 hours for the result on a single CPU – that is a lot!

After looking at the above numbers we could make a straightforward decision right now – use C++ instead of MATLAB, because the model implementation in C++ is several times faster. However, 1) we would need to write a lot of code to perform the sensitivity analysis, and 2) using multiple CPUs is not as straightforward as in MATLAB. Is there a better way to proceed?

A few weeks ago I showed here how to wrap your C++ code so that it can be used from within MATLAB as a function without losing the C++ performance. Why not use that to make our lives easier – keeping the sensitivity analysis code short while easily harnessing the power of multiple CPUs?

We will start the coding (in MATLAB) by setting all simulation parameters:

nSim = 100; %number of simulations to perform for a given set of parameters
tmax = 30*3; %number of days to simulate
N = 1000; %size of the simulation domain

%nominal parameter values [rhomax, pdiv, alpha, ps, pmig]
nominal = [10, 1/24, 0.05, 0.3, 10/24];

perturb = [0.05 0.1 0.2]; %perturbation magnitudes, as fractions of the nominal values

Now we just need to construct the loops that will iterate through all possible perturbations and simulations. If we don't use the Parallel Computing Toolbox, i.e. don't use multiple CPUs, it doesn't really matter how we do that – the performance will be similar. Otherwise, the implementation is not that straightforward, even though MATLAB's documentation suggests that it is enough to change for to parfor. The most important thing is how we divide the work between the CPUs. The simplest idea is to spread the considered perturbation values among the CPUs – that would occupy 15 CPUs in our setting (5 parameters times 3 magnitudes). However, I've got a machine with 24 CPUs, so that would be a waste of resources – bad idea. Another idea would be to use a parfor loop to spread the 100 simulations for a given perturbation value over all 24 CPUs and go through the perturbation values in an outer plain loop – now we are using all available CPUs. But are we doing that efficiently? No. The problem is that the CPUs need to be synchronized before proceeding to the next iteration of the outer loop, so some CPUs will be idle while waiting for the others to finish the parfor loop. To make the code even more efficient we will use a single parfor loop and throw all 1600 simulations at the 24 CPUs at the same time. Let us first prepare the output variable.

HTCG = zeros(1,nSim + length(nominal)*length(perturb)*nSim); %first nSim entries: nominal runs; then one nSim-long block per (parameter, perturbation) pair

Before writing the final piece of the code we need to solve one more issue. Namely, in the C++ implementation we used srand(time(NULL)) to initialize the seed of the random number generator. That is perfectly fine on a single CPU – each simulation takes some time, so we don't need to worry about the uniqueness of the seed. The problem appears when we use multiple CPUs – all simulations that start in parallel would begin with exactly the same seed. One way to solve that is to pass the current loop iteration number (i) to C++ and use srand(time(NULL)+i) – that is what I have done. After solving that issue we can write the final piece of the code.

parfor i = 1:length(HTCG)
    %%PREPARING PARAMETER VALUES
    params = nominal; %setting parameters to nominal values
    if i>nSim %simulation is for perturbed parameters
        %translating the linear index i into the considered parameter (j)
        %and perturbation value (k)
        j = ceil((i-nSim)/(nSim*length(perturb)));
        k = ceil((mod((i-nSim)-1,nSim*length(perturb))+1)/nSim);
        %updating parameters
        params(j) = params(j)*(1+perturb(k));
        if j == 1 %the proliferation capacity parameter needs to be rounded
           params(j) = round(params(j));
        end
    end

    %%RUNNING SIMULATION AND SAVING OUTPUT
    [~, cells] = CA(params,[tmax*24,N,i]);
    HTCG(i) = length(cells)/3;
    clear mex %important! without that the internal
              %variables in the CA function won't be cleared and the next
              %simulation won't begin with a single initial cell
end
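As a quick sanity check of the index arithmetic above (my own verification, not part of the original workflow), we can confirm that every (parameter, perturbation) pair is simulated exactly nSim times:

%counting how many times each (j,k) pair occurs among the perturbed runs
counts = zeros(length(nominal),length(perturb));
for i = nSim+1:nSim + length(nominal)*length(perturb)*nSim
    j = ceil((i-nSim)/(nSim*length(perturb)));
    k = ceil((mod((i-nSim)-1,nSim*length(perturb))+1)/nSim);
    counts(j,k) = counts(j,k) + 1;
end
disp(counts) %every entry should equal nSim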

Then in the command line we start a parallel pool with 24 workers (CPUs) by typing the parpool(24) command, and run the code. The screenshot below shows nicely how all 24 CPUs are being used – no resources wasted!

[Screenshot: CPU occupancy – all 24 cores busy]

We can then add a few additional lines of code to plot the results.


nom = mean(HTCG(1:nSim)); %average for nominal parameter values
%calculating averages for the perturbed parameter sets
av = squeeze(mean(reshape(HTCG(nSim+1:end),nSim,length(perturb),length(nominal))));

%plotting results of sensitivity analysis
set(0,'DefaultAxesFontSize',18)
figure(1)
clf
bar(perturb*100,(av-nom)/nom*100)
legend({'\rho_{max}', 'p_{div}', '\alpha', 'p_s', 'p_{mig}'})
xlabel('% perturbation')
ylabel('% change')

And voilà – the resulting figure shows that a perturbation of the proliferation capacity has the highest impact on the tumor growth dynamics.

[Figure: sensitivity analysis results – percentage change in the average tumor size for each parameter and perturbation magnitude]