Estimating minimal time on PD-1/PD-L1 inhibitors for successful tumor regression

Without a doubt, introducing immune checkpoint inhibitors to the clinic was a major breakthrough in the war against cancer. For some patients tumor responses to anti-PD-1/PD-L1 or anti-CTLA-4 therapies are spectacular and last long after the therapy is withdrawn. Interestingly, disease regression can occur even after an initial phase of tumor growth during the therapy. However, despite spectacular successes, therapies based on checkpoint inhibitors still suffer from relatively low response rates. There is an urgent need to establish reliable patient-specific biomarkers that would help solve this problem. The current focus is on using, among others, measures of pretreatment T cell infiltration, PD-L1 expression profiles, inflammatory status, and mutational burden of the tumor. Finding a reliable correlate would allow for designing patient-specific therapy scheduling or for incorporating appropriate adjuvant treatment.

In today's post, I will adapt a mathematical model that was introduced by Kuznetsov and colleagues back in 1994 (Kuznetsov et al., Bull Math Biol, 1994) to investigate the effects of anti-PD-1/PD-L1 therapy depending on pretreatment tumor characteristics. The way in which the model was formulated more than two decades ago allows for elegant and straightforward incorporation of such a treatment. I will show how the variability in the observed response dynamics is explained by the model and how we can estimate the minimal time on PD-1/PD-L1 treatment required for therapeutic success.

1. Mathematical model and its predictions

In their paper, Kuznetsov and colleagues introduced the following relatively simple mathematical model describing tumor-immune system dynamics:

\begin{array}{rcl} \dot{E}(t) & = & s+p\displaystyle\frac{E(t)N(t)}{g+N(t)}-dE(t)-mE(t)N(t)\,,\\ \dot{N}(t) & = & aN(t)\left(1-bN(t)\right)-nE(t)N(t)\,, \\ \end{array}

where E(t) and N(t) are the numbers of immune effector cells and cancer cells, respectively. Taking into account that anti-PD-1/PD-L1 treatment reduces the probability that a cytotoxic T cell will be “turned off” during its interaction with a cancer cell, we can assume that the treatment decreases the value of parameter m. To reduce the complexity of the following analysis, but without loss of generality, we won’t consider any pharmacokinetics and pharmacodynamics of the drug, i.e. we simply assume that during the treatment parameter m takes a lower value m_T.

First, we should check the model dynamics predicted for the nominal parameter values proposed by Kuznetsov and colleagues. Let us simplify that task by performing a non-dimensionalization first. After defining x = E/E_0, y = N/N_0, and \tau = nN_0t with E_0 = N_0 = 10^6 we obtain the slightly simpler system

\begin{array}{rcl} \dot{x} & = & \sigma+\rho\displaystyle\frac{xy}{\eta+y}-\delta x-\mu xy\,,\\ \dot{y} & = & \alpha y \left(1-\beta y\right)-xy\,, \\ \end{array}

where \sigma = \frac{s}{nE_0N_0}, \rho = \frac{p}{nN_0}, \eta = \frac{g}{N_0}, \mu = \frac{m}{n}, \delta = \frac{d}{nN_0}, \alpha = \frac{a}{nN_0}, and \beta = bN_0. We can see that in the non-dimensional model \mu is the parameter affected by the treatment.
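As a quick check of these definitions, substituting E = E_0x, N = N_0y, and t = \tau/(nN_0) into the first equation gives

nN_0E_0\,\displaystyle\frac{dx}{d\tau} = s+pE_0x\displaystyle\frac{N_0y}{g+N_0y}-dE_0x-mE_0N_0xy\,,

and dividing both sides by nN_0E_0 recovers \dot{x} = \sigma+\rho\frac{xy}{\eta+y}-\delta x-\mu xy with the parameter groupings listed above; the second equation follows analogously.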

Using the parameters proposed in Kuznetsov’s original paper we obtain the following values of non-dimensional parameters:

\sigma = 0.118\,, \quad \rho = 1.131\,, \quad \eta = 20.19\,, \quad \mu = 0.00311\,,
\delta = 0.374\,, \quad \alpha=1.636\,, \quad \beta = 0.002

We can start our analysis by evaluating numerically the model behavior for the above set of parameters. In order to do that systematically we will plot the phase portrait, in which the behavior of the solutions in the whole phase space can be viewed. MATLAB is a convenient environment in which we can do that easily by using the built-in quiver function:

%% DEFINING MODEL PARAMETERS
sigma = 0.118; rho = 1.131; eta = 20.19; mu = 0.00311;
delta = 0.374; alpha = 1.636; beta = 0.002;

%% DEFINING THE MODEL (INLINE FUNCTION)
rhs = @(t,x)([sigma+rho*x(1,:).*x(2,:)./(eta+x(2,:))-mu*x(1,:).*x(2,:)-delta*x(1,:);...
alpha*x(2,:).*(1-beta*x(2,:))-x(1,:).*x(2,:)]);

%% FUNCTION RETURNING MODEL SOLUTION ON [0,100] FOR GIVEN INITIAL CONDITION
solve = @(init)(ode45(rhs,[0 100],init));

%% EVALUATING THE VECTOR FIELD IN [0, 3.5]x[0, 450]
Npoints = 30;
x = linspace(0.,3.5,Npoints);
y = linspace(0,450,Npoints);

dx = x(2)-x(1);

dy = y(2)-y(1);
[X, Y] = meshgrid(x,y);
G = rhs([],[reshape(X,1,[]); reshape(Y,1,[])]);
U = reshape(G(1,:),Npoints,Npoints);
V = reshape(G(2,:),Npoints,Npoints)*dx/dy;
N = sqrt(U.^2+V.^2);
U = U./N; V = V./N;

%% EVALUATING MODEL SOLUTIONS FOR DIFFERENT INITIAL CONDITIONS
initCond = [0, 2.58; 0, 1; 0, 0.1; 0, 20; 0.76, 267.8;...
0.755, 267.8; 1.055, 450; 0.6, 450; 2.1, 450];
sols = cell(1,size(initCond,1));
for i = 1:size(initCond,1)
    sols{i} = solve(initCond(i,:));
end

%% PLOTTING THE RESULTS
figure(1)
clf
hold on
[X1, Y1] = meshgrid(0:Npoints-1,0:Npoints-1);
q = quiver(X1,Y1,U,V); %plotting vector field
q.Color = 'red';
q.AutoScaleFactor = 0.75;
for i = 1:length(sols) %plotting each solution
    plot(sols{i}.y(1,:)/max(x)*(Npoints-1),sols{i}.y(2,:)/max(y)*(Npoints-1),'k')
end
hold off

After running the above script we should see an image similar to the one below.

[Figure: phase portrait of the model for the nominal parameter values, with sample trajectories]

We can see in the phase portrait that there are two steady states to which solutions tend: one with substantial tumor burden, i.e. with a large value of variable y, and a second probably reflecting subclinical disease. Of course, it is the initial conditions that govern to which of them a given solution tends. Let us now check analytically that the above behavior is not a numerical artifact.

First of all, the above system has a smooth vector field and thus solutions exist and are unique. Moreover, it is easy to check that the solutions are bounded and non-negative. We are interested in calculating all of the possible steady states of the system and evaluating their stability. For any set of parameters we always have a semi-trivial steady state \left(\frac{\sigma}{\delta}, 0\right), i.e. the state without any tumor cells. By calculating nullclines and checking their intersections we obtain the following polynomial equation for the other possible steady states

W(y)=C_3y^3+C_2y^2+C_1y+C_0 = 0\,,
where
\begin{array}{rcl} C_3 & = & \mu\beta\,,\\ C_2 & = & -\mu+\beta\left(\mu\eta+\delta-\rho \right)\,, \\ C_1 & = & \frac{\sigma}{\alpha}+\rho-\mu\eta-\delta+\delta\eta\beta\,, \\ C_0 & = & \eta\left(\frac{\sigma}{\alpha}-\delta\right)\,. \\ \end{array}

We can see that, as we have a cubic polynomial, there are up to three additional steady states. The exact number of solutions obviously depends on the particular values of C_i.

The local stability of the steady states can be evaluated by checking the eigenvalues of Jacobian matrix for the considered system:

J(x,y) = \left[ \begin{array}{cc} \frac{\rho y}{\eta + y}-\mu y-\delta & -\mu x + \frac{\rho \eta x}{(\eta + y)^2} \\ -y & \alpha\left(1-2\beta y\right)-x \\ \end{array} \right].

If all eigenvalues of J(S) have negative real parts then we know that the steady state S is locally asymptotically stable; if any of the eigenvalues has a positive real part, then we know that the steady state is unstable.
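For instance, at the semi-trivial steady state the Jacobian is upper triangular,

J\left(\frac{\sigma}{\delta},0\right) = \left[ \begin{array}{cc} -\delta & \frac{\rho\sigma}{\delta\eta}-\frac{\mu\sigma}{\delta} \\ 0 & \alpha-\frac{\sigma}{\delta} \\ \end{array} \right],

so its eigenvalues are simply -\delta and \alpha-\frac{\sigma}{\delta}. For the nominal parameter values \alpha > \frac{\sigma}{\delta}, so the tumor-free state is unstable (hence it is plotted in black in the bifurcation diagram below).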

An additional step that will be required to draw definite conclusions is to show that there are no cycles in the system, i.e. all solutions tend to one of the steady states. We can do that by using the Dulac–Bendixson criterion with the Dulac function B(x,y) = \frac{1}{xy}.
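To see why this choice works, note that for x, y > 0

\frac{\partial}{\partial x}\left(B(x,y)\dot{x}\right)+\frac{\partial}{\partial y}\left(B(x,y)\dot{y}\right) = -\frac{\sigma}{x^2y}-\frac{\alpha\beta}{x} < 0\,,

so the divergence of the rescaled vector field has a constant sign in the positive quadrant and no periodic orbit can lie there.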

We are interested in the behavior of the system for different values of parameter \mu under the treatment. Thus, having the above equations, we will plot a bifurcation diagram for the parameter \mu = \mu_T, i.e. we will plot the existing steady states and their stability for different values of parameter \mu_T. This can be achieved by using the following MATLAB script:

%% DEFINING MODEL PARAMETERS
sigma = 0.118; rho = 1.131; eta = 20.19; mu = 0.00311;
delta = 0.374; alpha = 1.636; beta = 0.002;

%% DEFINING FUNCTION RETURNING JACOBIAN MATRIX FOR GIVEN x,y AND mu
J = @(x,y,mu)([rho*y/(eta+y)-mu*y-delta, -mu*x+rho*eta*x/(eta+y)^2 ; ...
-y, alpha*(1-2*beta*y)-x]);

%% DEFINING POLYNOMIAL COEFFICIENTS FOR DIFFERENT MU
C = @(mu)( [mu*beta; -mu+beta*(eta*mu+delta-rho); ...
sigma/alpha+rho-eta*mu-delta+beta*delta*eta; ...
eta*(sigma/alpha-delta)]);

%% PLOTTING BIFURCATION DIAGRAM
muMesh = linspace(0,5*mu,100); %mesh for mu
figure(1)
clf
hold on
for mu = muMesh
    StStatesY = roots(C(mu)); %solving W(y) = 0
    StStatesY(imag(StStatesY) ~= 0) = []; %deleting complex roots
    StStatesY(StStatesY < 0) = []; %deleting negative roots
    StStatesX = alpha*(1-beta*StStatesY); %calculating x coordinate
    indx = StStatesX < 0;
    StStatesX(indx) = []; %deleting steady states with negative x
    StStatesY(indx) = [];
    for i = 1:length(StStatesY) %evaluating stability and plotting
        Jeval = J(StStatesX(i), StStatesY(i), mu);
        if all(real(eig(Jeval))<0) %point is stable
            plot(mu,StStatesY(i),'r.');
        else %point is unstable
            plot(mu,StStatesY(i),'k.');
        end
    end
    plot(mu, 0,'k.'); %semi-trivial steady state (always unstable here, as alpha > sigma/delta)
end
hold off

After running the above code we should see a plot similar to the one below, but without the red dots that indicate the value of \mu_T considered originally by Kuznetsov and colleagues.

[Figure: bifurcation diagram with respect to \mu_T]

We can see in the bifurcation diagram that for the nominal \mu value we have two stable steady states, so the results of our initial numerical investigations are confirmed. Moreover, we see that:

  • if the nominal, i.e. without treatment, value of \mu is too large, then we can reduce the tumor size only temporarily. In other words, once we stop the treatment the tumor will grow back to clinically detectable sizes.
  • if the nominal, i.e. without treatment, value of \mu is smaller than a given threshold (\approx 0.013), then we can durably reduce the tumor to a subclinical disease state, provided that during the treatment \mu_T is sufficiently small (the drug dose is sufficiently large) for a sufficient amount of time. In other words, the tumor won’t grow back once we stop the treatment.

This type of behavior is typically referred to as hysteresis. Most importantly, this result is consistent with the clinical observation that in some patients long-lasting responses may be achieved even after the drug is withdrawn.

From a mathematical point of view everything boils down to the question of whether we can force the trajectory to intersect the curve separating the basins of attraction in the treatment-free case. Once this is achieved we can withdraw the treatment and the patient’s immune system will take care of the disease by itself. Thus, for each initial condition in the phase space we should look for the value of \tau (scaled time) at which the intersection (if possible) occurs.

First, we need to approximate the curve separating the basins of attraction. This can be achieved by the following MATLAB script:

%% DEFINING MODEL PARAMETERS
sigma = 0.118; rho = 1.131; eta = 20.19; mu = 0.00311;
delta = 0.374; alpha = 1.636; beta = 0.002;

%% CALCULATING BIGGEST VALUE OF STEADY STATE FOR Y
Smax = max(roots([mu*beta; -mu+beta*(eta*mu+delta-rho); ...
sigma/alpha+rho-eta*mu-delta+beta*delta*eta; ...
eta*(sigma/alpha-delta)]));

%% DEFINING THE MODEL (INLINE FUNCTION)
rhs = @(t,x)([sigma+rho*x(1,:).*x(2,:)./(eta+x(2,:))-mu*x(1,:).*x(2,:)-delta*x(1,:);...
alpha*x(2,:).*(1-beta*x(2,:))-x(1,:).*x(2,:)]);

%% FUNCTION RETURNING MODEL SOLUTION ON [0,100] FOR GIVEN INITIAL CONDITION
solve = @(init)(ode45(rhs,[0 100],init));

%% CALCULATING LOWER PART OF THE SEPARATING CURVE
yInit = 50; %initial value
yTol = 1e-5; %tolerance
dy = 5; %initial step size

while dy > yTol
    sol = solve([0 yInit]);
    if abs(sol.y(2,end)-Smax) < 30 %reached maximal steady state
        yInitPrev = yInit;
        yInit = yInit-dy;
    else
        dy = dy/2;
        yInit = yInitPrev;
    end
end

%% CALCULATING UPPER PART OF THE SEPARATING CURVE
xInit = 0.5; %initial value
xTol = 1e-5; %tolerance
dx = 0.5; %initial step size

while dx > xTol
    sol = solve([xInit Smax+100]);
    if abs(sol.y(2,end)-Smax) < 30 %reached maximal steady state
        xInitPrev = xInit;
        xInit = xInit+dx;
    else
        dx = dx/2;
        xInit = xInitPrev;
    end
end

%% CALCULATING THE FINAL CURVE
sol2 = solve([0 yInit]);
sol1 = solve([xInit Smax+100]);
sol1.y = sol1.y(:,1:find(diff(sol1.y(2,:))>0,1,'first'));
sol2.y = sol2.y(:,1:find(diff(sol2.y(1,:))<0,1,'first'));

curve.x = [sol2.y(1,:) sol1.y(1,end:-1:1) ];
curve.y = [sol2.y(2,:) sol1.y(2,end:-1:1) ];

%% PLOTTING
figure(1)
clf
plot(curve.x, curve.y)

Then, having the curve, we can implement the numerical solver in such a way that it stops integration once the separating curve is reached. This is done by the following MATLAB function:

function sol = solveKuznetsovNondimensionalBasins( par, Tmax, init, curve )

init = init(:);

opts = odeset('RelTol',1e-8,'AbsTol',1e-8,'Events',@stops);
sol = ode45(@odesystem, [0 Tmax], init, opts);

    function y = odesystem(~,x)
        y = zeros(2,1);
        y(1) = par.sigma+par.rho*x(1).*x(2)./(par.eta+x(2))-par.mu*x(1).*x(2)-par.delta*x(1);
        y(2) = par.alpha*x(2).*(1-par.beta*x(2))-x(1).*x(2);
    end

    function [value,isterminal,direction] = stops(~,y)
        value = y(2)-interp1(curve.x, curve.y, y(1)); %signed distance to the separating curve
        isterminal = 1; %stop the integration
        direction = 0; %any direction
    end

end

Using the above procedures we can look at how trajectories starting from the same initial condition (initial point) will behave for different \mu_T values.
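A minimal sketch of such an experiment (the initial point and the tested \mu_T values are illustrative; par and curve are assumed to be set up as in the scripts above):

%sketch: one initial point, several treatment intensities (illustrative values)
par.sigma = 0.118; par.rho = 1.131; par.eta = 20.19;
par.delta = 0.374; par.alpha = 1.636; par.beta = 0.002;
mu = 0.00311; %nominal (treatment-free) value
figure(1); clf; hold on
plot(curve.x, curve.y, 'b') %separatrix of the treatment-free system
for f = [0.9 0.7 0.4] %mu_T as a fraction of the nominal mu
    par.mu = f*mu;
    sol = solveKuznetsovNondimensionalBasins(par, 100, [0.1 300], curve);
    plot(sol.y(1,:), sol.y(2,:), 'k')
end
hold off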
[Figure: example trajectories for different \mu_T values]

In the above plot we see that if the drug dose is not sufficient (the reduction of \mu is not sufficient), then no matter how long we keep the patient on therapy, the tumor will grow to its maximal possible size (compare \mu_T = 0.9\mu) – the therapy fails. However, if we increase the dosage, i.e. reduce \mu even further, then we can move the tumor growth trajectory to the favorable outcome (compare \mu_T = 0.7\mu and \mu_T = 0.4\mu). Interestingly, as observed in the clinic, the favorable outcome is achieved after an initial phase of tumor growth under the therapy.

Finally, for each initial point we can evaluate whether, for a given dose (value of \mu_T under the treatment), therapeutic success is possible and, if yes, what is the minimal time on therapy required to achieve durable tumor regression. Below is the plot showing the results of such an evaluation, with \tau rescaled to the original units.
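A sketch of how such a map might be computed (the grid resolution, time horizon, and \mu_T value are my own choices; par and curve as above): for each initial point we integrate with the event function active and record the event time, i.e. the moment the separatrix is crossed.

%sketch: minimal time on therapy for a grid of initial conditions
par.mu = 0.4*0.00311; %assumed mu_T under treatment
x0 = linspace(0.01,3.5,50); y0 = linspace(1,450,50);
tMin = NaN(length(y0),length(x0)); %NaN = regression not achieved within [0,100]
for i = 1:length(x0)
    for j = 1:length(y0)
        sol = solveKuznetsovNondimensionalBasins(par, 100, [x0(i) y0(j)], curve);
        if ~isempty(sol.xe), tMin(j,i) = sol.xe(end); end %event time = separatrix crossed
    end
end
imagesc(x0, y0, tMin); axis xy; xlabel('x(0)'); ylabel('y(0)');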

[Figure: minimal time on therapy for different initial conditions and doses]

As can be seen in the above plot, the possibility of therapeutic success depends strongly on the dose of the drug, the time on therapy, and the initial T cell infiltration. It is conceivable that the above simple model, if successfully calibrated with patient data, could augment patient-specific treatment design.


Efficient assessment of uncertainty in treatment outcome for ODE models

The current paradigm in the clinic is that maximum therapeutic benefit is obtained by killing the greatest possible number of cancer cells. The premise is that the larger the induced cell kill, the lower the risk of developing drug resistance – an analogy drawn from the experience of fighting bacteria with antibiotics. That is why, in general, chemotherapy (as well as other cytotoxic drugs) is administered in the maximum tolerated dose (MTD) regime. Obviously, the MTD assumption limits the amount of patient-specific information utilized in treatment protocols, as cancer-specific MTDs are established in large clinical trials. At the same time, however, it simplifies the general treatment optimization problem that needs to be solved on a per-patient basis, because the only adjustable parameter is the interval between drug doses. This simplicity is important in the clinic, because in most cases there are no robust frameworks to handle the additional complexity of a drug dose optimization problem.

In recent years, some theoretical inroads have been made to design treatment protocols that depart from the MTD paradigm and hold the promise of being more effective in increasing patient survival, such as adaptive therapy. The concept of adaptive therapy is currently being investigated using various theoretical mathematical frameworks that are necessary to establish robust adaptive drug dosage protocols. In many cases the mathematical formulation of the problem consists of ordinary differential equations (ODEs), as they give the advantage of some analytical tractability and there are many existing numerical solvers that can be utilized. The search for the optimal treatment is based either on analytical approaches, such as optimal control theory, or on brute-force exploration of the space of possible treatment options. The latter is obviously easier to implement, but is burdened with high computational cost. In a typical scenario the optimal treatment schedule is searched for average (nominal) values of model parameters, and no uncertainties in the patient-specific parameters are considered. It is conceivable, however, that two different treatment protocols result in the same tumor burden at a specified time point for a given set of parameters, but one of them is more sensitive to parameter perturbations. Thus, a formal assessment of the uncertainty in treatment outcome under uncertainty in parameter values should be a part of any treatment exploration study. In this post I will show how to increase computational speed when assessing the distribution of treatment outcomes related to uncertainty in ODE model parameters. The presented code is written in MATLAB, but the underlying idea is valid for any other programming language.

Let us consider a simple ODE model that could be used for adaptive therapy investigations.  We describe temporal evolution of two populations (N_1 and N_2) with different growth rates (r_1 > r_2) that compete for the limited amount of space (K) and respond differently to treatment (d_1 > d_2):

\frac{dN_1}{dt}=r_1N_1\left(1-\frac{N_1+N_2}{K}\right)-d_1u(t)N_1,

\frac{dN_2}{dt}=r_2N_2\left(1-\frac{N_1+N_2}{K}\right)-d_2u(t)N_2,

where u(t) describes the drug concentration, which under the usual pharmacokinetic assumptions is expressed as

u(t) = \sum_{t_i < t} D_i \exp(-c(t-t_i))

where D_i is the drug dose, t_i is the moment of drug administration, and c is the clearance rate of the drug.

Let us assume that we have already established the optimal drug administration protocol and we want to assess how the treatment will perform under different perturbations of the parameter values.

First we need a function that for a given set of parameters returns the total size of the population (N_1+N_2) at simulation endpoint:

function PopEnd = solveModel( init, params, treatment, Tmax )
%%INPUT
%init - 2x1 vector defining initial sizes of both populations [N_1; N_2]
%params - structure with model parameters
%treatment - moments in which drug is applied (t_i)
%Tmax - simulation time
%%OUTPUT
%PopEnd - final population size (N_1+N_2)

PopEnd = init;
T = [0 treatment.t Tmax];
for i = 2:length(T) %solve in each inter-dose interval
    sol = ode45(@model,[T(i-1) T(i)],PopEnd);
    PopEnd = sol.y(:,end); %take the last known population size as the initial condition
end

PopEnd = sum(PopEnd); %N_1+N_2

%definition of model equations
    function y = model(t,x)
        y = zeros(2,1);
        %calculating current drug concentration
        u = params.D*exp(-params.clr*t)*sum(exp(params.clr*(treatment.t(treatment.t<t))));
        %evaluating the right hand side
        y(1) = params.r1*x(1)*(1-(x(1)+x(2))/params.K)-params.d1*u*x(1);
        y(2) = params.r2*x(2)*(1-(x(1)+x(2))/params.K)-params.d2*u*x(2);
    end

end

We will use the above function to calculate the population size after the end of treatment for a large set of randomly perturbed nominal parameter values. In the following examples we will perturb the parameter values uniformly by up to 10%.

Basic for loop approach

The most basic approach is to solve the model N times for randomly generated parameters in the for loop:

N = 1000; %number of trials

treatment.t = [1 3 5 7 9 11 13]; %treatment schedule

init = [10^5; 10^3]; %initial condition
Tmax = 15; %simulation endpoint

PopEnd = zeros(1,N); %vector with final population sizes
for i = 1:N
    %perturbing parameters by up to 10%
    params.r1 = 0.17*(1+(rand()-0.5)/5);
    params.r2 = 0.12*(1+(rand()-0.5)/5);
    params.d1 = 0.54*(1+(rand()-0.5)/5);
    params.d2 = 0.24*(1+(rand()-0.5)/5);
    params.K = 10^7*(1+(rand()-0.5)/5);
    params.D = 0.25;
    params.clr = 0.2*(1+(rand()-0.5)/5);

    %solving the model
    PopEnd(i) = solveModel( init, params, treatment, Tmax );
end

In the generated histogram of the final population size (see the plot below) we see that there is a substantial amount of variation in the treatment outcome.
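The histogram itself takes just a few lines (the axis labels are my own):

%plotting the distribution of treatment outcomes
histogram(PopEnd);
xlabel('final population size N_1+N_2');
ylabel('number of trials');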

Calculating the above histogram for N = 1000 trials took about 7 seconds, giving about 140 solutions per second.

The first idea to speed up the computation for larger N would be to use multiple CPUs and spread the for loop among them. There is, however, a better way that properly utilizes the CPU architecture.

Using single ODE solver invocation

Modern CPUs can perform operations on arrays and thus perform many operations simultaneously. In the case of small (low-dimensional) ODE systems numerical solvers can’t utilize that feature effectively, as the computation of the next step considers only a few variables at a time. However, in our case we can simply rewrite the problem of multiple solutions of a low-dimensional ODE system into a single solution of a large ODE system. Namely, we can write the function calculating the solution in such a way that the i-th set of randomly generated parameters corresponds to equations 2i-1 and 2i in the large ODE system. In other words, we feed the solver with all N sets of parameters and generate a set of 2N equations to be solved simultaneously.

function PopEnd = solveModelMult( init, params, treatment, Tmax )

PopEnd = init; %initial population
T = [0 treatment.t Tmax];
for i = 2:length(T)
    sol = ode45(@model,[T(i-1) T(i)],PopEnd);
    PopEnd = sol.y(:,end);
end

PopEnd = sum(reshape(PopEnd,2,[]))'; %N_1+N_2 for each parameter set

    function y = model(t,x)
        y = zeros(size(x));
        if any(treatment.t<t)
            u = params.D.*exp(-params.clr*t).*sum(exp(bsxfun(@times,params.clr,treatment.t(treatment.t<t))),2);
        else
            u = 0;
        end
        y(1:2:end) = params.r1.*x(1:2:end).*(1-(x(1:2:end)+x(2:2:end))./params.K)-params.d1.*u.*x(1:2:end);
        y(2:2:end) = params.r2.*x(2:2:end).*(1-(x(1:2:end)+x(2:2:end))./params.K)-params.d2.*u.*x(2:2:end);
    end

end

Thus, in the main script we don’t need a for loop – we just generate all N sets of random parameters at once.

%initial condition
init = repmat([10^5; 10^3],N,1);
Tmax = 60; %simulation endpoint

tic
params.r1 = 0.17*(1+(rand(N,1)-0.5)/6);
params.r2 = 0.12*(1+(rand(N,1)-0.5)/6);
params.d1 = 0.54*(1+(rand(N,1)-0.5)/6);
params.d2 = 0.24*(1+(rand(N,1)-0.5)/6);
params.K = 10^7*(1+(rand(N,1)-0.5)/6);
params.D = 0.75*(1+(rand(N,1)-0.5)/6);
params.clr = 0.2*(1+(rand(N,1)-0.5)/6);

PopEnd = solveModelMult( init, params, treatment, Tmax );
t = toc();

The above code calculated N = 1000 solutions in about 0.2 seconds, which gives about a 45x speed-up compared to the basic for loop approach. To check the validity of the single solver invocation approach we can compare the resulting histograms.
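One possible sketch of that comparison (PopEndLoop and PopEndVect are my own names for the results of the two approaches):

%overlaying outcome distributions from both approaches
histogram(PopEndLoop); hold on
histogram(PopEndVect); hold off
legend('basic for loop','single solver invocation');
xlabel('final population size N_1+N_2');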

For larger values of N we can obtain a speed-up of up to 140 times when using the single solver invocation approach instead of for loops; see the plot below. Of course, both approaches can be parallelized to utilize all CPUs present in the system. However, parallelization of the single solver approach makes sense only for very large values of N, because for smaller N the communication overhead becomes a major speed-compromising factor.

Some ideas about how to efficiently store simulation data

After my last post about visualization of 3D simulated tumors, @jc_atlantis made an excellent point that another reason why 3D agent-based simulations are not performed frequently is the large amount of generated data. Of course, one approach to overcome that problem would be to store only the information that is really essential and not a 1:1 simulation snapshot. However, in some cases we can’t know in advance what information will be important and thus we need better ways to save our simulation outputs.

In this post I will show a few tricks (in C++) for efficiently storing the output of the 2D agent-based model presented in one of the previous posts. As a result we will reduce the size of the generated simulation output from about 1 GB to a reasonable 35 MB. Of course, the presented ideas can be utilized in a 3D agent-based model and in other programming languages like Python, Java or MATLAB.

First of all, let us define the task: we want to store information about the spatial distribution of 1,000,000 cells on a 2D lattice. Each cell is described by two variables: the remaining proliferation potential (p, an unsigned char variable) and whether it is a cancer stem cell (is_stem, a boolean variable). All of the cells to be saved are stored in an STL vector. The above assumptions are defined by the following piece of C++ code:

struct cell {//defining a cell
    unsigned int place; //variable keeping the linear index of the cell's position on the lattice
    unsigned char p; //remaining proliferation potential
    bool is_stem; //is the cell a cancer stem cell?

    cell(unsigned int a=0, unsigned char b=0, bool c=0):place(a), p(b), is_stem(c){}; //constructor
};

static const int N = 2000; //lattice size
vector<cell> cells; //vector containing all cells present in the system

Each cell occupies (unsigned int = 4 bytes) + (unsigned char = 1 byte) + (bool = 1 byte) = 6 bytes of memory (ignoring struct padding). Hence, as we want to save 1 million cells, our output file shouldn’t exceed 6 megabytes (MB).

1. Outputting to text file

Information about cells is typically written to a text file in which each line describes one cell and variables values are separated by special character (standard .csv format). That format allows simulation output to be easily loaded to other programs for further analysis. Moreover, the code generating and loading the output files is very simple:

void save_cells_ASCII(string fileName) {
    ofstream file_op(fileName);

    for(int i=0; i<cells.size(); i++)
        file_op << cells.at(i).place << " " << (int)cells.at(i).p << " " << cells.at(i).is_stem << endl;

    file_op.close();
}

void load_cells_ASCII(string fileName) {
    ifstream myfile(fileName);
    string line;

    unsigned int place;
    int p;
    bool is_stem;

    while ( getline(myfile,line) ) {
        istringstream iss(line);
        iss >> place >> p >> is_stem;
        cells.push_back(cell(place, (unsigned char)p, is_stem));
    }

    myfile.close();
}

The above code generated a file of 11 MB in size – way more than expected from the cell definition. Why is that? The explanation is quite straightforward. In the generated text file each character occupies 1 byte of disk space – that is fine for the proliferation capacity (p) and the boolean variable (is_stem), as they occupy the same amount of space in memory. However, a cell position like 2234521, which when stored as an unsigned int occupies 4 bytes of space in memory, occupies 7 bytes of space on disk. In addition, each space between the written values in the text file occupies an additional byte. All of the above together generates the wasteful 11 MB file on disk.

2. Binary files

In order to have a file on disk occupying exactly the same amount of space as the variables in memory, we need to use binary files. It is a little bit more complicated to operate on binary files, but the idea is simple: we just write each variable byte by byte (1 byte = 8 bits). First, we write functions that 1) serialize all cells into a char (byte) array that will be saved; 2) reconstruct the cells vector from such an array.

void save_cells_binary(char** data, unsigned long* sizeData) {
    unsigned long Ncells = cells.size(); //number of cells
    int sizeOfCell = sizeof(int)+sizeof(char)+sizeof(bool);
    *sizeData = Ncells*sizeOfCell;
    *data = (char*)malloc( *sizeData );

    memcpy( *data, (char*)&Ncells, sizeof(unsigned long)); //write the number of cells at the beginning

    for(int i=0; i<Ncells; i++) {
        memcpy(*data+i*sizeOfCell+sizeof(unsigned long), (char*)&cells.at(i).place, sizeof(int));
        memcpy(*data+i*sizeOfCell+sizeof(int)+sizeof(unsigned long), (char*)&cells.at(i).p, sizeof(char));
        memcpy(*data+i*sizeOfCell+sizeof(int)+sizeof(char)+sizeof(unsigned long), (char*)&cells.at(i).is_stem, sizeof(bool));
    }
}

void load_cells_binary(char* data) {
    unsigned long Ncells;
    memcpy((char*)&Ncells, data, sizeof(unsigned long));
    int sizeOfCell = sizeof(int)+sizeof(char)+sizeof(bool);

    unsigned int place;
    unsigned char p;
    bool is_stem;

    for(int i=0; i<Ncells; i++) {
        memcpy((char*)&place, data+sizeof(unsigned long) + i*sizeOfCell, sizeof(int));
        memcpy((char*)&p, data+sizeof(unsigned long) + i*sizeOfCell+sizeof(int), sizeof(char));
        memcpy((char*)&is_stem, data+sizeof(unsigned long) + i*sizeOfCell+sizeof(int)+sizeof(char), sizeof(bool));

        cells.push_back(cell(place, p, is_stem));
    }
}

Now we only need functions that operate on the binary files and 1) read a char array from a binary file; 2) save a char array to a binary file.

void readWholeBinary(string fileName, char** data, unsigned long* sizeData) {
    ifstream file_op(fileName, ios::binary | ios::ate);
    *sizeData = file_op.tellg();
    file_op.seekg(0, ios::beg);
    *data = (char*)malloc( *sizeData );
    file_op.read(*data, *sizeData);
    file_op.close();
}

void saveWholeBinary(string fileName, char* data, unsigned long sizeData, unsigned long originalSize) {
    ofstream file_op(fileName, ios::binary | ios::out);
    if (originalSize > sizeData) //there was compression; we add the original file size at the beginning of the file
        file_op.write((char*)&originalSize, sizeof(unsigned long));
    file_op.write(data, sizeData);
    file_op.close();
}

Executing the above code generates a 5.51 MB file on disk (we had about 950,000 cells to save). This is about half of the space occupied by the text file!

3. Using zip compression

All of us have probably used zip compression to store or send files through e-mail. Why don’t we write the files generated by our simulation already compressed with zip? There are quite a few C++ zip libraries, but I’ve chosen the zlib library as it is quite easy to use (an excellent tutorial can be found here). It operates on the char arrays that we already generate on the way to saving binary files, so compressing the file before writing takes only a few lines of code.

void compressFunction (char** dataCompressed, char* dataOriginal, unsigned long* sizeDataCompressed, unsigned long sizeDataOriginal) {
    *sizeDataCompressed = ((sizeDataOriginal) * 1.1) + 12; //safe upper bound for the compressed buffer size
    *dataCompressed = (char*)malloc(*sizeDataCompressed);
    compress((Bytef*)(*dataCompressed),sizeDataCompressed,(Bytef*)dataOriginal,sizeDataOriginal );// size of source data in bytes
}

void uncompressFunction (char** dataUncompressed, char* dataCompressed, unsigned long sizeDataCompressed, unsigned long* sizeDataUncompressed) {
    memcpy((char*)sizeDataUncompressed, dataCompressed, sizeof(unsigned long));
    *dataUncompressed = (char*)malloc( *sizeDataUncompressed );
    uncompress((Bytef*)(*dataUncompressed), sizeDataUncompressed, (Bytef*)(dataCompressed+sizeof(unsigned long)), sizeDataCompressed-sizeof(unsigned long));
}

The binary file after compression takes 3.57 MB of space on disk – way better.

4. Collapsing the variables

We can easily save a byte per cell if we notice that in our simulations the proliferation capacity variable takes values smaller than 50, while an unsigned char can store values up to 255. Thus, we can add the value 100 to the remaining proliferation capacity if the cell is a cancer stem cell and forget about saving the boolean variable (when loading, values of 100 or more indicate stem cells).

        memcpy(*data+i*sizeOfCell+sizeof(unsigned long), (char*)&cells.at(i).place, sizeof(int));
        unsigned char val = cells.at(i).p + (unsigned char)cells.at(i).is_stem*100; //collapsing p and is_stem into a single byte
        memcpy(*data+i*sizeOfCell+sizeof(int)+sizeof(unsigned long), (char*)&val, sizeof(char));

5. Using information about the space to reduce the file size

Most of the disk space is used by the information about the cells’ locations on the lattice (unsigned int = 4 bytes). Can we skip writing that information for most of the cells? Yes, we can. In the code below we write the cells row by row and store only the location of the first cell in each lattice row. If there is an empty space between cells in a row we put the value 255, and if the remainder of the row is empty we put the value 254.

void save_cells_usingSpace(char** data, unsigned long* sizeData) {

    unsigned long Ncells = cells.size(); //number of cells
    int sizeOfCell = sizeof(int)+sizeof(char)+sizeof(bool);
    *sizeData = 0; //we will update that value
    *data = (char*)malloc( Ncells*sizeOfCell ); //we alloc more than we need

    unsigned char lat[N][N] = {0};
    int x, y;
    for (int i = 0; i<cells.size(); i++) {
        x = cells.at(i).place % N;
        y = floor((double)cells.at(i).place/(double)N);
        lat[x][y] = (unsigned char)cells.at(i).is_stem*(unsigned char)100 + (unsigned char)cells.at(i).p + (unsigned char)1;
    }

    memcpy(*data, (char*)&N, sizeof(int));
    *sizeData += sizeof(int);

    //255 means empty space, 254 means end of line
    unsigned char es = 255, el = 254;

    for (unsigned short int i = 0; i<N; i++) {

        int sep = 0;
        bool st = false;

        for (unsigned short int j = 0; j<N; j++) {
            if(st == false && lat[i][j]>0) {
                st = true;
                memcpy(*data+*sizeData,(char*)&i, sizeof(unsigned short int));
                memcpy(*data+*sizeData+sizeof(unsigned short int),(char*)&j, sizeof(unsigned short int));
                memcpy(*data+*sizeData+2*sizeof(unsigned short int),(char*)&lat[i][j], sizeof(unsigned char));
                *sizeData += 2*sizeof(unsigned short int)+sizeof(unsigned char);
            } else if (st == true && lat[i][j] == 0 ) { //we have empty space
                sep++;
            } else if (st==true && lat[i][j]>0){
                for (int k = 0; k<sep; k++) {
                     memcpy(*data+*sizeData,(char*)&es, sizeof(unsigned char));
                     *sizeData += sizeof(unsigned char);
                }

                sep = 0;
                memcpy(*data+*sizeData,(char*)&lat[i][j], sizeof(unsigned char));
                *sizeData += sizeof(unsigned char);
            }
        }

        if (st == true) {
            memcpy(*data+*sizeData,(char*)&el, sizeof(unsigned char));
            *sizeData += sizeof(unsigned char);
        }
    }
}

void load_cells_usingSpace(char* data, unsigned long dataSize) {
    unsigned short int x, y;
    unsigned char read;

    int Nm=0;
    memcpy((char*)&Nm, data, sizeof(int));

    unsigned long i = sizeof(int);

    while(i < dataSize) {

    memcpy((char*)&x,data+i, sizeof(unsigned short int));
    memcpy((char*)&y,data+i+sizeof(unsigned short int), sizeof(unsigned short int));
    i += 2*sizeof(unsigned short int);

        unsigned int add = 0;

        while (true) {
            memcpy((char*)&read,data+i, sizeof(unsigned char));
            i+=sizeof(unsigned char);
            if (read == 254) { //end of line
                break;
            } else if (read < 255) { //actual cell
                cells.push_back(cell((unsigned int)x + (unsigned int)y*Nm + add*Nm, (read-1) % 100, (read-1) >= 100));
            }
            add++;
        }
    }

}

The above approach, when combined with zip compression, generated a file that takes only 0.47 MB of disk space. That is less than a byte per cell!

6. Overall comparison

Figure 1 shows the amount of disk space used by files written using the different approaches presented above. As can be seen, the last approach uses about 25x less disk space than the standard text file approach. When saving simulation snapshots every 10 days, the total amount of generated data was reduced from 1 GB for the standard text file approach to about 35 MB!
[Figure 1. Comparison of the amount of used disk space.]

What is important, the binary file approach is way faster than the text file approach, even when using the zip compression algorithm; see Fig. 2.

[Figure 2. File read/write speed for different save/read functions.]

Of course, the trade-off is that it won’t be as easy to load our simulation files into other programs. However, we can always write wrapper functions that will allow other programs to interpret our files (see e.g. this post).

Parameter dependent implementations

When dealing with ordinary or partial differential equations, we know that existing numerical solvers have different performance depending on the specific mathematical problem. In particular, the performance of a given solver can change dramatically if we change the model parameters. Let us take the well-known logistic equation as an example. Let C(t) be the current number of cancer cells, r the growth rate, and K the maximal number of cancer cells that can be sustained by the host. Then the growth dynamics can be described by the following equation:

\frac{dC(t)}{dt}=rC(t)\left(1-\frac{C(t)}{K}\right).

The above equation has, of course, an analytical solution, but let us use a MATLAB ordinary differential equation solver to find a numerical solution. Generally, it is a good idea to start with a high-order general-purpose solver – in our case it will be ode45, based on an explicit Runge–Kutta method. We will also consider another solver, ode15s, which is designed for solving stiff systems. We fix the value of the carrying capacity K, the initial condition C(0), and the time interval in which we solve the equation, and investigate the performance of both solvers for different values of the growth rate r. The plots below show that for lower values of the growth rate ode45 outperforms ode15s, but the situation changes when the growth rate is large (while the relative difference between the calculated solutions stays small).
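A minimal sketch of such a timing experiment (the values of K, C(0), the time horizon, and the tested growth rates are my own choices):

%comparing ode45 and ode15s on the logistic equation
K = 1e7; C0 = 1e3; T = 100;
for r = [0.1 1 10 100]
    f = @(t,C) r*C*(1-C/K);
    tic; [t1,C1] = ode45(f,[0 T],C0); time45 = toc;
    tic; [t2,C2] = ode15s(f,[0 T],C0); time15s = toc;
    fprintf('r = %5.1f: ode45 %.3f s, ode15s %.3f s\n', r, time45, time15s);
end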

[Figures: computation times of both solvers and relative difference between the calculated solutions]

That transition in solver performance can be explained when one looks at how the solution changes for different values of the growth rate (see figure below) – the larger the growth rate, the steeper the solution. Stiff solvers, like ode15s, are designed to deal effectively with such steep transitions.

[Figure: solutions of the logistic equation for different growth rates]

So, if we need to consider the parameter-dependent performance of numerical solvers for differential equations, should we do the same for agent-based models of cancer growth? Should we create parameter-dependent implementations and switch between them on the fly? I think yes, and I’ll try to convince you with the simple example below.

Let us consider a very simple CA: a cell can either divide or migrate if there is a free spot – nothing else. We can adapt the very basic C++ implementation from the previous post as a baseline: we have a boolean matrix (in which the value true indicates that the spot is occupied by a cell) and an additional vector of cells keeping the locations of all viable cells present in the system. At each simulation step we go through all of the cells in the vector in random order and decide about the fate of each cell.

Let us tweak the code so that at each iteration we drop from memory (from the vector) cells that are quiescent (completely surrounded). If a cell stays quiescent for a prolonged time, we save some computational time by skipping its neighborhood search. However, after each migration event we need to add some cells back to the vector – that is an additional cost, generated by searching the lattice again. The essential part of the tweaked code in C++ is:

   for (int i=0; i<nSteps; i++) {
        random_shuffle(cells.begin(), cells.end()); //shuffling cells
        while (!cells.empty()) {
            currCell=cells.back(); //pick the cell
            cells.pop_back();
            newSite = returnEmptyPlace(currCell);
            if (newSite) {//if there is a free spot in neigh
                if ((double)rand()/(double)RAND_MAX < pDiv) {//cell decides to proliferate
                    lattice[newSite] = true;
                    cellsTmp.push_back(currCell);
                    cellsTmp.push_back(newSite);
                    nC++;
                } else if ((double)rand()/(double)RAND_MAX < pmig) {//cell decides to migrate
                    toDelete.push_back(currCell);
                    lattice[newSite] = true;
                } else {//cell is doing nothing
                    cellsTmp.push_back(currCell);
                }
            } 
        }
        cells.swap(cellsTmp);
        
        //freeing spots after cell migration and adding cells to vector
        for(int j =0; j<toDelete.size(); j++)
            lattice[toDelete.at(j)] = false;
    
        c = !toDelete.empty();
        while(!toDelete.empty()) {
            newSite = toDelete.back();
            toDelete.pop_back();
            //adding cells that can have space after migration event
            for(int j=0;j<8;j++) {//searching through neighborhood
                if (newSite+indcNeigh[j]>=0) {
                    cells.push_back(newSite+indcNeigh[j]);
                }
            }
        }
        
        //removing duplicates in cells vector
        if (c) {
            sort( cells.begin(), cells.end() );
            cells.erase( unique( cells.begin(), cells.end() ), cells.end() );
        }
    }

Depending on the proliferation/migration rates we can imagine that the tweaked code will have different performance. For a low probability of a migration event only the cells on the tumor boundary have some free space (green cells in the left figure below), and we save a lot of time by dropping quiescent cells (blue cells in the figures below). For higher migration rates, however, less than 30% of the cells are quiescent at each proliferation step (compare the right figure below) and the benefit of dropping cells can be outweighed by the post-migration update steps.
[Figures: solid tumor morphology (left) and diffusive tumor morphology (right)]

The plot below shows the computational time needed to reach 500,000 cells when using the baseline code (all cells, blue line) and the tweaked one (only proliferating cells, dashed green line) for a fixed proliferation probability. For lower proliferation rates the change in performance would occur even earlier.

[Figure: computational time to reach 500,000 cells for both implementations]

I think that this simple example shows that we always need to think about how our model will behave before choosing the way to implement it.

Wrapping C++ code of agent-based tumor growth model into MATLAB

Last week I posted a C++ implementation of a basic cancer stem cell driven tumor growth model. About 100 lines of code allowed us to perform simulations, but there was nothing about exporting the results, doing more complicated analysis, visualization, or performing data fitting. Of course, each of those tasks can be performed by writing another large piece of code.

Two weeks ago I posted a MATLAB code for a very similar model, which was quite fast, but still a lot slower than the C++ implementation. On the other hand, it was straightforward to visualize the tumor, and it’s not much of a hassle to perform more complicated analysis or data fitting using the variety of MATLAB’s built-in functions.

Each of those two approaches has its own advantages: C++ is fast, MATLAB has a lot of built-in tools.

Today I will show how to take the best out of both approaches, i.e. I’ll show how to easily wrap our C++ code into MATLAB. The whole C++ based simulation will be invoked from MATLAB as a function and the results will be directly available in it.

First we need to add the necessary headers.

#include "mex.h"
#include <matrix.h>

Then we need to modify our original definition of the global variables (see the previous post) so that they can be adjusted based on the input parameters (for example, the lattice has to be dynamically allocated).

int N;
bool* lattice;
vector<cell> cells;

char pmax;
double pDiv, alpha, ps, pmig; 
int* indcNeigh;

Then the only thing we need to do is to delete the old main() function and use mexFunction() instead. In mexFunction() we first read the parameters passed from MATLAB (the input arguments of the function). Then we assign the memory for the lattice and set other auxiliary variables. Finally, after performing the simulation, we write the variables to the output, so that MATLAB can see the results.

void mexFunction(
		 int          nlhs, //number of output arguments
		 mxArray      *plhs[], //output arrays
		 int          nrhs, //number of input arguments
		 const mxArray *prhs[] //input arrays
		 )
{
    /*READING INPUT PARAMETERS*/
    double *par=mxGetPr(prhs[0]); //model parameters, first input variable
    double *settings=mxGetPr(prhs[1]); //simulation settings, second input variable

    int Nsteps = settings[0]; //number of simulation steps to perform
    N=(int)settings[1]; //size of the lattice
    pmax = par[0]; //divisions before proliferative capacity exhaustion 
    pDiv = par[1]; //division probability 
    alpha = par[2]; //spontaneous death probability 
    ps = par[3]; //probability of symmetric division 
    pmig = par[4]; //probability of migration
    
    /*CREATING LATTICE AND AUXILARY VARIABLES*/
    //creating lattice with a given size
    lattice = new bool [N*N];
    fill_n(lattice, N*N, false);
    
    indcNeigh = new int [8];//cell neighborhood
    //assigning values
    indcNeigh[0] = -N-1; indcNeigh[1] = -N; indcNeigh[2] = -N+1;
    indcNeigh[3] = -1; indcNeigh[4] = 1; indcNeigh[5] = N-1;
    indcNeigh[6] = N; indcNeigh[7] = N+1;

    /*OLD PIECE OF MAIN() FUNCTION*/	
    srand(time(NULL)); //initialize random number generator
    initialize(); //initialize CA
    simulate(Nsteps); //simulate CA
    
    /*WRITING TO OUTPUT VARIABLES*/
    //1) writing the lattice as a logical matrix
	plhs[0] = mxCreateLogicalMatrix(N,N); //lattice
    mxLogical *ptr = mxGetLogicals(plhs[0]);
    for(int i=0; i<N*N; i++) 
        ptr[i] = lattice[i];

    //2) writing the cells vector as a vector
    plhs[1] = mxCreateDoubleMatrix(1,cells.size()*3,mxREAL);
    double* ptr2 = mxGetPr(plhs[1]);
    for(int i=0; i<cells.size(); i++) {
        ptr2[3*i] = cells.at(i).place;
        ptr2[3*i+1] = cells.at(i).p;
        ptr2[3*i+2] = cells.at(i).is_stem;
    }
}

After making all of the changes we can compile our .cpp file using the MATLAB command “mex CA.cpp”. Remember, all other pieces of the C++ code associated with the model itself remain the same – there is no need to change anything.

From now on we can perform our simulation in MATLAB environment by invoking the following command

[lattice, cells] = CA([pmax, pdiv, alpha, ps,pmig],[nSteps, N]);

and further use all of MATLAB’s benefits, especially its data fitting features. At the same time, all of the simulations will be performed at C++ speed! Imagine using the Parallel Computing Toolbox in MATLAB to harness multiple CPUs with only a few lines of code.
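For example, a quick sketch of running the compiled function and visualizing the resulting lattice (the parameter values below are illustrative, not the ones from the original post):

%running the wrapped simulation and visualizing the tumor (illustrative values)
pmax = 10; pdiv = 1/24; alpha = 0.05; ps = 0.05; pmig = 10/24;
nSteps = 1000; N = 1000;
[lattice, cells] = CA([pmax, pdiv, alpha, ps, pmig], [nSteps, N]);
imagesc(lattice); axis square %true = occupied lattice site
title(sprintf('%d cells', numel(cells)/3)); %the cells vector stores 3 numbers per cell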