Cancer Stem Cell CA in Python

Python doesn’t seem to be the first programming language people reach for when developing cellular automata models. However, given that Python is an object-oriented language that is easy to read and write, it might actually be ideal for such models, especially if you prefer to think from the perspective of the agent (if you’d rather model using matrices, you can do that too with Python’s NumPy package). As we’ll see, it can also be pretty fast. Below I’ll cover one way to implement the Cancer Stem Cell CA described in the Cancer Stem Cell CA in Matlab post.

GETTING PYTHON

If you don’t have Python, the easiest way to get it and nearly all of its scientific packages is to download the Anaconda distribution from https://store.continuum.io/cshop/anaconda/. This installation and all of its modules are kept in their own folder, so it won’t interfere with the Python that comes bundled with your OS. It also means that uninstalling is just a matter of dropping the folder in the trash. You may also want to download an IDE, a list of which can be found on the Anaconda website (http://docs.continuum.io/anaconda/ide_integration; my personal favorite is PyCharm).

As far as versions are concerned, the “go-to” version is Python 2.7. However, 2.7 is no longer being developed, so you will eventually have to migrate to Python 3. For a long time many avoided this upgrade because several of the most important packages hadn’t been ported over, but that has since changed, and most of the critical packages like NumPy, SciPy, Matplotlib, Pandas, etc. are now available for Python 3. If you choose to go with Python 2, it’s important to note that when dividing you need to use a float in the denominator, as dividing one integer by another returns a floored integer. This isn’t the case in Python 3.

OVERVIEW

Before we begin coding, I’d like to provide an overview of how the model will be implemented. Instead of using a matrix to track cells, we’ll create a dictionary of cells. The keys of this dictionary will be the cells’ positions, while the values will be the cells themselves. Each cell we create will know the positions of its neighbors, and the cell will use the dictionary of cells to look up those neighbors (a dictionary in Python is really just a hash table, so this look-up is fairly fast). A nice advantage of using a dictionary is that we can easily add and remove entries, giving us the ability to iterate only over cells, not over empty positions in a matrix. This effectively allows the domain to grow and shrink according to how many cells are present. During each round of the model, we’ll shuffle the values in this dictionary and iterate through them, having each cell execute its actions.

One final note: I’ve taken screenshots of the code to make it easier to read, but the raw code can be found in this .doc file: CSC_CA_in_python. It’s not possible to upload .py files, so if you download the code, just change the extension from .doc to .py and run the CA. Or just copy and paste the code into your own file and run it.

GETTING THINGS READY

Let’s start coding by importing the packages we will be using. The first is NumPy, Python’s package for scientific computing (http://www.numpy.org), a package you’ll almost always need. Second, we’ll import the package we’ll use to visualize our results, matplotlib (http://matplotlib.org). Next, we’ll import Numba, a ‘just-in-time’ compiler that can dramatically speed up sections of code in which numerical calculations are performed (http://numba.pydata.org). Finally, importing the random module will allow us to randomly select cells.

imports
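The code itself is in the screenshot above and in the linked file; a minimal sketch of what those imports amount to (the exact form in the original file may differ) is:

import random

import numpy as np
import matplotlib.pyplot as plt
from numba import jit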

We’ll begin writing up our CA by defining a few methods that we’ll use to construct the world of the CA. First, let’s construct the dictionary that maps each central position to the list of positions in its Moore neighborhood. To do this, we can use both list comprehension and dictionary comprehension. These approaches allow us to build lists or dictionaries without first creating an empty container and then populating it with a messy for loop. Not only is a comprehension more convenient and cleaner, it is also faster than the alternative.

builders
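As a rough sketch of the idea, assuming (purely for illustration) that positions are stored as (x, y) tuples on an n-by-n grid, the builders could look like this; the names and details in the original screenshot may differ:

def build_neighbor_pos_list(pos, n):
    # list comprehension: every position in the Moore neighborhood of pos,
    # excluding pos itself and anything that falls outside the grid
    x, y = pos
    return [(nx, ny)
            for nx in range(x - 1, x + 2)
            for ny in range(y - 1, y + 2)
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) != (x, y)]


def build_dictionary_of_neighbor_pos_lists(n):
    # dictionary comprehension: map every position on the grid to its neighbor list
    return {(x, y): build_neighbor_pos_list((x, y), n)
            for x in range(n)
            for y in range(n)}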

SPEEDUP TRICKS

Here we’ll set up a few things to speed up our CA. We’ll frequently request random binomial numbers and shuffle cells, so we bind those functions to names once, here at the top, so the look-ups don’t have to be repeated each time they are called, something that would slow down our model. Next, we’ll speed up the generation of random binomial numbers (something we’ll do frequently) by wrapping the binomial call in methods compiled with Numba’s just-in-time compiler (jit). To do this, all we have to do is create a wrapper method and add the @jit decorator. This small modification provides a dramatic speed-up: without it the CA and plotting take 16 minutes, but adding the simple @jit decorator brings the run time down to around 5 minutes! Who says Python is always slow 🙂

speedups
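A sketch of those speed-ups, using assumed wrapper names (divide_q, die_q) that may not match the original screenshot:

# bind frequently used functions to names at the top so the attribute
# look-ups are not repeated every time they are called
shuffle = random.shuffle
binomial = np.random.binomial


@jit
def divide_q(prob_division):
    # 1 if the cell is lucky and gets a chance to divide this round, else 0
    return np.random.binomial(1, prob_division)


@jit
def die_q(prob_death):
    # 1 if the cell dies spontaneously this round, else 0
    return np.random.binomial(1, prob_death)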

CANCER CELL CLASS

cancer_cell_class

OK, now that everything is set up, we’ll start creating our Cancer Cell class, which defines all of the cancer cell’s attributes and behaviors. First, we define the mandatory __init__ method, which defines the attributes each newly created cancer cell will have. In this case, each new cancer cell will be assigned a position (pos), the number of divisions it has remaining (divisions_remaining), and the list containing the positions in its neighborhood (neighbor_pos_list). This list is assigned simply by using the cell’s position to look up that list in the dictionary_of_neighbor_pos_lists that we created earlier.

Now that we’ve defined our cell’s attributes, we can start defining what exactly our cells will do. Because this is a fairly simple CA, we’ll just define all of the behaviors in one method (sort of…), which we’ll call act. First we’ll write up the bit of code for cell division. Recall that a cancer cell can only divide if the cell is lucky and there is an empty space in its neighborhood. Here, if divide_q is 1, the cell is lucky and has a chance to divide. If the cell is indeed lucky, it then uses the locate_empty_neighbor_position method to find all of the free spaces in its neighborhood (the second requirement for division). Jumping to that locate_empty_neighbor_position method, the first thing we do is create a list of neighborhood positions that are NOT in the cell dictionary, that is, spaces which are currently unoccupied. If there are empty positions, a randomly selected one is returned. Moving back to the act method, a new Cancer Cell is created, assigned that random position, made to look up its neighbor list in the dictionary_of_neighbor_pos_lists, and then added to the cell dictionary. Finally, the dividing cell decreases the number of divisions it can undergo by one.

After a cell has tried to divide, it determines whether it needs to die. Recall that a cell dies for one of two reasons: 1) it has exceeded its maximum proliferative potential, that is, divisions_remaining <= 0; or 2) it dies spontaneously, that is, die_q = 1. If either of these conditions is true, the cell and its position are deleted from the cell dictionary, removing it from the CA and freeing up space.
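Pulling the description above together into a rough sketch (the constants MAX_DIVISIONS, PROB_DIVISION, and PROB_SPONTANEOUS_DEATH are assumed names, defined later in the main block, and the original screenshot may organize things differently):

class CancerCell(object):
    PLOT_ID = 1  # value written into the visualization matrix for normal cells

    def __init__(self, pos, dictionary_of_neighbor_pos_lists):
        self.pos = pos
        self.divisions_remaining = MAX_DIVISIONS
        self.neighbor_pos_list = dictionary_of_neighbor_pos_lists[pos]

    def locate_empty_neighbor_position(self, cell_dictionary):
        # neighborhood positions that are NOT keys of the cell dictionary are unoccupied
        empty_neighbor_pos_list = [pos for pos in self.neighbor_pos_list
                                   if pos not in cell_dictionary]
        if empty_neighbor_pos_list:
            return random.choice(empty_neighbor_pos_list)
        return None

    def act(self, cell_dictionary, dictionary_of_neighbor_pos_lists):
        # division: requires both luck and a free space in the neighborhood
        if divide_q(PROB_DIVISION) == 1:
            empty_pos = self.locate_empty_neighbor_position(cell_dictionary)
            if empty_pos is not None:
                daughter = CancerCell(empty_pos, dictionary_of_neighbor_pos_lists)
                cell_dictionary[empty_pos] = daughter
                self.divisions_remaining -= 1

        # death: proliferative potential exhausted, or spontaneous death
        if self.divisions_remaining <= 0 or die_q(PROB_SPONTANEOUS_DEATH) == 1:
            del cell_dictionary[self.pos]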

CANCER STEM CELL CLASS

cancer_stem_cell_class

Now that our Cancer Cell class is created, we can start defining the behaviors of Cancer Stem Cells. We could clog up our code by adding a bunch of if/else statements to the Cancer Cell’s act method, but instead we’ll create a Cancer Stem Cell class, which will be a subclass of the Cancer Cell class. Classes in Python support inheritance, which means that our new Cancer Stem Cell class has all of the attributes and methods of the Cancer Cell class, so we don’t need to redefine all of the attributes or rewrite the locate_empty_neighbor_position method. The only attribute we’ll redefine is the PLOT_ID, which we’ll use when visualizing the results of our CA. Next, we’ll redefine the act method. The process of division is the same as before, except that the stem cell can divide either symmetrically or asymmetrically. Which type of division occurs is determined by calling the divide_symmetrically_q method. If divide_symmetrically is 1, a new Cancer Stem Cell is created and added to the cell dictionary; if divide_symmetrically is 0, a normal Cancer Cell is created and added instead. Now we have our Cancer Cells and Cancer Stem Cells set up and ready for action!
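A corresponding sketch of the subclass, continuing with the assumed names from the sketches above (PROB_SYMMETRIC_DIVISION and divide_symmetrically_q are placeholders; this sketch also treats the stem cell as immortal, which the original code may handle more explicitly):

@jit
def divide_symmetrically_q(prob_symmetric_division):
    # 1 if a dividing stem cell produces another stem cell, else 0
    return np.random.binomial(1, prob_symmetric_division)


class CancerStemCell(CancerCell):
    PLOT_ID = 2  # stem cells get their own color in the visualization

    def act(self, cell_dictionary, dictionary_of_neighbor_pos_lists):
        if divide_q(PROB_DIVISION) == 1:
            empty_pos = self.locate_empty_neighbor_position(cell_dictionary)
            if empty_pos is not None:
                if divide_symmetrically_q(PROB_SYMMETRIC_DIVISION) == 1:
                    # symmetric division: the daughter is another stem cell
                    daughter = CancerStemCell(empty_pos, dictionary_of_neighbor_pos_lists)
                else:
                    # asymmetric division: the daughter is a normal cancer cell
                    daughter = CancerCell(empty_pos, dictionary_of_neighbor_pos_lists)
                cell_dictionary[empty_pos] = daughter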

RUN THE MODEL

run2_7

The last thing we have to do is set up the initial conditions of our CA, which we’ll do under the if __name__ == “__main__” line. First, we define the constants at the top. Next, we create the initial Cancer Stem Cell, which is positioned in the middle of the world. We do this by initializing this first stem cell with the position at the center of the world and DICTIONARY_OF_NEIGHBOR_POS_LISTS, and then adding it to the cell dictionary. Next, we copy the cells in the cell dictionary to a list (line 189 for Python 2, or line 192 for Python 3). While copying the values to a list isn’t ideal, it does provide two advantages. First, we can shuffle this list, allowing us to move through our cells in random order. Second, this list is what allows us to change the cell dictionary on the fly, as the length of a dictionary cannot change while iterating through it. Finally, we loop through our list of randomly ordered cells, having each one carry out its actions by calling its act method, a process that is repeated until the maximum number of repetitions has been reached.
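A sketch of that main block, continuing with the assumed names from the earlier sketches (the constant values below are placeholders, not the ones used in the original post):

if __name__ == "__main__":
    # placeholder constants -- not the values from the original post
    N = 500                          # the world is N x N positions
    MAX_REPS = 500                   # number of rounds of the CA
    MAX_DIVISIONS = 10               # proliferative potential of normal cells
    PROB_DIVISION = 0.1
    PROB_SPONTANEOUS_DEATH = 0.01
    PROB_SYMMETRIC_DIVISION = 0.01

    DICTIONARY_OF_NEIGHBOR_POS_LISTS = build_dictionary_of_neighbor_pos_lists(N)

    # create the first Cancer Stem Cell in the middle of the world
    center_pos = (N // 2, N // 2)
    first_cell = CancerStemCell(center_pos, DICTIONARY_OF_NEIGHBOR_POS_LISTS)
    cell_dictionary = {center_pos: first_cell}

    for rep in range(MAX_REPS):
        # copy the cells into a list so that we can shuffle them, and so that
        # the dictionary itself is free to grow and shrink while we iterate
        cell_list = list(cell_dictionary.values())
        shuffle(cell_list)
        for cell in cell_list:
            cell.act(cell_dictionary, DICTIONARY_OF_NEIGHBOR_POS_LISTS)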

VISUALIZE RESULTS

visualization

After we’ve completed our simulation, we’d like to visualize the results, so we create a visualization_matrix, which is just a matrix of zeros. Because we don’t care about the order of our dictionary and won’t be modifying its length, we simply iterate through the cells in the cell dictionary. Each cell then adds its PLOT_ID number to the matrix, so that stem cells will be one color and normal cells another. Finally, we use matplotlib’s imshow and show methods to visualize the results:
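A sketch of that visualization step, again with the assumed names and (x, y) tuple positions used above:

    visualization_matrix = np.zeros((N, N))
    for cell in cell_dictionary.values():
        # each cell writes its PLOT_ID into the matrix, so stem cells
        # and normal cells end up as different colors
        x, y = cell.pos
        visualization_matrix[x, y] = cell.PLOT_ID

    plt.imshow(visualization_matrix, interpolation='none')
    plt.show()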

CSC_CA

CONCLUSION

Using Python has its pros and cons. The pros are that it’s easy to read and write, reducing development time and making it easier to share code. The con is that, in its native form, Python is not the fastest. However, as we’ve seen, there are tools to increase performance. Numba is only one of them; other popular options include NumbaPro, CUDA, Cython, and PyPy. There are also ways to take advantage of multithreading and multiprocessing to speed up your models even more. So, if it’s possible to use these tools, you can have both fast development and fast execution! I hope you found this post helpful, and that maybe you’ll even consider using Python for your next project.

Tumor growth under angiogenic signaling – data fitting (part 1)

In today’s post I will show how to fit a model to experimental data using MATLAB with the Optimization Toolbox. It will be one of many posts in which I will try to illustrate problems with parameter estimation. Let’s start by introducing the model.

Hahnfeldt and colleagues proposed a mathematical model based on the concept that tumor growth and host blood vessel support are bidirectionally modulated (Hahnfeldt et al., Cancer Research, 1999). Tumor volume (V) and effective vascular support (K), which defines the tumor carrying capacity, are time-dependent variables described by a set of coupled ordinary differential equations (ODEs). Tumor growth is assumed to be governed by the Gompertz law:

\frac{dV}{dt}=-\lambda_1V\ln\left(\frac{V}{K}\right),
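As a quick aside (not from the original post, but a standard property of the Gompertz equation): for constant K this equation can be solved exactly,

V(t)=K\left(\frac{V_0}{K}\right)^{e^{-\lambda_1 t}},

which makes explicit that the volume grows rapidly at first and then levels off at the carrying capacity.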

With constant effective vascular support K=Kmax, initial rapid tumor growth is followed by a slowdown as the tumor volume approaches the carrying capacity Kmax. To account for the reciprocal interaction of the tumor with the host vasculature, the carrying capacity is described by a variable K that is modulated by the tumor volume V:

\frac{dK}{dt}=-\lambda_2K+bV-dKV^{2/3},

where -\lambda_2K represents the spontaneous loss of functional vasculature, bV represents stimulation of vessel growth by factors secreted by the tumor proportionally to its size, and dKV^{2/3} describes endogenous inhibition of previously generated vasculature by factors secreted by the tumor proportionally to the tumor surface-to-volume ratio.

We will try to see how well the model works on actual experimental data (not the data from the paper). A quick Google image search using the term “mouse bevacizumab” (bevacizumab is an antiangiogenic drug) returned a figure from the paper by Tsukihara and others (Tsukihara et al., Anticancer Research, 2015) plotting relative tumor volume over time in the presence or absence of bevacizumab. We will use that figure to grab the experimental data. There is a very good tool available on MATLAB Central called “grabit” that allows you to easily collect data from a plot (screenshot below).

grabit

After calibrating the axes, we can grab the average values and the standard deviations (for each point, first grab the average and then the top of the standard deviation bar; skip the point at day 0), first for the control experiment and then for bevacizumab alone. After that it is enough to save the data into a file, e.g. measurements.mat. First we need to write a function that will read the saved file and restructure it into the format we want to work with.

function data = processMeasurements( file )
    %load the file with the measurements
    tmp = load(file); 
    %x values are in the first column and we need to take 
    %every second point (we have taken average and standard deviation at the same point).
    %We also round the values to the full days
    data.t = round(tmp.measurements(1:2:end/2,1))'; 
    %now we grab the averages. In first row we will have data for control
    %and in the second row for bevacizumab
    data.average = reshape(tmp.measurements(1:2:end,2),[],2)';
    %now we grab the standard deviations. In first row we will have data for control
    %and in the second row for bevacizumab
    data.std = reshape(tmp.measurements(2:2:end,2)-tmp.measurements(1:2:end,2),[],2)';
end

To make sure that everything is structured properly, we can write a function that plots the grabbed data,

function plotData(data)
    figure(1)
    clf
    hold on
    errorbar(data.t,data.average(1,:), data.std(1,:),'Color','r','LineStyle','none','Marker','o');
    errorbar(data.t,data.average(2,:), data.std(2,:),'Color','b','LineStyle','none','Marker','s');
    hold off
    xlabel('day');
    ylabel('RTV (mean and std)');
    legend({'Control','Bevacizumab'},'Location','NorthWest')
end

which should result in the following plot

plottedData

Now we can start working on the code that will fit the Hahnfeldt model to the data. In part 1 of this post we will focus only on fitting to the control data – fitting both curves will be covered in part 2. The main script that we will run is the following

%read and restructure experimental data
data = processMeasurements('measurements.mat');

%set initial values for parameters and initial conditions
params = initializeParams();

%set which params should be estimated
%names need to be consistent with the params structure!
%in most of the studies based on Hahnfeldt model lambda2 is assumed to be 0
%we will do the same and won't fit its value
paramsToFit = {'lambda1','b','d'};

%perform data fitting
[params, err] = fitParameters(params,data,paramsToFit);

%solve the model for new parameters
sol = solveModel(data.t(end),params);

%plot data together with solution
plotDataAndSol(data, sol);

We have already implemented the processMeasurements function, so we can write down the initializeParams function.

function params = initializeParams()

    %model parameters (we take the values from Hahnfeldt et al. paper)
    params.lambda1 = 0.192;
    params.lambda2 = 0;
    params.b = 5.85;
    params.d = 0.00873;
    
    %initialConditions
    %in the experimental paper the implanted tumors had a volume of about 8 cubic millimeters
    params.V0 = 8;
    %the initial tumor is not vascularized; it is typically assumed that an avascular tumor can grow to about 1 mm in radius
    params.K0 = 4/3*pi;
    
end

Now we need to code the fitParameters function. We will use the lsqnonlin function provided in the Optimization Toolbox, as it handles constraints on the parameter values by default (we assume that all parameters are non-negative).

function [params, err] = fitParameters(params,data,paramsToFit)

    %preparing initial parameter values
    x0 = zeros(size(paramsToFit));
    for i = 1:length(paramsToFit)
        x0(i) = params.(paramsToFit{i});
    end
    
    opt = optimset('Display','iter');
    %Fmin is the function that returns the fit error for a given set of parameters
    %it will need the data, params and paramsToFit variables, so we pass them
    %as additional arguments to the lsqnonlin function
    %(they will be passed forward to Fmin)
    flag = 0;
    while flag < 1 %repeat until the solver reaches its tolerance or converges
        [x, fval, ~, flag] = lsqnonlin(@Fmin,x0,zeros(size(x0)),[],opt,data,params,paramsToFit);
        x0 = x;
    end
    
    %rewriting x (the vector of fitted values) back into the params structure
    for i = 1:length(paramsToFit)
        params.(paramsToFit{i}) = x(i);
    end
    
end

The Fmin function returns the vector of differences between the model solution and the experimental data; lsqnonlin minimizes the sum of their squares.

function err = Fmin( x, data, params, paramsToFit )

    %rewriting x into the params structure
    for i = 1:length(paramsToFit)
        params.(paramsToFit{i}) = x(i);
    end

    %solve the model
    sol = solveModel([0 data.t],params);
    %calculate the relative tumor volume at the measurement points
    RTV = sol.y(1,2:end)./sol.y(1,1);
    %calculate the vector of residuals
    err = RTV - data.average(1,:);

end

Now we need to write the solveModel function, which returns the model solution for a given set of parameters and initial conditions.

function sol = solveModel( T, params )
    %we assume that initial conditions are in the params structure

    %we use MATLAB built-in ODE solver to calculate the solution
    sol = ode45(@ODE,[0 T(end)],[params.V0 params.K0]);

    %if user provided time mesh instead of only final point
    %we evaluate the solution on that mesh
    if length(T)>1
        sol.y = deval(sol,T);
        sol.x = T;
    end

    %model definition
    function dy = ODE(~,y)
       dy = zeros(2,1);
       dy(1) = -params.lambda1*y(1)*log(y(1)/y(2));
       dy(2) = -params.lambda2*y(2)+params.b*y(1)-params.d*y(2)*y(1)^(2/3);
    end

end

Finally, we write the function that plots the solution and the data together.

function plotDataAndSol(data,sol)

    figure(2)
    clf
    hold on
    errorbar(data.t,data.average(1,:), data.std(1,:),'Color','r','LineStyle','none','Marker','o');
    plot(sol.x,sol.y(1,:)./sol.y(1,1),'k');
    hold off
    xlabel('day');
    ylabel('RTV (mean and std)');
    legend({'Data','Model'})

end

Now we are ready to run the main script. It produces the following plot, which shows a nice correspondence between the model solution and the experimental data,

fitResults

with estimated parameter values \lambda_1 = 0.0785, b = 1509.7, d = 13.95. It seems that there is nothing more to do – we estimated the parameters and nicely fitted the model to the experimental data. However…

Let’s set aside the fact that we grabbed the data from a plot – we won’t consider that source of noise. We all know, however, that there is uncertainty in experimental measurements, and it is hard to estimate. Let us perform the same fitting procedure, but on data in which each point is perturbed by up to 1% (uniformly) – quite a small measurement error!

Plotting the estimated parameter values (see below) shows that even such a small error introduces huge variability in the estimates. Parameter b takes values ranging from 384 to 1720 and d from 3 to 11. How certain can we be about our parameter estimates then? And this is quite a simple model with “clean” experimental data…

scatterPlot

Quick implementation of hexagonal lattice for ABM

In agent-based modeling we are typically more interested in the rules governing cell fate than in the basic setup of the lattice (unless we consider off-lattice models). However, the particular setting of the computational domain can have an effect on the model dynamics. In 2D we typically consider the following lattices and neighborhoods:

picture1

with the rectangular grid with Moore neighborhood probably being the most frequently utilized. With the von Neumann neighborhood we have the fewest neighbors, so cells become spatially saturated earlier. The problem with the Moore neighborhood is that the distance to all of the neighboring sites is not the same. The hexagonal grid has good properties (equal distances, 6 neighbors), but is less frequently utilized because its implementation is more involved. In today’s post I will show that there is essentially no difference in implementation between all of those lattices.

We will use the code for the basic ABM posted before as a template (MATLAB version here and C++ version here). In both cases we considered a rectangular lattice with Moore neighborhood. If we set the migration probability to zero, set a large value of proliferation capacity, and set the spontaneous death rate to zero, i.e. we essentially simulate only division events, the visualization of the simulated tumor reveals what kind of neighborhood we assumed:

square

Let us see how the visualization looks for different types of neighborhoods/lattices.

In both implementations we had a separate array defining the neighborhood of a cell. Thus, in order to switch the code to the von Neumann neighborhood we need to change only two consecutive lines in the MATLAB code, in which we define the neighborhood and the permutations table:

aux = int32([-N -1 1 N])'; %indices to neighborhood
Pms = perms(uint8(1:4))'; %permutations

In C++ implementation we need to modify the definition of the neighborhood:

static const int indcNeigh[] = {-N, -1, 1, N};//neighborhood

and the for loop in the returnEmptyPlace function:

for(int j=0;j<4;j++) {//searching through neighborhood
        if (!lattice[indx+indcNeigh[j]]) {
            neigh[nF] = indx+indcNeigh[j];
            nF++;
        }
}

As could be expected, switching to the von Neumann neighborhood makes the simulated tumor diamond-shaped.

diamond

Let us now consider the hexagonal lattice. The very first observation we need to make is that a hexagonal lattice is essentially a rectangular lattice with shifted odd (or even) rows and two definitions of the neighborhood (one for cells in odd rows, one for cells in even rows), see the picture below

concept

Thus, in the modified implementation we again only need to change the definition of the neighborhood and add a condition for choosing the proper one. In MATLAB this can be achieved by introducing two definitions of the neighborhood

auxR = int32([-N -1 1 N-1 N N+1])'; %indices to neighborhood (odd rows)
auxL = int32([-N -1 1 -N-1 N -N+1])'; %indices to neighborhood (even rows)

Pms = perms(uint8(1:6))'; %permutations

and modify the lines in the main loop in which we create the neighborhood for all viable cells (i.e. create the variable S)

    odd = mod(cells,2) ~= 0; %selecting cells in the odd rows
    S = zeros(6,length(cells));
    if any(odd) %creating neighborhood for the cells in the odd rows
        SR = bsxfun(@plus,cells(odd),auxR(Pms(:,randi(nP,1,sum(odd)))));
        S(:,odd) = SR;
    end
    even = ~odd;
    if any(even) %creating neighborhood for the cells in the even rows
        SL = bsxfun(@plus,cells(even),auxL(Pms(:,randi(nP,1,sum(even)))));
        S(:,even) = SL;
    end

In C++ the changes are even easier. We again introduce two neighborhoods:

static const int indcNeighR[] = {-N,-1,1,N-1,N,N+1};//neighborhood
static const int indcNeighL[] = {-N,-1,1,-N-1,N,-N+1};//neighborhood

and modify the returnEmptyPlace function

for(int j=0;j<6;j++) {//searching through neighborhood
        if (indx % 2) {//if odd row
            if (!lattice[indx+indcNeighR[j]]) {
                neigh[nF] = indx+indcNeighR[j];
                nF++;
            }
        } else {//if even row
            if (!lattice[indx+indcNeighL[j]]) {
                neigh[nF] = indx+indcNeighL[j];
                nF++;
            }
        }
}

We might also want to change the code for visualization. In the previous settings it was enough to represent each cell as a pixel in the image. In the case of the hexagonal lattice, to be correct we need to somehow shift the pixels in the odd (or even) rows. The easiest way to achieve that is to represent each cell as 4 pixels and adjust its position according to the row number:

drawing

The modified visualization code in MATLAB is the following

function visualizeHex( N, cells, cellsIsStem, cellsPmax, pmax )

    %select cells in the odd rows
    odd = mod(cells,2) ~= 0;

    M = ones(2*N,2*N,3); %matrix for image, we expand it
    
    %disperse cell by one spot
    i = mod(double(cells)-1,N)+1; %row, REMEMBER THAT CELLS ARE UINTs!
    j = ceil(double(cells)/N); %column
    cells = (i*2-1)+(2*j-1)*2*N;
   
    %add cells to the top right in odd rows
    cells = [cells reshape(bsxfun(@plus,cells(odd),[-1; 2*N-1; 2*N]),1,[])];
    cellsIsStem = [cellsIsStem reshape(repmat(cellsIsStem(odd),3,1),1,[])];
    cellsPmax = [cellsPmax reshape(repmat(cellsPmax(odd),3,1),1,[])];
    
    %add cells to top left in even rows
    even = [~odd false(1,3*sum(odd))];
    cells = [cells reshape(bsxfun(@plus,cells(even),[-1; -2*N-1; -2*N]),1,[])];
    cellsIsStem = [cellsIsStem reshape(repmat(cellsIsStem(even),3,1),1,[])];
    cellsPmax = [cellsPmax reshape(repmat(cellsPmax(even),3,1),1,[])];
    
    %plotting
    color = hot(3*pmax);
    M(cells(~cellsIsStem)) = color(cellsPmax(~cellsIsStem)+1,1);
    M(cells(~cellsIsStem)+4*N*N) = color(cellsPmax(~cellsIsStem)+1,2);
    M(cells(~cellsIsStem)+8*N*N) = color(cellsPmax(~cellsIsStem)+1,3);

    CSCs = cells(cellsIsStem);
    M(CSCs) = color(2*pmax,1);
    M(CSCs+4*N*N) = color(2*pmax,2);
    M(CSCs+8*N*N) = color(2*pmax,3);

    figure(1)
    clf
    imshow(M);
    
end

Finally, simulating the tumor with exactly the same parameter settings on the hexagonal lattice results in a much more circular tumor.

hexagonal