# 16a. Stochastic (Gillespie) simulation

```
[1]:
```

```
# Colab setup ------------------
import os, sys, subprocess
if "google.colab" in sys.modules:
    cmd = "pip install --upgrade biocircuits multiprocess watermark"
    process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
# ------------------------------

try:
    import multiprocess
except ImportError:
    import multiprocessing as multiprocess

import tqdm

import numpy as np
import scipy.stats as st

import numba

import biocircuits

# Plotting modules
import iqplot

import bokeh.io
import bokeh.layouts
import bokeh.plotting

bokeh.io.output_notebook()
```

## Sampling out of probability distributions

Sampling out of a distribution involves using a random number generator to *simulate* the **story** that generates the distribution. So, if you know the story and have a computer handy, you can draw samples to build up a picture of your distribution, even if you do not have its analytical form.

Let’s demonstrate this with the Binomial distribution. We know that if we flip a coin that has probability \(p\) of landing heads, the number of heads in \(n\) flips is Binomially distributed. Imagine for a moment that we did not know that, and instead set about sampling using the story. We will take \(n = 25\) and \(p = 0.25\) and approximate \(P(h \mid n, p)\), the probability of getting \(h\) heads in \(n\) flips, each with probability \(p\) of landing heads. We will draw 30, 100, 1000, and 10,000 samples by simulating the story of the Binomial distribution and plot the result obtained by sampling along with the expected Binomial distribution.

```
[2]:
```

```
def simulate_coinflips(n, p, size=1):
    """
    Simulate `size` sets of n coin flips with prob. p of heads.
    """
    n_heads = np.empty(size, dtype=np.int64)
    for i in range(size):
        n_heads[i] = np.sum(np.random.random(size=n) < p)
    return n_heads


size = (30, 100, 1000, 10000)
n = 25
p = 0.25

h_plot = np.arange(26)
theor_dist = st.binom.pmf(h_plot, n, p)

plots = []
for n_samp in size:
    plot = bokeh.plotting.figure(
        frame_height=200,
        frame_width=300,
        x_axis_label="h",
        y_axis_label="P(h)",
        title=f"{n_samp} samples",
    )
    h = simulate_coinflips(n, p, size=n_samp)

    plot.circle(h_plot, theor_dist)
    plot.segment(x0=h_plot, x1=h_plot, y0=0, y1=theor_dist)
    plot.circle(
        np.arange(h.max() + 1), np.bincount(h) / n_samp, color="tomato",
    )

    plots.append(plot)

bokeh.io.show(bokeh.layouts.gridplot(plots, ncols=2))
```

As we can see, by *sampling* out of the probability distribution, we can approximate the actual distribution. If we sample enough, the approximation is very good.

Sampling is such a powerful strategy that highly efficient algorithms with convenient APIs have been developed to sample out of named probability distributions. For example, we could have used `np.random.binomial()` as a drop-in (and *much* more efficient) replacement for the `simulate_coinflips()` function above.
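As a quick demonstration of that drop-in replacement (the sample size here is chosen arbitrarily):

```python
import numpy as np

# Draw 10,000 Binomial samples directly, with n = 25 and p = 0.25
rng_samples = np.random.binomial(25, 0.25, size=10000)

# The sample mean should be close to n * p = 6.25
print(rng_samples.mean())
```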

## Sampling out of distributions defined by master equations

We will use the same strategy for solving master equations. We will find a way to *sample* out of the distribution that is governed by the master equation. This technique was pioneered by Dan Gillespie in the late 1970s. For that reason, these sampling techniques are often called **Gillespie simulations**. The algorithm is sometimes referred to as a **stochastic simulation algorithm**, or SSA.

### Example: Unregulated gene expression

Here we will explore how this algorithm works by looking at simple production of a protein. We will include the production of mRNA to demonstrate how this is done with a master equation that has more than one species.

For simple protein production, we have the following reactions.

\begin{align} \text{DNA} \rightarrow \text{mRNA} \rightarrow \text{protein} \end{align}

#### Macroscale equations

As we have seen before, the deterministic dynamics, which describe mean concentrations over a large population of cells, are described by the ODEs

\begin{align} \frac{\mathrm{d}m}{\mathrm{d}t} &= \beta_m - \gamma_m m, \\[1em] \frac{\mathrm{d}p}{\mathrm{d}t} &= \beta_p m - \gamma_p p. \end{align}

The same equations should hold if \(m\) and \(p\) represent the mean *numbers* of molecules; we would just have to appropriately rescale the constants. Assuming that \(m\) and \(p\) are now numbers (so we are not free to pick their units), we can use \(\gamma_m\) to nondimensionalize time. This leads to a redefinition of parameters and variables,

\begin{align} &\beta_m V/\gamma_m \to \beta_m, \\[1em] &\beta_p/\gamma_m \to \beta_p, \\[1em] &\gamma_m t \to t, \end{align}

where \(V\) is the entire volume of the system of interest. The dimensionless equations are

\begin{align} \frac{\mathrm{d}m}{\mathrm{d}t} &= \beta_m - m, \\[1em] \frac{\mathrm{d}p}{\mathrm{d}t} &= \beta_p m - \gamma p, \end{align}

with \(\gamma = \gamma_p/\gamma_m\).
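For later comparison with the stochastic trajectories, the dimensionless ODEs can be integrated numerically. This is a sketch using `scipy.integrate.odeint()` with the parameter values we will use in the simulations below (\(\beta_m = \beta_p = 10\), \(\gamma = 0.4\)); the function name `macroscale_rhs` is ours.

```python
import numpy as np
import scipy.integrate

def macroscale_rhs(mp, t, beta_m, beta_p, gamma):
    """Right-hand side of the dimensionless ODEs for m and p."""
    m, p = mp
    return np.array([beta_m - m, beta_p * m - gamma * p])

beta_m, beta_p, gamma = 10.0, 10.0, 0.4
t = np.linspace(0.0, 50.0, 200)
mp = scipy.integrate.odeint(
    macroscale_rhs, np.array([0.0, 0.0]), t, args=(beta_m, beta_p, gamma)
)

# Steady state: m → beta_m = 10, p → beta_m * beta_p / gamma = 250
print(mp[-1])
```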

#### The master equation

We can write a master equation for these dynamics. In this case, each state is defined by an mRNA copy number \(m\) and a protein copy number \(p\). So, we will write a master equation for \(P(m, p; t)\).

\begin{align} \frac{\mathrm{d}P(m,p;t)}{\mathrm{d}t} &= \beta_m P(m-1,p;t) + (m+1)P(m+1,p;t) - \beta_m P(m,p;t) - mP(m,p;t) \nonumber \\[1em] &+\beta_p mP(m,p-1;t) + \gamma (p+1)P(m,p+1;t) - \beta_p mP(m,p;t) - \gamma p P(m,p;t). \end{align}

We implicitly define \(P(m, p; t) = 0\) if \(m < 0\) or \(p < 0\). This is the master equation we will sample from using the stochastic simulation algorithm (SSA), also known as the Gillespie algorithm.

### The Gillespie algorithm

The Gillespie algorithm, also called a stochastic simulation algorithm (SSA), is a way to sample the story behind a master equation, as will become clear momentarily as we work through the algorithm and introduce some terminology.

The transition probabilities are also called **propensities** in the context of stochastic simulation. The propensity for a given transition, say indexed \(i\), is denoted as \(a_i\). The equivalence to notation we introduced for master equations is that if transition \(i\) results in the change of state from \(n'\) to \(n\), then \(a_i = W(n\mid n')\).

To cast this problem for a Gillespie simulation, we can write each change of state (moving either the copy number of mRNA or protein up or down by 1 in this case) and their respective propensities.

\begin{align} \begin{array}{ll} \text{reaction, }r_i & \text{propensity, } a_i \\ m \rightarrow m+1,\;\;\;\; & \beta_m \\[0.3em] m \rightarrow m-1, \;\;\;\; & m\\[0.3em] p \rightarrow p+1, \;\;\;\; & \beta_p m \\[0.3em] p \rightarrow p-1, \;\;\;\; & \gamma p. \end{array} \end{align}

Note that specifying the reactions and their respective propensities has the same information as specifying the master equation itself.

We will not carefully prove that the Gillespie algorithm samples from the probability distribution governed by the master equation, but will state the principles behind it. The basic idea is that events (such as those outlined above) are rare, discrete, separate events. I.e., each event is an arrival of a Poisson process. The Gillespie algorithm starts with some state, \((m_0,p_0)\). Then a state change, *any* state change, will happen in some time \(\Delta t\) that has a certain
probability distribution (which we will show is Exponential momentarily). The probability that the state change that happens is reaction \(j\) is proportional to \(a_j\). That is to say, state changes with high propensities are more likely to occur. Thus, choosing which of the \(n\) state changes happens in \(\Delta t\) is a matter of drawing an integer \(j\) in \([1,n]\) where the probability of drawing \(j\) is

\begin{align} \frac{a_j}{\sum_i a_i}. \end{align}

Now, how do we determine how long the state change took? The probability density function for the time \(t\) at which a *given* state change \(i\) takes place is

\begin{align} P(t\mid a_i) = a_i\, \mathrm{e}^{-a_i t}, \end{align}

since the time it takes for arrival of a Poisson process is Exponentially distributed. The probability that it has *not* arrived in time \(\Delta t\) is the probability that the arrival time is greater than \(\Delta t\), given by the complementary cumulative distribution function for the Exponential distribution.

\begin{align} P(t > \Delta t\mid a_i) = \int_{\Delta t}^\infty \mathrm{d}t\,P(t\mid a_i) = \mathrm{e}^{-a_i \Delta t}. \end{align}

Now, say we have \(n\) processes that arrive at times \(t_1, t_2, \ldots\). The probability that *none* of them arrives before \(\Delta t\) is

\begin{align} P(t_1 > \Delta t, t_2 > \Delta t, \ldots) &= P(t_1 > \Delta t) P(t_2 > \Delta t) \cdots = \prod_i \mathrm{e}^{-a_i \Delta t} = \mathrm{exp}\left[-\Delta t \sum_i a_i\right]. \end{align}

This is the same as the probability of a single Poisson process with \(a = \sum_i a_i\) not arriving before \(\Delta t\). So, the probability that it *does* arrive in \(\Delta t\) is Exponentially distributed with mean \(\left(\sum_i a_i\right)^{-1}\).
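We can verify this property numerically (a quick sanity check with arbitrarily chosen rates, not part of the original derivation): the minimum of Exponential arrival times with rates \(a_i\) is Exponentially distributed with rate \(\sum_i a_i\), and process \(j\) arrives first with probability \(a_j/\sum_i a_i\).

```python
import numpy as np

rates = np.array([1.0, 2.0, 3.0])  # example propensities a_i
n_samples = 100000

# Arrival time of each Poisson process; the first state change is the minimum
arrivals = np.random.exponential(1.0 / rates, size=(n_samples, len(rates)))
first_event = arrivals.min(axis=1)

# Mean waiting time approaches 1 / sum(rates) = 1/6
print(first_event.mean())

# Fraction of first arrivals due to the last process approaches 3/6 = 0.5
frac_first = (arrivals.argmin(axis=1) == 2).mean()
print(frac_first)
```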

So, we know how to choose a state change and we also know how long it takes. The Gillespie algorithm then proceeds as follows.

1. Choose an initial condition, e.g., \(m = p = 0\).
2. Calculate the propensity for each of the enumerated state changes. The propensities may be functions of \(m\) and \(p\), so they need to be recalculated for every \(m\) and \(p\) we encounter.
3. Choose how much time the reaction will take by drawing out of an Exponential distribution with a mean equal to \(\left(\sum_i a_i\right)^{-1}\). This means that a change arises from a Poisson process.
4. Choose what state change will happen by drawing a sample out of the discrete distribution where \(P_i = \left.a_i\middle/\left(\sum_i a_i\right)\right.\). In other words, the probability that a state change will be chosen is proportional to its propensity.
5. Increment time by the time step you chose in step 3.
6. Update the states according to the state change you chose in step 4.
7. If \(t\) is less than your pre-determined stopping time, go to step 2. Else stop.

Gillespie proved that this algorithm samples the probability distribution described by the master equation in his seminal papers in 1976 and 1977. (We recommend reading the latter.) You can also read a concise discussion of how the algorithm samples the master equation in section 4.2 of Del Vecchio and Murray.

### Coding up a Gillespie simulation

To code up the Gillespie simulation of the simple gene expression example, we first make an array that gives the changes in the counts of \(m\) and \(p\) for each of the four reactions. This is a way of encoding the updates in the particle counts that we get from choosing the respective state changes.

```
[3]:
```

```
# Column 0 is change in m, column 1 is change in p
simple_update = np.array(
    [
        [1, 0],   # Make mRNA transcript
        [-1, 0],  # Degrade mRNA
        [0, 1],   # Make protein
        [0, -1],  # Degrade protein
    ],
    dtype=np.int64,
)
```

Next, we make a function that updates the array of propensities for each of the four reactions. We update the propensities (which are passed into the function as an argument) instead of instantiating them and returning them, to save on memory allocation while running the code. This has the added benefit that it forces you to keep track of the indices corresponding to the update matrix, which helps prevent bugs. The propensities are naturally a function of the current population of molecules. They may in general also be a function of time, so we explicitly allow for time dependence (even though we will not use it in this simple example) as well.

```
[4]:
```

```
def simple_propensity(propensities, population, t, beta_m, beta_p, gamma):
    """Updates an array of propensities given a set of parameters
    and an array of populations.
    """
    # Unpack population
    m, p = population

    # Update propensities
    propensities[0] = beta_m      # Make mRNA transcript
    propensities[1] = m           # Degrade mRNA
    propensities[2] = beta_p * m  # Make protein
    propensities[3] = gamma * p   # Degrade protein
```
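As a quick usage example (with an arbitrarily chosen state; the function is repeated here so the snippet is self-contained), updating the propensities in place for \(m = 2\), \(p = 5\):

```python
import numpy as np

def simple_propensity(propensities, population, t, beta_m, beta_p, gamma):
    """Propensity updater, as defined above."""
    m, p = population
    propensities[0] = beta_m      # Make mRNA transcript
    propensities[1] = m           # Degrade mRNA
    propensities[2] = beta_p * m  # Make protein
    propensities[3] = gamma * p   # Degrade protein

# With m = 2, p = 5, and (beta_m, beta_p, gamma) = (10, 10, 0.4)
propensities = np.zeros(4)
simple_propensity(propensities, np.array([2, 5]), 0.0, 10.0, 10.0, 0.4)
print(propensities)  # [10.  2. 20.  2.]
```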

### Making a draw

Finally, we write a general function that draws a choice of reaction and the time interval for that reaction. This is the heart of the Gillespie algorithm, so we will take some time to discuss speed. First, to get the time interval, we sample a random number from an Exponential distribution with mean \(\left(\sum_i a_i\right)^{-1}\). This is easily done using the `np.random.exponential()` function.

Next, we have to select which reaction will take place. This amounts to drawing a sample over the discrete distribution where \(P_i = a_i\left(\sum_i a_i\right)^{-1}\), or the probability of each reaction is proportional to its propensity. This can be done using `scipy.stats.rv_discrete`, which allows specification of an arbitrary discrete distribution. We will write a function to do this.

```
[5]:
```

```
def sample_discrete_scipy(probs):
    """Randomly sample an index with probability given by probs."""
    return st.rv_discrete(values=(range(len(probs)), probs)).rvs()
```

This is a nice one-liner, but is it fast? There may be significant overhead in setting up the `scipy.stats` discrete random variable object to sample from each time. Remember, we can’t just do this once, because the array `probs` changes with each step in the SSA as the propensities change. We will therefore write a less elegant, but maybe faster, way of doing it.

Another way to sample the distribution is to generate a uniformly distributed random number \(q\), with \(0 < q < 1\) and return the value \(j\) such that

\begin{align} \sum_{i=0}^{j-1} p_i < q < \sum_{i=0}^{j}p_i. \end{align}

We’ll code this up.

```
[6]:
```

```
def sample_discrete(probs):
    """Randomly sample an index with probability given by probs."""
    # Generate random number
    q = np.random.rand()

    # Find index
    i = 0
    p_sum = 0.0
    while p_sum < q:
        p_sum += probs[i]
        i += 1
    return i - 1
```

Now let’s compare the speeds using the `%timeit` magic function. This is a useful tool to help diagnose slow spots in your code.

```
[7]:
```

```
# Make dummy probs
probs = np.array([0.1, 0.3, 0.4, 0.05, 0.15])
print('Result from scipy.stats:')
%timeit sample_discrete_scipy(probs)
print('\nResult from hand-coded method:')
%timeit sample_discrete(probs)
```

```
Result from scipy.stats:
587 µs ± 18.4 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Result from hand-coded method:
1.05 µs ± 5.89 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
```

Wow! The less concise method is a couple of orders of magnitude faster! So, we will ditch `scipy.stats` and use our hand-built sampler instead. (You can find a much more thorough discussion of speed considerations in Gillespie simulations in Technical Appendix 16b.)
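As an aside, another fast approach (a sketch, not used in what follows) replaces the while loop with `np.searchsorted()` on the cumulative sum of the probabilities. The hand-coded loop is plenty fast for the handful of reactions here, but this form can be convenient when there are many reactions.

```python
import numpy as np

def sample_discrete_searchsorted(probs):
    """Randomly sample an index with probability given by probs."""
    q = np.random.rand()
    return np.searchsorted(np.cumsum(probs), q)

# Empirical check against the target distribution
probs = np.array([0.1, 0.3, 0.4, 0.05, 0.15])
draws = np.array([sample_discrete_searchsorted(probs) for _ in range(100000)])
print(np.bincount(draws, minlength=5) / len(draws))
```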

Now we can write a function to do our draws.

```
[8]:
```

```
def gillespie_draw(propensity_func, propensities, population, t, args=()):
    """
    Draws a reaction and the time it took to do that reaction.

    Parameters
    ----------
    propensity_func : function
        Function with call signature
        propensity_func(propensities, population, t, *args)
        that updates the array of propensities in place.
    propensities : ndarray
        Propensities for each reaction as a 1D Numpy array.
    population : ndarray
        Current population of particles
    t : float
        Value of the current time.
    args : tuple, default ()
        Arguments to be passed to `propensity_func`.

    Returns
    -------
    rxn : int
        Index of reaction that occurred.
    time : float
        Time it took for the reaction to occur.
    """
    # Compute propensities
    propensity_func(propensities, population, t, *args)

    # Sum of propensities
    props_sum = propensities.sum()

    # Compute next time
    time = np.random.exponential(1.0 / props_sum)

    # Compute discrete probabilities of each reaction
    rxn_probs = propensities / props_sum

    # Draw reaction from this distribution
    rxn = sample_discrete(rxn_probs)

    return rxn, time
```

### Gillespie time stepping

Now we are ready to write our main loop. We will only keep the counts at pre-specified time points. This saves on RAM, and we really only care about the values at given time points anyhow.

Note that this function is generic. All we need to specify our system is the following.

- A function to compute the propensities
- How the updates for a given reaction are made
- An initial population

Additionally, we specify necessary parameters, an initial condition, and the time points at which we want to store our samples. So, providing the propensity function and update matrix is analogous to providing the time derivatives when using `scipy.integrate.odeint()`.

```
[9]:
```

```
def gillespie_ssa(propensity_func, update, population_0, time_points, args=()):
    """
    Uses the Gillespie stochastic simulation algorithm to sample
    from probability distribution of particle counts over time.

    Parameters
    ----------
    propensity_func : function
        Function with call signature
        propensity_func(propensities, population, t, *args)
        that updates the array of propensities for each reaction
        in place, given the current population of particle counts.
    update : ndarray, shape (num_reactions, num_chemical_species)
        Entry i, j gives the change in particle counts of species j
        for chemical reaction i.
    population_0 : array_like, shape (num_chemical_species)
        Array of initial populations of all chemical species.
    time_points : array_like, shape (num_time_points,)
        Array of points in time for which to sample the probability
        distribution.
    args : tuple, default ()
        The set of parameters to be passed to propensity_func.

    Returns
    -------
    sample : ndarray, shape (num_time_points, num_chemical_species)
        Entry i, j is the count of chemical species j at time
        time_points[i].
    """
    # Initialize output
    pop_out = np.empty((len(time_points), update.shape[1]), dtype=np.int64)

    # Initialize and perform simulation
    i_time = 1
    i = 0
    t = time_points[0]
    population = population_0.copy()
    pop_out[0, :] = population
    propensities = np.zeros(update.shape[0])
    while i < len(time_points):
        while t < time_points[i_time]:
            # Draw the event and time step
            event, dt = gillespie_draw(
                propensity_func, propensities, population, t, args
            )

            # Update the population
            population_previous = population.copy()
            population += update[event, :]

            # Increment time
            t += dt

        # Update the index to the first time point not yet passed
        i = np.searchsorted(time_points > t, True)

        # Update the population
        pop_out[i_time : min(i, len(time_points))] = population_previous

        # Increment index
        i_time = i

    return pop_out
```

### Running and parsing results

We can now run a set of SSA simulations and plot the results. We will run 100 trajectories and store them, using \(\beta_p = \beta_m = 10\) and \(\gamma = 0.4\). We will also use the nifty package `tqdm` to give a progress bar so we know how long it is taking.

```
[10]:
```

```
# Specify parameters for calculation
args = (10.0, 10.0, 0.4)
time_points = np.linspace(0, 50, 101)
population_0 = np.array([0, 0], dtype=int)
size = 100

# Seed random number generator for reproducibility
np.random.seed(42)

# Initialize output array
samples = np.empty((size, len(time_points), 2), dtype=np.int64)

# Run the calculations
for i in tqdm.tqdm(range(size)):
    samples[i, :, :] = gillespie_ssa(
        simple_propensity, simple_update, population_0, time_points, args=args
    )
```

```
100%|████████████████████████████████████████████████████████████████████████████| 100/100 [00:19<00:00, 5.17it/s]
```

We now have our samples, so we can plot the trajectories. For visualization, we will plot every trajectory as a thin blue line, and then the average of the trajectories as a thick orange line.

```
[11]:
```

```
# Set up plots
plots = [
    bokeh.plotting.figure(
        frame_width=300,
        frame_height=200,
        x_axis_label="dimensionless time",
        y_axis_label="number of mRNAs",
    ),
    bokeh.plotting.figure(
        frame_width=300,
        frame_height=200,
        x_axis_label="dimensionless time",
        y_axis_label="number of proteins",
    ),
]

# Plot trajectories and mean
for i in [0, 1]:
    for x in samples[:, :, i]:
        plots[i].line(
            time_points, x, line_width=0.3, alpha=0.2, line_join="bevel"
        )
    plots[i].line(
        time_points,
        samples[:, :, i].mean(axis=0),
        line_width=6,
        color="orange",
        line_join="bevel",
    )

# Link axes
plots[0].x_range = plots[1].x_range

bokeh.io.show(bokeh.layouts.gridplot(plots, ncols=2))
```

We can also compute the steady state properties by considering the end of the simulation. The last 50 time points are at steady state, so we will average over them.

```
[12]:
```

```
print("mRNA mean copy number =", samples[:, -50:, 0].mean())
print("protein mean copy number =", samples[:, -50:, 1].mean())
print("\nmRNA variance =", samples[:, -50:, 0].std() ** 2)
print("protein variance =", samples[:, -50:, 1].std() ** 2)
print("\nmRNA noise =", samples[:, -50:, 0].std() / samples[:, -50:, 0].mean())
print(
    "protein noise =", samples[:, -50:, 1].std() / samples[:, -50:, 1].mean()
)
```

```
mRNA mean copy number = 10.0748
protein mean copy number = 251.6994
mRNA variance = 10.26200496
protein variance = 2145.33703964
mRNA noise = 0.317965262817717
protein noise = 0.18402023679851773
```
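As a rough check (our addition, not original output), these estimates can be compared with theoretical steady-state moments: the mRNA distribution is Poisson, so its mean and variance both equal \(\beta_m\); the protein mean is \(\beta_m \beta_p / \gamma\); and the protein Fano factor (variance over mean) for this circuit is \(1 + \beta_p/(1 + \gamma)\), a standard moment-analysis result that we take as an assumption here.

```python
# Theoretical steady state moments for the parameters used above
beta_m, beta_p, gamma = 10.0, 10.0, 0.4

mrna_mean = beta_m                      # Poisson: mean = variance = 10
protein_mean = beta_m * beta_p / gamma  # 250
protein_var = protein_mean * (1 + beta_p / (1 + gamma))

print(mrna_mean, protein_mean, protein_var)
```

These are in reasonable agreement with the simulated estimates above; exact agreement is not expected from 100 finite trajectories.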

Finally, we can compute the steady state probability distributions. To plot them, we plot the empirical cumulative distribution function (ECDF) from the sampling. The theoretical distribution for mRNA is Poisson, which is overlaid in orange.

```
[13]:
```

```
# mRNA ECDF
p_m = iqplot.ecdf(
    samples[:, -50:, 0].flatten(),
    frame_width=300,
    frame_height=200,
    x_axis_label="mRNA copy number",
    style="staircase",
)

# Theoretical mRNA CDF (Poisson)
p_m.circle(
    np.arange(25), st.poisson.cdf(np.arange(25), args[0]), color="orange"
)

# protein ECDF
p_p = iqplot.ecdf(
    samples[:, -50:, 1].flatten(),
    frame_width=300,
    frame_height=200,
    x_axis_label="protein copy number",
    style="staircase",
)

bokeh.io.show(bokeh.layouts.gridplot([p_m, p_p], ncols=2))
```