# Practical sessions

Here we’ll gather the material and tasks for the daily sessions. Solutions to the tasks are found here

### Some tips for frequent problems

**The order** in which you create and manipulate the `system` and `simulation` matters! Remember that the workflow should always be

```
system = readdy.ReactionDiffusionSystem(...)
# ... system configuration
simulation = system.simulation(...)
# ... simulation setup
simulation.run(...)
# ... analysis of results
```

If you made a mistake while registering species, potentials, etc., just create a new `system` and run the configuration again. The same goes for the `simulation`: just create a new one. In the jupyter notebooks it is sufficient to just run all the cells again.

**Simulation box** and box potential for confining particles

If you want to confine particles to some cube without periodic boundaries, the box potential must be fully **inside** the simulation box. Remember that the box is centered around `(0,0,0)`. For example, the following will confine `A` particles to a cube of edge length 10

```
system = readdy.ReactionDiffusionSystem(
    box_size=[12., 12., 12.], unit_system=None,
    periodic_boundary_conditions=[False, False, False])
system.add_species("A", 1.)
origin = np.array([-5., -5., -5.])
extent = np.array([10., 10., 10.])
system.potentials.add_box(
    "A", force_constant=10.,
    origin=origin, extent=extent)
```

The two vectors `origin` and `extent` span a cube in 3D space: `origin` is the lower corner of the cube and `origin + extent` is the opposite corner.

**Initial placement** of particles inside a certain volume

If you have already defined `origin` and `extent` of your confining box potential, it is easy to generate random positions uniformly in this volume

```
n_particles = 30
uniform = np.random.uniform(size=(n_particles,3))
init_pos = uniform * extent + origin
```

Here `uniform` is an `Nx3` matrix where each row is a vector in the unit cube $[0,1]\times[0,1]\times[0,1]$. This is multiplied with `extent`, yielding a uniform cube $[0,\mathrm{extent}_0]\times[0,\mathrm{extent}_1]\times[0,\mathrm{extent}_2]$. If you add the `origin` to this, you get this cube at the right position (with respect to our box coordinates centered around `(0,0,0)`).
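A quick sanity check (a sketch, using the `origin` and `extent` from the box potential example above) confirms that all generated positions lie inside the confining volume:

```python
import numpy as np

origin = np.array([-5., -5., -5.])
extent = np.array([10., 10., 10.])

n_particles = 30
uniform = np.random.uniform(size=(n_particles, 3))
init_pos = uniform * extent + origin

# every coordinate must lie between origin and origin + extent
assert np.all(init_pos >= origin)
assert np.all(init_pos <= origin + extent)
```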

**2D plane**

If you want to confine particles to a 2D plane, just use the box potential but make the `extent` in one dimension very small, i.e.

```
system = readdy.ReactionDiffusionSystem(
    box_size=[12., 12., 3.], unit_system=None,
    periodic_boundary_conditions=[False, False, False])
system.add_species("A", 1.)
origin = np.array([-5., -5., -0.01])
extent = np.array([10., 10., 0.02])
system.potentials.add_box(
    "A", force_constant=10.,
    origin=origin, extent=extent)
```

Having defined `origin` and `extent`, it is now easy to add particles to this 2D plane. **Note** that I also made the `box_size` in z direction smaller; it must still be large enough to fully contain the box potential.

**Output file size**

Please make use of your `/storage/mi/user` directories; your `home` will fill up quicker.

```
data_dir = "/storage/mi/user" # replace user with your username
simulation.output_file = os.path.join(data_dir, "myfile.h5")
```

Additionally, use a `stride` on your observables, e.g.

```
simulation.record_trajectory(stride=100)
# or
simulation.observe.particle_positions(stride=100)
```

**Reaction descriptor language**

In expressions like

```
system.reactions.add("fus: A +(2) B -> C", rate=0.1)
```

the value in the parentheses `+(2)` is the reaction distance.

**Look at the VMD output if something is fishy**

This might reveal some obvious mistakes. For this to work, you must have registered the corresponding observable

```
simulation.record_trajectory(stride=100) # use appropriate stride
# ... run simulation
traj = readdy.Trajectory(simulation.output_file)
traj.convert_to_xyz(particle_radii={"A": 0.5, "B": 1.})
# particle_radii is optional here
```

Then in a bash shell do

```
vmd -e myfile.h5.xyz.tcl
```

or prefix the command with `!` in the jupyter notebook.

# Monday - ReaDDy intro

### Task 1) installation

Get miniconda and install it

```
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
```

If your `home` directory is limited in space, you might want to install it under `storage`, i.e. when prompted for the install location, choose either `/home/mi/user/miniconda3` or `/storage/mi/user/miniconda3` (replace `user` with your username). Your `~/.bashrc` should contain the line

```
. /storage/mi/user/miniconda3/etc/profile.d/conda.sh
```

to enable the `conda` command. If you try out `which conda`, you should see its function definition, and you’re good to go.

Create a conda environment `workshop`

```
conda create -n workshop
```

and activate it.

```
conda activate workshop
```

Add `conda-forge` as a default channel

```
conda config --add channels conda-forge --env
```

and install the latest `readdy` in it

```
(workshop) $ conda install -c readdy readdy
```

Check if it worked, start a python interpreter and do

```
import readdy
print(readdy.__version__)
```

If this does not return an error, you are ready to readdy (almost).
Make sure you also have `vmd` installed. It should already be installed on the university's machines.

### Task 2) ReaDDy introduction presentation

Follow along with the ReaDDy introduction. You should be able to reproduce the presented usage in your own ipython notebook.

### Task 3) Number of particles

Consider the fusion reaction

$$\mathrm{A} + \mathrm{A} \longrightarrow \mathrm{B}$$

The reaction distance should be $R=1$.

Set up a periodic box with dimensions $20\times20\times20$.

Add 1000 `A` particles with $D=1$; the diffusion constant of `B` should also be $D=1$. They should be uniformly distributed in the box, use

```
n_particles = 1000
origin = np.array([-10., -10., -10.])
extent = np.array([20., 20., 20.])
init_pos = np.random.uniform(size=(n_particles,3)) * extent + origin
simulation.add_particles("A", init_pos)
```

Observe the number of particles with a stride of 1. Additionally you can print the current number of particles using the callback functionality

```
simulation.observe.number_of_particles(
    1, ["A", "B"],
    callback=lambda x: print("A {}, B {}".format(x[0], x[1]))
)
```

Simulate the system for 10000 steps with time step size 0.01.

**3a)** Obtain the number of particles and plot them as a function of time.

The analytic solution for the concentration of particles subject to the reaction above is obtained by solving the ODE

$$\frac{\mathrm{d}a}{\mathrm{d}t} = -k\,a^2(t)$$

which yields

$$a(t) = \frac{a_0}{1 + a_0 k t}$$
**3b)** Fit the function $a(t)$ to your counts data to obtain the parameter $k$. For $a_0$ use `n_particles`. Make use of `scipy.optimize.curve_fit`. Can the function $a(t)$ describe the data well?
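A sketch of such a fit on synthetic counts, assuming the solution form $a(t)=a_0/(1+a_0 k t)$; replace the synthetic `counts` with the numbers from your observable (the value of `k` used to generate the data here is arbitrary):

```python
import numpy as np
import scipy.optimize as so

n_particles = 1000

def a_func(t, k):
    # assumed solution form a(t) = a0 / (1 + a0 * k * t), with a0 fixed to n_particles
    return n_particles / (1. + n_particles * k * t)

# synthetic counts standing in for your observed particle numbers
times = np.arange(0., 100., 0.01)
counts = a_func(times, 5e-4)

popt, pcov = so.curve_fit(a_func, times, counts, p0=[1e-3])
print("fitted k =", popt[0])
```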

**3c)** Repeat the procedures a) and b) but change the diffusion coefficient to $D=0.01$ and change the microscopic reaction rate to $\lambda=1$. What happened to your fit?

### Task 4) Crowded mixture, MSD

The time dependent mean squared displacement (MSD) is defined as

$$\mathrm{MSD}(t) = \left\langle \left(\mathbf{x}_t - \mathbf{x}_0\right)^2 \right\rangle_{\mathrm{particles}}$$

where $\mathbf{x}_t$ is the position of a particle at time $t$, and $\mathbf{x}_0$ is its initial position. The difference of these is squared, yielding a squared travelled distance. This quantity is then averaged over all particles.

The **task** is to set up a crowded mixture of particles and **measure the MSD**. There is one species A with diffusion coefficient 0.1. The simulation box shall be $20\times20\times20$ and non-periodic, i.e. define a box potential that keeps particles inside, e.g. with an extent $19\times19\times19$ and appropriate origin.

Define a harmonic repulsion between A particles with force constant 100 and interaction distance 2.

Add 1000 A particles to the simulation, uniformly distributed within the box potential.

Observe the positions of particles with a stride of 1.

Run the simulation with a time step size of 0.01 for 20000 steps.

In the post-processing you have to calculate the MSD from the observed positions. Implement the following steps:

- Convert the positions list into a numpy array of shape $T\times N\times3$
- From every particle's position subtract its initial position
- Square this
- Average over particles using `np.mean(...)`

Since the positions observable returns a list of lists, it might come in handy to convert it to a numpy array of shape $T\times N\times3$ (step 1). You may use the following brute-force method to do this

```
T = len(positions)
N = len(positions[0])
pos = np.zeros(shape=(T, N, 3))
for t in range(T):
    for n in range(N):
        pos[t, n, 0] = positions[t][n][0]
        pos[t, n, 1] = positions[t][n][1]
        pos[t, n, 2] = positions[t][n][2]
```

You shall implement steps 2.-4. by yourself.

**4a)** Finally plot the MSD as a function of time in a log-log plot, let’s call this the *crowded* result. (Hint: You may find that there is an initial jump in the squared displacements. Equilibrating the particle positions before starting the measurement may help you there.)

**4b)** Repeat the whole procedure, but do not register the harmonic repulsion between particles, this shall be the *free* result. Compare the MSD to the previous result, possibly in the same plot.

**4c)** Additionally plot the analytical solution for freely diffusing particles in 3D,

$$\mathrm{MSD}(t) = 6Dt$$

From your resulting plot identify “finite size saturation”, “subdiffusion”, “reduced normal diffusion”, and “ballistic/normal diffusion”.

### Task 5) Crowded mixture, RDF

The radial distribution function (RDF) $g(r)$ describes how likely it is to find two particles at distance $r$. Compare the RDF of harmonically repelling particles and the RDF of non-repelling particles.

Therefore set up the same system as in Task 4) but this time the system shall be periodic and there is no box potential.

You may want to equilibrate the initial particle positions.

Instead of observing the particle positions, observe the radial distribution function

```
simulation.observe.rdf(
    stride=1000, bin_borders=np.arange(0., 10., 0.2),
    types_count_from="A", types_count_to="A",
    particle_to_density=n_particles/system.box_volume)
```

In the post-processing, you shall use

```
_, bin_centers, distribution = traj.read_observable_rdf()
```

to obtain the observable. The `distribution` contains multiple $g(r)$, one for each time it was recorded. Average them over time.
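The time average is a one-liner along `axis=0`; a sketch with a dummy array standing in for the recorded `distribution`:

```python
import numpy as np

# dummy stand-in for the recorded distribution: 20 recordings, 50 bins
distribution = np.random.random(size=(20, 50))

# average over the time axis (axis=0), leaving one g(r) value per bin
mean_distribution = np.mean(distribution, axis=0)
assert mean_distribution.shape == (50,)
```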

**5a)** Plot the RDF as a function of the distance (i.e. the mean `distribution` as a function of `bin_centers`).

**5b)** Perform the same procedure but for non-interacting particles, compare with the previous result, preferably in the same plot. What are your observations?

# Tuesday - Lotka Volterra

This session will cover a well-studied reaction diffusion system, the Lotka-Volterra system, which describes predator-prey dynamics. We will investigate how parameters in this system affect spatiotemporal correlations.

### Task 1) well mixed

Simulate a Lotka-Volterra/predator-prey system for the given parameters and **determine the period of oscillation**.

There are two particle species `A` (prey) and `B` (predator) with the same diffusion coefficient $D=0.7$. Both particle species shall be confined to a square 2D plane with an edge length of roughly 100, and non-periodic boundaries, i.e. the 2D plane must be fully contained in the simulation box. Choose a very small thickness for the plane, e.g. 0.02, and a force constant of 50.

Particles of the **same** species interact via harmonic repulsion, with a force constant of 50, and an interaction distance of 2.

There are three reactions for the Lotka-Volterra system: `birth`, `eat`, and `decay`.

Initialize a system by randomly positioning 1000 particles of each species on the 2D plane. Run the simulation for 100,000 integration steps with a step size of 0.01. Observe the number of particles and the trajectory, with a stride of 100.

From the observable, plot the number of particles for the two species as a function of time in the same plot. Make sure to label the lines accordingly. Additionally plot the phase space trajectory, i.e. plot the number of predator particles against the number of prey particles.

**Question**: What is the period of oscillation?

### Task 2) diffusion influenced

We simulate the same system as in Task 1) but now introduce a scaling parameter $\ell$. This scaling parameter is used to control the ratio of reaction rates $\lambda$ and the rate of diffusion $D$.

For a given parameter $\ell$

- change all reaction rate constants $\tilde\lambda= \lambda\times\sqrt{\ell}$
- change all diffusion coefficients $\tilde D= D/\sqrt{\ell}$

where the tilde ($\tilde\,$) denotes the actually used value in the simulation and the value without tilde is the one from Task 1).

This means that the ratio

$$\frac{\tilde\lambda}{\tilde D} = \ell\,\frac{\lambda}{D}$$

is controlled by $\ell$.
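The rescaling can be sketched as follows; the unscaled rate and diffusion values below are placeholders, not the actual Task 1) parameters:

```python
import numpy as np

l = 25.  # scaling parameter

# hypothetical unscaled values (replace with your Task 1 parameters)
rate_eat = 0.01
D = 0.7

# values actually used in the simulation
rate_eat_scaled = rate_eat * np.sqrt(l)
D_scaled = D / np.sqrt(l)

# the ratio rate/D grows linearly with l
assert np.isclose(rate_eat_scaled / D_scaled, (rate_eat / D) * l)
```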

Perform the same simulation as in Task 1) but for different scaling parameters $\ell$, each time saving the resulting plots (time series and phase plot) to a file, using `plt.savefig(...)`. Run each simulation for $n$ integration steps, where

Vary the scaling parameter from 1 to 400.

**Question**: For which $\ell$ do you see a qualitative change of behavior in the system, both from looking at the plots and the VMD visualization? Which cases are reaction-limited and which cases are diffusion-limited?

### Task 3)

Starting from the parameters of a diffusion influenced system from Task 2), set up a simulation where prey particles form a traveling wavefront, closely followed by predators. To that end, you might want to change the simulation box and box potential parameters to get a 2D rectangular plane.

Feel free to adjust all parameters. You can experiment with other spatial patterns as well, have a look at this paper.

# Wednesday - Self assembly

This session will cover macromolecules and how they assemble into superstructures

### Task 1) polymer and radius of gyration

In this task we will calculate the radius of gyration of a polymer as a function of the stiffness of the polymer.

We will only need one particle species `monomer` with diffusion constant $D=0.1$. Note that this will not be a *normal* species but a **topology species**:

```
system.add_topology_species("monomer", 0.1)
```

The **simulation box** shall have `box_size=[32.,32.,32.]` and be **non-periodic**. This means that there has to be a **box potential** registered for the type `monomer` with force constant 50, `origin=np.array([-15,-15,-15])` and `extent=np.array([30,30,30])`.

Between monomers there should be a harmonic repulsion potential with `force_constant=50` and `interaction_distance=1`.

In order to build a polymer we use topologies. At first we need to register a type of topology

```
system.topologies.add_type("polymer")
```

The monomers in a polymer must be held together by harmonic bonds, defined by pairs of particle types

```
system.topologies.configure_harmonic_bond(
    "monomer", "monomer", force_constant=100., length=1.)
```

For this example we also specify an angular potential, that is defined for a triplet of particle types

```
system.topologies.configure_harmonic_angle(
    "monomer", "monomer", "monomer", force_constant=stiffness,
    equilibrium_angle=np.pi)
```

where an `equilibrium_angle` of $\pi$ means 180 degrees.
The next step is creating the **simulation**. Then we specify the observables, the trajectory and the particle positions

```
simulation.record_trajectory(stride=10000)
simulation.observe.particle_positions(stride=1000)
```

Now we need to set the initial positions of the particles and set up the polymer connectivity. To this end, we perform a 3D random walk that generates the initial positions of the chain of particles

```
n_particles = 50
init_pos = [np.zeros(3)]
for i in range(1, n_particles):
    displacement = np.random.normal(size=(3))
    # normalize the random vector
    displacement /= np.sqrt(np.sum(displacement * displacement))
    # append the new position to the chain
    init_pos.append(init_pos[i-1] + displacement)
init_pos = np.array(init_pos)
```

Now that we have generated the positions we have to add them to the simulation together with the actual topology instance

```
top = sim.add_topology("polymer", len(init_pos)*["monomer"], init_pos)
```

The last step is to define the connectivity

```
for i in range(1, n_particles):
    top.get_graph().add_edge(i-1, i)
```

Now that we have defined the simulation object we can run the simulation

```
if os.path.exists(sim.output_file):
    os.remove(sim.output_file)
sim.run(1000000, 2e-2)
```

**1a)** Implement the simulation described above for `stiffness=0.1`. From the recorded trajectory, have a look at the VMD output. Does the structure of the polymer change in the initial phase?

**1b)** Do the same simulation as above, but now calculate the radius of gyration as a function of time. Given the positions observable, you can calculate the radius of gyration as follows (the assertion statements may help you understand how the arrays are shaped):

```
times, positions = traj.read_observable_particle_positions()
# convert to numpy array
T = len(positions)
N = len(positions[0])
pos = np.zeros(shape=(T, N, 3))
for t in range(T):
    for n in range(N):
        pos[t, n, 0] = positions[t][n][0]
        pos[t, n, 1] = positions[t][n][1]
        pos[t, n, 2] = positions[t][n][2]
# calculate radius of gyration
mean_pos = np.mean(pos, axis=1)
difference = np.zeros_like(pos)
for i in range(n_particles):
    difference[:, i] = pos[:, i] - mean_pos
assert difference.shape == (T, N, 3)
# square and sum over coordinates (axis=2)
squared_radius_g = np.sum(difference * difference, axis=2)
assert squared_radius_g.shape == (T, N)
# average over particles (axis=1)
squared_radius_g = np.mean(squared_radius_g, axis=1)
radius_g = np.sqrt(squared_radius_g)
assert radius_g.shape == times.shape == (T,)
```

Plot the radius of gyration as a function of time; is it equilibrated? Calculate its time average and standard deviation over the whole timeseries.

**1c)** Put all of the above into one function `simulate(stiffness)` that performs the whole simulation and analysis for a given stiffness parameter and returns the mean (one number only, not an array) and standard deviation of the radius of gyration. The function body should roughly consist of

```
def simulate(stiffness):
    system = readdy.ReactionDiffusionSystem(...)
    ...  # system configuration
    simulation = system.simulation(...)
    simulation.output_file = os.path.join(data_dir, "polymer_" + str(stiffness) + ".h5")
    ...  # simulation setup
    simulation.run(...)
    ...  # calculation of radius of gyration
    return mean_radius_of_gyration, deviation_radius_of_gyration
```

Measure the mean radius of gyration with standard deviation as a function of the stiffness parameter for the values `[0.1, 0.5, 1.0, 5.0, 10.]`. This might look like:

```
values = [0.1, 0.5, 1., 5., 10.]
rgs = []
devs = []
for stiffness in values:
    rg, dev = simulate(stiffness)
    rgs.append(rg)
    devs.append(dev)
```

Plot the mean radius of gyration as a function of the stiffness. Explain your result by looking also at the VMD output.
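For this plot, `plt.errorbar` is convenient; a sketch with dummy numbers in place of the `simulate()` results:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt

values = [0.1, 0.5, 1., 5., 10.]
rgs = [2.1, 2.6, 3.2, 5.0, 6.1]   # dummy means, replace with simulate() output
devs = [0.2, 0.2, 0.3, 0.4, 0.4]  # dummy standard deviations

plt.errorbar(values, rgs, yerr=devs, marker="o")
plt.xscale("log")
plt.xlabel("Stiffness")
plt.ylabel("Mean radius of gyration")
plt.savefig("rg_vs_stiffness.png")
```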

### Task 2) linear filament assembly

In this task we will also look at a linear polymer chain, but this time we will not place it initially, but instead we will let it self-assemble from monomers in solution. The polymer that we will build will have the following structure

```
head--core--(...)--core--tail
```

where `(...)` means that there can be many core particles, but the structure is always linear.

The simulation box is **periodic** with `box_size=[20., 20., 20.]`.

We define three **topology particle species** and one normal particle species

```
system.add_species("substrate", 0.1)
system.add_topology_species("head", 0.1)
system.add_topology_species("core", 0.1)
system.add_topology_species("tail", 0.1)
```

We also define one topology type

```
system.topologies.add_type("filament")
```

There should be the following potentials for topology particles

```
system.topologies.configure_harmonic_bond(
    "head", "core", force_constant=100, length=1.)
system.topologies.configure_harmonic_bond(
    "core", "core", force_constant=100, length=1.)
system.topologies.configure_harmonic_bond(
    "core", "tail", force_constant=100, length=1.)
```

Like in task 1), the polymer should be stiff; we can compactly write this for all triplets of particle types:

```
triplets = [
    ("head", "core", "core"),
    ("core", "core", "core"),
    ("core", "core", "tail"),
    ("head", "core", "tail")
]
for (t1, t2, t3) in triplets:
    system.topologies.configure_harmonic_angle(
        t1, t2, t3, force_constant=50.,
        equilibrium_angle=np.pi)
```

We now introduce a topology reaction. Topology reactions allow changes to the graph of a topology in the form of a reaction. Here we will use the following definition.

```
system.topologies.add_spatial_reaction(
    "attach: filament(head) + (substrate) -> filament(core--head)",
    rate=5.0, radius=1.5
)
```

Using the documentation, familiarize yourself with what this means. Do not hesitate to ask, since topology reactions can be quite a tricky concept!

Next create a simulation object. We want to observe the following

```
simulation.record_trajectory(stride=100)
simulation.observe.topologies(stride=100)
```

Add one filament topology to the simulation

```
init_top_pos = np.array([
    [ 1., 0., 0.],
    [ 0., 0., 0.],
    [-1., 0., 0.]
])
top = simulation.add_topology(
    "filament", ["head", "core", "tail"], init_top_pos)
top.get_graph().add_edge(0, 1)
top.get_graph().add_edge(1, 2)
```

Additionally we need substrate particles that can attach themselves to the filament

```
n_substrate = 300
origin = np.array([-10., -10., -10.])
extent = np.array([20., 20., 20.])
init_pos = np.random.uniform(size=(n_substrate,3)) * extent + origin
simulation.add_particles("substrate", positions=init_pos)
```

Then, run the simulation

```
if os.path.exists(simulation.output_file):
    os.remove(simulation.output_file)
dt = 5e-3
simulation.run(400000, dt)
```

One important observable will be the length of the filament as a function of time. It can be obtained from the trajectory as follows:

```
times, topology_records = traj.read_observable_topologies()
chain_length = [ len(tops[0].particles) for tops in topology_records ]
```

The last line is a list comprehension. `tops` is the list of topologies for a given time step. Hence, `tops[0]` is the first (and in this case, the only) topology in the system. `tops[0].particles` is a list of particles belonging to this topology. Thus, its length yields the length of the filament.

**2a)** Have a look at the VMD output and describe what happens. Additionally, plot the length of the filament as a function of time. **Note** that you shall now plot against the simulation time, not the time step indices, i.e. do the following

```
times = np.array(times) * dt
```

where `dt` is the time step size.

**2b)** Using your data of the filament length, fit a function of the form

$$f(t) = a\,\left(1 - e^{-bt}\right) + 3$$

to your data. You should use `scipy.optimize.curve_fit` to do so

```
import scipy.optimize as so

def func(t, a, b):
    return a * (1. - np.exp(-b * t)) + 3.

popt, pcov = so.curve_fit(func, times, chain_length)
print("popt", popt)
print("pcov", pcov)
f = lambda t: func(t, popt[0], popt[1])
plt.plot(times, chain_length, label="data")
plt.plot(times, f(times), label=r"fit $f(t)=a(1-e^{-bt})+3$")
plt.xlabel("Time")
plt.ylabel("Filament length")
plt.legend()
plt.show()
```

**Question:** Given the result of the fitting parameters

- How large is the equilibration rate?
- What will be the length of the filament for $t\to\infty$?

**2c)** We now introduce a disassembly reaction for the `tail` particle. This is done by adding the following to your system configuration.

```
def rate_function(topology):
    """
    if the topology is longer than the minimal (head, core, tail),
    the tail shall be removed with a fixed probability per time
    """
    vertices = topology.get_graph().get_vertices()
    if len(vertices) > 3:
        return 0.05
    else:
        return 0.

def reaction_function(topology):
    """
    find the tail and remove it,
    and make the adjacent core particle the new tail
    """
    recipe = readdy.StructuralReactionRecipe(topology)
    vertices = topology.get_graph().get_vertices()
    tail_idx = None
    adjacent_core_idx = None
    for v in vertices:
        if topology.particle_type_of_vertex(v) == "tail":
            adjacent_core_idx = v.neighbors()[0].get().particle_index
            tail_idx = v.particle_index
    recipe.separate_vertex(tail_idx)
    recipe.change_particle_type(tail_idx, "substrate")
    recipe.change_particle_type(adjacent_core_idx, "tail")
    return recipe

system.topologies.add_structural_reaction(
    "filament",
    reaction_function=reaction_function,
    rate_function=rate_function)
```

Familiarize yourself with this kind of structural topology reaction.

Repeat the same analysis as before, and also observe your VMD output.

- How large is the equilibration rate?
- What will be the length of the filament for $t\to\infty$?

# Thursday - Custom potentials

### Task 0) Implementing a C++ potential

Make sure you are within a conda environment that has `readdy` installed, say `workshop`.
Clone the template for a custom potential into a directory called `mypotential`

```
git clone https://github.com/chrisfroe/custom-potential-template.git mypotential
cd mypotential
git submodule init
git submodule update
```

This should give you the following directory structure

```
mypotential
├── binding.cpp
├── cmake
│ └── Modules
│ └── FindREADDY.cmake
├── CMakeLists.txt
├── run.py
├── setup.py
└── pybind11/
```

Implement the potential in `binding.cpp` and build it

```
(workshop) $ python setup.py develop
```

Make sure that it works by running the following short python script:

```
import numpy as np
import readdy
import mypot as mp
system = readdy.ReactionDiffusionSystem([10, 10, 10])
system.add_species("A", 0.001)
# use the potential you implemented
system.potentials.add_custom_external("A", mp.MyExternalPotential, 1., 1.)
#system.potentials.add_custom_pair("A", "A", mp.MyPairPotential, 1., 1., 1.)
sim = system.simulation(kernel="SingleCPU")
sim.add_particles("A", np.random.random((3000, 3)))
sim.run(1000, .1)
```


### Task 1) External potential, double well

**1a)** Consider the **external** potential

that only depends on the x-coordinate, $a,b>0$.

Plot this potential in the range $[-2b,+2b]$ for $a=4$ and $b=5$. How will particles behave in this potential?

**1b)** Implement $U$ as `MyExternalPotential`. The corresponding force on a particle is then given by

$$F_x = -\frac{\partial U}{\partial x}$$

Note that the force only has an x-component; the y and z components are 0. Hints for the `binding.cpp`:

- The `MyExternalPotential` C++ class needs two parameters, `forceConst` ($a$) and `distance` ($b$), implemented as private members
- These are given to the class as constructor arguments
- The `PYBIND11_MODULE` section also needs to accommodate the two arguments

```
// binding.cpp
class MyExternalPotential : public readdy::model::potentials::PotentialOrder1 {
public:
    explicit MyExternalPotential(
            ParticleType ptype,
            readdy::scalar forceConst,
            readdy::scalar distance)
            : PotentialOrder1(ptype),
              forceConst(forceConst),
              distance(distance) {}
    // ...
private:
    readdy::scalar forceConst;
    readdy::scalar distance;
};

PYBIND11_MODULE (mypot, m) {
    py::module::import("readdy");
    py::class_<MyExternalPotential, readdy::model::potentials::PotentialOrder1>(m, "MyExternalPotential")
        .def(py::init<ParticleType, readdy::scalar, readdy::scalar>());
}
```

The `// ...` denotes the parts you have to fill in.

Once you are done implementing it, you have to compile it

```
(workshop) $ python setup.py develop
```

If this returns no errors the potential has been built, and the corresponding module was made available to your `(workshop)` environment.

**1c)** You can now go to your jupyter notebook and set up a system. The simulation box will have `box_size=[30, 30, 30]` and will be **periodic**. There will be one species `A` with `diffusion_constant=0.1`. The only potential in the system will be the one you implemented:

```
system.potentials.add_custom_external("A", mp.MyExternalPotential, 4., 5.)
```

The last two arguments are forwarded to the C++ constructor, i.e. they are `forceConst` and `distance`.

Simulate the system with 1000 particles initially placed at the origin `(0,0,0)`

```
simulation.add_particles("A", np.zeros((1000,3)))
```

Observe the trajectory and the particle positions

```
simulation.record_trajectory(100)
simulation.observe.particle_positions(100)
```

Run the simulation for 100000 steps and a timestep of $\tau=0.05$.

Have a look at the VMD output, do your observations match your expectations?

**1d)** Make a histogram of all x-positions. To do so, use the positions observable in the following way

```
times, positions = traj.read_observable_particle_positions()
# convert to numpy array
T = len(positions)
N = len(positions[0])
pos = np.zeros(shape=(T, N, 3))
for t in range(T):
    for n in range(N):
        pos[t, n, 0] = positions[t][n][0]
        pos[t, n, 1] = positions[t][n][1]
        pos[t, n, 2] = positions[t][n][2]
# only look at x
xs = pos[:, :, 0].flatten()
# make the histogram
hist, bin_edges = np.histogram(
    xs, bins=np.arange(-15., 15., 0.5), density=True)
# from the bin_edges, calculate the bin_centers
bin_centers = []
for i in range(len(bin_edges) - 1):
    bin_centers.append(0.5 * (bin_edges[i] + bin_edges[i+1]))
bin_centers = np.array(bin_centers)
```
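The bin-center loop at the end can also be written as a single vectorized expression:

```python
import numpy as np

bin_edges = np.arange(-15., 15., 0.5)

# midpoint of each pair of adjacent edges
bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
assert len(bin_centers) == len(bin_edges) - 1
```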

Plot the histogram `hist` as a function of the x-coordinates `bin_centers`. What is the ratio of the probability of being in the transition state $x=0$ over the probability of being in one of the ground states $x=b$?

**1e)** In statistical mechanics, the above histogram describes a probability distribution $\rho(x)$. When a system is closed, isolated and coupled to a heat bath with temperature $T$, one can assume that the distribution is related to the potential $U$ in the following way:

$$\rho(x) = \frac{1}{Z}\, e^{-U(x)/k_BT}$$

where $Z$ is a normalization constant and $k_BT=1$ in our case. Using this assumption, try to recover your potential, i.e. solve the above equation for $U$, and calculate it as a function of $x$, given your histogram.
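A sketch of the inversion with $k_BT=1$: take the negative logarithm of the density and fix the unknown additive constant, e.g. by shifting the minimum to zero. The quartic double well below is only a stand-in for your actual potential:

```python
import numpy as np

# stand-in potential and its Boltzmann distribution on a grid
xs = np.linspace(-10., 10., 200)
U_true = 4. * ((xs / 5.)**2 - 1.)**2  # hypothetical double well
rho = np.exp(-U_true)
rho /= np.sum(rho) * (xs[1] - xs[0])  # normalize like a density histogram

# invert: U = -ln(rho) + const; fix the constant by shifting the minimum to 0
U_est = -np.log(rho)
U_est -= np.min(U_est)
```

Up to the additive constant, `U_est` reproduces `U_true` exactly here; with a sampled histogram, statistical noise and rarely visited bins (where `rho` is tiny or zero) are the main sources of error.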

Compare this result for $U$ against the true potential you have implemented. What could be sources of error here?

### Task 2) Pair potential, nuclear waffles

Have a look at this paper. There are two species N and P interacting via the pair potentials $V_{NN}$, $V_{PP}$ and $V_{NP}$, with the following constants

Symbol | Value
---|---
$a$ | $110\,\mathrm{MeV}$
$b$ | $-26\,\mathrm{MeV}$
$c$ | $24\,\mathrm{MeV}$
$\Lambda$ | $1.25\,\mathrm{fm}^2$
$\lambda$ | $10\,\mathrm{fm}$
$\alpha$ | $1.439836\times 10^2\,\mathrm{eV\,fm}$ (changed on purpose)

The latter part of $V_{PP}$ is a screened electrostatic potential already available in ReaDDy, see here.

The gaussian potentials, however, are not implemented in ReaDDy. This is your task.

**2a)** Implement the **pair** potential

$$U(r) = A\, e^{-r^2/L} \quad\text{for } r < r_0, \qquad U(r) = 0 \text{ otherwise}$$

as `MyPairPotential`. $A$ is the `prefactor` regulating the strength and the sign of the interaction ($A>0$ repulsive, $A<0$ attractive). $L$ is the `variance`, which determines the width of the potential. $r_0$ is the `cutoff`. Use these names also in your `binding.cpp` for the constructor arguments.

You might want to have a look at what this potential actually looks like.

**Note:** From the potential $U(r)$ you first have to derive the analytical expression for the force $F(\mathbf{x}_j - \mathbf{x}_i)$ acting between two particles located at $\mathbf{x}_i$ and $\mathbf{x}_j$. The following identities may help you to calculate the force:

$$r = \|\mathbf{x}_j - \mathbf{x}_i\|, \qquad \frac{\partial r}{\partial \mathbf{x}_j} = \frac{\mathbf{x}_j - \mathbf{x}_i}{r}, \qquad \mathbf{F}_j = -\frac{\partial U}{\partial \mathbf{x}_j}$$
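As a plausibility check for your derivation (a sketch, assuming the Gaussian form $U(r)=A\,e^{-r^2/L}$ with hypothetical parameter values), the analytic force on particle $j$ can be compared against a central finite difference of $U$:

```python
import numpy as np

A, L = -26., 2.5  # hypothetical prefactor and variance

def U(xi, xj):
    r2 = np.sum((xj - xi)**2)
    return A * np.exp(-r2 / L)

def force_on_j(xi, xj):
    # analytic force F_j = -dU/dx_j = (2A/L) * exp(-r^2/L) * (x_j - x_i)
    r2 = np.sum((xj - xi)**2)
    return (2. * A / L) * np.exp(-r2 / L) * (xj - xi)

# compare against a central finite difference of U
xi = np.array([0.1, -0.2, 0.3])
xj = np.array([1.0, 0.5, -0.4])
h = 1e-6
num = np.zeros(3)
for d in range(3):
    e = np.zeros(3)
    e[d] = h
    num[d] = -(U(xi, xj + e) - U(xi, xj - e)) / (2. * h)

assert np.allclose(num, force_on_j(xi, xj), atol=1e-5)
```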

When you are done implementing, compile it

```
(workshop) $ python setup.py develop
```

**2b)**
Now make use of your pair potential in a jupyter notebook.

Consider a **periodic** system with `box_size=[200., 200., 8.]`. There should be two species `N` and `P`, standing for neutrons and protons, both with `diffusion_constant=0.1`.

There shall be a 2D plane, which extends across the whole simulation box in x and y.

```
# box, keeping particles on the 2d plane
system.potentials.add_box(
    particle_type="N", force_constant=100.,
    origin=[-110., -110., -0.01], extent=[220., 220., 0.02])
system.potentials.add_box(
    particle_type="P", force_constant=100.,
    origin=[-110., -110., -0.01], extent=[220., 220., 0.02])
```

Now configure the custom potentials, i.e. $V_{NN}$, $V_{PP}$ and $V_{NP}$ mentioned above, using the following constants

```
a = 110.
b = -26.
c = 24.
variance = 1.25
l = 10.
cutoff = 3.5
alpha = 1.439e2
proton_fraction = 0.4
total_n = 1357 * 4
n_p = int(total_n * proton_fraction)
n_n = int(total_n * (1. - proton_fraction))
```

The potentials then read

```
# the custom potentials
system.potentials.add_custom_pair(
    "N", "P", mp.MyPairPotential, a, variance, cutoff)
system.potentials.add_custom_pair(
    "N", "P", mp.MyPairPotential, b-c, 2.*variance, cutoff)
system.potentials.add_custom_pair(
    "N", "N", mp.MyPairPotential, a, variance, cutoff)
system.potentials.add_custom_pair(
    "N", "N", mp.MyPairPotential, b+c, 2.*variance, cutoff)
system.potentials.add_custom_pair(
    "P", "P", mp.MyPairPotential, a, variance, cutoff)
system.potentials.add_custom_pair(
    "P", "P", mp.MyPairPotential, b-c, 2.*variance, cutoff)
system.potentials.add_screened_electrostatics(
    "P", "P", electrostatic_strength=alpha, inverse_screening_depth=1/l,
    repulsion_strength=0., repulsion_distance=1., exponent=1, cutoff=2.*cutoff
)
```

Compare this with $V_{NN}$, $V_{PP}$ and $V_{NP}$ and validate that I did not make an error here.

Next is the simulation: use the **CPU kernel**. Observe the **trajectory** with a **stride** of 10, and the radial distribution function for the separate pairs

```
sim.observe.rdf(
    100, bin_borders=np.arange(0., 25., 0.1),
    types_count_from=["P"], types_count_to=["P"],
    particle_to_density=float(n_p)*3./200./200.,
    save={"name": "PP", "chunk_size": 100}
)
sim.observe.rdf(
    100, bin_borders=np.arange(0., 25., 0.1),
    types_count_from=["N"], types_count_to=["N"],
    particle_to_density=float(n_n)*3./200./200.,
    save={"name": "NN", "chunk_size": 100}
)
sim.observe.rdf(
    100, bin_borders=np.arange(0., 25., 0.1),
    types_count_from=["N"], types_count_to=["P"],
    particle_to_density=float(n_p)*3./200./200.,
    save={"name": "NP", "chunk_size": 100}
)
```

Add particles to the system. There shall be a certain amount of protons; `n_n` and `n_p` were defined above.

```
origin = np.array([-100., -100., -0.01])
extent = np.array([200., 200., 0.02])
init_pos = np.random.uniform(size=(n_p,3)) * extent + origin
sim.add_particles("P", init_pos)
init_pos = np.random.uniform(size=(n_n,3)) * extent + origin
sim.add_particles("N", init_pos)
```

Finally, run the simulation for 20000 steps with a time step size of 0.05.

**2c)** Obtain the radial distribution functions that were saved under different names. Average them over the time axis (`np.mean(...)`). Plot them all in the same plot. **Note:** Since we were calculating the RDF on a 2D plane, we have to correct for this factor. This means plot the distributions as follows

```
# note the additional " * bin_centers "
plt.plot(bin_centers, mean_distribution * bin_centers)
```

- How long-ranged are position correlations in this system, in absolute lengths?
- How many peaks are in the RDF of `PP`?
- What is the apparent collision radius of the `P` particles, only considering their `PP` correlation?
- What is the apparent collision radius of the `N` particles, only considering their `NN` correlations?

**2d)** Also observe your VMD output. Can you see waffles? What are the differences between how Schneider *et al.* simulated the system in their paper and what we did? Is ReaDDy an appropriate tool for such a system?