With the help of Docker and Docker-compose it is possible to run CATMAID without much manual setup. With Docker alone, CATMAID will be available as a local demo, but no added data is persisted across restarts. With Docker-compose, however, it is possible to keep added data. In both variants, a superuser is created by default with the username “admin” and the password “admin”.

CATMAID demo with Docker

If you want to try CATMAID before performing a complete installation, a Docker image is available containing a running basic CATMAID installation. Docker is a system for distributing programs, dependencies, and system configuration in containers that work like lightweight virtual machines.

After installing Docker, download and run the CATMAID image:

docker run -p 8000:80 --name catmaid catmaid/catmaid-standalone

Navigate your browser to http://localhost:8000 and you should see the CATMAID landing page. You can log in as a superuser with username “admin” and password “admin”. The Docker image contains a few example CATMAID projects and stacks, but you can add your own through the admin page.


Make sure you change the default password of the admin user.


Any users, projects, stacks or annotations you add to the running Docker container will by default be lost when you next run it. To save these changes, you must commit them with docker. However, this is not a best practice for using Docker, and we currently do not recommend the CATMAID Docker image for production use.
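If you nevertheless want to snapshot the state of the demo container, a commit could look like the following (assuming the container was started with --name catmaid; the tag name is only an example):

```shell
# Save the current state of the demo container as a new local image.
docker commit catmaid catmaid/catmaid-standalone:snapshot

# Later, start a container from the snapshot instead of the upstream image.
docker run -p 8000:80 --name catmaid-snapshot catmaid/catmaid-standalone:snapshot
```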

Persistence with Docker compose

Using Docker-compose is an alternative to the demo mode described above. With Docker-compose, the database, the webserver and CATMAID run in separate containers. The database container stores the database outside of the container, so it is preserved across restarts. To run this setup, first install Docker-compose:

sudo sh -c "curl -L`uname -s`-`uname -m` > /usr/local/bin/docker-compose"
sudo chmod +x /usr/local/bin/docker-compose
sudo sh -c "curl -L > /etc/bash_completion.d/docker-compose"
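To verify that the installation worked, the version can be printed:

```shell
docker-compose --version
```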

Next, clone the catmaid-docker repo to a convenient location. Note that by default the database will be stored in this location, too:

git clone
cd catmaid-docker

The database (and static files) will be saved outside of the containers in the folder volumes. This makes it possible to optionally replace this folder with a symlink to a different location for the database.

Run containers:

docker-compose up

Navigate your browser to http://localhost:8000 and you should see the CATMAID landing page. You can log in as a superuser with username “admin” and password “admin”. The Docker image contains a few example projects, which are added by default. To disable these, set CM_EXAMPLE_PROJECTS=false in the environment section of the app service (in docker-compose.yaml) before starting the containers for the first time. This is also the place where database details can be configured.

Additionally, the environment option CM_IMPORTED_SKELETON_FILE_MAXIMUM_SIZE can be used to set the maximum allowed import file size in bytes.
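As a sketch, the corresponding environment entries in docker-compose.yaml could look like this (the service name app is taken from the paragraph above; the size value is only an example):

```yaml
services:
  app:
    environment:
      - CM_EXAMPLE_PROJECTS=false
      # Maximum skeleton import size in bytes (example: 10 MiB).
      - CM_IMPORTED_SKELETON_FILE_MAXIMUM_SIZE=10485760
```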


Make sure you change the default password of the admin user.

Start on boot

This is easiest done with systemd. Create a new service file, e.g. /etc/systemd/system/catmaid.service:


[Unit]
Description=CATMAID docker-compose application
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker-compose -f /home/catmaid/catmaid/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /home/catmaid/catmaid/docker-compose.yml stop

[Install]
WantedBy=multi-user.target


This still requires manual rebuilds during updates.
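With such a unit file in place, the service can be enabled to start on boot (assuming it was saved as catmaid.service):

```shell
sudo systemctl daemon-reload
sudo systemctl enable catmaid.service
sudo systemctl start catmaid.service
```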

Updating docker images

Docker images are not updated automatically. Which images are currently locally available can be checked with:

docker images

Which containers are currently running can be seen with:

docker ps

Depending on whether a standalone docker image or a docker-compose setup is used, updating is done slightly differently.

Standalone docker

If you want to persist changes from the currently running container, you can export the database first:

docker exec -u postgres catmaid /usr/bin/pg_dumpall --clean -U postgres > backup.pgsql

And if you want to make sure you can go back to the old version, you can commit a new Docker image with the current state:

docker commit catmaid catmaid:old

Before updating the images, make sure to stop the containers using docker stop catmaid (if you didn’t use --name with docker run, use the container ID instead of “catmaid”).

First update the CATMAID base image:

docker pull catmaid/catmaid

Then, to update catmaid-standalone (regular Docker) use:

docker pull catmaid/catmaid-standalone

If no previous state should be persisted, the docker container can be started normally again:

docker run -p 8000:80 --name catmaid catmaid/catmaid-standalone

If, however, you want to start the new container from a previously saved database dump, set the DB_FIXTURE variable to true and pipe the backup file to the docker run command:

cat backup.pgsql | docker run -p 8000:80 -i -e DB_FIXTURE=true --name catmaid catmaid/catmaid-standalone

The database will then be initialized with the data from the pg_dumpall dump in the file backup.pgsql, created above. The Docker image will automatically apply all missing database migrations.

Docker-compose


Before updating the docker images, the database should be backed up. The easiest way to do this, while also being able to quickly restore in case something goes wrong, is to perform a file-based copy of the volumes folder after stopping the database. To stop the database, call the following three commands from the catmaid-docker directory (containing the docker-compose.yml file):

PG_STOP_CMD='export PGCTL=$(which pg_ctl); su postgres -c "${PGCTL} stop"'
docker exec -i -t catmaid-docker_db_1 /bin/bash -c "${PG_STOP_CMD}"
docker-compose stop

And then copy the complete volumes folder:

sudo cp -r volumes volumes.backup

Next update your local copy of the docker-compose repository:

git pull origin master

Then update your docker images:

docker-compose pull

Finally the docker containers have to be built and started again:

docker-compose up --build

In case a newly pulled docker image introduces a new Postgres version, CATMAID’s docker-compose start-up script will detect this and abort the container execution with a warning. The warning says that an automatic update of the data files can be performed, but only if DB_UPDATE=true is set in the docker-compose.yml file. If you don’t see such a warning, the update should be successful. If you do see this warning, a few additional steps are required: first, DB_UPDATE=true has to be added as an environment variable of the db app in the docker-compose.yml file. The docker-compose setup then needs to be rebuilt and started:

docker-compose up --build

After a successful upgrade, the DB_UPDATE variable should be set to false again, so the data files are not accidentally upgraded without a back-up having been made.
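As a sketch, the relevant fragment of docker-compose.yml might look like this (the service name db follows the default catmaid-docker setup; remember to set the value back to false after the upgrade):

```yaml
services:
  db:
    environment:
      - DB_UPDATE=true
```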

Notes on shared memory in Docker

Due to the low default shared memory limit in Docker containers (64MB), bigger instances might run into an error similar to this:

Traceback (most recent call last):
psycopg2.OperationalError: could not resize shared memory segment
"/PostgreSQL.909036009" to 70019784 bytes: No space left on device

To fix this, the allowed shared memory (which Postgres makes heavy use of) can be increased. When running Docker directly, add the --shm-size=2g option to the docker run call. If docker-compose is in use, add shm_size: '2gb' to the build context:

     shm_size: '2gb'

To make more shared memory available, increase the 2gb value in the example.
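For the standalone image, the complete call from above could then look like this:

```shell
docker run -p 8000:80 --shm-size=2g --name catmaid catmaid/catmaid-standalone
```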

Parameterizing Docker containers

Both the standalone Docker container and the docker-compose setup can be parameterized with various options. Some of them have already been discussed above. Generally, Docker parameters are provided as environment variables. For the regular Docker setup this happens by adding -e KEY=VALUE parameters to the docker run call. For docker-compose, the respective entries have to be added to the docker-compose.yaml file. The available settings can broadly be categorized in infrastructure settings (database, webserver) and CATMAID settings.

The following infrastructure settings are available:

The database hostname. Default: localhost
The port the database is listening on. Default: 5432
The name of the CATMAID database. Default: catmaid
The user as whom to connect to the database. Default: catmaid_user
The password of the database user. Default: catmaid_password. Please change this!
The maximum number of allowed database connections. Default: 50
Whether the container should try to tune the database on initial startup. Default: true
Whether the next start of the container should include a database tuning update. Default: false
Whether or not to expect raw SQL as input on stdin. This can be piped directly to the database. Assuming there is a simple database dump with text SQL commands in the file backup.sql, the following command can be used to load it into the container database: cat backup.sql | docker run -i -e DB_FIXTURE=true --name catmaid catmaid/catmaid-standalone. Default: false.
The amount of memory the Docker instance should have available. This is the basis for tweaking some database parameters. By default, this is estimated automatically, but it can be overridden in terms of megabytes of memory, i.e. a value of 4096 means 4GB.

The following CATMAID settings are available. At the very least, the administration password should be changed to something more secure (CM_INITIAL_ADMIN_PASS).

The admin user created during initial setup. Default: admin
The initial password of the admin user defined in CM_INITIAL_ADMIN_USER. This should be changed to something more secure! Default: admin
The initial email address of the admin user defined in CM_INITIAL_ADMIN_USER. Default: admin@localhost.local
The first name of the admin user defined in CM_INITIAL_ADMIN_USER. Default: Super
The last name of the admin user defined in CM_INITIAL_ADMIN_USER. Default: User
Whether or not to run CATMAID in debug mode. Default: false
Whether or not to set up example projects. Default: true
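As an example, a standalone container could be started with a more secure admin password and without example projects like this (the password value is, of course, only a placeholder):

```shell
docker run -p 8000:80 \
  -e CM_INITIAL_ADMIN_PASS=change-me-now \
  -e CM_EXAMPLE_PROJECTS=false \
  --name catmaid catmaid/catmaid-standalone
```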

A set of project and stack definitions that the container will set up initially. The expected format is JSON as it is returned by the /projects/export API endpoint. This can be a multiline environment variable, but Docker is somewhat picky about how this is provided.

Consider the following JSON representation of a Drosophila larva L1 project, stored in the file larva-l1-project.json:

[{
  "project": {
    "title": "L1 CNS",
    "stacks": [{
      "title": "L1 CNS",
      "dimension": "(28128, 31840, 4841)",
      "mirrors": [{
        "fileextension": "jpg",
        "position": 3,
        "tile_source_type": 4,
        "tile_height": 512,
        "tile_width": 512,
        "title": "Example tiles",
        "url": ""
      }],
      "resolution": "(3.8,3.8,50)",
      "translation": "(0,0,6050)"
    }]
  }
}]

Its content can now be passed in the CM_INITIAL_PROJECTS environment variable as a docker run parameter like this:

-e CM_INITIAL_PROJECTS="$(cat larva-l1-project.json)"

Alternatively, such a JSON block can also be included directly in the command line call:

docker run … -e CM_INITIAL_PROJECTS='[{
  "project": {
  }
}]' -e …

The parameter string provided to the catmaid_import_projects management command, which is used by the importer to import the projects and stacks provided in CM_INITIAL_PROJECTS. This can, for instance, be used to give the anonymous user read permissions on the imported data:

CM_INITIAL_PROJECTS_IMPORT_PARAMS="--permission user:AnonymousUser:can_browse"

The maximum allowed file size for skeletons that are imported through the API into the container, in bytes.
The network interface in the container that the CATMAID application server should listen on. Default: (all interfaces)
The network port in the container that the CATMAID application server should listen on. Default: 8000
Whether the CATMAID configuration should be updated on container start. Normally, the settings are only updated on the initial container start. Default: false
Where CATMAID can expect to be able to write data. This can be useful to make this folder accessible through a Docker volume. Default: “/tmp”.
The maximum number of reconstruction nodes that should be loaded by a single field of view query. Default: 10000
How the back-end node providers should be configured. Default: ['postgis2d']
The subdirectory relative to the domain root that CATMAID is running in, e.g. “/catmaid”. By default, no subdirectory is used (“”).
Which servers to trust to bypass CSRF checks. None by default (“”). The format is expected to be a Python-like list, e.g. '[""]'.

A JSON string representing a set of client settings that are used as default instance level client settings. Already defined settings take precedence. By default no client settings are provided (“”).

This is an example that will set the neuron name rendering to prefer a name set by an annotation that is meta-annotated with “Neuron name”:

CLIENT_SETTINGS: '{"neuron-name-service": {"component_list": [{"id": "skeletonid", "name": "Skeleton ID"}, {"id": "neuronname", "name": "Neuron name"}, {"id": "all-meta", "name": "All annotations annotated with \"neuron name\"", "option": "neuron name"}]}}'
Normally, the above client settings are only used if there is none already defined for a user. To enforce the use of the CM_CLIENT_SETTINGS settings, this can be set to true. Default: false
The timezone this server runs in. By default, CATMAID tries to guess it; otherwise, see the list of TZ database time zone names for valid values.