
In this tutorial, I will show you how to Dockerize a Django app that uses Celery & Redis.

Many of my projects are worked on by numerous developers with different machines and operating systems, so Docker is a sure-fire way of eliminating the "it doesn't work on my machine" problem.

Note: There is a full tutorial video available to set up this Django project from scratch.

Don't want to read this tutorial? Okay, fine. I have put this video together for you.

99 - Did Coding Video


Also, the code for this tutorial can be found in this GitHub repository. Just clone the did_django_schedule_jobs_v2 repo into your local development directory using the docker branch:


git clone --branch docker <repository-url>

Okay, let's jump into it...


This tutorial will focus on the following:

  1. Cloning the GitHub repository and setting everything up
  2. Configuring Docker
  3. Testing


I will be using Gmail to send user confirmation emails. You will need a Google app password. Don't worry, I have you covered! Watch this video to set up a Google app password.

Also, you will need to download and install the latest versions of Docker and Docker Compose on your machine.


Firstly, let's get our code.

Navigate to your development directory and create a new project directory. Open Visual Studio Code in your new directory.

The easiest way to do this is to click into the directory address bar in File Explorer and type 'cmd'.

This will open a new Command Prompt (cmd). To open Visual Studio Code you can use the following command in cmd:

mkdir did_django_schedule_jobs_v2 && cd did_django_schedule_jobs_v2
code .

This will open Visual Studio Code in the new directory.

Open a new Git Bash terminal in Visual Studio Code and run the following commands:

git clone --branch docker <repository-url> .
cp .env.template .env

Remove the following variables from the newly created .env file. We don't need them anymore:


Docker setup:

Let's create a Dockerfile and an entrypoint file.

Open a new terminal in Visual Studio Code and use the following command:

cd backend && echo > Dockerfile && echo > entrypoint.sh

Add the following code to the newly created Dockerfile:

FROM python:3.9

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

RUN apt-get update && apt-get -y install netcat && apt-get -y install gettext

RUN mkdir /code
WORKDIR /code
COPY . /code/

RUN pip install --upgrade pip
RUN pip install -r /code/requirements/Base.txt

RUN chmod +x /code/entrypoint.sh
RUN touch /code/logs/celery.log
RUN chmod +x /code/logs/celery.log
ENTRYPOINT ["/code/entrypoint.sh"]

Add the following code to the newly created entrypoint file:


#!/bin/sh

python manage.py flush --no-input
python manage.py makemigrations
python manage.py migrate

exec "$@"

So what have we done here?

We started off by creating a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.

  • We begin with the official python:3.9 Docker image.
  • We set some environment variables:
    1. PYTHONDONTWRITEBYTECODE: This stops Python from writing .pyc files into our container, which is a good optimization.
    2. PYTHONUNBUFFERED: This stops Python from buffering its output, so logs appear in Docker in real time.
  • We install the necessary system dependencies (netcat and gettext).
  • We then make a new directory called 'code' and set this as the working directory. The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
  • We update pip and install the project requirements.
  • We then add the correct permissions to our celery.log and entrypoint files. An ENTRYPOINT allows you to configure a container that will run as an executable. In our case, we flush and then migrate the database before handing control to the container's main command.
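Incidentally, the netcat (nc) tool installed in the Dockerfile is commonly used to make the entrypoint wait until PostgreSQL is actually accepting connections before the migration commands run. A sketch of such a check, assuming the database service is named db and listens on the default Postgres port; this is a fragment you could place in the entrypoint file above the migration commands, not a standalone script:

```
# Wait until the 'db' service accepts TCP connections on port 5432
# before running any database commands.
while ! nc -z db 5432; do
  sleep 0.1
done
echo "PostgreSQL started"
```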

We can now go ahead and create a docker-compose file. Add the following to your terminal:

cd ..
echo > docker-compose.yml

Add the following code to the newly created docker-compose file:

version: '3.8'

services:
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=did_django_schedule_jobs_v2_docker
      - POSTGRES_PASSWORD=did_django_schedule_jobs_v2_Password
      - POSTGRES_DB=did_django_schedule_jobs_v2_dev
    container_name: did_django_schedule_jobs_v2_db

  app:
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./backend/:/usr/src/backend/
    ports:
      - 8000:8000
    env_file:
      - ./.env
    depends_on:
      - db
    container_name: did_django_schedule_jobs_v2_django_app

  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"
    container_name: did_django_schedule_jobs_v2_redis

  celery_worker:
    restart: always
    build:
      context: ./backend
    command: celery -A did_django_schedule_jobs_v2 worker --loglevel=info --logfile=logs/celery.log
    volumes:
      - ./backend:/usr/src/backend
    env_file:
      - ./.env
    depends_on:
      - db
      - redis
      - app
    container_name: did_django_schedule_jobs_v2_celery_worker

  celery_beat:
    build: ./backend
    command: celery -A did_django_schedule_jobs_v2 beat -l info
    volumes:
      - ./backend:/usr/src/backend
    env_file:
      - ./.env
    depends_on:
      - db
      - redis
      - app
    container_name: did_django_schedule_jobs_v2_celery_beat

  flower:
    build:
      context: ./backend
    command: celery -A did_django_schedule_jobs_v2 flower --broker=redis://host.docker.internal:6379//
    ports:
      - 5555:5555
    env_file:
      - ./.env
    depends_on:
      - db
      - app
      - redis
      - celery_worker
    container_name: did_django_schedule_jobs_v2_flower

volumes:
  postgres_data:

So what have we done here?

Docker Compose is a tool for defining and running multi-container Docker applications. Our docker-compose.yml file defines the configuration for the application's services. Then, with a single command, we can create and start all of the services defined in our docker-compose file.

You will notice that we are now using Flower to handle the monitoring and administration of Celery. We will need to add this dependency to the requirements.

Whilst we are at it, we should also add the PostgreSQL requirements.

Open backend/requirements/Base.txt
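For example, the file would gain lines along these lines (unpinned here for illustration; pin the versions your project needs, and note that psycopg2-binary is one common choice of PostgreSQL driver):

```
flower
psycopg2-binary
```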


You will also notice that we need to create a new directory and log file to capture the Celery logs.

Use the following command to create them:

cd backend && mkdir logs && cd logs && echo This is our celery log > celery.log && cd ../..

You now need to add some environment variables.

Open the .env file and change the following variables to match your own.

Note: Follow this tutorial to get your app password

DONOT_REPLY_EMAIL = **Add email**
GOOGLE_APP_PASSWORD = **Add app password**

We now need to add database credentials for PostgreSQL.

Whilst still in the .env file, add the following variables.
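As an illustration (the variable names below are assumptions; use whatever names your settings.py reads, and make sure the values match the db service in docker-compose.yml, assuming that service is named db):

```
POSTGRES_USER=did_django_schedule_jobs_v2_docker
POSTGRES_PASSWORD=did_django_schedule_jobs_v2_Password
POSTGRES_DB=did_django_schedule_jobs_v2_dev
SQL_HOST=db
SQL_PORT=5432
```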


We should now be able to fire up the image and container. This can be done with a single command:

docker-compose -f docker-compose.yml up -d --build

Note: If you experience a line-ending error in Docker, you will need to change the line endings in the Docker files from "CRLF" to "LF"
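One quick way to convert the line endings is with sed (a sketch; dos2unix also works where installed). The demo below creates a throwaway CRLF file to show the effect; run the same sed command on your own Dockerfile and entrypoint file:

```shell
# Create a demo script with Windows (CRLF) line endings.
printf '#!/bin/sh\r\necho ok\r\n' > demo.sh

# Strip the trailing carriage returns in place, converting CRLF to LF.
sed -i 's/\r$//' demo.sh

# The script now runs cleanly under a Linux shell.
sh demo.sh
```

For the tutorial files the equivalent would be something like `sed -i 's/\r$//' backend/Dockerfile backend/entrypoint.sh`.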

You should now be able to visit your working app at http://localhost:8000 in a web browser.

Great! Now we need to set up a superuser.

You can access the Docker container to run Python and Django scripts with the following code:

docker exec -it did_django_schedule_jobs_v2_django_app bash

Now create a superuser with the following:

python manage.py createsuperuser

Note: Use your own username and password.

Now log into the admin page on your web browser.

Finished :) Let's check that our project is running correctly. Visit http://localhost:8000 in your web browser.


We have now Dockerized our Django project and it works perfectly. The new project is a Dockerized Django app that uses Celery, Celery Beat, Redis and Flower to handle scheduled jobs and logging.


Did Coding

At Did Coding, we produce easy-to-follow coding tutorials on our YouTube channel.
Please get in touch if you would like to find out more...
