In this tutorial, I will show you how to Dockerize a Django app that uses Celery & Redis.

Many of my projects are worked on by numerous developers with different machines and operating systems, so Docker is a sure-fire way of eliminating the "it doesn't work on my machine" problem.

Note: There is a full tutorial video available to set up this Django project from scratch.

Don't want to read this tutorial? Okay, fine. I have put this video together for you.


Also, the code for this tutorial can be found in this GitHub repository. Just clone the project from the did_django_schedule_jobs_v2 repo into your local development directory using the docker branch:

 

git clone --branch docker git@github.com:bobby-didcoding/did_django_schedule_jobs_v2.git

Okay, let's jump into it...

Syllabus:

This tutorial will focus on the following:

  1. Cloning the GitHub repository and setting everything up
  2. Configuring Docker
  3. Testing

Prerequisites:

I will be using Gmail to send user confirmation emails. You will need a Google app password. Don't worry, I have you covered! Watch this video to set up a Google app password.
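For reference, a Gmail app password is normally wired into Django's SMTP settings along these lines. This is just a sketch using the variable names from the .env file we fill in later, not necessarily the exact code in the repo:

# settings.py (sketch) - sending mail through Gmail with an app password
import os

EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = "smtp.gmail.com"
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = os.environ.get("DONOT_REPLY_EMAIL", "")
EMAIL_HOST_PASSWORD = os.environ.get("GOOGLE_APP_PASSWORD", "")
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER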

Also, you will need to download and install the latest versions of Docker and Docker Compose on your machine.
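
You can check that both are installed and on your PATH with:

docker --version
docker-compose --version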

Setup:

First, let's get our code.

Navigate to your development directory in File Explorer. The easiest way to open a terminal there is to click into the directory address bar and type 'cmd'.

This will open a new Command Prompt (cmd) in that location. From the Command Prompt, create the project directory and open Visual Studio Code with the following commands:

mkdir did_django_schedule_jobs_v2 && cd did_django_schedule_jobs_v2
code .

This will open Visual Studio Code.

Open a new Git Bash terminal in Visual Studio Code and run the following commands:

git clone git@github.com:bobby-didcoding/did_django_schedule_jobs_v2.git .
cp .env.template .env

Remove the following variables from the newly created .env file. We don't need them anymore:

CELERY_BROKER=redis://127.0.0.1:6379
CELERY_BACKEND=django-db

Docker setup:

Let's create a Dockerfile and an entrypoint.sh file.

Open a new terminal in Visual Studio Code and use the following command:

cd backend && echo > Dockerfile && echo > entrypoint.sh

Add the following code to the newly created Dockerfile:

FROM python:3.9

ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1

RUN apt-get update && apt-get install -y netcat gettext

RUN mkdir /code
COPY . /code/
WORKDIR /code

RUN pip install --upgrade pip
RUN pip install -r /code/requirements/Base.txt

RUN chmod +x /code/entrypoint.sh
RUN touch /code/logs/celery.log
RUN chmod +x /code/logs/celery.log
ENTRYPOINT ["/code/entrypoint.sh"]

Add the following code to the newly created entrypoint.sh file:

#!/bin/sh

python manage.py flush --no-input
python manage.py makemigrations
python manage.py migrate

exec "$@"

So what have we done here?

We started off by creating a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.

  • We begin with the official Python:3.9 Docker image
  • We set some environment variables
    1. PYTHONDONTWRITEBYTECODE: This stops Python from writing .pyc files inside the container, which is a good optimization.
    2. PYTHONUNBUFFERED: This stops Python from buffering stdout and stderr, so our output shows up in the Docker logs straight away.
  • We install the necessary dependencies.
  • We then make a new directory called 'code', copy the project into it, and set it as the working directory. The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.
  • We update pip and install the project requirements.
  • We are then creating the celery.log file and giving it and the entrypoint.sh script the permissions they need (see the note on the logs directory after this list). An ENTRYPOINT allows you to configure a container that will run as an executable. In our case, the entrypoint flushes the database and runs the migrations before handing over to the container's main command.
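
One thing to be aware of: the RUN touch /code/logs/celery.log step expects a logs directory to exist in the build context, and we only create it a little later in this tutorial. Once it exists, you can sanity-check the image on its own from the project root (the tag name here is just an example):

docker build -t did_django_backend ./backend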

We can now go ahead and create a docker-compose file. Run the following in your terminal:

cd ..
echo > docker-compose.yml

Add the following code to the newly created docker-compose file:

version: '3.8'

services:
    db:
      image: postgres:13.0-alpine
      volumes:
        - postgres_data:/var/lib/postgresql/data/
      environment:
        - POSTGRES_USER=did_django_schedule_jobs_v2_docker
        - POSTGRES_PASSWORD=did_django_schedule_jobs_v2_Password
        - POSTGRES_DB=did_django_schedule_jobs_v2_dev
      container_name: did_django_schedule_jobs_v2_db
  
    app:
      build:
        context: ./backend
        dockerfile: Dockerfile
      restart: always
      command: python manage.py runserver 0.0.0.0:8000
      volumes:
        - ./backend/:/usr/src/backend/
      ports:
        - 8000:8000
      env_file:
        - ./.env
      depends_on:
        - db
      container_name: did_django_schedule_jobs_v2_django_app
    
    redis:
      image: redis:6-alpine
      ports:
        - "6379:6379" 
      container_name: did_django_schedule_jobs_v2_redis
        
    celery_worker:
      restart: always
      build:
        context: ./backend
      command: celery -A did_django_schedule_jobs_v2 worker --loglevel=info --logfile=logs/celery.log
      volumes:
        - ./backend:/usr/src/backend
      env_file:
        - ./.env
      depends_on:
        - db
        - redis
        - app
      container_name: did_django_schedule_jobs_v2_celery_worker
  
    celery-beat:
      build: ./backend
      command: celery -A did_django_schedule_jobs_v2 beat -l info
      volumes:
        - ./backend:/usr/src/backend
      env_file:
        - ./.env
      depends_on:
        - db
        - redis
        - app
      container_name: did_django_schedule_jobs_v2_celery_beat
  
    flower:
      build:
        context: ./backend
      command: celery -A did_django_schedule_jobs_v2 flower  --broker=redis://host.docker.internal:6379//
      ports:
        - 5555:5555
      env_file:
        - ./.env
      depends_on:
        - db
        - app
        - redis
        - celery_worker
      container_name: did_django_schedule_jobs_v2_flower
  
volumes:
    postgres_data:

So what have we done here?

Docker Compose is a tool for defining and running multi-container Docker applications. Our docker-compose.yml file defines the configuration for the application's services. Then, with a single command, we can create and start all of the services defined in the file.

You will notice that we are now using Flower to handle the monitoring and administration of Celery (the flower service exposes its dashboard on port 5555). We will need to add this dependency to the requirements.

Whilst we are at it, we should also add the PostgreSQL requirements.

Open backend/requirements/Base.txt and add the following packages:

...
flower==1.0.0
psycopg2-binary==2.9.1
...

We also need to create a new logs directory and a celery.log file to capture the Celery logs (the Dockerfile and the Celery worker both expect logs/celery.log to exist).

Use the following command to create them:

cd backend && mkdir logs && cd logs && echo This is our celery log > celery.log && cd ../..

You now need to add some environment variables.

Open the .env file and change the following variables to match your own.

Note: Follow this tutorial to get your app password

DONOT_REPLY_EMAIL = **Add email**
GOOGLE_APP_PASSWORD = **Add app password**

We now need to add database credentials for PostgreSQL.

Whilst still in the .env file, add the following variables.

SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=did_django_schedule_jobs_v2_dev
SQL_USER=did_django_schedule_jobs_v2_docker
SQL_PASSWORD=did_django_schedule_jobs_v2_Password
SQL_HOST=db
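
These names are not arbitrary: the project's Django settings are expected to read them from the environment to build the DATABASES setting, roughly like this (a sketch only; the real code lives in the repo's settings module):

# settings.py (sketch) - database config driven by the SQL_* environment variables
import os

DATABASES = {
    "default": {
        "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("SQL_DATABASE", "db.sqlite3"),
        "USER": os.environ.get("SQL_USER", ""),
        "PASSWORD": os.environ.get("SQL_PASSWORD", ""),
        "HOST": os.environ.get("SQL_HOST", "localhost"),
        "PORT": os.environ.get("SQL_PORT", "5432"),  # SQL_PORT is optional; 5432 is the Postgres default
    }
}

Note that SQL_HOST is set to db, which is the name of the Postgres service in docker-compose.yml; inside the Compose network, service names resolve as hostnames.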

We should now be able to fire up the image and container. This can be done with a single command:

docker-compose -f docker-compose.yml up -d --build
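
The first build can take a few minutes. Once it finishes, you can confirm that all of the containers are up and follow the Django logs with:

docker-compose ps
docker-compose logs -f app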

Note: If Docker throws an error when the containers start, check the line endings in the Docker-related files (in particular entrypoint.sh): they need to be "LF" rather than "CRLF". In VS Code you can switch this from the selector at the bottom right of the status bar.
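
If you are on Windows and want to stop Git converting the line endings in the first place, one option is a .gitattributes rule that forces LF for shell scripts:

# .gitattributes
*.sh text eol=lf

Git only applies this on checkout, so you may need to re-clone or reset the affected files after adding it.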

You should now be able to visit your working app in a web browser

Great! Now we need to set up a superuser.

You can access the Docker container to run Python and Django scripts with the following code:

docker exec -it did_django_schedule_jobs_v2_django_app bash

Now create a superuser with the following:

python manage.py createsuperuser

Note: Use your own username and password.

Now log in to the admin page in your web browser (with Django's default URL configuration this is at http://127.0.0.1:8000/admin/).

Finished :) Let's check that our project is running correctly. Visit http://127.0.0.1:8000/ in your web browser.

Conclusion

We have now Dockerized our Django project and it works perfectly. The new project is a Dockerized Django app that uses Celery, Celery Beat, Redis and Flower to handle scheduled jobs and logging.
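
When you are finished, you can stop and remove the containers with a single command (add -v if you also want to drop the postgres_data volume):

docker-compose down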

