Okay, this tutorial marks the last part of the series. We will be covering Continuous Delivery (CD). By the end, you will understand how to continuously deploy your project to DigitalOcean using GitHub Actions.
What is Continuous Delivery (CD)?
Continuous Delivery is all about getting your code changes in front of your users as quickly, safely and sustainably as possible. Code changes could be bug fixes, new features or configuration changes. The key here is 'continuous'.
It's worth noting that I have been using CD here at Did Coding for 2 years, and it has enabled me to be far more efficient. If nothing else, CD ensures that my code is always in a deployable state.
I could give you a basic project to demonstrate how this all works, but where is the fun in that? I am going to use THIS project, as it's as close to a 'real world' project as I can get without being overly complicated.
This project is a Dockerized Django project that uses a combination of Celery, Celery Beat, Redis and Flower to manage ad hoc and scheduled jobs. We will also be using Nginx and Gunicorn for good measure. Like I said above, this is pretty darn close to a production stack, so it should be a useful tutorial.
Before I start, here are a few links for you to get your teeth into:
Don't want to read this tutorial? Okay, fine. I have put this video together for you.
Also, the code for this tutorial can be found in this GitHub repository. Just clone the did_django_schedule_jobs_v2 repo into your local development directory using the docker-cd branch:
git clone --branch docker-cd git@github.com:bobby-didcoding/did_django_schedule_jobs_v2.git
Okay, let's jump into it...
Syllabus:
This tutorial will focus on the following:
- Cloning the GitHub repository
- Configuring project
- DigitalOcean config
- GitHub access key and secrets
- Packages and Actions
- Deploy
Prerequisites:
I will be using Gmail to send user confirmation emails. You will need a Google app password. Don't worry, I have you covered! Watch this video to set up a Google app password.
One last thing. You will need a domain! I buy most of mine via Interserve and 123 Reg, but others are available.
Setup:
First, let's get our code.
Navigate to your development directory. On Windows, the easiest way to open a terminal here is to click into the directory address bar and type 'cmd'.
This will open a new Command Prompt (cmd). Use the following commands to create a new project directory and open Visual Studio Code in it:
mkdir did_django_schedule_jobs_v2 && cd did_django_schedule_jobs_v2
code .
This will open Visual Studio Code in your new directory.
Open a new git terminal in Visual Studio Code and run the following command:
git clone --branch docker-prod git@github.com:bobby-didcoding/did_django_schedule_jobs_v2.git .
Now set up your own repository on GitHub and update the project's remote to point to your newly created repo:
git remote set-url origin git@github.com:USERNAME/REPOSITORY.git
git remote -v
git branch -M main
git checkout main
git push -u origin main
Project setup:
This won't take long. We just need to add a few files.
Now, let's create the necessary files:
mkdir .github && cd .github
mkdir workflows && cd workflows
echo > main.yml
cd ../..
echo > docker-compose.cicd.yml
mkdir logs && cd logs
echo > celery.log
cd ..
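A quick check from the project root should now show the new untracked files (output will vary slightly depending on your shell):
git status --short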
You will notice that we have created a main.yml within the .github/workflows directory. This will be used to define jobs for GitHub Actions.
We also created a new compose file called docker-compose.cicd.yml. This will be used to build the necessary images in GitHub.
We need to make one or two little changes to our Dockerfile to ensure the correct directories are created in our Docker container. Replace the code in backend/Dockerfile.prod with the following:
###########
# BUILDER #
###########
FROM python:3.9 as builder
WORKDIR /usr/src/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update && apt-get -y install netcat
COPY . /usr/src/app/
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r /usr/src/app/requirements/Prod.txt
#########
# FINAL #
#########
FROM python:3.9
# create directory for the user
RUN mkdir -p /home/app
# create the appuser user in appuser group
# RUN addgroup -S app && adduser -S app -G app
ARG user=app
ARG group=app
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group} && useradd -u ${uid} -g ${group} -s /bin/sh ${user}
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/static
RUN mkdir $APP_HOME/media
RUN mkdir $APP_HOME/logs
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get -y install netcat && apt-get -y install gettext && apt-get -y install nano
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements/Prod.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache /wheels/*
# copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# create the celery log file
RUN touch /home/app/web/logs/celery.log
RUN chmod +x /home/app/web/logs/celery.log
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R ${user}:${group} $HOME
RUN ["chmod", "+x", "/home/app/web/entrypoint.prod.sh"]
We now need to alter our .gitignore file, as we need the celery log to be pushed to our repo. Replace the contents of .gitignore with the following code:
mediafiles/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
# *.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
# env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
.DS_Store
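Note that the usual *.log rule under "Django stuff" is commented out, which is what lets the celery log through. To confirm it will actually reach the repo, git check-ignore should come back empty; it only prints a matching rule when a path is ignored:
git check-ignore -v logs/celery.log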
We'll put a pin in our project configuration for now as the code will not make sense until we've tinkered with GitHub and DigitalOcean.
DigitalOcean config:
Let's get started!
Visit DigitalOcean and set up an account. This only takes a few minutes.
DigitalOcean has an API that allows you to configure Droplets and Managed Databases from your command line. However, I prefer to use their interface, so feel free to set up DigitalOcean the way you feel most comfortable.
You will need the following:
- Droplet with Docker and Docker compose installed
- Managed Database
- Domain/DNS configuration
Droplet with Docker and Docker compose installed:
Docker and Docker Compose will need to be installed on a virtual machine. You can configure a new droplet with Docker and Docker Compose pre-installed. However, I have a good video to help you set one up from scratch. You will need this to continue following along.
You will need to install a few packages on your new virtual machine. The easiest way to connect is DigitalOcean's built-in droplet console. Click into your droplet and select 'Console' from the top-right panel:
This will open a terminal and allow you to install your packages and add SSH keys. Use the following commands to begin configuration:
sudo apt update
sudo apt upgrade
We now need to create a new user and add SSH keys:
adduser **username**
usermod -aG sudo **username**
gpasswd -a **username** sudo
cd
cd /home/**username**
mkdir cicd && mkdir .ssh && cd .ssh
sudo nano authorized_keys
Add your SSH public key into the nano editor. Use Ctrl + S to save and Ctrl + X to exit. Do the same for root:
cd /root
mkdir -p .ssh
sudo nano .ssh/authorized_keys
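If you don't already have an SSH key pair on your local machine, you can generate one first and paste the public half into authorized_keys (a sketch using an ed25519 key; the file name and comment are just labels). The matching private key is what you will later store in the PRIVATE_KEY GitHub secret:
# run this on your LOCAL machine, not the droplet
ssh-keygen -t ed25519 -C "cicd-deploy" -f ~/.ssh/id_ed25519_cicd
# print the public key so you can paste it into the nano editor
cat ~/.ssh/id_ed25519_cicd.pub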
Now, finish preparing your virtual machine.
sudo apt install python3-pip python3-dev libpq-dev postgresql-client nginx
sudo snap install certbot --classic
Now set up the firewall:
sudo ufw allow openssh
sudo ufw enable
We can now configure Nginx. Use the following to create a new config file:
Note: change the domain to your own
sudo nano /etc/nginx/sites-available/didcoding.uk
Paste the following into the nano and use Ctrl + S to save and Ctrl + X to exit:
server {
    server_name didcoding.uk www.didcoding.uk;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
We can now repeat this for our Flower subdomain:
sudo nano /etc/nginx/sites-available/flower.didcoding.uk
Paste the following into the nano and use Ctrl + S to save and Ctrl + X to exit:
server {
    server_name flower.didcoding.uk;

    location / {
        proxy_pass http://localhost:5555;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
We can now enable both config files by linking them into sites-enabled:
sudo ln -s /etc/nginx/sites-available/didcoding.uk /etc/nginx/sites-enabled
sudo ln -s /etc/nginx/sites-available/flower.didcoding.uk /etc/nginx/sites-enabled
You can test the Nginx config with the following:
sudo nginx -t
We can now restart Nginx and update the firewall rules:
sudo systemctl restart nginx
sudo ufw delete allow 8000
sudo ufw allow 'Nginx Full'
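You can confirm the firewall rules at any point with:
sudo ufw status verbose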
Managed Database:
We will be using DigitalOcean's managed database service. Don't worry, I have a good video to help you set one up.
Domain/DNS configuration:
This part does not take too long.
Visit DigitalOcean.
Now, click on the green 'Create' button and select 'Domains/DNS'
This will redirect you to a new page. Type your domain name into the highlighted field and click the blue 'Create Domain' button.
You will be directed to your newly created domain.
Type '@' into the first highlighted field and select your droplet in the second highlighted field. Now click the blue 'Create Record' button
Repeat the last step but change '@' for 'www'.
Repeat the last step but change 'www' for 'flower'.
We have just created the correct A records for our project. The "A" stands for "address"; it is the most fundamental type of DNS record and maps a hostname to an IPv4 address.
We can now secure an SSL certificate for our domain and subdomain.
Head back to your console and use the following code:
sudo certbot --nginx -d didcoding.uk -d www.didcoding.uk -d flower.didcoding.uk
Lastly, check that Certbot can automatically renew our SSL certificates:
sudo certbot renew --dry-run
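For reference, the certbot snap registers a systemd timer that handles renewals for you; you can confirm it is scheduled (timer names can vary between install methods):
systemctl list-timers | grep certbot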
DigitalOcean is all good to go. Make a note of your new IP address and managed database information, as you will need them when we come to set up GitHub.
GitHub access key and secrets:
GitHub Packages is a platform for hosting and managing packages, including containers and other dependencies. GitHub Packages combines your source code and packages in one place to provide integrated permissions management and billing, so you can centralize your software development on GitHub.
You can integrate GitHub Packages with GitHub APIs, GitHub Actions, and webhooks to create an end-to-end DevOps workflow that includes your code, CI, and deployment solutions. We'll use it to store Docker images.
Let's get a personal access token. Log into your GitHub account and visit https://github.com/settings/apps.
Now click "Personal access tokens". Then, click "Generate new token".
You will now be given a whole bunch of options.
You need to focus on the following 3 things:
- Name your token
- Select 'No expiration'
- Select 'write:packages' and 'delete:packages' in scopes
You can now click the green 'Generate token' button and make a note of your new personal token.
We now need to set up some GitHub Actions secrets. I love using action secrets; they bring another level of security to my projects, as they remove the need for .env files and keep all sensitive information encrypted.
Navigate to your repository's secrets tab. Your URL will look something like this:
https://github.com/your-username/your-repo-name/settings/secrets/actions
You need to add the following secrets for this project:
- DIGITAL_OCEAN_IP_ADDRESS: This is your DigitalOcean Droplet IP address
- DJANGO_ALLOWED_HOSTS: This is your domain name
- DONOT_REPLY_EMAIL: This is your Gmail email address
- GOOGLE_APP_PASSWORD: This is your Google app password
- NAMESPACE: This is your GitHub username
- PERSONAL_ACCESS_TOKEN: This is the token we created a moment ago
- POSTGRES_DB: This is your DigitalOcean managed database name
- POSTGRES_PASSWORD: This is the database user password
- POSTGRES_USER: This is the database username
- PRIVATE_KEY: This is your SSH private key
- SECRET_KEY: This is your Django secret key
- SQL_DATABASE: This is your DigitalOcean managed database name
- SQL_HOST: This is the database host
- SQL_PASSWORD: This is the database user password
- SQL_PORT: This is the database port
- SQL_USER: This is the database username
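If you need a fresh value for the SECRET_KEY secret, Django can generate one for you locally (assuming Django is installed in your environment):
python -c 'from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())'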
Packages and Actions
Okay, let's run a quick test. Let's create an image of our project and push it to GitHub Packages.
Note: You will need to add your own details in place of the ** placeholders.
docker build -f backend/Dockerfile.prod -t ghcr.io/**USERNAME**/**REPOSITORY_NAME**/app:latest ./backend
docker login ghcr.io -u **USERNAME** -p **PERSONAL_ACCESS_TOKEN**
docker push ghcr.io/**USERNAME**/**REPOSITORY_NAME**/app:latest
If all went well, you will now be able to view the package at one of the following URLs.
https://github.com/orgs/**USERNAME**/packages
https://github.com/**USERNAME**?tab=packages
If you can see your package, we're ready to move on!
We are now ready to configure our GitHub Actions.
"GitHub Actions make it easy to automate all your software workflows, now with world-class CI/CD. Build, test, and deploy your code right from GitHub. Make code reviews, branch management, and issue triaging work the way you want
Go back to VS Code, open .github/workflows/main.yml and add the following code:
name: Continuous Integration and Delivery

on: [push]

env:
  APP_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/app
  CELERY_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/celery
  BEAT_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/beat
  FLOWER_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/flower
  NGINX_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/nginx

jobs:
  build:
    name: Build Docker Images
    runs-on: ubuntu-latest
    steps:
      - name: Checkout master
        uses: actions/checkout@v1
      - name: Add environment variables to .env
        run: |
          echo DEBUG=0 >> .env
          echo PRODUCTION=1 >> .env
          echo SQL_ENGINE=django.db.backends.postgresql_psycopg2 >> .env
          echo DATABASE=postgres >> .env
          echo SECRET_KEY=${{ secrets.SECRET_KEY }} >> .env
          echo SQL_DATABASE=${{ secrets.SQL_DATABASE }} >> .env
          echo SQL_USER=${{ secrets.SQL_USER }} >> .env
          echo SQL_PASSWORD=${{ secrets.SQL_PASSWORD }} >> .env
          echo SQL_HOST=${{ secrets.SQL_HOST }} >> .env
          echo SQL_PORT=${{ secrets.SQL_PORT }} >> .env
          echo DONOT_REPLY_EMAIL=${{ secrets.DONOT_REPLY_EMAIL }} >> .env
          echo GOOGLE_APP_PASSWORD=${{ secrets.GOOGLE_APP_PASSWORD }} >> .env
          echo DJANGO_ALLOWED_HOSTS=${{ secrets.DJANGO_ALLOWED_HOSTS }} >> .env
      - name: Set environment variables
        run: |
          echo "APP_IMAGE=$(echo ${{env.APP_IMAGE}} )" >> $GITHUB_ENV
          echo "CELERY_IMAGE=$(echo ${{env.CELERY_IMAGE}} )" >> $GITHUB_ENV
          echo "BEAT_IMAGE=$(echo ${{env.BEAT_IMAGE}} )" >> $GITHUB_ENV
          echo "FLOWER_IMAGE=$(echo ${{env.FLOWER_IMAGE}} )" >> $GITHUB_ENV
          echo "NGINX_IMAGE=$(echo ${{env.NGINX_IMAGE}} )" >> $GITHUB_ENV
      - name: Log in to GitHub Packages
        run: echo ${PERSONAL_ACCESS_TOKEN} | docker login ghcr.io -u ${{ secrets.NAMESPACE }} --password-stdin
        env:
          PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
      - name: Pull images
        run: |
          docker pull ${{ env.APP_IMAGE }} || true
          docker pull ${{ env.CELERY_IMAGE }} || true
          docker pull ${{ env.BEAT_IMAGE }} || true
          docker pull ${{ env.FLOWER_IMAGE }} || true
          docker pull ${{ env.NGINX_IMAGE }} || true
      - name: Build images
        run: |
          docker-compose -f docker-compose.cicd.yml build
      - name: Push images
        run: |
          docker push ${{ env.APP_IMAGE }}
          docker push ${{ env.CELERY_IMAGE }}
          docker push ${{ env.BEAT_IMAGE }}
          docker push ${{ env.FLOWER_IMAGE }}
          docker push ${{ env.NGINX_IMAGE }}
So what are we doing here?
We have basically defined our first GitHub job, which will be triggered every time we push our code.
You can see that we are referencing the GitHub Actions secrets that we added to our GitHub repository. We are also referencing docker-compose.cicd.yml, so we need to add some content to that file.
Open docker-compose.cicd.yml and add the following code:
version: '3'

services:
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - .env
    container_name: did_django_schedule_jobs_v2_db_prod
    networks:
      - main_prod

  app:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
      cache_from:
        - "${APP_IMAGE}"
    image: "${APP_IMAGE}"
    restart: always
    command: gunicorn did_django_schedule_jobs_v2.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - .env
    depends_on:
      - db
    networks:
      - main_prod
    container_name: did_django_schedule_jobs_v2_django_app_prod

  redis:
    image: redis:6-alpine
    expose:
      - 6379
    ports:
      - "6379:6379"
    networks:
      - main_prod
    container_name: did_django_schedule_jobs_v2_redis_prod

  celery_worker:
    restart: always
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
      cache_from:
        - "${CELERY_IMAGE}"
    image: "${CELERY_IMAGE}"
    command: celery -A did_django_schedule_jobs_v2 worker --loglevel=info --logfile=logs/celery.log
    volumes:
      - ./backend:/home/app/web/
    networks:
      - main_prod
    env_file:
      - .env
    depends_on:
      - db
      - redis
      - app
    container_name: did_django_schedule_jobs_v2_celery_worker_prod

  celery-beat:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
      cache_from:
        - "${BEAT_IMAGE}"
    image: "${BEAT_IMAGE}"
    command: celery -A did_django_schedule_jobs_v2 beat -l info
    volumes:
      - ./backend:/home/app/web/
    networks:
      - main_prod
    env_file:
      - .env
    depends_on:
      - db
      - redis
      - app
    container_name: did_django_schedule_jobs_v2_celery_beat_prod

  flower:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
      cache_from:
        - "${FLOWER_IMAGE}"
    image: "${FLOWER_IMAGE}"
    command: "celery -A did_django_schedule_jobs_v2 flower
      --broker=redis://redis:6379//
      --env-file=.env
      --basic_auth=bobby:password"
    ports:
      - 5555:5555
    networks:
      - main_prod
    env_file:
      - .env
    depends_on:
      - db
      - app
      - redis
      - celery_worker
    container_name: did_django_schedule_jobs_v2_flower_prod

  nginx:
    container_name: did_django_schedule_jobs_v2_nginx_prod
    restart: always
    build:
      context: ./nginx
      cache_from:
        - "${NGINX_IMAGE}"
    image: "${NGINX_IMAGE}"
    ports:
      - "8080:8080"
    networks:
      - main_prod
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    depends_on:
      - app

volumes:
  postgres_data:
  static_volume:
  media_volume:

networks:
  main_prod:
    driver: bridge
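Before committing, you can sanity-check this compose file locally with docker-compose config, which prints the fully-resolved configuration or an error (a quick sketch; the dummy image names simply silence unset-variable warnings, and a local .env file must exist for the env_file entries):
APP_IMAGE=test CELERY_IMAGE=test BEAT_IMAGE=test FLOWER_IMAGE=test NGINX_IMAGE=test docker-compose -f docker-compose.cicd.yml config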
Now replace the contents of docker-compose.prod.yml with the following code:
version: '3'

services:
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - .env
    container_name: did_django_schedule_jobs_v2_db_prod
    networks:
      - main_prod

  app:
    image: "${APP_IMAGE}"
    restart: always
    command: gunicorn did_django_schedule_jobs_v2.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 8000:8000
    env_file:
      - .env
    depends_on:
      - db
    networks:
      - main_prod
    container_name: did_django_schedule_jobs_v2_django_app_prod

  redis:
    image: redis:6-alpine
    expose:
      - 6379
    ports:
      - "6379:6379"
    networks:
      - main_prod
    container_name: did_django_schedule_jobs_v2_redis_prod

  celery_worker:
    restart: always
    image: "${CELERY_IMAGE}"
    command: celery -A did_django_schedule_jobs_v2 worker --loglevel=info --logfile=logs/celery.log
    volumes:
      - ./backend:/home/app/web/
    networks:
      - main_prod
    env_file:
      - .env
    depends_on:
      - db
      - redis
      - app
    container_name: did_django_schedule_jobs_v2_celery_worker_prod

  celery-beat:
    image: "${BEAT_IMAGE}"
    command: celery -A did_django_schedule_jobs_v2 beat -l info
    volumes:
      - ./backend:/home/app/web/
    networks:
      - main_prod
    env_file:
      - .env
    depends_on:
      - db
      - redis
      - app
    container_name: did_django_schedule_jobs_v2_celery_beat_prod

  flower:
    image: "${FLOWER_IMAGE}"
    command: "celery -A did_django_schedule_jobs_v2 flower
      --broker=redis://redis:6379//
      --env-file=.env
      --basic_auth=bobby:password"
    ports:
      - 5555:5555
    networks:
      - main_prod
    env_file:
      - .env
    depends_on:
      - db
      - app
      - redis
      - celery_worker
    container_name: did_django_schedule_jobs_v2_flower_prod

  nginx:
    container_name: did_django_schedule_jobs_v2_nginx_prod
    restart: always
    image: "${NGINX_IMAGE}"
    ports:
      - "8080:8080"
    networks:
      - main_prod
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    depends_on:
      - app

volumes:
  postgres_data:
  static_volume:
  media_volume:

networks:
  main_prod:
    driver: bridge
We have just created a compose file that is used to build images for GitHub Packages. We have then altered our production docker-compose file to pull those new GitHub images instead of building them on the server.
We can now push our code to our GitHub repository and, with any luck, our build job will be triggered. Use the following commands to create a new branch and commit our changes:
git checkout -b cicd
git add -A
git commit -m "initial commit"
git push -u origin cicd
You will be able to access the job in your repository's Actions tab:
https://github.com/**USERNAME**/**REPOSITORY_NAME**/actions
Make sure the job successfully completes. You will see something like this when finished.
Remove the old 'did_django_schedule_jobs_v2/app' package that we created earlier. You can do this by clicking into the package, clicking 'Package settings' and clicking the 'Delete package' button.
We can now add a deploy job to our main.yml file. Replace the contents of .github/workflows/main.yml with the following code:
name: Continuous Integration and Delivery

on: [push]

env:
  APP_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/app
  CELERY_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/celery
  BEAT_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/beat
  FLOWER_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/flower
  NGINX_IMAGE: ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/nginx

jobs:
  build:
    name: Build Docker Images
    runs-on: ubuntu-latest
    steps:
      - name: Checkout master
        uses: actions/checkout@v1
      - name: Add environment variables to .env
        run: |
          echo DEBUG=0 >> .env
          echo PRODUCTION=1 >> .env
          echo SQL_ENGINE=django.db.backends.postgresql_psycopg2 >> .env
          echo DATABASE=postgres >> .env
          echo SECRET_KEY=${{ secrets.SECRET_KEY }} >> .env
          echo SQL_DATABASE=${{ secrets.SQL_DATABASE }} >> .env
          echo SQL_USER=${{ secrets.SQL_USER }} >> .env
          echo SQL_PASSWORD=${{ secrets.SQL_PASSWORD }} >> .env
          echo SQL_HOST=${{ secrets.SQL_HOST }} >> .env
          echo SQL_PORT=${{ secrets.SQL_PORT }} >> .env
          echo DONOT_REPLY_EMAIL=${{ secrets.DONOT_REPLY_EMAIL }} >> .env
          echo GOOGLE_APP_PASSWORD=${{ secrets.GOOGLE_APP_PASSWORD }} >> .env
          echo DJANGO_ALLOWED_HOSTS=${{ secrets.DJANGO_ALLOWED_HOSTS }} >> .env
      - name: Set environment variables
        run: |
          echo "APP_IMAGE=$(echo ${{env.APP_IMAGE}} )" >> $GITHUB_ENV
          echo "CELERY_IMAGE=$(echo ${{env.CELERY_IMAGE}} )" >> $GITHUB_ENV
          echo "BEAT_IMAGE=$(echo ${{env.BEAT_IMAGE}} )" >> $GITHUB_ENV
          echo "FLOWER_IMAGE=$(echo ${{env.FLOWER_IMAGE}} )" >> $GITHUB_ENV
          echo "NGINX_IMAGE=$(echo ${{env.NGINX_IMAGE}} )" >> $GITHUB_ENV
      - name: Log in to GitHub Packages
        run: echo ${PERSONAL_ACCESS_TOKEN} | docker login ghcr.io -u ${{ secrets.NAMESPACE }} --password-stdin
        env:
          PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
      - name: Pull images
        run: |
          docker pull ${{ env.APP_IMAGE }} || true
          docker pull ${{ env.CELERY_IMAGE }} || true
          docker pull ${{ env.BEAT_IMAGE }} || true
          docker pull ${{ env.FLOWER_IMAGE }} || true
          docker pull ${{ env.NGINX_IMAGE }} || true
      - name: Build images
        run: |
          docker-compose -f docker-compose.cicd.yml build
      - name: Push images
        run: |
          docker push ${{ env.APP_IMAGE }}
          docker push ${{ env.CELERY_IMAGE }}
          docker push ${{ env.BEAT_IMAGE }}
          docker push ${{ env.FLOWER_IMAGE }}
          docker push ${{ env.NGINX_IMAGE }}

  deploy:
    name: Deploy to DigitalOcean
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Checkout master
        uses: actions/checkout@v1
      - name: Add environment variables to .env
        run: |
          echo DEBUG=0 >> .env
          echo PRODUCTION=1 >> .env
          echo SQL_ENGINE=django.db.backends.postgresql_psycopg2 >> .env
          echo DATABASE=postgres >> .env
          echo SECRET_KEY=${{ secrets.SECRET_KEY }} >> .env
          echo SQL_DATABASE=${{ secrets.SQL_DATABASE }} >> .env
          echo SQL_USER=${{ secrets.SQL_USER }} >> .env
          echo SQL_PASSWORD=${{ secrets.SQL_PASSWORD }} >> .env
          echo SQL_HOST=${{ secrets.SQL_HOST }} >> .env
          echo SQL_PORT=${{ secrets.SQL_PORT }} >> .env
          echo DONOT_REPLY_EMAIL=${{ secrets.DONOT_REPLY_EMAIL }} >> .env
          echo GOOGLE_APP_PASSWORD=${{ secrets.GOOGLE_APP_PASSWORD }} >> .env
          echo APP_IMAGE=${{ env.APP_IMAGE }} >> .env
          echo CELERY_IMAGE=${{ env.CELERY_IMAGE }} >> .env
          echo BEAT_IMAGE=${{ env.BEAT_IMAGE }} >> .env
          echo FLOWER_IMAGE=${{ env.FLOWER_IMAGE }} >> .env
          echo NGINX_IMAGE=${{ env.NGINX_IMAGE }} >> .env
          echo NAMESPACE=${{ secrets.NAMESPACE }} >> .env
          echo DJANGO_ALLOWED_HOSTS=${{ secrets.DJANGO_ALLOWED_HOSTS }} >> .env
          echo PERSONAL_ACCESS_TOKEN=${{ secrets.PERSONAL_ACCESS_TOKEN }} >> .env
      - name: Add the private SSH key to the ssh-agent
        env:
          SSH_AUTH_SOCK: /tmp/ssh_agent.sock
        run: |
          mkdir -p ~/.ssh
          ssh-agent -a $SSH_AUTH_SOCK > /dev/null
          ssh-keyscan github.com >> ~/.ssh/known_hosts
          ssh-add - <<< "${{ secrets.PRIVATE_KEY }}"
      - name: Build and deploy images on DigitalOcean
        env:
          SSH_AUTH_SOCK: /tmp/ssh_agent.sock
        run: |
          scp -o StrictHostKeyChecking=no -r ./.env ./backend ./logs ./nginx ./docker-compose.prod.yml root@${{ secrets.DIGITAL_OCEAN_IP_ADDRESS }}:/home/bobby/cicd
          ssh -o StrictHostKeyChecking=no root@${{ secrets.DIGITAL_OCEAN_IP_ADDRESS }} << 'ENDSSH'
          cd /home/bobby/cicd
          source .env
          docker login ghcr.io -u $NAMESPACE -p $PERSONAL_ACCESS_TOKEN
          docker pull $APP_IMAGE
          docker pull $CELERY_IMAGE
          docker pull $BEAT_IMAGE
          docker pull $FLOWER_IMAGE
          docker pull $NGINX_IMAGE
          docker-compose -f docker-compose.prod.yml up -d
          ENDSSH
So what are we doing here?
We have expanded on our first build job and added a deploy job. The deploy job will be triggered when we merge code into our main branch.
You now need to push your code to GitHub:
git add -A
git commit -m "final push"
git push -u origin cicd
Check to make sure the build went well. If it did, make a pull request to main and merge the changes. This will trigger the deploy job.
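Once the deploy job has finished, connect to your droplet (substituting your own IP address or domain) and run the database migrations and collectstatic from inside the app container:
ssh root@**DIGITAL_OCEAN_IP_ADDRESS**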
docker exec -it did_django_schedule_jobs_v2_django_app_prod bash
python manage.py migrate
python manage.py collectstatic
exit
You will now need to restart the celery beat container, as it requires a few database tables to work:
docker restart did_django_schedule_jobs_v2_celery_beat_prod
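To confirm everything is up, you can list the running containers and tail the celery beat logs (the container names come from the compose file above):
docker ps --format "table {{.Names}}\t{{.Status}}"
docker logs --tail 50 did_django_schedule_jobs_v2_celery_beat_prod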
Conclusion
We have successfully Dockerized a Django project that uses Celery, Redis and Flower. We then set up HTTPS and deployed our project to a DigitalOcean droplet using GitHub Actions.