I’ve been using Docker for my staging and production environments, but I’ve recently figured out how to make Docker work for my development environment as well.
When I work on my personal web applications, I have three environments:
- Production – the actual application that serves the users
- Staging – a replica of the production environment on my laptop
- Development – the environment where I write source code, unit/integration test, debug, integrate, etc.
While having a development environment that is significantly different (i.e. not using Docker) from the staging/production environments is not necessarily a problem, I’ve really enjoyed the switch to using Docker for development.
The key aspects that were important to me when deciding to switch to Docker for my development environment were:
- Utilize the Flask development server instead of a production web server (Gunicorn)
- Allow easy access to my database (Postgres)
- Maintain my unit/integration testing capability
This blog post shows how to configure Docker and Docker Compose for creating a development environment that you can easily use on a day-to-day basis for developing a Flask application.
For reference, my Flask project that is the basis for this blog post can be found on GitLab.
The architecture for this Flask application is illustrated in the following diagram:
Each key component has its own sub-directory in the repository:
```sh
$ tree .
.
├── docker-compose.yml
├── nginx
│   ├── Dockerfile
├── postgresql
│   └── Dockerfile    * Not included in git repository
└── web
    ├── Dockerfile
    ├── create_postgres_dockerfile.py
    ├── instance
    ├── project
    ├── requirements.txt
    └── run.py
```
Configuration of Dockerfiles and Docker Compose for Production
The setup for my application utilizes separate Dockerfiles for the web application, Nginx, and Postgres; these services are integrated together using Docker Compose.
Web Application Service
Originally, I had been using the python:*-onbuild image for my web application, as it seemed like a convenient and reasonable option (it provided the standard configuration steps for a Python project). However, the notes on the Python page on Docker Hub now recommend against using the *-onbuild images.
Therefore, I created a Dockerfile that I use for my web application that creates a non-root user (flask) for the container to run as:
```dockerfile
FROM python:3.6.1
MAINTAINER Patrick Kennedy <firstname.lastname@example.org>

# Create the group and user to be used in this container
RUN groupadd flaskgroup && useradd -m -g flaskgroup -s /bin/bash flask

# Create the working directory (and set it as the working directory)
RUN mkdir -p /home/flask/app/web
WORKDIR /home/flask/app/web

# Install the package dependencies (this step is separated
# from copying all the source code to avoid having to
# re-install all python packages defined in requirements.txt
# whenever any source code change is made)
COPY requirements.txt /home/flask/app/web
RUN pip install --no-cache-dir -r requirements.txt

# Copy the source code into the container
COPY . /home/flask/app/web
RUN chown -R flask:flaskgroup /home/flask

USER flask
```
It may seem odd or out of sequence to copy the requirements.txt file from the local system into the container separately from the entire repository, but this is intentional. If you copy over the entire repository and then ‘pip install’ all the packages in requirements.txt, any change in the repository will cause all the packages to be re-installed (this can take a long time and is unnecessary) when you build this container. A better approach is to first just copy over the requirements.txt file and then run ‘pip install’. If changes are made to the repository (not to requirements.txt), then the cached intermediate container (or layer in your service) will be utilized. This is a big time saver, especially during development. Of course, if you make a change to requirements.txt, this will be detected during the next build and all the python packages will be re-installed in the intermediate container.
Here is the Dockerfile that I use for my Nginx service:
```dockerfile
FROM nginx:1.11.3

RUN rm /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/

RUN rm /etc/nginx/conf.d/default.conf
COPY family_recipes.conf /etc/nginx/conf.d/
```
There is a lot of complexity when it comes to configuring Nginx, so please refer to my blog post entitled ‘How to Configure Nginx for a Flask Web Application’.
The Dockerfile for the postgres service is very simple, but I actually use a python script (create_postgres_dockerfile.py) to auto-generate it based on the credentials of my postgres database. The structure of the Dockerfile is:
```dockerfile
FROM postgres:9.6

# Set environment variables
ENV POSTGRES_USER <postgres_user>
ENV POSTGRES_PASSWORD <postgres_password>
ENV POSTGRES_DB <postgres_database>
```
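The actual create_postgres_dockerfile.py in the repository may differ, but a generator script of this sort could be as simple as rendering the template above with real credentials (read here from environment variables, so the file containing secrets never has to be committed):

```python
import os

# Template matching the Dockerfile structure shown above
DOCKERFILE_TEMPLATE = """FROM postgres:9.6

# Set environment variables
ENV POSTGRES_USER {user}
ENV POSTGRES_PASSWORD {password}
ENV POSTGRES_DB {database}
"""


def render_dockerfile(user, password, database):
    """Return the postgres Dockerfile contents with credentials filled in."""
    return DOCKERFILE_TEMPLATE.format(user=user, password=password, database=database)


# Placeholder defaults; in practice these would come from your environment
print(render_dockerfile(
    os.environ.get('POSTGRES_USER', 'flask_user'),
    os.environ.get('POSTGRES_PASSWORD', 'change_me'),
    os.environ.get('POSTGRES_DB', 'family_recipes'),
))
```

Writing the rendered string to postgresql/Dockerfile (and keeping that file out of version control, as the directory tree notes) completes the picture.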
Docker Compose is a great tool for connecting different services (i.e. containers) to create a fully functioning application. The configuration of the application is defined in the docker-compose.yml file:
```yaml
version: '2'

services:

  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /usr/src/app/web/project/static
    command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
    depends_on:
      - postgres

  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web

  data:
    image: postgres:9.6
    volumes:
      - /var/lib/postgresql
    command: "true"

  postgres:
    restart: always
    build: ./postgresql
    volumes_from:
      - data
    expose:
      - "5432"
```
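One caveat worth knowing: `depends_on` only controls start-up *order*, not readiness, so the web container can come up while Postgres is still initializing. A small readiness probe (not part of the original project, just a sketch) can bridge that gap before running anything that needs the database:

```python
import socket
import time


def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError until the service is accepting
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

Inside the Compose network the service name doubles as a hostname, so something like `wait_for_port('postgres', 5432)` at the top of a database-initialization script would block until Postgres is actually accepting connections.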
The following commands need to be run to build and then start these containers:
```sh
docker-compose build
docker-compose -f docker-compose.yml up -d
```
Additionally, I utilize a script to re-initialize the database, which is frequently used in the staging environment:
docker-compose run --rm web python ./instance/db_create.py
To see the application, navigate in your favorite web browser to http://ip_of_docker_machine/; this will often be http://192.168.99.100/. The ‘docker-machine ip’ command will tell you the IP address to use.
Changes Needed for Development Environment
The easiest way to make the necessary changes for the development environment is to create the changes in the docker-compose.override.yml file.
Docker Compose automatically checks for docker-compose.yml and docker-compose.override.yml when the ‘up’ command is used. Therefore, in development use ‘docker-compose up -d’ and in production or staging use ‘docker-compose -f docker-compose.yml up -d’ to prevent the loading of docker-compose.override.yml.
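To double-check which settings will actually take effect, `docker-compose config` prints the merged configuration that `up` will use:

```sh
# Development: base file merged with the override file
docker-compose config

# Staging/production: only the base file
docker-compose -f docker-compose.yml config
```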
Here are the contents of the docker-compose.override.yml file:
```yaml
version: '2'

services:

  web:
    build: ./web
    ports:
      - "5000:5000"
    environment:
      - FLASK_APP=run.py
      - FLASK_DEBUG=1
    volumes:
      - ./web/:/usr/src/app/web
    command: flask run --host=0.0.0.0

  postgres:
    ports:
      - "5432:5432"
```
Each setting in docker-compose.override.yml overrides the applicable setting from docker-compose.yml.
Web Application Service
For the web application container, the web server is being switched from Gunicorn (used in production) to the Flask development server. The Flask development server automatically reloads the application whenever a change is made and provides debugging capability right in the browser when an exception occurs. These are great features to have during development. Additionally, port 5000 is now exposed from the web application container, so the developer can reach the Flask development server by navigating to http://ip_of_docker_machine:5000.
For the postgres container, the only change that is made is to allow access to port 5432 by the host machine instead of just other services. For reference, here is a good explanation of the use of ‘ports’ vs. ‘expose’ from Stack Overflow.
This change allows direct access to the postgres database using the psql shell; when connecting, I prefer specifying the full connection URI:
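With placeholder credentials (substitute whatever values your postgres container was configured with, and your own Docker Machine IP):

```sh
psql postgresql://<postgres_user>:<postgres_password>@192.168.99.100:5432/<postgres_database>
```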
This allows you access to the postgres database, which will come in really handy at some point during development (almost a guarantee).
While there are no override settings for the Nginx service, this service is effectively ignored during development, as the web application is accessed directly through the Flask development server by navigating to http://ip_of_docker_machine:5000/. I have not found a clean way to disable a service, so the Nginx service is left untouched.
Alternate Solution with Nginx Service
While there is a nice simplicity to just using the Flask development server (web service) with the Postgres database (postgres service), it is also possible to utilize the Nginx service in development. This approach has the advantage of being more similar to the production environment.
In order to implement this configuration, the docker-compose.override.yml file needs to be changed so that the web service maps port 8000 instead of 5000 and tells `flask run` to listen on that port:
```yaml
version: '2'

services:

  web:
    build: ./web
    ports:
      - "8000:8000"
    environment:
      - FLASK_APP=run.py
      - FLASK_DEBUG=1
    volumes:
      - ./web/:/usr/src/app/web
    command: flask run --host=0.0.0.0 --port 8000

  postgres:
    ports:
      - "5432:5432"
```
With this configuration, you will just navigate to http://ip_of_docker_machine/ to access the application (for example, http://192.168.99.100/). The command ‘docker-machine ip’ will tell you the IP address to use.
Running the Development Application
The following commands should be run to build and run the containers:
```sh
docker-compose stop    # If there are existing containers running, stop them
docker-compose build
docker-compose up -d
```
Since you are running in a development environment with the Flask development server, you will need to navigate to http://ip_of_docker_machine:5000/ to access the application (for example, http://192.168.99.100:5000/). The command ‘docker-machine ip’ will tell you the IP address to use.
Another helpful command that allows quick access to the logs of a specific container is:
docker-compose logs <service>
For example, to see the logs of the web application, run ‘docker-compose logs web’. In the development environment, you should see something similar to:
```sh
$ docker-compose logs web
Attaching to flaskrecipeapp_web_1
web_1  |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1  |  * Restarting with stat
web_1  |  * Debugger is active!
web_1  |  * Debugger pin code: ***-***-***
```
Docker is an amazing product that I have really come to enjoy using for my development environment. I really feel that using Docker makes you think about your entire architecture, as Docker provides such an easy way to start integrating complex services, like web services, databases, etc.
Using Docker for a development environment does require a good deal of setup, but once you have the configuration working, it’s a great way to develop your application quickly while still keeping one foot in the production environment.
Docker Compose File (version 2) Reference
Dockerizing Flask With Compose and Machine – From Localhost to the Cloud
NOTE: This was the blog post that got me really excited to learn about Docker!
Docker Compose for Development and Production – GitHub – Antonis Kalipetis
Also, check out Antonis’ talk from DockerCon17 on YouTube.
Overview of Docker Compose CLI
Guidance for Docker Image Authors
Docker Command Reference
Start or Re-start Docker Machine:
$ docker-machine start default
$ eval $(docker-machine env default)
Build all of the images in preparation for running your application:
$ docker-compose build
Using Docker Compose to run the multi-container application (in daemon mode):
$ docker-compose up -d
$ docker-compose -f docker-compose.yml up -d
View the logs from the different running containers:
$ docker-compose logs
$ docker-compose logs web # or whatever service you want
Stop all of the containers that were started by Docker Compose:
$ docker-compose stop
Run a command in a specific container:
$ docker-compose run --rm web python ./instance/db_create.py
$ docker-compose run web bash
Check the containers that are running:
$ docker ps
Stop all running containers:
$ docker stop $(docker ps -a -q)
Delete all containers:
$ docker rm $(docker ps -a -q)
Delete all untagged Docker images:
$ docker rmi $(docker images | grep "^