When you need to launch multiple containers that work together as a single application, a single Dockerfile is not enough to manage them.
That’s where “docker compose” comes in: it lets us manage our containers from a higher level.
The application that we will test is a simple Django application.
The database is PostgreSQL, and it will also be installed using Docker.
Normally, it is not recommended to run the database in Docker alongside the application. The database should run on its own so that other applications can access it at any time, and its data should be persistent. If you put the database into the compose file, then every time you deploy a new version of the application, you have to redeploy the database as well. (Of course, there are ways to keep the data, such as mounting a volume. However, since it is usually recommended to use a managed database service from a cloud provider such as Azure or AWS, it is not worth arguing about here!)
Anyway, I just wanted to show how to handle a multi-container setup where one container depends on another.
The challenging part in this scenario is that when the Django application starts, it needs a working database connection before it can actually run.
Then you may think, “What’s the problem?”.
Well, the problem is that if Django finishes starting up before PostgreSQL is ready, it will raise an error saying it cannot find the database, and the container will stop.
Then you may think, “Maybe we can add some waiting time?”.
Well, you cannot just add an arbitrary delay, because on some systems the database comes up faster than on others; any fixed timeout is either too short or wastefully long.
Here I introduce a common way to deal with a dependent multi-container setup using docker compose.
Introducing the sample application
This is just a website that adds a new image as a card-style UI element every time the user refreshes the page.
It will be much easier if you see the following code.
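The original code embed is not reproduced here, but the flow it describes can be sketched in plain Python. (random_picture, POSTS, and index are stand-in names; the real app uses a Django view and a PostgreSQL-backed model rather than an in-memory list.)

```python
import base64

# In-memory stand-in for the Post model/table (hypothetical;
# the real app stores rows in PostgreSQL via the Django ORM).
POSTS: list[str] = []

def random_picture() -> bytes:
    # Placeholder: the real function fetches a random image from the web.
    return b"\x89PNG fake image bytes"

def index() -> list[str]:
    # On each request: fetch an image, store it base64-encoded,
    # then return all stored posts for rendering.
    encoded = base64.b64encode(random_picture()).decode("ascii")
    POSTS.append(encoded)
    return list(POSTS)
```

Base64-encoding the image bytes lets them live in an ordinary text column and be embedded straight into the HTML.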
As you can see, every time the user requests the index page, it calls the random_picture function and stores the image, base64-encoded, in the database. It then returns all the posts to render in the HTML page.
Pretty simple, right?
I assume that you can intuitively understand the code.
There are two services (containers): ‘app’ and ‘postgres’.
The ‘app’ container will be built using the Dockerfile in the root path.
The ‘postgres’ container will be built using the Dockerfile in the ‘postgres’ folder.
The most important parameter is “links”. Since the database is only exposed to the ‘app’ container, there must be a link between the ‘app’ and ‘postgres’ containers. (If you expose the database port publicly, you don’t need this parameter.)
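The compose file itself is not shown here; a minimal sketch of what it might look like (the service names ‘app’ and ‘postgres’ come from the text, while the ports, environment values, and volume name are assumptions):

```yaml
version: "3"

services:
  app:
    build: .                  # Dockerfile in the root path
    ports:
      - "8000:8000"
    links:
      - postgres              # lets 'app' reach the database as host "postgres"

  postgres:
    build: ./postgres         # Dockerfile in the 'postgres' folder
    environment:
      POSTGRES_DB: app_db     # assumed values - match them in Django settings
      POSTGRES_USER: app_user
      POSTGRES_PASSWORD: app_pass
    volumes:
      - pgdata:/var/lib/postgresql/data   # keeps data across redeploys

volumes:
  pgdata:
```

Note that linking (or sharing a network) only makes the database reachable; it does not guarantee the database is ready, which is exactly the problem discussed above.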
Now, let’s take a look at the Dockerfile for the ‘app’ container.
I think nothing is special until line 6.
And if you have ever used Docker before, you would end up with CMD [“python”, “run”, something something].
However, since we are building our Django application from scratch, we need some further steps.
Therefore, we put all the extra commands into ‘entry_point.sh’ file.
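The Dockerfile itself is not reproduced here; a minimal sketch consistent with the description (the base image, paths, and file names other than entry_point.sh are assumptions) might look like:

```dockerfile
FROM python:3-slim

WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# All the extra startup work (waiting for the DB, migrating,
# starting the server) lives in entry_point.sh
RUN chmod +x entry_point.sh
CMD ["./entry_point.sh"]
```

The key design choice is that CMD runs the shell script instead of the server directly, so the retry logic can run first.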
The most important part of this shell script is lines 4 to 7.
This loop repeats until “python manage.py migrate” works properly (i.e., does not raise an error).
Of course, there are other ways to check the status of the database, such as a dedicated health check.
You can use whatever approach you want; just remember to repeat it “until it works”.
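The idea behind entry_point.sh can be sketched as a generic retry helper. (wait_until_ok is a hypothetical name; the real script simply wraps “python manage.py migrate” in an until loop and then starts the server.)

```shell
#!/bin/sh
# Retry a command until it exits successfully.
wait_until_ok() {
  # "$@" is the command to retry; the real entrypoint uses
  # "python manage.py migrate" here, which fails until PostgreSQL
  # is ready to accept connections.
  until "$@"; do
    echo "not ready yet - retrying in 1s..."
    sleep 1
  done
}

# In the real entry_point.sh:
# wait_until_ok python manage.py migrate
# python manage.py runserver 0.0.0.0:8000
```

Because the loop keys off the migration command actually succeeding, it adapts to however long the database takes to start, with no arbitrary fixed delay.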
Alright that’s it!
You can find the complete source here!