Learn to walk with Docker and PostgreSQL

After several years, the virtualization tool Docker has proven its importance for the software industry. When you hear about virtualization, you might think it is something for administrators that will not affect you as a developer very much. But wait, you might be wrong, because some basic knowledge about Docker will help you as a developer in your daily business.

Step 0: create a local bridged network

docker network create -d bridge --subnet=172.18.0.0/16 services

The name of the network is services and it is bound to the IP address range 172.18.0.0 to 172.18.255.255. You can verify that it was created by typing:

docker network ls

An output like the one below should appear:

NETWORK ID     NAME       DRIVER    SCOPE
ac2f58175229   bridge     bridge    local
a01dc5513882   host       host      local
1d3d3ac42a40   none       null      local
82da585ee2df   services   bridge    local

The network step is important because it gives applications a permanent address under which they can connect to the PostgreSQL DBMS. If you skip it, Docker manages the IP addresses itself, and when you run multiple containers on your machine the addresses can change after a system reboot, depending mostly on the order in which the containers are started.
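
To double-check the configured subnet and see which containers are attached to the network, you can also inspect it:

docker network inspect services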

Step 1: create the container and initialize the database

docker run -d --name pg-dbms --restart=no \
--net services --ip 172.18.0.20 \
-e POSTGRES_PASSWORD=s3cr3t \
-e PGPASSWORD=s3cr3t \
postgres:11

If you wish your PostgreSQL to always be up after you restart your system, you should change the restart policy from no to always. The second line configures the network connection we defined in step 0. After you have created the instance pg-dbms of your PostgreSQL 11 Docker image, you need to check whether it was successful. You can do this with the

docker ps -a

command. If your container is still running after around 30 seconds, you did everything right.
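
Optionally you can go one step further and verify that the database actually accepts connections. The checks below are not part of the original steps, but they use only standard Docker and psql commands; docker update also lets you switch the restart policy later without recreating the container:

# watch the startup log of the container
docker logs pg-dbms
# run a quick query inside the container
docker exec -it pg-dbms psql -U postgres -c 'SELECT version();'
# change the restart policy afterwards, if desired
docker update --restart=always pg-dbms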

Step 2: copy the initialized database directory to a local directory on your host system

docker cp pg-dbms:/var/lib/postgresql/data /home/ed/postgres

The biggest problem with the current container is that all data will be lost when you erase the container. This means we need to find a way to persist this data permanently. The easiest way is to copy the data directory from the container to a directory on your host system. The copy command takes two parameters, source and destination. For the source you specify the container from which you want to grab the files; in our case the container is named pg-dbms. The destination is a postgres folder in the home directory of the user ed. If you use Windows instead of Linux, it works the same way; just adapt the directory path and try to avoid whitespace. When the files appear in the defined directory, you are done with this step.
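
On Windows the same command works with an adapted destination path, for example (the user folder below is purely illustrative):

docker cp pg-dbms:/var/lib/postgresql/data C:\Users\ed\postgres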

Step 3: stop the current container

docker stop pg-dbms

If you wish to start a container instead, just replace the word stop with the word start. We no longer need the container we created to grab the initial files for the PostgreSQL DBMS, so we can erase it; but before we can do that, the running container has to be stopped.

Step 4: erase the current container

docker rm pg-dbms

After the container is stopped, we are able to erase it. This is also necessary because step 5 creates a new container with the same name pg-dbms.

Step 5: recreate the container with an external volume

docker run -d --name pg-dbms --restart=no \
--net services --ip 172.18.0.20 \
-e POSTGRES_PASSWORD=s3cr3t \
-e PGPASSWORD=s3cr3t \
-v /media/ed/memory/pg:/var/lib/postgresql/data \
postgres:11

Now we can link the directory containing the exported initial database to a newly created PostgreSQL container; just make sure the mounted host path (here /media/ed/memory/pg) contains the data you copied in step 2. That's all. The big benefit of this exercise is that every database we create in PostgreSQL, including its data, now lives outside the Docker container on our local machine. This makes backups much simpler and prevents losing information when a container has to be updated.
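
For example, a simple file-level backup is now just an archive of the mounted directory. Stop the container first so the data files are in a consistent state (the archive name is only an illustration):

docker stop pg-dbms
tar -czf pg-data-backup.tar.gz -C /media/ed/memory/pg .
docker start pg-dbms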

If you have other images instead of PostgreSQL from which you need to grab files in order to reuse them, you can use this tutorial too; just adapt the image and the paths you need. The procedure is almost the same. If you would like to learn more facts about Docker, you can also watch my video Docker Basics in less than 10 Minutes. If you like this short tutorial, share it with your friends and colleagues. To stay informed, don't forget to subscribe to my newsletter.

The official PostgreSQL Docker Image in DockerHub: https://hub.docker.com/_/postgres

Docker Basics in less than 10 minutes

This short tutorial covers the most fundamental steps to use Docker in your development tool chain. After introducing the basic theory, we will learn how to install Docker on a Linux OS (Ubuntu Mate). Once this is done, we take a short walk-through to download an image and instantiate the container. The example uses the official PHP 7.3 image with an Apache 2 HTTP server.
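
As a minimal sketch of that walk-through (the container name php-web and the host port 8080 are just illustrative choices; the image tag is the official one from DockerHub):

docker pull php:7.3-apache
docker run -d --name php-web -p 8080:80 -v "$PWD":/var/www/html php:7.3-apache

Afterwards Apache answers on http://localhost:8080 and serves the PHP files from your current directory.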

Modern Times

There seems to be a dire necessity to automate everything, even the automation itself. This is the common understanding of, and therefore the motivation for, most DevOps teams.

In my role as Configuration and Release Manager, in close to every project I joined I saw gaps in the build structure or in the software architecture that I had to fix by optimizing the build jobs. But often you can't fix symptoms like long-running build scripts with just a few clicks. In this post I will give a brief introduction to common problems in software projects that you need to overcome before you really think about implementing a DevOps culture.

  1. Build logic can't fix a broken architecture. A huge amount of SCM merge conflicts occur because of missing encapsulation of business logic. A function that is spread through many modules or services makes it highly likely that a file will be touched by more than one developer.

  2. The necessity of orchestrated builds is a hint of architectural problems. Transitive dependencies, missing encapsulation and a heavy dependency chain are typical reasons for running into the chicken-and-egg problem. Design your artifacts to be as independent as possible.

  3. Build logic has to be developed by developers, not by administrators. People who focus on operations have different concepts for maintaining artifact builds than software developers. A good anti-pattern example of a build structure is webMethods by Software AG: they don't provide a repository server like Sonatype Nexus to share dependencies, so the build always points to the dependencies inside a webMethods installation. This practice violates the basic idea of build automation, which is mentioned in the book 'Practices of an Agile Developer' from 2006.

  4. Not everything at once. Split the build jobs into specific goals, such as create artifact, run acceptance tests, create API documentation and generate reports (see the sketch after this list). If one of the later steps fails, you don't need to repeat everything. The execution time of the build is dramatically reduced and the build infrastructure is easier to maintain.

  5. Don't give too much flexibility to your build infrastructure. This point is strongly related to the first topic I explained. A build manager with too little discipline will create extremely complex scripts nobody is able to understand. The JavaScript task runner Grunt is an example of how build logic can get messy and unreadable. This is one of the reasons why my favorite build tool for Java projects is always Maven: it enforces understandable builds.

  6. There is no requirement to automate the automation. By definition, complex automation has higher costs than simple tasks. Always think beforehand about the benefits of your automation activities, to see whether it makes sense to spend time and money on them.

  7. We do what we can, but can we what we do? Or, in the words of Grady Booch: "A fool with a tool is still a fool." Understand the requirements of your project and decide, based on them, which tool you choose. If you don't have the resources, even the most professional solution cannot support you. Once you have understood your problem, you are able to learn new professional, advanced processes.

  8. Build logic has to run first on the local development environment. If your build does not run on your local development machine, don't call it build logic; it is just a hack. Build logic has to be platform and IDE independent.

  9. Don't mix up source repositories. Organizing the sources in several folders inside one huge directory just creates a complex build without any flexibility. Sources should be structured by technology or as separate independent modules.
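
As an illustration of point 4, here is a minimal sketch of build jobs split into specific goals for a Java project, using standard Maven lifecycle phases (how these are wired into separate CI jobs depends on your infrastructure):

# job 1: compile and create the artifact
mvn clean package
# job 2: run the integration and acceptance tests separately
mvn verify
# job 3: generate the site documentation and reports
mvn site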

Many of the points I mentioned can be understood by comparing them with the current situation in almost every project. The solution to fix these things in a healthy manner is in most cases not that complicated; it just needs a bit of attention and good planning. The most important advice I can give is to follow the KISS principle: keep it simple, stupid. This means following the standard process as far as possible, without modifications. You don't need to reinvent the wheel. There are reasons why a standard becomes a standard. Here is a short plan you can follow.

  • First: understand the problem.
  • Second: investigate a standard solution for the process.
  • Third: develop a plan for applying the solution to the existing process landscape. This implies kicking out tools that do not support standard processes.

If you follow your own plan step by step, without jumping further ahead than the next point, you will see positive results quite fast.

By the way: if you would like guidance on establishing a successful DevOps process, don't hesitate to contact me. I offer hands-on consulting and also training to build up a powerful DevOps team.