Spring Cleaning for Docker

Anyone interested in this somewhat specialized article doesn’t need an explanation of what Docker is and what this virtualization tool is used for. Therefore, this article is primarily aimed at system administrators, DevOps engineers, and cloud developers. For those who aren’t yet completely familiar with the technology, I recommend our Docker course: From Zero to Hero.

In a scenario where we regularly create new Docker images and instantiate various containers, our hard drive is put under considerable strain. Depending on their complexity, images can easily reach several hundred megabytes to gigabytes in size. To prevent creating new images from feeling like downloading a three-minute MP3 with a 56k modem, Docker uses a build cache. However, if there’s an error in the Dockerfile, this build cache can become quite bothersome. Therefore, it’s a good idea to clear the build cache regularly. Old container instances that are no longer in use can also lead to strange errors. So, how do you keep your Docker environment clean?

While docker rm <container-name> and docker rmi <image-id> will certainly get you quite far, in build environments like Jenkins or in server clusters this strategy quickly becomes a time-consuming and tedious task. But first, let’s get an overview of the overall situation. The command docker system df will help us with this.

root:/home# docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          15        9         5.07GB    2.626GB (51%)
Containers      9         7         11.05MB   5.683MB (51%)
Local Volumes   226       7         6.258GB   6.129GB (97%)
Build Cache     0         0         0B        0B
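
If a more detailed breakdown is needed, the verbose flag lists the numbers per image, container, and volume:

docker system df -v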

Before I delve into the details, one important note: The commands presented here are very efficient and irrevocably delete the affected resources. Try them in a test environment first before using them on production systems. Furthermore, I’ve found it helpful to keep the commands for instantiating containers under version control in a text file, as sketched below.
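
A minimal sketch of such a versioned run command; the image name, container name, port, and mount path are placeholders for your own setup:

docker run -d --name my-app -p 8080:8080 -v /srv/my-app/data:/data my-app:1.0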

The most obvious step in a Docker system cleanup is deleting unused containers. Specifically, the delete command permanently removes all container instances that are not running (i.e., not active). If you want to start with a clean slate on a Jenkins build node before a deployment, you can first terminate all containers running on the machine with a single command.

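A minimal sketch with the stock Docker CLI: first stop everything that is still running, then prune the stopped instances.

docker stop $(docker ps -q)
docker container prune -f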

The -f parameter suppresses the confirmation prompt, making it ideal for automated scripts. Deleting containers frees up relatively little disk space; the main resource drain comes from downloaded images, which can also be removed with a single command. Before images can be deleted, however, it must be ensured that they are not in use by any container, even an inactive one. Removing unused containers offers another practical advantage: it releases the ports those containers blocked. A host port can only be bound to one container at a time, so stale containers quickly lead to error messages. We therefore extend our script with the option to delete all Docker images not currently used by any container.

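A sketch of the corresponding step with the plain Docker CLI; the -a flag extends the prune from dangling images to all images without an associated container:

docker image prune -a -f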

Another consequence of our efforts concerns Docker’s image layers: for performance reasons, especially in CI environments, you should avoid relying on previously cached layers. Docker volumes, on the other hand, are less problematic. When you prune volumes, Docker deletes the volume references it manages; folders and files mounted into containers from the host as bind mounts remain unaffected. Bear in mind, though, that data stored inside Docker-managed volumes is removed along with them. The -a parameter deletes all unused volumes, not just anonymous ones.

docker volume prune -a -f

Another area affected by our cleanup efforts is the build cache. Especially if you’re experimenting with creating new Dockerfiles, it can be very useful to manually clear the cache from time to time. This prevents incorrectly created layers from persisting in the builds and causing unusual errors later in the instantiated container. The corresponding command is:

docker buildx prune -f
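
Depending on the Docker version and the builder in use, the classic variant clears the same cache:

docker builder prune -f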

The most radical option is to release all unused resources in one go. There is an explicit command for this as well.

docker system prune -a -f --volumes

This removes all stopped containers, unused networks, all images without an associated container, and the entire build cache; the --volumes flag extends the sweep to unused volumes.

We can, of course, also use the commands just presented in CI build environments like Jenkins or GitLab CI. However, this alone might not lead to the desired result. A proven approach for Continuous Integration/Continuous Deployment is to set up your own Docker registry to which you deploy your self-built images. This provides a good backup and caching system for the images in use: once built correctly, images can be conveniently rolled out to different server instances over the local network without having to be rebuilt locally every time. It has also proven useful to dedicate a build node specifically to building and testing Docker images before they go into use. Even on cloud instances like Azure and AWS, you should keep an eye on performance and resource efficiency, because costs can escalate quickly and seriously disrupt a stable project.
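
A minimal sketch of such a registry setup using the official registry image; the host name registry.internal and the image my-app:1.0 are placeholders for your own environment:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag my-app:1.0 registry.internal:5000/my-app:1.0
docker push registry.internal:5000/my-app:1.0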

In this article, we have seen that in-depth knowledge of the tools used offers several opportunities for cost savings. The motto “We do it because we can” is particularly unhelpful in a commercial environment and can quickly degenerate into an expensive waste of resources.



Apache Maven Master Class

Apache Maven (Maven for short) was first released on March 30, 2002, as an Apache Top-Level Project under the free Apache 2.0 License. This license allows companies to use it in a commercial environment without paying license fees.

The word Maven comes from Yiddish and means something like “collector of knowledge.”

Maven is a pure command-line program developed in the Java programming language. It belongs to the category of build tools and is primarily used in Java software development projects. In the official documentation, Maven describes itself as a project management tool, as its functions extend far beyond creating (compiling) binary executable artifacts from source code. Maven can be used to generate quality analyses of program code and API documentation, to name just a few of its diverse applications.

Benefits

Online Course (yearly subscription / 365 days)

Maven Master Class: 3.47 Milli-Bitcoin

Target groups

This online course is suitable for both beginners with no prior knowledge and experienced experts. Each lesson is self-contained and can be individually selected. Extensive supplementary material explains concepts and is supported by numerous references. This allows you to use the Apache Maven Master Class course as a reference. New content is continually being added to the course. If you choose to become an Apache Maven Master Class member, you will also have full access to exclusive content.

Developer

  • Maven Basics
  • Maven on the Command Line
  • IDE Integration
  • Archetypes: Creating Project Structures
  • Test Integration (TDD & BDD) with Maven
  • Test Containers with Maven
  • Multi-Module Projects for Microservices

Build Manager / DevOps

  • Release Management with Maven
  • Deploy to Maven Central
  • Sonatype Nexus Repository Manager
  • Maven Docker Container
  • Creating Docker Images with Maven
  • Encrypted Passwords
  • Process & Build Optimization

Quality Manager

  • Maven Site – The Reporting Engine
  • Determine and evaluate test coverage
  • Static code analysis
  • Review coding style specifications

In-Person Live Training – Build Management with Apache Maven

Cookbook: Maven Source Code Samples

Our Git repository contains an extensive collection of various code examples for Apache Maven projects. Everything is clearly organized by topic.


  1. Token Replacement
  2. Compiler Warnings
  3. Executable JAR Files
  4. Enforcements
  5. Unit & Integration Testing
  6. Multi Module Project (JAR / WAR)
  7. BOM – Bill Of Materials (Dependency Management)
  8. Running ANT Tasks
  9. License Header – Plugin
  10. OWASP
  11. Profiles
  12. Maven Wrapper
  13. Shade Uber JAR (Plugin)
  14. Java API Documentation (JavaDoc)
  15. Java Sources & Test Case packaging into JARs
  16. Docker
  17. Assemblies
  18. Maven Reporting (site)
  19. Flatten a POM
  20. GPG Signer

Featureitis

You don’t have to be a software developer to recognize a good application. But from my own experience, I’ve often seen programs that were promising and innovative at the start mutate into unwieldy behemoths once they reach a certain number of users. Since I’ve been making this observation regularly for several decades now, I’ve wondered what the reasons for this might be.

The phenomenon of software programs, or solutions in general, becoming overloaded with details was termed “featuritis” by Brooks in his classic book, “The Mythical Man-Month.” Considering that the first edition of the book was published in 1975, it’s fair to say that this is a long-standing problem. Perhaps the most famous example of featuritis is Microsoft’s Windows operating system. Of course, there are countless other examples of improvements that make things worse.

Windows users who had grown comfortable with Windows XP were confronted with its wonderful successor Vista, appeased again by Windows 7, driven close to a heart attack by Windows 8 and 8.1, and then calmed down once more at the start of Windows 10. At least for a short time, until the forced updates quickly brought them back down to earth. And don’t even get me started on Windows 11. The old saying about Windows was that every other version is junk and should be skipped. Well, that hasn’t been true since Windows 7. For me, Windows 10 was the deciding factor in abandoning Microsoft entirely, and like many others, I got myself a new operating system. Some switched to Apple; those who, like me, couldn’t afford or didn’t want the expensive hardware opted for a Linux system. This shows how quickly a lack of insight can cost significant market share. Since Microsoft isn’t drawing any conclusions from these developments, this fact seems to be of little concern to the company. Other companies, however, can be pushed to the brink of collapse by such events, and beyond.

One motivation for adding more and more features to an existing application is the so-called product life cycle, which is represented by the BCG matrix in Figure 1.

With a product’s launch, it’s not yet certain whether it will be accepted by the market. If users embrace it, it quickly rises to stardom and reaches its maximum market position as a cash cow. Once market saturation is reached, it degrades to a slow seller. So far, so good. Unfortunately, the prevailing management view is that if no growth is generated compared to the previous quarter, market saturation has already been reached. This leads to the nonsensical assumption that users must be forced to accept an updated version of the product every year. Of course, the only way to motivate a purchase is to print a bulging list of new features on the packaging.

Since well-designed features can’t simply be churned out on an assembly line, a redesign of the graphical user interface is thrown in as a free bonus every time. Ultimately, this gives the impression of having something completely new, as it requires a period of adjustment to discover the new placement of familiar functions. It’s not as if the redesign actually streamlines the user experience or increases productivity. The arrangement of input fields and buttons always seems haphazardly thrown together.

But don’t worry, I’m not calling for an update boycott; I just want to talk about how things can be improved. Because one thing is certain: thanks to artificial intelligence, the market for software products will change dramatically within just a few years. I don’t expect complex and specialized applications to be produced by AI algorithms anytime soon. However, I do expect plenty of poorly made, AI-generated code sequences that their developers don’t understand to be injected into codebases, leading to unstable applications. This is why I’m turning my attention back to clean, handcrafted, efficient, and reliable software, because I’m sure there will always be a market for it.

I simply don’t want an internet browser that has mutated into a communication hub, offering chat, email, cryptocurrency payments, and who knows what else, in addition to simply displaying web pages. I want my browser to start quickly when I click something, then respond quickly and display website content correctly and promptly. If I ever want to do something else with my browser, it would be nice if I could actively enable this through a plugin.

Now, regarding the problem just described, the argument is often made that the many features are intended to reach a broad user base. Especially if an application has all possible options enabled from the start, it quickly engages inexperienced users who don’t have to first figure out how the program actually works. I can certainly understand this reasoning. It’s perfectly fine for a manufacturer to focus exclusively on inexperienced users. However, there is a middle ground that considers all user groups equally. This solution isn’t new and is very well-known: the so-called product lines.

In the past, manufacturers always defined target groups such as private individuals, businesses, and experts. These user groups were then often assigned product names like Home, Enterprise, and Ultimate. This led to everyone wanting the Ultimate version. This phenomenon is called Fear Of Missing Out (FOMO). Therefore, the names of the product groups and their assigned features are psychologically poorly chosen. So, how can this be done better?

An expert focuses their work on specific core functions that allow them to complete tasks quickly and without distractions. For me, this implies product lines like Essentials, Pure, or Core.

If the product is then intended for use by multiple people within the company, it often requires additional features such as external user management like LDAP or IAM. This specialized product line is associated with terms like Enterprise, Company, Business, and so on.

The cluttered end result, actually aimed at noobs, has all sorts of things activated right after installation. If those users don’t care about the application’s startup and response times, then go for it: go all out and throw in everything you can! Here, names like Ultimate, Full, and Maximized Extended are suitable labels for the product line. The only important thing is that professionals recognize it as the cluttered version.

Those who cleverly manage these product lines and provide as many functions as possible via so-called modules, which can be installed later, enable high flexibility even in expert mode, where users might appreciate the occasional additional feature.

If you add tracking to the module system beforehand to determine how professional users upgrade their version, you’ll already have a good idea of what could be added to the new version of Essentials. However, you shouldn’t rely on download counts alone as the decision criterion for this tracking. I often try things out myself and delete extensions faster than the installation took if I find them useless.

I’d like to give a small example from the DevOps field to illustrate the problem I just described. There’s the well-known GitLab, which was originally a pure code repository hosting project. The name still reflects this today. An application that requires 8 GB of RAM on a server in its basic installation just to make a Git repository accessible to other developers is unusable for me, because this software has become a jack-of-all-trades over time. Slow, inflexible, and cluttered with all sorts of unnecessary features that are better implemented using specialized solutions.

In contrast to GitLab, there’s another, less well-known solution called SCM-Manager, which focuses exclusively on managing code repositories. I personally use and recommend SCM-Manager because its basic installation is extremely compact. Despite this, it offers a vast array of features that can be added via plugins.

I tend to be suspicious of solutions that claim to be an all-in-one solution. To me, that’s always the same: trying to do everything and nothing. There’s no such thing as a jack-of-all-trades, or as we say in Austria, a miracle worker!

When selecting programs for my workflow, I focus solely on their core functionality. Are the basic features promised by the marketing truly present and as intuitive as possible? Is there comprehensive documentation that goes beyond a simple “Hello World”? Does the developer focus on continuously optimizing core functions and consider new, innovative concepts? These are the questions that matter to me.

Especially in commercial environments, programs are often used that don’t deliver on their marketing promises. Instead of choosing what’s actually needed to complete tasks, companies opt for applications whose descriptions are crammed with buzzwords. That’s why I believe that companies that refocus on their core competencies and use highly specialized applications for them will be the winners of tomorrow.