The Future of Build Management
Build tools are not only needed for high-level languages, which must compile source code into machine code before it can be executed. They are now also available for modern scripting languages like Python, Ruby, and PHP, as their scope of responsibility continues to expand. Looking back at the beginnings of this tool category, one inevitably encounters make, the first widely recognized representative of what we now call a build tool. Make’s main task was to generate machine code and package the resulting files into a library or executable. Build tools can therefore be considered automation tools, so it is only logical that they have also taken over many other recurring tasks that arise in a developer’s daily work. For example, one of the most important innovations behind Maven’s success was the management of dependencies on other program libraries.
Another class of automation tools that has almost disappeared is the installer. Products like Inno Setup and Wise Installer were used to automate the installation process for desktop applications. These installation routines are a special form of deployment. The deployment process, in turn, depends on various factors. First and foremost, the operating system used is, of course, a crucial criterion. But the type of application also has a significant influence. Is it, for example, a web application that requires a defined runtime environment (server)? We can already see here that many of the questions being asked now fall under the umbrella of DevOps.
As a developer, it’s no longer enough to simply know how to write program code and implement functions. Anyone wanting to build a web application must first get the corresponding server running on which the application will execute. Fortunately, there are now many solutions that significantly simplify the provisioning of a working runtime. But especially for beginners, it’s not always easy to grasp the whole topic. I still remember forum questions asking where Java Enterprise could be downloaded, only for the asker to discover that it ships as part of an application server.
Where automation solutions were lacking in the early 2000s, the challenge today is choosing the right tool. There’s an analogy here from the Java universe. When the Gradle build tool appeared on the market, many projects migrated from Maven to Gradle. The argument was that it offered greater flexibility. Often, the ability to define orchestrated builds was needed—that is, the sequence in which subprojects are created. Instead of acknowledging that this requirement represented an architectural shortcoming and addressing it, complex and difficult-to-manage build logic was built in Gradle. This, in turn, made customizations difficult to implement, and many projects were migrated back to Maven.
Out of DevOps automation, so-called pipelines have become established. Pipelines can also be understood as processes, and processes, in turn, can be standardized. The best example of a standardized process is the build lifecycle defined in Maven, also known as the default lifecycle. This lifecycle defines 23 sequential steps, which, broadly speaking, perform the following tasks:
- Resolving and downloading dependencies
- Compiling the source code
- Compiling and running unit tests
- Packaging the files into a library or application
- Deploying the artifact locally for use in other local development projects
- Running integration tests
- Deploying the artifacts to a remote repository server
This process has proven highly effective in countless Java projects over the years. However, if you run this process as a pipeline on a CI server like Jenkins, you won’t see much. The individual steps of the build lifecycle are interdependent and cannot be triggered individually. It’s only possible to exit the lifecycle prematurely. For example, after packaging, you can skip the subsequent steps of local deployment and running the integration tests.
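The "exit early, but never skip ahead" behavior of the default lifecycle can be illustrated with a small sketch. This is a simplified model for illustration only (it lists just 7 of the 23 phases and is not how Maven is implemented):

```python
# Simplified model of Maven's default lifecycle: invoking a phase
# always executes every preceding phase as well. Phases cannot be
# triggered in isolation; you can only exit the sequence early.
PHASES = ["validate", "compile", "test", "package",
          "verify", "install", "deploy"]

def run_lifecycle(target_phase):
    """Return the list of phases executed for e.g. 'mvn package'."""
    if target_phase not in PHASES:
        raise ValueError(f"unknown phase: {target_phase}")
    cutoff = PHASES.index(target_phase) + 1
    return PHASES[:cutoff]
```

Calling `run_lifecycle("package")` yields everything up to and including packaging; local install and remote deploy are skipped, which is exactly the premature exit described above.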

A weakness of the build process described here becomes apparent when creating web applications. Web frontends usually contain CSS and JavaScript code that is also optimized automatically. To convert variables defined in SCSS into plain CSS, a Sass preprocessor must be used. Furthermore, it is very useful to compress CSS and JavaScript files as much as possible. This minification process optimizes the loading times of web applications. There are already countless libraries for CSS and JavaScript that can be managed with the NPM tool, and NPM, in turn, provides development-time tools such as Grunt that handle CSS processing and optimization.
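What such a minification step does can be sketched in a few lines. This is a toy example; real pipelines use dedicated tools like the Grunt plugins mentioned above:

```python
import re

def minify_css(css: str) -> str:
    """Naive CSS minifier: strips comments and redundant whitespace."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.DOTALL)  # remove comments
    css = re.sub(r"\s+", " ", css)                        # collapse whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)          # tighten punctuation
    return css.strip()

rule = """
/* primary button */
.btn {
    color: #fff;
    margin: 0 auto;
}
"""
print(minify_css(rule))  # .btn{color:#fff;margin:0 auto;}
```

Fewer bytes over the wire means faster page loads, which is the whole point of this build step.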
We can see how complex the build process of modern applications can become. Compilation is only a small part of it. An important feature of modern build tools is the optimization of the build process. An established solution for this is creating incremental builds. This is a form of caching where only changed files are compiled or processed.
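The core idea of an incremental build, hashing the inputs and reprocessing only what changed, fits in a few lines. This is a minimal sketch of the caching principle, not how Maven or Gradle implement it internally:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Fingerprint of a file's content."""
    return hashlib.sha256(data).hexdigest()

def incremental_build(sources: dict, cache: dict) -> list:
    """Return the files that must be (re)compiled.

    sources: filename -> file content (bytes)
    cache:   filename -> hash from the previous build (updated in place)
    """
    dirty = []
    for name, data in sources.items():
        digest = content_hash(data)
        if cache.get(name) != digest:
            dirty.append(name)    # changed or new file: recompile
            cache[name] = digest  # remember for the next run
    return dirty
```

On the first run everything is dirty; on subsequent runs only files whose content actually changed are reported, which is what keeps repeated builds fast.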

But what needs to be done during a release? This process is only needed once an implementation phase is complete, to prepare the artifact for distribution. While it’s possible to include all the steps involved in a release in the build process, this would lead to longer build times. Longer local build times disrupt the developer’s workflow, making it more efficient to define a separate process for this.
An important condition for a release is that all used libraries must also be in their final release versions. If this isn’t the case, it cannot be guaranteed that subsequent releases of this version are identical. Furthermore, all test cases must run correctly, and a failure will abort the process. Additionally, a corresponding revision tag should be set in the source control repository. The finished artifacts must be signed, and API documentation must be created. Of course, the rules described here are just a small selection, and some of the tasks can even be parallelized. By using sophisticated caching, creating a release can be accomplished quickly, even for large monoliths.
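The first condition, that no dependency may still be a work-in-progress version, is easy to check mechanically. A minimal sketch, relying only on Maven's convention that non-final versions carry a `-SNAPSHOT` suffix:

```python
def release_blockers(dependencies: dict) -> list:
    """Return dependencies that would block a release.

    dependencies: artifact name -> version string.
    A Maven version ending in -SNAPSHOT is not a final release.
    """
    return sorted(artifact
                  for artifact, version in dependencies.items()
                  if version.upper().endswith("-SNAPSHOT"))

deps = {"commons-io": "2.15.1",
        "my-internal-lib": "1.4.0-SNAPSHOT"}
blockers = release_blockers(deps)
if blockers:
    # In a real release pipeline this finding would abort the process.
    print("release blocked by:", ", ".join(blockers))
```

The artifact names used here are purely illustrative; the point is that this gate runs before tagging, signing, and publishing.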
For Maven, for example, no complete release process comparable to the build lifecycle has been defined. Instead, the community has developed a dedicated plugin, the Maven Release Plugin, which semi-automates the simple tasks that arise during a release.
If we take a closer look at the topic of documentation and reporting, we find ample opportunities to describe a complete process. Creating API documentation would be just one minor aspect. Far more compelling about standardized reporting are the various code inspections, some of which can even be performed in parallel.
Of course, deployment is also essential. Due to the diversity of potential target environments, a different strategy is appropriate here. One possible approach would be broad support for configuration tools like Ansible, Chef, and Puppet. Virtualization technologies such as Docker and LXC containers are also standard in the age of cloud computing. The main task of deployment would then be provisioning the target environment and deploying the artifacts from a repository server. A wealth of different deployment templates would significantly simplify this process.
If we consistently extrapolate from these assumptions, we conclude that there can be different types of projects. These would be classic development projects, from which artifacts for libraries and applications are created; test projects, which in turn contain the created artifacts as dependencies; and, of course, deployment projects for providing the infrastructure. The area of automated deployment is also reflected in the concepts of Infrastructure as Code and GitOps, which can be taken up and further developed here.
Clean Desk – More Than Just Security
As a child, I liked to reply to my mother that only a genius could master chaos when she told me to tidy my room. A very welcome excuse to shirk my responsibilities. When I started an apprenticeship in a trade after finishing school, the first thing my master craftsman emphasized was: keeping things tidy. Tools had to be put back in their bags after use, opened boxes of the same materials had to be refilled, and of course, there was also the need to sweep up several times a day. I can say right away that I never perceived these things as harassment, even if they seemed annoying at first. Because we quickly learned the benefits of the motto “keep things clean.”
Tools that are always put back in their place give us a quick overview of whether anything is missing. So we can then go looking for it, and the likelihood of things being stolen is drastically reduced. With work materials, too, you maintain a good overview of what’s been used up and what needs to be replaced. Five empty boxes containing only one or two items not only take up space but also lead to miscalculations of available resources. Finally, it’s also true that one feels less comfortable in a dirty environment, and cleanliness demonstrates to the client that one works in a focused and planned manner.
Due to this early experience, when the concept of Clean Desk was introduced as a security measure in companies a few years ago, I didn’t immediately understand what was expected of me. After all, the Clean Desk principle had been second nature to me long before I completed my computer science degree. But let’s start at the beginning. First, let’s look at what Clean Desk actually is and how to implement it.
One of the first things anyone who delves deeply into security learns is that most successful attacks aren’t carried out using complicated technical maneuvers. They’re much more mundane and usually originate from within, not from the outside. True to the adage: opportunity makes the thief. When you combine this fact with the insights of social engineering, a field primarily shaped by the hacker Kevin Mitnick, a new picture emerges. It’s not necessary to immediately place your own employees under suspicion. A building also hosts external cleaning staff, security personnel, and tradespeople who usually have easy access to sensitive areas. Therefore, the motto should always be: trust is good, but control is better. This is why a Clean Desk Policy is implemented.
The first rule is: anyone leaving their workstation for an extended period must switch off their devices. This applies especially at the end of the workday. Otherwise, at least the desktop should be locked. The concept behind this is quite simple: Security vulnerabilities cannot be exploited from switched-off devices to hack into the company network from the outside. Furthermore, it reduces power consumption and prevents fires caused by short circuits. To prevent the devices from being physically stolen, they are secured to the desk with special locks. I’ve personally experienced devices being stolen during lunch breaks.
Since I myself have stayed in hotels a lot, my computer’s hard drive is encrypted as a matter of course. This also applies to all external storage devices such as USB sticks or external SSDs. If the device is stolen, at least no one can access the data stored on it.
It goes without saying that secure encryption is only possible with a strong password. Many companies have specific rules that employee passwords must meet. It’s also common practice to assign a new password every 30 to 90 days, and this new password must be different from the last three used.
It’s often pointed out that passwords shouldn’t be written on a sticky note stuck to the monitor. I’ve never personally experienced this. It’s much more typical for passwords to be written under the keyboard or mousepad.
Another aspect to consider is notes left on desks, wall calendars, and whiteboards. Even seemingly insignificant information can be quite valuable. Since it’s rather difficult to decide what truly needs protecting and what doesn’t, the general rule is: all notes should be stored securely at the end of the workday, inaccessible to outsiders. Of course, this only works if lockable storage space is available. In sensitive sectors like banking and insurance, the policy even goes so far as to prohibit colleagues from entering their vacation dates on wall calendars.
Of course, these considerations also include your own wastebasket. It’s essential to ensure that confidential documents are disposed of in specially secured containers. Otherwise, the entire effort to maintain confidentiality becomes pointless if you can simply pull them out of the trash after work.
But the virtual desktop is also part of the Clean Desk Policy. Especially in times of video conferences and remote work, strangers can easily catch a glimpse of your workspace. This reminds me of my lecture days, when a professor had several shortcuts to the trash on his desktop: separate trash folders for Word files, Excel files, and so on. We always joked that he was recycling.
The Clean Desk Policy has other effects as well. It’s much more than just a security concept. Employees who consistently implement this policy also bring more order to their thoughts and can thus work through tasks one by one with greater focus, leading to improved performance. Personal daily planning is usually structured so that all started tasks can be completed by the end of the workday. This is similar to the trades. Tradespeople also try to complete their jobs by the end of the workday to avoid having to return for a short time the next day. A considerable amount of time is spent on preparation.
Implementing a Clean Desk Policy follows the three Ps (Plan, Protect & Pick). At the beginning of the day, employees decide which tasks need to be completed (Plan), and select the corresponding documents and necessary materials for easy access. At the end of the day, everything is securely stored. During working hours, it must also be ensured that no unauthorized persons have access to information, for example, during breaks. This daily, easy-to-implement routine of preparation and follow-up quickly becomes a habit, and the time required can be reduced to just a few minutes, so that hardly any work time is wasted.
With a Clean Desk Policy, the overwhelming piles of paper disappear from your desk, and by considering which tasks need to be completed each day, you can focus better on them, which significantly improves productivity. At the end of the day, you can also mentally cross some items off your to-do list, leading to greater satisfaction.
Spring Cleaning for Docker
Anyone interested in this somewhat specialized article doesn’t need an explanation of what Docker is and what this virtualization tool is used for. Therefore, this article is primarily aimed at system administrators, DevOps engineers, and cloud developers. For those who aren’t yet completely familiar with the technology, I recommend our Docker course: From Zero to Hero.
In a scenario where we regularly create new Docker images and instantiate various containers, our hard drive is put under considerable strain. Depending on their complexity, images can easily reach several hundred megabytes to gigabytes in size. To prevent creating new images from feeling like downloading a three-minute MP3 with a 56k modem, Docker uses a build cache. However, if there’s an error in the Dockerfile, this build cache can become quite bothersome. Therefore, it’s a good idea to clear the build cache regularly. Old container instances that are no longer in use can also lead to strange errors. So, how do you keep your Docker environment clean?
While docker rm <container-name> and docker rmi <image-id> will certainly get you quite far, in build environments like Jenkins or in server clusters this strategy can become a time-consuming and tedious task. But first, let’s get an overview of the overall situation. The command docker system df will help us with this.
root:/home# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 15 9 5.07GB 2.626GB (51%)
Containers 9 7 11.05MB 5.683MB (51%)
Local Volumes 226 7 6.258GB 6.129GB (97%)
Build Cache 0 0 0B 0B

Before I delve into the details, one important note: the commands presented are very efficient and will irrevocably delete the corresponding resources. Therefore, try these commands in a test environment before using them on production systems. Furthermore, I’ve found it helpful to also keep the commands for instantiating containers under version control in a text file.
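The sample output above can also be evaluated in a script, for example to alert when too much reclaimable space accumulates. A small sketch that parses the plain-table format shown above (newer Docker releases can also emit machine-readable output directly; this parser is an assumption tied to the column layout printed here):

```python
def parse_size(token: str) -> float:
    """Convert a Docker size token like '2.626GB' or '0B' to bytes."""
    units = {"B": 1, "KB": 1e3, "MB": 1e6, "GB": 1e9, "TB": 1e12}
    for suffix in ("TB", "GB", "MB", "KB", "B"):
        if token.endswith(suffix):
            return float(token[:-len(suffix)]) * units[suffix]
    raise ValueError(f"unrecognized size: {token}")

def reclaimable_bytes(df_output: str) -> float:
    """Sum the RECLAIMABLE column of `docker system df` table output."""
    total = 0.0
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        tokens = line.split()
        if tokens[-1].endswith("%)"):                # drop a trailing '(51%)'
            tokens = tokens[:-1]
        total += parse_size(tokens[-1])
    return total
```

Feeding the `docker system df` table into `reclaimable_bytes` returns the total number of bytes a cleanup could free.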
The most obvious step in a Docker system cleanup is deleting unused containers. Specifically, this means that the delete command permanently removes all Docker container instances that are not running (i.e., not active). If you want to start with a clean slate on a Jenkins build node before a deployment, you can first stop all containers running on the machine with a single command and then remove the stopped instances:

docker stop $(docker ps -q)
docker container prune -f
The -f parameter suppresses the confirmation prompt, making it ideal for automated scripts. Deleting containers frees up relatively little disk space; the main resource drain comes from downloaded images, which can also be removed with a single command. However, before images can be deleted, it must first be ensured that they are not in use by any containers, even inactive ones. Removing unused containers offers another practical advantage: it releases the ports blocked by those containers. A port in a host environment can only be bound to one container at a time, which can quickly lead to error messages. Therefore, we extend our cleanup to delete all Docker images not currently used by any container:

docker image prune -a -f
Another consequence of our efforts concerns Docker layers: for performance reasons, especially in CI environments, you should avoid letting unused ones accumulate. Docker volumes, on the other hand, are less problematic. When volumes are removed, only the references in Docker are deleted; bind-mounted folders and files on the host remain unaffected (data inside named volumes, however, is removed with them). The -a parameter extends the deletion from anonymous volumes to all unused volumes.
docker volume prune -a -f
Another area affected by our cleanup efforts is the build cache. Especially if you’re experimenting with creating new Dockerfiles, it can be very useful to manually clear the cache from time to time. This prevents incorrectly created layers from persisting in the builds and causing unusual errors later in the instantiated container. The corresponding command is:
docker buildx prune -f
The most radical option is to release all unused resources. There is also an explicit shell command for this.
docker system prune -a --volumes -f
We can, of course, also use the commands just presented in CI build environments like Jenkins or GitLab CI. However, this might not necessarily lead to the desired result. A proven approach for Continuous Integration/Continuous Deployment is to set up your own Docker registry to which you deploy your self-built images. This provides a good backup and caching mechanism for the Docker images in use. Once correctly created, images can be conveniently deployed to different server instances over the local network without having to be rebuilt locally each time. It also pays off to use a build node specifically optimized for Docker images and containers, so that the created images can be tested thoroughly before use. Even on cloud instances like Azure and AWS, you should prioritize good performance and resource efficiency, because costs can quickly escalate and seriously disrupt a stable project.
In this article, we have seen that in-depth knowledge of the tools used offers several opportunities for cost savings. The motto “We do it because we can” is particularly unhelpful in a commercial environment and can quickly degenerate into an expensive waste of resources.
03. Release Management & Process Automation
Cookbook: Maven Source Code Samples
Our Git repository contains an extensive collection of various code examples for Apache Maven projects. Everything is clearly organized by topic.
Back to table of contents: Apache Maven Master Class
- Token Replacement
- Compiler Warnings
- Executable JAR Files
- Enforcements
- Unit & Integration Testing
- Multi Module Project (JAR / WAR)
- BOM – Bill Of Materials (Dependency Management)
- Running ANT Tasks
- License Header – Plugin
- OWASP
- Profiles
- Maven Wrapper
- Shade Uber JAR (Plugin)
- Java API Documentation (JavaDoc)
- Java Sources & Test Case packaging into JARs
- Docker
- Assemblies
- Maven Reporting (site)
- Flatten a POM
- GPG Signer
Apache Maven Master Class
Apache Maven (Maven for short) was first released on March 30, 2002, as an Apache Top-Level Project under the free Apache 2.0 License. This license also allows free use by companies in a commercial environment without paying license fees.
The word Maven comes from Yiddish and means something like “collector of knowledge.”
Maven is a pure command-line program developed in the Java programming language. It belongs to the category of build tools and is primarily used in Java software development projects. In the official documentation, Maven describes itself as a project management tool, as its functions extend far beyond creating (compiling) binary executable artifacts from source code. Maven can be used to generate quality analyses of program code and API documentation, to name just a few of its diverse applications.
Benefits
- Access to all subscription articles
- Accessing the Maven Sample Git Repository
- Regular updates
- Email support
- Regular live video FAQs
- Video workshops
- Submit your own topic suggestions
- 30% discount on Maven training
Target groups
This online course is suitable for both beginners with no prior knowledge and experienced experts. Each lesson is self-contained and can be individually selected. Extensive supplementary material explains concepts and is supported by numerous references. This allows you to use the Apache Maven Master Class course as a reference. New content is continually being added to the course. If you choose to become an Apache Maven Master Class member, you will also have full access to exclusive content.

Developer
- Maven Basics
- Maven on the Command Line
- IDE Integration
- Archetypes: Creating Project Structures
- Test Integration (TDD & BDD) with Maven
- Test Containers with Maven
- Multi-Module Projects for Microservices

Build Manager / DevOps
- Release Management with Maven
- Deploy to Maven Central
- Sonatype Nexus Repository Manager
- Maven Docker Container
- Creating Docker Images with Maven
- Encrypted Passwords
- Process & Build Optimization

Quality Manager
- Maven Site – The Reporting Engine
- Determine and evaluate test coverage
- Static code analysis
- Review coding style specifications
References & Literature
Index & Abbreviations
Return to the table of contents: Apache Maven Master Class
A
- Agile Manifesto
- (Apache) Ant
- Archetypes
B
C
D
- Deploy
- Docker Maven Image
- DRY – Don’t repeat yourself [2]
- DSL – Domain Specific Language
E
- EAR – Enterprise Archive
- EJB – Enterprise Java Beans
- Enforcer Plugin
- EoL – End of Life
F
G
- GAV Parameter [2]
- Goal [2]
- GPG – GNU Privacy Guard
H
I
- IDE – Integrated Development Environment
- Installation (Linux / Windows)
- Incremental Builds
- (Apache) Ivy
J
- JAR – Java Archive
- jarsigner Plugin
- JDK – Java Development Kit
K
- Keytool (Java)
- KISS – Keep it simple, stupid.
- Cookbook: Maven Code Examples
L
M
N
O
- OWASP – Open Web Application Security Project [2]
P
- Plugin
- Production Candidate
- POM – Project Object Model
- Maven Properties
- Process
Q
R
S
T
- target directory
- Token Replacement
U
V
W
- WAR – Web Archive
X
Y
Z
![jConf Peru 2021 [2]](https://elmar-dott.com/wp-content/uploads/2021-JCONF-Peru-Deploy-MVN-Central.webp)