01. Fundamentals
Index & Abbreviations
A
- Agile Manifesto
- (Apache) Ant
- Archetypes
B
C
D
- Deploy
- Docker Maven Image
- DRY – Don’t repeat yourself [2]
- DSL – Domain Specific Language
E
- EAR – Enterprise Archive
- EJB – Enterprise Java Beans
- Enforcer Plugin
- EoL – End of Life
F
G
- GAV Parameter [2]
- Goal [2]
- GPG – GNU Privacy Guard
H
I
- IDE – Integrated Development Environment
- Installation (Linux / Windows)
- Incremental builds
- (Apache) Ivy
J
- JAR – Java Archive
- jarsigner Plugin
- JDK – Java Development Kit
K
- Keytool (Java)
- KISS – Keep it simple, stupid.
- Cookbook: Maven Source Code Samples
L
M
N
O
- OWASP – Open Web Application Security Project [2]
P
- Plugin
- Production Candidate
- POM – Project Object Model
- Maven Properties
- Process
Q
R
S
T
- target directory
- Token Replacement
U
V
W
- WAR – Web Archive
X
Y
Z
Return to the table of contents: Apache Maven Master Class
Apache Maven Master Class
Apache Maven (Maven for short) was first released on March 30, 2002, as an Apache Top-Level Project under the free Apache 2.0 License. This license also allows free use by companies in a commercial environment without paying license fees.
The word Maven comes from Yiddish and means something like “accumulator of knowledge.”
Maven is a pure command-line program developed in the Java programming language. It belongs to the category of build tools and is primarily used in Java software development projects. In the official documentation, Maven describes itself as a project management tool, as its functions extend far beyond creating (compiling) binary executable artifacts from source code. Maven can be used to generate quality analyses of program code and API documentation, to name just a few of its diverse applications.
Benefits
- Access to all subscription articles
- Access to the Maven sample Git repository
- Regular updates
- Email support
- Regular live video FAQs
- Video workshops
- Submit your own topic suggestions
- 30% discount on Maven training
Target groups
This online course is suitable for both beginners with no prior knowledge and experienced experts. Each lesson is self-contained and can be individually selected. Extensive supplementary material explains concepts and is supported by numerous references. This allows you to use the Apache Maven Master Class course as a reference. New content is continually being added to the course. If you choose to become an Apache Maven Master Class member, you will also have full access to exclusive content.

Developer
- Maven Basics
- Maven on the Command Line
- IDE Integration
- Archetypes: Creating Project Structures
- Test Integration (TDD & BDD) with Maven
- Test Containers with Maven
- Multi-Module Projects for Microservices

Build Manager / DevOps
- Release Management with Maven
- Deploy to Maven Central
- Sonatype Nexus Repository Manager
- Maven Docker Container
- Creating Docker Images with Maven
- Encrypted Passwords
- Process & Build Optimization

Quality Manager
- Maven Site – The Reporting Engine
- Determine and evaluate test coverage
- Static code analysis
- Review coding style specifications
Cookbook: Maven Source Code Samples
Our Git repository contains an extensive collection of various code examples for Apache Maven projects. Everything is clearly organized by topic.
Back to table of contents: Apache Maven Master Class
- Token Replacement
- Compiler Warnings
- Executable JAR Files
- Enforcements
- Unit & Integration Testing
- Multi-Module Project (JAR / WAR)
- BOM – Bill Of Materials (Dependency Management)
- Running Ant Tasks
- License Header – Plugin
- OWASP
- Profiles
- Maven Wrapper
- Shade Uber JAR (Plugin)
- Java API Documentation (JavaDoc)
- Java Sources & Test Case packaging into JARs
- Docker
- Assemblies
- Maven Reporting (site)
- Flatten a POM
- GPG Signer
Featureitis
You don’t have to be a software developer to recognize a good application. But from my own experience, I’ve often seen programs that were promising and innovative at the start mutate into unwieldy behemoths once they reach a certain number of users. Since I’ve been making this observation regularly for several decades now, I’ve wondered what the reasons for this might be.
The phenomenon of software programs, or solutions in general, becoming overloaded with details was termed “featuritis” by Brooks in his classic book, “The Mythical Man-Month.” Considering that the first edition of the book was published in 1975, it’s fair to say that this is a long-standing problem. Perhaps the most famous example of featuritis is Microsoft’s Windows operating system. Of course, there are countless other examples of improvements that make things worse.
Windows users who were already familiar with Windows XP and were then confronted with its wonderful successor Vista, only to be appeased again by Windows 7, nearly had a heart attack with Windows 8 and 8.1, and were calmed down once more at the beginning of Windows 10. At least for a short time, until the forced updates quickly brought them back down to earth. And don’t even get me started on Windows 11. The old saying about Windows was that every other version is junk and should be skipped; well, that hasn’t been true since Windows 7. For me, Windows 10 was the deciding factor in abandoning Microsoft completely, and like many others, I switched to a new operating system. Some moved to Apple, and those who couldn’t afford or didn’t want the expensive hardware, like me, opted for a Linux system. This shows how a lack of insight can quickly cost significant market share. Since Microsoft isn’t drawing any conclusions from these developments, this fact seems to be of little concern to the company. For other companies, such events can quickly push them to the brink of collapse, and beyond.
One motivation for adding more and more features to an existing application is the so-called product life cycle, which is represented by the BCG matrix in Figure 1.

With a product’s launch, it’s not yet certain whether the market will accept it (the question mark). If users embrace it, it quickly rises to a star and reaches its maximum market position as a cash cow. Once market saturation is reached, it degrades to a poor dog, a slow seller. So far, so good. Unfortunately, the prevailing management view is that if no growth is generated compared to the previous quarter, market saturation has already been reached. This leads to the nonsensical assumption that users must be forced to accept an updated version of the product every year. Of course, the only way to motivate a purchase is to print a bulging list of new features on the packaging.
Since well-designed features can’t simply be churned out on an assembly line, a redesign of the graphical user interface is thrown in as a free bonus every time. Ultimately, this gives the impression of having something completely new, as it requires a period of adjustment to discover the new placement of familiar functions. It’s not as if the redesign actually streamlines the user experience or increases productivity. The arrangement of input fields and buttons always seems haphazardly thrown together.
But don’t worry, I’m not calling for an update boycott; I just want to talk about how things can be improved. Because one thing is certain: thanks to artificial intelligence, the market for software products will change dramatically in just a few years. I don’t expect complex and specialized applications to be produced by AI algorithms anytime soon. However, I do expect plenty of poorly generated AI code sequences, which the developers themselves don’t understand, to be injected into codebases, leading to unstable applications. This is why I’m refocusing on clean, handcrafted, efficient, and reliable software, because I’m sure there will always be a market for it.
I simply don’t want an internet browser that has mutated into a communication hub, offering chat, email, cryptocurrency payments, and who knows what else, in addition to simply displaying web pages. I want my browser to start quickly when I click something, then respond quickly and display website content correctly and promptly. If I ever want to do something else with my browser, it would be nice if I could actively enable this through a plugin.
Now, regarding the problem just described, the argument is often made that the many features are intended to reach a broad user base. Especially if an application has all possible options enabled from the start, it quickly engages inexperienced users who don’t have to first figure out how the program actually works. I can certainly understand this reasoning. It’s perfectly fine for a manufacturer to focus exclusively on inexperienced users. However, there is a middle ground that considers all user groups equally. This solution isn’t new and is very well-known: the so-called product lines.
In the past, manufacturers always defined target groups such as private individuals, businesses, and experts. These user groups were then often assigned product names like Home, Enterprise, and Ultimate. This led to everyone wanting the Ultimate version. This phenomenon is called Fear Of Missing Out (FOMO). Therefore, the names of the product groups and their assigned features are psychologically poorly chosen. So, how can this be done better?
An expert focuses their work on specific core functions that allow them to complete tasks quickly and without distractions. For me, this implies product lines like Essentials, Pure, or Core.
If the product is then intended for use by multiple people within the company, it often requires additional features such as external user management like LDAP or IAM. This specialized product line is associated with terms like Enterprise, Company, Business, and so on.
The cluttered end result, actually intended for noobs, has all sorts of things already activated during installation. If people don’t care about the application’s startup and response times, then go for it. Go all out. Throw in everything you can! Here, names like Ultimate, Full, and Maximized Extended are suitable for labeling the product line. The only important thing is that professionals recognize this as the cluttered version.
Those who cleverly manage these product lines and provide as many functions as possible via so-called modules, which can be installed later, enable high flexibility even in expert mode, where users might appreciate the occasional additional feature.
If you install tracking on the module system beforehand to determine how professional users upgrade their version, you’ll already have a good idea of what could be added to the new version of Essentials. However, you shouldn’t rely solely on downloads as the decision criterion for this tracking. I often try things out myself and delete extensions faster than the installation process took if I think they’re useless.
I’d like to give a small example from the DevOps field to illustrate the problem I just described. There’s the well-known GitLab, which was originally a pure code repository hosting project. The name still reflects this today. An application that requires 8 GB of RAM on a server in its basic installation just to make a Git repository accessible to other developers is unusable for me, because this software has become a jack-of-all-trades over time. Slow, inflexible, and cluttered with all sorts of unnecessary features that are better implemented using specialized solutions.
In contrast to GitLab, there’s another, less well-known solution called SCM-Manager, which focuses exclusively on managing code repositories. I personally use and recommend SCM-Manager because its basic installation is extremely compact. Despite this, it offers a vast array of features that can be added via plugins.
I tend to be suspicious of solutions that claim to be all-in-one. To me, that always amounts to the same thing: trying to do everything and mastering nothing. There’s no such thing as a jack-of-all-trades, or as we say in Austria, a miracle worker!
When selecting programs for my workflow, I focus solely on their core functionality. Are the basic features promised by the marketing truly present and as intuitive as possible? Is there comprehensive documentation that goes beyond a simple “Hello World”? Does the developer focus on continuously optimizing core functions and consider new, innovative concepts? These are the questions that matter to me.
Especially in commercial environments, programs are often used that don’t deliver on their marketing promises. Instead of choosing what’s actually needed to complete tasks, companies opt for applications whose descriptions are crammed with buzzwords. That’s why I believe that companies that refocus on their core competencies and use highly specialized applications for them will be the winners of tomorrow.
Mismanagement and Alpha Geeks
When I recently picked up Camille Fournier’s book “The Manager’s Path,” I was immediately reminded of Tom DeMarco. He wrote the classic “Peopleware” and later co-authored “Adrenaline Junkies and Template Zombies.” It’s a catalog of stereotypes you might encounter in software projects, with advice on how to deal with them. After several decades in the business, I can confirm every single word from my own experience. And it’s still relevant today, because people are the ones who make projects, and we all have our quirks.
For projects to be successful, it’s not just technical challenges that need to be overcome. Interpersonal relationships also play a crucial role. One important factor in this context, which often receives little attention, is project management. There are shelves full of excellent literature on how to become a good manager. The problem, unfortunately, is that few who hold this position actually fulfill it, and even fewer are interested in developing their skills further. The result of poor management is worn-down and stressed teams, extreme pressure in daily operations, and often also delayed delivery dates. It’s no wonder, then, that this impacts product quality.
One of the first sayings I learned in my professional life was: “Anyone who thinks a project manager actually manages projects also thinks a brimstone butterfly folds lemons.” (In German, the brimstone butterfly is a Zitronenfalter, literally a “lemon folder,” hence the pun.) So it seems to be a very old piece of wisdom. But what is the real problem with poor management? Anyone who has to fill a managerial position has a duty to thoroughly examine the candidate’s skills and suitability. It’s easy to be impressed by empty phrases and a list of big industry names on a CV without questioning actual performance. In my experience, I’ve primarily encountered project managers who lacked the technical expertise needed to make important decisions. It wasn’t uncommon for managers in IT projects to dismiss me with the words, “I’m not a technician, sort this out amongst yourselves.” This is obviously disastrous when the person who is supposed to make the decisions can’t make them for lack of knowledge. An IT project manager doesn’t need to know which algorithm terminates faster; evaluations can be used to inform such decisions. However, a basic understanding of programming is essential. Anyone who doesn’t know what an API is, or why incompatible versions prevent modules that will later be combined into a software product from working together, has no business acting as a decision-maker. A fundamental understanding of software development processes and the programming paradigms in use is likewise indispensable for project managers who don’t work directly with the code.
I therefore advocate for vetting not only the developers you hire for their skills, but also the managers who are to be brought into a company. For me, external project management is an absolute no-go when selecting my projects. This almost always leads to chaos and frustration for everyone involved, which is why I reject such projects. Managers who are not integrated into the company and whose performance is evaluated based on project success, in my experience, do not deliver high-quality work. Furthermore, internal managers, just like developers, can develop and expand their skills through mentoring, training, and workshops. The result is a healthy, relaxed working environment and successful projects.
The title of this article points to toxic stereotypes in the project business. I’m sure everyone has encountered one or more of these stereotypes in their professional environment. There’s a lot of discussion about how to deal with these individuals. However, I would like to point out that hardly anyone is born a “monster.” People are the way they are, a result of their experiences. If a colleague learns that looking stressed and constantly rushed makes them appear more productive, they will perfect this behavior over time.
Camille Fournier aptly described this with the term “The Alpha Geek.” Someone who has made their role in the project indispensable and has an answer for everything. They usually look down on their colleagues with disdain, but can never truly complete a task without others having to redo it. Unrealistic estimates for extensive tasks are just as typical as downplaying complex issues. Naturally, this is the darling of all project managers who wish their entire team consisted of these “Alpha Geeks.” I’m quite certain that if this dream could come true, it would be the ultimate punishment for the project managers who create such individuals in the first place.
To avoid cultivating “alpha geeks” within your company, it’s essential to prevent personality cults and avoid elevating personal favorites above the rest of the team. Naturally, it’s also crucial to constantly review work results. Anyone who marks a task as completed but requires rework should be reassigned until the result is satisfactory.
Personally, I share Tom DeMarco’s view on the dynamics of a project. While productivity can certainly be measured by the number of tasks completed, other factors also play a crucial role. My experience has taught me that, as mentioned earlier, it’s essential to ensure employees complete all tasks carefully and thoroughly before taking on new ones. Colleagues who downplay a task or offer unrealistically low estimates should be assigned that very task. Furthermore, there are colleagues who, while having relatively low output, contribute significantly to team harmony.
When I talk about people who build a healthy team, I don’t mean those who simply hand out sweets every day. I’m referring to those who possess valuable skills and mentor their colleagues. These individuals typically enjoy a high level of trust within the team, which is why they often achieve excellent results as mediators in conflicts. It’s not the people who try to be everyone’s darling with false promises, but rather those who listen and take the time to find a solution. They are often the go-to people for everything and frequently have a quiet, unassuming demeanor. Because they have solutions and often lend a helping hand, they themselves receive only average performance ratings in typical process metrics. A good manager quickly recognizes these individuals because they are generally reliable. They are balanced and appear less stressed because they proceed calmly and consistently.
Of course, much more could be said about the stereotypes in a software project, but I think the points already made provide a good basic understanding of what I want to express. An experienced project manager can address many of the problems described as they arise. This naturally requires solid technical knowledge and some interpersonal skills.
Of course, we must also be aware that experienced project managers don’t just appear out of thin air. They need to be developed and supported, just like any other team member. This definitely includes rotations through all technical departments, such as development, testing, and operations. Paradigms like pair programming are excellent for this. The goal isn’t to turn a manager into a programmer or tester, but rather to give them an understanding of the daily processes. This also strengthens confidence in the skills of the entire team, and mentalities like “you have to control and push lazy and incompetent programmers to get them to lift a finger” don’t even arise. In projects that consistently deliver high quality and meet their deadlines, there’s rarely a desire to introduce every conceivable process metric.
Blockchain – an introduction
The blockchain concept is a fundamental component of various crypto payment methods such as Bitcoin and Ethereum. But what exactly is blockchain technology, and what other applications does this concept have? Essentially, blockchain is structured like a backward-linked list. Each element in the list points to its predecessor. So, what makes blockchain so special?
Blockchain extends the list concept by adding various constraints. One of these constraints is ensuring that no element in the list can be altered or removed. This is relatively easy to achieve using a hash function. We encode the content of each element in the list into a hash using a hash algorithm. A wide range of hash functions is available today, with SHA-512 being a current standard. This hash algorithm is implemented in the standard library of almost every programming language and is easy to use. Specifically, this means that the SHA-512 hash is generated from all the data in a block. This hash is in practice unique (collisions are astronomically unlikely), so it serves as an identifier (ID) to locate a block. One entry in the block is a reference to its predecessor: the hash value, i.e., the ID, of the preceding block. When implementing a blockchain, it is essential that the hash value of the predecessor is included in the calculation of the hash value of the current block. This detail ensures that elements in the blockchain can only be modified with great difficulty: to manipulate the element one wishes to alter, all subsequent elements must also be changed. In a blockchain with a very large number of blocks, such an undertaking entails an enormous computational effort that is very difficult, if not impossible, to accomplish.
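To make this chaining concrete, here is a minimal sketch in Java (the Block class and its field names are my own illustration, not taken from any library; Java 17+ is assumed for HexFormat). The predecessor’s hash flows into the hash of the current block, and all fields are final, which already anticipates the Immutable pattern mentioned further below:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Immutable block: all fields are final, so an instance cannot be altered
// after creation (the Immutable pattern mentioned later in this article).
public final class Block {

    private final String previousHash; // ID (hash) of the predecessor block
    private final String payload;      // arbitrary data secured by the chain
    private final long timestamp;      // creation time in epoch millis (UTC)
    private final String hash;         // SHA-512 over all fields above = ID

    public Block(String previousHash, String payload, long timestamp) {
        this.previousHash = previousHash;
        this.payload = payload;
        this.timestamp = timestamp;
        this.hash = computeHash();
    }

    // The predecessor's hash is part of the input, so altering any earlier
    // block invalidates the hash of every block that follows it.
    private String computeHash() {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-512");
            String content = previousHash + timestamp + payload;
            byte[] raw = digest.digest(content.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(raw);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-512 not available", e);
        }
    }

    public String getHash()         { return hash; }
    public String getPreviousHash() { return previousHash; }
    public String getPayload()      { return payload; }
    public long getTimestamp()      { return timestamp; }
}
```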
This chaining provides us with a complete transaction history, which also explains why crypto payment methods are not anonymous. The effort required to uniquely identify a transaction participant can still be enormous, and if the participant additionally uses various obfuscation methods with different wallets that are not linked by other transactions, that effort grows exponentially.
Of course, the mechanism just described still has significant weaknesses. Transactions, i.e., newly added blocks, can only be considered verified and secure once enough successors have been appended to the blockchain to make changes impractical. For Bitcoin and similar cryptocurrencies, a transaction is considered secure once five subsequent blocks have been added.
To avoid a single entity storing the transaction history (that is, all the blocks of the blockchain), a decentralized approach comes into play: there is no central server acting as an intermediary. Such a central server could be manipulated by its operator, and with sufficient computing power this would allow even very large blockchains to be rebuilt. In the context of cryptocurrencies, this is referred to as a chain reorganization. This is also the criticism leveled at many cryptocurrencies; apart from Bitcoin, no other decentralized and independent cryptocurrency exists. If the blockchain, with all its contained elements, is made public and each user holds their own local instance of it, to which they can add elements that are then synchronized with all other instances, then we have a decentralized approach.
The technology for decentralized communication without an intermediary is called peer-to-peer (P2P). P2P networks are particularly vulnerable in their early stages, when there are only a few participants. With a great deal of computing power, one could easily create a large number of so-called zombie peers that influence the network’s behavior. Especially in times when cloud providers like AWS and Google Cloud Platform offer virtually unlimited resources for relatively little money, this is a significant problem. This point should not be overlooked, particularly when there is a high financial incentive for fraudsters.
There are also various competing concepts within P2P. To implement a stable and secure blockchain, it is necessary to use only solutions that do not require supporting backbone servers. The goal is to prevent the establishment of a master chain. Therefore, questions must be answered regarding how individual peers can find each other and which protocol they use to synchronize their data. By protocol, we mean a set of rules, a fixed framework for how interaction between peers is regulated. Since this point is already quite extensive, I refer you to my 2022 presentation for an introduction to the topic.
Another feature of blockchain blocks is that their validity can be verified easily and quickly. This simply requires generating the SHA-512 hash over the entire content of a block: if this hash matches the block’s ID, the block is valid. Time-sensitive or time-critical transactions, such as those relevant to payment systems, can also be created with minimal effort; no complex time servers are needed as intermediaries. Each block is appended with a timestamp. This timestamp must, however, take into account the location where it is created, i.e., specify the time zone. To obscure the location of the transaction participants, all times from the different time zones can be converted to UTC.
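Building on the hypothetical Block sketch above (its getter names are my assumption), validation then amounts to recomputing the SHA-512 hash over the block’s content and comparing it with the stored ID; taking timestamps from java.time.Instant yields UTC-based values from the start:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.time.Instant;
import java.util.HexFormat;

public final class BlockValidator {

    // A block is valid if the SHA-512 hash recomputed from its entire
    // content matches the stored ID.
    public static boolean isValid(Block block) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-512");
            String content = block.getPreviousHash()
                    + block.getTimestamp()
                    + block.getPayload();
            byte[] raw = digest.digest(content.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(raw).equals(block.getHash());
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-512 not available", e);
        }
    }

    // Instant is always UTC-based, so no local time zone leaks into blocks.
    public static long utcTimestamp() {
        return Instant.now().toEpochMilli();
    }
}
```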
To ensure that the time is correctly set on the system, a time server can be queried for the current time when the software starts, and a correction message can be displayed if there are any discrepancies.
Of course, time-critical transactions are subject to a number of challenges. It must be ensured that a transaction was carried out within a defined time window. This is a problem that so-called real-time systems have to deal with. The double-spending problem also needs to be prevented—that is, the same amount being sent twice to different recipients. In a decentralized network, this requires confirmation of the transaction by multiple participants. Classic race conditions can also pose a problem. Race conditions can be managed by applying the Immutable design pattern to the block elements.
To prevent the blockchain from being disrupted by spam attacks, we need a solution that makes creating a single block expensive. We achieve this by incorporating computing power: the participant creating a block must solve a puzzle that requires a certain amount of computing time. If a spammer wants to flood the network with many blocks, the required computing effort grows exorbitantly, making it impossible to generate an unlimited number of blocks in a short time. The key to this cryptographic puzzle is called a nonce, short for “number used only once.” The nonce mechanism in the blockchain is also often referred to as Proof of Work (PoW) and is used in Bitcoin by the miners to verify blocks.
The nonce is a (pseudo)random number from which a hash is generated. This hash must then meet certain criteria, for example, two or three leading zeros. To prevent arbitrary hashes from being inserted into the block, the random number that solves the puzzle is stored directly in the block. A nonce that has already been used cannot be used again, as this would circumvent the puzzle. When the hash is generated from the nonce, it must meet the requirements, such as the leading zeros, to be accepted.
Since finding a valid nonce becomes increasingly difficult as the number of blocks in a blockchain grows, it is necessary to change the rules for such a nonce cyclically, for example, every 2048 blocks. This also means that the rules for a valid nonce must be assigned to the corresponding blocks. Such a set of rules for the nonce can easily be formulated using a regular expression (regex).
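A proof-of-work loop based on these rules might look like the following sketch (the difficulty pattern of two leading zeros and the class names are illustrative assumptions, not Bitcoin’s actual rules):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.HexFormat;
import java.util.regex.Pattern;

public final class ProofOfWork {

    // Illustrative difficulty rule: the hash of the block content plus
    // nonce must start with two zeros. Real chains adjust such rules
    // cyclically, e.g. every 2048 blocks.
    private static final Pattern RULE = Pattern.compile("^00.*");

    private final SecureRandom random = new SecureRandom();

    // Search for a nonce whose SHA-512 hash satisfies the current rule.
    // The found nonce is stored in the block so everyone can re-check it.
    public long findNonce(String blockContent) {
        while (true) {
            long nonce = random.nextLong();
            if (RULE.matcher(hash(blockContent + nonce)).matches()) {
                return nonce;
            }
        }
    }

    private static String hash(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-512");
            return HexFormat.of().formatHex(
                    digest.digest(input.getBytes(StandardCharsets.UTF_8)));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-512 not available", e);
        }
    }
}
```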
We’ve now learned a considerable amount about the ruleset for a blockchain, so it’s time to consider performance. If we simply kept all the individual blocks of the blockchain in a single list in memory, we would quickly run out of space. While it’s possible to store the blocks in a local database, this would hurt the blockchain’s speed, even with an embedded solution like SQLite. A simple solution is to divide the blockchain into equal parts, called chunks, as the sketch below illustrates. A chunk would have a fixed length of 2048 valid blocks, and the first block of a new chunk would point to the last block of the previous chunk. Each chunk could also contain a central rule for the nonce and store metadata such as minimum and maximum timestamps.
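Such a chunk could be sketched as a simple container that links back to the previous chunk, carries its own nonce rule, and maintains timestamp metadata (all names are again illustrative, reusing the hypothetical Block class from above):

```java
import java.util.ArrayList;
import java.util.List;

public final class Chunk {

    public static final int CHUNK_SIZE = 2048; // fixed number of blocks

    private final String previousChunkLastHash; // link to the previous chunk
    private final String nonceRule;             // e.g. a regex such as "^00.*"
    private final List<Block> blocks = new ArrayList<>(CHUNK_SIZE);

    private long minTimestamp = Long.MAX_VALUE;  // metadata for fast lookups
    private long maxTimestamp = Long.MIN_VALUE;

    public Chunk(String previousChunkLastHash, String nonceRule) {
        this.previousChunkLastHash = previousChunkLastHash;
        this.nonceRule = nonceRule;
    }

    public boolean add(Block block) {
        if (blocks.size() >= CHUNK_SIZE) {
            return false; // chunk is full; a new chunk must be started
        }
        minTimestamp = Math.min(minTimestamp, block.getTimestamp());
        maxTimestamp = Math.max(maxTimestamp, block.getTimestamp());
        return blocks.add(block);
    }
}
```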
To briefly recap our current understanding of the blockchain ruleset, we’re looking at three different levels. The largest level is the blockchain itself, which contains fundamental metadata and configurations. Such configurations include the hash algorithm used. The second level consists of so-called chunks, which contain a defined set of block elements. As mentioned earlier, chunks also contain metadata and configurations. The smallest element of the blockchain is the block itself, which comprises an ID, the described additional information such as a timestamp and nonce, and the payload. The payload is a general term for any data object that is to be made verifiable by the blockchain. For Bitcoin and other cryptocurrencies, the payload is the information about the amount being transferred from Wallet A (source) to Wallet B (destination).

Blockchain technology is also suitable for many other application scenarios. For example, the hash values of open-source software artifacts could be stored in a blockchain. This would allow users to download binary files from untrusted sources and verify them against the corresponding blockchain. The same principle could be applied to the signatures of antivirus programs. Applications and other documents could also be transmitted securely in governmental settings. The blockchain would function as a kind of “postal stamp.” Accounting, including all receipts for goods and services purchased and sold, is another conceivable application.
Depending on the use case, an extension of the blockchain would be the unique signing of each block by its creator. This would utilize the classic PKI (Public Key Infrastructure) approach with public and private keys: the signer stores their public key in the block and uses their private key to create a signature over the payload, which is then also stored in the block.
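With the JDK’s standard java.security API, such signing could be sketched as follows; in a real system the key pair would come from a protected key store rather than being generated ad hoc, and the public key and signature would be stored in the block alongside the payload:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public final class BlockSigner {

    public static void main(String[] args) throws GeneralSecurityException {
        // Key pair of the block creator (illustrative; normally loaded
        // from a key store instead of generated on the fly).
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(4096);
        KeyPair keyPair = generator.generateKeyPair();

        String payload = "Wallet A -> Wallet B: 42";

        // Create the signature over the payload with the private key.
        Signature signer = Signature.getInstance("SHA512withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(payload.getBytes(StandardCharsets.UTF_8));
        byte[] signature = signer.sign();

        // Anyone can verify the signature using the public key stored
        // in the block.
        Signature verifier = Signature.getInstance("SHA512withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(payload.getBytes(StandardCharsets.UTF_8));
        System.out.println("signature valid: " + verifier.verify(signature));
    }
}
```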
Two freely available Java libraries currently exist for working with blockchains: BitcoinJ for Bitcoin and Web3j for Ethereum. Of course, it’s possible to create your own universally applicable blockchain implementation using the principles just described. The pitfalls, naturally, lie in the details, some of which I’ve already touched upon in this article. Fundamentally, however, blockchain isn’t rocket science and is quite manageable for experienced developers. Anyone considering trying their hand at their own implementation now has sufficient basic knowledge to delve deeper into the necessary details of the various technologies involved.