Featuritis

You don’t have to be a software developer to recognize a good application. But from my own experience, I’ve often seen programs that were promising and innovative at the start mutate into unwieldy behemoths once they reach a certain number of users. Since I’ve been making this observation regularly for several decades now, I’ve wondered what the reasons for this might be.

The phenomenon of software programs, or solutions in general, becoming overloaded with details was termed “featuritis” by Brooks in his classic book, “The Mythical Man-Month.” Considering that the first edition of the book was published in 1975, it’s fair to say that this is a long-standing problem. Perhaps the most famous example of featuritis is Microsoft’s Windows operating system. Of course, there are countless other examples of improvements that make things worse.

Windows users who were already familiar with Windows XP were then confronted with its wonderful successor Vista, appeased again by Windows 7, nearly suffered a heart attack with Windows 8 and 8.1, and were calmed down once more at the launch of Windows 10. At least for a short time, until the forced updates quickly brought them back down to earth. And don’t even get me started on Windows 11. The old saying about Windows was that every other version is junk and should be skipped. Well, that hasn’t been true since Windows 7. For me, Windows 10 was the deciding factor in abandoning Microsoft completely, and like many others, I looked for a new operating system. Some switched to Apple, and those who couldn’t afford or didn’t want the expensive hardware, like me, opted for a Linux system. This shows how a lack of insight can quickly lead to the loss of significant market share. Since Microsoft isn’t drawing any conclusions from these developments, this fact seems to be of little concern to the company. For other companies, such events can quickly push them to the brink of collapse, and beyond.

One motivation for adding more and more features to an existing application is the so-called product life cycle, which is represented by the BCG matrix in Figure 1.

With a product’s launch, it’s not yet certain whether it will be accepted by the market. If users embrace it, it quickly rises to stardom and reaches its maximum market position as a cash cow. Once market saturation is reached, it degrades to a slow seller. So far, so good. Unfortunately, the prevailing management view is that if no growth is generated compared to the previous quarter, market saturation has already been reached. This leads to the nonsensical assumption that users must be forced to accept an updated version of the product every year. Of course, the only way to motivate a purchase is to print a bulging list of new features on the packaging.

Since well-designed features can’t simply be churned out on an assembly line, a redesign of the graphical user interface is thrown in as a free bonus every time. Ultimately, this gives the impression of having something completely new, as it requires a period of adjustment to discover the new placement of familiar functions. It’s not as if the redesign actually streamlines the user experience or increases productivity. The arrangement of input fields and buttons always seems haphazardly thrown together.

But don’t worry, I’m not calling for an update boycott; I just want to talk about how things can be improved. Because one thing is certain: thanks to artificial intelligence, the market for software products will change dramatically within just a few years. I don’t expect complex and specialized applications to be produced by AI algorithms anytime soon. However, I do expect these applications to have enough AI-generated code sequences, which their developers don’t really understand, injected into their codebases to make them unstable. This is why I keep coming back to clean, handcrafted, efficient, and reliable software, because I’m sure there will always be a market for it.

I simply don’t want an internet browser that has mutated into a communication hub, offering chat, email, cryptocurrency payments, and who knows what else, in addition to simply displaying web pages. I want my browser to start quickly when I launch it, respond promptly, and display website content correctly. If I ever want to do something else with my browser, it would be nice if I could actively enable this through a plugin.

Now, regarding the problem just described, the argument is often made that the many features are intended to reach a broad user base. Especially if an application has all possible options enabled from the start, it quickly engages inexperienced users who don’t have to first figure out how the program actually works. I can certainly understand this reasoning. It’s perfectly fine for a manufacturer to focus exclusively on inexperienced users. However, there is a middle ground that considers all user groups equally. This solution isn’t new and is very well-known: the so-called product lines.

In the past, manufacturers always defined target groups such as private individuals, businesses, and experts. These user groups were then often assigned product names like Home, Enterprise, and Ultimate. This led to everyone wanting the Ultimate version. This phenomenon is called Fear Of Missing Out (FOMO). Therefore, the names of the product groups and their assigned features are psychologically poorly chosen. So, how can this be done better?

An expert focuses their work on specific core functions that allow them to complete tasks quickly and without distractions. For me, this implies product lines like Essentials, Pure, or Core.

If the product is then intended for use by multiple people within the company, it often requires additional features such as external user management like LDAP or IAM. This specialized product line is associated with terms like Enterprise, Company, Business, and so on.

The cluttered end result, actually aimed at noobs, has all sorts of things already activated during installation. If people don’t care about the application’s startup and response times, then go for it. Go all out. Throw in everything you can! Here, names like Ultimate, Full, and Maximized Extended are suitable for labeling the product line. The only important thing is that professionals recognize this as the cluttered version.

Those who cleverly manage these product lines and provide as many functions as possible via so-called modules, which can be installed later, enable high flexibility even in expert mode, where users might appreciate the occasional additional feature.

If you add tracking to the module system beforehand to determine how professional users upgrade their version, you’ll already have a good idea of what could be added to the next version of Essentials. However, you shouldn’t rely solely on downloads as the decision criterion for this tracking. I often try things out myself and delete extensions faster than the installation took if I think they’re useless.

I’d like to give a small example from the DevOps field to illustrate the problem I just described. There’s the well-known GitLab, which was originally a pure code repository hosting project. The name still reflects this today. An application that requires 8 GB of RAM on a server in its basic installation just to make a Git repository accessible to other developers is unusable for me, because this software has become a jack-of-all-trades over time. Slow, inflexible, and cluttered with all sorts of unnecessary features that are better implemented using specialized solutions.

In contrast to GitLab, there’s another, less well-known solution called SCM-Manager, which focuses exclusively on managing code repositories. I personally use and recommend SCM-Manager because its basic installation is extremely compact. Despite this, it offers a vast array of features that can be added via plugins.

I tend to be suspicious of solutions that claim to be all-in-one. To me, that always amounts to the same thing: trying to do everything and mastering nothing. There’s no such thing as a jack-of-all-trades, or as we say in Austria, a miracle worker!

When selecting programs for my workflow, I focus solely on their core functionality. Are the basic features promised by the marketing truly present and as intuitive as possible? Is there comprehensive documentation that goes beyond a simple “Hello World”? Does the developer focus on continuously optimizing core functions and consider new, innovative concepts? These are the questions that matter to me.

Especially in commercial environments, programs are often used that don’t deliver on their marketing promises. Instead of choosing what’s actually needed to complete tasks, companies opt for applications whose descriptions are crammed with buzzwords. That’s why I believe that companies that refocus on their core competencies and use highly specialized applications for them will be the winners of tomorrow.


Mismanagement and Alpha Geeks

When I recently picked up Camille Fournier’s book “The Manager’s Path,” I was immediately reminded of Tom DeMarco. He wrote the classic “Peopleware” and, in the late 2000s, co-authored “Adrenaline Junkies and Template Zombies.” It’s a catalog of stereotypes you might encounter in software projects, with advice on how to deal with them. After several decades in the business, I can confirm every single word from my own experience. And it’s still relevant today, because people are the ones who make projects, and we all have our quirks.

For projects to be successful, it’s not just technical challenges that need to be overcome. Interpersonal relationships also play a crucial role. One important factor in this context, which often receives little attention, is project management. There are shelves full of excellent literature on how to become a good manager. The problem, unfortunately, is that few who hold this position actually fulfill it, and even fewer are interested in developing their skills further. The result of poor management is worn-down and stressed teams, extreme pressure in daily operations, and often also delayed delivery dates. It’s no wonder, then, that this impacts product quality.

One of the first sayings I learned in my professional life was: “Anyone who thinks a project manager actually manages projects also thinks a lemon butterfly folds lemons” (a pun on the German “Zitronenfalter,” the brimstone butterfly, whose name literally reads as “lemon folder”). So it seems to be a very old piece of wisdom. But what is the real problem with poor management? Anyone who has to fill a managerial position has a duty to thoroughly examine the candidate’s skills and suitability. It’s easy to be impressed by empty phrases and a list of big industry names on a CV without questioning actual performance. In my experience, I’ve primarily encountered project managers who lacked the technical expertise needed to make important decisions. It wasn’t uncommon for managers in IT projects to dismiss me with the words, “I’m not a technician, sort this out amongst yourselves.” This is obviously disastrous when the person who’s supposed to make the decisions can’t make them because they lack the necessary knowledge. An IT project manager doesn’t need to know which algorithm terminates faster; evaluations can be used to inform such decisions. However, a basic understanding of programming is essential. Anyone who doesn’t know what an API is, or why version incompatibilities prevent modules that will later be combined into a software product from working together, has no right to act as a decision-maker. A fundamental understanding of software development processes and the programming paradigms used is also indispensable for project managers who don’t work directly with the code.

I therefore advocate for vetting not only the developers you hire for their skills, but also the managers who are to be brought into a company. For me, external project management is an absolute no-go when selecting my projects, because it almost always leads to chaos and frustration for everyone involved. Managers who are not integrated into the company and whose performance is measured only by project success do not, in my experience, deliver high-quality work. Furthermore, internal managers, just like developers, can develop and expand their skills through mentoring, training, and workshops. The result is a healthy, relaxed working environment and successful projects.

The title of this article points to toxic stereotypes in the project business. I’m sure everyone has encountered one or more of these stereotypes in their professional environment. There’s a lot of discussion about how to deal with these individuals. However, I would like to point out that hardly anyone is born a “monster.” People are the way they are, a result of their experiences. If a colleague learns that looking stressed and constantly rushed makes them appear more productive, they will perfect this behavior over time.

Camille Fournier aptly described this with the term “The Alpha Geek.” Someone who has made their role in the project indispensable and has an answer for everything. They usually look down on their colleagues with disdain, but can never truly complete a task without others having to redo it. Unrealistic estimates for extensive tasks are just as typical as downplaying complex issues. Naturally, this is the darling of all project managers who wish their entire team consisted of these “Alpha Geeks.” I’m quite certain that if this dream could come true, it would be the ultimate punishment for the project managers who create such individuals in the first place.

To avoid cultivating “alpha geeks” within your company, it’s essential to prevent personality cults and avoid elevating personal favorites above the rest of the team. Naturally, it’s also crucial to constantly review work results. Anyone who marks a task as completed even though it still requires rework should be given that task back until the result is satisfactory.

Personally, I share Tom DeMarco’s view on the dynamics of a project. While productivity can certainly be measured by the number of tasks completed, other factors also play a crucial role. My experience has taught me that, as mentioned earlier, it’s essential to ensure employees complete all tasks fully and thoroughly before taking on new ones. Colleagues who downplay a task or offer unrealistically low estimates should be assigned that very task. Furthermore, there are colleagues who, while having relatively low output, contribute significantly to team harmony.

When I talk about people who build a healthy team, I don’t mean those who simply hand out sweets every day. I’m referring to those who possess valuable skills and mentor their colleagues. These individuals typically enjoy a high level of trust within the team, which is why they often achieve excellent results as mediators in conflicts. It’s not the people who try to be everyone’s darling with false promises, but rather those who listen and take the time to find a solution. They are often the go-to people for everything and frequently have a quiet, unassuming demeanor. Because they have solutions and often lend a helping hand, they themselves receive only average performance ratings in typical process metrics. A good manager quickly recognizes these individuals because they are generally reliable. They are balanced and appear less stressed because they proceed calmly and consistently.

Of course, much more could be said about the stereotypes in a software project, but I think the points already made provide a good basic understanding of what I want to express. An experienced project manager can address many of the problems described as they arise. This naturally requires solid technical knowledge and some interpersonal skills.

Of course, we must also be aware that experienced project managers don’t just appear out of thin air. They need to be developed and supported, just like any other team member. This definitely includes rotations through all technical departments, such as development, testing, and operations. Paradigms like pair programming are excellent for this. The goal isn’t to turn a manager into a programmer or tester, but rather to give them an understanding of the daily processes. This also strengthens confidence in the skills of the entire team, and mentalities like “you have to control and push lazy and incompetent programmers to get them to lift a finger” don’t even arise. In projects that consistently deliver high quality and meet their deadlines, there’s rarely a desire to introduce every conceivable process metric.


RTFM – usable documentation

An old master craftsman used to say: “He who writes, remains.” His primary intention was to obtain accurate measurements and weekly reports from his journeymen. He needed this information to issue correct invoices, which was crucial to the success of his business. This analogy can also be readily applied to software development. It wasn’t until Ruby, the programming language developed in Japan by Yukihiro Matsumoto, had English-language documentation that Ruby’s global success began.

We can see, therefore, that documentation can be of considerable importance to the success of a software project. It’s not simply a repository of information within the project where new colleagues can find necessary details. Of course, documentation is a rather tedious subject for developers. It constantly needs to be kept up-to-date, and they often lack the skills to put their own thoughts down on paper in a clear and organized way for others to understand.

I myself first encountered the topic of documentation many years ago through reading the book “Software Engineering” by Johannes Siedersleben. Ed Yourdon was quoted there as saying that before methods like UML, documentation often took the form of a Victorian novella. During my professional life, I’ve also encountered a few such Victorian novellas. The frustrating thing was: after battling through the textual desert—there’s no other way to describe the feeling than as overcoming and struggling—you still didn’t have the information you were looking for. To paraphrase Goethe’s Faust: “So here I stand, poor fool, no wiser than before.”

Here we already see a first criticism of poor documentation: inappropriate length and a lack of information. We must recognize that writing isn’t something everyone is born with. After all, you became a software developer, not a book author. For the concept of “successful documentation,” this means that you shouldn’t force anyone to do it and instead look for team members who have a knack for it. This doesn’t mean, however, that everyone else is exempt from documentation tasks. Their input is essential for quality. Proofreading, pointing out errors, and suggesting additions are all necessary tasks that can easily be shared.

It’s highly advisable to provide occasional rhetorical training for the team or individual team members. The focus should be on precise, concise, and understandable expression. This also involves organizing your thoughts so they can be put down on paper and follow a clear and coherent structure. The resulting improved communication has a very positive impact on development projects.

Up-to-date documentation that is easy to read and contains important information quickly becomes a living document, regardless of the format chosen. This is also a fundamental concept for successful DevOps and agile methodologies. These paradigms rely on good information exchange and address the avoidance of information silos.

One point that really bothers me is the statement: “Our tests are the documentation.” Not all stakeholders can program and are therefore unable to understand the test cases. Furthermore, while tests demonstrate the behavior of functions, they don’t inherently demonstrate their correct usage. Variations of usable solutions are also usually missing. For test cases to have a documentary character, it’s necessary to develop specific tests precisely for this purpose. In my opinion, this approach has two significant advantages. First, the implementation documentation remains up-to-date, because changes will cause the test case to fail. Another positive effect is that the developer becomes aware of how their implementation is being used and can correct a flawed design in a timely manner.
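
To illustrate the idea, here is a minimal sketch of such a documentary test, assuming JUnit 5; the JDK date classes merely stand in for your own API:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import org.junit.jupiter.api.Test;

// A documentary test: it does not only assert behavior, it shows the reader
// step by step how the API is intended to be used.
class DateFormattingDocumentationTest {

    @Test
    void formatReleaseDateForChangelog() {
        // 1. Create the value object a caller is expected to work with.
        LocalDate releaseDate = LocalDate.of(2024, 4, 1);

        // 2. Build the formatter exactly the way client code should do it.
        DateTimeFormatter changelogFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd");

        // 3. The assertion doubles as an example of the expected output.
        assertEquals("2024-04-01", releaseDate.format(changelogFormat));
    }
}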

Of course, there are now countless technical solutions that are suitable for different groups of people, depending on their perspective on the system. Issue and bug tracking systems, such as the commercial JIRA or the open-source Redmine, map entire processes. They allow testers to assign identified problems and errors in the software to a specific release version. Project managers can use release management to prioritize fixes, and developers document the implemented fixes. That’s the theory. In practice, I’ve seen in almost every project how the comment function in these systems is misused as a chat to describe the change status. The result is a bug report with countless useless comments, and any real, relevant information is completely missing.

Another widespread technical solution in development projects is the use of enterprise wikis. They enhance simple wikis with navigation and allow the creation of closed spaces where only explicitly authorized user groups receive granular permissions such as read or write. Besides the widely used commercial solution Confluence, there’s also a free alternative called BlueSpice, which is based on MediaWiki. Wikis allow collaborative work on a document, and individual pages can be exported as PDFs in various formats. To ensure that the wiki pages remain usable, it’s important to maintain clean and consistent formatting. Tables should fit their content onto a single A4 page without unwanted line breaks. This improves readability. There are also many instances where bulleted lists are preferable to tables for the sake of clarity.

This brings us to another very sensitive topic: graphics. It’s certainly true that a picture is often worth a thousand words. But not always! When working with graphics, it’s important to be aware that images often require a considerable amount of time to create and can often only be adapted with significant effort. This leads to several conclusions to make life easier. A standard program (format) should be used for creating graphics. Expensive graphics programs like Photoshop and Corel should be avoided. Graphics created for wiki pages should be attached to the wiki page in their original, editable form. A separate repository can also be set up for this purpose to allow reuse in other projects.

If an image offers no added value, it’s best to omit it. Here’s a small example: It’s unnecessary to create a graphic depicting ten stick figures with a role name or person underneath. Here, it is advisable to create a simple list, which is also easier to supplement or adapt.

But you should also avoid overloaded graphics. True to the motto “more is better,” overly detailed information tends to cause confusion and can lead to misinterpretations. A recommended book is “Documenting and Communicating Software Architectures” by Stefan Zörner. In this book, he effectively demonstrates the importance of different perspectives on a system and which groups of people are addressed by a specific viewpoint. I would also like to take this opportunity to share his seven rules for good documentation:

  1. Write from the reader’s perspective.
  2. Avoid unnecessary repetition.
  3. Avoid ambiguity; explain notation if necessary.
  4. Use standards such as UML.
  5. Include the reasons (why).
  6. Keep the documentation up-to-date, but never too up-to-date.
  7. Review the usability.

Anyone tasked with writing the documentation, or with ensuring its progress and accuracy, should always make sure that it contains the important information and presents it correctly and clearly. Concise and well-organized documentation can easily be adapted and expanded as the project progresses. Adjustments are most successful when the affected content is kept as cohesive as possible and appears only once. This centralization is achieved through references and hyperlinks, so that changes in the original document are reflected wherever it is referenced.

Of course, there is much more to say about documentation; it’s the subject of numerous books, but that would go beyond the scope of this article. My main goal was to raise awareness of this topic, as paradigms like Agile and DevOps rely on a good flow of information.


Conway’s law

During my work as a Configuration Manager / DevOps for large web projects, I have watched companies disregard Conway’s Law and fail miserably. Such failure then often manifested itself in significant budget overruns and missed deadlines.

The project’s internal infrastructure was modeled exactly on the company’s internal organizational structures, and all experience and established standards were “bent” to fit that organization. The result was a set of problems that made the CI/CD pipelines particularly cumbersome and led to long execution times; adjustments, too, could only be made with a lot of effort. Instead of simplifying existing processes and aligning them with established standards, excuses were found to leave everything as it was. Let’s take a look at what Conway’s Law actually is and why we should know it.

The American researcher and programmer Melvin E. Conway received his doctorate from Case Western Reserve University in 1961. His areas of expertise are programming languages and compiler design.

In 1967, he submitted his paper “How Do Committees Invent?” to the Harvard Business Review, which rejected it on the grounds that his thesis was not substantiated. However, Datamation, the largest IT magazine at the time, accepted the article and published it in April 1968. The paper has been widely cited ever since. Its core statement is:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

Conway, Melvin E.: “How Do Committees Invent?” Datamation, vol. 14, no. 4, April 1968, pp. 28–31.

When Fred Brooks cited the essay in his legendary 1975 book, The Mythical Man-Month, he called this key statement Conway’s Law. Brooks recognized the connection between Conway’s Law and management theory. In this regard, we find the following example in the article:

Because the design which occurs first is almost never the best possible, the prevailing system concept may need to change. Therefore, flexibility of organization is important to effective design.

An often-cited example of an “ideal” team size in terms of Conway’s Law is Amazon’s two-pizza rule, which states that an individual project team should have no more members than two pizzas can feed in one meeting. The most important factor in team alignment, however, is the ability to work across teams rather than living in silos.

Conway’s Law was not intended as a joke or a Zen koan; it is a valid sociological observation. Just look at the structures of government agencies and their digital implementations, or at the processes of large corporations that have been replicated in software systems. Such applications are considered so cumbersome and complicated that they find little acceptance among users, who prefer to fall back on alternatives. Unfortunately, in large organizational structures it is often impossible to simplify processes, for politically motivated reasons.

Among other things, there is a detailed article by Martin Fowler that deals explicitly with software architectures and elaborates on the importance of the coupling of objects and modules. Communication among developers plays a substantial role in obtaining the best possible results. This insight into the importance of communication was also taken up by agile software development and turned into an essential principle. Especially when distributed teams work on a joint project, the time difference is a limiting factor in team communication, which must then be organized particularly efficiently.

In 2010, Jonny Leroy and Matt Simons coined the term Inverse Conway Maneuver in the article “Dealing with creaky legacy platforms”:

Conway’s Law … can be summarized as “Dysfunctional organizations tend to create dysfunctional applications.” To paraphrase Einstein, you can’t fix a problem from within the same mindset that created it, so it is often worth investigating whether restructuring your organization or team would prevent the new application from displaying all the same structural dysfunctions as the original. In what could be termed an “inverse Conway maneuver,” you may want to begin by breaking down silos that constrain the team’s ability to collaborate effectively.

Since the 2010s, a new architectural style has entered the software industry: so-called microservices, which are created by small agile teams. The most important criterion of a microservice, compared to a modular monolith, is that it can be seen as an independently viable module or subsystem. On the one hand, this allows the microservice to be reused in other applications; on the other hand, it provides a strong encapsulation of the functional domain, which opens up very high flexibility for adaptations.

However, Conway’s law can be applied to many other areas and is not exclusively limited to the software industry. This is what makes the work so valuable.


Version Number Anti-Patterns

After the Gang of Four (GoF), Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, published the book Design Patterns: Elements of Reusable Object-Oriented Software, describing problems and solutions became popular in almost every field of software development. Likewise, describing don’ts as anti-patterns became equally popular.

In publications that discussed these concepts, we find helpful recommendations for software design, project management, configuration management, and much more. In this article, I will share my experience dealing with version numbers for software artifacts.

Most of us are already familiar with a method called semantic versioning, a powerful and easy-to-learn rule set that defines how version numbers are structured and when each segment has to increase.

Version numbering example:

  • Major: incompatible API changes.
  • Minor: new functionality, added in a backward-compatible way.
  • Patch: backward-compatible bug fixes and corrections.
  • Label: e.g. SNAPSHOT, marking the “under development” status.

An incompatible API change occurs when an externally accessible function or class is deleted or renamed. Another possibility is a change in the signature of a method, meaning its return type or parameters have been changed from the original implementation. In these scenarios, it’s necessary to increase the major segment of the version number. Such changes present a high risk for API consumers because they have to adapt their own code.
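
To make this concrete, here is a minimal sketch in Java of what such a breaking change looks like; the interfaces and the V1/V2 suffixes are purely illustrative:

import java.util.Locale;

/** Published API of a hypothetical library in version 1.3.0. */
interface ReportServiceV1 {
    String render(String template);
}

/**
 * The same service in version 2.0.0: render() gained a Locale parameter.
 * Every consumer compiled against 1.x has to adapt its code, so semantic
 * versioning demands a new major segment. Adding an overload instead would
 * only have required a minor increment.
 */
interface ReportServiceV2 {
    String render(String template, Locale locale);
}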

When dealing with version numbers, it’s also important to know that 1.0.0 and 1.0 are equal. This ties into the requirement that the versions of a software release have to be unique; if they are not, it’s impossible to distinguish between artifacts. Several times in my professional career, I was involved in projects that had no well-defined process for creating version numbers. The consequence was that the team securing the quality of the artifact got confused about which artifact version they were currently dealing with.

The biggest mistake I ever saw was storing the version of an artifact in a database together with other configuration entries. The correct procedure is to place the version inside the artifact in such a way that nobody can change it from outside after a release. The trap you can fall into is the process of updating the version after a release or installation.

Maybe you have a checklist for all manual activities during a release. But what happens after a release is installed in a testing stage and, for some reason, another version of the application has to be installed? Do you remember to change the version number in the database manually? And how do you find out which version is installed when the information in the database is incorrect?

Detecting the correct version in such a situation is a very difficult challenge. For that reason, the requirement is to keep the version inside the application. In the next step, we will discuss a simple and reliable way to automate this.

Our precondition is a simple Java library built with Maven. By default, the version number of the artifact is written down in the POM. After the build process, our artifact is created and named something like artifact-1.0.jar. As long as we don’t rename the artifact, we have a proper way to distinguish the versions. Even after a rename, a simple trick helps: unpack the JAR and look into the META-INF folder, where the correct value can be found.
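
A minimal sketch of reading that value at run time, assuming the Maven JAR plugin has been configured to write the default implementation entries (Implementation-Version) into the manifest:

/**
 * Reads the version that the build wrote into META-INF/MANIFEST.MF
 * (Implementation-Version). Returns null when the class is not loaded
 * from a packaged JAR, e.g. when running directly from the IDE.
 */
public final class ManifestVersion {

    private ManifestVersion() {
    }

    public static String read() {
        Package pkg = ManifestVersion.class.getPackage();
        return (pkg != null) ? pkg.getImplementationVersion() : null;
    }

    public static void main(String[] args) {
        System.out.println("Implementation-Version: " + read());
    }
}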

Hardcoding the version in a property or class file also works fine, as long as you never forget to update it. Branching and merging in SCM systems like Git may then need your special attention to always keep the correct version in your codebase.

Another solution is to use Maven and its token replacement mechanism. Before you rush to try it out in your IDE, keep in mind that Maven uses two different folders: sources and resources. Token replacement in the sources will not work properly: after a first run, your variable is replaced by a fixed number and is gone, so a second run will fail. To prepare your code for token replacement, you first need to configure the resource filtering in the build section:

<build>
   <resources>
      <resource>
         <directory>src/main/resources/</directory>
         <filtering>true</filtering>
      </resource> 
   </resources>
   <testResources>
      <testResource>
         <directory>src/test/resources/</directory>
         <filtering>true</filtering>
      </testResource>
   </testResources>
</build>

After this step, you need the ${project.version} property from the POM. This allows you to create a file named version.property in the resources directory whose content is just one line: version=${project.version}. After a build, you will find the version.property file inside your artifact, containing the same version number you used in your POM. Now you can write a function that reads this file and exposes the property, for example by storing the result in a constant for use in your program. That’s all you have to do! A minimal sketch of such a reader follows below.
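
A minimal sketch of such a reader class; the name ApplicationVersion is only an example:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

/** Reads the filtered version.property from the classpath exactly once. */
public final class ApplicationVersion {

    /** The version of the artifact, or "unknown" if the file is missing. */
    public static final String VERSION = load();

    private ApplicationVersion() {
    }

    private static String load() {
        Properties properties = new Properties();
        try (InputStream stream =
                ApplicationVersion.class.getResourceAsStream("/version.property")) {
            if (stream != null) {
                properties.load(stream);
            }
        } catch (IOException ex) {
            // Fall through and return the default value below.
        }
        return properties.getProperty("version", "unknown");
    }
}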

Example: https://github.com/ElmarDott/TP-CORE/blob/master/src/main/java/org/europa/together/utils/Constraints.java

Non-Functional Requirements: Quality

From experience, most of us know how difficult it is to express what we mean when talking about quality. Why is that so? There are many different views on quality, and every one of them has its importance. What has to be defined for our project is something that fits its needs and works within the budget. Striving for perfection can be counterproductive if a project is to be completed successfully. We will start from a research paper written by B. W. Boehm in 1976 called “Quantitative Evaluation of Software Quality.” Boehm highlights the different aspects of software quality and the right context. Let’s take a deeper look into this topic.

When we discuss quality, we should focus on three topics: code structure, implementation correctness, and maintainability. Many managers only care about the first two aspects, not about maintenance. This is dangerous, because enterprises will not invest in custom development just to use the application for only a few years. Depending on the complexity of the application, the price for its creation can reach hundreds of thousands of dollars, so it’s understandable that the expected business value of such an investment is estimated to be high. A lifetime of 10 years and more in production is very typical. To keep reaping the benefits, adaptations will be mandatory, which implies a strong focus on maintenance. Clean code doesn’t automatically mean your application is easy to change. A very accessible article touching on this topic was written by Dan Abramov. Before we go further into how maintenance could be defined, we will discuss the first point: the structure.

Scaffolding Your Project

An often underestimated aspect in development divisions is the lack of a standard for project structures. A fixed definition of where files have to be placed helps team members find points of interest quickly. Such a meta-structure for Java projects is defined by the build tool Maven. More than a decade ago, companies that tried out Maven frequently bent the tool to fit the folder structures already established in their projects. This resulted in heavy maintenance, because more and more infrastructure tools for software development came into use, and those tools operate on the standard that Maven defines. Every customization therefore affects the success of integrating new tools or exchanging an existing tool for another.

Another aspect to look at is the company-wide defined META architecture. When possible, every project should follow the same META architecture. This reduces the time it takes a new developer to join an existing team and reach its level of productivity. This META architecture has to remain open for adaptations, which can be achieved by two simple rules:

  1. Don’t be concerned with too many details;
  2. Follow the KISS (Keep it simple, stupid.) principle.

A classical pattern that violates the KISS principle is when standards get heavily customized. A very good example of the effects of strong customization is described by George Schlossnagle in his book “Advanced PHP Programming.” In chapter 21, he explains the problems created for a team that adapted the original PHP core instead of following the recommended way via extensions. The consequence was that every update of the PHP version had to be manually patched to carry over the team’s own adaptations to the core. Taken together, structure, architecture, and KISS already define three quality gates that are easy to implement.

The open-source project TP-CORE, hosted on GitHub, concerns itself with the aforementioned structure, architecture, and KISS. There you can see one approach to putting these ideas into practice. This small Java library rigidly follows the Maven convention in its directory structure. For fast compatibility detection, releases are defined by semantic versioning. A layered structure was chosen as its architecture and is fully described here. Its main architectural decisions can be summarized as follows:

Each layer is defined by its own package, and the files follow a strict naming rule as well. No special pre- or postfix is used. The functionality Logger, for example, is declared by an interface called Logger and the corresponding implementation LogbackLogger. The API interfaces can be found in the package “business” and the implementation classes in the package “application.” Naming schemes like ILogger and LoggerImpl should be avoided. Imagine a project that was started 10 years ago in which LoggerImpl was based on Log4J. Now a new requirement arises, and the log level needs to be changeable at run time. To solve this challenge, the Log4J library could be replaced with Logback. Now it becomes clear why it is a good idea to name the implementation class after the interface, combined with the implementation detail: it makes maintenance much easier! The same convention can be found within the Java standard API: the interface List is implemented by ArrayList. Obviously, the interface is not labeled as something like IList and the implementation not as ListImpl.
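
A minimal sketch of this naming convention; the method signatures are simplified for illustration, assume SLF4J with Logback on the classpath, and are not the actual TP-CORE API:

/** The API the rest of the application codes against (business layer). */
interface Logger {
    void info(String message);
    void error(String message, Throwable cause);
}

/**
 * The implementation detail is visible in the class name (application layer).
 * Replacing Logback later only means adding, for example, a Log4jLogger;
 * the interface and all of its callers stay untouched.
 */
class LogbackLogger implements Logger {

    private final org.slf4j.Logger delegate;

    LogbackLogger(Class<?> owner) {
        this.delegate = org.slf4j.LoggerFactory.getLogger(owner);
    }

    @Override
    public void info(String message) {
        delegate.info(message);
    }

    @Override
    public void error(String message, Throwable cause) {
        delegate.error(message, cause);
    }
}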

To summarize this short paragraph: a full rule set was defined to describe our understanding of structural quality. From experience, this description should be short. If other people can easily comprehend your intentions, they will willingly accept your guidance and defer to your knowledge. In addition, the architect will be much faster at detecting rule violations.

Measure Your Success

The most difficult part is keeping the code clean. Some advice is not bad per se but, in the context of your project, may not prove useful. In my opinion, the most important rule is to always activate the compiler warnings, no matter which programming language you use! All compiler warnings have to be resolved when a release is prepared. Organizations dealing with critical software, like NASA, strictly apply this rule in their projects, with great success.

Coding conventions about naming, line length, and API documentation, like JavaDoc, can easily be defined and enforced by tools like Checkstyle. This process can run fully automated during your build. But be careful: even if the code checkers pass without warnings, this does not mean that everything is working optimally. JavaDoc, for example, is problematic. An automated Checkstyle check can ensure that this API documentation exists, but it says nothing about the quality of those descriptions.

There should be no need to discuss the benefits of testing here; let us rather take a walk through test coverage. The industry standard of 85% covered code should be followed, because coverage below 85% usually does not reach the complex parts of your application, while 100% coverage just burns through your budget without yielding higher benefits. A prime example of this is the TP-CORE project, whose test coverage is mostly between 92% and 95%. This was done to explore what is realistically achievable.

As already explained, the business layer contains just interfaces, defining the API. This layer is explicitly excluded from the coverage checks. Another package is called internal and it contains hidden implementations, like the SAX DocumentHandler. Because of the dependencies the DocumentHandler is bound to, it is very difficult to test this class directly, even with Mocks. This is unproblematic given that the purpose of this class is only for internal usage. In addition, the class is implicitly tested by the implementation using the DocumentHandler. To reach higher coverage, it also could be an option to exclude all internal implementations from checks. But it is always a good idea to observe the implicit coverage of those classes to detect aspects you may be unaware of.

Besides the low-level unit tests, automated acceptance tests should also be run. Paying close attention to these points can avoid a variety of problems. But never trust those fully automated checks blindly! Regularly repeated manual code inspections will always be mandatory, especially when working with external vendors. In our talk at JCON 2019, we demonstrated how easily test coverage can be faked. To detect other vulnerabilities, you can additionally run checkers like SpotBugs and similar tools.

Tests don’t indicate that an application is free of failures, but they indicate a defined behavior for implemented functionality.

For a while now, SCM suites like GitLab or Microsoft Azure have supported pull requests, which were introduced long ago by GitHub. These workflows are nothing new; IBM Synergy used to apply the same technique. A build manager was responsible for merging the developers’ changes into the codebase. In a rapid manner, all the revisions performed by the developers were simply added to the repository by the build manager, who did not have sufficiently profound knowledge to judge the quality of the implementation. The usual practice was simply to ensure that the build was not broken and that the compilation always produced an artifact.

Enterprises have rediscovered this as a strategy for handling pull requests: managers now often decide to use pull requests as a quality gate. In my personal experience, this slows down productivity because it takes time until the changes become available in the codebase. An understanding of the branch-and-merge mechanism helps you decide on a simpler branch model, such as release branch lines, on which tools like SonarQube can operate to watch over the overall quality goal.

If a project needs an orchestrated build, with a defined order in which artifacts have to be created, you have a strong hint that refactoring is needed.

The coupling between classes and modules is often underestimated. It is very difficult to visualize the bindings between modules automatically. However, you will notice very quickly when loose coupling is violated, because the complexity of your build logic increases.

Repeat Your Success

Rest assured, changes will happen! It is a challenge to keep your application open for adjustments. Several of the previous recommendations have implicit effects on future maintenance. Good source code quality simplifies the endeavor of being prepared, but there is no guarantee. In the worst case, the end of the product lifecycle (EOL) is reached when mandatory improvements or changes can no longer be realized, for example because of an eroded code base.

As already mentioned, loose coupling brings numerous benefits with respect to maintenance and reuse. Reaching this goal is not as difficult as it might look. First, try to avoid including third-party libraries as much as possible. Just to check whether a String is empty or null, it is unnecessary to depend on an external library; those few lines are quickly written yourself, as sketched below. A second important point regarding external libraries is: only one library per problem. If your project deals with JSON, decide on one implementation and don’t incorporate various artifacts. These two points also have a heavy impact on security: a third-party artifact we avoid using cannot cause any security leaks.
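
A minimal sketch of such a self-written helper; the class and method names are just an example:

/** A few self-written lines instead of a whole third-party dependency. */
public final class StringUtils {

    private StringUtils() {
    }

    /** True when the value is null, empty, or contains only whitespace. */
    public static boolean isEmpty(String value) {
        return value == null || value.trim().isEmpty();
    }
}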

Once the decision for an external implementation is made, try to wrap its usage in your project by applying design patterns like proxy, facade, or wrapper. This makes a later replacement much easier, because the code changes are not spread around the whole codebase. You don’t need to change everything at once if you follow the advice on how to name the implementation class and provide an interface. Even though an SCM is designed for collaboration, there are limitations when more than one person is editing the same file. Using a design pattern to hide the external dependency behind one class allows you to apply your changes iteratively.
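
A minimal sketch of such a facade, using Jackson purely as an example of the one chosen JSON library:

import com.fasterxml.jackson.databind.ObjectMapper;

/**
 * Facade that keeps the chosen JSON library in exactly one place. If the
 * library ever has to be replaced, only this class changes; the rest of
 * the codebase depends solely on JsonFacade.
 */
public final class JsonFacade {

    private final ObjectMapper mapper = new ObjectMapper();

    public String toJson(Object value) {
        try {
            return mapper.writeValueAsString(value);
        } catch (Exception ex) {
            throw new IllegalStateException("Could not serialize object to JSON.", ex);
        }
    }

    public <T> T fromJson(String json, Class<T> type) {
        try {
            return mapper.readValue(json, type);
        } catch (Exception ex) {
            throw new IllegalStateException("Could not parse JSON.", ex);
        }
    }
}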

Conclusion

As we have seen, a non-functional requirement is not that difficult to describe. With a short checklist, you can clearly define the important aspects for your project. It is not necessary to check all points for every commit to the repository; that would in all probability just raise costs without resulting in higher benefits. Running a full check around a day before the release is an effective way to keep up quality in an agile context and helps you recognize where optimization is necessary. The points of interest (POI) for securing quality are the revisions in the code base between releases. This gives you comparable statistics and helps improve estimations.

Of course, in this short article it is almost impossible to cover all aspects of quality. We hope our explanations help you link theory, via examples, to best practice. In conclusion, this should be your main takeaway: a high level of automation within your infrastructure, like continuous integration, is extremely helpful, but it does not free you from manual code reviews and audits.

Checklist

  • Follow common standards
  • KISS – keep it simple, stupid!
  • Equal directory structure for different projects
  • Simple META architecture that can be reused as much as possible in other projects
  • Define and follow coding styles
  • No compiler warnings are accepted when a release is prepared
  • Aim for a test coverage of at least 85%
  • Avoid third-party libraries as much as possible
  • Don’t support more than one technology for a specific problem (e.g., JSON)
  • Wrap third-party code behind a design pattern
  • Avoid strong object/module coupling