README – how to

README files have a long tradition in software projects. Originally plain text files, they contained license information, instructions on how to compile the corresponding artifact from the source code, or important notes on installing the program. There is no real standard for how to structure such a README file.

Since GitHub (acquired by Microsoft in 2018) began its triumphant march as a free code-hosting platform for open source projects, it has offered, from early on, the ability to display the README file as the start page of a repository. All that is required is to create a simple text file called README.md in the root directory of the repository.

In order to structure README files more clearly, a simple formatting option was needed. Markdown notation was quickly chosen because it is easy to use and can be rendered efficiently. As a result, the overview pages are easier for people to read and can be used as project documentation.

It is possible to link several such Markdown files together as project documentation. This gives you a kind of mini wiki that is included in the project and versioned along with it via Git.

The whole thing became so successful that self-hosted solutions such as GitLab and the commercial Bitbucket have also adopted this function.

Now, however, the question arises as to what content is best written in such a README file so that it also represents real added value for outsiders. The following points have become established over the course of time:

  • Short description of the project
  • Conditions under which the source code may be used (license)
  • How to use the project (e.g. instructions for compiling or how to include the library in your own projects)
  • Who are the authors of the project and how to contact them
  • What to do if you want to support the project

Meanwhile, so-called badges (stickers) are very popular. They often reference external services such as the free continuous integration service TravisCI and help to assess the quality of the project.
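To make this concrete, a minimal README.md covering these points might look like the following sketch. The project name, badge URL, license and contact details are placeholders, not recommendations:

```markdown
# Example Project

[![Build Status](https://travis-ci.org/example/example-project.svg?branch=master)](https://travis-ci.org/example/example-project)

A short description of what the project does and why it exists.

## License

Released under the Apache License 2.0 – see the LICENSE file for details.

## Usage

    mvn clean install

Include the resulting artifact as a dependency in your own project.

## Authors & Contact

Jane Doe – jane.doe@example.org

## Contributing

Pull requests are welcome; please open an issue first to discuss larger changes.
```

Such a skeleton can then grow with the project, for example with screenshots, a changelog or links to further Markdown pages.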

On GitHub there are also various templates for README files. However, you still have to consider the actual circumstances of your own project and judge which information is really relevant for users. Such templates do, however, help a lot in spotting points you might have missed.

The fact that pretty much every manufacturer of source control management server solutions has integrated the function to display the README.md file as the start page of the code repository means that a README.md is also a useful thing for commercial projects.

Even if the Markdown syntax is easy to learn, it can be more comfortable to use a dedicated Markdown editor for extensive editing of such files. You should make sure that it renders a correct preview and does not merely offer simple syntax highlighting.


Meeting Culture for Experts

After I started my career in the software industry, I had to participate in a lot of different meetings. Since most companies transitioned to agile, the number of meetings has only increased. Many developers share my perception of meetings: most of them are simply a waste of time.

Often the person who sent the meeting invitations was not thinking about the difference between quality and quantity. These thoughts are not new. Long ago, Tom DeMarco described such situations in his books. His solution still works today, and you will also find it in explanations of how agile communication should be organized: only bring together the people you really need to make decisions. Fewer is always better than too many.

Another thing about meetings I learned from my own experience is what happens when technicians and non-technicians try to communicate. If you just watch this as an external observer, it can be quite entertaining, like the parody in the video I placed below the article. But if you are involved as an expert to help a CTO make decisions and they do not understand what you are talking about, it drives you to frustration.

Yegor Bugayenko mentioned in his podcast Shift-M that a CTO is far away from hands-on code. I have recognized the same, and often these persons are also far away from a deep technical understanding. The same can be said about project managers. In my perception, these managers often do not want to understand technical details well enough to make good decisions. On the other hand, technicians sometimes exaggerate the level of information they give. Not everybody needs the full details of a topic. A realistic overall picture, perhaps with a bit more detail than a simple synopsis, would make things easier for everybody.

A very old story that illustrates this topic dates from the early 2000s, when the question came up whether to implement a fat or a thin client. To the decision-makers, a thin client sounded smarter, so they took that option without understanding the consequences. After the project had been running for a while, they started to complain that the costs were exploding and the result was not what they expected. In retrospect it is very easy to understand why the problems occurred: the managers had not specified the browser support for the thin client well. Ensuring full compatibility was extremely cost-intensive, and keeping the UI up to date would produce further costs in the future, because browsers will not stay the same for ten years. We have the same discussions in every period. Not so long ago the question was monolith or microservice. Simon Brown answered this question straight to the point: “If you can’t build a monolith, what makes you think microservices are the answer?” And what are the drama topics in 2020? Of course cloud services, Kubernetes and serverless.

For me it would be great if the persons making decisions at least wanted to understand technical things and their consequences. If we focused more on the soft skills managers should have, fewer software projects would fail. Instead, we keep giving developers rhetoric training so that they can talk as vaguely as a salesman.

The Bug Fix Bingo

If you wish to discover a way to turn negative vibes between testers and developers into something positive, here is a great solution. The thing I would like to introduce is quite old, but even today, in our brave new DevOps world, it is an evergreen.

Many years ago I stumbled across a PDF on the web called Bug Fix Bingo, a nice, funny game for IT professionals. This little game was originally invented by the software testing firm K. J. Ross & Associates. Unfortunately the original site disappeared long ago, so I decided to preserve this great idea in this blog post.

I can also recommend this game to folks who are not that deep into testing but have to participate in a lot of IT meetings. Just print the file, bring some copies to your next meeting and enjoy what happens. I did it several times. Besides the fun we had, it changed something. So let’s have a look at the concept and rules.

Bug Fix Bingo is based on traditional bingo, just with a few adaptations. Everyone can join the game easily and without much preparation, because it is really simple. Instead of numbers, the bingo card uses statements from developers in defect review meetings to mark off squares.

Rules:

  1. Bingo squares are marked off when a developer makes the matching statement during bug fix sessions.
  2. Testers must call “Bingo” immediately upon completing a line of 5 squares either horizontally, vertically or diagonally.
  3. Statements that arise as a result of a bug that later becomes “deferred”, “as designed”, or “not to be fixed” should be classified as not marked.
  4. Bugs that are not reported in an incident report cannot be used.
  5. Statements should also be recorded against the bug in the defect tracking system for later confirmation.
  6. Any tester who marks off all 25 statements should immediately be awarded two weeks of stress leave.
  7. Any developer found using all 25 statements should be seconded into the test group for a period of no less than 6 months for re-education.
“It works on my machine.” | “Where were you when the program blew up?” | “Why do you want to do it in that way?” | “You can’t use that version on your system.” | “Even though it doesn’t work, how does it feel?”
“Did you check for a virus on your system?” | “Somebody must have changed my code.” | “It works, but it hasn’t been tested.” | “THIS can’t be the source of THAT.” | “I can’t test anything!”
“It’s just some unlucky coincidence.” | “You must have the wrong version.” | “I haven’t touched that module in weeks.” | “There is something funky in your data.” | “What did you type in wrong to get it to crash?”
“It must be a hardware problem.” | “How is that possible?” | “It worked yesterday.” | “It’s never done that before.” | “That’s weird …”
“That’s scheduled to be fixed in the next release.” | “Yes, we knew that would happen.” | “Maybe we just don’t support that platform.” | “It’s a feature. We just haven’t updated the specs.” | “Surely nobody is going to use the program like that.”
The Bug Fix Bingo game card

Incidentally, developers have a game like this too. They score points every time a QA person tries to raise a defect on functionality that is working as specified.

For your pleasure I have placed the original Bug Fix Bingo file here, so you can download it and print it out for your next meeting. Comments about your experiences playing this game are very welcome, and feel free to share this post if you like it. Stay updated and subscribe to my newsletter.

A Fool with a Tool is still a Fool

Even though considerable additional effort has been expended on testing in recent years in order to improve the quality of software projects [1], the path to continuously repeatable successes cannot be taken for granted. Stringent and targeted management of all available resources was and still is indispensable for reproducible success.

(c) 2016 Marco Schulz, Java aktuell Ausgabe 4, S.14-19
Original article translated from German

It is no secret that many IT projects are still struggling to reach a successful conclusion. One might well think that the many new tools and methods that have emerged in recent years offer effective solutions for dealing with the situation. However, if one takes a look at current projects, this impression changes.

The author has often been able to observe how this problem was supposed to be mastered by introducing new tools. Not infrequently, the efforts ended in resignation. The supposed miracle solution quickly turned out to be a heavyweight time robber with an enormous amount of self-management. The initial euphoria of all those involved quickly turned into rejection and not infrequently culminated in a boycott of its use. It is therefore not surprising that experienced employees are skeptical of all change efforts for a long time and only deal with them when they are foreseeably successful. Because of this fact, the author has chosen as the title for this article the provocative quote from Grady Booch, a co-founder of UML.

Companies often spend too little time establishing a balanced internal infrastructure. Even the maintenance of existing fragments is often postponed for various reasons. At the management level, companies prefer to focus on current trends in order to attract customers who expect a list of buzzwords in response to their RFP. Yet Tom De Marco already described it in detail in the 1970s [2]: People make projects (see Figure 1).

We do what we can, but can we do anything?

That a project finds a happy end despite best intentions and intensive efforts is unfortunately not the rule. But when can one speak of a failed project in software development? An abandonment of all activities due to a lack of prospects of success is of course an obvious reason, but in this context it is rather rare. Rather, one gains this insight during the post-project review of completed orders. In controlling, for example, weak points come to light when profitability is determined.

The reasons for negative results are usually exceeding the estimated budget or the agreed completion date. Usually, both conditions apply at the same time, as the endangered delivery deadline is countered by increasing personnel. This practice quickly reaches its limits, as new team members require an induction period, visibly reducing the productivity of the existing team. Easy-to-use architectures and a high degree of automation mitigate this effect somewhat. Every now and then, people also move to replace the contractor in the hope that new brooms sweep better.

A quick look at the top 3 list of major projects that have failed in Germany shows how a lack of communication, inadequate planning and poor management have a negative impact on the external perception of projects: Berlin Airport, Hamburg’s Elbe Philharmonic Hall and Stuttgart 21. Thanks to extensive media coverage, these undertakings are sufficiently well known and need no further explanation. Even if the examples cited do not originate from information technology, the recurring reasons for failure due to cost explosion and time delay can be found here as well.

Figure 1: Problem solving – “A bisserl was geht immer”, Monaco Franze

The will to create something big and important is not enough on its own. Those responsible also need the necessary technical, planning, social and communication skills, coupled with the authority to act. Building castles in the air and waiting for dreams to come true does not produce presentable results.

Great success is usually achieved when as few people as possible have veto power over decisions. This does not mean that advice should be ignored, but every possible state of mind cannot be taken into account. This makes it all the more important for the person responsible for the project to have the authority to enforce his or her decision, but not to demonstrate this with all vigor.

It is perfectly normal for a decision-maker not to be in control of all the details. After all, you delegate implementation to the appropriate specialists. Here’s a brief example: When the possibilities for creating larger and more complex Web applications became better and better in the early 2000s, the question often came up in meetings as to which paradigm should be used to implement the display logic. The terms “multi-tier”, “thin client” and “fat client” dominated the discussions of the decision-making bodies at that time. Explaining the advantages of different layers of a distributed web application to the client was one thing. But to leave it up to a technically savvy layman to decide how to access his new application – via browser (“thin client”) or via a separate GUI (“fat client”) – is simply foolish. Thus, in many cases, it was necessary to clear up misunderstandings that arose during development. The narrow browser solution not infrequently turned out to be a difficult technology to master, because manufacturers rarely cared about standards. Instead, one of the main requirements was usually to make the application look almost identical in the most popular browsers. However, this could only be achieved with considerable additional effort. Similar observations were made during the first hype of service-oriented architectures.

The consequence of these observations shows that it is indispensable to develop a vision before the start of the project, the goals of which also correspond to the estimated budget. A reusable deluxe version with as many degrees of freedom as possible requires a different approach than a “we get what we need” solution. It’s less about getting lost in the details and more about keeping the big picture in mind.

Particularly in German-speaking countries, companies find it difficult to find the necessary players for successful project implementation. The reasons for this may be quite diverse and could be due, among other things, to the fact that companies have not yet understood that experts rarely want to talk to poorly informed and inadequately prepared recruitment service providers.

Getting things done!

Successful project management is not an arbitrary coincidence. For a long time, an insufficient flow of information due to a lack of communication has been identified as one of the negative causes. Many projects have their own inherent character, which is also shaped by the team that accepts the challenge in order to jointly master the task set. Agile methods such as Scrum [3], Prince2 [4] or Kanban [5] pick up on this insight and offer potential solutions to be able to carry out IT projects successfully.

Occasionally, however, it can be observed how project managers transfer planning tasks to the responsible developers for self-management under the pretext of the newly introduced agile methods. The author has frequently experienced how architects have tended to see themselves in day-to-day implementation work instead of checking the delivered fragments for compliance with standards. In this way, quality cannot be established in the long term, since the results are merely solutions that ensure functionality and, because of time and cost pressures, do not establish the necessary structures to ensure future maintainability. Agile is not a synonym for anarchy. Such a setup is then happily decorated with an overloaded toolbox full of DevOps tools, and the project already appears unsinkable. Just like the Titanic!

It is not without reason that for years it has been recommended to introduce a maximum of three new technologies at the start of a project. In this context, it is also not advisable to always go for the latest trends right away. When deciding on a technology, the appropriate resources must first be built up in the company, for which sufficient time must be planned. The investments are only beneficial if the choice made is more than just a short-lived hype. A good indicator of consistency is extensive documentation and an active community. These open secrets have been discussed in the relevant literature for years.

However, how does one proceed when a project has been established for many years, but in terms of the product life cycle a swing to new techniques becomes unavoidable? The reasons for such an effort may be many and vary from company to company. The need not to miss important innovations in order to remain competitive should not be delayed for too long. This consideration results in a strategy that is quite simple to implement. Current versions are continued in the proven tradition, and only for the next major release or the one after that is a roadmap drawn up that contains all the necessary points for a successful changeover. For this purpose, the critical points are worked out and tested in small feasibility studies, which are somewhat more demanding than a “hello world” tutorial, to see how an implementation could succeed. Experience shows that it is the small details that tip the scales between success and failure.

In all efforts, the goal is to achieve a high degree of automation. Compared to constantly recurring tasks that have to be performed manually, automation offers the possibility of producing continuously repeatable results. However, it is in the nature of things that simple activities are easier to automate than complex processes. In this case, it is important to check the cost-effectiveness of the plans beforehand so that developers do not indulge completely in their natural urge to play and also work through unpleasant day-to-day activities.

He who writes stays

Documentation, that vexed topic, spans all phases of the software development process. Whether for API descriptions, the user manual, planning documents for the architecture or lessons learned about optimal procedures, writing things down is not a favored task of any of the protagonists involved. It can often be observed that the common opinion seems to prevail that thick manuals stand for extensive product functionality. However, long texts in documentation are more of a quality defect that tries the reader’s patience: readers expect precise instructions that get to the point, but instead receive vague phrases with trivial examples that rarely solve any problems.

Figure 2: Test coverage with Cobertura

This insight can also be applied to project documentation and has been detailed by Johannes Siedersleben [6], among others, under the metaphor of Victorian novellas. Universities have already taken up these findings. For example, Merseburg University of Applied Sciences has established the course of study “Technical Writing” [7]. It is to be hoped that more graduates of this course will be found in the project landscape in the future.

When selecting collaborative tools as knowledge repositories, it is always important to keep the big picture in mind. Successful knowledge management can be measured by how efficiently an employee finds the information they are looking for. For this reason, company-wide use is a management decision and mandatory for all departments.

Information has a different nature and varies both in scope and in how long it remains current. This results in different forms of presentation such as wikis, blogs, ticket systems, tweets, forums or podcasts, to list just a few. Forums map the question-and-answer problem very well. A wiki is ideal for continuous text, such as that found in documentation and descriptions. Many webcasts are offered as video without the visual representation adding any value; in most cases, a well-understood and properly produced audio track is sufficient to distribute knowledge. With a common and standardized database, completed projects can be compared efficiently. The resulting knowledge offers high added value when making forecasts for future projects.

Test & Metrics – the measure of all things

Just by skimming the Quality Report 2014, one quickly learns that the new trend is “software testing”. Companies are increasingly allocating contingents for this, which take up a volume similar to the expenditure for the implementation of the project. Strictly speaking, one extinguishes fire with gasoline at this point. On closer inspection, the budget is already doubled at the planning stage. It is often up to the skill of the project manager to find a suitable declaration for earmarked project funds.

Only consistent checking of test case coverage with suitable analysis tools ensures that sufficient testing has been done in the end. Even if one may hardly believe it: in an age in which software tests can be created more easily than ever before and different paradigms can be combined, extensive and meaningful test coverage is rather the exception (see Figure 2).

It is well known that it is not possible to prove that software is free of errors. Tests only serve to prove a defined behavior for the scenarios created. Automated test cases are in no way a substitute for manual code review by experienced architects. A simple example of this is the nested “try-catch” blocks that occur from time to time in Java and have a direct effect on the program flow. Sometimes nesting can be intentional and useful; in that case, however, the error handling amounts to more than just printing the stack trace to a log file. The cause of this programming error lies in the inexperience of the developer and in the IDE’s unhelpful suggestion to wrap the statement in question in its own “try-catch” block for an expected error, instead of supplementing the existing routine with an additional “catch” clause. Wanting to detect such an obvious error through test cases is an infantile approach from an economic point of view.
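To make the described pattern tangible, here is a minimal, hypothetical Java sketch; the DataSource interface and the method names are invented for illustration and are not taken from a real project. The first method shows the IDE-suggested nested block that merely prints the stack trace, the second shows the corrected variant in which the existing block simply gets an additional catch clause:

```java
import java.io.IOException;
import java.sql.SQLException;

public class ErrorHandlingExample {

    // Hypothetical helper interface, only present so the sketch compiles.
    interface DataSource {
        byte[] read() throws IOException;
        void store(byte[] data) throws SQLException;
    }

    // Anti-pattern: the IDE-suggested inner try-catch merely prints the
    // stack trace; afterwards the method continues as if nothing happened.
    void loadNested(DataSource source) {
        try {
            byte[] raw = source.read();              // may throw IOException
            try {
                source.store(raw);                   // may throw SQLException
            } catch (SQLException e) {
                e.printStackTrace();                 // handling ends here
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Corrected: the existing block is supplemented by an additional catch
    // clause, so both error types run through complete error handling.
    void loadFlat(DataSource source) {
        try {
            byte[] raw = source.read();
            source.store(raw);
        } catch (IOException e) {
            throw new IllegalStateException("reading failed", e);
        } catch (SQLException e) {
            throw new IllegalStateException("storing failed", e);
        }
    }
}
```

A reviewer spots the difference at a glance, whereas a test suite would have to provoke both failure paths to notice that the nested variant silently continues after the inner exception.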

Typical error patterns can be detected inexpensively and efficiently by static test procedures. Publications that are particularly concerned with code quality and efficiency in the Java programming language [8, 9, 10] are always a good starting point for developing your own standards.

The consideration of error types is also very informative. Issue tracking and commit messages in the SCM systems of open source projects such as Liferay [11] or GeoServer [12] show that a large proportion of the errors concern the graphical user interface (GUI). These are often corrections of display texts in buttons and the like. The fact that primarily display errors are reported may also lie in the perception of the users. For them, the behavior of an application is usually a black box, so they deal with the software accordingly. It is not at all wrong to assume that the application has few errors when the number of users is high.

The usual figures in computer science are software metrics, which can give management a sense of the physical size of a project. Used correctly, such an overview provides helpful arguments for management decisions. For example, McCabe’s cyclomatic complexity [13] can be used to derive the number of test cases needed. Statistics about lines of code and the usual counts of packages, classes and methods also show the growth of a project and can provide valuable information.
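As a small, hypothetical illustration of how the metric translates into test effort (the method below is invented for this purpose): the complexity counts one path per decision point plus one, which corresponds to the number of linearly independent paths and is commonly used as a guideline for the number of test cases in basis path testing.

```java
public class ShippingCalculator {

    // Two decision points (the two if statements) plus the single entry path
    // give a cyclomatic complexity of 3, i.e. three linearly independent
    // paths; basis path testing therefore calls for three test cases, e.g.
    // a small parcel, a heavy parcel and an express delivery.
    double shippingCosts(double weightKg, boolean express) {
        double costs = 4.90;
        if (weightKg > 10.0) {
            costs += 5.00;
        }
        if (express) {
            costs *= 2;
        }
        return costs;
    }
}
```

Coverage tools such as Cobertura (see Figure 2) report exactly these branches, so the metric and the measured coverage can be compared directly.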

A very informative way of processing this information is the Code-City project [14], which visualizes such a distribution as a city map. It makes it impressively clear where dangerous monoliths can arise and where orphaned classes or packages occur.

Figure 3: Maven JDepend plugin – numbers with little meaning

Conclusion

In day-to-day business, one is content to spread hectic bustle and put on a stressed face. By producing countless meters of paper, personal productivity is subsequently proven. The energy consumed in this way could be used much more sensibly through a consistently considered approach.

Loosely based on Kant’s “Sapere Aude”, simple solutions should be encouraged and demanded. Employees who need complicated structures to emphasize their own genius in the team may not be supporting pillars on which joint successes can be built. Cooperation with unteachable contemporaries is quickly reconsidered and, if necessary, corrected.

Many roads lead to Rome – and Rome was not built in a day. However, it cannot be denied that at some point the time has come to break ground. The choice of paths is not an undecidable problem either. There are safe paths and dangerous trails on which even experienced hikers have their fair share of trouble reaching their destination safely.

For successful project management, it is essential to lead the pack on solid and stable ground. This does not fundamentally rule out unconventional solutions, provided they are appropriate. The statement in decision-making bodies: “What you are saying is all correct, but there are processes in our company to which your presentation cannot be applied” is best rebutted with the argument: “That is quite correct, so it is now our task to work out ways of adapting the company processes in line with known success stories, instead of spending our time listing reasons for keeping everything the same. I’m sure you’ll agree that the purpose of our meeting is to solve problems, not ignore them.”


This is how corporate knowledge becomes tangible

The expertise of its own employees is a significant economic factor for any organization. This makes it all the more important to store experience and expertise permanently and make it available to other employees. A central knowledge management server takes on this task and helps to ensure long-term productivity in the company.

(c) 2011 Marco Schulz, Materna Monitor, Ausgabe 2, S.32-33
Original article translated from German

The complexity of today’s highly networked working world requires smooth interaction between a wide range of specialists. Knowledge transfer plays an important role here. This exchange is made more difficult when team members work in different locations with different time zones or come from different cultural backgrounds. Companies with worldwide locations are aware of this problem and have developed appropriate strategies for company-wide knowledge management. In order to introduce this successfully, the IT solution to be used should be seen as a methodology instead of focusing on the actual tool. Once those responsible have made the decision in favor of a particular software solution, this should be retained consistently. Frequent system changes reduce the quality of the links between the stored content. Since there is no normalized standard for the representation of knowledge, significant conversion losses can occur when switching to new software solutions.

Various mechanisms for different content

Information can be stored in IT systems in various ways. The individual forms of representation differ in terms of presentation, structuring and use. In order to be able to edit documents jointly and without conflicts while versioning them at the same time, as is necessary for specifications or documentation, wikis [1] are ideally suited, since they were originally developed precisely for this use. The documents stored there are usually project-specific and should also be organized in this way.

Cross-project documents in the wiki are, for example, explanations of technical terms, a central list of abbreviations, or a Who’s Who of company employees including contact data and subject areas. The latter can in turn be linked to the technical term explanation. Comprehensive content can then be kept up-to-date centrally and can be conveniently linked to the corresponding project documents. This procedure avoids unnecessary repetitions and the documents to be read become shorter, but still contain all the necessary information. Johannes Siedersleben already described the risks of excessively long documentation in his book Softwaretechnik [2] in 2003.

Knowledge that has more the character of a FAQ should better be organized via a forum. Grouping by topics in which questions are stored along the lines of “How can I …?” makes it easier to find possible solutions. A particularly attractive feature is the fact that a forum of this kind evolves over time in line with demand. Users can formulate their own questions and post them in the forum. As a rule, qualified answers to newly posed questions are not long in coming.

Suitable candidates for blogs are, for example, general information about the company, status reports or tutorials. These are documents that tend to have an informative character, are not form-bound or are difficult to assign to a specific topic. Short information (tweets [3]) via Twitter, thematically grouped in channels, can also enrich project work. They additionally minimize the e-mails in one’s own mailbox. Examples include reminders about a specific event, a newsflash about new product versions or information about a successfully completed work process. Integrating tweets into project work is relatively new, and suitable software solutions are correspondingly rare.

Of course, the list of possibilities is far from exhausted at this point. However, the examples already provide a good overview of how companies can organize their knowledge. Linking the individual systems to a portal [4], which has an overarching search and user administration, quickly creates a network that is also suitable as a cloud solution.

User-friendliness is a decisive factor for the acceptance of a knowledge platform. Long training periods, unclear structuring and awkward operation can quickly lead to rejection. Security requirements are also satisfied by access authorization for the individual contents at group level. A good example of this is the enterprise wiki Confluence [5], which allows different read and write permissions to be assigned to the individual document levels.

Naturally, a developer cannot be expected to describe his work in the right words for posterity after successful implementation. The fact that the quality of the texts in many documentations is not always sufficient has also been recognized by the Merseburg University of Applied Sciences, which offers the course of study Technical Editing [6]. Cross-reading by other project members has therefore proved to be a suitable means of ensuring the quality of the content. To facilitate the writing of texts, it is helpful to provide a small guide – similar to the Coding Convention.

Conclusion

A knowledge database cannot be implemented overnight. It takes time to compile enough information in it. Only through interaction and corrections of incomprehensible passages does the knowledge reach a quality that invites transfer. Every employee should be encouraged to enrich existing texts with new insights, to resolve incomprehensible passages or to add search terms. If the process of knowledge creation and distribution is lived in this way, fewer documents will be orphaned and the information will always be up-to-date.

Automation options in software configuration management

Software development offers some extremely efficient ways to simplify recurring tasks through automation. The elimination of tedious, repetitive, monotonous tasks and a resulting reduction in the frequency of errors in the development process are by no means all facets of this topic.

(c) 2011 Marco Schulz, Materna Monitor, Ausgabe 1, S.32-34
Original article translated from German

The motivation for establishing automation in the IT landscape is largely the same. Recurring tasks are to be simplified and solved by machine without human intervention. The advantages are fewer errors in the use of IT systems, which in turn reduces costs. As simple and advantageous as the idea of processes running independently sounds, implementation is less trivial. It quickly becomes clear that for every identified possibility of automation, implementation is not always feasible. Here, too, the principle applies: the more complex a problem is, the more complex its solution will be.

In order to weigh up whether the economic effort to introduce certain automatisms is worthwhile, the costs of a manual solution must be multiplied by the factor of the frequency of this work to be repeated. These costs must be set against the expenses for the development and operation of the automated solution. On the basis of this comparison, it quickly becomes clear whether a company should implement the envisaged improvement.

Tools support the development process

Especially in the development of software projects, there is considerable potential for optimization through automatic processes. Here, developers are supported by a wide range of tools that need to be skillfully orchestrated. Configuration and release management in particular deals in great detail with the practical use of a wide variety of tools for automating the software development process.

The existence of a separate build logic, for example in the form of a simple shell script, is already a good approach, but it does not always lead to the desired results. Platform-independent solutions are necessary for such cases, since development is very likely to take place in a heterogeneous environment. An isolated solution always means increased adaptation and maintenance effort. Finally, automation efforts should simplify existing processes. Current build tools such as Maven and Ant take advantage of this platform independence. Both tools encapsulate the entire build logic in separate XML files. Since XML has already established itself as a standard in software development, the learning effort is lower than with rudimentary home-grown solutions.

The use of central build logic forms the basis for further automation during development work. One aspect of this is automated tests in the form of unit tests in a continuous integration (CI) environment. A CI solution assembles all parts of a piece of software into a whole and works through all defined test cases. If the software could not be built or a test fails, the developer is notified by e-mail so that the error can be fixed quickly. Modern CI servers are configured against a version control system, such as Subversion or Git, so that the server only starts a build when changes have actually been made to the source code.

Complex software systems usually have dependencies on external components (libraries) that cannot be influenced by the project itself. The efficient management of the artifacts used in the project is the main strength of the build tool Maven, which has contributed to its wide distribution. When used properly, this eliminates the need to archive binary program parts within the version control system, resulting in smaller repositories and shorter commit times (successful completion of a transaction). New versions of the libraries used can be incorporated and tried out more quickly without error-prone manual copy actions. Libraries developed in-house can easily be distributed in a protected manner on the company network by using a dedicated repository server (such as Sonatype Nexus), in the spirit of reuse.
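As an illustration of the idea, a dependency declared in the pom.xml is resolved from a repository server at build time instead of being committed as a binary. The coordinates and the repository URL in the following excerpt are placeholders, not real artifacts:

```xml
<project>
  <!-- excerpt from a pom.xml; coordinates are placeholders -->
  <dependencies>
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>commons-utils</artifactId>
      <version>1.2.3</version>
    </dependency>
  </dependencies>

  <!-- company-internal repository server for self-developed libraries -->
  <repositories>
    <repository>
      <id>internal</id>
      <url>https://repo.example.com/maven2</url>
    </repository>
  </repositories>
</project>
```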

When evaluating a build tool, the reporting options should not be neglected. Automated monitoring of code quality using metrics, for example through the Checkstyle tool, is an excellent aid for project management to realistically assess the current status of the project.
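A sketch of how such reporting can be wired into a Maven build: the maven-checkstyle-plugin entered in the reporting section generates a report as part of the project site (the version number is only an example):

```xml
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>3.3.1</version>
    </plugin>
  </plugins>
</reporting>
```

Running mvn site then produces the Checkstyle report alongside the other project reports.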

Not too much new technology

With all the possibilities for automating processes, several paths can be taken. It is not uncommon for development teams to have long discussions about which tool is particularly suitable for the current project. This question is difficult to answer in general terms, since every project is unique and the advantages and disadvantages of different tools must be compared with the project requirements.

In practical use, the limitation to a maximum of two novel technologies in the project has proven successful. Whether a tool is suitable is also determined by whether people with the appropriate know-how are available in the company. A good solution is a list approved by management with recommendations of the tools that are already in use or can be integrated into the existing system landscape. This ensures that the tools used remain clear and manageable.

Projects that run for many years must undergo modernization of the technologies used at longer intervals. In this context, suitable times must be found to migrate to the new technology with as little effort as possible. A sensible date for switching to a newer technology is, for example, the change to a new major release of one's own project. This procedure allows a clean separation without having to migrate old project states to the new technology, which in many cases is not easily possible.

Conclusion

The use of automation in software development can, if used wisely, energetically support the achievement of the project goal. As with all things, excessive use is associated with some risks. The infrastructure used must remain understandable and controllable despite all the technology involved, so that project work does not come to a standstill in the event of system failures.