A Fool with a Tool is still a Fool

Even though considerable additional effort has been invested in testing in recent years to improve the quality of software projects [1], the path to continuously repeatable success cannot be taken for granted. Stringent, targeted management of all available resources was and remains indispensable for reproducible success.

(c) 2016 Marco Schulz, Java aktuell Ausgabe 4, S.14-19
Original article translated from German

It is no secret that many IT projects are still struggling to reach a successful conclusion. One might well think that the many new tools and methods that have emerged in recent years offer effective solutions for dealing with the situation. However, if one takes a look at current projects, this impression changes.

The author has often observed how this problem was supposed to be solved by introducing new tools. Not infrequently, the efforts ended in resignation. The supposed miracle cure quickly turned out to be a heavyweight time sink demanding an enormous amount of self-management. The initial euphoria of all involved quickly turned into rejection and not infrequently culminated in a boycott of the tool's use. It is therefore not surprising that experienced employees remain skeptical of all change efforts for a long time and only engage with them once success is foreseeable. Because of this, the author has chosen the provocative quote from Grady Booch, one of the creators of UML, as the title of this article.

Companies often spend too little time establishing a balanced internal infrastructure. Even the maintenance of existing fragments is frequently postponed for various reasons. At the management level, companies prefer to focus on current trends in order to attract customers who expect a list of buzzwords in response to their RFP. Yet Tom DeMarco already described it in detail in the 1970s [2]: people make projects (see Figure 1).

We do what we can, but can we do anything?

That a project, despite best intentions and intensive efforts, finds a happy ending is unfortunately not the rule. But when can one speak of a failed project in software development? An abandonment of all activities due to a lack of prospects of success is of course an obvious reason, but in this context it is rather rare. More often, this insight is gained during the post-project review of completed orders. In controlling, for example, weak points come to light when profitability is determined.

The reasons for negative results are usually an exceeded budget or a missed completion date. Usually both apply at the same time, since the endangered delivery deadline is countered by adding personnel. This practice quickly reaches its limits, because new team members need an induction period, which visibly reduces the productivity of the existing team. Easy-to-grasp architectures and a high degree of automation mitigate this effect somewhat. Every now and then, the contractor is also replaced in the hope that new brooms sweep better.

A quick look at the top 3 list of major projects that have failed in Germany shows how a lack of communication, inadequate planning and poor management have a negative impact on the external perception of projects: Berlin Airport, Hamburg’s Elbe Philharmonic Hall and Stuttgart 21. Thanks to extensive media coverage, these undertakings are sufficiently well known and need no further explanation. Even if the examples cited do not originate from information technology, the recurring reasons for failure due to cost explosion and time delay can be found here as well.

Figure 1: Problem solving – “A bisserl was geht immer”, Monaco Franze

The will to create something big and important is not enough on its own. Those responsible also need the necessary technical, planning, social and communication skills, coupled with the authority to act. Building castles in the air and waiting for dreams to come true does not produce presentable results.

Great success is usually achieved when as few people as possible have veto power over decisions. This does not mean that advice should be ignored, but not every possible sensitivity can be taken into account. This makes it all the more important that the person responsible for the project has the authority to enforce his or her decisions, without demonstrating that authority with full force at every turn.

It is perfectly normal for a decision-maker not to be in command of every detail; after all, implementation is delegated to the appropriate specialists. A brief example: when the possibilities for creating larger and more complex web applications steadily improved in the early 2000s, the question often came up in meetings as to which paradigm should be used to implement the display logic. The terms "multi-tier", "thin client" and "fat client" dominated the discussions of the decision-making bodies at that time. Explaining the advantages of the different layers of a distributed web application to the client was one thing. But leaving it up to a technically savvy layman to decide how to access his new application, via browser ("thin client") or via a separate GUI ("fat client"), is simply foolish. In many cases it was therefore necessary to clear up misunderstandings that arose during development. The lean browser solution not infrequently turned out to be a technology that was difficult to master, because browser vendors rarely cared about standards. Instead, one of the main requirements was usually to make the application look almost identical in the most popular browsers, which could only be achieved with considerable additional effort. Similar observations were made during the first hype around service-oriented architectures.

The consequence of these observations is that it is indispensable to develop a vision before the project starts, one whose goals also match the estimated budget. A reusable deluxe version with as many degrees of freedom as possible requires a different approach than a "we get what we need" solution. It is less about getting lost in the details and more about keeping the big picture in mind.

Particularly in German-speaking countries, companies find it difficult to find the necessary players for successful project implementation. The reasons for this may be quite diverse and could be due, among other things, to the fact that companies have not yet understood that experts rarely want to talk to poorly informed and inadequately prepared recruitment service providers.

Getting things done!

Successful project management is not a matter of chance. An insufficient flow of information due to a lack of communication has long been identified as one of the main negative causes. Many projects have their own inherent character, which is also shaped by the team that accepts the challenge in order to master the task together. Methods such as Scrum [3], PRINCE2 [4] or Kanban [5] pick up on this insight and offer potential ways to carry out IT projects successfully.

Occasionally, however, one can observe project managers transferring planning tasks to the responsible developers for self-management under the pretext of newly introduced agile methods. The author has frequently seen architects who preferred to immerse themselves in day-to-day implementation work instead of checking the delivered fragments for compliance with standards. Quality cannot be established in the long term this way, since the results are merely solutions that ensure functionality and, because of time and cost pressure, do not establish the structures necessary for future maintainability. Agile is not a synonym for anarchy. Such a setup is then gladly decorated with an overloaded toolbox full of DevOps tools, and suddenly the project seems unsinkable. Just like the Titanic!

It is not without reason that for years the recommendation has been to introduce at most three new technologies at the start of a project. In this context, it is also not advisable to always jump on the latest trends right away. When deciding on a technology, the appropriate expertise must first be built up in the company, and sufficient time must be planned for this. The investment pays off only if the choice made is more than a short-lived hype. A good indicator of staying power is extensive documentation and an active community. These open secrets have been discussed in the relevant literature for years.

But how does one proceed when a project has been established for many years and, in terms of the product life cycle, a switch to new technologies becomes unavoidable? The reasons for such an effort may be many and vary from company to company. Still, the adoption of important innovations needed to remain competitive should not be postponed for too long. This consideration leads to a strategy that is quite simple to implement: current versions are continued in the proven tradition, and only for the next major release, or the one after that, is a roadmap drawn up that contains all the points necessary for a successful changeover. For this purpose, the critical points are worked out and tested in small feasibility studies, somewhat more demanding than a "hello world" tutorial, to see how an implementation could succeed. Experience shows that it is the small details that tip the scales between success and failure.

In all these efforts, the goal is a high degree of automation. Compared to constantly recurring tasks that have to be performed by hand, automation offers the possibility of producing continuously repeatable results. It is, however, in the nature of things that simple activities are easier to automate than complex processes. The cost-effectiveness of such plans should therefore be checked beforehand, so that developers do not give in completely to their natural urge to tinker and still work through the unpleasant parts of day-to-day business.
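To make this concrete, here is a minimal sketch of such an automation, assuming a JUnit 5 setup; the file name and the property keys are invented for illustration and would have to be replaced by a project's real conventions. A recurring manual step, checking a configuration file for mandatory entries before every release, becomes a repeatable build-time check:

import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.InputStream;
import java.util.List;
import java.util.Properties;

import org.junit.jupiter.api.Test;

// Replaces a recurring manual activity: before every release someone used to
// open the configuration by hand and verify that the mandatory entries exist.
class ConfigurationCheckTest {

    @Test
    void mandatoryKeysArePresent() throws Exception {
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/application.properties")) {
            assertNotNull(in, "application.properties is missing from the classpath");
            props.load(in);
        }
        // Hypothetical list of keys every deployment is expected to define.
        for (String key : List.of("db.url", "db.user", "mail.host")) {
            assertTrue(props.containsKey(key), "Missing mandatory configuration key: " + key);
        }
    }
}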

He who writes stays

Documentation, that vexed topic, spans all phases of the software development process. Whether API descriptions, the user manual, planning documents for the architecture or lessons learned about optimal procedures: writing things down is not among the favorite tasks of the protagonists involved. One can often observe the prevailing opinion that thick manuals stand for extensive product functionality. Yet long texts in documentation are more of a quality defect that tries the reader's patience, because readers expect precise instructions that get to the point. Instead, they receive vague phrases with trivial examples that rarely solve actual problems.

Figure 2: Test coverage with Cobertura

This insight can also be applied to project documentation and has been described in detail by Johannes Siedersleben [6], among others, using the metaphor of Victorian novels. Universities have already taken up these findings. Merseburg University of Applied Sciences, for example, has established the degree program "Technical Writing" [7]. It is to be hoped that more graduates of this program will be found in the project landscape in the future.

When selecting collaborative tools as knowledge repositories, it is always important to keep the big picture in mind. Successful knowledge management can be measured by how efficiently an employee finds the information they are looking for. For this reason, company-wide use is a management decision and mandatory for all departments.

Information comes in different forms and varies both in scope and in how long it stays current. This results in different forms of presentation such as wikis, blogs, ticket systems, tweets, forums or podcasts, to list just a few. Forums map the question-and-answer problem very well. A wiki is ideal for continuous text, such as that found in documentation and descriptions. Many webcasts are offered as video without the visual presentation adding any value; in most cases, a well-understood and properly produced audio track is sufficient to distribute knowledge. With a common and standardized database, completed projects can be compared efficiently, and the resulting knowledge offers high added value when making forecasts for future projects.

Test & Metrics – the measure of all things

Just by skimming the Quality Report 2014, one quickly learns that the new trend is "software testing". Companies are increasingly allocating budgets for it that approach the volume of the expenditure for implementing the project itself. Strictly speaking, this is fighting fire with gasoline: on closer inspection, the budget is already doubled at the planning stage. It is often left to the skill of the project manager to find a suitable declaration for the earmarked project funds.

Only a consistent check of test case coverage with suitable analysis tools ensures that sufficient testing has been done in the end. Even if it is hard to believe: in an age in which software tests can be created more easily than ever before and different paradigms can be combined, extensive and meaningful test coverage is rather the exception (see Figure 2).
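A small, invented example shows why the coverage figures are worth verifying: the test below executes the production method and may look sufficient, yet a coverage tool such as Cobertura immediately reveals that only one of the two branches is ever exercised (class names and values are made up for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical production code with two branches.
class PriceCalculator {
    double discountedPrice(double price, boolean premiumCustomer) {
        if (premiumCustomer) {
            return price * 0.9; // 10 percent discount for premium customers
        }
        return price;
    }
}

class PriceCalculatorTest {
    // The method is "tested", but only the premium branch is covered;
    // the regular-customer path shows up as uncovered in the report.
    @Test
    void premiumCustomerGetsDiscount() {
        assertEquals(90.0, new PriceCalculator().discountedPrice(100.0, true), 0.001);
    }
}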

It is well known that it is impossible to prove that software is free of errors. Tests only demonstrate a defined behavior for the scenarios they cover. Automated test cases are in no way a substitute for a manual code review by experienced architects. A simple example are the nested "try-catch" blocks that occur from time to time in Java and have a direct effect on the program flow. Nesting can sometimes be intentional and useful, but in those cases the error handling is not limited to printing a stack trace to a log file. The cause of this programming error lies in the inexperience of the developer and, at this point, the bad advice of the IDE, which for an expected error suggests enclosing the statement with its own "try-catch" block instead of supplementing the existing routine with an additional "catch" clause. Wanting to detect this obvious error with test cases is an infantile approach from an economic point of view.
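The following sketch recreates the described pattern with an invented configuration example: in the first variant, a new statement is wrapped in its own nested "try-catch" whose only handling is a printed stack trace, while the second variant simply supplements the existing routine with an additional "catch" clause:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

class TimeoutReader {

    // Anti-pattern: the parse statement gets its own nested try/catch instead of
    // extending the existing error handling; the stack trace is printed and the
    // program silently continues with a default value of 0.
    int readTimeoutNested(String path) {
        int timeout = 0;
        try (FileInputStream in = new FileInputStream(path)) {
            Properties props = new Properties();
            props.load(in);
            try {
                timeout = Integer.parseInt(props.getProperty("timeout"));
            } catch (NumberFormatException e) {
                e.printStackTrace(); // alters the program flow: execution continues as if nothing happened
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return timeout;
    }

    // Preferable variant: the existing routine is supplemented by an additional
    // catch clause and the error is handled in one place.
    int readTimeoutFlat(String path) {
        try (FileInputStream in = new FileInputStream(path)) {
            Properties props = new Properties();
            props.load(in);
            return Integer.parseInt(props.getProperty("timeout"));
        } catch (IOException | NumberFormatException e) {
            throw new IllegalStateException("Invalid configuration: " + path, e);
        }
    }
}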

Typical error patterns can be detected inexpensively and efficiently by static analysis. Publications that deal specifically with code quality and efficiency in the Java programming language [8, 9, 10] are always a good starting point for developing your own standards.
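As a deliberately simplified illustration, the following class contains a classic pattern that static analysis tools such as PMD or SpotBugs typically flag without executing a single line of code (the class and method names are invented):

// Deliberately simplified class for illustration only.
class LoginService {

    boolean isAdminBroken(String role) {
        // Reference comparison instead of equals(): happens to work for interned
        // string literals, but fails for values read from a request or a file.
        return role == "admin";
    }

    boolean isAdminFixed(String role) {
        // Constant-first equals() also removes the possible NullPointerException.
        return "admin".equals(role);
    }
}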

Looking at the types of errors is also very informative. Issue tracking and commit messages in the SCM systems of open source projects such as Liferay [11] or GeoServer [12] show that a large share of the errors concern the graphical user interface (GUI). These are often corrections of display texts in buttons and the like. That mainly display errors get reported may also lie in the perception of the users: for them, the behavior of an application is usually a black box, and they deal with the software accordingly. It is not at all wrong to assume that an application with a high number of users has comparatively few errors.

The usual figures in computer science are software metrics, which can give management a feel for the physical size of a project. Used correctly, such an overview provides helpful arguments for management decisions. For example, McCabe's cyclomatic complexity [13] can be used to derive the number of test cases needed. Statistics about lines of code and the usual counts of packages, classes and methods also show the growth of a project and can provide valuable information.
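As a simplified illustration of how this metric translates into test effort: cyclomatic complexity counts the decision points of a method plus one, and basis-path testing calls for at least that many test cases. The method below is invented for the example and has a complexity of three:

// Two decision points (loop condition and if) give a cyclomatic
// complexity of 2 + 1 = 3, so basis-path testing needs at least three
// cases: an empty array, an array without negatives, and one with a negative.
class Statistics {

    int countNegatives(int[] values) {
        int count = 0;
        for (int value : values) { // decision point 1: loop condition
            if (value < 0) {       // decision point 2: branch
                count++;
            }
        }
        return count;
    }
}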

A very informative processing of this information is the Code-City project [14], which visualizes such a distribution as a city map. It is impressive to see where dangerous monoliths can arise and where orphaned classes or packages occur.

Figure 3: Maven JDepend plugin – numbers with little meaning

Conclusion

In day-to-day business, people are content to spread hectic bustle and put on a stressed face. Personal productivity is then proven by producing countless meters of paper. The energy consumed in this way could be put to far more sensible use through a consistently thought-out approach.

Loosely based on Kant's "Sapere aude", simple solutions should be encouraged and demanded. Employees who need complicated structures to emphasize their own genius within the team may not be the supporting pillars on which joint successes can be built. Cooperation with unteachable contemporaries should be quickly reconsidered and, if necessary, corrected.

Many roads lead to Rome – and Rome was not built in a day. However, it cannot be denied that at some point the time has come to break ground. The choice of paths is not an undecidable problem either. There are safe paths and dangerous trails on which even experienced hikers have their fair share of trouble reaching their destination safely.

For successful project management, it is essential to lead the pack on solid and stable ground. This does not fundamentally rule out unconventional solutions, provided they are appropriate. The statement heard in decision-making bodies, "What you are saying is all correct, but there are processes in our company to which your presentation cannot be applied", is best countered with: "That is quite correct, so it is now our task to work out ways of adapting the company processes in line with known success stories, instead of spending our time listing reasons for keeping everything the same. I'm sure you'll agree that the purpose of our meeting is to solve problems, not ignore them."

