Tag Archives: Automation
The spectre of artificial intelligence
The hype surrounding artificial intelligence has been going on for several years. Currently, companies like OpenAI are causing quite a stir with freely accessible neural networks like ChatGPT. Users are fascinated by the possibilities, while some intellectuals of our time warn humanity about artificial intelligence. So what is it about the spectre of AI? In this article I explore this question, and you are invited to join me on the journey into the future.
In the spring of 2023, reports about the capabilities of artificial neural networks were everywhere. The trend is continuing and, in my opinion, will not abate any time soon. In the midst of the emerging gold rush, however, some bad news is also doing the rounds. Microsoft, for example, announced a massive investment in artificial intelligence and underlined the announcement in the spring of 2023 by dismissing just under 1,000 employees, reviving familiar fears from the age of industrialization and automation. Things were less spectacular at Digital Ocean, which laid off its entire content creation and documentation team. Some people quickly and rightly asked whether AI will now make professions such as programmer, translator, journalist or editor obsolete. For now, I would answer this question with a no. In the medium term, however, changes will come, as history has already taught us. Something old passes away while something new comes into being. So follow me on a little historical excursion.
To do this, we first look at the various stages of industrialization, which originated in England in the second half of the 18th century. Even the meaning of the original Latin term industria, which can be translated as diligence, is interesting. Which leads us to Norbert Wiener and his book God and Golem, Inc. [1] from 1964. In it he publicly pondered whether people who create machines that can in turn create machines are gods, a notion I, for my part, would not subscribe to. But let's return to industrialization for the time being.
The introduction of the steam engine and the use of location-independent energy sources such as coal enabled precise mass production. As machines made the automation of production cheaper, manual home workplaces were displaced; in exchange, cheaper products became available in stores. There were also significant changes in transportation. The railroad allowed faster, more comfortable and cheaper travel and catapulted mankind into a globalized world, because goods could now travel long distances in a short time without any problems. Today, when we look back at the discussions that took place when the railroad began its triumphal march, we can only smile. Some intellectuals of the time argued that speeds of more than 30 kilometers per hour in a train would literally crush the human occupants, a fear that fortunately turned out to be unfounded.
While people in the first industrial revolution could no longer earn an income from working at home, they found an alternative to continue earning a living by working in a factory.
The second industrial revolution is characterized by electrification, which further increased the degree of automation. Machines became less cumbersome and more precise. But new inventions also entered daily life. Fax, telephone and radio spread information at a rapid pace. This brought us into the Information Age and accelerated not only our communication, but also our lives. We created a society that is primarily characterized by the saying “time is money”.

The third industrial revolution blessed mankind with a universal machine whose functionality is determined by the programs (software) running on it. Nowadays, computers support us in a wide range of activities. Modern cash register systems do much more than just spit out the total amount of the purchase made. They log the flow of money and goods and allow the collected data to be evaluated for optimization. This is a new quality of automation that we have achieved in the last 200 years. With the widespread availability of artificial neural networks, we are now on our way out of this phase, which is why we are currently in the transformation to the fourth industrial revolution. How else do we as humans intend to cope with the constantly growing flood of information?
Even though Industry 4.0 focuses on the networking of machines, this is not a real revolution. The Internet is merely a consequence of the previous development, enabling communication between machines. We can compare this with the replacement of the steam engine by the electric motor: the real innovation lay in the electric machines that changed our communication. The same is now happening in our time through the broad field of artificial intelligence.
In the near future, we will no longer use computers the way we do today. That's because today's computers owe their existence to the previously limited communication between humans and machines. Keyboard and mouse are actually clumsy input devices: they are slow and prone to error. Voice and gesture control via microphone and camera will replace mouse and keyboard. We will talk to our computers the way we talk to other people. This also means that today's computer programs will become obsolete. We will no longer have to fill out tedious input masks in graphical user interfaces to reach our goal. Gone are the days when I type my articles; I will dictate them and my computer will display the text for me to proofread. Presumably, the profession of speech therapist will then experience a significant upswing.
There will certainly also be enough outcries from people who fear the disintegration of human communication. This fear is not unfounded. Just look at the development of the German language since the turn of the millennium, which was marked by the emergence of various text messaging services and the habit of optimizing messages with as many abbreviations as possible. That in turn left parents with question marks on their foreheads when trying to decipher the content of their children's messages. Even though the current trend is away from text messages towards audio messages, it does not mean that our language will stop changing. I myself have observed for years that many people are no longer able to express themselves correctly in writing or to extract content from written texts. In the long run, this could lead to skills such as reading and writing being unlearned. Classical print products such as books and magazines would thus also become obsolete, since content can just as well be produced as video or podcast. Our intellectual abilities would degenerate in the long run.
Since the turn of the millennium, using computers has become easier and easier for many people. So first the good news: it will become much easier still, because human-machine interaction is becoming more intuitive. At the same time, we will see more and more major Internet portals shutting down their services because their business model is no longer viable. Here is a quick example.
As a programmer, I often use the website StackOverflow to find help with problems. The information on this website about programming issues is now so extensive that suitable solutions to one's own concerns can quickly be found by searching Google and the like, without having to formulate questions oneself. So far, so good. But if a neural network like ChatGPT is integrated into the programming environment to answer all such questions, the number of visitors to StackOverflow will decline continuously. This in turn has an impact on the advertising revenue that allows the service to be offered free of charge on the net. Initially, this will be compensated by operators of AI systems that access StackOverflow's data paying a flat fee for the use of the database. However, this will not stop the dwindling number of visitors, which will lead either to a paywall preventing free use or to the service being discontinued completely. Many offers on the Internet will run into similar problems, which in the long term will ensure that the Internet as we know it disappears.
Let’s imagine what a future search query for the search term ‘industrial revolution’ might look like. I ask my digital assistant: What do you know about industrial revolution? – Instead of searching through a seemingly endless list of thousands of entries for relevant results, I am read a short explanation with a personalized address that matches my age and level of education. Which immediately raises the question of who is judging my level of education and how?
This is a further degradation of our abilities, even if it feels very comfortable at first. If we no longer need to focus our attention on one specific thing over a long period of time, it will certainly be difficult for us to come up with new things in the future. Our creativity will be reduced to an absolute minimum.
The way data is stored will also change in the future. Complicated structures that are optimized and stored in databases will be the exception rather than the rule. Instead, I expect independent chunks of data that are concatenated like lists. Let's look at this together to get a good idea of what I mean.

As a starting point, let’s take Aldous Huxley’s book ‘Brave New World’ from 1932. In addition to the title, the author and the year of publication, we can add English as the language to the meta information. This is then followed by the entire contents of the book including preface and epilogue as plain ASCII text. Generic or changeable things like table of contents or copyright are not included at this stage. With such a chunk, we have defined an atomic datum that can be uniquely identified by a hash value. Since Huxley’s Brave New World was originally written in English, this datum is also an immutable source for all data derived and generated from it.
If Huxley's work is now translated into German or Spanish, this is the first derivation, with a reference to the original. Books may have been translated by different translators in different epochs. Thus the German translation by Herbert E. Herlitschka from 1933, titled 'Brave New World', gets a different reference hash than the translation by Eva Walch published in 1978 under the same title 'Brave New World'.
If audio books are now produced from the various texts, these are the second derivative of the original text, since they represent an abridged version. The abridged text is also created as an independent version before the recording. The audio track created from it has the director as its author and references the speaker(s). As in theater, a text can be interpreted and staged by different people. Film adaptations can be treated in the same way.
Books, audio books and films in turn have graphics for the cover. These graphics again represent independent works, which are referenced with the corresponding version of the original.
Quotations from books can also be linked in this way. Similarly, critiques, interpretations, reviews and all kinds of other variations of content that refer to an original.
However, such data blocks are not limited to books; they can also be applied to musical scores, song lyrics and so on. The decisive factor is that one can, as far as possible, start from the original. The resulting files are optimized exclusively for software, since they do not contain any formatting visible to the human eye. Finally, the hash value of the file's content is sufficient as its file name.
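To make this a little more concrete, here is a minimal sketch of such content-addressed storage in a shell session. The file names and directory layout are purely hypothetical; the only point is that the hash of the content identifies the original and that derivations carry a reference back to it.

#!/bin/bash
# Minimal sketch: content-addressed storage of a text and one derivation.
# File names (brave-new-world-en.txt, brave-new-world-de.txt) are hypothetical.
mkdir -p store

# The original work: its SHA-256 hash becomes its file name.
ORIGINAL="brave-new-world-en.txt"
ORIG_HASH=$(sha256sum "$ORIGINAL" | cut -d' ' -f1)
cp "$ORIGINAL" "store/$ORIG_HASH"

# A derivation (e.g. a German translation) is stored the same way
# and carries a back reference to the hash of the original.
DERIVED="brave-new-world-de.txt"
DER_HASH=$(sha256sum "$DERIVED" | cut -d' ' -f1)
cp "$DERIVED" "store/$DER_HASH"
printf 'derived-from: %s\n' "$ORIG_HASH" > "store/$DER_HASH.meta"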
This is where the vision of the future begins. As authors, we can now use artificial intelligence to automatically create translations, illustrations, audio books and animations from a book. At this point I would like to briefly mention the neural network DeepL [2], which already delivers impressive translations and, handled skillfully, even improves the original text. Does DeepL now put translators and editors out of work? I would say no! Like us humans, artificial intelligences are not infallible; they also make mistakes. That is why I think the price for these jobs will drop dramatically in the future, because with their knowledge and such excellent tools these people can handle many times the amount of work they did before. The individual service becomes considerably cheaper, but because automation makes more individual services possible in the same period of time, this compensates the provider for the price reduction.
If we now look at the new possibilities that are open to us, it doesn’t seem to be so problematic for us. So what are people like Elon Musk trying to warn us about?
If we now assume that the fourth industrial revolution digitizes the entire knowledge of mankind and that all new knowledge is only created in digital form, computer algorithms with suitable computing power will be free to change these chunks of knowledge in such a way that we humans do not notice. A scenario loosely based on the Ministry of Truth from Orwell's novel 1984. If we unlearn our abilities out of convenience, we also have few possibilities of verification.
If you think this would not be a problem, I would like to refer you to the talk "Trust no scan" by David Kriesel [3]. What happened? In short, a construction company noticed discrepancies in copies of its construction plans: different copies of the same original contained different numerical values. For the executing trades in a construction project this is a fatal problem, for example if the bricklayer gets different dimensions than the concrete formers. The error was finally traced back to the fact that Xerox used pattern-recognition software in its scanners for OCR and the subsequent compression, and this software could not reliably recognize the characters it had read in.

But the quote from Ted Chiang, "Think of ChatGPT as a blurry jpeg of all the text on the Web," should also make us think. For people who only know AI as an application, the meaning of this statement is certainly hard to grasp. However, it is not as difficult to understand as it seems at first. Due to their structure, neural networks are always only a snapshot, because with every input the internal state of a neural network changes. It is the same with us humans: after all, we are only the sum of our experiences. If, in the future, more and more texts created by an AI are placed on the web without reflection, the AI will form its knowledge from its own derivations. The originals fade over time, because their weighting decreases as they are referenced less and less. If someone flooded the Internet with topics like the flat earth and lizard people, programs like ChatGPT would inevitably react to it and let it flow into their texts. These texts could then either be published on the net automatically by the AI itself or be spread by unreflective people. We have thus created a spiral that can only be broken if people have not given up their ability to exercise judgment out of convenience.
So we see that the calls for caution in dealing with AI are not unfounded. Even if I consider scenarios like the one in the 1983 movie WarGames [4] improbable, we should consider very carefully how far we want to go with AI technology, lest we end up like the sorcerer's apprentice and find that we can no longer master it.
References
Learn to walk with Docker and PostgreSQL

After some years, the virtualization tool Docker has proven its importance for the software industry. Usually, when you hear something about virtualization, you may think it is something for administrators and will not affect you as a developer much. But wait, you might not be right. Having some basic knowledge about Docker will help you as a developer in your daily business.
Step 0: create a local bridged network
docker network create -d bridge --subnet=172.18.0.0/16 services
The name of the network is services and it is bound to the IP address range 172.18.0.0 to 172.18.0.255. You can verify the success yourself by typing:
docker network ls
An output like the one below should appear:
NETWORK ID     NAME       DRIVER    SCOPE
ac2f58175229   bridge     bridge    local
a01dc5513882   host       host      local
1d3d3ac42a40   none       null      local
82da585ee2df   services   bridge    local
The network step is important because it defines a permanent address at which applications can establish a connection with the PostgreSQL DBMS. If you don't do this, Docker manages the IP addresses itself, and when you run multiple containers on your machine the IP addresses could change after a system reboot, depending mostly on the order in which the containers are started.
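If you want to double-check the configuration, you can additionally inspect the network; this is just an optional check and not part of the tutorial steps:

# Show subnet, gateway and the containers attached to the bridged network
docker network inspect services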
Step 1: create the container and initialize the database
docker run -d --name pg-dbms --restart=no \
--net services --ip 172.18.0.20 \
-e POSTGRES_PASSWORD=s3cr3t \
-e PGPASSWORD=s3cr3t \
postgres:11
If you wish your PostgreSQL to always be up after you restart your system, you should change the restart policy from no to always. The second line configures the network connection we defined in step 0. After you have created the instance pg-dbms of your PostgreSQL 11 Docker image, you need to check whether it was successful. You can do this with the
docker ps -a
command. If your container is still running after around 30 seconds, you did everything right.
Step 2: copy the initialized database directory to a local directory on your host system
docker cp pg-dbms:/var/lib/postgresql/data /home/ed/postgres
The biggest problem with the current container is that all data will be lost when you erase the container. This means we need to find a way to save this data permanently. The easiest way is to copy the data directory from your container to a directory on your host system. The copy command needs two parameters, source and destination. For the source you specify the container from which you want to grab the files; in our case the container is named pg-dbms. The destination is a postgres folder in the home directory of the user ed. If you use Windows instead of Linux, it works the same way; just adapt the directory path and try to avoid whitespace. When the files appear in the defined directory, you are done with this step.
Step 3: stop the current container
docker stop pg-dbms
In case you wish to start a container, just replace the word stop with the word start. We no longer need the container we created to grab the initial files for the PostgreSQL DBMS, so we can erase it; but to do that, the running container first has to be stopped.
Step 4: remove the current container
docker rm pg-dbms
After the container is stopped, we are able to erase it with this command.
Step 5: recreate the container with an external volume
docker run -d --name pg-dbms --restart=no \
--net services --ip 172.18.0.20 \
-e POSTGRES_PASSWORD=s3cr3t \
-e PGPASSWORD=s3cr3t \
-v /media/ed/memory/pg:/var/lib/postgresql/data \
postgres:11
Now we can link the directory containing the exported initial database to a newly created PostgreSQL container; that's all. Note that the mounted host directory must contain the data files we copied out of the original container in step 2. The big benefit of all this is that every database we create in PostgreSQL, and its data, now lives outside the Docker container on our local machine. This allows much simpler backups and prevents losing information when a container has to be updated.
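As a quick sanity check (not part of the original steps), you can connect to the recreated instance from a throwaway client container on the same network; the password matches the POSTGRES_PASSWORD set above:

# Connect with psql from a temporary container attached to the services network
docker run -it --rm --net services -e PGPASSWORD=s3cr3t postgres:11 \
  psql -h 172.18.0.20 -U postgres -c 'SELECT version();'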
If you have other images instead of PostgreSQL from which you need to grab files to reuse, you can use this tutorial too; just adapt the image and the paths you need. The procedure is almost the same. If you would like to learn more about Docker, you can also watch my video Docker Basics in less than 10 Minutes. If you liked this short tutorial, share it with your friends and colleagues, and to stay informed don't forget to subscribe to my newsletter.
Modern Times (for Configuration Manager)
There seems to be a dire necessity to automate everything, even automation itself. This is the common understanding of, and therefore the motivation for, most DevOps teams. Let's have a look at typical Continuous Stupidities during a transformation from pure Configuration Management to DevOps.

In my role as Configuration and Release Manager, I saw gaps in the build structure or in the software architecture in close to every project I joined, which I had to fix by optimizing the build jobs. But often you can't fix symptoms like long-running build scripts with just a few clicks. In this post I will give a brief introduction to common problems in software projects that you need to overcome before you really think about implementing a DevOps culture.

- Build logic can’t fix a broken architecture. A huge number of SCM merge conflicts occur because of missing encapsulation of business logic. A function that is spread across many modules or services makes it very likely that a file will be touched by more than one developer.
- The necessity of orchestrated builds is a hint of architectural problems. Transitive dependencies, missing encapsulation and a heavy dependency chain are typical reasons for running into the chicken-and-egg problem. Design your artifacts to be as independent as possible.
- Build logic is developed by developers, not by administrators. People focused on operations have different concepts for maintaining artifact builds than software developers. A good anti-pattern example of a build structure is webMethods from Software AG: it does not provide a repository server like Sonatype Nexus to share dependencies, and the build always points to the dependencies inside a webMethods installation. This practice violates the basic idea of build automation described in the book ‚Practices of an Agile Developer‘ from 2006.
- Not everything at once. Split up the build jobs into specific goals, such as create artifact, run acceptance tests, create API documentation and generate reports (see the sketch after this list). If one of the later steps fails, you don’t need to repeat everything. The execution time of the build is reduced dramatically and it is easier to maintain the build infrastructure.
- Don’t give too much flexibility to your build infrastructure. This point is strongly related to the first topic I explained. When a build manager has too little discipline, he will create extremely complex scripts nobody is able to understand. The JavaScript task runner Grunt is an example of how build logic can become messy and unreadable. This is one of the reasons why, for Java projects, I always decide in favor of Maven as the build tool, because it enforces governance on understandable builds.
- There is no requirement to automate the automation. By definition, complex automation levels have higher costs than simple tasks. Always think beforehand about the benefits of your automation activities to see whether it makes sense to spend time and money on them.
- We do what we can, but can we do what we do? Or, in the words of Grady Booch: „A fool with a tool is still a fool“. Understand the requirements of your project and decide on that basis which tools you choose. If you don’t have the resources, even the most professional solution cannot support you. Only once you have understood your problem are you able to learn new, professionally advanced processes.
- Build logic has to run first on the local development environment. If your build does not run on your local development machine, don’t call it build logic; it is just a hack. Build logic has to be platform and IDE independent.
- Don’t mix up source repositories. Organizing the sources into several folders inside one huge directory just creates a complex build without any flexibility. Sources should be structured by technology or as separate, independent modules.
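As a rough sketch of the “not everything at once” point, here is how a single monolithic build job could be split into separate stages for a standard Maven project; the goals shown are the usual Maven defaults, and the exact cut depends on your project:

# Each stage can run (and fail) independently, so a failing report step
# does not force a rebuild of the artifact.
mvn clean package        # stage 1: compile and create the artifact
mvn verify               # stage 2: run the (acceptance) test suite
mvn javadoc:javadoc      # stage 3: generate the API documentation
mvn site                 # stage 4: generate project reports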
Many of the points I mentioned can be understood by comparing them with the current situation in almost every project. The solution to fix things in a healthy manner is in most cases not that complicated; it just needs a bit of attention and good planning. The most important advice I can give is to follow the KISS principle: keep it simple, stupid. This means following the standard process as much as possible, without modifications. You don’t need to reinvent the wheel; there are reasons why a standard becomes a standard. Here is a short plan you can follow.
- First: understand the problem.
- Second: investigate a standard solution for the process.
- Third: develop a plan to apply the solution to the existing process landscape. This implies kicking out tools that do not support standard processes.
If you follow your own plan step by step, without jumping too far ahead to the next point, you will see positive results quite quickly.
By the way: if you would like guidance on the way to a successful DevOps process, don’t hesitate to contact me. I offer hands-on consulting and also training to build up a powerful DevOps team.
A Fool with a Tool is still a Fool
Even though considerable additional effort has been expended on testing in recent years in order to improve the quality of software projects [1], the path to continuously repeatable successes cannot be taken for granted. Stringent and targeted management of all available resources was and still is indispensable for reproducible success.
(c) 2016 Marco Schulz, Java aktuell Ausgabe 4, S.14-19
– Original article translated from

It is no secret that many IT projects are still struggling to reach a successful conclusion. One might well think that the many new tools and methods that have emerged in recent years offer effective solutions for dealing with the situation. However, if one takes a look at current projects, this impression changes.
The author has often been able to observe how this problem was supposed to be mastered by introducing new tools. Not infrequently, the efforts ended in resignation. The supposed miracle solution quickly turned out to be a heavyweight time robber with an enormous amount of self-management. The initial euphoria of all those involved quickly turned into rejection and not infrequently culminated in a boycott of its use. It is therefore not surprising that experienced employees are skeptical of all change efforts for a long time and only deal with them when they are foreseeably successful. Because of this fact, the author has chosen as the title for this article the provocative quote from Grady Booch, a co-founder of UML.
Companies often spend too little time establishing a balanced internal infrastructure. Even the maintenance of existing fragments is often postponed for various reasons. At the management level, companies prefer to focus on current trends in order to attract customers who expect a list of buzzwords in response to their RFP. Yet Tom De Marco already described it in detail in the 1970s [2]: People make projects (see Figure 1).
We do what we can, but can we do what we do?
That a project, despite the best intentions and intensive efforts, finds a happy ending is unfortunately not the rule. But when can one speak of a failed project in software development? Abandoning all activities due to a lack of prospects of success is of course an obvious reason, but in this context it is rather rare. Rather, one gains this insight during the post-project review of completed orders. In controlling, for example, weak points come to light when profitability is determined.
The reasons for negative results are usually exceeding the estimated budget or the agreed completion date. Usually, both conditions apply at the same time, as the endangered delivery deadline is countered by increasing personnel. This practice quickly reaches its limits, as new team members require an induction period, visibly reducing the productivity of the existing team. Easy-to-use architectures and a high degree of automation mitigate this effect somewhat. Every now and then, people also move to replace the contractor in the hope that new brooms sweep better.
A quick look at the top 3 list of major projects that have failed in Germany shows how a lack of communication, inadequate planning and poor management have a negative impact on the external perception of projects: Berlin Airport, Hamburg’s Elbe Philharmonic Hall and Stuttgart 21. Thanks to extensive media coverage, these undertakings are sufficiently well known and need no further explanation. Even if the examples cited do not originate from information technology, the recurring reasons for failure due to cost explosion and time delay can be found here as well.

The will to create something big and important is not enough on its own. Those responsible also need the necessary technical, planning, social and communication skills, coupled with the authority to act. Building castles in the air and waiting for dreams to come true does not produce presentable results.
Great success is usually achieved when as few people as possible have veto power over decisions. This does not mean that advice should be ignored, but every possible state of mind cannot be taken into account. This makes it all the more important for the person responsible for the project to have the authority to enforce his or her decision, but not to demonstrate this with all vigor.
It is perfectly normal for a decision-maker not to be in control of all the details. After all, you delegate implementation to the appropriate specialists. Here’s a brief example: When the possibilities for creating larger and more complex Web applications became better and better in the early 2000s, the question often came up in meetings as to which paradigm should be used to implement the display logic. The terms “multi-tier”, “thin client” and “fat client” dominated the discussions of the decision-making bodies at that time. Explaining the advantages of different layers of a distributed web application to the client was one thing. But to leave it up to a technically savvy layman to decide how to access his new application – via browser (“thin client”) or via a separate GUI (“fat client”) – is simply foolish. Thus, in many cases, it was necessary to clear up misunderstandings that arose during development. The narrow browser solution not infrequently turned out to be a difficult technology to master, because manufacturers rarely cared about standards. Instead, one of the main requirements was usually to make the application look almost identical in the most popular browsers. However, this could only be achieved with considerable additional effort. Similar observations were made during the first hype of service-oriented architectures.
The consequence of these observations shows that it is indispensable to develop a vision before the start of the project, the goals of which also correspond to the estimated budget. A reusable deluxe version with as many degrees of freedom as possible requires a different approach than a “we get what we need” solution. It’s less about getting lost in the details and more about keeping the big picture in mind.
Particularly in German-speaking countries, companies find it difficult to find the necessary players for successful project implementation. The reasons for this may be quite diverse and could be due, among other things, to the fact that companies have not yet understood that experts rarely want to talk to poorly informed and inadequately prepared recruitment service providers.
Getting things done!
Successful project management is not an arbitrary coincidence. For a long time, an insufficient flow of information due to a lack of communication has been identified as one of the negative causes. Many projects have their own inherent character, which is also shaped by the team that accepts the challenge in order to jointly master the task set. Agile methods such as Scrum [3], Prince2 [4] or Kanban [5] pick up on this insight and offer potential solutions to be able to carry out IT projects successfully.
Occasionally, however, it can be observed how project managers transfer planning tasks to the responsible developers for self-management under the pretext of the newly introduced agile methods. The author has frequently experienced how architects have tended to see themselves in day-to-day implementation work instead of checking the delivered fragments for compliance with standards. In this way, quality cannot be established in the long term, since the results are merely solutions that ensure functionality and, because of time and cost pressures, do not establish the necessary structures to ensure future maintainability. Agile is not a synonym for anarchy. This setup likes to be decorated with an overloaded toolbox full of tools from the DevOps department and already the project is seemingly unsinkable. Just like the Titanic!
It is not without reason that for years it has been recommended to introduce a maximum of three new technologies at the start of a project. In this context, it is also not advisable to always go for the latest trends right away. When deciding on a technology, the appropriate resources must first be built up in the company, for which sufficient time must be planned. The investments are only beneficial if the choice made is more than just a short-lived hype. A good indicator of consistency is extensive documentation and an active community. These open secrets have been discussed in the relevant literature for years.
However, how does one proceed when a project has been established for many years, but in terms of the product life cycle a swing to new techniques becomes unavoidable? The reasons for such an effort may be many and vary from company to company. The need not to miss important innovations in order to remain competitive should not be delayed for too long. This consideration results in a strategy that is quite simple to implement. Current versions are continued in the proven tradition, and only for the next major release or the one after that is a roadmap drawn up that contains all the necessary points for a successful changeover. For this purpose, the critical points are worked out and tested in small feasibility studies, which are somewhat more demanding than a “hello world” tutorial, to see how an implementation could succeed. From experience, it is the small details that can be the crumbs on the scale to determine success or failure.
In all efforts, the goal is to achieve a high degree of automation. Compared to constantly recurring tasks that have to be performed manually, automation offers the possibility of producing continuously repeatable results. However, it is in the nature of things that simple activities are easier to automate than complex processes. In this case, it is important to check the cost-effectiveness of the plans beforehand so that developers do not indulge completely in their natural urge to play and also work through unpleasant day-to-day activities.
He who writes stays
Documentation, the vexed topic, spans all phases of the software development process. Whether for API descriptions, the user manual, planning documents for the architecture or learned knowledge about optimal procedures – describing is not one of the favored tasks of all protagonists involved. It can often be observed that the common opinion seems to prevail that thick manuals stand for extensive functionality of the product. However, long texts in a documentation are more of a quality defect that tries the reader’s patience because he expects precise instructions that get to the point. Instead, they receive vague phrases with trivial examples that rarely solve problems.

This insight can also be applied to project documentation and has been detailed by Johannes Sidersleben [6], among others, under the metaphor about Victorian novellas. Universities have already taken up these findings. For example, Merseburg University of Applied Sciences has established the course of study “Technical Writing” [7]. It is hoped to find more graduates of this course in the project landscape in the future.
When selecting collaborative tools as knowledge repositories, it is always important to keep the big picture in mind. Successful knowledge management can be measured by how efficiently an employee finds the information they are looking for. For this reason, company-wide use is a management decision and mandatory for all departments.
Information has a different nature and varies both in its scope and in how long it remains current. This results in different forms of presentation such as wiki, blog, ticket system, tweets, forums or podcasts, to list just a few. Forums very optimally depict the question and answer problem. A wiki is ideal for continuous text, such as that found in documentation and descriptions. Many webcasts are offered as video, without the visual representation adding any value. In most cases, a well-understood and properly produced audio track is sufficient to distribute knowledge. With a common and standardized database, completed projects can be compared efficiently. The resulting knowledge offers a high added value when making forecasts for future projects.
Test & Metrics – the measure of all things
Just by skimming the Quality Report 2014, one quickly learns that the new trend is “software testing”. Companies are increasingly allocating contingents for this, which take up a volume similar to the expenditure for the implementation of the project. Strictly speaking, one extinguishes fire with gasoline at this point. On closer inspection, the budget is already doubled at the planning stage. It is often up to the skill of the project manager to find a suitable declaration for earmarked project funds.
Only a consistent check of test case coverage with suitable analysis tools ensures that sufficient testing has been done in the end. Even if one can hardly believe it: in an age in which software tests can be created more easily than ever before and different paradigms can be combined, extensive and meaningful test coverage is rather the exception (see Figure 2).
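As one possible way to make such a coverage check part of the build (a sketch assuming a Maven project and JaCoCo as the analysis tool, neither of which is prescribed here):

# Instrument the tests, run them and generate a coverage report under target/site/jacoco
mvn org.jacoco:jacoco-maven-plugin:prepare-agent test \
    org.jacoco:jacoco-maven-plugin:report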
It is well known that it is not possible to prove that software is free of errors. Tests only prove a defined behavior for the scenarios created. Automated test cases are in no way a substitute for manual code review by experienced architects. A simple example of this are the nested “try catch” blocks that occur from time to time in Java and have a direct effect on the program flow. Sometimes nesting can be intentional and useful; in that case, the error handling is not limited to printing the stack trace to a log file. The cause of this programming error lies in the inexperience of the developer and the IDE's unhelpful suggestion, for an expected exception, to enclose the instruction in its own “try catch” block instead of supplementing the existing routine with an additional “catch” statement. Wanting to detect this obvious error with test cases is, from an economic point of view, an infantile approach.
Typical error patterns can be detected inexpensively and efficiently by static test procedures. Publications that are particularly concerned with code quality and efficiency in the Java programming language [8, 9, 10] are always a good starting point for developing your own standards.
The consideration of error types is also very informative. Issue tracking and commit messages in SCM systems of open source projects such as Liferay [11] or GeoServer [12] show that a larger part of the errors concern the graphical user interface (GUI). These are often corrections of display texts in buttons and the like. The reporting of primarily display errors can also lie in the perception of the users. For them, the behavior of an application is usually a black box, so they deal with the software accordingly. It is not at all wrong to assume that the application has few errors when the number of users is high.
The usual computer science figures are software metrics that can give management a sense of the physical size of a project. Used correctly, such an overview provides helpful arguments for management decisions. For example, McCabe’s cyclomatic complexity [13] can be used to derive the number of test cases needed. Statistics about lines of code and the usual counts of packages, classes and methods also show the growth of a project and can provide valuable information.
A very informative processing of this information is the project Code-City [14], which visualizes such a distribution as a city map. It is impressive to recognize where dangerous monoliths can arise and where orphaned classes or packages occur.
Figure 3: Maven JDepend Plugin – numbers with little significance

Conclusion
In day-to-day business, one is content to spread hectic bustle and put on a stressed face. By producing countless meters of paper, personal productivity is subsequently proven. The energy consumed in this way could be used much more sensibly through a consistently considered approach.
Loosely based on Kant’s “Sapere Aude”, simple solutions should be encouraged and demanded. Employees who need complicated structures to emphasize their own genius in the team may not be supporting pillars on which joint successes can be built. Cooperation with unteachable contemporaries is quickly reconsidered and, if necessary, corrected.
Many roads lead to Rome – and Rome was not built in a day. However, it cannot be denied that at some point the time has come to break ground. The choice of paths is not an undecidable problem either. There are safe paths and dangerous trails on which even experienced hikers have their fair share of trouble reaching their destination safely.
For successful project management, it is essential to lead the pack on solid and stable ground. This does not fundamentally rule out unconventional solutions, provided they are appropriate. The statement in decision-making bodies: “What you are saying is all correct, but there are processes in our company to which your presentation cannot be applied” is best rebutted with the argument: “That is quite correct, so it is now our task to work out ways of adapting the company processes in line with known success stories, instead of spending our time listing reasons for keeping everything the same. I’m sure you’ll agree that the purpose of our meeting is to solve problems, not ignore them.”
References
Automation options in software configuration management
Software development offers some extremely efficient ways to simplify recurring tasks through automation. The elimination of tedious, repetitive, monotonous tasks and a resulting reduction in the frequency of errors in the development process are by no means all facets of this topic.
(c) 2011 Marco Schulz, Materna Monitor, Ausgabe 1, S.32-34
– Original article translated from

The motivation for establishing automation in the IT landscape is largely the same. Recurring tasks are to be simplified and solved by machine without human intervention. The advantages are fewer errors in the use of IT systems, which in turn reduces costs. As simple and advantageous as the idea of processes running independently sounds, implementation is less trivial. It quickly becomes clear that for every identified possibility of automation, implementation is not always feasible. Here, too, the principle applies: the more complex a problem is, the more complex its solution will be.
In order to weigh up whether the economic effort to introduce certain automatisms is worthwhile, the costs of a manual solution must be multiplied by the factor of the frequency of this work to be repeated. These costs must be set against the expenses for the development and operation of the automated solution. On the basis of this comparison, it quickly becomes clear whether a company should implement the envisaged improvement.
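Put as a simple rule of thumb (my formalization, not part of the original article): if a task recurs n times, automation pays off as soon as

n \cdot C_{\text{manual}} \;>\; C_{\text{development}} + C_{\text{operation}}

where C_manual is the cost of performing the task by hand once and the right-hand side covers building and running the automated solution.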
Tools support the development process
Especially in the development of software projects, there is considerable potential for optimization through automatic processes. Here, developers are supported by a wide range of tools that need to be skillfully orchestrated. Configuration and release management in particular deals in great detail with the practical use of a wide variety of tools for automating the software development process.
The existence of a separate build logic, for example in the form of a simple shell script, is already a good approach, but it does not always lead to the desired results. Platform-independent solutions are necessary for such cases, since development is very likely to take place in a heterogeneous environment. An isolated solution always means increased adaptation and maintenance effort. After all, automation efforts are meant to simplify existing processes. Current build tools such as Maven and Ant take advantage of this platform independence. Both tools encapsulate the entire build logic in separate XML files. Since XML has long established itself as a standard in software development, the learning effort is lower than with rudimentary solutions.
The use of central build logic forms the basis for further automation during development work. One aspect of this is automated tests in the form of unit tests in a continuous integration (CI) environment. A CI solution assembles all parts of a piece of software into a whole and works through all defined test cases. If the software could not be built or a test has failed, the developer is notified by e-mail so that the error can be fixed quickly. Modern CI servers are configured against a version control system such as Subversion or Git. As a result, the server only starts a build when changes have actually been made to the source code.
Complex software systems usually have dependencies on external components (libraries) that cannot be influenced by the project itself. The efficient management of the artifacts used in the project is the main strength of the build tool Maven, which has contributed to its widespread adoption. When used properly, this eliminates the need to archive binary program parts within the version control system, resulting in smaller repositories and shorter commit times (successful completion of a transaction). New versions of the libraries used can be incorporated and tried out more quickly without error-prone manual copy actions. Libraries developed in-house can easily be distributed within the company network in a protected manner, in the spirit of reuse, by using an own repository server (Sonatype Nexus).
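As an illustration of distributing an in-house library to such an internal repository server (a sketch; the URL, repository id and coordinates are placeholders, and in a real project the deployment would normally be configured in the POM rather than on the command line):

# Publish a locally built jar to an internal repository server
mvn deploy:deploy-file \
    -DgroupId=com.example \
    -DartifactId=my-library \
    -Dversion=1.0.0 \
    -Dpackaging=jar \
    -Dfile=target/my-library-1.0.0.jar \
    -Durl=https://nexus.example.com/repository/releases \
    -DrepositoryId=internal-releases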
When evaluating a build tool, the reporting capabilities should not be neglected. Automated monitoring of code quality using metrics, for example with Checkstyle, gives project management an excellent means of realistically assessing the current status of the project.
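For a Maven project this kind of check can be wired directly into the build; a minimal sketch (assuming the maven-checkstyle-plugin with its standard rule set) could look like this:

# Generate the Checkstyle report for the project
mvn checkstyle:checkstyle
# ... or let the build fail when violations are found
mvn checkstyle:check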
Not too much new technology
With all the possibilities for automating processes, several paths can be taken. It is not uncommon for development teams to have long discussions about which tool is particularly suitable for the current project. This question is difficult to answer in general terms, since every project is unique and the advantages and disadvantages of different tools must be compared with the project requirements.
In practical use, the limitation to a maximum of two novel technologies in the project has proven successful. Whether a tool is suitable is also determined by whether people with the appropriate know-how are available in the company. A good solution is a list approved by management with recommendations of the tools that are already in use or can be integrated into the existing system landscape. This ensures that the tools used remain clear and manageable.
Projects that run for many years must undergo modernization of the technologies used at longer intervals. In this context, suitable times must be found to migrate to the new technology with as little effort as possible. Sensible dates to switch to a newer technology are, for example, a change to a new major release of the own project. This procedure allows a clean separation without having to migrate old project statuses to the new technology. In many cases, this is not easily possible.
Conclusion
The use of automatisms for software development can, if used wisely, energetically support the achievement of the project goal. As with all things, excessive use is associated with some risks. The infrastructure used must remain understandable and controllable despite all the technologization, so that project work does not come to a standstill in the event of system failures.