About Elmar Dott

Consultant, Speaker, Trainer & Writer

Non-Functional Requirements: Quality

published also on DZone 02.2020

From experience, most of us know how difficult it is to express what we mean when we talk about quality. Why is that? There are many different views on quality, and each of them has its importance. What has to be defined for our project is something that fits its needs and works within the budget. Striving for perfection can be counterproductive if a project is to be completed successfully. We will start from a research paper written by B. W. Boehm in 1976 called “Quantitative evaluation of software quality.” Boehm highlights the different aspects of software quality and the right context for each. Let’s have a deeper look into this topic.

When we discuss quality, we should focus on three topics: code structure, implementation correctness, and maintainability. Many managers only care about the first two aspects, not about maintenance. This is dangerous, because enterprises do not invest in individual development just to use the application for only a few years. Depending on the complexity of the application, the price of creation can reach hundreds of thousands of dollars. It is therefore understandable that the expected business value of such activities is estimated to be high. A lifetime of 10 years and more in production is very typical. To keep the benefits, adaptations will be mandatory, which also implies a strong focus on maintenance. Clean code does not automatically mean your application is easy to change. A very accessible article that touches on this topic was written by Dan Abramov. Before we go further into how maintenance could be defined, we will discuss the first point: the structure.

Scaffolding Your Project

An often underestimated aspect in development divisions is the missing standard for project structures. A fixed definition of where files have to be placed helps team members find points of interest quickly. Such a meta-structure for Java projects is defined by the build tool Maven. More than a decade ago, companies tested Maven and bent the tool to fit the folder structures already established in their projects. This resulted in heavy maintenance tasks, because more and more infrastructure tools for software development came into use. Those tools operate on the standard that Maven defines, meaning that every customization affects the success of integrating new tools or exchanging an existing tool for another.

Another aspect to look at is the company-wide defined META architecture. Where possible, every project should follow the same META architecture. This reduces the time it takes a new developer to join an existing team and catch up with its productivity. This META architecture has to stay open for adaptations, which can be achieved by two simple steps:

  1. Don’t be concerned with too many details;
  2. Follow the KISS (Keep it simple, stupid.) principle.

A classical pattern that violates the KISS principle is when standards get heavily customized. A very good example of the effects of strong customization is described by George Schlossnagle in his book “Advanced PHP Programming.” In chapter 21 he explains the problems created for the team by patching the original PHP core instead of following the recommended way via extensions. As a consequence, every update of the PHP version had to be manipulated by hand to include the team's own adaptations to the core. Taken together, structure, architecture, and KISS already define three quality gates, which are easy to implement.

The open-source project TP-CORE, hosted on GitHub, concerns itself with the aforementioned structure, architecture, and KISS. There you can find an approach for putting this into practice. This small Java library rigidly follows the Maven convention with its directory structure. For fast compatibility detection, releases are defined by semantic versioning. The layer structure was chosen as its architecture and is fully described there. The main architectural decisions can be summarized as follows:

Each layer is defined by its own package, and the files also follow a strict naming rule. No special pre- or postfix is used. The functionality Logger, for example, is declared by an interface called Logger and the corresponding implementation LogbackLogger. The API interfaces can be found in the package “business” and the implementation classes in the package “application.” Naming like ILogger and LoggerImpl should be avoided. Imagine a project that was started 10 years ago where LoggerImpl was based on Log4J. Now a new requirement arises, and the log level needs to be changed at run time. To solve this challenge, the Log4J library could be replaced with Logback. Now it becomes clear why it is a good idea to name the implementation class like the interface, combined with the implementation detail: it makes maintenance much easier! Equal conventions can also be found within the Java standard API. The interface List is implemented by an ArrayList. Obviously, the interface is not labeled as something like IList and the implementation not as ListImpl.
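
A minimal sketch of this convention could look like the following; the signatures are simplified for illustration and are not the actual TP-CORE API:

// business layer: the API is a plain interface without any prefix
public interface Logger {
    void log(String message);
}

// application layer: the implementation carries the technical detail in its name
public class LogbackLogger implements Logger {

    private final org.slf4j.Logger delegate =
            org.slf4j.LoggerFactory.getLogger(LogbackLogger.class);

    @Override
    public void log(String message) {
        // delegates to the Logback binding behind the SLF4J facade
        delegate.info(message);
    }
}
Java

Replacing Logback would then only require a new class such as a hypothetical Log4jLogger next to the interface, while all callers keep programming against Logger.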

Summarizing this short paragraph: a full set of measurement rules was defined to describe our understanding of structural quality. From experience, this description should be short. If other people can easily comprehend your intentions, they will willingly accept your guidance and defer to your knowledge. In addition, the architect will be much faster in detecting rule violations.

Measure Your Success

The most difficult part is keeping the code clean. Some advice is not bad per se, but in the context of your project it may not prove useful. In my opinion, the most important rule is to always activate compiler warnings, no matter which programming language you use! All compiler warnings have to be resolved when a release is prepared. Companies dealing with critical software, like NASA, strictly apply this rule in their projects, with great success.

Coding conventions about naming, line length, and API documentation, like JavaDoc, can easily be defined and checked by tools like Checkstyle. This process can run fully automated during your build. But be careful: even if the code checkers pass without warnings, this does not mean that everything is working optimally. JavaDoc, for example, is problematic. With an automated Checkstyle run it can be assured that this API documentation exists, but we still have no idea about the quality of those descriptions.

There should be no need to discuss the benefits of testing here; let us rather take a short walk through test coverage. The industry standard of 85% covered code in test cases should be followed, because coverage below 85% usually does not reach the complex parts of your application. 100% coverage, on the other hand, just burns your budget fast without resulting in higher benefits. A prime example of this is the TP-CORE project, whose test coverage is mostly between 92% and 95%. This was done to explore what is realistically possible.

As already explained, the business layer contains just interfaces defining the API. This layer is explicitly excluded from the coverage checks. Another package is called internal and contains hidden implementations, like the SAX DocumentHandler. Because of the dependencies the DocumentHandler is bound to, it is very difficult to test this class directly, even with mocks. This is unproblematic, since the class is only meant for internal usage. In addition, the class is implicitly tested by the implementation that uses the DocumentHandler. To reach higher coverage, it could also be an option to exclude all internal implementations from the checks. But it is always a good idea to observe the implicit coverage of those classes to detect aspects you may be unaware of.

Besides the low-level unit tests, automated acceptance tests should also be run. Paying close attention to these points can avoid a variety of problems. But never trust those fully automated checks blindly! Regularly repeated manual code inspections will always be mandatory, especially when working with external vendors. In our talk at JCON 2019, we demonstrated how easily test coverage can be faked. To detect other vulnerabilities you can additionally run checkers like SpotBugs.

Tests don’t indicate that an application is free of failures, but they indicate a defined behavior for implemented functionality.

For a while now, SCM suites like GitLab or Microsoft Azure have supported pull requests, which were introduced long ago by GitHub. These workflows are nothing new; IBM Synergy used to apply the same technique. A Build Manager was responsible for merging the developers’ changes into the codebase. In practice, all the revisions delivered by the developers were simply added to the repository by the Build Manager, who did not hold sufficiently profound knowledge to judge the implementation quality. The usual practice was merely to ensure that the build was not broken and that the compile always produced an artifact.

Enterprises have discovered this as a new strategy for handling pull requests. Managers now often decide to use pull requests as a quality gate. In my personal experience, this slows down productivity because it takes time until the changes are available in the codebase. An understanding of the branch and merge mechanism helps you to decide for a simpler branch model, like release branch lines. On those branches, tools like SonarQube can operate to observe the overall quality goal.

If a project needs an orchestrated build, with a defined order in which artifacts have to be created, you have a strong hint that refactoring is needed.

The coupling between classes and modules is often underestimated. It is very difficult to get an automated visualization of the bindings between modules. You will find out very quickly what effect a violation of light coupling has, because of the resulting increase in complexity in your build logic.

Repeat Your Success

Rest assured, changes will happen! It is a challenge to keep your application open for adjustments. Several of the previous recommendations have implicit effects on future maintenance. Good source quality simplifies the effort of being prepared, but there is no guarantee. In the worst case the end of the product lifecycle (EOL) is reached when mandatory improvements or changes cannot be realized anymore, for example because of an eroded code base.

As already mentioned, light coupling brings numerous benefits with respect to maintenance and reuse. Reaching this goal is not as difficult as it might look. First, try to avoid the inclusion of third-party libraries as much as possible. Just to check whether a String is empty or null, it is unnecessary to depend on an external library; these few lines are quickly written yourself. A second important point to consider in relation to external libraries: “only one library to solve a problem.” If your project deals with JSON, decide on one implementation and don’t incorporate various artifacts. These two points have a heavy impact on security: a third-party artifact we avoid cannot introduce any security leaks.
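
As a minimal sketch (the class and method names are only illustrative, not part of TP-CORE), such a check takes only a few lines:

public final class StringUtils {

    private StringUtils() {
        // utility class, no instances needed
    }

    // true when the given String is null, empty, or contains only whitespace
    public static boolean isEmpty(String value) {
        return value == null || value.trim().isEmpty();
    }
}
Java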

After the decision is taken for an external implementation, try to wrap its usage in your project by applying design patterns like proxy, facade, or wrapper. This allows a replacement to be done more easily, because the code changes are not spread around the whole codebase. You don’t need to change everything at once if you follow the advice on how to name the implementation class and provide an interface. Even though an SCM is designed for collaboration, there are limitations when more than one person is editing the same file. Using a design pattern to hide information allows you an iterative roll-out of your changes.
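
A hypothetical sketch of such a facade is shown below; the JsonTools interface and its single method are invented for illustration, and the Jackson ObjectMapper is just one possible implementation hidden behind it:

// the rest of the codebase depends only on this small facade
public interface JsonTools {
    String toJson(Object value);
}

// one concrete binding, kept in a single place and therefore easy to replace
public class JacksonJsonTools implements JsonTools {

    private final com.fasterxml.jackson.databind.ObjectMapper mapper =
            new com.fasterxml.jackson.databind.ObjectMapper();

    @Override
    public String toJson(Object value) {
        try {
            return mapper.writeValueAsString(value);
        } catch (com.fasterxml.jackson.core.JsonProcessingException ex) {
            throw new IllegalArgumentException("Could not serialize object to JSON.", ex);
        }
    }
}
Java

If the project later switches to another JSON artifact, only this single class has to be exchanged, while the rest of the code keeps calling JsonTools.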

Conclusion

As we have seen, a nonfunctional requirement is not that difficult to describe. With a short checklist, you can clearly define the important aspects for your project. It is not necessary to check all points for every code commit in the repository; this would in all probability just raise costs without resulting in higher benefits. Running a full check about a day before the release is an effective solution to keep quality in an agile context and will help you recognize where optimization is necessary. Good points of interest (POI) for securing quality are the revisions in the code base between releases. This gives you comparable statistics and helps to improve estimations.

Of course, in this short article it is almost impossible to cover all aspects regarding quality. We hope our explanation helps you to link theory to best practice by example. In conclusion, this should be your main takeaway: a high level of automation within your infrastructure, like continuous integration, is extremely helpful, but it does not free you from manual code reviews and audits.

Checklist

  • Follow common standards
  • KISS – keep it simple, stupid!
  • Equal directory structure for different projects
  • Simple META architecture, which can be reused as much as possible in other projects
  • Define and follow coding styles
  • When a release is prepared, no compiler warnings are accepted
  • Aim for test coverage of about 85%
  • Avoid third-party libraries as much as possible
  • Don't support more than one technology for a specific problem (e.g., JSON)
  • Cover foreign code with a design pattern
  • Avoid strong object/module coupling

Acceptance Tests in Java With JGiven

published also on DZone 01.2020

Most of the developer community knows what a unit test is, even if they don't write them. But there is still hope. The situation is changing. More and more projects hosted on GitHub contain unit tests.

In a standard set-up for Java projects with NetBeans, Maven, and JUnit, it is not that difficult to produce your first test code. This approach is the basis of Test Driven Development (TDD) and also exists in related techniques like Behavior Driven Development (BDD), also known as acceptance tests, which is what we will focus on in this article.

Difference Between Unit and Acceptance Tests

The easiest way to become familiar with this topic is to look at a simple comparison between unit and acceptance tests. In this context, unit tests are very low level. They execute a function and compare the output with an expected result. Some people think differently about it, but in our example, the only person responsible for a unit test is the developer.

Keep in mind that the test code is placed in the project and is always executed when the build runs. This provides quick feedback as to whether or not something went wrong. As long as a test doesn't cover too many aspects, we are able to identify the problem quickly and provide a solution. The design principle of those tests follows the AAA paradigm: define a precondition (Arrange), execute the invariant (Act), and check the postconditions (Assert). We will come back to this approach a little later.
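
A minimal sketch of the AAA structure with JUnit could look like this; the StringUtils.isEmpty helper is a hypothetical method used only for illustration:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class StringUtilsTest {

    @Test
    void detectEmptyString() {
        // Arrange: define the precondition
        String input = "   ";

        // Act: execute the function under test
        boolean result = StringUtils.isEmpty(input);

        // Assert: check the postcondition
        assertTrue(result);
    }
}
Java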

When we check the test coverage with tools like JaCoCo and cover more than 85 percent of our code with test cases, we can expect good quality. While increasing the test coverage, we specify our test cases more precisely and are able to identify some optimizations. This can mean removing or inverting conditions, because during testing we find out that it is almost impossible to reach those sections. Of course, the topic is a bit more complicated, but those details could be discussed in another article.

Acceptance tests are classified in the same way as unit tests: they belong to the family of regression tests. This means we want to observe whether changes we made to the code have no effect on functionality that already worked. In other words, we want to ensure that nothing which was already working gets broken by side effects of our changes. The tool of our choice is JGiven [1]. Before we look at some examples, we first need to touch on a bit of theory.

JGiven In-Depth

The test cases we define in JGiven are called scenarios. A scenario is a collection of four classes: the scenario itself, the Given displayed as given (Arrange), the Action displayed as when (Act), and the Outcome displayed as then (Assert).

In most projects, especially when there is a huge amount of scenarios and their execution consumes a lot of time, acceptance tests are organized in a separate project. With a build job on your CI server, you can execute those tests once a day to get fast feedback and to react early if something is broken. The code example we demonstrate contains everything in one project on GitHub [2], because it is just a small library and a separation would over-engineer the project. Usually, the party responsible for acceptance tests is the test center, not the developer.

The sample project TP-CORE is organized as a layered architecture. For our example, we picked the functionality for sending e-mails. The basic functionality to compose an e-mail is realized in the application layer and has a test coverage of up to 90 percent. The functionality to send the e-mail is defined in the service layer.

In our architecture, we decided that the service layer is the focus for defining acceptance tests. Here, we want to see whether our requirement to send an e-mail works well. Covering this layer with additional unit tests is not efficient because, in commercial projects, it just produces costs without gaining benefits. Having unit tests here as well would mean doing the work twice, because our JGiven tests already demonstrate and prove that our function works correctly. For the same reason, it makes no sense to generate test coverage for the test scenarios of the acceptance tests themselves.

Let's start with a practical example. First, we need to include our acceptance test framework into our Maven build. In case you prefer Gradle, you can use the same GAV parameters to define the dependencies in your build script.

<dependency>
   <groupId>com.tngtech.jgiven</groupId>
   <artifactId>jgiven-junit</artifactId>
   <version>0.18.2</version>
   <scope>test</scope>
</dependency>
XML

Listing 1: Dependency for Maven.

As you can see in Listing 1, JGiven works well together with JUnit. An integration with TestNG also exists; you just need to replace the artifactId with jgiven-testng. To enable the HTML reports, you need to configure the Maven plugin in the build lifecycle, as shown in Listing 2.

<build> 
   <plugins>
      <plugin>
         <groupId>com.tngtech.jgiven</groupId>
         <artifactId>jgiven-maven-plugin</artifactId>
         <version>0.18.2</version>
         <executions>
            <execution>
               <goals>
                  <goal>report</goal>
               </goals>
            </execution>
         </executions>
         <configuration>
            <format>html</format>
         </configuration>
      </plugin>
   </plugins>
</build>
XML

Listing 2: Maven Plugin Configuration for JGiven.

The report of our scenarios in the TP-CORE project is shown in Image 1. As we can see, the output is very descriptive and human-readable. This result is achieved by following some naming conventions for our methods and classes, which will be explained in detail below. First, let's discuss what we can see in our test scenario. We defined five preconditions:

  1. The configuration for the SMTP server is readable
  2. The SMTP server is available
  3. The mail has a recipient
  4. The mail has attachments
  5. The mail is fully composed

If all these conditions are true, the action “send a single e-mail” is performed. Afterwards, by checking the SMTP server, we verify that the mail has arrived. For the SMTP service, we use the small Java library GreenMail [3] to emulate an SMTP server. Now it becomes understandable why it is advantageous for acceptance tests to be written by other people: quality increases because conceptual inconsistencies appear early. As long as the tester cannot map the required scenario with the available implementations, the requirement is not fully implemented.

Producing Descriptive Scenarios

Now is a good time to dive deeper into the implementation details of our send e-mail test scenario. Our object under test is the class MailClientService. The corresponding test class is MailClientScenarioTest, defined in the test packages. The scenario class definition is shown in Listing 3.

@RunWith(JUnitPlatform.class)
public class MailClientScenarioTest
       extends ScenarioTest<MailServiceGiven, MailServiceAction, MailServiceOutcome> { 
    // do something 
}
Java

Listing 3: Acceptance Test Scenario for JGiven.

As we can see, we execute the test framework with JUnit 5. In the ScenarioTest, we can see the three classes Given, Action, and Outcome in a special naming convention. It is also possible to reuse already defined stage classes, but be careful with such practices, as this can cause side effects. Before we implement the test method, we need to define the execution steps. The procedure is equivalent for all three classes.

@RunWith(JUnitPlatform.class)
public class MailServiceGiven 
       extends Stage<MailServiceGiven> { 

    public MailServiceGiven email_has_recipient(MailClient client) {
        try { 
            assertEquals(1, client.getRecipentList().size());
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
        return self(); 
    } 
} 

@RunWith(JUnitPlatform.class)
public class MailServiceAction
       extends Stage<MailServiceAction> { 

    public MailServiceAction send_email(MailClient client) {
        MailClientService service = new MailClientService();
        try {
            assertEquals(1, client.getRecipentList().size());
            service.sendEmail(client);
        } catch (Exception ex) { 
            System.err.println(ex.getMessage());
        }
        return self();
    }
}

@RunWith(JUnitPlatform.class)
public class MailServiceOutcome 
       extends Stage<MailServiceOutcome> {

    public MailServiceOutcome email_is_arrived(MimeMessage msg) { 
         try {
             Address adr = msg.getAllRecipients()[0];
             assertEquals("JGiven Test E-Mail", msg.getSubject());
             assertEquals("noreply@sample.org", msg.getSender().toString());
             assertEquals("otto@sample.org", adr.toString());
             assertNotNull(msg.getSize());
         } catch (Exception ex) {
             System.err.println(ex.getMessage());
         }
         return self();
    }
}
Java

Listing 4: Implementing the AAA Principle for Behavioral Driven Development.

Now we have completed the cycle and can see how the test steps are glued together. JGiven supports a much larger vocabulary to fit more needs. To explore the full possibilities, please consult the documentation.
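
To illustrate how the stages are wired together, a minimal sketch of a test method inside MailClientScenarioTest could look like the following; the helpers prepareComposedMail() and fetchReceivedMessage() are assumed placeholders for the test setup and the GreenMail lookup, not the project's actual code:

@Test
public void sendSingleEmail() {
    // a fully composed mail client, prepared by the (assumed) test setup
    MailClient client = prepareComposedMail();

    given().email_has_recipient(client);
    when().send_email(client);

    // the received message would be fetched from the embedded GreenMail server
    MimeMessage received = fetchReceivedMessage();
    then().email_is_arrived(received);
}
Java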

Lessons Learned

In this short workshop, we covered all the important details needed to start with automated acceptance tests. Besides JGiven, other frameworks exist, like Concordion or FitNesse, competing for usage. Our choice fell on JGiven because of its helpful documentation, the simple integration into Maven builds and JUnit tests, and the descriptive, human-readable reports.

A negative point, which could keep people away from JGiven, might be the fact that you need to describe the tests in the Java programming language. That means the test engineer needs to be able to develop in Java in order to use JGiven. Apart from this small detail, our experience with JGiven is absolutely positive.


Talents wanted

During my career I have registered myself on tons of job portals. Until today, people contact me and I have no idea where they got my data. Nevertheless, after more than a decade of experience I decided to write down my personal resume. The reasons why I want to share my stories are varied. The most important point for me is the common bad habits of recruiters and how the situation gets worse year by year. I hope other employees can reconsider their own current situation and will not feel alone anymore. As long as no realistic public discussion about this topic is taking place, nothing will change. Let me give you, as employee, freelancer, or company, one short piece of advice before I explain my arguments: don't waste your time talking with recruiters. It brings you nowhere.

Since I started my professional life I have had contact with a huge number of people who call themselves talent searchers. But don't get confused: those people are not seriously searching for talent. Their only interest is to find, with as little work as possible, a person who somehow fits a profile description. Of course, the most important question these folks have is how cheaply you will work for them, cheating you with an unfair payment. Mostly they keep around 50% of the regular market price in their own favor. I have often asked myself what real service they deliver to me and to the company I am supposed to work for.

Don't be afraid, it is not that difficult to detect these black sheep when they try to contact you. A very strong indicator is a recruitment company with an office in London and a heavy background in India. Normally they call you with a British number from a call center in India. So if you get a call from a person who is not the contact named on the job web pages, hang up and save your time and nerves.

Another point you should be aware of is when they first need your CV or profile before they can tell you how much they plan to pay you. First of all, those people don't read your CV. They scan automatically for buzzwords and check how well the result matches. Secondly, in general they don't know anything about the technical background of the role they are trying to fill. In close to 99% of the cases, after you accept such a position, you will realize that you are the last element in a long contractor chain. All those people take away some of the money the final client pays for your work. Ask yourself what real value those people add to justify taking large amounts away from the income you deserve. It is an unfair game they play with contractors and enterprises. The conclusion is obvious: save your energy and ignore all their attempts to contact you.

Another thing I observed is the mass of websites that offer different kinds of positions. As I found out, it doesn't matter how much detailed personal information you throw into these applications, the result is always the same. They are not looking for highly motivated experts. All they want is the cheapest person for the profile they have to fill. Save your time and don't register. Often those pages don't last longer than 6 months before they disappear. Be sure: as long as you are not willing to sell yourself under slave-like conditions, for a payment close to the social welfare paid by the government, no one will contact you.

A very important survival rule: never tell any web page or interviewer how much you earned in previous jobs. Refuse any request to declare your personal financial situation. With this information they can easily calculate when your reserves run out, and then you will accept even worse conditions.

Once a company contacted me and asked if I would be willing to tell them the names of my previous bosses. They argued that with this information they could verify the satisfaction with the services I had delivered. The promise was that this way I could catch new projects much faster, et cetera. The truth is, they wanted to collect new business contacts to make new deals more easily, because the people you worked for are the ones who decide about hiring others. This information has a high capital value in the market. As long as they don't pay you 1000 € in advance for each contact, refuse this kind of request.

Of course this is just half of the story. If you think, well, then I will sell myself directly, you will quickly realize it is not that easy. If you try to contact companies directly that are searching for employees, they often only want a permanent contract; no chance for a freelancer to join. At the same time, those companies complain that no experts are available. Real experts are always searching for challenges to grow their skills. If you want to win them for your team, to profit from their experience, you need to be flexible. Flexibility is not a one-way street for employees. For companies seeking success, it is also a mandatory skill.

But now I don't want to waste too much of your time reading this resume, filled with weird stories of how those black sheep cheat the whole market. Let's take a look at the things we can do to create a better situation in the future.

First, I have to say there are trustworthy recruiters with whom you can have a long and good business relationship. You may ask what makes them different and how you can spot them. Don't worry, with a bit of common sense you will notice very quickly whether the person in front of you is serious. So let me first explain how such relations should work in general.

A recruiter is like an agent for a music or movie star. He helps you to promote yourself and looks for clients to bring you into good positions. A relation of trust starts with helpful feedback. If your recruiter doesn't give you feedback about the services you have delivered, then don't expect a long-term business relation. He should be able to explain to you why you were sometimes not selected for a position. He can even tell you how you could improve your CV. He should understand the market and have basic technical knowledge, in order to guide you safely into the future. As a contractor you always need to learn new things, but what is the right decision? A good recruiter can see trends and will talk with you about them. In those cases the two of you will be a very powerful and successful team, because your business relation is based on a win-win.

And yes, I also have to show the other side of the medal. Some employees are also terrible. Be responsible and deliver what you promised. If you have given your word to somebody who treats you fairly and you cheat him just to win a few coins more, think twice. It's okay to take the chance to raise your income, but being fair means giving your partners the chance to fill the space you leave. Communicate honestly and early. If your services are that excellent, they will understand it. Maybe they are willing to increase your rate to convince you to stay. But never use this as a strategic game for pushing up the price. They will find out and you will lose more than you win. The market is small; most key players know each other.

As a lesson learned we have to admit that every party has its own devils in the game. But it is not all bad. A lot of nice and loyal people act in the market. Our challenge for the future should be the creation of a trustful network, closing the doors for all who just try to cheat. As a result we get exciting projects, finished in time and budget with highly motivated teams. Everyone will be satisfied.



Podcast

Docker Basics in less than 10 minutes

This short tutorial covers the most fundamental steps to use Docker in your development tool chain. After introducing the basic theory, we will learn how to install Docker on a Linux OS (Ubuntu Mate). When this is done, we take a short walk-through of downloading an image and instantiating the container. The example uses the official PHP 7.3 image with an Apache 2 HTTP server.

The new Java Release Cycle

After Oracle introduced the new release cycle for Java, I was not convinced by this new strategy, and even today I still have a different opinion. One of the points I criticize is the disregard of semantic versioning. I also do not agree with the argument that the new cycle makes it easier to deliver new features faster. In my opinion, some problems could arise in the future. But wait, let's start from the beginning before I share all my thoughts at once.

The six-month release cycle Oracle announced in 2017 for Java created some insecurity in the community. The biggest fear was formulated by the popular question: will Java no longer be free in the future? Of course the answer is a clear no, but there are some impacts companies should be aware of. If we think of huge applications in production, some points touch risk management and the business continuity strategy. If LTS security updates after the third year of a published release have to be paid for, this forces well-defined strategies for updates into production. I see myself spending more time in the future migrating my projects to new Java versions than implementing new functionality. One solution to avoid a permanent update orgy is to move away from the Oracle JVM to OpenJDK.

In professional environments it is quite common for companies to define a fixed setup to maintain security. When I am constantly forced to update my components without proof that the new features are secure, problems can arise. Commercial projects run under different circumstances and often need special attention, because you need a well-defined environment where you know everything runs stably. Follow the old rule: never touch a running system.

I can absolutely understand Oracle's intention behind this step. I guess it's a way to get rid of old, buggy, and insecure installations and to secure the internet a bit more. Of course you cannot support decades-old deprecated versions; this has a heavy financial impact. But I wish they had chosen a less rough strategy. It is sad that business often operates this way; I wish there were more trustful communication.

From my experience with previous releases of Java, it always took a while until they became stable. In this context I remember some heavy issues I had with the change to the 64-bit versions. The typical motto “latest is greatest” can be dangerous. Especially time-based releases are good candidates for problems, even when the team is experienced, because the pressure to deliver on time is extremely high.

Another fact that could be discussed is semantic versioning. It is a very powerful process that I always recommend. I ask myself whether there are really new language features every six months that justify increasing the major number, even for patches and enhancements. And what happens when in the future there is no new language enhancement? By the way, adding new features by force can decrease quality. In my opinion, Java includes many educative features, and not every new feature request increases the language's capabilities. A simple example is the well-known GOTO statement in other languages. When you learn programming, your mentor often tells you: there exists something which, if you see it, you should run away from. Never use GOTO. I often compare inner classes in Java with GOTO, because I think they should be avoided; until now I have not found any case where inner classes were not a hint of design problems. The same goes for the heavy usage of functional statements. I can't find any benefit in writing a for loop as a lambda function instead of the classical way.
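
To illustrate the last point, here is a small, purely illustrative comparison of the two styles; functionally they do exactly the same:

import java.util.List;

public class LoopStyles {

    public static void main(String[] args) {
        List<String> names = List.of("Alice", "Bob", "Carol");

        // classical for-each loop
        for (String name : names) {
            System.out.println(name);
        }

        // the same iteration expressed with a lambda
        names.forEach(name -> System.out.println(name));
    }
}
Java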

In my opinion it looks like Oracle is trying to grab some pieces of the cake to increase its business. That is not a bad thing in itself, but from the view of project management I don't believe it is a well-chosen strategy.

Read more: https://www.infoq.com/news/2017/09/Java6Month/




Computer Science Library – My personal Top 10 IT Books (2019)

When I considered writing an article about my top 10 books related to computer science and software engineering, I thought it would be an easy task. Over the last two decades, tons of great books have fallen into my hands, and that is exactly what made the job difficult. What should the rules be for putting a title on the list? Only one title per author, different topics, more than a hype, and easy to understand: those are the criteria for my selection. Some of these books are really old; I suggest this is a good sign of stability. The ordering is a completely personal preference. I hope you will enjoy my recommendations.

  • Effective Java, 3rd Edition, Joshua Bloch, (2017) ISBN: 0-134-68599-7
  • Peopleware: Productive Projects and Teams, Tom DeMarco, (2013) ISBN: 0-321-93411-3
  • Head First Design Patterns, Eric & Elisabeth Freeman, (2004) ISBN: 0-596-00712-4
  • Behind Closed Doors, J. Rothman & E. Derby, (2005) ISBN: 0-9766940-2-6
  • PHP Sicherheit, 3rd Edition (German), C. Kunz · S. Esser · P. Prochaska, (2010) ISBN: 978-3-89864-535-5
  • Mastering Regular Expressions, 3rd Edition, Jeffrey E. F. Friedl, (2006) ISBN: 0-596-52812-4
  • GOD AND GOLEM, Inc., 7th Edition, Norbert Wiener, (1966) ISBN: 0-262-73011-1
  • Java Power Tools, John F. Smart, (2008) ISBN: 978-0-596-52793-8
  • Advanced PHP Programming, George Schlossnagle, (2004) ISBN: 0-672-32561-6
  • Ich habe das Internet gelöscht! (German, Novel), Philipp Spielbusch, (2017) ISBN: 3-499-63189-X

As you can see, at the top of my list is a book about Java programming. It was the first title that fundamentally changed the way I code. Of course there now exist many more brilliant titles that address this topic. My way of thinking about architecture started, as for most architects, with coding skills. But to do a great job you also have to increase your knowledge about project management. The best way to start understanding how projects get done successfully is to read Peopleware. A big surprise for me was to find out that my favorite book about web security is written in German. It addresses solutions for the PHP programming language, but the authors did a really great job of describing very detailed background information. For this reason the book is extremely useful for all web developers who care about security. But it is not all about technology. With God and Golem I recommend a very old and critical philosophical text. If you like this kind of topic, check out titles by Joseph Weizenbaum, Noam Chomsky, or Isaac Asimov. Java Power Tools was the first publication covering DevOps ideas. And last but not least, a short, funny novel about the experiences of an IT consultant with his clients: lightweight and nice to read for relaxing. And don't forget to smile. Feel free to leave a comment.

Modern Times (for Configuration Manager)

There seems to be a dire necessity to automate everything, even automation itself. This is the common understanding and therefore the motivation of most DevOps teams. Let's have a look at typical continuous stupidities during the transformation from a pure Configuration Management role to DevOps Engineer.

In my role as Configuration and Release Manager, I saw in close to every project I joined gaps in the build structure or in the software architecture which I had to fix by optimizing the build jobs. But often you can't fix symptoms like long-running build scripts with just a few clicks. In this post I will give a brief introduction to common problems in software projects that you need to overcome before you seriously think about implementing a DevOps culture.

  1. Build logic can't fix a broken architecture. A huge amount of SCM merge conflicts occur because of missing encapsulation of business logic. A function that is spread through many modules or services has a high likelihood that a file will be touched by more than one developer.
  2. The necessity of orchestrated builds is a hint of architectural problems. Transitive dependencies, missing encapsulation, and a heavy dependency chain are typical reasons for running into the chicken-and-egg problem. Design your artifacts to be as independent as possible.
  3. Build logic is developed by developers, not by administrators. People who focus on operations have different concepts for maintaining artifact builds than software developers. A good anti-pattern example of a build structure is webMethods of Software AG. They don't provide a repository server like Sonatype Nexus to share dependencies; the build always points to the dependencies inside a webMethods installation. This practice violates the basic idea of build automation, as described in the book 'Practices of an Agile Developer' from 2006.
  4. Not everything at once. Split up the build jobs into specific goals, like create artifact, run acceptance tests, create API documentation, and generate reports. If one of the later steps fails, you don't need to repeat everything. The execution time of the build is dramatically reduced, and it is easier to maintain the build infrastructure.
  5. Don't give too much flexibility to your build infrastructure. This point is strongly related to the first topic I explained. When a build manager has little discipline, he will create extremely complex scripts nobody is able to understand. The JavaScript task runner Grunt is an example of how build logic can become messy and unreadable. This is one of the reasons why my favorite build tool for Java projects is still Maven: it enforces understandable builds.
  6. There is no requirement to automate the automation. By definition, complex automation levels have higher costs than simple tasks. Always think beforehand about the benefits of your automation activities to see whether it makes sense to spend time and money on them.
  7. We do what we can, but can we do what we do? Or, in the words of Grady Booch: "A fool with a tool is still a fool." Understand the requirements of your project and decide, based on them, which tool you choose. If you don't have the resources, even the most professional solution cannot support you. If you have understood your problem, you are able to learn new, professional, advanced processes.
  8. Build logic has to run first on the local development environment. If your build does not run on your local development machine, don't call it build logic; it is just a hack. Build logic has to be platform- and IDE-independent.
  9. Don't mix up source repositories. Organizing the sources of several projects into folders inside one huge directory just creates a complex build without any flexibility. Sources should be structured by technology or as separate independent modules.

Many of the points I mentioned can be understood by comparing them with the current situation in almost every project. The solution to fix things in a healthy manner is in most cases not that complicated; it just needs a bit of attention and good planning. The most important advice I can give is to follow the KISS principle: keep it simple, stupid. This means following the standard process as much as possible, without modifications. You don't need to reinvent the wheel. There are reasons why a standard becomes a standard. Here is a short plan you can follow.

  • First: understand the problem.
  • Second: investigate a standard solution for the process.
  • Third: develop a plan for applying the solution to the existing process landscape. This implies kicking out tools which do not support standard processes.

If you follow your own plan step by step, without jumping further ahead than the next point, you will see positive results quite fast.

By the way: if you would like guidance on the way to a successful DevOps process, don't hesitate to contact me. I offer hands-on consulting and also training to build up a powerful DevOps team.



Wind of Change – a journey to Linux

1989 was a historic year, not just for Germany but for the whole world, when the Berlin Wall came down. A few months before, many people had wished this event would happen, but no one imagined it would come true, and nobody expected everything to go so fast. Not even me, having grown up on the side where we wished to touch the “always greener grass” of our neighbors. The Scorpions captured the spirit of that time with the song “Wind of Change,” the unofficial hymn of the German reunification.

Before I continue I need to clarify that this post will not mention anything from the Apple universe. To this day I have never owned or used any Apple device. Why? Because there is no reason for me to.

Conference Talks – Linux Tage

It was similar for me with the strong dominance of Microsoft Windows as an operating system when I got into computing. I never thought there would come a time for me without Windows. I loved Windows XP, after they had fixed some heavy problems with Service Pack 1. Windows 7 was also an OS I really liked to use, and I never thought about a change. Honestly, I always defended my Windows OS up to that point, because it was a really good system. But some years ago I changed my opinion dramatically. Another wind of change. Let me give a brief history of decisions and experiences.

Everything began when I bought my Microsoft Surface 3 Pro with the Windows 8.1 OS. In the beginning I was happy with the compact system and its performance. Very portable, an important fact for me, because I travel a lot. I even took it on my pilgrimage on the Camino de Santiago in Spain to write my blog posts with it. Unfortunately, in the last week, in Portugal, the screen broke and I was only able to use the device with an external mouse. So I was forced to send it to Microsoft support. For a very shameless amount, close to the price of a new device, I got a replacement with Windows 10. In between I also installed Windows 10 on my ThinkPad 510p. I felt like I was back in the year 2000 with my old Fujitsu Siemens desktop and Windows Millennium: almost every 3 months a re-installation of the whole system was necessary, because an update had broken it. Most of my time I spent sitting in front of my machine waiting for my Windows 10 system to finish its updates. During the updates, the Windows 10 performance went down dramatically and the system was not usable. If the device had no internet connection for more than 6 months, it was also close to impossible to turn it on and start working with it. But this is not the whole drama. Every major update breaks the customized configuration: apps from the MS store that had already been deleted appeared again, additional language settings got broken, and deactivated, unwanted features were activated again. Another point is when you don't have enough disk space for the Windows 10 update. All those things cost me a lot of pain and frustration, and the situation has not really changed to this day. By the way, Ubuntu Mate can also run on a Microsoft Surface 3 Pro device. If you own one of those old machines and wish to know how to install Linux on it to get better performance, leave a comment and I may write a tutorial on how to run Linux on old Microsoft Surface devices.

The best you can do with a Surface 3 Pro and its Windows 10 installation.

After I decided to run away from the Windows OS, I needed to choose my new operating system. Well, Linux! But which distribution? In my job I usually only gained experience with servers, not with desktop systems. I remember my first SUSE Linux experiences in the early 2000s; I think it was version 7 or so. I bought it in a store, because it came with printed documentation and downloading more than a gigabyte with my 56k modem was not an option. Some years later I worked with Ubuntu and Fedora. In the beginning I did not want to use Ubuntu for my change because of the Unity desktop. My first installation of Fedora needed a lot of hands-on work to establish services like Dropbox and Skype. I was searching for a system I could code on. For these requirements many people recommend Debian, but Debian is more for experienced users and not good advice for people getting in touch with Linux for the first time. After some investigation I found Ubuntu Mate: a desktop distribution based on Debian with a huge software repository. This sounded perfect for my needs, and it is still the system of my choice today.

After I installed Ubuntu Mate on my machine, I liked it from the first moment: fast and simple installation, excellent documentation, and all the applications I needed were there. Because of my travels I bought an Asus ZenBook UX some years ago and have run Ubuntu on it from the first unboxing. Whenever people see my system they are surprised, because everything looks like an iBook, but is much better.

The change to Linux Ubuntu Mate was much easier than I expected. With some small tricks, a re-installation of the whole system now takes me less than 2 hours. The main concept is to always keep a clean, backed-up bash history file. Then I am able to rerun all the commands needed for installation and configuration. For some applications, like my favorite IDE Apache NetBeans, I back up the configuration settings. The prefix of the file name is always the date when I performed the settings export; for example, the NetBeans backup file is named 2017-03-31_NetBeans. I currently do those backups manually, not scripted. Developing full automation would take me too much time at the moment, and the services I have to back up are not that many, so a manual action is sufficient. Typical services such as e-mail, SFTP, and browser favorites are part of my manual backup procedure.

Since Firefox changed its API in 2018, a very useful tool for exporting and importing passwords is no longer available. To avoid using a cloud service, which you should not trust with all your account passwords, I decided to use the crypto tool KeePass. After I added all my web accounts to this tool, the storage of the passwords also became more secure. With the browser plugin, the accounts can be shared between all popular browsers like Firefox, Chrome, and Opera. The KeePass file with my stored passwords is automatically included in my backup. The only important discipline in password storing is to keep the password database up to date.

One thing I used heavily on Windows was the PortableApps ecosystem. My strategy was to have an independent installation of many well-configured services for work that I just needed to include in my current system. Something like this also exists on Linux, just without the PortableApps environment. My preference is always to download the ZIP version of a software that needs no installation. I store this on a second partition. In the case of a fresh OS setup, the partition just needs to be linked back into the OS and it's done. The strategy of keeping a separate disk partition also offers high flexibility, which we can use for another step: virtualization.

I don't want to remind myself how much time in my life I have spent configuring development services like web servers and databases. For this problem, Docker is now the solution of my choice. Each service lives in its own image, and the data is linked from a directory into the container, as is the configuration. Starting and stopping a service is a simple command, and the host system stays clean. Trying out service updates is very easy and completely conflict-free, and a rollback can be performed at any time by deleting the container.

The biggest change was from MS Office to LibreOffice. I was already fluent with the functionality of LibreOffice; the problem was all the presentations and Word documents I had. If you open those files in both office applications, the formatting goes crazy. I found out that this is often just a problem of the fonts. So I downloaded a free and nice-looking font from Google and installed it on the old Windows machine to convert all my office documents away from MS Office.

My resume after some years of Linux usage: today I can say, absolutely honestly, that I do not miss anything. Of course I have to admit that I do not play games. After some hours of working on a computer every day, I prefer to move back into reality to meet friends and family. It's nice to explore places during my travels or simply read a book. With Linux I get great performance out of my hardware, and so far I have not had any issues with drivers. All my hardware still works under Ubuntu Mate Linux as I expect. With Linux instead of Windows I save a lot of lifetime and frustration.


A Fool with a Tool is still a Fool

Even though considerable additional effort has been expended on testing in recent years in order to improve the quality of software projects [1], the path to continuously repeatable successes cannot be taken for granted. Stringent and targeted management of all available resources was and still is indispensable for reproducible success.

(c) 2016 Marco Schulz, Java aktuell, issue 4, pp. 14-19
Original article translated from German

It is no secret that many IT projects are still struggling to reach a successful conclusion. One might well think that the many new tools and methods that have emerged in recent years offer effective solutions for dealing with the situation. However, if one takes a look at current projects, this impression changes.

The author has often been able to observe how this problem was supposed to be mastered by introducing new tools. Not infrequently, the efforts ended in resignation. The supposed miracle solution quickly turned out to be a heavyweight time robber with an enormous amount of self-management. The initial euphoria of all those involved quickly turned into rejection and not infrequently culminated in a boycott of its use. It is therefore not surprising that experienced employees are skeptical of all change efforts for a long time and only deal with them when they are foreseeably successful. Because of this fact, the author has chosen as the title for this article the provocative quote from Grady Booch, a co-founder of UML.

Companies often spend too little time establishing a balanced internal infrastructure. Even the maintenance of existing fragments is often postponed for various reasons. At the management level, companies prefer to focus on current trends in order to attract customers who expect a list of buzzwords in response to their RFP. Yet Tom De Marco already described it in detail in the 1970s [2]: People make projects (see Figure 1).

We do what we can, but can we do anything?

That a project, despite the best intentions and intensive efforts, finds a happy end is unfortunately not the rule. But when can one speak of a failed project in software development? An abandonment of all activities due to a lack of prospects of success is of course an obvious reason, but in this context it is rather rare. Rather, one gains this insight during the post-project review of completed orders. In controlling, for example, weak points come to light when determining profitability.

The reasons for negative results are usually exceeding the estimated budget or the agreed completion date. Usually, both conditions apply at the same time, as the endangered delivery deadline is countered by increasing personnel. This practice quickly reaches its limits, as new team members require an induction period, visibly reducing the productivity of the existing team. Easy-to-use architectures and a high degree of automation mitigate this effect somewhat. Every now and then, people also move to replace the contractor in the hope that new brooms sweep better.

A quick look at the top 3 list of major projects that have failed in Germany shows how a lack of communication, inadequate planning and poor management have a negative impact on the external perception of projects: Berlin Airport, Hamburg’s Elbe Philharmonic Hall and Stuttgart 21. Thanks to extensive media coverage, these undertakings are sufficiently well known and need no further explanation. Even if the examples cited do not originate from information technology, the recurring reasons for failure due to cost explosion and time delay can be found here as well.

Figure 1: Problem solving – “A bisserl was geht immer” (“a little something is always possible”), Monaco Franze

The will to create something big and important is not enough on its own. Those responsible also need the necessary technical, planning, social and communication skills, coupled with the authority to act. Building castles in the air and waiting for dreams to come true does not produce presentable results.

Great success is usually achieved when as few people as possible have veto power over decisions. This does not mean that advice should be ignored, but every possible state of mind cannot be taken into account. This makes it all the more important for the person responsible for the project to have the authority to enforce his or her decision, but not to demonstrate this with all vigor.

It is perfectly normal for a decision-maker not to be in control of all the details. After all, you delegate implementation to the appropriate specialists. Here’s a brief example: When the possibilities for creating larger and more complex Web applications became better and better in the early 2000s, the question often came up in meetings as to which paradigm should be used to implement the display logic. The terms “multi-tier”, “thin client” and “fat client” dominated the discussions of the decision-making bodies at that time. Explaining the advantages of different layers of a distributed web application to the client was one thing. But to leave it up to a technically savvy layman to decide how to access his new application – via browser (“thin client”) or via a separate GUI (“fat client”) – is simply foolish. Thus, in many cases, it was necessary to clear up misunderstandings that arose during development. The narrow browser solution not infrequently turned out to be a difficult technology to master, because manufacturers rarely cared about standards. Instead, one of the main requirements was usually to make the application look almost identical in the most popular browsers. However, this could only be achieved with considerable additional effort. Similar observations were made during the first hype of service-oriented architectures.

These observations lead to the conclusion that it is indispensable to develop a vision before the start of the project, with goals that also match the estimated budget. A reusable deluxe version with as many degrees of freedom as possible requires a different approach than a “we get what we need” solution. The point is less to get lost in details than to keep the big picture in mind.

Particularly in German-speaking countries, companies find it difficult to find the necessary players for successful project implementation. The reasons for this may be quite diverse and could be due, among other things, to the fact that companies have not yet understood that experts rarely want to talk to poorly informed and inadequately prepared recruitment service providers.

Getting things done!

Successful project management is not an arbitrary coincidence. An insufficient flow of information due to a lack of communication has long been identified as one of the negative causes. Many projects have their own inherent character, which is also shaped by the team that accepts the challenge of mastering the task together. Methods such as Scrum [3], PRINCE2 [4] or Kanban [5] pick up on this insight and offer potential solutions for carrying out IT projects successfully.

Occasionally, however, it can be observed how project managers, under the pretext of newly introduced agile methods, transfer planning tasks to the responsible developers for self-management. The author has frequently experienced architects who saw themselves primarily in day-to-day implementation work instead of checking the delivered fragments for compliance with standards. Quality cannot be established in the long term this way, since the results are merely solutions that ensure functionality and, because of time and cost pressure, do not create the structures necessary for future maintainability. Agile is not a synonym for anarchy. Such a setup is then gladly decorated with an overloaded toolbox full of tools from the DevOps department, and the project already seems unsinkable. Just like the Titanic!

It is not without reason that for years it has been recommended to introduce a maximum of three new technologies at the start of a project. In this context, it is also not advisable to always go for the latest trends right away. When deciding on a technology, the appropriate resources must first be built up in the company, for which sufficient time must be planned. The investments are only beneficial if the choice made is more than just a short-lived hype. A good indicator of consistency is extensive documentation and an active community. These open secrets have been discussed in the relevant literature for years.

But how does one proceed when a project has been established for many years, yet in terms of the product life cycle a switch to new techniques becomes unavoidable? The reasons for such an effort can be many and vary from company to company. Catching up on important innovations in order to remain competitive should not be postponed for too long. This consideration leads to a strategy that is quite simple to implement: current versions are continued in the proven tradition, and only for the next major release, or the one after that, is a roadmap drawn up that contains all the points necessary for a successful changeover. For this purpose, the critical points are identified and examined in small feasibility studies, which are somewhat more demanding than a “hello world” tutorial, to see how an implementation could succeed. Experience shows that it is the small details that tip the scales between success and failure.

In all efforts, the goal is to achieve a high degree of automation. Compared to constantly recurring tasks that have to be performed manually, automation offers the possibility of producing continuously repeatable results. However, it is in the nature of things that simple activities are easier to automate than complex processes. In this case, it is important to check the cost-effectiveness of the plans beforehand so that developers do not indulge completely in their natural urge to play and also work through unpleasant day-to-day activities.

He who writes stays

Documentation, that vexed topic, spans all phases of the software development process. Whether for API descriptions, the user manual, planning documents for the architecture or lessons learned about optimal procedures – writing is not among the favorite tasks of the protagonists involved. The widespread opinion seems to be that thick manuals stand for extensive product functionality. Yet long texts in documentation are more of a quality defect that tries the reader’s patience: readers expect precise instructions that get to the point, but instead receive vague phrases with trivial examples that rarely solve any problem.

Figure 2: Test coverage with Cobertura

This insight can also be applied to project documentation and has been detailed by Johannes Siedersleben [6], among others, under the metaphor of Victorian novellas. Universities have already taken up these findings. Merseburg University of Applied Sciences, for example, has established the course of study “Technical Writing” [7]. It is to be hoped that more graduates of this course will be found in the project landscape in the future.

When selecting collaborative tools as knowledge repositories, it is always important to keep the big picture in mind. Successful knowledge management can be measured by how efficiently an employee finds the information they are looking for. For this reason, company-wide use is a management decision and mandatory for all departments.

Information differs in nature and varies both in its scope and in how long it remains current. This results in different forms of presentation such as wikis, blogs, ticket systems, tweets, forums or podcasts, to list just a few. Forums map the question-and-answer problem very well. A wiki is ideal for continuous text, such as documentation and descriptions. Many webcasts are offered as video without the visual representation adding any value; in most cases a well-understood and properly produced audio track is sufficient to distribute knowledge. With a common and standardized database, completed projects can be compared efficiently, and the resulting knowledge offers high added value when making forecasts for future projects.

Test & Metrics – the measure of all things

Just by skimming the Quality Report 2014, one quickly learns that the new trend is “software testing”. Companies are increasingly allocating budgets for it that reach a volume similar to the expenditure for implementing the project itself. Strictly speaking, this is extinguishing fire with gasoline: on closer inspection, the budget has already doubled at the planning stage. It is often left to the skill of the project manager to find a suitable declaration for the earmarked project funds.

Only a consistent check of test case coverage with suitable analysis tools ensures that sufficient testing has been done in the end. Even if it is hard to believe: in an age in which software tests can be created more easily than ever before and different paradigms can be combined, extensive and meaningful test coverage is rather the exception (see Figure 2).
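
A small, invented illustration of this point (the class, the test and the values are hypothetical and not taken from a real project): the following JUnit test runs green, yet it never executes the discount branch – a gap that only a coverage report, such as the Cobertura output shown in Figure 2, makes visible.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Minimal production code under test, embedded here for brevity.
    static class PriceCalculator {
        // Orders above 100 receive a 10 percent discount.
        double finalPrice(double netPrice) {
            if (netPrice > 100.0) {
                return netPrice * 0.9;   // this branch is never executed by the test below
            }
            return netPrice;
        }
    }

    @Test
    void smallOrderIsNotDiscounted() {
        assertEquals(50.0, new PriceCalculator().finalPrice(50.0));
    }
    // The build is green, but the branch coverage of finalPrice() is only 50 percent.
}
```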

It is well known that it is impossible to prove that software is free of errors. Tests only demonstrate defined behavior for the scenarios that have been created. Automated test cases are in no way a substitute for manual code review by experienced architects. A simple example are the nested “try-catch” blocks that occur from time to time in Java and that directly affect the program flow. Nesting can sometimes be intentional and useful – but in that case the error handling is not limited to writing the stack trace to a log file. The cause of this programming error lies in the inexperience of the developer and the unfortunate suggestion of the IDE to wrap the statement in question in its own “try-catch” block for the expected error handling, instead of extending the existing routine with an additional “catch” clause. Wanting to detect this obvious error with test cases is, from an economic point of view, a naive approach.
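
A minimal sketch of the situation described above; the class and method names are purely hypothetical and stand for any combination of I/O and persistence routines:

```java
import java.io.IOException;
import java.sql.SQLException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ImportService {

    private static final Logger LOGGER = Logger.getLogger(ImportService.class.getName());

    // Anti-pattern: a second try-catch nested inside the existing one,
    // whose only "error handling" is printing the stack trace.
    public void importDataNested(String file) {
        try {
            try {
                readFile(file);          // may throw IOException
            } catch (IOException e) {
                e.printStackTrace();     // swallows the problem, the flow simply continues
            }
            writeToDatabase(file);       // may throw SQLException
        } catch (SQLException e) {
            LOGGER.log(Level.SEVERE, "import failed", e);
        }
    }

    // Preferable: extend the existing routine with an additional catch clause.
    public void importData(String file) {
        try {
            readFile(file);
            writeToDatabase(file);
        } catch (IOException e) {
            LOGGER.log(Level.SEVERE, "reading " + file + " failed", e);
        } catch (SQLException e) {
            LOGGER.log(Level.SEVERE, "import failed", e);
        }
    }

    private void readFile(String file) throws IOException { /* ... */ }

    private void writeToDatabase(String file) throws SQLException { /* ... */ }
}
```

The second variant keeps the program flow intact: if reading fails, the write step is skipped, and every error type still receives its own handling instead of the program silently continuing after a printed stack trace.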

Typical error patterns can be detected inexpensively and efficiently by static test procedures. Publications that are particularly concerned with code quality and efficiency in the Java programming language [8, 9, 10] are always a good starting point for developing your own standards.
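
As a hedged illustration – the class is invented for this article – here is one of the classic error patterns that static analysis tools typically report, even though functional tests may still pass:

```java
public class LoginCheck {

    // Typical error pattern: comparing string references instead of their content.
    public boolean isAdmin(String role) {
        return role == "admin";          // may work by accident for interned literals
    }

    // Corrected version: content comparison, null-safe because the constant comes first.
    public boolean isAdminSafe(String role) {
        return "admin".equals(role);
    }
}
```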

The consideration of error types is also very informative. Issue tracking and commit messages in the SCM systems of open source projects such as Liferay [11] or GeoServer [12] show that a large proportion of the errors concern the graphical user interface (GUI) – often corrections of display texts in buttons and the like. That primarily display errors are reported may also lie in the perception of the users: for them, the behavior of an application is usually a black box, and they deal with the software accordingly. It is not at all wrong to assume that an application with a high number of users has few errors.

Software metrics are the usual figures in computer science that can give management a sense of the physical size of a project. Used correctly, such an overview provides helpful arguments for management decisions. For example, McCabe’s cyclomatic complexity [13] can be used to derive the number of required test cases. Statistics about lines of code and the usual counts of packages, classes and methods also show the growth of a project and can provide valuable information.
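
A small, invented example of how this metric translates into a test effort estimate (the method and its values are hypothetical):

```java
public class ShippingCalculator {

    // Cyclomatic complexity: 1 (method entry) + 3 (decision points) = 4.
    // McCabe's metric therefore suggests at least four test cases,
    // one per linearly independent path through the method.
    public double shippingCosts(double weight, boolean express, boolean premiumCustomer) {
        double costs = 4.90;
        if (weight > 10.0) {          // decision 1
            costs += 5.00;
        }
        if (express) {                // decision 2
            costs *= 2;
        }
        if (premiumCustomer) {        // decision 3
            costs = 0.0;
        }
        return costs;
    }
}
```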

A very informative processing of this information is the project Code-City [14], which visualizes such a distribution as a city map. It is impressive to recognize where dangerous monoliths can arise and where orphaned classes or packages occur.

Figure 3: Maven JDepend plugin – numbers with little meaning

Conclusion

In day-to-day business, people are content to spread hectic activity and put on a stressed face, and personal productivity is subsequently proven by producing countless meters of paper. The energy consumed in this way could be put to far more sensible use through a consistently thought-out approach.

Loosely based on Kant’s “sapere aude”, simple solutions should be encouraged and demanded. Employees who need complicated structures to emphasize their own genius within the team may not be the supporting pillars on which joint success can be built. Cooperation with unteachable contemporaries should be quickly reconsidered and, if necessary, corrected.

Many roads lead to Rome – and Rome was not built in a day. However, it cannot be denied that at some point the time has come to break ground. The choice of paths is not an undecidable problem either. There are safe paths and dangerous trails on which even experienced hikers have their fair share of trouble reaching their destination safely.

For successful project management, it is essential to lead the pack on solid and stable ground. This does not fundamentally rule out unconventional solutions, provided they are appropriate. The statement in decision-making bodies: “What you are saying is all correct, but there are processes in our company to which your presentation cannot be applied” is best rebutted with the argument: “That is quite correct, so it is now our task to work out ways of adapting the company processes in line with known success stories, instead of spending our time listing reasons for keeping everything the same. I’m sure you’ll agree that the purpose of our meeting is to solve problems, not ignore them.”


This is how corporate knowledge becomes tangible

The expertise of its own employees is a significant economic factor for any organization. This makes it all the more important to store experience and expertise permanently and make it available to other employees. A central knowledge management server takes on this task and helps to ensure long-term productivity in the company.

(c) 2011 Marco Schulz, Materna Monitor, Issue 2, pp. 32-33
Original article translated from German

The complexity of today’s highly networked working world requires smooth interaction between a wide range of specialists. Knowledge transfer plays an important role here. This exchange is made more difficult when team members work in different locations with different time zones or come from different cultural backgrounds. Companies with worldwide locations are aware of this problem and have developed appropriate strategies for company-wide knowledge management. In order to introduce this successfully, the IT solution to be used should be seen as a methodology instead of focusing on the actual tool. Once those responsible have made the decision in favor of a particular software solution, this should be retained consistently. Frequent system changes reduce the quality of the links between the stored content. Since there is no normalized standard for the representation of knowledge, significant conversion losses can occur when switching to new software solutions.

Various mechanisms for different content

Information can be stored in IT systems in various ways. The individual forms of representation differ in terms of presentation, structuring and use. To be able to edit documents jointly and conflict-free while versioning them at the same time, as is necessary for specifications or documentation, wikis [1] are ideally suited, since they were originally developed precisely for this purpose. The documents stored there are usually project-specific and should also be organized in that way.

Cross-project documents in the wiki are, for example, explanations of technical terms, a central list of abbreviations, or a Who’s Who of company employees including contact data and subject areas. The latter can in turn be linked to the technical term explanation. Comprehensive content can then be kept up-to-date centrally and can be conveniently linked to the corresponding project documents. This procedure avoids unnecessary repetitions and the documents to be read become shorter, but still contain all the necessary information. Johannes Siedersleben already described the risks of excessively long documentation in his book Softwaretechnik [2] in 2003.

Knowledge that has more the character of a FAQ should better be organized via a forum. Grouping by topics in which questions are stored along the lines of “How can I …?” makes it easier to find possible solutions. A particularly attractive feature is the fact that a forum of this kind evolves over time in line with demand. Users can formulate their own questions and post them in the forum. As a rule, qualified answers to newly posed questions are not long in coming.

Suitable candidates for blogs are, for example, general information about the company, status reports or tutorials. These are documents that tend to have an informative character, are not form-bound or are difficult to assign to a specific topic. Short information (tweets [3]) via Twitter, thematically grouped in channels, can also enrich project work and additionally reduce the number of e-mails in one’s own mailbox. Examples include reminders about a specific event, a newsflash about new product versions or information about a successfully completed work process. Integrating tweets into project work is relatively new, and suitable software solutions are correspondingly rare.

Of course, the list of possibilities is far from exhausted at this point. However, the examples already provide a good overview of how companies can organize their knowledge. Linking the individual systems to a portal [4], which has an overarching search and user administration, quickly creates a network that is also suitable as a cloud solution.

User-friendliness is a decisive factor for the acceptance of a knowledge platform. Long training periods, unclear structuring and awkward operation can quickly lead to rejection. Security requirements are also met by assigning access rights to individual content at group level. A good example of this is the enterprise wiki Confluence [5], which allows different read and write permissions to be assigned at the individual document levels.

Naturally, a developer cannot be expected to describe his work in well-chosen words for posterity after a successful implementation. That the quality of the texts in many documentations is not always sufficient has also been recognized by Merseburg University of Applied Sciences, which offers the course of study Technical Editing [6]. Cross-reading by other project members has therefore proven to be a suitable means of ensuring the quality of the content. To make writing easier, it helps to provide a small style guide – similar to a coding convention.

Conclusion

A knowledge database cannot be implemented overnight. It takes time to compile enough information in it. Only through interaction and corrections of incomprehensible passages does the knowledge reach a quality that invites transfer. Every employee should be encouraged to enrich existing texts with new insights, to resolve incomprehensible passages or to add search terms. If the process of knowledge creation and distribution is lived in this way, fewer documents will be orphaned and the information will always be up-to-date.