Acceptance Tests in Java With JGiven

Also published on DZone, January 2020

Most of the developer community knows what a unit test is, even if they don't write them. But there is still hope: the situation is changing, and more and more projects hosted on GitHub contain unit tests.

With a standard set-up for Java projects, such as NetBeans, Maven, and JUnit, it is not that difficult to produce your first test code. This approach is the foundation of Test-Driven Development (TDD) and also exists in related techniques such as Behavior-Driven Development (BDD), also known as acceptance testing, which is what we will focus on in this article.

Difference Between Unit and Acceptance Tests

The easiest way to become familiar with this topic is to look at a simple comparison between unit and acceptance tests. In this context, unit tests are very low level: they execute a function and compare the output with an expected result. Some people see this differently, but in our example, the developer alone is responsible for the unit tests.

Keep in mind that the test code is placed in the project and always gets executed when the build is running. This provides quick feedback as to whether or not something went wrong. As long as the test doesn't cover too many aspects, we are able to identify the problem quickly and provide a solution. The design principle of those tests follows the AAA paradigm: define a precondition (Arrange), execute the invariant (Act), and check the postconditions (Assert). We will come back to this approach a little later.
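To make the pattern concrete, here is a minimal JUnit example; the tiny Calculator class is purely illustrative and not part of the project discussed later.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    // Trivial class under test, only for illustration
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    @Test
    public void additionOfTwoNumbers() {
        // Arrange: create the object under test and its input
        Calculator calculator = new Calculator();

        // Act: execute the function we want to verify
        int result = calculator.add(2, 3);

        // Assert: compare the actual output with the expected result
        assertEquals(5, result);
    }
}
Java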

When we check the test coverage with tools like JaCoCo and cover more than 85 percent of our code with test cases, we can expect good quality. While increasing the test coverage, we specify our test cases more precisely and are able to identify some optimizations, for example removing or inverting conditions that the tests show to be almost unreachable. Of course, the topic is a bit more complicated, but those details could be discussed in another article.
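For readers who want to reproduce such a measurement, a minimal configuration of the JaCoCo Maven plugin could look like the following sketch; the version number is an assumption and may differ in your project.

<plugin>
   <groupId>org.jacoco</groupId>
   <artifactId>jacoco-maven-plugin</artifactId>
   <version>0.8.5</version>
   <executions>
      <execution>
         <goals>
            <goal>prepare-agent</goal>
         </goals>
      </execution>
      <execution>
         <id>report</id>
         <phase>verify</phase>
         <goals>
            <goal>report</goal>
         </goals>
      </execution>
   </executions>
</plugin>
XML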

Acceptance tests are classified in the same way as unit tests: they belong to the family of regression tests. This means we want to verify that changes we make to the code have no effect on functionality that already works. In other words, we want to ensure that nothing that is already working gets broken by side effects of our changes. The tool of our choice is JGiven [1]. Before we look at some examples, we first need to touch on a bit of theory.

JGiven In-Depth

A test case we define in JGiven is called a scenario. A scenario is a collection of four classes: the scenario itself, the Given stage displayed as given (Arrange), the Action stage displayed as when (Act), and the Outcome stage displayed as then (Assert).

In most projects, especially when there is a huge number of scenarios and their execution consumes a lot of time, acceptance tests are organized in a separate project. With a build job on your CI server, you can execute those tests once a day to get fast feedback and to react early if something is broken. The code example we demonstrate keeps everything in one project on GitHub [2], because it is just a small library and a separation would over-engineer the project. Usually, the party responsible for acceptance tests is the test center, not the developer.

The sample project TP-CORE is organized as a layered architecture. For our example, we picked the functionality for sending e-mails. The basic functionality to compose an e-mail is realized in the application layer and has a test coverage of up to 90 percent. The functionality to send the e-mail is defined in the service layer.

In our architecture, we decided that the service layer is the focus for defining acceptance tests. Here, we want to see whether our requirement to send an e-mail works well. Supporting this layer with additional unit tests is not efficient because, in commercial projects, it just produces costs without adding benefits. Writing unit tests as well would mean doing double the work, because our JGiven tests already demonstrate and prove that our function works well. For the same reasons, it makes no sense to generate test coverage for the test scenarios of the acceptance tests.

Let's start with a practical example. First, we need to include our acceptance test framework in our Maven build. In case you prefer Gradle, you can use the same GAV coordinates to define the dependency in your build script.

<dependency>
   <groupId>com.tngtech.jgiven</groupId>
   <artifactId>jgiven-junit</artifactId>
   <version>0.18.2</version>
   <scope>test</scope>
</dependency>
XML

Listing 1: Dependency for Maven.

As you can see in Listing 1, JGiven works well together with JUnit. An integration for TestNG also exists; you just need to replace the artifactId with jgiven-testng. To enable the HTML reports, you need to configure the Maven plugin in the build lifecycle, as shown in Listing 2.

<build> 
   <plugins>
      <plugin>
         <groupId>com.tngtech.jgiven</groupId>
         <artifactId>jgiven-maven-plugin</artifactId>
         <version>0.18.2</version>
         <executions>
            <execution>
               <goals>
                  <goal>report</goal>
               </goals>
            </execution>
         </executions>
         <configuration>
            <format>html</format>
         </configuration>
      </plugin>
   </plugins>
</build>
XML

Listing 2: Maven Plugin Configuration for JGiven.

The report of our scenarios in the TP-CORE project is shown in Image 1. As we can see, the output is very descriptive and human-readable. This result is achieved by following some naming conventions for our methods and classes, which will be explained in detail below. First, let's discuss what we can see in our test scenario. We defined five preconditions:

  1. The configuration for the SMTP server is readable
  2. The SMTP server is available
  3. The mail has a recipient
  4. The mail has attachments
  5. The mail is fully composed

If all these conditions are true, the action of sending a single e-mail is performed. Afterward, the SMTP server is checked and we see that the mail has arrived. For the SMTP service, we use the small Java library GreenMail [3] to emulate an SMTP server. Now it becomes understandable why it is advantageous for acceptance tests to be written by other people: it increases quality, because conceptual inconsistencies appear early on. As long as the tester cannot map the required scenario with the available implementations, the requirement is not fully implemented.

Producing Descriptive Scenarios

Now is a good time to dive deeper into the implementation details of our send e-mail test scenario. Our object under test is the class MailClientService. The corresponding test class is MailClientScenarioTest, defined in the test packages. The scenario class definition is shown in Listing 3.

@RunWith(JUnitPlatform.class)
public class MailClientScenarioTest
       extends ScenarioTest<MailServiceGiven, MailServiceAction, MailServiceOutcome> { 
    // do something 
}
Java

Listing 3: Acceptance Test Scenario for JGiven.

As we can see, we execute the test framework on the JUnit 5 platform. In the ScenarioTest, we can see the three classes Given, Action, and Outcome following a special naming convention. It is also possible to reuse already defined stage classes, but be careful with such practices, as this can cause side effects. Before we implement the test method, we need to define the execution steps. The procedure is equivalent for all three classes.

@RunWith(JUnitPlatform.class)
public class MailServiceGiven 
       extends Stage<MailServiceGiven> { 

    public MailServiceGiven email_has_recipient(MailClient client) {
        try { 
            assertEquals(1, client.getRecipentList().size());
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
        return self(); 
    } 
} 

@RunWith(JUnitPlatform.class)
public class MailServiceAction
       extends Stage<MailServiceAction> { 

    public MailServiceAction send_email(MailClient client) {
        MailClientService service = new MailClientService();
        try {
            assertEquals(1, client.getRecipentList().size());
            service.sendEmail(client);
        } catch (Exception ex) { 
            System.err.println(ex.getMessage());
        }
        return self();
    }
}

@RunWith(JUnitPlatform.class)
public class MailServiceOutcome 
       extends Stage<MailServiceOutcome> {

    public MailServiceOutcome email_is_arrived(MimeMessage msg) { 
         try {
             Address adr = msg.getAllRecipients()[0];
             assertEquals("JGiven Test E-Mail", msg.getSubject());
             assertEquals("noreply@sample.org", msg.getSender().toString());
             assertEquals("otto@sample.org", adr.toString());
             assertNotNull(msg.getSize());
         } catch (Exception ex) {
              System.err.println(ex.getMessage());
         }
         return self();
    }
}
Java

Listing 4: Implementing the AAA Principle for Behavioral Driven Development.

Now we have completed the cycle and can see how the test steps fit together. JGiven supports a much bigger vocabulary to cover further needs. To explore the full possibilities, please consult the documentation.
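For completeness, here is a rough sketch of how a test method inside MailClientScenarioTest (Listing 3) could glue the three stages from Listing 4 together. It is not the original TP-CORE test method; the GreenMail set-up and the construction of the MailClient object are assumptions for illustration only.

// Sketch only: the GreenMail set-up and the MailClient construction
// are assumptions and not taken from the TP-CORE sources.
private final GreenMail smtpServer = new GreenMail(ServerSetupTest.SMTP);

@Test
public void send_email_with_recipient() throws Exception {
    smtpServer.start();

    // assumption: compose recipient, subject, and content via the TP-CORE API
    MailClient client = new MailClient();

    given().email_has_recipient(client);
    when().send_email(client);
    then().email_is_arrived(smtpServer.getReceivedMessages()[0]);

    smtpServer.stop();
}
Java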

Lessons Learned

In this short workshop, we covered all the important details to get started with automated acceptance tests. Besides JGiven, other frameworks such as Concordion or FitNesse compete for adoption. We chose JGiven for its helpful documentation, its simple integration into Maven builds and JUnit tests, and its descriptive, human-readable reports.

One negative point, which could keep people away from JGiven, is the fact that you need to describe the tests in the Java programming language. That means the test engineer needs to be able to develop in Java in order to use JGiven. Apart from this small detail, our experience with JGiven has been absolutely positive.


Docker Basics in less than 10 minutes

This short tutorial covers the most fundamental steps to use Docker in your development tool chain. After introducing the basic theory, we will learn how to install Docker on a Linux OS (Ubuntu Mate). When this is done, we take a short walk-through to download an image and instantiate a container. The example uses the official PHP 7.3 image with an Apache 2 HTTP server.

The new Java Release Cycle

When Oracle introduced the new release cycle for Java, I was not convinced by this new strategy, and even today I still hold a different opinion. One of the points I criticize is the disregard of semantic versioning. I also disagree with the argument that the new cycle makes it easier to deliver new features faster. In my opinion, some problems could occur in the future. But wait, let's start from the beginning before I share all my thoughts at once.

The six-month release cycle Oracle announced in 2017 for Java caused some insecurity in the community. The biggest fear was formulated in the popular question: will Java no longer be free in the future? Of course the answer is a clear no, but there are some impacts companies should be aware of. If we think of huge applications in production, several points touch risk management and the business continuity strategy. If LTS support with security updates has to be paid for after the third year of a published release, this forces well-defined strategies for pushing updates into production. I see myself spending more time in the future migrating my projects to new Java versions than implementing new functionality. One solution to avoid a permanent update orgy is to move away from the Oracle JVM to OpenJDK.

In professional environments it is quite common for companies to define a fixed setup to maintain security. If I am constantly forced to update my components without proof that the new features are secure, this can create problems. Commercial projects run under different circumstances and often need special attention, because you need a well-defined environment where you know everything runs stably. Follow the old rule: never touch a running system.

I can absolutely understand Oracle's intention in taking this step. I guess it is a way to get rid of old, buggy, and insecure installations and to make the internet a bit more secure. Of course you cannot support decades-old deprecated versions; that has a heavy financial impact. But I wish they had chosen a less rough strategy. It is sad that business often operates in this way; I wish there were more trustful communication.

From my experience with preview releases of Java, it always took some time until they became stable. In this context, I remember some heavy issues I had with the change to the 64-bit versions. The typical motto "latest is greatest" can be dangerous. Especially time-based releases are good candidates for problems, even when the team is experienced, because the pressure to deliver on time is extremely high.

Another point worth discussing is semantic versioning. It is a very powerful process that I always recommend. I ask myself whether there are really new language features every six months that justify increasing the major version number, even for patches and enhancements. And what happens if there is no new language enhancement in the future? By the way, adding new features by force can often decrease quality. In my opinion, Java already includes many educative features, and not every new feature request increases the language's capabilities. A simple example is the well-known GOTO statement in other languages. When you learn programming, your mentor often tells you: there are things you should run away from if you see them; never use GOTO. I often compare inner classes in Java with GOTO, because I think they should be avoided; until now I have not found any case where inner classes were not a hint of design problems. The same goes for the heavy usage of functional statements: I cannot find any benefit in defining a for loop as a lambda function instead of the classical way.
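To illustrate that last point, here are both variants side by side; which one reads better is, in my eyes, mostly a matter of taste. The example is purely illustrative.

import java.util.Arrays;
import java.util.List;

public class LoopComparison {

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Alice", "Bob", "Carol");

        // classical for loop
        for (String name : names) {
            System.out.println(name);
        }

        // the same iteration expressed as a functional statement
        names.forEach(name -> System.out.println(name));
    }
}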

In my opinion, it looks like Oracle is trying to get a bigger piece of the cake to increase their business. That is not a bad thing in itself, but from a project management point of view I don't believe it is a well-chosen strategy.

Read more: https://www.infoq.com/news/2017/09/Java6Month/


Computer Science Library – My personal Top 10 IT Books (2019)

When I considered writing an article about my top 10 books related to computer science and software engineering, I thought it would be an easy task. Over the last two decades, tons of great books have fallen into my hands, and this is exactly what made the job difficult. What should be the rules for putting a title on the list? Only one title per author, different topics, more than a hype, and easy to understand: these are the criteria for my selection. Some of these books are really old, which I suggest is a good sign of stability. The ordering is completely a personal preference. I hope you will enjoy my recommendations.

  • Effective Java, 3rd Edition, Joshua Bloch (2017), ISBN: 0-134-68599-7
  • Peopleware: Productive Projects and Teams, Tom DeMarco (2013), ISBN: 0-321-93411-3
  • Head First Design Patterns, Eric & Elisabeth Freeman (2004), ISBN: 0-596-00712-4
  • Behind Closed Doors, J. Rothman & E. Derby (2005), ISBN: 0-9766940-2-6
  • PHP Sicherheit, 3. Auflage (German), C. Kunz, S. Esser & P. Prochaska (2010), ISBN: 978-3-89864-535-5
  • Mastering Regular Expressions, 3rd Edition, Jeffrey E. F. Friedl (2006), ISBN: 0-596-52812-4
  • God and Golem, Inc., 7th Edition, Norbert Wiener (1966), ISBN: 0-262-73011-1
  • Java Power Tools, John F. Smart (2008), ISBN: 978-0-596-52793-8
  • Advanced PHP Programming, George Schlossnagle (2004), ISBN: 0-672-32561-6
  • Ich habe das Internet gelöscht! (German, novel), Philipp Spielbusch (2017), ISBN: 3-499-63189-X

As you can see, at the top of my list is a book about Java programming. It was the first title that fundamentally changed the way I code. Of course, many more brilliant titles addressing this topic exist today. My way of thinking about architecture started, as for most architects, with coding skills. But to do a great job you also have to increase your knowledge of project management, and the best way to start understanding how projects get done successfully is to read Peopleware. A big surprise for me was to find out that my favorite book about web security is written in German. It addresses solutions for the PHP programming language, but the authors did a really great job of describing very detailed background information. For this reason, the book is extremely useful for all web developers who care about security. But it is not all technology: with God and Golem I recommend a very old and critical philosophical text. If you like this kind of topic, check out titles by Joseph Weizenbaum, Noam Chomsky, or Isaac Asimov. Java Power Tools was the first publication that covered DevOps ideas. And last but not least, a short, funny novel about the experiences of an IT consultant with his clients: lightweight, nice to read for relaxing, and don't forget to smile. Feel free to leave a comment.

Modern Times (for Configuration Manager)

There seems to be a dire necessity to automate everything, even automation itself. This is the common understanding and therefore the motivation of most DevOps teams. Let's have a look at typical Continuous Stupidities during the transformation from pure Configuration Management to DevOps engineering.

In my role as Configuration and Release Manager, I saw gaps in the build structure or in the software architecture in close to every project I joined, which I had to fix by optimizing the build jobs. But often you can't fix symptoms like long-running build scripts with just a few clicks. In this post I will give a brief introduction to common problems in software projects that you need to overcome before you really think about implementing a DevOps culture.

  1. Build logic can't fix a broken architecture. A huge amount of SCM merge conflicts occurs because of missing encapsulation of business logic. A function that is spread across many modules or services has a high likelihood that a file will be touched by more than one developer.
  2. The necessity of orchestrated builds is a hint of architectural problems. Transitive dependencies, missing encapsulation, and a heavy dependency chain are typical reasons for running into the chicken-and-egg problem. Design your artifacts to be as independent as possible.
  3. Build logic is developed by developers, not by administrators. People focused on operations have different concepts for maintaining artifact builds than software developers. A good anti-pattern example of a build structure is webMethods from Software AG. They don't provide a repository server like Sonatype Nexus to share dependencies; the build always points to the dependencies inside a webMethods installation. This practice violates the basic idea of build automation, as mentioned in the book "Practices of an Agile Developer" from 2006.
  4. Not everything at once. Split up the build jobs into specific goals, such as creating the artifact, running acceptance tests, creating API documentation, and generating reports (see the sketch after this list). If one of the later steps fails, you don't need to repeat everything. The execution time of the build is reduced dramatically, and it is easier to maintain the build infrastructure.
  5. Don't give too much flexibility to your build infrastructure. This point is strongly related to the first one. When a build manager lacks discipline, he will create extremely complex scripts nobody is able to understand. The JavaScript task runner Grunt is an example of how build logic can get messy and unreadable. This is one of the reasons why my favorite build tool for Java projects is always Maven: it enforces governance and understandable builds.
  6. There is no requirement to automate the automation. By definition, complex automation levels have higher costs than simple tasks. Always think beforehand about the benefits of your automation activities to see whether it makes sense to spend time and money on them.
  7. We do what we can, but can we what we do? Or, in the words of Grady Booch, "A fool with a tool is still a fool." Understand the requirements of your project and decide on that basis which tool you choose. If you don't have the resources, even the most professional solution cannot support you. If you have understood your problem, you are able to learn new, professional, advanced processes.
  8. Build logic has to run first on the local development environment. If your build does not run on your local development machine, don't call it build logic; it is just a hack. Build logic has to be platform- and IDE-independent.
  9. Don't mix up source repositories. Organizing the sources of several projects as folders inside one huge directory just creates a complex build without any flexibility. Sources should be structured by technology or as separate, independent modules.
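As a sketch for point 4: in a Maven build, optional profiles are one way to keep slow steps like acceptance tests or documentation out of the default build. The profile ids, plugin versions, and bindings below are assumptions for illustration, not taken from a concrete project.

<profiles>
   <profile>
      <id>acceptance-tests</id>
      <build>
         <plugins>
            <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-failsafe-plugin</artifactId>
               <version>2.22.2</version>
               <executions>
                  <execution>
                     <goals>
                        <goal>integration-test</goal>
                        <goal>verify</goal>
                     </goals>
                  </execution>
               </executions>
            </plugin>
         </plugins>
      </build>
   </profile>
   <profile>
      <id>documentation</id>
      <build>
         <plugins>
            <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-javadoc-plugin</artifactId>
               <version>3.2.0</version>
               <executions>
                  <execution>
                     <phase>package</phase>
                     <goals>
                        <goal>javadoc</goal>
                     </goals>
                  </execution>
               </executions>
            </plugin>
         </plugins>
      </build>
   </profile>
</profiles>

Calling mvn verify -P acceptance-tests then runs the slow test suite only on demand, while a plain mvn package stays fast.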

Many of the points I mentioned can be understood by comparing them with the current situation in almost every project. The solution to fix things in a healthy manner is in most cases not that complicated; it just needs a bit of attention and good planning. The most important advice I can give is to follow the KISS principle: keep it simple, stupid. This means following the standard process as much as possible, without modifications. You don't need to reinvent the wheel; there are reasons why a standard becomes a standard. Here is a short plan you can follow.

  • First: understand the problem.
  • Second: investigate a standard solution for the process.
  • Third: develop a plan to apply the solution to the existing process landscape. This implies kicking out tools that do not support standard processes.

If you follow your own plan step by step, without jumping further ahead than the next point, you will see positive results quite fast.

By the way: if you would like guidance on the way to a successful DevOps process, don't hesitate to contact me. I offer hands-on consulting and also training to build up a powerful DevOps team.



Wind of Change – a journey to Linux

1989 was a historic year, not just for Germany but for the whole world, when the Berlin Wall came down. A few months before, many people had wished for this event, but no one imagined it would become true, and nobody had in mind that everything would happen so fast. Not even me, who grew up on the side where we wished to touch the "always greener grass" of our neighbors. The Scorpions captured the spirit of this time with the song "Wind of Change", the unofficial hymn of the reunification of Germany.

Before I continue, I need to clarify that this post will not mention anything from the Apple universe. To this day I have never owned or used any Apple device. Why? Because there is no reason for me to do so.

Conference Talks – Linux Tage

It was similar for me with the strong dominance of Microsoft Windows as the operating system when I got into computers. I never thought a time without Windows would come for me. I loved Windows XP after they fixed some heavy problems with Service Pack 1. Windows 7 was also an OS I really liked to use, and I never thought about a change. Up to that time I always defended my Windows OS, because it was a really good system. But some years ago I changed my opinion dramatically. Another wind of change. Let me give a brief history of decisions and experiences.

Everything began when I bought my Microsoft Surface 3 Pro with Windows 8.1. In the beginning I was happy with the compact system and the performance; very portable, which is an important fact for me because I travel a lot. I also took it on my pilgrimage to Spain on the Camino de Santiago to write my blog posts with it. Unfortunately, in the last week, in Portugal, the screen broke and I was only able to use it with an external mouse. So I was forced to send the device to Microsoft support. For a very shameless amount, close to the price of a new device, I got a replacement with Windows 10. In the meantime I also installed Windows 10 on my ThinkPad 510p. I felt like I was back in the year 2000 with my old Fujitsu Siemens desktop and Windows Millennium: almost every three months a re-installation of the whole system was necessary, because an update had broken it. I spent most of my time sitting in front of my machine, waiting for my Windows 10 system to get done with the updates. During the updates, Windows 10 performance dropped dramatically and the system was not usable. If the device had had no internet connection for more than six months, it was also close to impossible to turn it on and start working with it. But this is not the whole drama. Every major update broke the customized configuration: apps from the MS Store that I had already deleted appeared again, additional language settings got broken, and deactivated, unwanted features were activated again. Another point is when you don't have enough disk space for the Windows 10 update. All these things cost me a lot of pain and frustration, and the situation has not really changed to this day. By the way, Ubuntu Mate can also run on a Microsoft Surface 3 Pro device. If you own one of those old machines and wish to know how to install Linux on it to get better performance, leave a comment if I should make a tutorial on how to run Linux on old Microsoft Surface devices.

The best you can do with a Surface 3 Pro and its Windows 10 installation.

After I decided to run away from the Windows OS, I needed to choose my new operating system. Well, Linux! But which distribution? In my job I usually only had experience with servers, not with desktop systems. I remember my first SUSE Linux experiences in the early 2000s; I think it was version 7 or so. I bought it in a store, because it came with printed documentation and downloading more than a gigabyte with my 56k modem was not an option. Some years later I worked with Ubuntu and Fedora. In the beginning I did not want to use Ubuntu for my change, because of the Unity desktop. The first installation of Fedora needed a lot of hands-on work to set up services like Dropbox and Skype. I was searching for a system I could code on. For these requirements many people recommend Debian, but Debian is more for experienced users and not good advice for most people getting in touch with Linux for the first time. After some investigation I found Ubuntu Mate: a Debian-based distribution with a Gnome-derived desktop and a huge software repository. This sounded perfect for my needs, and it is still the system of my choice today.

After I installed Ubuntu Mate on my machine, I really liked it from the first moment: fast and simple installation, excellent documentation, and all the applications I needed were there. Because of my travels, I bought an Asus ZenBook UX some years ago and have run Ubuntu on it from the first unboxing. Whenever people see my system they are surprised, because everything looks like an iBook, but is much better.

The change to Linux Ubuntu Mate was much easier than I expected. With some small tricks, a re-installation of the whole system now takes me less than two hours. The main concept is to always keep a cleaned-up and backed-up bash history file; then I am able to rerun all the commands needed for installation and configuration. For some applications, like my favorite IDE Apache NetBeans, I back up the configuration settings. The prefix of the file name is always the date when I performed the settings export; for example, the NetBeans backup file is named 2017-03-31_NetBeans. I currently do these backups manually, not scripted. Developing full automation would take me too much time at the moment, and the services I have to back up are not that many, so a manual action is sufficient. Typical services that are part of my manual backup procedure are e-mail, SFTP, browser favorites, and so on.

Since Firefox changed its API in 2018, a very useful tool for exporting and importing passwords is no longer available. To avoid using a cloud service, which you should not trust to store all your account passwords online, I decided to use the crypto tool KeePass. After I added all my web accounts to this tool, the storage of the passwords also became more secure. With the browser plugin, the accounts can be shared between all popular browsers like Firefox, Chrome, and Opera. The KeePass file with my stored passwords is automatically included in my backup. The only important part of the password-storing procedure is to keep the password database up to date.

One thing I used heavily on Windows was the PortableApps ecosystem. My strategy was to have independent installations of many well-configured services for work that I just need to include in my current system. Something like this also exists on Linux, just without the PortableApps environment. My preference is always to download the ZIP version of a piece of software that does not need an installation. I store these on a second partition. In the case of a fresh OS setup, the partition just needs to be linked back into the OS and it's done. The strategy of using a separate disk partition also offers high flexibility, which we can use for another step: virtualization.

I don't want to remind myself how much time I have spent in my life configuring development services like web servers and databases. For this problem, Docker is now the solution of my choice. Each service lives in its own image, and the data, including the configuration, is linked from a directory into the container. Starting and stopping a service is a simple command, and the host system stays clean. Trying out service updates is very easy and completely conflict-free, and a rollback can be performed at any time by deleting the container.

The biggest change was from MS Office to LibreOffice. I was already fluent with the functionality of LibreOffice; the problem was all the presentations and Word documents I had. If you try to open those files in both office applications, the formatting goes crazy. I found out that this is often just a problem of the fonts. So I downloaded a free and nice-looking font from Google and installed it on the old Windows machine to convert all my office documents away from MS Office.

My résumé after some years of Linux usage: today I can say, absolutely honestly, that I do not miss anything. Of course I have to admit that I do not play games. After some hours of working on a computer every day, I prefer to move back into reality to meet friends and family, to explore places during my travels, or simply to read a book. With Linux I get great performance out of my hardware, and so far I have not had any issues with drivers. All my hardware still works under Ubuntu Mate Linux as I expect. With Linux instead of Windows I save a lot of lifetime and frustration.


A Fool with a Tool is still a Fool

Even though considerable additional effort has been expended on testing in recent years in order to improve the quality of software projects [1], the path to continuously repeatable successes cannot be taken for granted. Stringent and targeted management of all available resources was and still is indispensable for reproducible success.

(c) 2016 Marco Schulz, Java aktuell Ausgabe 4, S.14-19
Original article translated from German

It is no secret that many IT projects are still struggling to reach a successful conclusion. One might well think that the many new tools and methods that have emerged in recent years offer effective solutions for dealing with the situation. However, if one takes a look at current projects, this impression changes.

The author has often been able to observe how this problem was supposed to be mastered by introducing new tools. Not infrequently, the efforts ended in resignation. The supposed miracle solution quickly turned out to be a heavyweight time robber with an enormous amount of self-management. The initial euphoria of all those involved quickly turned into rejection and not infrequently culminated in a boycott of its use. It is therefore not surprising that experienced employees are skeptical of all change efforts for a long time and only deal with them when they are foreseeably successful. Because of this fact, the author has chosen as the title for this article the provocative quote from Grady Booch, a co-founder of UML.

Companies often spend too little time establishing a balanced internal infrastructure. Even the maintenance of existing fragments is often postponed for various reasons. At the management level, companies prefer to focus on current trends in order to attract customers who expect a list of buzzwords in response to their RFP. Yet Tom De Marco already described it in detail in the 1970s [2]: People make projects (see Figure 1).

We do what we can, but can we do anything?

That a project, despite best intentions and intensive efforts, finds a happy end is unfortunately not the rule. But when can one speak of a failed project in software development? An abandonment of all activities due to a lack of prospects of success is of course an obvious reason, but in this context it is rather rare. Rather, one gains this insight during the post-project review of completed orders. In controlling, for example, weak points come to light when determining profitability.

The reasons for negative results are usually exceeding the estimated budget or the agreed completion date. Usually, both conditions apply at the same time, as the endangered delivery deadline is countered by increasing personnel. This practice quickly reaches its limits, as new team members require an induction period, visibly reducing the productivity of the existing team. Easy-to-use architectures and a high degree of automation mitigate this effect somewhat. Every now and then, people also move to replace the contractor in the hope that new brooms sweep better.

A quick look at the top 3 list of major projects that have failed in Germany shows how a lack of communication, inadequate planning and poor management have a negative impact on the external perception of projects: Berlin Airport, Hamburg’s Elbe Philharmonic Hall and Stuttgart 21. Thanks to extensive media coverage, these undertakings are sufficiently well known and need no further explanation. Even if the examples cited do not originate from information technology, the recurring reasons for failure due to cost explosion and time delay can be found here as well.

Figure 1: Problem solving – “A bisserl was geht immer”, Monaco Franze

The will to create something big and important is not enough on its own. Those responsible also need the necessary technical, planning, social and communication skills, coupled with the authority to act. Building castles in the air and waiting for dreams to come true does not produce presentable results.

Great success is usually achieved when as few people as possible have veto power over decisions. This does not mean that advice should be ignored, but every possible state of mind cannot be taken into account. This makes it all the more important for the person responsible for the project to have the authority to enforce his or her decision, but not to demonstrate this with all vigor.

It is perfectly normal for a decision-maker not to be in control of all the details. After all, you delegate implementation to the appropriate specialists. Here’s a brief example: When the possibilities for creating larger and more complex Web applications became better and better in the early 2000s, the question often came up in meetings as to which paradigm should be used to implement the display logic. The terms “multi-tier”, “thin client” and “fat client” dominated the discussions of the decision-making bodies at that time. Explaining the advantages of different layers of a distributed web application to the client was one thing. But to leave it up to a technically savvy layman to decide how to access his new application – via browser (“thin client”) or via a separate GUI (“fat client”) – is simply foolish. Thus, in many cases, it was necessary to clear up misunderstandings that arose during development. The narrow browser solution not infrequently turned out to be a difficult technology to master, because manufacturers rarely cared about standards. Instead, one of the main requirements was usually to make the application look almost identical in the most popular browsers. However, this could only be achieved with considerable additional effort. Similar observations were made during the first hype of service-oriented architectures.

The consequence of these observations shows that it is indispensable to develop a vision before the start of the project, the goals of which also correspond to the estimated budget. A reusable deluxe version with as many degrees of freedom as possible requires a different approach than a “we get what we need” solution. It’s less about getting lost in the details and more about keeping the big picture in mind.

Particularly in German-speaking countries, companies find it difficult to find the necessary players for successful project implementation. The reasons for this may be quite diverse and could be due, among other things, to the fact that companies have not yet understood that experts rarely want to talk to poorly informed and inadequately prepared recruitment service providers.

Getting things done!

Successful project management is not an arbitrary coincidence. For a long time, an insufficient flow of information due to a lack of communication has been identified as one of the negative causes. Many projects have their own inherent character, which is also shaped by the team that accepts the challenge in order to jointly master the task set. Agile methods such as Scrum [3], Prince2 [4] or Kanban [5] pick up on this insight and offer potential solutions to be able to carry out IT projects successfully.

Occasionally, however, it can be observed how project managers transfer planning tasks to the responsible developers for self-management under the pretext of the newly introduced agile methods. The author has frequently experienced how architects have tended to see themselves in day-to-day implementation work instead of checking the delivered fragments for compliance with standards. In this way, quality cannot be established in the long term, since the results are merely solutions that ensure functionality and, because of time and cost pressures, do not establish the necessary structures to ensure future maintainability. Agile is not a synonym for anarchy. This setup likes to be decorated with an overloaded toolbox full of tools from the DevOps department and already the project is seemingly unsinkable. Just like the Titanic!

It is not without reason that for years it has been recommended to introduce a maximum of three new technologies at the start of a project. In this context, it is also not advisable to always go for the latest trends right away. When deciding on a technology, the appropriate resources must first be built up in the company, for which sufficient time must be planned. The investments are only beneficial if the choice made is more than just a short-lived hype. A good indicator of consistency is extensive documentation and an active community. These open secrets have been discussed in the relevant literature for years.

However, how does one proceed when a project has been established for many years, but in terms of the product life cycle a swing to new techniques becomes unavoidable? The reasons for such an effort may be many and vary from company to company. The need not to miss important innovations in order to remain competitive should not be delayed for too long. This consideration results in a strategy that is quite simple to implement. Current versions are continued in the proven tradition, and only for the next major release or the one after that is a roadmap drawn up that contains all the necessary points for a successful changeover. For this purpose, the critical points are worked out and tested in small feasibility studies, which are somewhat more demanding than a “hello world” tutorial, to see how an implementation could succeed. From experience, it is the small details that can be the crumbs on the scale to determine success or failure.

In all efforts, the goal is to achieve a high degree of automation. Compared to constantly recurring tasks that have to be performed manually, automation offers the possibility of producing continuously repeatable results. However, it is in the nature of things that simple activities are easier to automate than complex processes. In this case, it is important to check the cost-effectiveness of the plans beforehand so that developers do not indulge completely in their natural urge to play and also work through unpleasant day-to-day activities.

He who writes stays

Documentation, the vexed topic, spans all phases of the software development process. Whether for API descriptions, the user manual, planning documents for the architecture or learned knowledge about optimal procedures – describing is not one of the favored tasks of all protagonists involved. It can often be observed that the common opinion seems to prevail that thick manuals stand for extensive functionality of the product. However, long texts in a documentation are more of a quality defect that tries the reader’s patience because he expects precise instructions that get to the point. Instead, they receive vague phrases with trivial examples that rarely solve problems.

Figure 2: Test coverage with Cobertura

This insight can also be applied to project documentation and has been detailed by Johannes Siedersleben [6], among others, under the metaphor of Victorian novellas. Universities have already taken up these findings. For example, Merseburg University of Applied Sciences has established the course of study "Technical Writing" [7]. It is hoped that more graduates of this course will be found in the project landscape in the future.

When selecting collaborative tools as knowledge repositories, it is always important to keep the big picture in mind. Successful knowledge management can be measured by how efficiently an employee finds the information they are looking for. For this reason, company-wide use is a management decision and mandatory for all departments.

Information has a different nature and varies both in its scope and in how long it remains current. This results in different forms of presentation such as wiki, blog, ticket system, tweets, forums or podcasts, to list just a few. Forums very optimally depict the question and answer problem. A wiki is ideal for continuous text, such as that found in documentation and descriptions. Many webcasts are offered as video, without the visual representation adding any value. In most cases, a well-understood and properly produced audio track is sufficient to distribute knowledge. With a common and standardized database, completed projects can be compared efficiently. The resulting knowledge offers a high added value when making forecasts for future projects.

Test & Metrics – the measure of all things

Just by skimming the Quality Report 2014, one quickly learns that the new trend is “software testing”. Companies are increasingly allocating contingents for this, which take up a volume similar to the expenditure for the implementation of the project. Strictly speaking, one extinguishes fire with gasoline at this point. On closer inspection, the budget is already doubled at the planning stage. It is often up to the skill of the project manager to find a suitable declaration for earmarked project funds.

Only a consistent check of the test case coverage with suitable analysis tools ensures that, in the end, sufficient testing has been done. Even if one may hardly believe it: in an age in which software tests can be created more easily than ever before and different paradigms can be combined, extensive and meaningful test coverage is rather the exception (see Figure 2).

It is well known that it is not possible to prove that software is free of errors. Tests are only used to prove a defined behavior for the scenarios created. Automated test cases are in no way a substitute for manual code review by experienced architects. A simple example of this are nested “try catch” blocks that occur from time to time in Java, which have a direct effect on the program flow. Sometimes nesting can be intentional and useful. In this case, however, the error handling is not limited to the output of the stack trace in a log file. The cause of this programming error lies in the inexperience of the developer and the bad advice of the IDE at this point, for an expected error handling to enclose the instruction with an own “try catch” block instead of supplementing the existing routine by an additional “catch” statement. To want to detect this obvious error by test cases is an infantile approach from an economic point of view.
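A short sketch of the described error pattern; the class, file names, and handling are purely illustrative.

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

public class ConfigLoader {

    // Anti-pattern described above: a nested try-catch where a second
    // catch clause on the outer block would be sufficient
    public void loadNested(String path) {
        try {
            FileInputStream in = new FileInputStream(path);
            try {
                in.read();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        } catch (FileNotFoundException ex) {
            ex.printStackTrace();
        }
    }

    // Preferred: one block, one catch per expected error type
    public void loadFlat(String path) {
        try (FileInputStream in = new FileInputStream(path)) {
            in.read();
        } catch (FileNotFoundException ex) {
            System.err.println("configuration file not found: " + path);
        } catch (IOException ex) {
            System.err.println("unable to read configuration: " + ex.getMessage());
        }
    }
}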

Typical error patterns can be detected inexpensively and efficiently by static test procedures. Publications that are particularly concerned with code quality and efficiency in the Java programming language [8, 9, 10] are always a good starting point for developing your own standards.

The consideration of error types is also very informative. Issue tracking and commit messages in SCM systems of open source projects such as Liferay [11] or GeoServer [12] show that a larger part of the errors concern the graphical user interface (GUI). These are often corrections of display texts in buttons and the like. The reporting of primarily display errors can also lie in the perception of the users. For them, the behavior of an application is usually a black box, so they deal with the software accordingly. It is not at all wrong to assume that the application has few errors when the number of users is high.

The usual computer science figures are software metrics that can give management a sense of the physical size of a project. Used correctly, such an overview provides helpful arguments for management decisions. For example, McCabe's cyclomatic complexity [13] can be used to derive the number of test cases needed. Statistics about the lines of code and the usual counts of packages, classes, and methods also show the growth of a project and can provide valuable information.

A very informative processing of this information is the project Code-City [14], which visualizes such a distribution as a city map. It is impressive to recognize where dangerous monoliths can arise and where orphaned classes or packages occur.

Figure 3: Maven JDepend plugin – numbers with little meaning

Conclusion

In day-to-day business, one is content to spread hectic bustle and put on a stressed face. By producing countless meters of paper, personal productivity is subsequently proven. The energy consumed in this way could be used much more sensibly through a consistently considered approach.

Loosely based on Kant’s “Sapere Aude”, simple solutions should be encouraged and demanded. Employees who need complicated structures to emphasize their own genius in the team may not be supporting pillars on which joint successes can be built. Cooperation with unteachable contemporaries is quickly reconsidered and, if necessary, corrected.

Many roads lead to Rome – and Rome was not built in a day. However, it cannot be denied that at some point the time has come to break ground. The choice of paths is not an undecidable problem either. There are safe paths and dangerous trails on which even experienced hikers have their fair share of trouble reaching their destination safely.

For successful project management, it is essential to lead the pack on solid and stable ground. This does not fundamentally rule out unconventional solutions, provided they are appropriate. The statement in decision-making bodies: “What you are saying is all correct, but there are processes in our company to which your presentation cannot be applied” is best rebutted with the argument: “That is quite correct, so it is now our task to work out ways of adapting the company processes in line with known success stories, instead of spending our time listing reasons for keeping everything the same. I’m sure you’ll agree that the purpose of our meeting is to solve problems, not ignore them.”


This is how corporate knowledge becomes tangible

The expertise of its own employees is a significant economic factor for any organization. This makes it all the more important to store experience and expertise permanently and make it available to other employees. A central knowledge management server takes on this task and helps to ensure long-term productivity in the company.

(c) 2011 Marco Schulz, Materna Monitor, Ausgabe 2, S.32-33
Original article translated from German

The complexity of today’s highly networked working world requires smooth interaction between a wide range of specialists. Knowledge transfer plays an important role here. This exchange is made more difficult when team members work in different locations with different time zones or come from different cultural backgrounds. Companies with worldwide locations are aware of this problem and have developed appropriate strategies for company-wide knowledge management. In order to introduce this successfully, the IT solution to be used should be seen as a methodology instead of focusing on the actual tool. Once those responsible have made the decision in favor of a particular software solution, this should be retained consistently. Frequent system changes reduce the quality of the links between the stored content. Since there is no normalized standard for the representation of knowledge, significant conversion losses can occur when switching to new software solutions.

Various mechanisms for different content

Information can be stored in IT systems in various ways. The individual forms of representation differ in terms of presentation, structuring and use. In order to be able to edit documents jointly and conflict-free and version them at the same time, as is necessary for specifications or documentation, wikis [1] are ideally suited, since they were originally developed precisely for this use. The documents stored there are usually project-specific and should also be organized in this way.

Cross-project documents in the wiki are, for example, explanations of technical terms, a central list of abbreviations, or a Who’s Who of company employees including contact data and subject areas. The latter can in turn be linked to the technical term explanation. Comprehensive content can then be kept up-to-date centrally and can be conveniently linked to the corresponding project documents. This procedure avoids unnecessary repetitions and the documents to be read become shorter, but still contain all the necessary information. Johannes Siedersleben already described the risks of excessively long documentation in his book Softwaretechnik [2] in 2003.

Knowledge that has more the character of a FAQ should better be organized via a forum. Grouping by topics in which questions are stored along the lines of “How can I …?” makes it easier to find possible solutions. A particularly attractive feature is the fact that a forum of this kind evolves over time in line with demand. Users can formulate their own questions and post them in the forum. As a rule, qualified answers to newly posed questions are not long in coming.

Suitable candidates for blogs are, for example, general information about the company, status reports or tutorials. These are documents that tend to have an informative character, are not form-bound or are difficult to assign to a specific topic. Short information (tweets [3]) via Twitter, thematically grouped in channels, can also enrich project work. They additionally minimize the e-mails in one’s own mailbox. Examples include reminders about a specific event, a newsflash about new product versions or information about a successfully completed work process. Integrating tweets into project work is relatively new, and suitable software solutions are correspondingly rare.

Of course, the list of possibilities is far from exhausted at this point. However, the examples already provide a good overview of how companies can organize their knowledge. Linking the individual systems to a portal [4], which has an overarching search and user administration, quickly creates a network that is also suitable as a cloud solution.

User-friendliness is a decisive factor for the acceptance of a knowledge platform. Long training periods, unclear structuring, and awkward operation can quickly lead to rejection. Security is also satisfied with access authorization to the individual contents at group level. A good example of this is the enterprise wiki Confluence [5]. It allows different read and write permissions to be assigned to the individual document levels.

Naturally, a developer cannot be expected to describe his work in the right words for posterity after successful implementation. The fact that the quality of the texts in many documentations is not always sufficient has also been recognized by the Merseburg University of Applied Sciences, which offers the course of study Technical Editing [6]. Cross-reading by other project members has therefore proved to be a suitable means of ensuring the quality of the content. To facilitate the writing of texts, it is helpful to provide a small guide – similar to the Coding Convention.

Conclusion

A knowledge database cannot be implemented overnight. It takes time to compile enough information in it. Only through interaction and corrections of incomprehensible passages does the knowledge reach a quality that invites transfer. Every employee should be encouraged to enrich existing texts with new insights, to resolve incomprehensible passages or to add search terms. If the process of knowledge creation and distribution is lived in this way, fewer documents will be orphaned and the information will always be up-to-date.

Automation options in software configuration management

Software development offers some extremely efficient ways to simplify recurring tasks through automation. The elimination of tedious, repetitive, monotonous tasks and a resulting reduction in the frequency of errors in the development process are by no means all facets of this topic.

(c) 2011 Marco Schulz, Materna Monitor, Ausgabe 1, S.32-34
Original article translated from German

The motivation for establishing automation in the IT landscape is largely the same. Recurring tasks are to be simplified and solved by machine without human intervention. The advantages are fewer errors in the use of IT systems, which in turn reduces costs. As simple and advantageous as the idea of processes running independently sounds, implementation is less trivial. It quickly becomes clear that for every identified possibility of automation, implementation is not always feasible. Here, too, the principle applies: the more complex a problem is, the more complex its solution will be.

In order to weigh up whether the economic effort to introduce certain automatisms is worthwhile, the costs of a manual solution must be multiplied by the factor of the frequency of this work to be repeated. These costs must be set against the expenses for the development and operation of the automated solution. On the basis of this comparison, it quickly becomes clear whether a company should implement the envisaged improvement.
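Expressed as a simple rule of thumb (the symbols are only introduced here for illustration), automation pays off when

$$C_{\text{manual}} \cdot n \;>\; C_{\text{development}} + C_{\text{operation}}$$

where n is the expected number of repetitions in the period under consideration. A purely illustrative example: a manual task of 30 minutes repeated 200 times a year binds 100 hours, so an automation that takes 60 hours to build and a few hours a year to operate pays for itself within the first year.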

Tools support the development process

Especially in the development of software projects, there is considerable potential for optimization through automatic processes. Here, developers are supported by a wide range of tools that need to be skillfully orchestrated. Configuration and release management in particular deals in great detail with the practical use of a wide variety of tools for automating the software development process.

The existence of a separate build logic, for example in the form of a simple shell script, is already a good approach, but it does not always lead to the desired results. Platform-independent solutions are necessary for such cases, since development is very likely to take place in a heterogeneous environment. An isolated solution always means increased adaptation and maintenance effort. Finally, automation efforts should simplify existing processes. Current build tools such as Maven and Ant take advantage of this platform independence. Both tools encapsulate the entire build logic in separate XML files. Since XML has already established itself as a standard in software development, the learning effort is lower than with rudimentary, home-grown solutions.

The use of central build logic forms the basis for further automation during development work. One aspect of this is automated testing in the form of unit tests in a continuous integration (CI) environment. A CI solution assembles all parts of the software into a whole and runs all defined test cases. If the software could not be built or a test failed, the developer is notified by e-mail so that the error can be fixed quickly. Modern CI servers are configured against a version control system such as Subversion or Git. This ensures that the server only starts a build when changes have actually been made to the source code.
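
The following minimal JUnit sketch illustrates the kind of unit test such a CI server executes on every change; the small reverse helper is defined inside the test class purely to keep the example self-contained and is not part of any real project.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ReverseStringTest {

    // Tiny helper under test, defined here only to keep the example self-contained.
    static String reverse(String input) {
        return new StringBuilder(input).reverse().toString();
    }

    @Test
    public void reverseShouldInvertTheCharacterOrder() {
        String input = "build";            // arrange the precondition
        String result = reverse(input);    // act: execute the code under test
        assertEquals("dliub", result);     // assert the expected postcondition
    }
}

If such a test fails, the CI server marks the build as broken and notifies the developer, as described above.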

Complex software systems usually have dependencies on external components (libraries) that cannot be influenced by the project itself. The efficient management of the artifacts used in a project is the main strength of the build tool Maven and has contributed to its widespread adoption. When used properly, this eliminates the need to archive binary program parts within the version control system, resulting in smaller repositories and shorter commit times (the successful completion of a transaction). New versions of the libraries used can be incorporated and tried out more quickly, without error-prone manual copy actions. Libraries developed in-house can be distributed within the company network in a protected manner and reused easily by running a dedicated repository server (such as Sonatype Nexus).

When evaluating a build tool, its reporting capabilities should not be neglected. Automated monitoring of code quality using metrics, for example with the Checkstyle tool, gives project management an excellent means of realistically assessing the current status of the project.

Not too much new technology

With all the possibilities for automating processes, several paths can be taken. It is not uncommon for development teams to have long discussions about which tool is particularly suitable for the current project. This question is difficult to answer in general terms, since every project is unique and the advantages and disadvantages of different tools must be compared with the project requirements.

In practice, limiting a project to a maximum of two novel technologies has proven successful. Whether a tool is suitable also depends on whether people with the appropriate know-how are available in the company. A good solution is a management-approved list recommending tools that are already in use or can be integrated into the existing system landscape. This ensures that the tools used remain clear and manageable.

Projects that run for many years must undergo modernization of the technologies used at longer intervals. In this context, suitable times must be found to migrate to the new technology with as little effort as possible. A sensible point to switch to a newer technology is, for example, the change to a new major release of one's own project. This approach allows a clean separation without having to migrate old project states to the new technology, which in many cases is not easily possible.

Conclusion

The use of automatisms in software development can, if applied wisely, actively support the achievement of the project goal. As with all things, excessive use carries some risks. Despite all the technology involved, the infrastructure used must remain understandable and controllable so that project work does not come to a standstill in the event of system failures.

Apache License 2.0

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.

“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”

“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

  1. You must give any other recipients of the Work or Derivative Works a copy of this License; and
  2. You must cause any modified files to carry prominent notices stating that You changed the files; and
  3. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
  4. If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

HOW TO APPLY THE APACHE LICENSE TO YOUR WORK

Include a copy of the Apache License, typically in a file called LICENSE, in your work, and consider also including a NOTICE file that references the License.

To apply the Apache License to specific files in your work, attach the following boilerplate declaration, replacing the fields enclosed by brackets “[]” with your own identifying information. (Don’t include the brackets!) Enclose the text in the appropriate comment syntax for the file format. We also recommend that you include a file or class name and description of purpose on the same “printed page” as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.