Treasure chest – Part 2

In the previous part of the article Treasure chest, I described how the database connection for the TP-CORE library gets established. I also gave an insight into the internal structure of the ConfigurationDO. Now, in the second part, I explain the ConfigurationDAO and its corresponding service. With all this knowledge you are able to include the application configuration feature of TP-CORE in your own project to build your own configuration registry.

Let's briefly recap the architectural design of the TP-CORE library and where the fragments of the feature are located. TP-CORE is organized as a layered architecture, as shown in the graphic below.

As you can see, there are three relevant packages (layers) we have to pay attention to. First, the business layer resides, like all other layers, in an equally named package. The whole API of TP-CORE is defined by interfaces and stored in the business layer. The implementations of the defined interfaces are placed in the application layer. Domain objects are simple data classes and placed in the domain layer. Another important pattern that is heavily used in the TP-CORE library is the Data Access Object (DAO).

The GenericDAO provides the basic CRUD operations, so we don't need to repeat them in every specialization again. A full description of how the GenericDAO is implemented is documented in the GitHub Wiki of TP-CORE.
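
To give an idea of the contract, here is a minimal sketch of how such a generic DAO could be declared. Treat the method names and signatures as illustrative; the real interface is documented in the TP-CORE GitHub Wiki.

import java.io.Serializable;
import java.util.List;

// Illustrative sketch only - the real GenericDAO of TP-CORE may differ in names and signatures.
// T is the domain object type, PK the type of its primary key.
public interface GenericDAO<T, PK extends Serializable> {

    boolean create(T object);    // persist a new domain object

    boolean delete(PK id);       // remove the object identified by its primary key

    boolean update(T object);    // merge the changes of an already persisted object

    T find(PK id);               // fetch a single object by its primary key

    List<T> listAll();           // fetch all stored objects of this type
}
Java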

Nowadays microservices and RESTful applications are state of the art. The services defined in TP-CORE, however, are deliberately not REST. This design decision is based on the idea that TP-CORE is a dependency and not a standalone service. Maybe in the future, after I get more feedback on how and where this library is used, I will rethink the current concept. For now we treat TP-CORE as what it is: a library. For the usage in your project, this implies that you can replace, overwrite, extend or wrap the basic implementation of the ConfigurationDAO to fit your special necessities.

To keep the portability of changing the DBMS, Hibernate (HBM) is used as the JPA implementation and O/R mapper. The Spring configuration for Hibernate uses the EntityManager instead of the Session to send requests to the DBMS. Since version 5, Hibernate uses the JPA 2 standard to formulate queries.
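
As a small illustration of this point, the injection of the EntityManager in such a Spring/JPA setup could look roughly like the sketch below. The field name mainEntityManagerFactory mirrors the listings of this article; the class name and everything else are assumptions, and with newer Spring generations the package would be jakarta.persistence instead of javax.persistence.

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;

// Sketch of a Hibernate/JPA based DAO skeleton - the concrete TP-CORE classes may differ.
@Repository
public class ExampleHbmDAO {

    // the container managed EntityManager replaces the classic Hibernate Session
    @PersistenceContext
    private EntityManager mainEntityManagerFactory;

    public <T> T find(final Class<T> type, final Object id) {
        // delegates directly to JPA, as shown in the find() method of Listing 1
        return mainEntityManagerFactory.find(type, id);
    }
}
Java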

As I already mentioned, the application configuration feature of TP-CORE is implemented as a DAO. The domain object and the database connection were the topic of the first part of this article. Now I discuss how to give access to the domain object with the ConfigurationDAO and its implementation ConfigurationHbmDAO. In general, the DAO returns the domain object ConfigurationDO or a list of domain objects. Actions like create could be void and just throw an exception in the case of a failure; for a better style the return type is defined as Boolean instead. This also simplifies writing unit tests.

Sometimes it can be necessary to overwrite a basic implementation. A common scenario is a protected delete. For example: a requirement exists that a special entry is protected against an unwanted deletion. The easiest solution is to overwrite the delete with a statement that refuses every request to delete a domain object with a specific UUID. Only adding a new method like protectedDelete() is not a good idea, because a developer could use the default delete method by accident and the protected objects would not be protected anymore. To avoid this problem you should prefer the possibility of overwriting GenericDAO methods.
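
A possible sketch of such a protected delete, assuming you extend the shipped implementation in your own project. The exact delete() signature is an assumption here: we expect the UUID to be handed over as a String and a boolean return value.

// Sketch only - the real delete() signature of the TP-CORE DAO may differ.
public class ProtectedConfigurationDAO extends ConfigurationHbmDAO {

    // UUID of the entry that must never be removed
    private final String protectedUuid;

    public ProtectedConfigurationDAO(final String protectedUuid) {
        this.protectedUuid = protectedUuid;
    }

    @Override
    public boolean delete(final String id) {
        if (protectedUuid.equals(id)) {
            return false;   // refuse the request instead of touching the protected entry
        }
        return super.delete(id);
    }
}
Java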

As the default query to fetch an object, the identifier defined as primary key (PK) is used. A simple expression fetching an object is written in the find method of the GenericHbmDAO. In the specialization ConfigurationHbmDAO, more complex queries are formulated. To keep a good design it is important to avoid any native SQL. Listing 1 shows both fetch operations.

//GenericHbmDAO
public T find(final PK id) {
  return mainEntityManagerFactory.find(genericType, id);
}
//ConfigurationHbmDAO
public List<ConfigurationDO> getAllConfigurationSetEntries(final String module,
final String version, final String configSet) {

  CriteriaBuilder builder = mainEntityManagerFactory.getCriteriaBuilder();
  CriteriaQuery<ConfigurationDO> query = builder.createQuery(ConfigurationDO.class);
  // create Criteria
  Root<ConfigurationDO> root = query.from(ConfigurationDO.class);
  query.where(
    builder.equal(root.get("modulName"), module),
    builder.equal(root.get("version"), version),
    builder.equal(root.get("configurationSet"), configSet)
  );
  return mainEntityManagerFactory.createQuery(query).getResultList();
}
Java

The readability of these few lines of source code is pretty straightforward. The query formulated in getAllConfigurationSetEntries() returns a list of ConfigurationDO objects from the same module with an equal version of a configSet. A module is, for example, the library TP-CORE itself, an ACL and so on. The configSet is a namespace that describes configuration entries which belong together like a bundle and are used by a service such as e-mail. The version is related to the service. If in the future some changes are needed, the version number has to increase. Let's get a bit closer to see how the e-mail example works in particular.

We assume that an e-mail service in the module TP-CORE contains the configuration entries: mailer.host, mailer.port, user and password. First we define module=core, configSet=email and version=1. If we now call getAllConfigurationSetEntries("core", "1", "email"); the result is a list of four domain objects with the entries for mailer.host, mailer.port, user and password. If a newer version of the e-mail service needs more configuration entries, a new version is defined. It is very important that the already existing entries for the mail service are duplicated in the database with the new version number. Of course, as an effect the registry table will grow continually, but with a stable and well planned development process those changes do not occur that often. The TP-CORE library contains a simple SMTP Mailer which is using the ConfigurationDAO. If you wish to investigate the usage in the MailClient real world example, you can have a look at the official documentation in the TP-CORE GitHub Wiki.

The benefit of duplicating all existing entries of a service when the service configuration changes is that a history is created. In the case of updating a whole application it is now possible to compare the entries of a service by version, to decide whether changes exist that take effect on the application. In practical usage this feature is very helpful, but it does not avoid that updates could change our actual configuration by accident. To solve this problem the domain object has two different entries for the configuration value: default and configuration.

The application configuration follows the convention over configuration paradigm. By definition, each service needs a fixed default value for every existing configuration entry. Those default values can't be changed, but when the value in the ConfigurationDO is set, the defaultValue entry is ignored. If an application has to be updated, it is also necessary to support a procedure that captures all custom changes of the updated configuration set and restores them in the new service version; a small sketch of such a procedure follows the API list below. The basic functionality (API) for application configuration in TP-CORE release 3.0 is:

  • void updateConfigurationEntries(List<ConfigurationDO> configuration)
  • ConfigurationDO getConfigurationByKey( String key, String module, String version)
  • List<ConfigurationDO> getAllConfigurationSetEntries(String module, String version, String configSet)
  • List<ConfigurationDO> getAllModuleEntries(String module)
  • List<ConfigurationDO> getAllDeprecatedEntries()
  • List<ConfigurationDO> getHistoryOfAEntry(String module, String key, String configSet)
  • String getValueByKey(String key, String module, String version)
  • void restoreKeyToDefault(ConfigurationDO entry)
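
The following snippet is not taken from TP-CORE; it is only a hedged sketch of how the listed API could be combined to carry customized values from an old configuration set over to a freshly duplicated one. The setter setValue() and the helper StringUtils are assumptions borrowed from the style of the listings in this article.

// Hypothetical helper - everything besides the listed DAO methods is made up.
public void carryOverCustomValues(final ConfigurationDAO configurationDAO,
        final String module, final String oldVersion, final String newVersion,
        final String configSet) {

    List<ConfigurationDO> oldEntries
        = configurationDAO.getAllConfigurationSetEntries(module, oldVersion, configSet);
    List<ConfigurationDO> newEntries
        = configurationDAO.getAllConfigurationSetEntries(module, newVersion, configSet);

    for (ConfigurationDO newEntry : newEntries) {
        for (ConfigurationDO oldEntry : oldEntries) {
            // copy only values that were customized in the old configuration set
            if (newEntry.getKey().equals(oldEntry.getKey())
                    && !StringUtils.isEmpty(oldEntry.getValue())) {
                newEntry.setValue(oldEntry.getValue());
            }
        }
    }
    configurationDAO.updateConfigurationEntries(newEntries);
}
Java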

The following listing gives you an idea of how an implementation in your own service could look. This snippet is taken from the JavaMailClient and shows how the internal processing of the fetched ConfigurationDO objects is managed.

private void processConfiguration() {
    List<ConfigurationDO> configurationEntries =
        configurationDAO.getAllConfigurationSetEntries("core", "1", "email");

    for (ConfigurationDO entry : configurationEntries) {
        String value;
        if (StringUtils.isEmpty(entry.getValue())) {
            value = entry.getDefaultValue();
        } else {
            value = entry.getValue();
        }

        if (entry.getKey()
                .equals(cryptoTools.calculateHash("mailer.host",
                        HashAlgorithm.SHA256))) {
            configuration.replace("mailer.host", value);
        } else if (entry.getKey()
                .equals(cryptoTools.calculateHash("mailer.port",
                        HashAlgorithm.SHA256))) {
            configuration.replace("mailer.port", value);
        } else if (entry.getKey()
                .equals(cryptoTools.calculateHash("user",
                        HashAlgorithm.SHA256))) {
            configuration.replace("mailer.user", value);
        } else if (entry.getKey()
                .equals(cryptoTools.calculateHash("password",
                        HashAlgorithm.SHA256))) {
            configuration.replace("mailer.password", value);
        }
    }
}
Java

Another part of the application configuration functionality is located in the service layer. The ConfigurationService operates from the module perspective. The current methods resetModuleToDefault() and filterMandatoryFieldsOfConfigSet() already give a good impression of what that means.

If you take a look at the MailClientService you will detect the method updateDatabaseConfiguration(). You may wonder why this method is not part of the ConfigurationService. This intention is in general not wrong, but in this specific implementation the update functionality is specialized to the MailClient configuration. The basic idea of the service layer is to combine several DAO objects into a composed functionality. The orchestration layer is the correct place to combine services together into a complex process.

Resume

The implementation of the application configuration inside the small library TP-CORE allows you to define an application wide configuration registry. This also works in the case the application has a distributed architecture like microservices. The usage is quite simple and can easily be extended to your own needs. The proof that the idea works well is the real world usage in the MailClient and FeatureToggle implementations of TP-CORE.

I hope this article was helpful and that you may also like to use TP-CORE in your own project. Feel free to do so, because the Apache 2 license places no restriction on commercial usage. If you have some suggestions feel free to leave a comment or give a thumbs up.

Treasure chest – Part 1

Over the years, different techniques to store configuration settings for applications got established. We can choose between databases, property files, XML or YAML, just to give a few impressions of the options we could choose from. But before we jump into all the technical details of a possible implementation, we need to get a bit familiar with some requirements.

Many times in my professional life I have touched this topic. Problems occur periodically after an application has been updated. My peak of frustration I reached with Windows 10. After every major update many settings for security and privacy switched back to default, apps I had already uninstalled messed up my system again and so on. This was reason enough for me to choose an alternative and stop suffering. Now, after I switched to Ubuntu Mate, I'm fine, because those problems have disappeared.

Several times I also had to maintain legacy projects and needed to migrate data to newer versions, a difficult and complex procedure. Because of those activities I asked myself how this problem could be handled in a proper way. My answer you can find in the open source project TP-CORE. The feature application configuration is my way to avoid the effect of overwriting important configuration entries during the update procedure.

TP-CORE is a freely available library with some useful functionality written in Java. The source code is available on GitHub and the binaries are published on Maven Central. To use TP-CORE in your project you can add it as a dependency.

<dependency>
   <groupId>io.github.together.modules</groupId>
   <artifactId>core</artifactId>
   <version>2.2.0</version>
</dependency>
XML

The feature application configuration is implemented as ConfigurationDAO and uses a database. My decision for a database approach was driven by the requirement of having a history. Of course the choice also has some limitations. Obviously, the configuration for the database connection needs to be stored somewhere else.

TP-CORE uses Spring and Hibernate (JPA) to support several DBMS like PostgreSQL, Oracle or MariaDB. My personal preference is PostgreSQL, so as a next step we can discuss how to set up our database environment. The easiest way to run a PostgreSQL server is to use the official Docker image. If you need a brief overview of how to deal with Docker and PostgreSQL, you may like to check my article: Learn to walk with Docker and PostgreSQL. Below is a short listing of how the PostgreSQL container could get instantiated in Docker.

docker network create -d bridge --subnet=172.18.0.0/16 services

docker run -d --name postgres \
-p 5432:5432 --net services --ip 172.18.0.2 \
-e POSTGRES_PASSWORD=password \
-e PGPASSWORD=password \
-v /home/<user>/postgreSQL:/var/lib/postgresql/data \
postgres:11
Bash

You may need to make some changes to the listing above to fit it to your system. After your DBMS is running well, we have to create the schema and the user with a proper password. In our case the schema is called together, the user is also called together and the password will be together too.

CREATE ROLE together LOGIN
  ENCRYPTED PASSWORD 'md582721599778493074c7de7e2bb735332'
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;

CREATE DATABASE "together"
  WITH OWNER = together
       ENCODING = 'UTF8'
       TABLESPACE = pg_default
       LC_COLLATE = 'en_US.utf8'
       LC_CTYPE = 'en_US.utf8'
       CONNECTION LIMIT = -1;
SQL

To establish the connection from your application to the PostgreSQL DBMS we use an XML configuration for the Spring Framework. The GitHub repository of TP-CORE already contains a working configuration file called spring-dao.xml. The Spring configuration includes some other useful features like transactions and a connection pool. All necessary dependencies are already included. You just need to fill in the correct entries for the connection variables:

  • ${jdbc.user} = together
  • ${jdbc.password} = together
  • ${hibernate.dialect.database} = org.hibernate.dialect.PostgreSQL95Dialect
  • ${jdbc.driverClassName} = org.postgresql.Driver
  • ${jdbc.url} = jdbc:postgresql://172.18.0.2:5432/together
  • ${dbcp.initialSize} = 10
  • ${hibernate.hbm2ddl.auto} = update
  • ${hibernate.show_sql} = false

In the next step you need to tell your application how to instantiate the Spring context, using the configuration file spring-dao.xml. Depending on your application type you have two possibilities. For a standard Java app, you can add the following lines to your main method:

ApplicationContext context =
   new ClassPathXmlApplicationContext("/spring-dao.xml");
Java

Web applications are configured via the web.xml in the WEB-INF directory. There you need to add the following lines:

<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>/WEB-INF/spring-dao.xml</param-value>
</context-param>
XML

The creation of the database tables is managed by Hibernate during the application start. When you explore the GitHub repository of the TP-CORE project, you will find the file database.properties in the directory /src/main/filters. This file contains further connection strings for other database systems. In case you wish to compile TP-CORE on your own, you can modify database.properties to your preferred configuration. The fully processed configuration file with all token replacements can be found in the target directory.

In the next paragraph we will have a closer look at the domain object ConfigurationDO.

Most of the columns you see in the image above are quite clear about what they are used for. As a first point we need to clarify what makes an entry unique. Of course the UUID as primary key fits this requirement. In our case the UUID is the primary key and is auto generated by the application when a new row is created. But using a non human readable id as key all the time in an application, just to grab a value, is heavily error prone and uncomfortable. For this use case I decided on a combination of configuration key, module name and service version to define a unique key entry.

To understand the benefit of this construction I will give a simple example. Imagine your application has the functionality of sending e-mails. This functionality requires several configuration entries like host, user and password to connect to an SMTP server. To group all those entries together in one bundle we have the CONFIG_SET. If your application deals with a modular architecture like microservices, it can also be helpful to organize the configuration entries by module or service name. For this reason the MODULE_NAME was also included in this data structure. Both entries can be used like namespaces to fetch relevant information more efficiently.

Now it is possible that some changes to the functionality create new configuration entries or make some entries obsolete. To enable a history and allow backward compatibility, the data structure got extended by SERVICE_VERSION.

Every entry contains a mandatory default value and an optional configuration value. The application can overwrite the default value by filling the configuration value field. This allows updates without affecting the custom configuration, as long as the developers respect the rule to not fill entries for configuration values and always use the default entry. This definition is the convention over configuration paradigm.

The flags deprecated and mandatory for a configuration key are very explicit and descriptive. The column comment doesn't need any further explanation either.
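
Putting the described columns together, the domain object can be imagined roughly like the following sketch. Only the fields follow the description of this article and the getters used in part 2; the JPA annotations, the table name and the column names not mentioned in the text are assumptions and may differ from the real ConfigurationDO.

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Illustrative sketch of the described data structure - details may differ from TP-CORE.
@Entity
@Table(name = "APP_CONFIG")
public class ConfigurationDO implements Serializable {

    @Id
    private String uuid;                      // auto generated primary key

    @Column(name = "CONF_KEY")
    private String key;                       // stored as hash in the DBMS

    @Column(name = "MODULE_NAME")
    private String modulName;                 // module the entry belongs to

    @Column(name = "SERVICE_VERSION")
    private String version;                   // version of the related service

    @Column(name = "CONFIG_SET")
    private String configurationSet;          // namespace that bundles entries of a service

    @Column(name = "DEFAULT_VALUE")
    private String defaultValue;              // mandatory fallback value

    @Column(name = "CONF_VALUE")
    private String value;                     // optional custom value that overrides the default

    private boolean deprecated;               // marks obsolete entries

    private boolean mandatory;                // entry has to be configured

    private String comment;                   // free text description

    // getters and setters omitted for brevity
}
Java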

If there are changes to one or more configuration entries of a service, the whole configuration set has to be duplicated with the new service version. As an example you can have a look at the MailClient functionality of TP-CORE to see how the application configuration is used.

A very important piece of information is that the configuration key is stored in the DBMS as an SHA-512 hash. This is a simple protection against a direct manipulation of the configuration in the DBMS, outside of the application. For sure this is not huge security, but at minimum it makes things a bit uncomfortable. In the application code a human readable key name is used. The mapping is automatic, and we don't need to worry about it.

Resume

In this first part I talked about why I needed my own implementation of an application registry to store configuration settings. The solution I prefer is using a database, and I showed how to enable the database configuration in your own project. We also had a short view on the data structure and how the domain object works.

In the second part of this article I give an introduction to the Data Access Object (DAO) and its corresponding service. With this information you are able to extend or adapt the application configuration implementation to your own needs.

jConf Peru 2021 [1]

Maven POM demystified: enhance your build

For more than a decade Apache Maven has been one of the most convenient build and reporting tools for Java projects. The success of this tool is based on quite a bit more than just being a build tool: it solves problems in a simple manner. As a developer you need to run your Maven build several times, but often the knowledge is missing on how to manipulate the build without changing the build logic, the POM. I will show some tricks for solving problems when your Maven build gets stuck. In this presentation I cover regular developer issues and how to overcome them, because a well working build is more than just a bit of DevOps technique.

Learn things about token replacement, executable jar, BOM, Dependency Management, enforcement, reporting and much more in live demonstrations.

Learn to walk with Docker and PostgreSQL

After some years the virtualization tool Docker has proven its importance for the software industry. Usually when you hear something about virtualization you might think this is something for administrators that will not affect you as a developer that much. But wait, you might not be right. Having some basic knowledge about Docker as a developer will help you in your daily business.

Step 1: create the container and initialize the database

docker run -d --name pg-dbms --restart=no \
--ip 172.18.0.20 \
-e POSTGRES_PASSWORD=s3cr3t \
-e PGPASSWORD=s3cr3t \
postgres:11

If you wish that your PostgreSQL is always up after you restart your system, you should change the restart policy from no to always. After you have created the instance pg-dbms of your PostgreSQL 11 Docker image, you need to check if it was successful. This you can do with the command:

docker ps -a

Step 2: copy the initialized database directory to a local directory on your host system

docker cp pg-dbms:/var/lib/postgresql/data /home/user/pg

The biggest problem with the current container is that all data will get lost when you erase the container. This means we need to find a way to save this data permanently. The easiest way is to copy the data directory from your container to a directory on your host system. The copy command needs two parameters: source and destination. For the source you need to specify the container from which you want to grab the files; in our case the container is named pg-dbms. The destination is a PostgreSQL folder in the home directory of the user. If you use Windows instead of Linux it works the same, just adapt the directory path and try to avoid whitespaces. When the files appear in the defined directory you're done with this step.

Step 3: stop the current container

docker stop pg-dbms

In the case you wish to start a container, just replace the word stop with the word start. The container we created to grab the initial files for the PostgreSQL DBMS is no longer needed, so we can erase it, but to do that the running container first has to be stopped.

Step 4: erase the current container

docker rm pg-dbms

After the container is stopped we are able to erase it.

Step 5: recreate the container with an external volume

docker run -d --name pg-dbms \
--ip 172.18.0.20 \
-e POSTGRES_PASSWORD=s3cr3t \
-e PGPASSWORD=s3cr3t \
-v /home/user/pg:/var/lib/postgresql/data \
postgres:11

Now we can link the directory with the exported initial database to a newly created PostgreSQL container. That's all. The big benefit of this activity is that every database we create in PostgreSQL, and the data of this database, now lives outside of the Docker container on our local machine. This allows a much simpler backup and prevents losing information when a container has to be updated.

If you have other images instead of PostgreSQL where you need to grab files to reuse them, you can use this tutorial too. Just adapt the image and the paths you need; the procedure is almost the same. If you would like to get to know more facts about Docker you can also watch my video Docker Basics in less than 10 Minutes. In case you like this short tutorial, share it with your friends and colleagues. To stay informed don't forget to subscribe to my newsletter.



Tooltime: SCM-Manager

If you and your team are dealing with tools like Git or Subversion, you may need an administrative layer where you are able to manage user access and repositories in a comfortable way, because source control management systems (SCM) don’t bring this functionality out of the box.

Perhaps you are already familiar with popular management solutions like GitHub, GitBlit or GitLab. The main reason for their success is their huge functionality. And of course, if you plan to create your own build and deploy pipeline with an automation server like Jenkins you will need to host your own repository manager too.

As great as the usage of GitLab and other solutions is, there is also a little bitter taste:

  • The administration is very complicated and requires some experience.
  • The minimal requirement of hardware resources to operate those programs with good performance is not that little.

To overcome all these hurdles, I will introduce a new star in the toolmaker's sky: SCM-Manager [1]. Fast, compact, extendable and simple are the main attributes I would use to describe it.

Kick Starter: Installation

Let’s have a quick look at how easy the installation is. For fast results, you can use the official Docker container [2]. All it takes is a short command:

docker run --name scm --restart=always \
-p 8080:8080 -p 2222:2222 \
-v /home/<user>/scmManager:/var/lib/scm \
scmmanager/scm-manager:2.22.0

First, we create a container named scm based on the SCM-Manager image 2.22.0. Then, we tell the container to always restart when the host operating system is rebooted. Also, we open the ports 2222 and 8080 to make the service accessible. The last step is to mount a directory inside the container, where all configuration data and repositories are stored.

Another option to get the SCM-Manager running on a Linux server like Ubuntu is by using apt. The listing below shows how to do the installation.

echo 'deb [arch=all] https://packages.scm-manager.org/repository/apt-v2-releases/ stable main' | sudo tee /etc/apt/sources.list.d/scm-manager.list  
sudo apt-key adv --recv-keys --keyserver hkps://keys.openpgp.org 0x975922F193B07D6E 
sudo apt-get update 
sudo apt-get install scm-server

SCM-Manager can also be installed on systems like Windows or Apple. You can find information about the installations on additional systems on the download page [3]. When you perform an installation, you will find a log entry with a startup token in the console.

Startup token in the command line.

After this you can open your browser and type localhost:8080, where you can finish the installation by creating the initial administration account. In this form, you need to paste the startup token from the command line, as it is shown in image 2. After you submitted the initialization form, you get redirected to the login. That’s all and done in less than 5 minutes.

Initialization screen.

For fully scripted, untouched installations, there is also a way to bypass the initialization form by using the system property scm.initialPassword. This creates a user named scmadmin with the given password.

In older versions of the SCM-Manager, the default login account was scmadmin with the password scmadmin. This old way is quite helpful but if the administrator doesn’t disable this account after the installation, there is a high-security risk. This security improvement is new since version 2.21.

Before we discover more together about the administration, let’s first get to some details about the SCM-Manager in general. SCM-Manager is open source under MIT license. This allows commercial usage. The Code is available on GitHub. The project started as research work. Since Version 2 the company Cloudogu took ownership of the codebase and manages the future development. This construct allows the offering of professional enterprise support for companies. Another nice detail is that the SCM-Manager is made in Germany.

Pimp Me Up: Plugins

One of the most exciting details of using the SCM-Manager is, that there is a simple possibility to extend the minimal installation with plugins to add more useful functions. But be careful, because the more plugins are installed, the more resources the SCM-Manager needs to be allocated. Every development team has different priorities and necessities, for this reason, I’m always a fan of customizing applications to my needs.

Installed Plugins.

The plugin installation section is reachable by the Administration tab. If you can’t see this entry you don’t have administration privileges. In the menu on the right side, you find the entry Plugins. The plugin menu is divided into two sections: installed and available. For a better overview, the plugins are organized by categories like Administration, Authorization, or Workflow. The short description for each plugin is very precise and gives a good impression of what they do.

Some of the preinstalled plugins like in the category Source Code Management for supported repository types Git, Subversion, and Mercurial can’t be uninstalled.

Some of my favorite plugins are located in the authorization section:

  • Path Write Protection, Branch Write Protection, and,
  • Tag Protection.

Those features are the most convenient for Build- and Configuration Managers. The usage is also as simple as the installation. Let’s have a look at how it works and for what it’s necessary.

Gate Keeper: Special Permissions

Imagine your team deals, for example, with a Java/Maven project. Perhaps a rule exists that only selected people should be allowed to change the content of the pom.xml build logic. This can be achieved with the Path Write Protection Plugin. Once it is installed, navigate to the code repository and select the entry Settings in the menu on the right side. Then click on the option Path Permissions and activate the checkbox.

Configuring Path permissions.

As you can see in image 4, I created a rule that only the user Elmar Dott is able to modify the pom.xml. The opposite permission is to exclude (deny) the user. If the file or a path expression doesn't exist, the rule cannot be created. Another important detail is that this permission covers all existing branches. For easier administration, existing users can be organized into groups.

In the same way, you are able to protect branches against unwanted changes. A scenario where you could need this option is when your team uses many branches or the git-flow branch model. For instance, personal developer branches could have write permission only for the developer who owns the branch, or the release branch where the CI/CD pipeline is running could grant permissions only to the Configuration Management team members.

Let’s move ahead to another interesting feature, the review plugin. This plugin enables pull requests for your repositories. After installing the review plugin, a new bullet point in the menu of your repositories appears, it’s called Pull Requests.

Divide and Conquer: Pull Requests

Handled the right way, pull requests [4] are a very powerful workflow. During my career, I often saw the misuse of pull requests, which led to drastically reduced productivity. For this reason, I would like to go deeper into the topic.

Originally, pull requests were designed for open source projects to ensure code quality. Another name for this paradigm is dictatorship workflow [5]. Every developer submits his changes to a repository and the repository owner decides which revision will be integrated into the codebase.

If you host your project sources on GitHub, strangers can’t just collaborate in your project, they first have to fork the repository into their own GitHub space. After they commit some revisions to this forked repository, they can create a pull request to the original repository. As repository owner, you can now decide whether you accept the pull request.

The SCM tool IBM Synergy had a similar strategy almost 20 years ago. The usage got too complicated so that many companies decided to move to other solutions. These days, it looks like history is repeating itself.

The reason why I’m skeptical about using pull requests is very pragmatic. I often observed in projects that the manager doesn’t trust the developers. Then he decides to implement the pull request workflow and makes the lead developer or the architect accept the pull requests. These people are usually too busy and can’t really check all details of each single pull request. Hence, their solution is to simply merge each pull request to the code base and check if the CI pipeline still works. This way, pull requests are just a waste of time.

There is another way how pull requests can really improve the code quality in the project: if they are used as a code review tool. How this is going to work, will fill another article. For now, we leave pull requests and move to the next topic about the creation of repositories.

Treasure Chest: Repository Management

The SCM-Manager combines three different source control management repository types: Git, Subversion (SVN), and Mercurial. You could think that nobody uses Subversion anymore, but keep in mind that many companies have to deal with legacy projects managed with SVN. A migration from those projects to other technologies may be too risky or simply expensive. Therefore, it is great to have a solution that can manage more than one repository type.

If you are a Configuration Manager and have to deal with SVN, keep in mind that some things are a bit different. Subversion organizes branches and tags in directories. An SVN repository usually gets initialized with the folders:

  • trunk — like the master branch in Git.
  • branches — references to revisions in the trunk where forked code changes can be committed.
  • tags — like branches without new code revisions.

In Git you don't need this folder structure, because branches are organized in a completely different way. Git (and Mercurial), compared to Subversion, is a distributed Source Control Management System; branches are loosely coupled and can easily be deleted if they are obsolete. For now, I don't want to get lost in the basics of Source Control Management and will jump to the next interesting SCM-Manager plugins.

Uncover Secrets

If a readme.md file is located in the root folder of your project, you could be interested in the readme plugin. Once this plugin is activated and you navigate into your repository the readme.md file will be rendered in HTML and displayed.

The rendered readme.md of a repository.

If you wish to have a readable visualization of the repository’s activities, the activity plugin could be interesting for you. It creates a navigation entry in the header menu called Activity. There you can see all commit log entries and you can enter into a detailed view of the selected revision.

The activity view.

This view also contains a compare and history browser, just like clients such as TortoiseGit provide.

The Repository Manager includes many more interesting details for the daily work. There is even a code editor, which allows you to modify files directly in the SCM-Manager user interface.

Next, we will have a short walk through the user management and user roles.

Staffing Office: User and Group Management

Creating new users is, like almost every activity in the SCM-Manager, a simple thing. Just switch to the Users tab and press the create user button. Once you have filled out the form and saved it, you will be brought back to the Users overview.

Creating a new user

Here you can already see the newly created user. After this step, you will need to administrate the user’s permissions, because as of now it doesn’t have any privileges. To change that just click on the name of the newly created user. On the user’s detail page, you need to select the menu entry Settings on the right side. Now choose the new entry named Permissions. Here you can select from all available permissions the ones you need for the created account. Once this is done and you saved your changes, you can log out and log in with your new user, to see if your activity was a success.

If you need to manage a massive number of users, it's a good idea to organize them into groups. That means after a new user is created the permissions inside the user settings are not touched and stay empty. Group permissions can be managed through the Groups menu entry in the header navigation. Create a new group and select Permissions from the right menu. This configuration form is the same as the one in the user management. If you wish to add existing users to a group, switch to the point General. In the text field Members, you can search for an existing user. Once the right one is selected, press the Add Member button. After this, you need to submit the form; all changes are saved and the new permissions are applied.

To have full flexibility, it is allowed to add users to several groups (roles). If you plan to manage the SCM-Manager users by group permissions, be aware not to combine too many groups because then users could inherit rights you didn’t intend to give them. Currently, there is no compact overview to see in which groups a user is listed and which permissions are inherited by those groups. I’m quite sure in some of the future versions of the SCM-Manager this detail will be improved.

Besides the internal SCM-Manager user management, there are also plugins that allow you to connect the application with LDAP.

Lessons Learned

If you dared to wish for a simpler life in the DevOps world, maybe your wish became true. The SCM-Manager could be your best friend. The application offers a lot of functionality that I briefly described here, but there are even more advanced features that I haven’t even mentioned in this short introduction: There is a possibility to create scripts and execute them with the SCM-Manager API. Also, a plugin for the Jenkins automation server is available. Other infrastructure tools like Jira, Timescale, or Prometheus metrics gathering have an integration to the SCM-Manager.

I hope that with this little article I was able to whet your appetite for this exciting tool and I hope you enjoy trying it out.

Resources


jConf Peru 2021 [2]

Mission accomplished: deploy to Maven Central

Many developers have ideas they work on in their spare time. Most of us are convinced of open source and share our own projects on platforms like GitHub. But what happens after the publication of the source code? If you really want to get people to use your project, you're not done yet. It's also a good idea to publish your artifacts for a simple usage.

The most famous storage for binary Java artifacts is Maven Central. It doesn't matter whether you use Ivy, Gradle or Maven as dependency management in your projects; all those technologies access Maven Central. In this talk you will learn how to publish your artifacts with Maven to Sonatype Nexus OSS. We pass through all steps from creating accounts until the binaries are available. In between I give some general hints about the usage of repository managers and helpful tricks for a lightweight Release Management.


JConf Mexico 2020

Continuous Stupidities – did DevOps fail?

A heavy motivation to automate everything, even the automation itself, is the common understanding of most DevOps teams. A strong dominance of administrators (Operations) in those teams transforms ideas for improving the development process into a personal playground, far away from the necessities of the DEV team, which has to create the application. Developers often have to fight heavily with the provided infrastructure, because it massively breaks down productivity. In this talk we resume the lessons learned after more than one decade as configuration manager in different enterprise projects, to discuss nonsense DevOps workflows.

Version Number Anti-Patterns

After the gang of four (GoF), Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, published the book Design Patterns: Elements of Reusable Object-Oriented Software, learning how to describe problems and solutions became popular in almost every field of software development. Likewise, learning to describe don'ts and anti-patterns became equally popular.

In publications that discussed these concepts, we find helpful recommendations for software design, project management, configuration management, and much more. In this article, I will share my experience dealing with version numbers for software artifacts.

Most of us are already familiar with a method called semantic versioning, a powerful and easy-to-learn rule set for how version numbers have to be structured and how the segments should increase.

Version numbering example:

  • Major: Incompatible API changes.
  • Minor: Add new functionality.
  • Patch: Bugfixes and corrections.
  • Label: SNAPSHOT marking the “under development” status.

An incompatible API change occurs when an externally accessible function or class is deleted or renamed. Another possibility is a change in the signature of a method, meaning the return value or parameters have been changed from the original implementation. In these scenarios, it's necessary to increase the Major segment of the version number. Such changes present a high risk for API consumers because they need to adapt their own code.
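
A small, invented example makes the rule tangible. The MailService interface below is purely illustrative:

public interface MailService {

    // 1.x signature was: void send(String recipient, String message);
    // In 2.0.0 the return value and the parameters changed. This is an incompatible
    // API change: consumers must adapt their code, so the major segment increases.
    boolean send(String recipient, String subject, String message);
}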

When dealing with version numbers, it's also important to know that 1.0.0 and 1.0 are equal. This affects the requirement that versions of a software release have to be unique; if they are not, it's impossible to distinguish between artifacts. Several times in my professional experience, I was involved in projects where there was no well-defined process for creating version numbers. The effect of these circumstances was that the team had to secure the quality of the artifact and got confused about which artifact version they were currently dealing with.

The biggest mistake I ever saw was storing the version of an artifact in a database together with other configuration entries. The correct procedure should be: place the version inside the artifact in a way that, after a release, no one can change it from the outside. The trap you could fall into is the process of how to update the version after a release or installation.

Maybe you have a checklist for all manual activities during a release. But what happens after a release is installed in a testing stage and for some reason another version of the application has to be installed? Are you still aware of changing the version number manually? How do you find out which version is installed, or whether the information in the database is correct?

Detecting the correct version in this situation is a very difficult challenge. For that reason, we have the requirement to keep the version inside the application. In the next step, we will discuss a secure and simple automatic approach to this problem.

Our precondition is a simple Java library built with Maven. By default, the version number of the artifact is written down in the POM. After the build process, our artifact is created and named like artifact-1.0.jar or similar. As long as we don't rename the artifact, we have a proper way to distinguish the versions. Even after a rename, a simple trick of unpacking the artifact and checking the META-INF folder lets us find the correct value.

If you have the version hardcoded in a property or class file, it also works fine, as long as you don't forget to always update it. Maybe the branching and merging in SCM systems like Git needs your special attention to always have the correct version in your codebase.

Another solution is using Maven and the token replacement mechanism. Before you run to try it out in your IDE, keep in mind that Maven uses two different folders: sources and resources. The token replacement in sources will not work properly: after a first run, your variable is replaced by a fixed number and gone, so a second run will fail. To prepare your code for the token replacement, you need to configure Maven as a first step in the build lifecycle:

<build>
   <resources>
      <resource>
         <directory>src/main/resources/</directory>
         <filtering>true</filtering>
      </resource> 
   </resources>
   <testResources>
      <testResource>
         <directory>src/test/resources/</directory>
         <filtering>true</filtering>
      </testResource>
   </testResources>
</build>

After this step, you need to know the ${project.version} property from the POM. This allows you to create a file with the name version.property in the resources directory. The content of this file is just one line: version=${project.version}. After a build, you will find in your artifact the version.property with the same version number you used in your POM. Now you can write a function to read the file and use this property. You could store the result in a constant for use in your program. That's all you have to do!
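
A possible sketch of such a read-out could look like this; the resource name version.property follows the description above, while the class and constant names are made up:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class Version {

    // holds the version number written into the file by the Maven token replacement
    public static final String VERSION = readVersion();

    private Version() { }

    private static String readVersion() {
        Properties properties = new Properties();
        // version.property is filtered by Maven and ends up in the root of the artifact
        try (InputStream stream =
                Version.class.getClassLoader().getResourceAsStream("version.property")) {
            if (stream != null) {
                properties.load(stream);
            }
        } catch (IOException ex) {
            // ignore and fall back to the default below
        }
        return properties.getProperty("version", "unknown");
    }
}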

Example: https://github.com/ElmarDott/TP-CORE/blob/master/src/main/java/org/europa/together/utils/Constraints.java

Non-Functional Requirements: Quality

By experience, most of us know how difficult it is to express what we mean when talking about quality. Why is that so? There exist many different views on quality and every one of them has its importance. What has to be defined for our project is something that fits its needs and works within the budget. Trying to reach perfectionism can be counterproductive if a project is to be completed successfully. We will start based on a research paper written by B. W. Boehm in 1976 called "Quantitative evaluation of software quality." Boehm highlights the different aspects of software quality and the right context. Let's have a deeper look into this topic.

When we discuss quality, we should focus on three topics: code structure, implementation correctness, and maintainability. Many managers just care about the first two aspects, but not about maintenance. This is dangerous because enterprises will not invest in individual development just to use the application for only a few years. Depending on the complexity of the application, the price for its creation could reach hundreds of thousands of dollars. It is understandable then that the expected business value of such activities is estimated highly. A lifetime of 10 years and more in production is very typical. To keep the benefits, adaptions will be mandatory, which also implies a strong focus on maintenance. Clean code doesn't automatically mean your application is easy to change. A very easily understandable article that touches on this topic was written by Dan Abramov. Before we go further into how maintenance could be defined, we will discuss the first point: the structure.

Scaffolding Your Project

An often underestimated aspect in development divisions is a missing standard for project structures. A fixed definition of where files have to be placed helps team members find points of interest quickly. Such a meta-structure for Java projects is defined by the build tool Maven. More than a decade ago, companies tested Maven and readily adapted the tool to the established folder structures used in their projects. This resulted in heavy maintenance tasks, given that more and more infrastructure tools for software development were being used. Those tools operate on the standard that Maven defines, meaning that every customization affects the success of integrating new tools or exchanging an existing tool for another.

Another aspect to look at is the company-wide defined META architecture. When possible, every project should follow the same META architecture. This will reduce the time it takes a new developer to join an existing team and catch up with its productivity. This META architecture has to be open for adaptations, which can be achieved by two simple steps:

  1. Don’t be concerned with too many details;
  2. Follow the KISS (Keep it simple, stupid.) principle.

A classical pattern that violates the KISS principle is when standards get heavily customized. A very good example of the effects of strong customization is described by George Schlossnagle in his book "Advanced PHP Programming." In chapter 21 he explains the problems created for the team when adapting the original PHP core instead of following the recommended way via extensions. This resulted in the effect that every update of the PHP version had to be manually manipulated to include their own development adaptations to the core. In conjunction, structure, architecture, and KISS already define three quality gates, which are easy to implement.

The open-source project TP-CORE, hosted on GitHub, concerns itself with the afore-mentioned structure, architecture, and KISS. There you can find an approach on how to put it into practice. This small Java library rigidly follows the Maven convention with its directory structure. For fast compatibility detection, releases are defined by semantic versioning. A layered structure was chosen as its architecture and is fully described here. An examination of its main architectural decisions concludes as follows:

Each layer is defined by its own package and the files also follow a strict rule. No special PRE or POST-fix is used. The functionality Logger, for example, is declared by an interface called Logger and the corresponding implementation LogbackLogger. The API interfaces can be found in the package "business" and the implementation classes are located in the package "application." Naming like ILogger and LoggerImpl should be avoided. Imagine a project that was started 10 years ago and the LoggerImpl was based on Log4J. Now a new requirement arises, and the log level needs to be updated during run time. To solve this challenge, the Log4J library could be replaced with Logback. Now it is understandable why it is a good idea to name the implementation class like the interface, combined with the implementation detail: it makes maintenance much easier! Equal conventions can also be found within the Java standard API. The interface List is implemented by an ArrayList. Obviously, again the interface is not labeled as something like IList and the implementation not as ListImpl.

Summarizing this short paragraph, a full measurement rule set was defined to describe our understanding of structural quality. By experience, this description should be short. If other people can easily comprehend your intentions, they willingly accept your guidance, deferring to your knowledge. In addition, the architect will be much faster in detecting rule violations.

Measure Your Success

The most difficult part is to keep the code clean. Some advice is not bad per se, but in the context of your project it may not prove as useful. In my opinion, the most important rule would be to always activate the compiler warnings, no matter which programming language you use! All compiler warnings have to be resolved when a release is prepared. Companies dealing with critical software, like NASA, strictly apply this rule in their projects, resulting in utter success.

Coding conventions about naming, line length, and API documentation, like JavaDoc, can be simply defined and observed by tools like Checkstyle. This process can run fully automated during your build. Be careful; even if the code checkers pass without warnings, this does not mean that everything is working optimally. JavaDoc, for example, is problematic. With an automated Checkstyle, it can be assured that this API documentation exists, although we have no idea about the quality of those descriptions.

There should be no need to discuss the benefits of testing in this case; let us rather take a walkthrough of test coverage. The industry standard of 85% of covered code in test cases should be followed, because coverage at less than 85% will not reach the complex parts of your application, while 100% coverage just burns down your budget fast without resulting in higher benefits. A prime example of this is the TP-CORE project, whose test coverage is mostly between 92% and 95%. This was done to explore what is realistically achievable.

As already explained, the business layer contains just interfaces, defining the API. This layer is explicitly excluded from the coverage checks. Another package is called internal and it contains hidden implementations, like the SAX DocumentHandler. Because of the dependencies the DocumentHandler is bound to, it is very difficult to test this class directly, even with Mocks. This is unproblematic given that the purpose of this class is only for internal usage. In addition, the class is implicitly tested by the implementation using the DocumentHandler. To reach higher coverage, it also could be an option to exclude all internal implementations from checks. But it is always a good idea to observe the implicit coverage of those classes to detect aspects you may be unaware of.

Besides the low-level unit tests, automated acceptance tests should also be run. Paying close attention to these points may avoid a variety of problems. But never trust those fully automated checks blindly! Regularly repeated manual code inspections will always be mandatory, especially when working with external vendors. In our talk at JCON 2019, we demonstrated how simply test coverage could be faked. To detect other vulnerabilities you can additionally run checkers like SpotBugs and others more.

Tests don’t indicate that an application is free of failures, but they indicate a defined behavior for implemented functionality.

For a while now, SCM suites like GitLab or Microsoft Azure have supported pull requests, introduced long ago by GitHub. Those workflows are nothing new; IBM Synergy used to apply the same technique. A Build Manager was responsible for merging the developers' changes into the codebase. In a rapid manner, all the revisions performed by the developers are just added into the repository by the Build Manager, who does not hold a sufficiently profound knowledge to decide about the implementation quality. The usual practice was to simply secure that the build is not broken and that the compile always produces an artifact.

Enterprises have discovered this as a new strategy for handling pull requests. Now, managers often make the decision to use pull requests as a quality gate. In my personal experience, this slows down productivity because it takes time until the changes are available in the codebase. Understanding the branch and merge mechanism helps you to decide on a simpler branch model, like release branch lines. On those branches, tools like SonarQube operate to observe the overall quality goal.

If a project needs an orchestrated build, with a defined order in which artifacts have to be created, you have a strong hint that a refactoring is needed.

The coupling between classes and modules is often underestimated. It is very difficult to have an automated visualization of the bindings between modules. You will find out very quickly when light coupling is violated, because the complexity of your build logic increases.

Repeat Your Success

Rest assured, changes will happen! It is a challenge to keep your application open for adjustments. Several of the previous recommendations have implicit effects on future maintenance. Good source quality simplifies the endeavor of being prepared, but there is no guarantee. In the worst case the end of the product lifecycle (EOL) is reached, when mandatory improvements or changes cannot be realized anymore, for example because of an eroded code base.

As already mentioned, light coupling brings numerous benefits with respect to maintenance and reutilization. Reaching this goal is not as difficult as it might look. In the first place, try to avoid as much as possible the inclusion of third-party libraries. Just to check whether a String is empty or NULL, it is unnecessary to depend on an external library; these few lines are quickly written by oneself. A second important point to be considered in relation to external libraries: "Only one library to solve a problem." If your project deals with JSON, then decide on one implementation and don't incorporate various artifacts. These two points heavily impact security: a third-party artifact we avoid using will not be able to cause any security leaks.

After the decision is taken for an external implementation, try to cover its usage in your project by applying design patterns like proxy, facade, or wrapper. This allows for an easier replacement, because the code changes are not spread around the whole codebase. You don't need to change everything at once if you follow the advice on how to name the implementation class and provide an interface. Even though an SCM is designed for collaboration, there are limitations when more than one person is editing the same file. Using a design pattern to hide information allows you an iterative update of your changes.
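
As an illustration of this advice, a tiny facade could look like the following sketch. Jackson is only picked here as an example for the one JSON implementation you decided on, and all names are made up.

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

// Own, stable API the rest of the codebase depends on.
public interface JsonTools {
    String serialize(Object object);
}

// The only place that touches the third-party library. Swapping Jackson for
// another implementation later means changing just this class.
class JacksonJsonTools implements JsonTools {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public String serialize(final Object object) {
        try {
            return mapper.writeValueAsString(object);
        } catch (JsonProcessingException ex) {
            throw new IllegalStateException("JSON serialization failed", ex);
        }
    }
}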

Conclusion

As we have seen: a nonfunctional requirement is not that difficult to describe. With a short checklist, you can clearly define the important aspects for your project. It is not necessary to check all points for every commit to the repository; in all probability this would just raise costs without yielding higher benefits. Running a full check around a day before the release is an effective way to keep quality in an agile context and will help you recognize where optimization is necessary. Points of interest (POI) for securing quality are the revisions in the code base for a release. This gives you comparable statistics and helps improve estimations.

Of course, in this short article, it is almost impossible to cover all aspects regarding quality. We hope our explanation helps you link theory to best practice through examples. In conclusion, this should be your main takeaway: a high level of automation within your infrastructure, like continuous integration, is extremely helpful, but it doesn’t free you from manual code reviews and audits.

Checklist

  • Follow common standards
  • KISS – keep it simple, stupid!
  • Use an equal directory structure across different projects
  • Keep a simple META architecture that can be reused as much as possible in other projects
  • Define and follow coding styles
  • When a release is prepared, no compiler warnings are accepted
  • Aim for a test coverage of at least 85%
  • Avoid third-party libraries as much as possible
  • Don’t support more than one technology for a specific problem (e.g., JSON)
  • Cover foreign code with a design pattern
  • Avoid strong object/module coupling

Acceptance Tests in Java With JGiven

Most of the developer community knows what a unit test is, even if they don’t write them. But there is still hope. The situation is changing: more and more projects hosted on GitHub contain unit tests.

In a standard set-up for Java projects with NetBeans, Maven, and JUnit, it is not that difficult to produce your first test code. This approach is used in Test-Driven Development (TDD) and also exists in other techniques like Behavior-Driven Development (BDD), also known as acceptance testing, which is what we will focus on in this article.

Difference Between Unit and Acceptance Tests

The easiest way to become familiar with this topic is to look at a simple comparison between unit and acceptance tests. In this context, unit tests are very low level. They execute a function and compare the output with an expected result. Some people think differently about it, but in our example, the only one responsible for a unit test is the developer.

Keep in mind that the test code is placed in the project and always gets executed when the build is running. This provides quick feedback as to whether or not something went wrong. As long as a test doesn’t cover too many aspects, we are able to identify the problem quickly and provide a solution. The design principle of those tests follows the AAA paradigm: define a precondition (Arrange), execute the invariant (Act), and check the postconditions (Assert). We will come back to this approach a little later.
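A minimal JUnit 5 example of the AAA structure might look like the following sketch; the tested class is just the standard java.util.ArrayList, so the example stays self-contained:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

public class ArrangeActAssertTest {

    @Test
    void addEntryIncreasesSize() {
        // Arrange: define the precondition
        List<String> entries = new ArrayList<>();

        // Act: execute the invariant
        entries.add("first entry");

        // Assert: check the postcondition
        assertEquals(1, entries.size());
    }
}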

When we check the test coverage with tools like JaCoCo and cover more than 85 percent of our code with test cases, we can expect good quality. While coverage increases, we specify our test cases more precisely and are able to identify some optimizations, for example removing or inverting conditions, because during testing we find out that some sections are almost impossible to reach. Of course, the topic is a bit more complicated, but those details could be discussed in another article.

Acceptance tests are classified in the same way as unit tests: they belong to the family of regression tests. This means we want to observe whether the changes we made to the code have side effects on functionality that already worked. In other words, we want to ensure that nothing that is already working gets broken by our changes. The tool of our choice is JGiven [1]. Before we look at some examples, we first need to touch on a bit of theory.

JGiven In-Depth

A test case we define in JGiven is called a scenario. A scenario is a collection of four classes: the scenario itself, the Given displayed as given (Arrange), the Action displayed as when (Act), and the Outcome displayed as then (Assert).

In most projects, especially when there is a huge amount of scenarios and their execution consumes a lot of time, acceptance tests are organized in a separate project. With a build job on your CI server, you can execute those tests once a day to get fast feedback and to react early if something is broken. The code example we demonstrate keeps everything in one project on GitHub [2], because it is just a small library and a separation would over-engineer the project. Usually, the one responsible for acceptance tests is the test center, not the developer.

The sample project TP-CORE is organized as a layered architecture. For our example, we picked out the functionality for sending e-mails. The basic functionality to compose an e-mail is realized in the application layer and has a test coverage of up to 90 percent. The functionality to send the e-mail is defined in the service layer.

In our architecture, we decided that the service layer is our center of attention for defining acceptance tests. Here, we want to see whether our requirement to send an e-mail works well. Supporting this layer with additional unit tests is not that efficient because, in commercial projects, it just produces costs without winning benefits. Writing unit tests as well would mean doing double the work, because our JGiven tests already demonstrate and prove that the function works. For the same reason, it makes no sense to generate test coverage for the test scenarios of the acceptance test.

Let’s start with a practical example. First, we need to include our acceptance test framework in our Maven build. In case you prefer Gradle, you can use the same GAV parameters to define the dependency in your build script.

<dependency>
   <groupId>com.tngtech.jgiven</groupId>
   <artifactId>jgiven-junit</artifactId>
   <version>0.18.2</version>
   <scope>test</scope>
</dependency>

Listing 1: Dependency for Maven.

As you can see in Listing 1, JGiven works well together with JUnit. An integration for TestNG also exists; you just need to replace the artifactId with jgiven-testng. To enable the HTML reports, you need to configure the Maven plugin in the build lifecycle, as shown in Listing 2.

<build> 
   <plugins> 
      <plugin>
         <groupId>com.tngtech.jgiven</groupId>
         <artifactId>jgiven-maven-plugin</artifactId>
         <version>0.18.2</version>
         <executions>
            <execution>
               <goals>
                  <goal>report</goal>
               </goals>
            </execution>
         </executions>
         <configuration>
            <format>html</format>
         </configuration>
      </plugin>
   </plugins>
</build>

Listing 2: Maven Plugin Configuration for JGiven.

The report of our scenarios in the TP-CORE project is shown in Image 1. As we can see, the output is very descriptive and human-readable. This is achieved by following some naming conventions for our methods and classes, which will be explained in detail below. First, let’s discuss what we can see in our test scenario. We defined five preconditions:

  1. The configuration for the SMTP server is readable
  2. The SMTP server is available
  3. The mail has a recipient
  4. The mail has attachments
  5. The mail is fully composed

If all these conditions are true, the action send a single e-mail is performed. Afterward, the SMTP server is checked and we see that the mail has arrived. For the SMTP service, we use the small Java library GreenMail [3] to emulate an SMTP server. Now it becomes understandable why it is advantageous for acceptance tests to be written by other people: this increases quality, because conceptional inconsistencies appear early on. As long as the tester cannot map the required scenario with the available implementation, the requirement is not fully implemented.
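To give an impression of how such an SMTP emulation can look, here is a small, self-contained sketch using GreenMail (assuming the javax.mail flavor); the addresses and subject are only placeholder values, and the wiring differs from the exact set-up in TP-CORE:

import static org.junit.jupiter.api.Assertions.assertEquals;

import com.icegreen.greenmail.util.GreenMail;
import com.icegreen.greenmail.util.GreenMailUtil;
import com.icegreen.greenmail.util.ServerSetupTest;
import javax.mail.internet.MimeMessage;
import org.junit.jupiter.api.Test;

public class SmtpEmulationTest {

    @Test
    void mailArrivesAtEmulatedSmtpServer() throws Exception {
        // Start an in-memory SMTP server on a test port
        GreenMail smtpServer = new GreenMail(ServerSetupTest.SMTP);
        smtpServer.start();
        try {
            // Send a simple text mail against the emulated server
            GreenMailUtil.sendTextEmailTest(
                    "otto@sample.org", "noreply@sample.org",
                    "JGiven Test E-Mail", "some content");

            // Fetch the received messages and verify the outcome
            MimeMessage[] received = smtpServer.getReceivedMessages();
            assertEquals(1, received.length);
            assertEquals("JGiven Test E-Mail", received[0].getSubject());
        } finally {
            smtpServer.stop();
        }
    }
}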

Producing Descriptive Scenarios

Now is a good time to dive deeper into the implementation details of our send e-mail test scenario. Our object under test is the class MailClientService. The corresponding test class is MailClientScenarioTest, defined in the test packages. The scenario class definition is shown in Listing 3.

@RunWith(JUnitPlatform.class) 
public class MailClientScenarioTest
       extends ScenarioTest<MailServiceGiven, MailServiceAction, MailServiceOutcome> {
    // do something 
}

Listing 3: Acceptance Test Scenario for JGiven.

As we can see, we execute the test framework with JUnit 5. In the ScenarioTest, we can see the three classes Given, Action, and Outcome in a special naming convention. It is also possible to reuse already defined stage classes, but be careful with such practices; they can cause side effects. Before we implement the test method, we need to define the execution steps. The procedure for the three classes is equivalent.

@RunWith(JUnitPlatform.class) 
public class MailServiceGiven 
       extends Stage<MailServiceGiven> {

    public MailServiceGiven email_has_recipient(MailClient client) {
        try { 
            assertEquals(1, client.getRecipentList().size());
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
        return self(); 
    } 
} 

@RunWith(JUnitPlatform.class)
public class MailServiceAction
       extends Stage<MailServiceAction> {

    public MailServiceAction send_email(MailClient client) {
        MailClientService service = new MailClientService();
        try {
            assertEquals(1, client.getRecipentList().size());
            service.sendEmail(client);
        } catch (Exception ex) { 
            System.err.println(ex.getMessage());
        }
        return self();
    }
}

@RunWith(JUnitPlatform.class)
public class MailServiceOutcome 
       extends Stage<MailServiceOutcome> {

    public MailServiceOutcome email_is_arrived(MimeMessage msg) { 
         try {
             Address adr = msg.getAllRecipients()[0];
             assertEquals("JGiven Test E-Mail", msg.getSubject());
             assertEquals("noreply@sample.org", msg.getSender().toString());
             assertEquals("otto@sample.org", adr.toString());
             assertNotNull(msg.getSize());
         } catch (Exception ex) {
             System.err.println(ex.getMessage());
         }
         return self();
    }
}

Listing 4: Implementing the AAA Principle for Behavioral Driven Development.

Now we have completed the cycle and can see how the test steps are glued together. JGiven supports a much richer vocabulary to fit further needs. To explore the full possibilities, please consult the documentation.
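The glue inside MailClientScenarioTest is a plain test method that chains the stages via the given(), when(), and then() accessors inherited from ScenarioTest. The following is only a sketch, assuming a prepared MailClient instance and the MimeMessage fetched from GreenMail are available in the test set-up; the method and variable names are illustrative:

@Test
public void sendSingleEmail() {
    // client and message are assumed to be prepared in the test set-up
    given().email_has_recipient(client);
    when().send_email(client);
    then().email_is_arrived(message);
}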

Lessons Learned

In this short workshop, we passed through all the important details needed to start with automated acceptance tests. Besides JGiven, other frameworks such as Concordion or FitNesse compete for adoption. Our reasons for choosing JGiven were its helpful documentation, its simple integration into Maven builds and JUnit tests, and its descriptive, human-readable reports.

One negative point, which could keep people away from JGiven, is the fact that you need to describe the tests in the Java programming language. That means the test engineer needs to be able to develop in Java in order to use JGiven. Apart from this small detail, our experience with JGiven is absolutely positive.