Bottleneck Pull Requests

The confident use of source control management (SCM) systems such as Git is essential for programmers (development) and system administrators (operations). These tools have a long tradition in software development and enable development teams to work together on a code base. They answer four questions: When was the change made? Who made the change? What was changed? Why was it changed? That makes them pure collaboration tools.

With the advent of the code hosting platform GitHub, so-called pull requests were introduced. A pull request is a workflow that allows developers to propose code changes for repositories to which they only have read access. Only after the owner of the original repository has reviewed and approved the proposed changes are they integrated. This is also where the name comes from: a developer copies the original repository into his own GitHub workspace, makes changes and requests that the owner of the original repository pull the change in. The owner can then accept the changes, adapt them himself if necessary, or reject them with a reason.

Anyone who thinks that GitHub was particularly innovative here is mistaken. This process is old hat in the open source community, where it was originally called the Dictatorship Workflow. IBM's commercial SCM Rational Synergy, first published in 1990, is based precisely on this workflow. With the class of distributed version control tools to which Git belongs, the Dictatorship Workflow is quite easy to implement, so it was obvious that GitHub would make this process available to its users, just under a much more appealing name. Anyone who works with the free DevOps solution GitLab, for example, will know pull requests as merge requests. The most common Git servers now include the pull request process. Without going into the technical details of implementing pull requests, we will focus on the typical problems that open source projects face.

Developers who want to participate in an open source project are called maintainers. Almost every project has a short guide on how to support it and which rules apply. For people who are learning to program, open source projects are ideal for improving their own skills quickly and significantly. For the project itself, this means having maintainers with a wide range of skills and experience. Without a control mechanism, the code base would erode in a very short time.

If the project is quite large and many maintainers are working on the code base, it is hardly possible for the owner of the repository to process all pull requests in a timely manner. To counteract this bottleneck, the Dictatorship Workflow was expanded to the Dictator-Lieutenant Workflow: an intermediate layer was introduced that distributes the review of pull requests across several shoulders. These so-called lieutenants are particularly active maintainers with an already established reputation. The dictator therefore only needs to review the lieutenants' pull requests. This is an immense reduction in workload and ensures that there is no backlog of features due to unprocessed pull requests. After all, improvements and extensions should be included in the code base as quickly as possible so that they can be made available to users in the next release.

This approach is still the standard in open source projects for ensuring quality. You can never say for certain who is involved in the project; there may even be one or two saboteurs. This idea is not so far-fetched: companies whose commercial product faces strong competition from the free open source sector could come up with unfair ideas if there were no safeguards. In addition, maintainers cannot be disciplined the way team members in a company can. It is difficult to threaten a maintainer who is resistant to advice and, despite repeated requests, does not adhere to the project's conventions with a pay cut. The only option is to exclude this person from the project.

Even though the disciplinary problem described above does not exist in commercial teams, there are still difficulties in these environments that need to be overcome, and they date back to the early days of version control tools. The first representatives of this species were not distributed but centralized solutions. CVS and Subversion (SVN) only ever keep the latest revision of the code base on the local development computer; without a connection to the server you can hardly work at all. This is different with Git. Here you have a copy of the repository on your own computer, so you can do your work locally in a separate branch and, when you are finished, bring these changes into the main development branch and then transfer them to the server. The ability to create branches offline and merge them locally has a decisive influence on the stability of your own work if the repository gets into an inconsistent state, because in contrast to centralized SCM systems you can continue working without having to wait for the main development branch to be repaired.

Such inconsistencies arise very easily: all it takes is forgetting a file in a commit, and team members can no longer compile the project locally and are hampered in their work. The concept of Continuous Integration (CI) was established to overcome this problem. Contrary to what is often assumed, it is not about integrating different components into an application. The aim of CI is to keep the commit stage (the code repository) in a consistent state. For this purpose, build servers were established which regularly check the repository for changes and then build the artifact from the source code. A very popular build server that has been established for many years is Jenkins, which originally emerged as a fork of the Hudson project. Build servers now take on many other tasks, which is why it makes a lot of sense to call this class of tools automation servers.

With this brief look at the history of software development, we now understand the problems of open source projects and of commercial software development, and we have covered the origin of the pull request. In commercial projects it often happens that teams are forced by project management to work with pull requests. For a project manager without technical background knowledge, it seems to make a lot of sense to establish pull requests in his project as well: he has the idea that this will improve code quality. Unfortunately, this is not the case. The only thing that happens is that a feature backlog is provoked and the team is forced to work harder without improving productivity, since every pull request must be evaluated by a competent person, which causes unpleasant delays in large projects.

Now I often see the argument that pull requests can be automated. This means that the build server takes the branch with the pull request and tries to build it, and if the compilation and automated tests are successful, the server tries to incorporate the changes into the main development branch. Maybe I’m seeing something wrong, but where is the quality control? It’s a simple continuous integration process that maintains the consistency of the repository. Since pull requests are primarily found in the Git environment, a temporarily inconsistent repository does not mean a complete stop to development for the entire team, as is the case with Subversion.

Another interesting question is how to deal with semantic merge conflicts during an automatic merge. These are not a serious problem per se; they will simply lead to the rejection of the pull request with a corresponding message to the developer, so that the problem can be solved with a new pull request. However, unfavorable branch strategies can lead to disproportionate additional work.

I see no added value in the use of pull requests in commercial software projects, which is why I advise against using pull requests in this context. Apart from a more complicated CI/CD pipeline and the increased resource consumption of the automation server, which now does the work twice, nothing is gained. The quality of a software project can be improved by introducing automated unit tests and a test-driven approach to implementing features. Here it is necessary to continuously monitor and improve the test coverage of the project. Static code analysis and activating compiler warnings bring better results with significantly less effort.

Personally, I believe that companies that rely on pull requests either misuse them as a complicated CI process or completely distrust their developers and deny that they do a good job. Of course, I am open to a discussion on the topic; perhaps an even better solution can be found. I would therefore be happy to receive lots of comments with your views and experiences of dealing with pull requests.

Configuration files in software applications

Why do we even need the option to save application configurations in text files? Isn’t a database sufficient for this purpose? The answer to this question is quite trivial. The information on how an application can connect to a database is difficult to save in the database itself.

Now you could certainly argue that you can achieve such things with an embedded database such as SQLite. That may be correct in principle, but this solution is not really practical for highly scalable applications, and you don't always have to use a sledgehammer to crack a nut. Saving important configuration parameters in text files has a long tradition in software development, and various text formats such as INI, XML, JSON and YAML have become established for this use case. For this reason, the question arises as to which format is best to use for your own project.

INI Files

One of the oldest formats is the well-known INI file. It stores information according to the key = value principle. If a key appears multiple times in such an INI file, earlier values are overwritten by the last entry that appears in the file.

; Example of an INI file
[Section-name]
key=value ; inline comment

text="text configuration with spaces and \' quotes"
string='can be also written like this'
char=password

# numbers & digits
number=123
hexa=0x123
octa=0123
binary=0b1111
float=123.12

# boolean values
value-1=true
value-0=false
INI

As we can see in the small example, the syntax in INI files is kept very simple. The [section] name is used primarily to group individual parameters and improves readability. Comments can be marked with either ; or #. Otherwise, there is the option of defining various text and number formats, as well as Boolean values.

Web developers know INI files primarily from the PHP configuration, the php.ini, in which important properties such as the size of the file upload can be specified. INI files are also still common under Windows, although the registry was introduced for this purpose in Windows 95.

Properties

Another tried and tested solution are so-called properties files. This solution is particularly common in Java programs, as Java already has a simple class that can handle properties. The key=value format is borrowed from INI files. Comments are also introduced with #.

# PostgreSQL
hibernate.dialect.database = org.hibernate.dialect.PostgreSQLDialect
jdbc.driverClassName = org.postgresql.Driver 
jdbc.url = jdbc:postgresql://127.0.0.1:5432/together-test
Plaintext
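
Reading such a file in plain Java requires nothing more than the JDK class java.util.Properties. A minimal sketch, assuming the entries above are stored in a file called database.properties:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class PropertyExample {

    public static void main(String[] args) throws IOException {
        Properties config = new Properties();
        // load the key=value pairs from the text file
        try (FileInputStream stream = new FileInputStream("database.properties")) {
            config.load(stream);
        }
        // all values are returned as plain strings
        String url = config.getProperty("jdbc.url");
        System.out.println("Connecting to: " + url);
    }
}
Java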

In order to ensure type safety when reading .properties files in Java programs, the TP-CORE library provides an extended implementation. Although the properties are read in as strings, the values can be accessed in a typed manner. A detailed description of how the PropertyReader class can be used can be found in the documentation.

.properties files can also be used as filters for substitutions in the Maven build process. Of course, properties are not limited to Maven and Java; the concept can also be used in languages such as Dart, Node.js, Python and Ruby. To ensure the greatest possible compatibility of the files between the different languages, exotic notation options should be avoided.

XML

XML has also been a widely used option for many years to store configurations in an application in a changeable manner. Compared to INI and property files, XML offers more flexibility in defining data. A very important aspect is the ability to define fixed structures using a grammar. This allows validation even for very complex data. Thanks to the two checking mechanisms of well-formedness and data validation against a grammar, possible configuration errors can be significantly reduced.

Well-known application scenarios for XML can be found, for example, in Java Enterprise projects (JEE) with the web.xml, or in the Spring Framework and Hibernate configurations. The power of XML even allows it to be used as a Domain Specific Language (DSL), as the Apache Maven build tool does.

Thanks to many freely available libraries, there is an implementation for almost every programming language to read XML files and access specific data. For example, PHP, a language popular with web developers, has a very simple and intuitive solution for dealing with XML with the Simple XML extension.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1" 
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/assembly/ApplicationContext.xml</param-value>
    </context-param>
    <context-param>
        <param-name>javax.faces.PROJECT_STAGE</param-name>
        <param-value>${jsf.project.stage}</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <listener>
        <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>*.xhtml</url-pattern>
    </servlet-mapping>

    <welcome-file-list>
        <welcome-file>index.xhtml</welcome-file>
    </welcome-file-list>
</web-app>
XML
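
Thanks to the APIs shipped with the JDK, reading such a file from Java needs no extra library at all. A minimal sketch that uses the DOM parser to print the context parameters from the web.xml above (the file name is an assumption):

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class WebXmlReader {

    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document document = builder.parse(new File("web.xml"));

        // iterate over all <context-param> elements and print name and value
        NodeList params = document.getElementsByTagName("context-param");
        for (int i = 0; i < params.getLength(); i++) {
            Element param = (Element) params.item(i);
            String name = param.getElementsByTagName("param-name").item(0).getTextContent();
            String value = param.getElementsByTagName("param-value").item(0).getTextContent();
            System.out.println(name + " = " + value);
        }
    }
}
Java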

JSON

JavaScript Object Notation, or JSON for short, is a relatively young technology, although it has been around for several years. JSON also has a corresponding implementation for almost every programming language. The most common use case for JSON is data exchange in microservices. The reason for this is the compactness of JSON: compared to XML-based web services such as XML-RPC or SOAP, the data stream to be transferred is much smaller due to the notation.

There is also a significant difference between JSON and XML in the area of validation. The official JSON homepage [1] basically offers no way to define a grammar as XML does with DTD or Schema. There is a proposal for a JSON grammar on GitHub [2], but there are no corresponding implementations to be able to use this technology in projects.

A further development of JSON is JSON5 [3], which was started in 2012 and has been officially published as a specification in version 1.0.0 [4] since 2018. The purpose of this development was to significantly improve the readability of JSON for people. Important functions such as the ability to write comments were added here. JSON5 is fully compatible with JSON as an extension. To get a brief impression of JSON5, here is a small example:

{
  // comments
  unquoted: 'and you can quote me on that', 
  singleQuotes: 'I can use "double quotes" here',
  lineBreaks: "Look, Mom! \
No \\n's!",
  hexadecimal: 0xdecaf,
  leadingDecimalPoint: .8675309, andTrailing: 8675309.,
  positiveSign: +1,
  trailingComma: 'in objects', andIn: ['arrays',],
  "backwardsCompatible": "with JSON",
}
JSON5

YAML

Many modern applications such as the open source metrics tool Prometheus use YAML for configuration. The very compact notation is very reminiscent of the Python programming language. YAML version 1.2 is currently published.

The advantage of YAML over other specifications is its extreme compactness. At the same time, version 1.2 has a grammar for validation. Despite its compactness, the focus of YAML 1.2 is on good readability for machines and people alike. I leave it up to each individual to decide whether YAML has achieved this goal. On the official homepage you will find all the resources you need to use it in your own project. This also includes an overview of the existing implementations. The design of the YAML homepage also gives a good foretaste of the clarity of YAML files. Attached is a very compact example of a Prometheus configuration in YAML:

global:
  scrape_interval:     15s
  evaluation_interval: 15s 

rule_files:
  # - "first.rules"
  # - "second.rules"

#IP: 127.0.0.1
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['127.0.0.1:8080']

  # SPRING BOOT WEB APP
  - job_name: spring-boot-sample 
    scrape_interval: 60s
    scrape_timeout: 50s
    scheme: "http"
    metrics_path: '/actuator/prometheus' 
    static_configs:
     - targets: ['127.0.0.1:8888']
    tls_config:
     insecure_skip_verify: true
YAML
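
On the Java side, SnakeYAML is one of the implementations listed on the YAML homepage. A minimal sketch that loads the configuration above into a map; the file name prometheus.yml is an assumption:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class YamlExample {

    public static void main(String[] args) throws Exception {
        Yaml yaml = new Yaml();
        // parse the YAML document into nested maps and lists
        try (InputStream stream = new FileInputStream("prometheus.yml")) {
            Map<String, Object> config = yaml.load(stream);
            Map<String, Object> global = (Map<String, Object>) config.get("global");
            System.out.println("scrape_interval: " + global.get("scrape_interval"));
        }
    }
}
Java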

Résumé

All of the techniques presented here have been tried and tested in practical use in many projects. Of course, there may be some preferences for special applications such as REST services. For my personal taste, I prefer the XML format for configuration files. This is easy to process in the program, extremely flexible and, with clever modeling, also compact and extremely readable for people.



Ruby: setting up the development environment

Ruby has been a well-established programming language for many years and can also be recommended to beginners. Ruby follows the object-oriented paradigm and contains many concepts to support OOP well. In addition, the Ruby on Rails framework makes it very easy to develop complex web applications.

The most difficult hurdle to overcome when getting started with Ruby is the installation of the entire development environment. For this reason, I have written this short tutorial on getting started with Ruby. So let’s start with the installation right away.

My operating system is a Debian 12 Linux and Ruby can be installed very easily with the sudo apt-get install ruby-full command. This procedure can be applied to all Debian-based Linux distributions such as Ubuntu. You can then use ruby -v to check the success in the bash.

ed@:~$ ruby -v 
ruby 3.1.2p20 (2022-04-12 revision 4491bb740a) [x86_64-linux-gnu]
Bash

If we now follow the tutorial on the Ruby on Rails homepage and want to install the Rails framework via gem install rails, we already encounter the first problem: due to missing permissions, no Ruby libraries can be installed. Now we could come up with the idea of installing the libraries as superuser with sudo. Unfortunately, this solution is only temporary and prevents the libraries from being found correctly later in the development environment. It is better to create a folder for the gems in the user's home directory and make it available via an environment variable.

export GEM_HOME=/home/<user>/.ruby-gems
export PATH=$PATH:/home/<user>/.ruby-gems/bin

The above lines must be entered at the end of the .bashrc file so that the changes remain persistent. It is important that <user> is replaced with the correct user name. The success of this action can be checked via gem environment and should produce an output similar to the one below.

ed@:~$ gem environment
RubyGems Environment: 
  - RUBYGEMS VERSION: 3.3.15
  - RUBY VERSION: 3.1.2 (2022-04-12 patchlevel 20) [x86_64-linux-gnu]
  - INSTALLATION DIRECTORY: /home/ed/.ruby-gems
  - USER INSTALLATION DIRECTORY: /home/ed/.local/share/gem/ruby/3.1.0
  - RUBY EXECUTABLE: /usr/bin/ruby3.1
  - GIT EXECUTABLE: /usr/bin/git
  - EXECUTABLE DIRECTORY: /home/ed/Programs/gem-repository/bin
  - SPEC CACHE DIRECTORY: /home/ed/.local/share/gem/specs
  - SYSTEM CONFIGURATION DIRECTORY: /etc
  - RUBYGEMS PLATFORMS:
     - ruby
     - x86_64-linux
  - GEM PATHS:
     - /home/ed/Programs/gem-repository
     - /home/ed/.local/share/gem/ruby/3.1.0
     - /var/lib/gems/3.1.0
     - /usr/local/lib/ruby/gems/3.1.0
     - /usr/lib/ruby/gems/3.1.0
     - /usr/lib/x86_64-linux-gnu/ruby/gems/3.1.0
     - /usr/share/rubygems-integration/3.1.0
     - /usr/share/rubygems-integration/all
     - /usr/lib/x86_64-linux-gnu/rubygems-integration/3.1.0
  - GEM CONFIGURATION:
     - :update_sources => true
     - :verbose => true
     - :backtrace => false
     - :bulk_threshold => 1000
  - REMOTE SOURCES:
     - https://rubygems.org/
  - SHELL PATH:
     - /home/ed/.local/bin
     - /usr/local/bin
     - /usr/bin
     - /bin
     - /usr/local/games
     - /usr/games
     - /snap/bin
     - /home/ed/Programs/maven/bin
     - /usr/share/openjfx/lib
     - /home/ed/.local/bin
Bash

With this setting, Ruby GEMs can now be installed without difficulty. Let’s try this out right away and install the Ruby on Rails framework, which supports us in the development of web applications: gem install rails. This should now run without error messages and with the command rails -v we can see if we were successful.

In the next step we can now create a new Rails project. Here I use the example from the Ruby on Rails documentation and write in the bash: rails new blog. This creates a corresponding directory called blog with the project files. After we have changed to the directory, we still need to install all dependencies. This is done via: bundle install.

Here we encounter another problem: the installation cannot be completed because there seems to be a problem with the psych library. The real cause, however, is that the YAML development library is missing at the operating system level. This can be fixed very quickly by installing the corresponding package.

sudo apt-get install libyaml-dev

The problem with psych in Ruby on Rails has existed for a while and is solved by installing libyaml-dev, so that the bundle install command now also runs successfully. Now we are able to start the server for the Rails application: bin/rails server.

ed@:~/blog$ bin/rails server
=> Booting Puma 
=> Rails 7.1.3.3 application starting in development 
=> Run `bin/rails server --help` for more startup options
Puma starting in single mode...
* Puma version: 6.4.2 (ruby 3.1.2-p20) ("The Eagle of Durango")
*  Min threads: 5
*  Max threads: 5
*  Environment: development
*          PID: 12316
* Listening on http://127.0.0.1:3000
* Listening on http://[::1]:3000
Use Ctrl-C to stop
Bash

If we now call up the URL http://127.0.0.1:3000 in the web browser, we see our Rails web application.

With these steps, we have now created a functioning Ruby environment on our system. Now it’s time to decide on a suitable development environment. If you only occasionally adapt a few scripts, VIM and Sublime Text are sufficient as editors. For complex software projects, a full IDE should be used, as this simplifies the work considerably. The best recommendation is the paid IDE RubyMine from JetBrains. If you support Ruby open source projects as a developer, you can apply for a free license.

A freely available Ruby IDE is VSCode from Microsoft. However, a few plugins have to be integrated first, and for my taste VSCode is not very intuitive. The Ruby integrations for the classic Java IDEs Eclipse and NetBeans are quite outdated and can only be made to work with a great deal of effort.

With this we have already discussed all the important points that are necessary to set up a functioning Ruby environment on your own system. I hope that this little workshop has significantly lowered the entry barrier to learning Ruby. If you like this article, please like it and recommend it to your friends.

Modern Times

There seems to be a dire necessity to automate everything, even automation itself. This is the common understanding and therefore the motivation of most DevOps teams. Let's have a look at typical Continuous Stupidities during the transformation from a pure Configuration Management role to a DevOps Engineer.

In my role as Configuration and Release Manager, I saw in close to every project I joined gaps in the build structure or in the software architecture that I had to fix by optimizing the build jobs. But often you can't fix symptoms like long-running build scripts with just a few clicks. In this post I will give a brief introduction to common problems in software projects that you need to overcome before you really think about implementing a DevOps culture.

  1. Build logic can't fix a broken architecture. A huge amount of SCM merge conflicts occurs because of missing encapsulation of business logic. When a function is spread across many modules or services, the likelihood is high that a file will be touched by more than one developer.
  2. The necessity of orchestrated builds is a hint of architectural problems. Transitive dependencies, missing encapsulation and a heavy dependency chain are typical reasons for running into the chicken-and-egg problem. Design your artifacts to be as independent as possible.
  3. Build logic has to be developed by developers, not by administrators. People focused on operations have different concepts for maintaining artifact builds than software developers. A good anti-pattern example of a build structure is webMethods from Software AG: it does not provide a repository server like Sonatype Nexus to share dependencies, and the build always points to the dependencies inside a webMethods installation. This practice violates the basic idea of build automation, as mentioned in the book 'Practices of an Agile Developer' from 2006.
  4. Not everything at once. Split up the build jobs into specific goals, like creating the artifact, running acceptance tests, creating API documentation and generating reports. If one of the later steps fails, you don't need to repeat everything. The execution time of the build is dramatically reduced and it is easier to maintain the build infrastructure.
  5. Don't give too much flexibility to your build infrastructure. This point is strongly related to the first topic. A build manager with little discipline will create extremely complex scripts that nobody is able to understand. The JavaScript task runner Grunt is an example of how build logic can get messy and unreadable. This is one of the reasons why my favorite build tool for Java projects is still Maven: it enforces understandable builds.
  6. There is no requirement to automate the automation. By definition, complex automation levels have higher costs than simple tasks. Always think beforehand about the benefits of your automation activities to see if it makes sense to spend time and money on them.
  7. We do what we can, but can we do what we do? Or in the words of Grady Booch: 'A fool with a tool is still a fool.' Understand the requirements of your project and decide on that basis which tool to choose. If you don't have the resources, even the most professional solution cannot support you. Only once you have understood your problem are you able to learn new, more advanced processes.
  8. Build logic has to run first on the local development environment. If your build does not run on your local development machine, don't call it build logic; it is just a hack. Build logic has to be platform and IDE independent.
  9. Don't mix up source repositories. Organizing the sources of several projects into folders inside one huge directory just creates a complex build without any flexibility. Sources should be structured by technology or into separate, independent modules.

Many of the points I mentioned can be recognized by comparing them with the current situation in almost every project. The solution for fixing things in a healthy manner is in most cases not that complicated; it just needs a bit of attention and good planning. The most important advice I can give is to follow the KISS principle: Keep it simple, stupid. This means following the standard process as much as possible without modifications. You don't need to reinvent the wheel; there are reasons why a standard became a standard. Here is a short plan you can follow.

  • First: understand the problem.
  • Second: investigate a standard solution for the process.
  • Third: develop a plan to apply the solution in the existing process landscape. This implies kicking out tools which do not support standard processes.

If you follow your own plan step by step, without jumping further ahead than the next point, you will see positive results quite fast.

By the way: if you would like guidance on reaching a successful DevOps process, don't hesitate to contact me. I offer hands-on consulting and also training to build up a powerful DevOps team.

JPoint Moscow 2023

Test Driven: from zero to hero

In the software industry there is a common agreement that the code base should have sufficient test automation, because this is necessary for a stable DevOps process and safe refactoring. But the reality is completely different: almost every project I joined during my career didn't have a single line of test code. If we consider that after more than 40 years, 80% of all commercial software projects still fail, we should not be surprised. But it doesn't have to be like this. In this talk, we demonstrate how easy it is to introduce a test-driven approach, even in huge projects. The technical setup is a standard Java project with Apache Maven and JUnit 5.

Date vs. Boolean

When we design data models and their corresponding tables, Boolean sometimes appears as a datatype. In general, those flags are not really problematic, but maybe there could be a better solution in the data design. Let me give you a short example of what I mean.

Assume we have to design a simple domain to store articles, like a blog system or any other content management. Besides the content of the article and the name of the author, we need a flag which tells the system whether the article is visible to the public: something like published as a Boolean. But there is also a requirement to schedule a date for publishing an article. In most database designs I observed for those circumstances a Boolean published and a Date publishingDate. In my opinion this design is a bit redundant and also error prone. As a quick conclusion, I would like to advise you to use just Date instead of Boolean from the beginning. The scenario described above can also be transferred to many other domains.

Now that we have an idea why we should replace the Boolean with the Date datatype, we will focus on the details of how we can reach this goal.

Dealing with standard SQL suggests that replacing one Database Management System (DBMS) with another should not be a big issue. The reality is unfortunately a bit different. Not all available date data types, like Timestamp, are really recommendable to use. From experience I prefer the simple java.util.Date to avoid future problems and other surprises. The format stored in the database table looks like 'YYYY-MM-dd HH:mm:ss.0'; between the date and the time there is a single space, and .0 is the fractional part of the seconds. The time zone offset is handled separately: the standard Central European Time zone CET has an offset of one hour, which means UTC+01:00 in the international format. To define the offset separately I got good results using java.util.TimeZone, which works perfectly together with Date.

Before we continue, let's look at how such table columns could be created in Java with the O/R mapper Hibernate.

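The following is a minimal sketch of how such an entity could be declared; the class name ArticleDO comes from the description below, while the table name, column names and default values are illustrative assumptions:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import org.hibernate.annotations.CreationTimestamp;

@Entity
@Table(name = "ARTICLE")
public class ArticleDO {

    @Id
    @Column(name = "ID")
    private String id;

    // set once to the current time when the object is created
    @CreationTimestamp
    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "CREATED")
    private Date created;

    // the offset is stored separately as a String
    @Column(name = "TIMEZONE")
    private String timezone;

    // a Date instead of a Boolean flag: the value tells when the article goes public
    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "PUBLISHED")
    private Date published;

    public ArticleDO() {
        // TimeZone.getDefault().getID() would grab the system time zone,
        // but this value should not be trusted too much; default to UTC
        this.timezone = "UTC+00:00";
        try {
            // zero value instead of null: the article is not scheduled yet
            this.published = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.S")
                    .parse("0000-00-00 00:00:00.0");
        } catch (ParseException ex) {
            this.published = new Date(0L);
        }
    }

    public void setPublished(final Date published) {
        if (published == null) {
            this.published = new Date();   // publish immediately
        } else {
            this.published = published;    // scheduled publishing
        }
    }

    // A simple SQL import of a single record could look like:
    // INSERT INTO ARTICLE (ID, CREATED, TIMEZONE, PUBLISHED)
    //        VALUES ('1', '2023-01-01 10:00:00.0', 'UTC+00:00', '2023-02-01 08:00:00.0');
}
Java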

Let's take a closer look at the listing above. First we see the @CreationTimestamp annotation. It means that when the ArticleDO object gets created, the variable created is initialized with the current time. This value should never change, because an article is created only once but can be changed several times. The time zone is stored in a String. In the constructor you can see how the system time zone could be grabbed, but be careful: this value should not be trusted too much. If you have a user like me who travels a lot, you will see the same system time in all the places I stay, because I usually never change it. As the default time zone I define the correct String for UTC-0. I do the same for the variable published: a Date can also be created from a String, which we use to set our default zero value. The setter for published has the option to define a future date or to use the current time in case the article is published immediately. At the end of the listing a simple SQL import for a single record is shown.

But do not rush too fast. We also need to pay a bit of attention to how to deal with the UTC offset, because in huge systems I have observed several times problems which occurred because developers only used default values.

The time zone in general is part of the internationalization concept. For managing the offset adjustments correctly we can choose between different strategies. As in so many other cases, there is no clear right or wrong; everything depends on the circumstances and necessities of your application. If a website is only used nationally, for example for a small business, and no time-critical events are involved, everything becomes very easy. In this case it is unproblematic to let the DBMS manage the time zone settings automatically. But keep in mind that there are countries like Mexico with more than just one time zone. For an international system where clients are spread around the globe, it could be useful to set each single DBMS in the cluster to UTC-0 and manage the offset in the application and the connected clients.

Another issue we need to overcome is the question of how the date value of a single record should be initialized by default, because null values should be avoided. A full explanation of why returning null is not good programming style is given in books like 'Effective Java' and 'Clean Code'. Dealing with NullPointerExceptions is something I don't really need. A best practice which works well for me is a default date-time value of '0000-00-00 00:00:00.0'. Like this I avoid unwanted publishing and the meaning is very clear, for everybody.

As you can see, there are good reasons why Boolean datatypes should be replaced by Date. In this little article I demonstrated how easily you can deal with Date and time zones in Java and Hibernate. It should also not be a big thing to transfer this example to other programming languages and frameworks. If you have your own solution, feel free to leave a comment and share this article with your colleagues and friends.

Working with JSON in Java RESTful Services using Jackson

For a long time now, the JavaScript Object Notation (JSON) [1] has been a lightweight standard that replaces XML for information exchange between heterogeneous systems. Both technologies, XML and JSON, close the gap of returning simple and complex data from a remote method invocation (RMI) when different programming languages are involved. Each of those technologies has its own benefits and disadvantages. A well-designed XML document is human readable but, compared to JSON, needs more payload when it is sent through the network. For almost every programming language there are plenty of implementations to deal with XML and also JSON, so we don't need to reinvent the wheel and implement our own solution for handling JSON objects. But choosing the right library is not as easy as it might seem.

The most popular JSON library in Java projects is the one already mentioned in the title: Jackson [2], because of its huge functionality. Another important point for choosing Jackson over other libraries is that it is also used by the Jersey REST framework [3]. Before we start our journey with the Java frameworks Jersey and Jackson, I would like to share some thoughts about things I often observe in huge projects during my professional life. For this reason I always proclaim: don't mix up different implementation libraries for the same technology. It is a huge quality and security concern.

The general purpose of using JSON in RESTful applications is to transmit data between a server and a client via HTTP. To achieve that, we need to solve two challenges. First, on the server side, we need to create a valid JSON representation from a Java object which we can send to the client; this process is called serialization. On the client side, we do the second step, which is exactly the opposite of what we did on the server: de-serialization, the creation of a valid object from a JSON String.

In this article we will use Java as the programming language on both the server side and the client side to deal with JSON objects. But keep in mind that REST allows you to have different programming languages on the server and on the client. Java is always a good choice to implement your business logic on the server, while the client side is often made with JavaScript; PHP, .NET and other programming languages are also possible.

In the next step we will have a look at the project architecture. All artifacts are organized in one Apache Maven multi-module project; it is a good recommendation to follow this structure in your own projects too. The three artifacts we create are: api, server and client.

  • API: contains shared objects which are needed on the server and also on the client side, like domain objects and interfaces.
  • Server: producer of a RESTful service, depends on API.
  • Client: consumer of the RESTful service, depends on API.

Inside these artifacts a layer architecture is applied. This means that access to objects from a layer is only allowed in the direction of the underlying layers; in short: from top to bottom. The layer structure is organized by packages, and not every artifact contains every layer, only the ones which are implemented.

The first piece of code I'd like to show are the JSON dependencies we will need, in the notation for Maven projects.

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>${version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>${version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>${version}</version>
</dependency>
XML

Listing 1

With respect to the size of this article, I only focus on how JSON objects are used in RESTful applications; it is not a full workshop about RESTful (micro) services. As code base we reuse my open source GitHub project TP-ACL [4], an access control list. For our example I decided to slice the role functionality out of the whole code base.

First of all we need a Java object which we can serialize to a JSON String. This domain object will be the class RolesDO, located in the domain layer inside the API module. The roles object contains a name, a description and a flag that indicates whether a role is allowed to be deleted.

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "ROLES")
public class RolesDO implements Serializable {

    private static final long serialVersionUID = 50L;

    @Id
    @Column(name = "NAME")
    private String name;

    @Column(name = "DESCRIPTION")
    private String description;

    @Column(name = "DELETEABLE")
    private boolean deleteable;

    public RolesDO() {
        this.deleteable = true;
    }

    public RolesDO(final String name) {
        this.name = name;
        this.deleteable = true;
    }

    //Getter & Setter
}
Java

Listing 2

So far so good. As the next step we need to serialize the RolesDO as a JSON String in the server module. This is done in the RolesDAO, which is stored in the implementation layer within the server module. The opposite direction, the de-serialization, is also implemented in the same class. But slowly, not everything at once; let's first have a look at the code.

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

public class RolesDAO {

    public transient EntityManager mainEntityManagerFactory;

    public String serializeAsJson(final RolesDO role)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.writeValueAsString(role);
    }

    public RolesDO deserializeJsonAsObject(final String json, final Class<RolesDO> object)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, object);
    }

    public List<RolesDO> deserializeJsonAsList(final String json)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, new TypeReference<List<RolesDO>>() { });
    }

    public List<RolesDO> listProtectedRoles() {

        CriteriaBuilder builder = mainEntityManagerFactory.getCriteriaBuilder();
        CriteriaQuery<RolesDO> query = builder.createQuery(RolesDO.class);

        Root<RolesDO> root = query.from(RolesDO.class);
        query.where(builder.isNull(root.get("deactivated")));
        query.orderBy(builder.asc(root.get("name")));

        return mainEntityManagerFactory.createQuery(query).getResultList();
    }
}
Java

Listing 3

The implementation is not that difficult to understand, but at this point the first question may appear: why is the de-serialization in the server module and not in the client module? When the client sends a JSON String to the server module, we need to transform it into a real Java object. Simple as that.

Usually the Data Access Object (DAO) pattern contains all functionality for database operations. We will skip over these CRUD (create, read, update and delete) functions. If you would like to learn more about how the DAO pattern works, you can also check my project TP-CORE [4] at GitHub. So we go ahead to the REST service implemented in the class RoleService; here we just grab the function fetchRole().

@Service
public class RoleService {

    @Autowired
    private RolesDAO rolesDAO;

    @GET
    @Path("/{role}")
    @Produces({MediaType.APPLICATION_JSON})
    public Response fetchRole(final @PathParam("role") String roleName) {
        Response response = null;
        try {
            RolesDO role = rolesDAO.find(roleName);
            if (role != null) {
                String json = rolesDAO.serializeAsJson(role);
                response = Response.status(Response.Status.OK)
                        .type(MediaType.APPLICATION_JSON)
                        .entity(json)
                        .encoding("UTF-8")
                        .build();
            } else {
                response = Response.status(Response.Status.NOT_FOUND).build();
            }

        } catch (Exception ex) {
            LOGGER.log("ERROR CODE 500 " + ex.getMessage(), LogLevel.DEBUG);
            response = Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();
        }
        return response;
    }
}
Java

Listing 4

The big secret here is in the lines where we stick things together. First the RolesDO is fetched, and in the next line the DAO calls the serializeAsJson() method with the RolesDO as parameter. The result will be a JSON representation of the RolesDO. If the role exists and no exceptions occur, the service is ready for consumption. In the case of any problem, the service sends an HTTP error code instead of the JSON.

Complex services which combine single services into a process belong in the orchestration layer. At this point we can switch to the client module to learn how the JSON String gets transformed back into a Java domain object. In the client we don't have the RolesDAO with its deserializeJsonAsObject() method, and of course we also don't want to create duplicate code, which forbids us to copy and paste the function into the client module.

As a counterpart to fetchRole() on the server side, we use getRole() for the client. The purpose of both implementations is identical; the different naming helps to avoid confusion.

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class Role {
    private final String API_PATH
            = "/acl/" + Constraints.REST_API_VERSION + "/role";
    private WebTarget target;

    public RolesDO getRole(String role) throws JsonProcessingException {
        Response response = target
                .path(API_PATH).path(role)
                .request()
                .accept(MediaType.APPLICATION_JSON)
                .get(Response.class);
        LOGGER.log("(get) HTTP STATUS CODE: " + response.getStatus(), LogLevel.INFO);

        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(response.readEntity(String.class), RolesDO.class);
    }
}
Java

Listing 5

As a conclusion, we have now seen that the serialization and de-serialization of JSON objects using the Jackson library is not that difficult. In most cases we just need three methods:

  • serialize a Java object to a JSON String
  • create a Java object from a JSON String
  • de-serialize a list of objects inside a JSON String to a Java object collection

These three methods were already introduced in Listing 3 for the DAO. To prevent duplicate code we should separate this functionality into its own Java class. This is known as the design pattern Wrapper [5], also known as Adapter. To reach the best flexibility I implemented the JacksonJsonTools from TP-CORE as a generic class.

package org.europa.together.application;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

public class JacksonJsonTools<T> {

    public String serializeAsJsonObject(final T object)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.writeValueAsString(object);
    }

    public T deserializeJsonAsObject(final String json, final Class<T> object)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, object);
    }

    public List<T> deserializeJsonAsList(final String json, final Class<T> object)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        // build the List<T> type information at runtime, because generics are erased
        return mapper.readValue(json,
                mapper.getTypeFactory().constructCollectionType(List.class, object));
    }
}
Java

Listing 6

This and many more useful implementations with a very stable API can be found in my project TP-CORE for free usage.
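
To see the wrapper in action, here is a short usage sketch based on Listing 6 and the RolesDO from Listing 2; the getName() accessor is assumed from the getter and setter block of Listing 2, and imports are omitted:

public class JsonRoundTripExample {

    public static void main(String[] args) throws Exception {
        JacksonJsonTools<RolesDO> converter = new JacksonJsonTools<>();

        RolesDO role = new RolesDO("Guest");

        // Java object -> JSON String
        String json = converter.serializeAsJsonObject(role);
        System.out.println(json);

        // JSON String -> Java object
        RolesDO copy = converter.deserializeJsonAsObject(json, RolesDO.class);
        System.out.println(copy.getName());
    }
}
Java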


Preventing SQL Injections in Java with JPA and Hibernate

When we have a look at OWASP's top 10 vulnerabilities [1], SQL injections are still in a prominent position. In this short article, we discuss several options for how SQL injections can be avoided.

When applications have to deal with databases, there are always high security concerns: if an invader gets the possibility to hijack the database layer of your application, he can choose between several options. Stealing the data of the stored users to flood them with spam is not the worst scenario that could happen. Even more problematic would be stored payment information getting abused. Another possibility of an SQL injection cyber attack is gaining illegal access to restricted pay content and/or services. As we can see, there are many reasons to care about (web) application security.

To find well-working preventions against SQL injections, we first need to understand how an SQL injection attack works and which points we need to pay attention to. In short: every user interaction that processes the input unfiltered in an SQL query is a possible target for an attack. The data input can be manipulated in such a manner that the submitted SQL query contains different logic than the original. Listing 1 will give you a good idea of what could be possible.

SELECT Username, Password, Role FROM User 
   WHERE Username = 'John Doe' AND Password = 'S3cr3t';
SELECT Username, Password, Role FROM Users
   WHERE Username = 'John Doe'; --' AND Password='S3cr3t';

Listing 1: Simple SQL Injection

The first statement in Listing 1 shows the original query. If the input for the variables Username and Password is not filtered, we have a lack of security. The second query injects for the variable Username a String containing the username John Doe extended by the characters '; --. This bypasses the AND branch and, in this case, gives access to the login. The '; sequence closes the WHERE statement, and with -- all following characters are commented out. Theoretically, it is possible to execute any valid SQL code between both character sequences.

Of course, my plan is not to spread ideas about which SQL commands could cause the worst consequences for the victim. With this simple example, I assume the message is clear: we need to protect each UI input variable in our application against user manipulation, even if it is not used directly for database queries. To detect those variables, it is always a good idea to validate all existing input forms. But modern applications mostly have more than just a few input forms, which is why I also mention keeping an eye on your REST endpoints; their parameters are often also connected to SQL queries.

For this reason, input validation in general should be part of the security concept. Annotations from the Bean Validation [2] specification are very powerful for this purpose. For example, @NotNull as an annotation on a data field in the domain object ensures that the object can only be persisted if the variable is not null. To use the Bean Validation annotations in your Java project, you just need to include a small library.

<dependency> 
    <groupId>org.hibernate.validator</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>${version}</version>
</dependency>

Listing 2: Maven Dependency for Bean Validation
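
Applied to a domain object, such annotations could look like the following sketch; the class and field names are illustrative assumptions:

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;

public class Credentials {

    // must not be null and has to respect the length limits
    @NotNull
    @Size(min = 3, max = 64)
    private String username;

    // only letters, digits and a small set of special characters are accepted
    @NotNull
    @Pattern(regexp = "[A-Za-z0-9!$%&#_-]{8,128}")
    private String password;

    // Getter & Setter
}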

Perhaps it will be necessary to validate more complex data structures. With Regular Expressions you have another powerful tool in your hands. But be careful: it is not that easy to write correctly working RegEx. Let's have a look at a short example.

public static final String RGB_COLOR = "#[0-9a-fA-F]{3,3}([0-9a-fA-F]{3,3})?";
 
public boolean validate(String content, String regEx) {
    boolean test;
    if (content.matches(regEx)) {
        test = true;
    } else {
        test = false;
    }
    return test;
}

validate("#000", RGB_COLOR);

Listing 3: Validation by Regular Expression in Java

The RegEx to detect a correct RGB color schema is quite simple. Valid inputs are #ffF or #000000; the range for the characters is 0-9 and the letters A to F, case insensitive. When you develop your own RegEx, you always need to check the existing boundaries very well. A good example is the 24 hour time format, where typical mistakes are accepting invalid entries like 23:60 or 24:00. The validate method compares the input string with the RegEx; if the pattern matches the input, the method returns true. If you want to get more ideas about validators in Java, you can also check my GitHub repository [3].
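
A possible pattern for the 24 hour time format that respects those boundaries could look like this sketch; it reuses the validate method from Listing 3:

public static final String TIME_24H = "([01][0-9]|2[0-3]):[0-5][0-9]";

validate("23:42", TIME_24H); // true
validate("23:60", TIME_24H); // false, minutes out of range
validate("24:00", TIME_24H); // false, hours out of range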

To sum up, our first idea for securing user input against abuse is to filter out all problematic character sequences, like -- and so on. This intention of creating a blocking list is not that bad, but it still has some limitations. First, the complexity of the application increases, because blocking single characters like --, ; and ' can sometimes cause unwanted side effects. Also, an application-wide default limitation of the allowed characters can sometimes cause problems; imagine a text area for a blog system or something similar.

This means we need another powerful concept to filter the input in a manner that cannot manipulate our SQL query. To reach this goal, the SQL standard has a very good solution we can use: SQL parameters. Parameters are variables inside an SQL query that are interpreted as content and not as part of the statement. This allows even large texts to be stored without having to block dangerous characters. Let's have a look at how this works in a PostgreSQL [4] database.

PREPARE login_query (text) AS
    SELECT * FROM login WHERE name = $1;
EXECUTE login_query('John Doe');

Listing 4: Defining Parameters in PostgreSQL

In case you are using the O/R mapper Hibernate, there is a more elegant way using the Java Persistence API (JPA).

String myUserInput;

@PersistenceContext
public EntityManager mainEntityManagerFactory;

CriteriaBuilder builder =
    mainEntityManagerFactory.getCriteriaBuilder();

CriteriaQuery<DomainObject> query =
    builder.createQuery(DomainObject.class);

// create Criteria
Root<DomainObject> root =
    query.from(DomainObject.class);

// Criteria SQL Parameters
ParameterExpression<String> paramKey =
    builder.parameter(String.class);

query.where(builder.equal(root.get("name"), paramKey));

// wire queries together with parameters
TypedQuery<DomainObject> result =
    mainEntityManagerFactory.createQuery(query);

result.setParameter(paramKey, myUserInput);
DomainObject entry = result.getSingleResult();

Listing 5: Hibernate JPA SQL Parameter Usage

Listing 5 shows a full example of Hibernate using JPA with the Criteria API. The variable for the user input is declared in the first line, and the comments in the listing explain how it works. As you can see, this is no rocket science. The solution has some other nice benefits besides improving web application security: first of all, no plain SQL is used, which ensures that every database management system supported by Hibernate can be secured by this code.

The usage may look a bit more complex than a simple query, but the benefit for your application is enormous. On the other hand, of course, there are some extra lines of code, but they are not that difficult to understand.
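
If parts of an application talk to the database without an O/R mapper, the same protection is available in plain JDBC through the PreparedStatement class. A minimal sketch, with table and column names borrowed from Listing 1 and the connection handling simplified:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginRepository {

    private final Connection connection;

    public LoginRepository(final Connection connection) {
        this.connection = connection;
    }

    public boolean isValidLogin(final String username, final String password)
            throws SQLException {
        // the ? placeholders are bound as pure values and can not change the query logic
        String sql = "SELECT Role FROM Users WHERE Username = ? AND Password = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, username);
            statement.setString(2, password);
            try (ResultSet result = statement.executeQuery()) {
                return result.next();
            }
        }
    }
}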



A brief overview of Java frameworks

When you look up the word framework in Merriam-Webster, you find the following explanations:

  • a basic conceptional structure
  • a skeletal, openwork, or structural frame

You might think that libraries and frameworks are the same thing, but this is not correct. Your source code calls the functionality of a library directly. When you use a framework, it is exactly the opposite: the framework calls specific functions of your business logic. This concept is also known as Inversion of Control (IoC).

For web applications we can distinguish between client-side and server-side frameworks. The difference is that the client usually runs in a web browser, which means the available programming languages are more or less limited to JavaScript. On the server side, depending on the web server, we are able to choose between different programming languages; the most popular languages for the internet are PHP and Java. All web languages have one thing in common: they produce HTML as output, which can be displayed in a web browser.

In this article I give a short overview of the most common Java frameworks, many of which can also be used in desktop applications. If you wish to have a fast introduction to Java server applications, you can check out my article about Java EE and Jakarta.

If you plan to use one or some of the discussed frameworks in your Java application, you just need to include them as a Maven or Gradle dependency.

  • JUnit, TestNG: TDD, unit testing
  • Mockito: TDD, mocking objects
  • JGiven, Cucumber: BDD, acceptance testing
  • Hibernate, iBatis, EclipseLink: JPA, O/R mapper
  • Spring Framework, Google Guice: Dependency Injection
  • PrimeFaces, BootsFaces, ButterFaces: JSF user interfaces
  • ControlsFX, BootstrapFX: JavaFX user interfaces
  • Hazelcast, Apache Kafka: Event Stream Processing
  • SLF4J, Logback, Log4J: Logging
  • FF4j: Feature Flags

Before I continue, I wish to tell you that these frameworks are made to help you solve problems in your daily business as a developer. Every problem has multiple solutions. For this reason it is more important to learn the concepts behind the frameworks than just how to use a specific framework. During the two decades I have been programming, I have seen the rise and fall of plenty of frameworks. Examples of frameworks almost nobody remembers today are Google Web Toolkit and JBoss Seam.

The most used framework in Java for writing and executing unit tests is JUnit. An often used alternative to JUnit is TestNG. Both solutions work quite similarly: the basic idea is to execute a function with defined parameters and compare the output with an expected result. When the output fits the expectation, the test passes. JUnit and TestNG support the Test Driven Development (TDD) paradigm, as the small sketch below illustrates.
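
A minimal JUnit 5 test could look like the following sketch; the Calculator class and its add() method are hypothetical and only serve as code under test:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class CalculatorTest {

    @Test
    void additionOfTwoNumbers() {
        // Calculator is a hypothetical class under test
        Calculator calculator = new Calculator();

        // execute the function with defined parameters ...
        int result = calculator.add(2, 3);

        // ... and compare the output with the expected result
        assertEquals(5, result);
    }
}
Java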

If you need to emulate in your test case the behavior of an external system that is not available at the moment your tests are running, then Mockito is your best friend. Mockito works perfectly together with JUnit and TestNG.

Behavior Driven Development (BDD) is an evolution of unit testing where you are able to define the circumstances under which the customer will accept the integrated functionality. This category of BDD integration tests is called acceptance tests. Integration tests are not a replacement for unit tests; they are an extension of them. The frameworks JGiven and Cucumber are also very similar: both are, like Mockito, extensions for the unit test frameworks JUnit and TestNG.

For dealing with relational databases in Java we can choose between several persistence frameworks. Those frameworks allow you to define your database structure in Java objects without writing a single line of SQL; the mapping between Java objects and database tables happens in the background. Another great benefit of using an O/R mapper like Hibernate, iBatis or EclipseLink is the possibility to replace the underlying database server. But this achievement is not as easy to reach as it seems in the beginning.

Next comes a technique that was popularized by the Spring Framework: Dependency Injection (DI). It allows loose coupling between modules and an easier replacement of components without recompiling. Google's solution for DI is called Guice, and Java Enterprise brings its own standard named CDI.

Graphical User Interfaces (GUI) are another category of frameworks. Which framework is useful depends on the chosen technology, like JavaFX or JSF; most of the provided controls are equal. Common libraries for JSF GUI components are PrimeFaces, BootsFaces or ButterFaces. OmniFaces is a framework that provides standardized solutions for JSF problems, like caching and so on. Collections of JavaFX controls can be found in ControlsFX and BootstrapFX.

If you have to deal with Event Stream Processing (ESP), you should have a look at Hazelcast or Apache Kafka. ESP means that the system reacts to constantly generated data. An event is a reference to a single data point, which can be persisted in a database, and the stream represents the output of the events.

In December 2021 an often used technology came out of the shadows because of an attack vulnerability in Log4J. Log4J, together with the Simple Logging Facade for Java (SLF4J), is one of the most used dependencies in the software industry, so you can imagine how critical this information was and which important role logging plays in software development. Another logging framework is Logback, which I use.

Another very helpful dependency for professional software development is FF4J. It allows you to define feature toggles, also known as feature flags, to enable and disable functionality of a software program by configuration.

This list could be much longer; I just tried to focus on the most used frameworks that are relevant for Java programmers. Feel free to leave a comment and suggest something I may have forgotten. If you share this article on