Training / Course / Seminar – 2 days, remote
It is already well known that Docker is a very efficient way to provide working environments for developers. In this course you learn from the ground up how to build and run such an environment on your own development machine efficiently and with minimal effort.
The hands-on example guides you step by step towards a fully fledged programming environment for PHP web projects, a so-called LAMP stack. While you set up the Apache 2 HTTP server with PHP, a MySQL database and the administration tool phpMyAdmin as Docker containers, you learn all the important Docker fundamentals along the way, so that you will later be able to build differently shaped development environments as well.
Benefit from the many advantages Docker brings to your daily development work: encapsulated, parallel installations that are easy to update and that conserve valuable system resources. Use the time you save to focus on what really matters: programming.
Guaranteed to run: the course takes place even with only one participant.
If none of the offered dates work for you, or if you would like an individual date for your team, for example to address concrete problems in a closed group, additional dates can be arranged. Please use my contact form and state which course you are interested in, your proposed dates and the expected number of participants.
For a long time now, the JavaScript Object Notation (JSON) [1] has been established as a lightweight replacement for XML when exchanging information between heterogeneous systems. Both XML and JSON close the gap of returning simple and complex data from a remote method invocation (RMI) when different programming languages are involved. Each of these technologies has its own benefits and drawbacks. A well-designed XML document is human readable, but compared to JSON it needs more payload when sent over the network. For almost every programming language there are plenty of implementations for handling XML as well as JSON, so we do not need to reinvent the wheel and implement our own solution for handling JSON objects. But choosing the right library is not as easy as it might seem.
The most popular JSON library in Java projects is the one I already mentioned: Jackson [2], because of its rich functionality. Another important reason for choosing Jackson over other libraries is that it is also used by the Jersey REST framework [3]. Before we start our journey with the Java frameworks Jersey and Jackson, I would like to share a thought about something I often observe in large projects during my professional life. For quality and security reasons I always proclaim: do not mix different implementation libraries for the same technology.
The general purpose of JSON in RESTful applications is to transmit data between a server and a client via HTTP. To achieve that, we need to solve two challenges. First, on the server side, we have to create a valid JSON representation of a Java object that we can send to the client; this process is called serialization. On the client side we perform the second step, which is exactly the opposite of what we did on the server: de-serialization, creating a valid object from a JSON string.
In this article we use Java on both the server and the client side to deal with JSON objects. But keep in mind that REST allows different programming languages on the server and the client. Java is always a good choice for implementing the business logic on the server, while the client side is often written in JavaScript; PHP, .NET and other programming languages are possible as well.
In the next step we look at the project architecture. All artifacts are organized in one Apache Maven multi-module project, a structure that is also recommended for your own projects. The three artifacts we create are: api, server and client.
API: contains shared objects needed on both the server and the client side, such as domain objects and interfaces.
Server: producer of the RESTful service; depends on API.
Client: consumer of the RESTful service; depends on API.
Inside these artifacts a layered architecture is applied. This means access to objects of another layer is only allowed in the direction of the underlying layers, in short: from top to bottom. The layers are organized as packages. Not every artifact contains every layer, only the ones it actually implements. The following picture gives a better understanding of the overall architecture.
The first piece of code I would like to show is the JSON dependency we will need, in the notation for Maven projects.
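A typical Jackson dependency declaration for the Maven POM looks like the following snippet; the version number is only an example, check Maven Central for the current release:

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.13.0</version>
</dependency>
```

The jackson-databind artifact transitively pulls in jackson-core and jackson-annotations, so this single entry is sufficient for the serialization code shown below.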
With respect to the size of this article, I focus only on how JSON objects are used in RESTful applications; it is not a full workshop about RESTful (micro)services. As code base we reuse my open source GitHub project TP-ACL [4], an access control list. For our example I sliced the role functionality out of the whole code base.
First of all we need a Java object that we can serialize to a JSON string. This domain object is the class RolesDO, located in the domain layer of the API module. A roles object contains a name, a description and a flag that indicates whether a role is allowed to be deleted.
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "ROLES")
public class RolesDO implements Serializable {

    private static final long serialVersionUID = 50L;

    @Id
    @Column(name = "NAME")
    private String name;

    @Column(name = "DESCRIPTION")
    private String description;

    @Column(name = "DELETEABLE")
    private boolean deleteable;

    public RolesDO() {
        this.deleteable = true;
    }

    public RolesDO(final String name) {
        this.name = name;
        this.deleteable = true;
    }

    // Getter & Setter
}
Listing 2
So far so good. As a next step we serialize the RolesDO in the server module to a JSON string. This is done in the RolesHbmDAO, stored in the implementation layer of the server module. The opposite direction, the de-serialization, is implemented in the same class. But one thing at a time; let's first have a look at the code.
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

public class RolesHbmDAO {

    public transient EntityManager mainEntityManagerFactory;

    public String serializeAsJson(final RolesDO role)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.writeValueAsString(role);
    }

    public RolesDO deserializeJsonAsObject(final String json)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, RolesDO.class);
    }

    public List<RolesDO> deserializeJsonAsList(final String json)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, new TypeReference<List<RolesDO>>() { });
    }

    public List<RolesDO> listProtectedRoles() {
        CriteriaBuilder builder = mainEntityManagerFactory.getCriteriaBuilder();
        CriteriaQuery<RolesDO> query = builder.createQuery(RolesDO.class);
        Root<RolesDO> root = query.from(RolesDO.class);
        // protected roles are the ones that are not allowed to be deleted
        query.where(builder.equal(root.get("deleteable"), false));
        query.orderBy(builder.asc(root.get("name")));
        return mainEntityManagerFactory.createQuery(query).getResultList();
    }
}
Listing 3
The implementation is not difficult to understand, but at this point a first question may arise: why is the de-serialization located in the server module and not in the client module? Because when the client sends JSON to the server module, we need to transform it into a real Java object. Simple as that.
Usually the Data Access Object (DAO) pattern contains all functionality for database operations. These CRUD (create, read, update and delete) functions we skip here. If you want to learn more about how the DAO pattern works, you can also check my project TP-CORE [4] on GitHub. So we move on to the REST service implemented in the class RoleService, where we just grab the function fetchRole().
The big secret lies in the line where we stick things together. First the RolesDO is created, and in the next line the DAO calls the serializeAsJson() method with the RolesDO as parameter. The result is a JSON representation of the RolesDO. If the role exists and no exceptions occur, the service is ready for consumption. In the case of any problem the service sends an HTTP error code instead of the JSON.
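The fetchRole() implementation itself is not printed here, so the following is only a rough sketch of the shape described above. All names besides fetchRole() and serializeAsJson() are assumptions, and the real service carries JAX-RS annotations such as @GET, @Path and @Produces(MediaType.APPLICATION_JSON) and returns a Response with an HTTP status code; both are simplified away to keep the sketch self-contained.

```java
// Sketch of the server-side fetchRole() flow: load the domain object,
// serialize it with the DAO, signal an error otherwise.
class RoleServiceSketch {

    // stand-in for the RolesHbmDAO from Listing 3 (assumption)
    interface RolesDAO {
        String serializeAsJson(Object role) throws Exception;
        Object find(String roleName);
    }

    private final RolesDAO rolesDAO;

    RoleServiceSketch(RolesDAO rolesDAO) {
        this.rolesDAO = rolesDAO;
    }

    // returns the JSON body on success, or null to signal an error
    // (the real implementation returns a JAX-RS Response with a status code)
    String fetchRole(String roleName) {
        try {
            Object role = rolesDAO.find(roleName);   // 1. load the domain object
            if (role == null) {
                return null;                         // -> HTTP 404 in the real service
            }
            return rolesDAO.serializeAsJson(role);   // 2. serialize it to JSON
        } catch (Exception ex) {
            return null;                             // -> HTTP 500 in the real service
        }
    }
}
```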
Complex services that combine single services into a process live in the orchestration layer. At this point we can switch to the client module to learn how the JSON string is transformed back into a Java domain object. In the client we do not have the RolesHbmDAO, so we cannot use its deserializeJsonAsObject() method, and of course we do not want to create duplicate code either. This forbids us to copy and paste the function into the client module.
As the counterpart to fetchRole() on the server side, the client uses getRole(). The purpose of both implementations is identical; the different naming helps to avoid confusion.
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class Role {

    private final String API_PATH
            = "/acl/" + Constraints.REST_API_VERSION + "/role";
    private WebTarget target;

    public RolesDO getRole(String role) throws JsonProcessingException {
        Response response = target
                .path(API_PATH).path(role)
                .request()
                .accept(MediaType.APPLICATION_JSON)
                .get(Response.class);
        LOGGER.log("(get) HTTP STATUS CODE: " + response.getStatus(), LogLevel.INFO);
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(response.readEntity(String.class), RolesDO.class);
    }
}
Listing 5
In conclusion, we have now seen that the serialization and de-serialization of JSON objects with the Jackson library is not that difficult. In most cases we just need three methods:
serialize a Java object to a JSON String
create a Java object from a JSON String
de-serialize a list of objects inside a JSON String into a collection of Java objects
These three methods were already introduced in Listing 3 for the DAO. To prevent duplicate code, we should separate this functionality into its own Java class. This is known as the design pattern Wrapper [5], also known as Adapter. To achieve the best flexibility, I implemented the JacksonJsonTools from TP-CORE as a generic class.
package org.europa.together.application;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

public class JacksonJsonTools<T> {

    public String serializeAsJsonObject(final T object)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.writeValueAsString(object);
    }

    public T deserializeJsonAsObject(final String json, final Class<T> object)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, object);
    }

    public List<T> deserializeJsonAsList(final String json)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, new TypeReference<List<T>>() { });
    }
}
Listing 6
This and many more useful implementations with a very stable API you can find in my project TP-CORE, free to use.
Exception handling should be basic knowledge for Java developers. But using exceptions safely is not as easy as it seems at first. Several books even recommend avoiding exceptions to prevent performance problems. Meanwhile, our own code struggles with exceptions, and we have to find the exact spot the problem originates from, which is not always an easy task, because the position where the exception was caught often has to be improved to collect relevant information. In this talk I share my experience on how to deal with exceptions in general. I explain with examples how to handle exceptions and when it is better to avoid using them. After this presentation, the amount of noise in your error messages will not increase, because we will obtain the important information to fix the problem right where it occurs.
Program flow is not defined by if-else statements alone. Exceptions allow you to handle problems before they occur, if you do it right. Learn how to collect information about thrown exceptions and see which practices to avoid.
When you look up the word framework in Merriam-Webster, you find the following explanations:
a basic conceptional structure
a skeletal, openwork, or structural frame
You might think that libraries and frameworks are the same thing, but that is not correct. Your source code calls the functionality of a library directly. With a framework it is exactly the opposite: the framework calls specific functions of your business logic. This concept is also known as Inversion of Control (IoC).
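The difference can be sketched in a few lines of plain Java; both class names here are made up for illustration:

```java
import java.util.function.Supplier;

// Library style: your code calls the library function directly.
class StringLibrary {
    static String shout(String s) {
        return s.toUpperCase() + "!";
    }
}

// Framework style (Inversion of Control): you hand your business logic
// to the framework, and the framework decides when to call it.
class TinyFramework {
    static String run(Supplier<String> businessLogic) {
        // framework-controlled lifecycle around the user code
        return "[before] " + businessLogic.get() + " [after]";
    }
}
```

With the library you stay in control of the call; with the framework your code is only a callback inside a lifecycle the framework owns.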
For web applications we can distinguish between client-side and server-side frameworks. The difference is that the client usually runs in a web browser, which means the available programming languages are limited to JavaScript. On the server, depending on the web server, we are able to choose between different programming languages; the most popular languages for the web are PHP and Java. All web languages have one thing in common: they produce HTML as output, which can be displayed in a web browser.
In this article I give a short overview of the most common Java frameworks, which can also be used in desktop applications. If you want a fast introduction to Java server applications, you can check out my article about Java EE and Jakarta.
If you plan to use one or more of the discussed frameworks in your Java application, you just need to include them as a Maven or Gradle dependency.
Before I continue, I want to tell you that these frameworks are made to help you solve problems in your daily business as a developer. Every problem has multiple solutions. For this reason it is more important to learn the concepts behind the frameworks than just how to use one specific framework. In the two decades I have been programming I have seen the rise and fall of plenty of frameworks. Examples of frameworks almost nobody remembers today are the Google Web Toolkit and JBoss Seam.
The most used framework in Java for writing and executing unit tests is JUnit. An often used alternative to JUnit is TestNG. Both solutions work quite similarly. The basic idea is to execute a function with defined parameters and compare the output with an expected result. When the output matches the expectation, the test passes. JUnit and TestNG support the Test Driven Development (TDD) paradigm.
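Stripped of the framework, the mechanics described above boil down to a few lines of plain Java; JUnit and TestNG add test discovery, reporting and a rich assertion API on top of exactly this idea (names are illustrative):

```java
// the unit under test
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

// the test: run the unit with fixed inputs and compare the actual
// result against an expected value
class CalculatorTest {
    // returns true when the test passes
    static boolean testAdd() {
        int actual = Calculator.add(2, 3);
        int expected = 5;
        return actual == expected;
    }
}
```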
If you need to emulate in your test case the behavior of an external system that is not available at the moment your tests are running, then Mockito is your best friend. Mockito works perfectly together with JUnit and TestNG.
Behavior Driven Development (BDD) is an evolution of unit testing in which you define the circumstances under which the customer will accept the integrated functionality. This category of BDD integration tests is called acceptance tests. Integration tests are not a replacement for unit tests; they are an extension of them. The frameworks JGiven and Cucumber are very similar; both are, like Mockito, extensions for the unit test frameworks JUnit and TestNG.
For dealing with relational databases in Java we can choose between several persistence frameworks. These frameworks allow you to define your database structure in Java objects without writing a single line of SQL; the mapping between Java objects and database tables happens in the background. Another great benefit of using an O/R mapper like Hibernate, iBatis or EclipseLink is the possibility to replace the underlying database server. But this achievement is not as easy to reach as it seems at the beginning.
In the next section I introduce a technique that was first popularized by the Spring Framework: Dependency Injection (DI). It allows loose coupling between modules and an easier replacement of components without recompiling. Google's solution for DI is called Guice, and Java Enterprise brings its own standard named CDI.
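The pattern behind all three containers is plain constructor injection; Spring, Guice and CDI only automate the wiring. A minimal sketch with made-up names:

```java
// The dependency is expressed as an interface...
interface MessageSender {
    String send(String text);
}

// ...and handed in through the constructor instead of being constructed
// inside the class. A DI container like Spring, Guice or CDI performs
// this wiring automatically based on configuration or annotations.
class NotificationService {
    private final MessageSender sender;

    NotificationService(MessageSender sender) {
        this.sender = sender;
    }

    String notifyUser(String user) {
        return sender.send("Hello " + user);
    }
}
```

Because NotificationService never constructs its own MessageSender, the implementation can be swapped (a real SMTP sender in production, a fake in tests) without touching or recompiling the class.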
Graphical user interfaces (GUI) are another category of frameworks. Which framework is useful depends on the chosen technology, such as JavaFX or JSF; most of the provided controls are similar. Common libraries for JSF GUI components are PrimeFaces, BootsFaces and ButterFaces. OmniFaces is a framework that provides standardized solutions for common JSF problems, such as caching. Collections of JavaFX controls can be found in ControlsFX and BootstrapFX.
If you have to deal with Event Stream Processing (ESP), you should have a look at Hazelcast or Apache Kafka. ESP means that the system reacts to constantly generated data. An event is a reference to a single data point, which can be persisted in a database, and the stream represents the output of the events.
Last December an often used technology came out of the shadows because of an attack vulnerability in Log4j. Log4j, together with the Simple Logging Facade for Java (SLF4J), is one of the most used dependencies in the software industry, so you can imagine how critical this information was, and which important role logging plays in software development. Another logging framework is Logback, which is the one I use.
Another very helpful dependency for professional software development is FF4J. It allows you to define feature toggles, also known as feature flags, to enable and disable functionality of a software program by configuration.
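The concept can be reduced to a configuration lookup. This sketch shows the idea in plain Java and is not the FF4J API:

```java
import java.util.Map;

// A feature toggle is a named switch read from configuration; code paths
// check the switch instead of being compiled in or out.
class FeatureToggle {
    private final Map<String, Boolean> flags;

    FeatureToggle(Map<String, Boolean> flags) {
        this.flags = flags;
    }

    // unknown features default to "disabled"
    boolean isEnabled(String feature) {
        return flags.getOrDefault(feature, false);
    }
}
```

In a real setup the flag map would be loaded from a file or a database, so functionality can be switched without redeploying the application, which is exactly the service FF4J provides.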
Name – Category
JUnit, TestNG – TDD, unit testing
Mockito – TDD, mocking objects
JGiven, Cucumber – BDD, acceptance testing
Hibernate, iBatis, EclipseLink – JPA, O/R mapper
Spring Framework, Google Guice – Dependency Injection
PrimeFaces, BootsFaces, ButterFaces – JSF user interfaces
ControlsFX, BootstrapFX – JavaFX user interfaces
Hazelcast, Apache Kafka – Event Stream Processing
SLF4J, Logback, Log4j – Logging
FF4J – Feature Flags
This list could be much longer; I just tried to focus on the most used ones that are relevant for Java programmers. Feel free to leave a comment to suggest something I may have forgotten. If you share this article on social media with your friends or colleagues, I will appreciate it.
In the previous part of the article series Treasure Chest, I described how the database connection for the TP-CORE library is established. I also gave an insight into the internal structure of the ConfigurationDO. Now, in the second part, I explain the ConfigurationDAO and its corresponding service. With all this knowledge you are able to include the application configuration feature of TP-CORE in your own project to build your own configuration registry.
Let's briefly recap the architectural design of the TP-CORE library and where the fragments of the features are located. TP-CORE is organized as a layered architecture, as shown in the graphic below.
As you can see, there are three relevant packages (layers) we have to pay attention to. First, the business layer resides, like all other layers, in an equally named package. The whole API of TP-CORE is defined by interfaces stored in the business layer. The implementations of the defined interfaces are placed in the application layer. Domain objects are simple data classes placed in the domain layer. Another important pattern heavily used in the TP-CORE library is the Data Access Object (DAO).
Nowadays microservices and RESTful applications are state of the art, but the services defined in TP-CORE deliberately are not REST. This design decision is based on the idea that TP-CORE is a dependency and not a standalone service. Maybe in the future, after I get more feedback on how and where this library is used, I will rethink the current concept. For now we treat TP-CORE as what it is, a library. That implies that in your project you can replace, overwrite, extend or wrap the basic implementation of the ConfigurationDAO to fit your specific necessities.
To keep the portability of changing the DBMS, Hibernate (HBM) is used as the JPA implementation and O/R mapper. The Spring configuration for Hibernate uses the EntityManager instead of the Session to send requests to the DBMS. Since version 5, Hibernate uses the JPA 2 standard to formulate queries.
As I already mentioned, the application configuration feature of TP-CORE is implemented as a DAO. The domain object and the database connection were the topic of the first part of this article. Now I discuss how to give access to the domain object through the ConfigurationDAO and its implementation ConfigurationHbmDAO. In general, the return value of the DAO is the domain object ConfigurationDO or a list of domain objects. Actions like create could be void and just throw an exception in the case of a failure, but for better style the return type is defined as Boolean, which also simplifies writing unit tests.
Sometimes it can be necessary to overwrite a basic implementation. A common scenario is a protected delete: for example, a requirement exists that a special entry is protected against unwanted deletion. The easiest solution is to overwrite delete() with a statement that refuses every request to delete a domain object with a specific UUID. Only adding a new method like protectedDelete() is not a good idea, because a developer could accidentally use the default delete method and the protected objects would not be protected anymore. To avoid this problem you should prefer overwriting the GenericDAO methods.
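A minimal sketch of this overwriting strategy in plain Java; the class names and the stored type are simplified stand-ins, not the TP-CORE signatures:

```java
import java.util.HashSet;
import java.util.Set;

// simplified stand-in for a generic DAO
class GenericDAO {
    protected final Set<String> storage = new HashSet<>();

    public boolean delete(String uuid) {
        return storage.remove(uuid);
    }
}

// overwrite delete() so the protected entry survives even when a
// developer calls the default delete method by accident
class ProtectedDAO extends GenericDAO {
    private static final String PROTECTED_UUID = "admin-role";

    @Override
    public boolean delete(String uuid) {
        if (PROTECTED_UUID.equals(uuid)) {
            return false; // refuse deletion of the protected entry
        }
        return super.delete(uuid);
    }
}
```

Because every caller goes through the overridden delete(), the protection holds regardless of which code path triggers the deletion, which a separate protectedDelete() method could not guarantee.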
As the default query to fetch an object, the identifier defined as primary key (PK) is used. A simple expression fetching an object is written in the find method of the GenericHbmDAO. In the specialization ConfigurationHbmDAO, more complex queries are formulated. To keep a good design it is important to avoid any native SQL. Listing 1 shows the fetch operations.
//GenericHbmDAO
public T find(final PK id) {
    return mainEntityManagerFactory.find(genericType, id);
}

//ConfigurationHbmDAO
public List<ConfigurationDO> getAllConfigurationSetEntries(final String module,
        final String version, final String configSet) {
    CriteriaBuilder builder = mainEntityManagerFactory.getCriteriaBuilder();
    CriteriaQuery<ConfigurationDO> query = builder.createQuery(ConfigurationDO.class);
    // create criteria
    Root<ConfigurationDO> root = query.from(ConfigurationDO.class);
    query.where(
            builder.equal(root.get("modulName"), module),
            builder.equal(root.get("version"), version),
            builder.equal(root.get("configurationSet"), configSet)
    );
    return mainEntityManagerFactory.createQuery(query).getResultList();
}
These few lines of source code are pretty easy to read. The query formulated in getAllConfigurationSetEntries() returns a list of ConfigurationDO objects from the same module with an equal version of a configSet. A module is, for example, the library TP-CORE itself, an ACL, and so on. The configSet is a namespace that describes configuration entries that belong together like a bundle and are used in a service such as e-mail. The version is related to the service: if changes are needed in the future, the version number has to increase. Let's take a closer look at how the e-mail example works in particular.
We assume that an e-mail service in the module TP-CORE contains the configuration entries mailer.host, mailer.port, user and password. First we define module=core, configSet=email and version=1. If we now call getAllConfigurationSetEntries("core", "1", "email"), the result is a list of four domain objects with the entries for mailer.host, mailer.port, user and password. If a newer version of the e-mail service needs more configuration entries, a new version is defined. It is very important that the already existing entries for the mail service are duplicated in the database with the new version number. As an effect, the registry table will grow continuously, but with a stable and well-planned development process such changes do not occur that often. The TP-CORE library contains a simple SMTP mailer that uses the ConfigurationDAO. If you want to investigate the usage in the MailClient real-world example, you can have a look at the official documentation in the TP-CORE GitHub wiki.
The benefit of duplicating all existing entries of a service when the service configuration changes is that a history is created. In the case of updating a whole application, it is now possible to compare the entries of a service by version to decide whether there are changes that affect the application. In practical usage this feature is very helpful, but it will not prevent updates from changing our actual configuration by accident. To solve this problem the domain object has two different entries for the configuration value: default and configuration.
The application configuration follows the convention over configuration paradigm. By definition, each service needs a fixed default value for every existing configuration entry. These default values cannot be changed themselves, but when the value in the ConfigurationDO is set, the defaultValue entry is ignored. If an application has to be updated, it is also necessary to support a procedure that captures all custom changes of the updated configuration set and restores them in the new service version. The basic functionality (API) for application configuration in TP-CORE release 3.0 is:
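Assembled from the methods mentioned in this article, that API can be outlined roughly as follows; this is a sketch with guessed signatures, not the verbatim TP-CORE 3.0 interface:

```java
import java.util.List;

// Hypothetical outline of the ConfigurationDAO API, combining the
// generic CRUD operations with the configuration-specific query shown
// in Listing 1 of this part.
interface ConfigurationDAOOutline<T> {
    // generic CRUD operations inherited from the GenericDAO
    boolean create(T entry);
    boolean update(T entry);
    boolean delete(String id);
    T find(String id);
    // configuration-specific query
    List<T> getAllConfigurationSetEntries(String module, String version, String configSet);
}
```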
The following listing gives you an idea of how an implementation in your own service could look. This snippet is taken from the JavaMailClient and shows how the internal processing of the fetched ConfigurationDO objects is managed.
private void processConfiguration() {
    List<ConfigurationDO> configurationEntries =
            configurationDAO.getAllConfigurationSetEntries("core", "1", "email");
    for (ConfigurationDO entry : configurationEntries) {
        String value;
        if (StringUtils.isEmpty(entry.getValue())) {
            value = entry.getDefaultValue();
        } else {
            value = entry.getValue();
        }
        if (entry.getKey()
                .equals(cryptoTools.calculateHash("mailer.host",
                        HashAlgorithm.SHA256))) {
            configuration.replace("mailer.host", value);
        } else if (entry.getKey()
                .equals(cryptoTools.calculateHash("mailer.port",
                        HashAlgorithm.SHA256))) {
            configuration.replace("mailer.port", value);
        } else if (entry.getKey()
                .equals(cryptoTools.calculateHash("user",
                        HashAlgorithm.SHA256))) {
            configuration.replace("mailer.user", value);
        } else if (entry.getKey()
                .equals(cryptoTools.calculateHash("password",
                        HashAlgorithm.SHA256))) {
            configuration.replace("mailer.password", value);
        }
    }
}
Another functionality of the application configuration is located in the service layer. The ConfigurationService operates from the module perspective. The current methods resetModuleToDefault() and filterMandatoryFieldsOfConfigSet() already give a good impression of what that means.
If you take a look at the MailClientService you will detect the method updateDatabaseConfiguration(). You may wonder why this method is not part of the ConfigurationService. This intention in general is not wrong, but in this specific implementation the update functionality is specialized to the MailClient configuration. The basic idea of the configuration layer is to combine several DAO objects into composed functionality. The orchestration layer is the correct place to combine services into a complex process.
Conclusion
The implementation of the application configuration inside the small library TP-CORE allows you to define an application-wide configuration registry. This also works when the application has a distributed architecture such as microservices. The usage is quite simple and can easily be extended to your own needs. The proof that the idea works well is its real-world usage in the MailClient and FeatureToggle implementations of TP-CORE.
I hope this article was helpful and maybe you would also like to use TP-CORE in your own project. Feel free to do so, because the Apache 2 license imposes no restriction on commercial usage. If you have suggestions, feel free to leave a comment or give a thumbs up. A star on my TP-CORE GitHub project is also welcome.