Tag Archives: DevOps
Setting up a native Git server on Linux
If you want to use your Git repository for collaborative editing of source code, you need a Git server. The Git server enables multiple developers to collaborate on the same code base. Installing the Git client on a Linux server is a first step towards your own server solution, but it is far from sufficient. To allow multiple people to access a code repository, we need access authorization, since the repository will typically be reachable via the Internet. We want to use user management to prevent unauthorized people from reading and changing the contents of the repositories.
There are many excellent and convenient solutions for operating a Git server that are usually preferable to a native server solution. Administering a native Git server requires Linux knowledge and is done exclusively via the command line. Solutions such as SCM-Manager have a graphical user interface and come with many useful tools for administering the server. These tools are not available with a native installation.
Why should you install Git as a native server at all? This question is quite easy to answer: because the server on which the code repository is to be made available has only limited hardware resources. RAM in particular is always a bit scarce in this context. This is often the case with rented virtual private servers (VPS) or a small Raspberry Pi. So it can indeed make sense to run a native Git server.
As a prerequisite, we need a Linux server on which we can install the Git server. This can be a Debian or Ubuntu server. If you use CentOS or another Linux distribution, you must use your distribution's package manager instead of APT to install the software.
In the first step, we start by updating the packages and installing the Git client.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git

As a second step, we create a new user named git, create a home directory for it, and activate SSH access there.
sudo useradd --create-home --shell /bin/bash git
sudo su - git
cd /home/git/
mkdir .ssh/ && chmod 700 .ssh/
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys

Now, in the third step, we can create our Git repositories in the newly created home directory of the git user. These differ from a local workspace in that they do not have the source code checked out.
mkdir /home/git/repos/project.git
cd /home/git/repos/project.git
git init --bare

Unfortunately, we are not quite finished yet. In the fourth step, we have to set up user authorization for the created repository. This is done by storing the public keys for SSH access on the Git server. To do this, we copy the contents of our public key file (for example id_rsa.pub) into the /home/git/.ssh/authorized_keys file, each key on a separate line. If you later want to deny an existing user access, simply comment out their public key entry with a #.
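This step can be sketched as follows (assuming the default key pair ~/.ssh/id_rsa on the client; <IP> is again the address of your server):

ssh-copy-id git@<IP>
# or append the public key manually:
cat ~/.ssh/id_rsa.pub | ssh git@<IP> 'cat >> ~/.ssh/authorized_keys'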
If everything has been done correctly, you should be able to clone the repository with the following command: git clone ssh://git@<IP>/~/<repo>

Replace <IP> with the actual server address. In our example, the correct path for <repo> is repos/project.git, the directory we created for the Git repository.
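For instance, if the server were reachable at 192.168.2.10 (a placeholder address for this sketch), the complete call would be:

git clone ssh://git@192.168.2.10/~/repos/project.git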
Multiple repositories can be created on the native Git server. It is important to note that all authorized users have read and write access to all repositories created in this way. This can only be restricted by creating several users on the operating system of the Linux server that hosts our Git repositories and assigning the repositories to them.
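A short sketch of this separation (the user name git-teamB and the repository name projectB.git are just examples): every team gets its own system user, whose home directory contains only that team's repositories.

sudo useradd --create-home --shell /bin/bash git-teamB
sudo su - git-teamB
# repeat the SSH setup for the new user
mkdir -p ~/.ssh ~/repos/projectB.git
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
cd ~/repos/projectB.git && git init --bare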
We see that a native Git server installation can be implemented quickly, but it is not sufficient for commercial software development. If you like to experiment, you can create a virtual machine and try out this workshop in it.
Configuration files in software applications
Why do we even need the option to save application configurations in text files? Isn’t a database sufficient for this purpose? The answer to this question is quite trivial. The information on how an application can connect to a database is difficult to save in the database itself.
Now you could certainly argue that you can achieve such things with an embedded database such as SQLite. That may be correct in principle. Unfortunately, this solution is not really practical for highly scalable applications. And you don’t always have to use a sledgehammer to crack a nut. Saving important configuration parameters in text files has a long tradition in software development. However, various text formats such as INI, XML, JSON and YAML have now become established for this use case. For this reason, the question arises as to which format is best to use for your own project.
INI Files
One of the oldest formats is the well-known INI file. It stores information according to the key = value principle. If a key appears multiple times in such an INI file, the value is overwritten by the last entry that appears in the file.
; Example of an INI File
[Section-name]
key=value ; inline
text="text configuration with spaces and \' quotes"
string='can also be written like this'
char=password
# numbers & digits
number=123
hexa=0x123
octa=0123
binary=0b1111
float=123.12
# boolean values
value-1=true
value-0=false

As we can see in the small example, the syntax in INI files is kept very simple. The [section] name is used primarily to group individual parameters and improves readability. Comments can be marked with either ; or #. Otherwise, there is the option of defining various text and number formats, as well as Boolean values.
Web developers know INI files primarily from the PHP configuration, the php.ini, in which important properties such as the maximum size of a file upload can be specified. INI files are also still common under Windows, although the registry was introduced for this purpose back in Windows 95.
Properties
Another tried and tested solution is so-called property files. They are particularly common in Java programs, since Java already provides a simple class that can handle properties. The key=value format is borrowed from INI files; comments are also introduced with #.
# PostgreSQL
hibernate.dialect.database = org.hibernate.dialect.PostgreSQLDialect
jdbc.driverClassName = org.postgresql.Driver
jdbc.url = jdbc:postgresql://127.0.0.1:5432/together-test

To ensure type safety when reading .properties files in Java programs, the TP-CORE library contains an extended implementation. Although the properties are read in as strings, the values can be accessed in a type-safe way. A detailed description of how the PropertyReader class can be used can be found in the documentation.
.property files can also be used as filters for substitutions in the Maven build process. Of course, properties are not limited to Maven and Java. The concept can also be used in languages such as Dart, Node.js, Python, and Ruby. To ensure the greatest possible compatibility of the files between the different languages, exotic notation options should be avoided.
XML
XML has also been a widely used option for many years for storing application configurations in an editable form. Compared to INI and property files, XML offers more flexibility in defining data. A very important aspect is the ability to define fixed structures with a grammar. This allows validation even of very complex data. Thanks to the two checking mechanisms of well-formedness and validation against a grammar, possible configuration errors can be reduced significantly.
Well-known application scenarios for XML can be found, for example, in Java Enterprise projects (JEE) with the web.xml, or in the configuration of the Spring Framework and Hibernate. The expressive power of XML even allows it to be used as a Domain Specific Language (DSL), as the Apache Maven build tool demonstrates.
Thanks to many freely available libraries, there is an implementation for almost every programming language for reading XML files and accessing specific data. For example, PHP, a language popular with web developers, has a very simple and intuitive solution for dealing with XML in the SimpleXML extension.
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1"
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/assembly/ApplicationContext.xml</param-value>
    </context-param>
    <context-param>
        <param-name>javax.faces.PROJECT_STAGE</param-name>
        <param-value>${jsf.project.stage}</param-value>
    </context-param>
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <listener>
        <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
    </listener>
    <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>*.xhtml</url-pattern>
    </servlet-mapping>
    <welcome-file-list>
        <welcome-file>index.xhtml</welcome-file>
    </welcome-file-list>
</web-app>
JSON
JavaScript Object Notation, JSON for short, is a comparatively young technology, even though it has been around for several years. JSON, too, has an implementation for almost every programming language. The most common use case for JSON is data exchange between microservices. The reason for this is JSON's compactness: compared to XML-based web services such as XML-RPC or SOAP, the data stream to be transferred is much smaller.
There is also a significant difference between JSON and XML in the area of validation. Basically, the official JSON homepage [1] offers no way to define a grammar as XML does with DTD or Schema. There is a proposal for a JSON grammar on GitHub [2], but no corresponding implementations that would allow this technology to be used in projects.
A further development of JSON is JSON5 [3], which was started in 2012 and has been officially published as a specification in version 1.0.0 [4] since 2018. The purpose of this development was to significantly improve the readability of JSON for humans. Important features, such as the ability to write comments, were added. As an extension, JSON5 is fully compatible with JSON. To give a brief impression of JSON5, here is a small example:
{
    // comments
    unquoted: 'and you can quote me on that',
    singleQuotes: 'I can use "double quotes" here',
    lineBreaks: "Look, Mom! \
No \\n's!",
    hexadecimal: 0xdecaf,
    leadingDecimalPoint: .8675309, andTrailing: 8675309.,
    positiveSign: +1,
    trailingComma: 'in objects', andIn: ['arrays',],
    "backwardsCompatible": "with JSON",
}

YAML
Many modern applications, such as the open-source metrics tool Prometheus, use YAML for their configuration. The very compact notation is strongly reminiscent of the Python programming language. Version 1.2 is the currently published YAML specification.
The advantage of YAML over the other formats is its extreme compactness; at the same time, version 1.2 has a grammar for validation. Despite this compactness, the focus of YAML 1.2 is on good readability for machines and humans alike. Whether YAML has achieved this goal is something I leave for each reader to decide. The official homepage provides all the resources you need to use it in your own project, including an overview of the existing implementations. The design of the YAML homepage also gives a good foretaste of the clarity of YAML files. Attached is a very compact example of a Prometheus configuration in YAML:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first.rules"
  # - "second.rules"
  #IP: 127.0.0.1

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['127.0.0.1:8080']

  # SPRING BOOT WEB APP
  - job_name: spring-boot-sample
    scrape_interval: 60s
    scrape_timeout: 50s
    scheme: "http"
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['127.0.0.1:8888']
    tls_config:
      insecure_skip_verify: true

Summary
All of the techniques presented here have been tried and tested in practice in many projects. There may, of course, be preferences for special applications such as REST services. For my personal taste, I prefer the XML format for configuration files: it is easy to process in a program, extremely flexible and, with clever modeling, compact and quite readable for humans as well.
References
DevOps Days Medellin 2024
Continuous Stupidities: DevOps Myths (Estupideces Continuas: Mitos de DevOps)
Learn through software! Improve your performance by reflecting on your leadership. In this workshop, you will develop concrete actions and adopt empowering habits and a framework for software project managers. The house provides the materials! If you are an IT/software professional and aspire to a leadership role, or simply want to improve, this workshop is for you.
Modern Times
There seems to be a dire necessity to automate everything – even automation itself. This is the common understanding, and therefore the motivation, of most DevOps teams. Let's have a look at typical continuous stupidities during the transformation from pure configuration management to DevOps engineering.

In my role as Configuration and Release Manager, I saw in close to every project I joined gaps in the build structure or in the software architecture that I had to fix by optimizing the build jobs. But often you can't fix symptoms like long-running build scripts with just a few clicks. In this post I give a brief introduction to common problems in software projects that you need to overcome before you can really think about establishing a DevOps culture.
- Build logic can't fix a broken architecture. A huge number of SCM merge conflicts occur because of missing encapsulation of business logic. A function that is spread across many modules or services makes it highly likely that a file will be touched by more than one developer.
- The necessity of orchestrated builds is a hint of architectural problems. Transitive dependencies, missing encapsulation and a heavy dependency chain are typical reasons for running into the chicken-and-egg problem. Design your artifacts to be as independent as possible.
- Build logic is developed by developers, not by administrators. People focused on operations have different concepts for maintaining artifact builds than software developers. A good example of a build-structure anti-pattern is webMethods from Software AG: it does not provide a repository server like Sonatype Nexus for sharing dependencies; instead, the build always points to the dependencies inside a webMethods installation. This practice violates the basic idea of build automation described in the book 'Practices of an Agile Developer' from 2006.
- Not everything at once. Split up the build jobs into specific goals, such as creating the artifact, running acceptance tests, creating API documentation and generating reports. If one of the later steps fails, you don't need to repeat everything; the execution time of the build is reduced dramatically and the build infrastructure becomes easier to maintain. (A small sketch follows this list.)
- Don't give too much flexibility to your build infrastructure. This point is strongly related to the first one. A build manager with too little discipline will create extremely complex scripts nobody is able to understand. The JavaScript task runner Grunt is an example of how build logic can become messy and unreadable. This is one of the reasons why my favorite build tool for Java projects is Maven: it enforces governance over understandable builds.
- There is no requirement to automate the automation. By definition, higher levels of automation cost more than simple tasks. Always think beforehand about the benefits of your automation activities to see whether it makes sense to spend time and money on them.
- We do what we can, but can we do what we do? Or in the words of Grady Booch: "A fool with a tool is still a fool." Understand the requirements of your project and decide on that basis which tools to choose. If you don't have the resources, even the most professional solution cannot support you. Once you have understood your problem, you are able to learn new, more advanced processes.
- Build logic has to run on the local development environment first. If your build does not run on your local development machine, don't call it build logic; it is just a hack. Build logic has to be platform- and IDE-independent.
- Don't mix up source repositories. Organizing the sources of several projects into folders inside one huge directory just creates a complex build without any flexibility. Sources should be structured by technology or as separate, independent modules.
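To make the "not everything at once" advice concrete, here is a minimal sketch for a Maven project (standard lifecycle calls only, no project-specific assumptions): each goal becomes its own job in the build pipeline.

# job 1: compile and package the artifact without the slow test suite
mvn clean package -DskipTests
# job 2: run the complete test suite as a separate step
mvn verify
# job 3: generate reports and documentation independently
mvn site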
Many of the points I mentioned can be recognized by comparing them with the current situation in almost every project. The solution for fixing things in a healthy manner is in most cases not that complicated. It just needs a bit of attention and good planning. The most important advice I can give is to follow the KISS principle. Keep it simple, stupid. This means following the standard process as closely as possible without modifications. You don't need to reinvent the wheel. There are reasons why a standard becomes a standard. Here is a short plan you can follow.
- First: understand the problem.
- Second: investigate a standard solution for the process.
- Third: develop a plan for applying the solution to the existing process landscape. This implies kicking out tools that do not support standard processes.
If you follow your own plan step by step, without jumping further ahead than the next point, you will see positive results quite quickly.
By the way: if you would like guidance on the way to a successful DevOps process, don't hesitate to contact me. I offer hands-on consulting and also training to build up a powerful DevOps team.
Working with text files on the Linux shell

Linux is becoming more and more popular as an operating system for IT professionals. One of the reasons for this trend are its server solutions: stability and low resource consumption are important characteristics for this choice. If you have ever played around with a Microsoft server, you will miss the graphical desktop on a Linux server. After logging in to a Linux server, you just see the command prompt waiting for your input.
In this short article I introduce some helpful Linux programs for working with files on the command line. They allow you to gather information, for example from log files. Before I start, I'd like to recommend a simple and powerful editor named joe.
Ctrl + C – Abort the current editing of a file without saving changes
Ctrl + KX – Exit the current editing and save the file
Ctrl + KF – Find text in the current file
Ctrl + V – Paste clipboard into document (CMD + V for Mac)
Ctrl + Y – Delete current line where cursor is
To install joe on a Debian-based Linux distribution, you just need to type:
sudo apt-get install joe
1. When you need to find content in a huge text file, GREP will be your best friend. GREP allows you to search for text patterns in files.
grep <pattern> file.log
-n : show the line numbers of matching lines
-i : case insensitive
-v : invert matches
-E : extended regex
-c : count number of matches
-l : list the names of files that match the pattern

2. When you need to analyze network packets, NGREP is the tool of your choice.
ngrep -I file.pcap
-d : specify the network interface
-i : case insensitive
-x : print in alternate hexdump
-t : print timestamp
-I : read a pcap file

3. When you need to see the changes between two versions of a file, DIFF will do the job.
diff version1.txt version2.txt
a : added lines
c : changed lines
d : deleted lines
# : line numbers
< : lines from the first file
> : lines from the second file

4. Sometimes it is necessary to bring the entries in a file into order. SORT is going to help you with this task.
sort file.log
-o : write the result to a file
-r : reverse order
-n : numerical sort
-k : sort by column
-c : check if the input is ordered
-u : sort and remove duplicates
-f : ignore case
-h : human-numeric sort (e.g. 2K, 1G)

5. If you have to replace strings inside a huge text, as in find and replace, you can do that with SED, the stream editor.
sed 's/regex/replacement/g' file.log
s : the substitute command (search and replace)
g : global flag, replaces all matches in a line
d : the delete command
w : writes the result to a file
-e : adds an expression or command to execute
-n : suppresses the automatic output

6. Parsing fields using delimiters in text files can be done by using CUT.
cut -d ":" -f 2 file.log
-d : use the field delimiter
-f : field numbers
-c : cut at specific character positions

7. Duplicated lines in a text file can be filtered out with UNIQ. Note that uniq only compares adjacent lines, so the input usually has to be sorted first.
uniq file.txt
-c : count the number of occurrences
-d : print only duplicated lines
-i : case insensitive

8. AWK is a small programming language designed for manipulating structured data.
awk '{print $2}' file.log
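These tools really shine in combination. As a small sketch (assuming a log file app.log whose second space-separated field is the log level), the following pipeline counts how often each level occurs:

# count log levels (assumes the level is the 2nd space-separated field)
awk '{print $2}' app.log | sort | uniq -c | sort -rn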
README – how to

README files have a long tradition in software projects. These originally plain text files contained license information and instructions on how to compile the corresponding artifact from the source code, or important notes on installing the program. There is no real standard for how to structure such a README file.
Since GitHub (acquired by Microsoft in 2018) began its triumphant march as a free code-hosting platform for open-source projects, the feature of displaying the README file as the start page of the repository has existed from quite early on. All that is required is to create a simple text file called README.md in the root directory of the repository.
In order to structure README files more clearly, a simple formatting option was sought, and the Markdown notation was quickly chosen because it is easy to use and can be rendered quite efficiently. The overview pages are thus easier for people to read and can also serve as project documentation.
It is possible to link several such Markdown files together as project documentation. This gives you a kind of mini wiki that is included in the project and also versioned via Git.

The whole thing became so successful that self-hosting solutions such as GitLab and the commercial BitBucket have adopted this feature as well.
Now, however, the question arises as to which content is best written into such a README file so that it offers real added value to outsiders. The following points have become established over time; a minimal skeleton follows the list:
- Short description of the project
- Conditions under which the source code may be used (license)
- How to use the project (e.g. instructions for compiling or how to include the library in own projects)
- Who are the authors of the project and how to contact them
- What to do if you want to support the project
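Put together as a minimal skeleton (the section names are common conventions, not a binding standard):

# Project Name
A short description of what the project does.

## License
Conditions under which the code may be used, e.g. a reference to the LICENSE file.

## Usage
How to compile the artifact or include the library in your own project.

## Authors
Who wrote the project and how to contact them.

## Contributing
What to do if you want to support the project.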
Meanwhile, so-called badges (stickers) have become very popular. They often reference external services such as the free continuous integration server TravisCI and help to assess the quality of the project.
On GitHub there are also various templates for README files. However, you have to look at the actual circumstances of your own project and judge which information is really relevant for users. Such templates do help a lot, though, in finding out whether you might have missed a point.
The fact that pretty much every manufacturer of source control management server solutions has integrated the feature of displaying the README.md file as the project start page of a code repository means that a README.md is a useful thing for commercial projects as well.
Even though the Markdown syntax is easy to learn, it can be more comfortable to use a dedicated Markdown editor for extensive editing of such files. Make sure that the preview is rendered correctly and that not just simple syntax highlighting is offered.
In any case, it is worth taking a look at the page https://www.readme-templates.com. Further resources on the topic can be found here:
Latest won’t always be greatest
For more than a decade, it has been widely accepted that computer systems should be kept up to date. Those who regularly install updates reduce the risk of security gaps on their computer that could be exploited, always in the hope that the software manufacturers actually fix security flaws in their updates, too. Microsoft, for example, has imposed an update requirement on its users since the introduction of Windows 10. In principle, the idea was well-founded, because unpatched operating systems give hackers easy access. So the motto 'latest is greatest' took hold a long time ago.
Windows users had little leeway here. But even on mobile devices like smartphones and tablets, automatic updates are activated in the factory settings. If you host an open-source project on GitHub, you will receive regular emails about new versions of the libraries you use. So at first glance, this is a good thing. However, if you delve a bit deeper into the topic, you will quickly come to the conclusion that latest is not always greatest.
The best-known example of this is Windows 10 and the update cycles enforced by Microsoft. It is undisputed that systems must be checked regularly for security problems and that available updates must be installed. It is also understandable that the maintenance of computer systems takes time. It becomes problematic, however, when updates installed by the manufacturer paralyze the entire system and a fresh installation becomes necessary because the update was not tested sufficiently. I also consider it unreasonable to impose unrequested functional changes on the user in the context of security updates. Especially on Windows, a lot of additional programs are installed, which can quickly become a security risk if they are no longer maintained. In consequence, forced Windows updates do not make a computer secure, since the additionally installed software is not examined for vulnerabilities.
If we take a look at Android systems, the situation is much better. However, there are enough points of criticism here as well. The applications are updated regularly, so security actually improves significantly. But on Android, too, every update usually means functional changes. A simple example is the very popular Google Maps service: with every update, the map usage becomes more confusing for me, as more and more unwanted additional information is displayed, which considerably reduces the already limited screen space.
As a user, I have fortunately not yet experienced application updates on Android paralyzing the entire phone. This proves that it is quite possible to test updates extensively before rolling them out to users. However, it does not mean that every update was unproblematic. A problem that can be observed regularly is, for example, excessively increased battery consumption.
Pure Android system updates, on the other hand, regularly cause the hardware to become so slow after barely two years that many people decide to buy a new smartphone, although the old device is still in good condition and could be used for much longer. I have noticed that many experienced users turn off their Android updates after about a year, before the phone is sent into obsolescence by the manufacturer.
How do you get an update grouch to keep his systems up to date and secure? My approach as a developer and configuration manager is quite simple: I distinguish between feature updates and security patches. If you follow semantic versioning in the release process and use a branch-by-release model in SCM systems like Git, such a distinction is easy to implement.
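A minimal sketch of this model with Git (the branch and version numbers are only examples): every minor release gets its own branch, and a security patch only ever increments the patch level on that branch.

# create a branch when release 2.4.0 is published
git checkout -b release/2.4
git tag 2.4.0
# later: apply a security fix on the release branch only
git checkout release/2.4
git commit -m "close vulnerability in XML parser"
git tag 2.4.1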
But I have also dedicated myself to the question of versionable configuration settings for software applications. For this there is a reference implementation in the TP-CORE project on GitHub, which is described in detail in the two-part article Treasure Chest. After all, we must be aware that when an update resets the entire configuration made by the user to factory settings, as happens quite often with Windows 10, quite peculiar security vulnerabilities can arise.
This also brings us to the topic of programming and the way GitHub motivates developers by email to include new versions of the libraries used in their applications. If such an update involves a major API change, the problem is the high migration effort for the developers. Here, too, a fairly simple strategy has worked for me: instead of being impressed by GitHub's update notifications, I regularly check via OWASP whether my libraries contain known vulnerabilities. If a problem is detected by OWASP, it no longer matters how costly an update may be; the update and the associated migration must be implemented promptly. This also applies to all releases that are still in production.
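For Maven projects, such a check can be sketched with the OWASP Dependency-Check plugin, which compares the declared dependencies against the public CVE databases. Invoked ad hoc, without any configuration in the pom.xml, the call is:

# ad-hoc vulnerability scan of all project dependencies
mvn org.owasp:dependency-check-maven:check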
However, one rule of thumb applies to avoid update hell from the start: only install or use what you really need. The fewer programs are installed under Windows and the fewer apps there are on the smartphone, the fewer security risks there are. This also applies to program libraries. From a security perspective, less is more. Apart from that, dispensing with unnecessary programs is also a free performance gain.
Certainly, for many private users the question of system updates is hardly relevant. Only new unwanted functions in existing programs, performance degradation or the occasional broken operating system cause more or less strong displeasure. In a commercial environment, however, substantial costs can arise quite quickly, and these can also negatively affect projects that are currently being implemented. Companies and people who develop software can improve user satisfaction considerably if they differentiate between security patches and feature updates in their releases. And a feature update should then also contain all known security updates.
jConf Peru 2022
Rolling Stones on stage: release me
Everyone does it, some even several times a day. But few are aware of the complex interlocking mechanisms that make up a complete software release. This is why it sometimes happens that a package gets in the way of the automated processing chain.
With a bit of theory and a typical example from the Java universe, I show how you can take a little pressure out of the software development process in order to achieve lean, largely automated processes.
Dealing with standards in your own projects is not a bad thing. A well-defined release process based on common standards increases your productivity. Learn in this talk how you can simplify your daily work.
JVM Colombia 2022
Swallowed Exceptions (Excepciones Tragadas)
Exception handling should be basic knowledge for Java developers. But using exceptions safely is not as easy as it seems at first. Several books even recommend not using exceptions in order to avoid performance problems. Meanwhile, our own code is struggling with exceptions, and we have to find the exact spot the problem originates from. That is not always an easy task, because the place where the exception is caught often has to be improved in order to collect relevant information. In this talk I share my experience on how to deal with exceptions in general. I explain with examples how to handle exceptions and when it is better to avoid using them. After this presentation, the amount of information in your error messages will not grow, because we will get the important information for fixing the problem right where it occurs.
Program flow is not defined by if-else statements alone. Exceptions let you handle problems before they occur, if you do it right. Learn how to collect information about thrown exceptions and see which practices you should avoid.