Stability in the crisis – business continuity & disaster recovery

The saying "better to have it and not need it than to need it and not have it" is something we have all experienced first-hand, whether in our professional or private lives. Thoughts like "if only we hadn't clicked on that malicious link in the email" go through our heads. But once the damage has been done, it is already too late to take precautions.

What is usually just annoying in our private lives can very quickly become a threat to our existence in the business world. For this reason, it is important to set up a safety net in good time for the event of a potential loss. Unfortunately, many companies do not pay adequate attention to the issue of disaster recovery and business continuity, which then leads to high financial losses in an emergency.

The list of possible threat scenarios is long. Some scenarios are more likely to occur than others. It is therefore important to carry out a realistic risk assessment that weighs up the individual options. This helps to prevent the resulting costs from escalating.

The Corona pandemic was a life-changing experience for many people. The state-imposed hygiene rules in particular presented many companies with enormous challenges. The keyword here is home office. In order to get the situation under control, employees were sent home to work from there. Since there is no established culture and even less of an existing infrastructure for home working in German-speaking countries in particular, this had to be created quickly under great pressure. Of course, this was not without friction.

But it does not always have to be a drastic event. Even a mundane power failure or a power surge can cause considerable damage. Nor does it take a building fire or a flood to bring operations to an immediate standstill; a hacker attack also falls into the category of serious threats. These examples should be enough to illustrate the problem, so let us turn to the question of which sensible precautions can be taken in advance.

The easiest and most effective measure to implement is a comprehensive data backup. To ensure that no data is lost, it helps to list and categorize the various kinds of data. Such a table should contain the storage paths to be backed up, the approximate storage usage, a prioritization according to confidentiality, and the category of the data. Categories include project data, tenders, email correspondence, financial accounting, supplier lists, payroll statements and so on.

It is of course clear that, in the context of data protection, not everyone in the company is authorized to read all of this information, which is why sensitive data must be protected by encryption. Depending on the protection class, this can be a simple password for a compressed archive, a cryptographically encrypted directory or an encrypted hard drive.

How often a data backup should be carried out depends on how often the original data is changed: the more often the data changes, the shorter the backup intervals should be. Another point is the target storage of the backup. A fully encrypted archive that is located locally in the company can certainly be uploaded to cloud storage after a successful backup. However, this solution can be very expensive for large amounts of data and is therefore not necessarily suitable for small and medium-sized enterprises (SMEs). Ideally, there are several replicas of a data backup stored in different places.
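
As a minimal sketch of such a backup run on Linux, the following commands create a compressed archive, encrypt it symmetrically and copy it to a second location; all paths and file names are examples, not a recommendation for specific directories:

# create a compressed archive of the directories to be backed up
tar -czf backup-$(date +%F).tar.gz /srv/projects /srv/accounting

# encrypt the archive symmetrically with AES-256 before it leaves the company
gpg --symmetric --cipher-algo AES256 backup-$(date +%F).tar.gz

# store a replica of the encrypted archive in a second location
cp backup-$(date +%F).tar.gz.gpg /mnt/external-backup/
Bash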

Of course, it is of little use to create extensive backups only to find out in an emergency that they are faulty. That is why verification of the backup is extremely important. Professional data backup tools contain a mechanism that compares the written data with the original; the Linux command rsync also uses this mechanism. A simple copy & paste does not meet this requirement. A look at the total size of the backup is also worthwhile, as it quickly shows whether information is missing. Of course, much more could be said about backups, but that would go beyond the scope of this article. The important thing is to develop the right understanding of the topic.
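
A hedged example of such a verified synchronization with rsync, followed by a quick size comparison; source and target paths are again placeholders:

# synchronize the source with the backup target; --checksum forces a content comparison
rsync --archive --checksum --delete /srv/projects/ /mnt/backup/projects/

# rough plausibility check: the total sizes of source and backup should match
du -sh /srv/projects /mnt/backup/projects
Bash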

If we take a look at the IT infrastructure of companies, we quickly realize that software installations are still predominantly provisioned manually. If a computer system can no longer perform its service due to a hardware fault, it is important to have a suitable recovery strategy at hand. The time-consuming part of a hardware failure is reinstalling the programs after the device has been replaced, and for cost reasons it makes little sense for many companies to keep a fully redundant infrastructure. A proven solution comes from the DevOps area and is called Infrastructure as Code (IaC). The idea is to provision services such as email or databases via script. For the business continuity & disaster recovery approach, it is sufficient if the automated installation or update is triggered manually. You should not rely on proprietary solutions from cloud providers, but use freely available tools. A realistic scenario is a price increase by the cloud provider, or changes to the terms and conditions that are unacceptable for the company, making a quick migration necessary. If the automation solution is based on a technology that other providers cannot offer, a quick change becomes extremely difficult.
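
As a very reduced sketch of this idea, even a plain shell script can document and automate the setup of a replacement machine; the package selection and configuration path below are purely illustrative assumptions:

#!/bin/bash
# minimal provisioning script for a replacement machine
set -e

# install the required packages
sudo apt-get update
sudo apt-get install -y git postgresql postfix

# restore the service configuration from a version-controlled directory
sudo cp -r ./config/postgresql/. /etc/postgresql/
sudo systemctl restart postgresql
Bash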

Employee flexibility should also be taken into account. Purchasing notebooks instead of desktop computers allows a high level of mobility. This of course includes permission to take the laptop home and log into the company network from there. Teams that were already familiar with working from home at the beginning of 2020 were able to continue their work almost seamlessly, which gave the companies in question a huge competitive advantage. It can also be assumed that large, representative company headquarters will become less and less important in the course of the digital transformation. Teams will then organize themselves flexibly and remotely using modern communication tools. Current studies show that such a setup increases productivity in most cases. A colleague with a cold who still feels able to work can simply do so from home, without his colleagues running the risk of being infected.

We can already see how far this topic can be taken. The challenge, however, is to carry out the transformation gradually. The result is a decentralized structure that works with redundancies. It is precisely these redundancies that provide sufficient room for maneuver in the event of a disruption, compared to a centralized structure. Redundancies naturally come with an additional cost factor: equipping employees with a laptop instead of a stationary desktop PC is somewhat more expensive to purchase. However, the price difference between the two solutions is no longer as dramatic as it was at the turn of the millennium, and the advantages outweigh the disadvantages. The transformation towards maintaining business capability in the event of disruptions does not mean that you immediately rush out and buy new equipment for all employees. Once you have determined what is necessary and useful for the company, new purchases can be prioritized: colleagues whose equipment has been written off and is due for replacement receive equipment in accordance with the new company guidelines. The same model is then followed in all other areas. This step-by-step optimization allows for a good learning process and ensures that every completed step has actually been implemented correctly.

Talents wanted

Freelancers who acquire new orders have been experiencing significant changes for some time. Fewer and fewer companies have direct contact with their contractors when placing orders. Recruitment agencies are increasingly pushing themselves between companies and independent contractors.

When specialist knowledge is required for a project, companies are happy to turn to external specialists. This approach gives companies the greatest possible flexibility in controlling costs. But freelancers also benefit from this practice. They can choose to work only on topics in which they have a strong interest, which avoids being used for boring, routine standard tasks. Thanks to their experience in different organizational structures and the variety of their projects, independent contractors have a broad portfolio of unconventional solution strategies. This knowledge base is very attractive for clients, even if a freelance external employee is initially more expensive than a permanent colleague. Due to their diverse experience, freelancers can bring positive impulses to a project that help overcome a standstill.

Unfortunately, for some time now companies have stopped trying to recruit the skilled workers they need on their own. The task of recruiting staff has been outsourced to external recruitment agencies almost everywhere. These so-called recruitment firms advertise that they will find the most suitable candidates for open positions and propose them for the role. After all, these recruiters have access to a large pool of applicant profiles. Companies that want to fill a vacancy often do not know how to find specialists and contact them directly. For this reason, the services offered by recruitment firms are also attractive for medium-sized companies. After plenty of personal experience, however, I have formed a completely different picture over the years: what recruitment firms promise is far from what they actually deliver.

I actually find the idea of having my own agent who takes over my contract acquisition very appealing. It is like in the film and music industry: you have an agent who has your back and gives you regular feedback. This gives you an idea of which technologies are in demand and where it is worth training further, which improves your own market relevance and ensures regular contracts. That would actually be an ideal win-win situation for everyone involved. Unfortunately, what actually happens in reality is something completely different.

Instead of recruitment agents building a good relationship with their specialists and promoting their development, these recruiters act like harmful parasites. They harm both the freelancers and the companies that want to fill vacancies. Because in business, it’s not about really finding the most suitable candidate for a company. It’s all about offering candidates who fit the profile you’re looking for at the lowest possible hourly rate. Whether these candidates can actually do the things they claim to be able to do is often questionable.

The approach of the recruitment agencies is very similar. They try to build up a large pool of current applicant profiles. These profiles are then searched for keywords using automatic AI text recognition systems. From the suggested candidates, those with the lowest hourly rate are contacted for a preliminary interview. Anyone who does not show any major red flags in this preliminary interview is then suggested to the company for an interview. The profit for the recruitment agency is enormous, because they pocket the difference between the hourly rate paid by the client and the hourly rate received by the self-employed person. In some cases, this can be up to 40%.

But that is not all these parasitic intermediaries have to offer. They often delay the payment date for the invoice, and they try to shift the entire business risk onto the freelancer by demanding pointless liability insurance that is not relevant to the advertised position. As a result, companies receive supposedly skilled workers for vacant positions who would more accurately be described as unskilled.

Now you might ask yourself why the companies continue to work with the intermediaries. One reason is the current political situation. For example, since around 2010 there have been laws in Germany that are intended to prevent bogus self-employment. Companies that work directly with freelancers are often pressured by pension insurance companies. This creates a lot of uncertainty and does not serve to protect freelancers. It only secures the business model of the intermediary companies.

I have now gotten into the habit of hanging up immediately and without comment when I notice certain basic patterns. Such phone calls are a waste of time and lead to nothing except annoyance at the audacity of the recruiters. The most important sign of dubious recruiters is that a completely different person is suddenly on the phone than the one you were first in contact with. If this person then has a very strong Indian accent, you can be 100% sure that you are connected to a call center. Even if the number shows a British area code, the people are actually located somewhere in India or Pakistan. Nothing about this underlines their seriousness.

Over the course of my career I have registered on various job portals. My conclusion is that you can save yourself the time. 95% of all contacts that came about through them were recruiters of the kind described above. These people then use the trick of adding you as a contact. But it is naive to believe that these so-called network requests are really about direct contact. The purpose of this action is to get onto your contact list. Many portals such as XING and LinkedIn have a setting that allows contacts to see the contacts in your own list or have them suggested via the network function. These contact lists can be worth real money: you can find department heads and other professionals there who are definitely worth contacting. For this reason, I have deactivated access to my friends list for friends in all social networks. I also reject, without exception, all connection requests from people with the title Recruitment. My presence on social networks now only serves to protect my profiles against identity theft. I no longer respond to most requests to send a CV, and I do not enter my personal information about jobs, studies and employers in these network profiles. Anyone who wants to contact me can do so via my homepage.

Another habit I have developed over the years is never to talk about my salary expectations first. If the person I am talking to cannot give a specific figure that they are willing to pay for my services, they just want to collect data. So another reason to end the conversation abruptly. None of these people care what hourly rate I have had in previous projects. They only use this information to drive down the price. If you are a bit sensitive and don’t want to give a rude answer, just name a very high hourly rate or daily rate.

As we can see, it is not that difficult to recognize the real black sheep very quickly by their behavior. My advice is, as soon as one of the patterns described above occurs, to save time and, above all, nerves and simply end the conversation. From experience, I can say that if the agents behave as described, no placement will be made. It is then better to concentrate your energy on realistic contacts. Because there are also really good placement companies. They are interested in a long-term collaboration and behave completely differently. They provide support and advice on how to improve your CV and advise companies on how to formulate realistic job offers.

Unfortunately, I fear that the situation will continue to worsen from year to year. The influence of economic development and the widespread availability of new technologies will also increase the pressure on the job market. Neither companies nor contractors will have any opportunities in the future if they do not adapt to the new times and take other paths.


Setting up a native Git server on Linux

If you want to use your Git repository for collaborative editing of source code, you need a Git server. The Git server enables multiple developers to collaborate on the same code base. Installing the Git client on a Linux server is a first step towards your own server solution, but it is far from sufficient. In order to allow multiple people to access a code repository, we need access authorization. After all, the repository is reachable via the Internet, and we want to use user management to prevent unauthorized people from reading and changing the contents of the repositories.

There are many excellent and convenient solutions for operating a Git server that should be preferred to a native server solution. The administration of a native Git server requires Linux knowledge and is carried out exclusively via the command line. Solutions such as the SCM-Manager have a graphical user interface and come with many useful tools for administering the server. These tools are not available with a native installation.

So why would you install Git as a native server at all? The question is easy to answer: it makes sense when the server on which the code repository is to be hosted has only limited hardware resources. RAM in particular is always a bit of a bottleneck in this context. This is often the case with rented virtual private servers (VPS) or a small Raspberry Pi. So running a native Git server can indeed be a sensible choice.

As a prerequisite, we need a Linux server on which we can install the Git server. This can be a Debian or Ubuntu server. If you use CentOS or another Linux distribution, you must use your distribution's package manager instead of APT to install the software.

In the first step, we start by updating the packages and installing the Git client.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git
Bash

In the second step, we create a new user named git, give it a home directory, and prepare SSH access for it.

sudo useradd --create-home --shell /bin/bash git
sudo su - git
cd /home/git/
mkdir .ssh/ && chmod 700 .ssh/
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
Bash

Now, in the third step, we can create our Git repositories in the newly created home directory of the git user. These so-called bare repositories differ from a local workspace in that they do not have the source code checked out.
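
A minimal sketch for this step, assuming the repository should be called project.git (the name used in the clone example further below):

cd /home/git/
git init --bare project.git
Bash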

Unfortunately, we are not quite finished with our project yet. In the fourth step, we have to set the user authorization for the created repository. This is done by storing the public keys for SSH access on the Git server. To do this, we copy the contents of our public key file into the /home/git/.ssh/authorized_keys file on a separate line. If you later want to deny an existing user access, simply comment out their public key line with a #.
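
A hedged one-liner for this step, assuming a developer's public key has already been copied to the server as /tmp/id_rsa.pub:

# append the developer's public key to the list of authorized keys
cat /tmp/id_rsa.pub >> /home/git/.ssh/authorized_keys
Bash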

If everything has been done correctly, you can access the repository using the following command line command: git clone ssh://git@<IP>/~/<repo>

Replace <IP> with the actual server IP. In our example, the correct value for <repo> is project.git, i.e. the directory we created for the Git repository.

Multiple repositories can be created on the native Git server. It is important to note that all authorized users have read and write access to all repositories created in this way. This can only be restricted by creating multiple users on the operating system of the Linux server that provides our Git repositories, to whom the repositories are then assigned.
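
If such a separation is needed, the procedure from the second and third step is simply repeated for a new system user; the user and repository names below are just examples:

sudo useradd --create-home --shell /bin/bash git-teamb
sudo su - git-teamb
mkdir .ssh/ && chmod 700 .ssh/
touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys
git init --bare internal-tools.git
Bash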

We see that a native Git server installation can be implemented quickly, but it is not sufficient for commercial software development. If you like to experiment, you can create a virtual machine and try out this workshop in it.


Bottleneck Pull Requests

Confident use of source control management (SCM) systems such as Git is essential for programmers (development) and system administrators (operations). This group of tools has a long tradition in software development and enables development teams to work together on a code base. Four questions are answered: When was the change made? Who made the change? What was changed? Why was something changed? It is therefore a pure collaboration tool.

With the advent of the open source code hosting platform GitHub, so-called Pull Requests were introduced. Pull requests are a workflow in GitHub that allows developers to provide code changes for repositories to which they only have read access. Only after the owner of the original repository has reviewed the proposed changes and approved them are these changes integrated by him. This is also how the name comes about. A developer copies the original repository into his GitHub workspace, makes changes and requests the owner of the original repository to adopt the change. The latter can then accept the changes and if necessary adapt them himself or reject them with a reason.

Anyone who thinks that GitHub was particularly innovative here is mistaken. This process is very old hat in the open source community. Originally, this procedure was called the Dictatorship Workflow. IBM's commercial SCM Rational Synergy, first published in 1990, is based precisely on the Dictatorship Workflow. With the class of distributed version control tools to which Git also belongs, the Dictatorship Workflow is quite easy to implement, so it was obvious that GitHub would make this process available to its users, simply under a much more appealing name. Anyone who works with the free DevOps solution GitLab, for example, will know pull requests as merge requests. The most common Git servers now include the pull request process. Without going too deeply into the technical details of implementing pull requests, we will focus our attention on the usual problems that open source projects face.

Developers who want to participate in an open source project are called maintainers. Almost every project has a short guide on how to support the project and which rules apply. For people who are learning to program, open source projects are ideal for improving their own skills quickly and significantly. For the open source project, however, this means having maintainers with a very wide range of skills and experience. If you do not establish a control mechanism, the code base will erode in a very short time.

If the project is quite large and a lot of maintainers are working on the code base, it is hardly possible for the owner of the repository to process all pull requests in a timely manner. To counteract this bottleneck, the Dictatorship Workflow was expanded into the Dictatorship-Lieutenant Workflow. An intermediate instance was introduced that distributes the review of pull requests across several shoulders. This intermediate layer, the so-called Lieutenants, consists of particularly active maintainers with an already established reputation. The Dictator therefore only needs to review the Lieutenants' pull requests. This is an immense reduction in workload and ensures that there is no backlog of features due to unprocessed pull requests. After all, improvements and extensions should be included in the code base as quickly as possible so that they can be made available to users in the next release.

This approach is still the standard in open source projects to ensure quality. You can never say for certain who is involved in a project; there may even be one or two saboteurs. This idea is not so far-fetched: companies that face strong competition for their commercial product from the free open source sector could come up with unfair ideas here if there were no regulations. In addition, maintainers cannot be disciplined the way team members in companies can, for example. It is difficult to threaten a pay cut to a maintainer who is resistant to advice and, despite repeated requests, does not adhere to the project's conventions. The only option is to exclude this person from the project.

Even though the problem of disciplining contributors described above does not exist in commercial teams, there are other difficulties in these environments that need to be overcome. They date back to the early days of version control tools. The first representatives of this species were not distributed but centralized solutions. CVS and Subversion (SVN) only ever keep the latest revision of the code base on the local development computer; without a connection to the server you can hardly work at all. This is different with Git. Here you have a copy of the repository on your own computer, so you can do your work locally in a separate branch and, when you are finished, bring these changes into the main development branch and then transfer them to the server. The ability to create branches offline and merge them locally has a decisive influence on the stability of your own work if the repository gets into an inconsistent state, because in contrast to centralized SCM systems you can continue working without having to wait for the main development branch to be repaired.
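
A minimal sketch of this local workflow; the branch name, remote name and commit message are assumptions and depend on the project:

# work offline in a local feature branch
git checkout -b feature/report-export
git commit -am "Add CSV export to reports"

# merge locally once the work is finished, then publish the result
git checkout main
git merge feature/report-export
git push origin main
Bash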

These inconsistencies arise very easily. All it takes is forgetting a file when committing, and team members can no longer compile the project locally and are hampered in their work. The concept of Continuous Integration (CI) was established to overcome this problem. It is not, as is often wrongly assumed, about integrating different components into an application. The aim of CI is to keep the commit stage, i.e. the code repository, in a consistent state. For this purpose, build servers were established which regularly check the repository for changes and then build the artifact from the source code. A very popular build server that has been established for many years is Jenkins, which originally emerged as a fork of the Hudson project. Build servers now take on many other tasks as well, which is why it makes a lot of sense to call this class of tools automation servers.

With this brief overview of the history of software development, we now understand the problems of open source projects and of commercial software development, and we have discussed the history of the pull request. In commercial projects, it often happens that teams are forced by project management to work with pull requests. For a project manager without technical background knowledge, it seems to make a lot of sense to establish pull requests in his project as well; after all, he assumes that this will improve code quality. Unfortunately, this is not the case. The only thing that happens is that a feature backlog is provoked and the team is forced to work harder without improving productivity. The pull request must be evaluated by a competent person, which causes unpleasant delays in large projects.

Now I often see the argument that pull requests can be automated. This means that the build server takes the branch with the pull request and tries to build it, and if the compilation and automated tests are successful, the server tries to incorporate the changes into the main development branch. Maybe I’m seeing something wrong, but where is the quality control? It’s a simple continuous integration process that maintains the consistency of the repository. Since pull requests are primarily found in the Git environment, a temporarily inconsistent repository does not mean a complete stop to development for the entire team, as is the case with Subversion.

Another interesting question is how to deal with semantic merge conflicts in an automatic merge. These are not a serious problem per se. This will certainly lead to the rejection of the pull request with a corresponding message to the developer so that the problem can be solved with a new pull request. However, unfavorable branch strategies can lead to disproportionate additional work.

I see no added value in the use of pull requests in commercial software projects, which is why I advise against using pull requests in this context. Apart from a more complicated CI/CD pipeline and increased resource consumption on the automation server, which now does the work twice, nothing is gained. The quality of a software project can be improved by introducing automated unit tests and a test-driven approach to implementing features. Here it is necessary to continuously monitor and improve the test coverage of the project. Static code analysis and activating compiler warnings bring better results with significantly less effort.

Personally, I believe that companies that rely on pull requests either use them for complicated CI or completely distrust their developers and deny that they do a good job. Of course, I am open to a discussion on the topic, perhaps an even better solution can be found. I would therefore be happy to receive lots of comments with your views and experiences about dealing with pull requests.



Configuration files in software applications

Why do we even need the option to save application configurations in text files? Isn’t a database sufficient for this purpose? The answer to this question is quite trivial. The information on how an application can connect to a database is difficult to save in the database itself.

Now you could certainly argue that you can achieve such things with an embedded database such as SQLite. That may be correct in principle. Unfortunately, this solution is not really practical for highly scalable applications. And you don’t always have to use a sledgehammer to crack a nut. Saving important configuration parameters in text files has a long tradition in software development. However, various text formats such as INI, XML, JSON and YAML have now become established for this use case. For this reason, the question arises as to which format is best to use for your own project.

INI Files

One of the oldest formats is the well-known INI file. It stores information according to the key = value principle. If a key appears multiple times in such an INI file, the last entry that appears in the file wins and overwrites the earlier values.

; Example of an INI File
[Section-name]
key=value ; inline

text="text configuration with spaces and \' quotes"
string='can be also like this'
char=password

# numbers & digits
number=123
hexa=0x123
octa=0123
binary=0b1111
float=123.12

# boolean values
value-1=true
value-0=false
INI

As we can see in the small example, the syntax in INI files is kept very simple. The [section] name is used primarily to group individual parameters and improves readability. Comments can be marked with either ; or #. Otherwise, there is the option of defining various text and number formats, as well as Boolean values.

Web developers know INI files primarily from the PHP configuration, the php.ini, in which important properties such as the size of the file upload can be specified. INI files are also still common under Windows, although the registry was introduced for this purpose in Windows 95.

Properties

Another very tried and tested solution is so-called property files. This solution is particularly common in Java programs, as Java already has a simple class that can handle properties. The key=value format is borrowed from INI files. Comments are also introduced with #.

# PostgreSQL
hibernate.dialect.database = org.hibernate.dialect.PostgreSQLDialect
jdbc.driverClassName = org.postgresql.Driver
jdbc.url = jdbc:postgresql://127.0.0.1:5432/together-test
Plaintext

In order to ensure type safety when reading .properties files in Java programs, the TP-CORE library contains an extended implementation. Although the properties are read in as strings, the values can be accessed in a type-safe way. A detailed description of how the PropertyReader class can be used can be found in the documentation.

.properties files can also be used as filters for substitutions in the Maven build process. Of course, properties are not limited to Maven and Java; this concept can also be used in languages such as Dart, Node.js, Python and Ruby. To ensure the greatest possible compatibility of the files between the different languages, exotic notation options should be avoided.

XML

XML has also been a widely used option for many years to store configurations in an application in a changeable manner. Compared to INI and property files, XML offers more flexibility in defining data. A very important aspect is the ability to define fixed structures using a grammar. This allows validation even for very complex data. Thanks to the two checking mechanisms of well-formedness and data validation against a grammar, possible configuration errors can be significantly reduced.

Well-known application scenarios for XML can be found, for example, in Java Enterprise projects (JEE) with web.xml, or in the Spring Framework and Hibernate configuration. The power of XML even allows it to be used as a Domain Specific Language (DSL), as the Apache Maven build tool does.

Thanks to many freely available libraries, there is an implementation for almost every programming language to read XML files and access specific data. For example, PHP, a language popular with web developers, has a very simple and intuitive solution for dealing with XML with the Simple XML extension.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1"
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/assembly/ApplicationContext.xml</param-value>
    </context-param>
    <context-param>
        <param-name>javax.faces.PROJECT_STAGE</param-name>
        <param-value>${jsf.project.stage}</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <listener>
        <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>*.xhtml</url-pattern>
    </servlet-mapping>

    <welcome-file-list>
        <welcome-file>index.xhtml</welcome-file>
    </welcome-file-list>
</web-app>
XML

JSON

JavaScript Object Notation, or JSON for short, is a comparatively young technology, even though it has been around for several years. JSON also has an implementation for almost every programming language. The most common use case for JSON is data exchange in microservices, and the reason for this is JSON's compactness: compared to XML-based web services such as XML-RPC or SOAP, the data stream to be transferred is much smaller due to the notation.

There is also a significant difference between JSON and XML in the area of validation. Basically, the official JSON homepage [1] offers no way to define a grammar as in XML with DTD or Schema. There is a proposal for a JSON grammar on GitHub [2], but there are no corresponding implementations that would allow this technology to be used in projects.

A further development of JSON is JSON5 [3], which was started in 2012 and has been officially published as a specification in version 1.0.0 [4] since 2018. The purpose of this development was to significantly improve the readability of JSON for people. Important functions such as the ability to write comments were added here. JSON5 is fully compatible with JSON as an extension. To get a brief impression of JSON5, here is a small example:

{
  // comments
  unquoted: 'and you can quote me on that',
  singleQuotes: 'I can use "double quotes" here',
  lineBreaks: "Look, Mom! \
No \\n's!",
  hexadecimal: 0xdecaf,
  leadingDecimalPoint: .8675309, andTrailing: 8675309.,
  positiveSign: +1,
  trailingComma: 'in objects', andIn: ['arrays',],
  "backwardsCompatible": "with JSON",
}
JSON5

YAML

Many modern applications, such as the open source metrics tool Prometheus, use YAML for configuration. The very compact notation is strongly reminiscent of the Python programming language. The current YAML version is 1.2.

The advantage of YAML over other specifications is its extreme compactness. At the same time, version 1.2 has a grammar for validation. Despite its compactness, the focus of YAML 1.2 is on good readability for machines and people alike. I leave it up to each individual to decide whether YAML has achieved this goal. On the official homepage you will find all the resources you need to use it in your own project. This also includes an overview of the existing implementations. The design of the YAML homepage also gives a good foretaste of the clarity of YAML files. Attached is a very compact example of a Prometheus configuration in YAML:

global:
  scrape_interval:     15s
  evaluation_interval: 15s

rule_files:
  # - "first.rules"
  # - "second.rules"

#IP: 127.0.0.1
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['127.0.0.1:8080']

  # SPRING BOOT WEB APP
  - job_name: spring-boot-sample 
    scrape_interval: 60s
    scrape_timeout: 50s
    scheme: "http"
    metrics_path: '/actuator/prometheus' 
    static_configs:
     - targets: ['127.0.0.1:8888']
    tls_config:
     insecure_skip_verify: true
YAML

Conclusion

All of the techniques presented here have been tried and tested in practical use in many projects. Of course, there may be some preferences for special applications such as REST services. For my personal taste, I prefer the XML format for configuration files. This is easy to process in the program, extremely flexible and, with clever modeling, also compact and extremely readable for people.

References

Ruby: setting up the development environment

Ruby has been a well-established programming language for many years and can also be recommended to beginners. Ruby follows the object-oriented paradigm and contains many concepts to support OOP well. In addition, the Ruby on Rails framework makes it very easy to develop complex web applications.

The most difficult hurdle to overcome when getting started with Ruby is the installation of the entire development environment. For this reason, I have written this short tutorial on getting started with Ruby. So let’s start with the installation right away.

My operating system is Debian 12 Linux, and Ruby can be installed very easily with the command sudo apt-get install ruby-full. This procedure can be applied to all Debian-based Linux distributions such as Ubuntu. You can then use ruby -v in the bash to check whether the installation was successful.

ed@:~$ ruby -v
ruby 3.1.2p20 (2022-04-12 revision 4491bb740a) [x86_64-linux-gnu]
Bash

If we now follow the tutorial on the Ruby on Rails homepage and want to install the Rails framework via gem install rails, we encounter the first problem: no Ruby libraries can be installed due to missing permissions. We could now come up with the idea of installing the libraries as superuser with sudo. Unfortunately, this solution is only temporary and prevents the libraries from being found correctly later in the development environment. It is better to create a folder for the GEMs in the user's home directory and make it available via an environment variable.

export GEM_HOME=/home/<user>/.ruby-gems
export PATH=$PATH:/home/<user>/.ruby-gems
Bash

The lines above must be added at the end of the .bashrc file so that the changes remain persistent. It is important that <user> is replaced with the correct user name. The success of this action can be checked via gem environment and should result in an output similar to the one below.

ed@:~$ gem environment
RubyGems Environment:
  - RUBYGEMS VERSION: 3.3.15
  - RUBY VERSION: 3.1.2 (2022-04-12 patchlevel 20) [x86_64-linux-gnu]
  - INSTALLATION DIRECTORY: /home/ed/.ruby-gems
  - USER INSTALLATION DIRECTORY: /home/ed/.local/share/gem/ruby/3.1.0
  - RUBY EXECUTABLE: /usr/bin/ruby3.1
  - GIT EXECUTABLE: /usr/bin/git
  - EXECUTABLE DIRECTORY: /home/ed/Programs/gem-repository/bin
  - SPEC CACHE DIRECTORY: /home/ed/.local/share/gem/specs
  - SYSTEM CONFIGURATION DIRECTORY: /etc
  - RUBYGEMS PLATFORMS:
     - ruby
     - x86_64-linux
  - GEM PATHS:
     - /home/ed/Programs/gem-repository
     - /home/ed/.local/share/gem/ruby/3.1.0
     - /var/lib/gems/3.1.0
     - /usr/local/lib/ruby/gems/3.1.0
     - /usr/lib/ruby/gems/3.1.0
     - /usr/lib/x86_64-linux-gnu/ruby/gems/3.1.0
     - /usr/share/rubygems-integration/3.1.0
     - /usr/share/rubygems-integration/all
     - /usr/lib/x86_64-linux-gnu/rubygems-integration/3.1.0
  - GEM CONFIGURATION:
     - :update_sources => true
     - :verbose => true
     - :backtrace => false
     - :bulk_threshold => 1000
  - REMOTE SOURCES:
     - https://rubygems.org/
  - SHELL PATH:
     - /home/ed/.local/bin
     - /usr/local/bin
     - /usr/bin
     - /bin
     - /usr/local/games
     - /usr/games
     - /snap/bin
     - /home/ed/Programs/maven/bin
     - /usr/share/openjfx/lib
- /home/ed/.local/bin
Bash

With this setting, Ruby GEMs can now be installed without difficulty. Let’s try this out right away and install the Ruby on Rails framework, which supports us in the development of web applications: gem install rails. This should now run without error messages and with the command rails -v we can see if we were successful.

In the next step we can now create a new Rails project. Here I use the example from the Ruby on Rails documentation and write in the bash: rails new blog. This creates a corresponding directory called blog with the project files. After we have changed to the directory, we still need to install all dependencies. This is done via: bundle install.
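
The commands for this step at a glance:

rails new blog
cd blog
bundle install
Bash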

Here we encounter another problem. The installation cannot be completed because there seems to be a problem with the psych library. The real problem, however, is that the YAML development library is missing at the operating system level. This can be fixed very quickly by installing the corresponding package.

sudo apt-get install libyaml-dev

The problem with psych in Ruby on Rails has existed for a while and is solved by installing the YAML development package, so the bundle install command now runs successfully. Now we are also able to start the server for the Rails application: bin/rails server.

ed@:~/blog$ bin/rails server
=> Booting Puma
=> Rails 7.1.3.3 application starting in development 
=> Run `bin/rails server --help` for more startup options
Puma starting in single mode...
* Puma version: 6.4.2 (ruby 3.1.2-p20) ("The Eagle of Durango")
*  Min threads: 5
*  Max threads: 5
*  Environment: development
*          PID: 12316
* Listening on http://127.0.0.1:3000
* Listening on http://[::1]:3000
Use Ctrl-C to stop
Bash

If we now call up the URL http://127.0.0.1:3000 in the web browser, we see our Rails web application.

With these steps, we have now created a functioning Ruby environment on our system. Now it’s time to decide on a suitable development environment. If you only occasionally adapt a few scripts, VIM and Sublime Text are sufficient as editors. For complex software projects, a full IDE should be used, as this simplifies the work considerably. The best recommendation is the paid IDE RubyMine from JetBrains. If you support Ruby open source projects as a developer, you can apply for a free license.

A freely available Ruby IDE is VSCode from Microsoft. However, a few plugins have to be integrated first, and VSCode is not very intuitive for my taste. The Ruby integrations for the classic Java IDEs Eclipse and NetBeans are quite outdated and can only be made to work with a great deal of effort.

With this we have already discussed all the important points that are necessary to set up a functioning Ruby environment on your own system. I hope that this little workshop has significantly lowered the entry barrier to learning Ruby. If you like this article, please like it and recommend it to your friends.


Understanding Linux: rediscovering the joy of technology

Since Windows 10, users have been losing more and more control over their systems. The keyword is governance: companies trying to educate their users. Don't worry, I will not give a long speech about free will and evil tech companies. In this article I want to give you an opportunity to take back control of your computing device. Say goodbye to Microsoft and Windows and begin your journey with Linux.

Around the year 2015 I decided to get rid of Microsoft Windows and move 100% to Linux. At that time I already had some experience with Linux on the server side, but using Linux daily on the desktop for all tasks was a new challenge, because things are different there. So I spent around four weeks and several attempts to find a Linux desktop distribution that fits my needs best. The things you need to pay attention to for the best result are the availability of a large software repository where you can get all the applications you need, and the look and feel of your desktop, where there are also several options to choose from. Before I give a brief overview of distributions and desktops, I will explain the problem of software versions and repositories.

A software repository is simply a big storage where the binaries and installers for your applications are placed. Once you have decided on a distribution and a specific version, you have a defined repository from which you obtain your software and distribution updates. Let's assume we have a Linux distribution with a version from 2019 whose support runs, say, until 2020. For the software repository this means that the included software reflects the state of that period. If you want the newest GIMP version for this distribution, such as 2.10.36 from November 2023, you cannot get it from the original distribution repository; the last GIMP version there is the one from 2020. The same problem exists with older software versions. This is a very specific problem for software developers and the available programming languages. If you need to maintain old PHP or Java applications, you need very specific compilers and interpreters that may not be included in your software repository. Several solutions exist for this problem: developers, for example, can use Docker, and as a regular user you can try to get older or newer versions of a piece of software directly from the developer or manufacturer. With this knowledge we can now look at what a distribution is and which ones exist.
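
On a Debian-based system you can quickly check which version of a package your configured repositories actually offer; a minimal sketch using GIMP as an example:

# show the installed version and the candidate version available from the repositories
apt-cache policy gimp
Bash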

If you want to choose a Linux distribution, or Distro for short, you have a huge selection to choose from. But what is a Distro? In general, a manufacturer puts together a Linux kernel, a collection of software and a repository and decorates everything with its own look and feel. It is also possible to create your own Linux, including compiling your own kernel. But Linux is not always Linux: we need to distinguish between Linux and Unix. Linux is derived from Unix, as the name LinUx, a combination of Linus and Unix, already suggests. The big difference between Linux and Unix is the network functionality, which is more advanced in Unix. Apple's operating system (OS), for example, is based on Unix. A very common Unix OS is FreeBSD. BSD stands for Berkeley Software Distribution and is continuously developed at the University of California, Berkeley.


Besides this distinction, there are three main Linux branches, each of which includes further derivatives. First, there is SUSE from Germany with the YaST package manager for its software repositories. By the way, SUSE 7.2 was my first desktop tryout back in 2002. SUSE is not very common, and for special questions or problems it can be hard to find sufficient content. Another Distro comes from Red Hat, an open source pioneer. RHEL, or Red Hat Enterprise Linux, is a commercial Linux server distribution. The reason for a commercial Linux lies in productive server systems: with an RHEL license, companies get professional support from Red Hat in case of emergencies. The Red Hat desktop is no longer supported and has been transformed into Fedora Linux. The default package manager for these repositories is YUM. Last but not least, there is the Debian branch with a huge number of sub-distros. All the popular beginner Distros such as Linux Mint, Ubuntu and Zorin OS are based on Debian. When I changed to the Linux desktop in 2015, I started with Ubuntu MATE LTS, and at the end of 2023 I switched to Debian 12 with a GNOME 3 desktop. The default package manager for Debian systems is APT; Ubuntu additionally pushes its own Snap package manager.

Another important term in the Linux universe is the desktop environment. Here you can choose from KDE, GNOME, MATE, XFCE, Cinnamon and others. Like my list of Distros, this list of available desktops is not complete; I only mention the most common ones. Most Linux Distros come with several desktop environments by default, and you can choose during the login procedure which desktop you would like to use. This gives you an easy way to try out which desktop fits you best. Later, when you are more experienced and have settled on your choice for daily work, you can deselect unwanted desktops during the installation process to save a bit of disk space.

Many people recommend an Ubuntu distribution to new Linux users with the argument that it is easier for beginners, and they give hints such as switching to Debian later when you are more experienced. I have a completely different opinion: the complication on Linux is not the Distro, it is the desktop. Ubuntu and Debian have a very good community and documentation, so if you need to fix a problem you will find help. This is why it is important to know which family your Distro is based on when searching for help. Before you start with things like multi-boot or dual-boot setups, create a virtual machine with the Linux you wish to try. This way you learn something about the installation process and the file system.

The most difficult thing at the very beginning of my Linux adventures was understanding where things get stored. For example, third-party software is usually installed into the /opt directory. The permission system for files and directories on ext3 / ext4 is also complicated; this concept does not exist on Windows FAT or NTFS file systems. Don't worry about it: during your daily work you will figure out very quickly how it works.
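
As a small illustration of that permission model, here are three typical commands; the file, user and group names are made up:

ls -l notes.txt                      # show owner, group and permission bits, e.g. -rw-r-----
chmod 640 notes.txt                  # owner: read/write, group: read-only, others: no access
sudo chown ed:developers notes.txt   # change the owner and group of the file
Bash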

Microsoft Surface 3 PRO with Ubuntu Linux

When I tried out Ubuntu for the first time, around 2010, Canonical, the distributor of Ubuntu, had introduced the Unity desktop as the default. I did not like it, because it contained a lot of advertising and unwanted apps like Spotify and Facebook. I figured out that Ubuntu has a version with a GNOME desktop clone called MATE, decided to try it and liked it from the first moment. After the first installation on my laptop in 2015, I only needed to reinstall Ubuntu twice until 2023, and the reason was that I replaced my internal SSD, first with 1 TB and later with 2 TB. There were no issues with the system slowing down or the disk filling up with orphaned files, and no regular cleanup and maintenance tasks like those common on Windows. The system just worked stably without serious crashes. I also never had any issues with hardware drivers: all my new components were always detected correctly by Linux, and there are often small administration tools available, for example Streamdeck UI or Noson for Sonos speakers. When I got my new laptop in December 2023, I decided to move from Ubuntu to the original, Debian. Ubuntu is made by a company, and companies often decide to go their own way; who knows whether they will adopt Microsoft-like behavior in the future. I always prefer to make my transition early and without pressure, so that I am prepared before I no longer have any other choice.

Of course I use an antivirus and a firewall on my system, because Linux is not immune to attacks, so protection is necessary there as well.

A lot of people struggle with changing from Microsoft Office to LibreOffice. The real secret to not losing your layout is to take care of the fonts: if your original MS Office documents use the default Microsoft fonts, you need to install those fonts on your Linux machine as well, so the document layout is not destroyed.

For me there is no real necessity to move back to Windows. Well, I am not a gamer, but even for gaming there are plenty of solutions; Steam is also available on Linux. Most people I know no longer play on PCs or laptops anyway, they prefer consoles like the PlayStation. For me, Linux is cool because I have full control over my system, it is free and I can customize it as I want. If you have an old laptop that no longer performs well with Windows, try installing a Linux and enjoy it.
