How to buy and pay with Bitcoin

For many, Bitcoin (BTC) is purely a speculative asset they only want to make money with. However, the cryptocurrency Bitcoin is also ideally suited for payments. You do not need any in-depth technical knowledge to pay with Bitcoin, and Bitcoin can be bought in relatively small amounts, for example 10 euros. Everything you need to get started is explained in this article in an easy-to-understand manner.

To buy your first Bitcoin, you need a regular bank account, €20 and about 10 minutes of time. Depending on the bank, it can take up to a day from transferring the euros until they are credited as Bitcoin. Incidentally, all of elmar-dott.com’s services can also be paid for in Bitcoin.

Before we start the first transaction, we need to create a wallet. Wallet is simply the English word for purse, so a Bitcoin wallet is nothing more than a digital purse. The program with which you create and manage a wallet is very similar to a typical banking app. Wallets can easily be set up on computers, smartphones and tablets (Android & iPhone/iPad). There are also hardware wallets that work similarly to a USB stick and store the Bitcoins on the device.

The most important difference between a bank account and a wallet is that the bitcoins stored in your wallet actually belong to you. There is no bank or other institution that has access to this wallet. You can compare bitcoins stored in your wallet with the cash you carry in your purse. So let’s first look at how to create your own wallet. For this we use the free open source software Electrum. The Electrum Bitcoin Wallet was developed in Python 3 and is available for Linux, Windows, macOS and Android.

1st Step: Create a wallet

After the app has been downloaded and started, we can get going and create our first Bitcoin wallet. First, we give our wallet a name and press Next. We are then asked which wallet type we would like to create; here we leave the default. We then have to create a seed. The seed consists of 12 randomly generated words, which we can extend with our own terms/character strings using the Options button. These terms (the seed) are extremely important and must be kept safe; it is best to write them down on a piece of paper. The seed allows full access to your personal wallet, and with it you can easily transfer your wallet to any device. A secure password is then assigned and the wallet file is encrypted. We have now created our own Bitcoin wallet, with which we can send and receive Bitcoin.

In this way, you can create as many wallets as you like. Many people use 2 or more wallets at the same time. This process is called proxy pay. This measure conceals the actual recipient and is intended to prevent transfer services from refusing transactions to undesirable recipients.

In order to convert your own euros into bitcoin, a so-called broker is required. You transfer euros or other currencies to this broker and receive bitcoin in return. The bitcoin is first transferred to a wallet managed by the broker. From this wallet you can already send bitcoin to any other wallet. As long as the bitcoin is still in the broker’s wallet, however, the broker can block the wallet or steal the bitcoin on it. Only when we transfer the purchased bitcoin to a self-managed wallet, as we created in step 1, are the coins in our possession and no outsider has access to them.

The problem that can arise is that these broker services, also called crypto exchanges, can keep a list of Bitcoin wallets to which they do not send transactions. To avoid this, you transfer your Bitcoins from the wallet of the Bitcoin exchange where you bought your coins to your own wallet. You can also use multiple wallets to receive payments. This strategy makes it difficult to track payment flows. The money that has been received in various wallets can then easily be transferred to a central wallet where you keep your coins. It is important to know that fees are also charged when sending Bitcoin, just like with a checking account.

Understanding Bitcoin Transaction Fees

Every time a transaction is made, it is stored in a block. These blocks have a limited size of 1MB, which limits the number of transactions per block. Since the number of transactions that can fit in a block is limited, users compete to have their transactions included in the next block. This is where Bitcoin transaction fees come in. Users offer fees to make their transactions more attractive to miners. The higher the fee, the more likely the transaction will be confirmed faster. The amount of fees depends on several factors:

  • Network load: When load is high, fees increase because more users want to prioritize their transactions.
  • Transaction size: Larger transactions require more space in the block and therefore incur higher fees.
  • Market conditions: General demand for Bitcoin and market volatility can affect fees.

Most wallets calculate fees automatically based on these factors. However, some wallets offer the ability to manually adjust fees to either save costs or achieve faster confirmation.

Bitcoin transaction fees are not fixed and can vary widely. Bitcoin transactions can be confirmed within minutes to hours, depending on the amount of the fees. Bitcoin fees are not calculated based on the value of the transaction (i.e. how much Bitcoin you send), but rather based on the size of the transaction in bytes. The fee you pay is given in Satoshis per byte (sat/byte). A Satoshi is the smallest unit of Bitcoin (1 BTC = 100 million Satoshis).
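
To get a feeling for the arithmetic, here is a small sketch of the calculation; the transaction size and the fee rate are assumed example values, not live data:

public class FeeEstimate {

    public static void main(String[] args) {
        // Assumed example values: a simple transaction of about 250 bytes
        // and a fee rate of 20 sat/byte taken from a current fee chart.
        int txSizeInBytes = 250;
        int feeRateSatPerByte = 20;

        long feeInSatoshi = (long) txSizeInBytes * feeRateSatPerByte; // 5,000 sat
        double feeInBtc = feeInSatoshi / 100_000_000.0;               // 1 BTC = 100 million Satoshis

        System.out.println("Estimated fee: " + feeInSatoshi + " sat (" + feeInBtc + " BTC)");
    }
}
Java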

You can find out how many Satoshi you get for €1 on coincodex.com; the current transaction fee can be found on bitinfocharts.com.

Notes on the anonymity of Bitcoin

When you pay with Bitcoin, you send coins from your wallet to a recipient wallet. This transaction is publicly visible. Basically, when you create a wallet using software such as Electrum, the owner of the wallet is not stored. Nevertheless, conclusions about the owner of a wallet can be drawn from the transactions. Using multiple wallets can make it more difficult to assign them to a real person and conceal money flows. But 100% anonymity cannot be guaranteed. Only cash offers absolute anonymity.

Nevertheless, Bitcoin has some advantages over cash. If you travel a lot and don’t want to keep your money in your bank account, you can easily carry very large amounts with you without them being found and confiscated when crossing borders. You are also fairly well protected against theft. If you save your wallet in an encrypted file on various data storage devices, you can easily restore it using the seed.

2nd Step: Buy Bitcoin

Before we can start using Bitcoin, we first need to get our hands on some. We can do this quite easily by buying Bitcoin. Since a whole Bitcoin can be worth several thousand euros depending on the exchange rate, it makes sense to buy fractions of a Bitcoin. As already mentioned, the smallest unit of a Bitcoin is the Satoshi (1 BTC = 100 million Satoshis). The easiest way to buy BTC is via an official Bitcoin exchange. A very easy-to-use option is Wallet of Satoshi for Android & iPhone.

With this app you can buy, receive and send Bitcoin. After you have installed the Wallet of Satoshi on your smartphone and set up the wallet, you can immediately buy Satoshis via the menu, starting with just 20 euros by bank transfer.
A very practical detail is that you can also use the Wallet of Satoshi to buy Bitcoin with other currencies such as US dollars. This is excellent for international business relationships, where you no longer have to deal with all sorts of exchange rates. Since I consider Bitcoin an alternative means of payment, it makes sense for me to always leave an amount of 200 to 500 euros in the Wallet of Satoshi; anything above that is transferred to the Electrum wallet. This is purely a precautionary measure, because the Wallet of Satoshi is based on the Lightning Network and is run by a private provider. True to the motto: better safe than sorry. This strategy also saves transaction fees, which can add up to a considerable amount, especially for micro payments of a few euros.

3rd Step: Pay with Bitcoin

In order to pay with Bitcoin, you need a valid wallet address. This address is usually a long, cryptic character string. Since things can quickly go wrong when entering it manually, this address is often given as a QR code.

To make a payment, for example, via the Wallet of Satoshi to any Bitcoin wallet, either the character string or, better yet, the QR code is required. To do this, open the application, press the send button and then use the camera to scan the QR code of the wallet where the Bitcoin should go.

If you send Bitcoin via the Wallet of Satoshi, for example, all transactions are completely transparent. That is why it makes sense to forward Bitcoin to an anonymous wallet. In step 1, I showed in detail how the Electrum wallet is created. Now let’s look at how we get to the wallet’s address. To do this, we go to the Wallet entry in the Electrum menu and select the Information item. We then get a display like the one in the following screenshot.

The master public key is the character string for our wallet to which Bitcoins can be sent. If you press the QR symbol in the bottom right of the field, you will receive the corresponding QR code, which can be saved as an image file. If you now make transfers from a Bitcoin exchange such as the Wallet of Satoshi, the exchange does not know who the owner is. In order to find this out, complex analyses are necessary.

Stability in the crisis – business continuity & disaster recovery

The saying “better safe than sorry” is something we have all experienced first-hand, whether in our professional or private lives. “If only we hadn’t clicked on the malicious link in that email,” or something similar, goes through our heads. But once the damage has been done, it is already too late to take precautions.

What is usually just annoying in our private lives can very quickly become a threat to our existence in the business world. For this reason, it is important to set up a safety net in good time for the event of a potential loss. Unfortunately, many companies do not pay adequate attention to the issue of disaster recovery and business continuity, which then leads to high financial losses in an emergency.

The list of possible threat scenarios is long, and some scenarios are more likely to occur than others. It is therefore important to carry out a realistic risk assessment that weighs up the individual options. This helps to prevent the resulting costs from escalating.

The Corona pandemic was a life-changing experience for many people. The state-imposed hygiene rules in particular presented many companies with enormous challenges. The keyword here is home office. In order to get the situation under control, employees were sent home to work from there. Since there is no established culture and even less of an existing infrastructure for home working in German-speaking countries in particular, this had to be created quickly under great pressure. Of course, this was not without friction.

But it does not always have to be a drastic event. Even a mundane power failure or a power surge can cause considerable damage. It does not take a building fire or a flood to bring everything to an immediate standstill. A hacker attack also falls into the category of serious threats. These examples should be enough to illustrate the problem, so let’s turn to the question of what good precautions can be taken.

The easiest and most effective measure to implement is a comprehensive data backup. To ensure that no data is lost, it helps to list and categorize the various data. Such a table should contain information about the storage paths to be backed up, the approximate storage usage, a prioritization according to confidentiality and the category of the data. Categories include project data, tenders, email correspondence, financial accounting, supplier lists, payroll statements and so on.

It goes without saying that, in the context of data protection, not everyone in the company is authorized to read this information. Confidential data must therefore be protected by encryption. Depending on the protection class, this can be a simple password for a compressed archive, a cryptographically encrypted directory or an encrypted hard drive. How often a data backup should be carried out depends on how often the original data changes: the more frequently the data changes, the shorter the backup intervals should be.

Another point is the target storage of the data backup. A fully encrypted archive that is stored locally in the company can certainly be uploaded to cloud storage after a successful backup. However, this solution can be very expensive for large amounts of data and is therefore not necessarily suitable for small and medium-sized enterprises (SMEs). Ideally, there are several replicas of a data backup stored in different places.

Of course, it is of little use to create extensive backups only to find out in an emergency that they are faulty. That is why verification of the backup is extremely important. Professional data backup tools contain a mechanism that compares the written data with the original; the Linux command rsync also uses this mechanism. A simple copy & paste does not meet this requirement. A look at the file size of the backup is also worthwhile, as it quickly shows whether information is missing. Of course, there is much more that could be said about backups, but that would go too far at this point. What matters is developing the right understanding of the topic.
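
As a small illustration of this verification idea, the following sketch compares an original file with its backup copy by calculating SHA-256 checksums; the file paths are placeholders, and for very large files a streaming approach would be preferable:

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.Arrays;

public class BackupVerifier {

    // Calculate a SHA-256 checksum for the given file
    static byte[] checksum(Path file) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        return digest.digest(Files.readAllBytes(file));
    }

    public static void main(String[] args) throws Exception {
        Path original = Path.of("/data/projects/report.ods"); // placeholder paths
        Path backup = Path.of("/backup/projects/report.ods");

        boolean identical = Arrays.equals(checksum(original), checksum(backup));
        System.out.println(identical ? "Backup verified." : "Backup differs from original!");
    }
}
Java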

If we take a look at the IT infrastructure of companies, we quickly realize that the provisioning of software installations is predominantly a manual process. If we consider that a computer system may no longer be able to perform its service due to a hardware error, it is also important to have a suitable emergency strategy at hand. The time-consuming work when hardware errors occur is reinstalling the programs after a device has been replaced. For many companies, a fully redundant infrastructure makes little sense for cost reasons. A proven solution comes from the DevOps area and is called Infrastructure as Code (IaC). This is mainly about provisioning services such as email or databases via script. For the business continuity & disaster recovery approach, it is sufficient if the automated installation or update is triggered manually. You should not rely on proprietary solutions from cloud providers, but use freely available tools. A possible scenario is a price increase by the cloud provider, or changes to the terms and conditions that are unacceptable for the company, which can make a quick change necessary. If the automation solution is based on a special technology that other providers cannot offer, such a quick change is extremely difficult.

Employee flexibility should also be taken into account. Purchasing notebooks instead of desktop computers allows for a high level of mobility. This of course also includes permission to take the laptop home and log into the company network from there. Teams that were already familiar with home office at the beginning of 2020 were able to continue their work from home almost seamlessly. This gave the companies in question a huge competitive advantage. It can also be assumed that large, representative company headquarters will become less and less important as part of the digital transformation. Teams will then organize themselves flexibly and remotely using modern communication tools. Current studies show that such a setup increases productivity in most cases. A colleague with a cold who still feels able to work can do so from home without his colleagues running the risk of being infected.

We can already see how far this topic can be taken. The challenge, however, is to carry out a gradual transformation. The result is a decentralized structure that works with redundancies. It is precisely these redundancies that provide room for maneuver in the event of a disruption, compared to a centralized structure. Redundancies naturally come at an additional cost. Equipping employees with a laptop instead of a stationary desktop PC is somewhat more expensive to purchase, but the price difference between the two solutions is no longer as dramatic as it was at the turn of the millennium, and the advantages outweigh the disadvantages. The transformation towards maintaining business capability in the event of disruptions does not mean that you immediately rush out and buy new equipment for all employees. Once you have determined what is necessary and useful for the company, new purchases can be prioritized: colleagues whose equipment has been written off and is due for replacement receive equipment in accordance with the new company guidelines. This model is then applied to all other areas. Such step-by-step optimization allows for a good learning process and ensures that every completed step has actually been implemented correctly.

Talents wanted

Freelancers who acquire new orders have been experiencing significant changes for some time now. Fewer and fewer companies have direct contact with their contractors when they are commissioned. Recruitment agencies are increasingly pushing their way between companies and independent contractors.

If specialist knowledge is required for a project, companies are happy to call on external specialists. This approach gives companies the greatest possible flexibility in terms of cost control. But freelancers also benefit from this practice. They can focus exclusively on topics in which they have a strong interest. This avoids being used for boring, routine standard tasks. Due to their experience in different organizational structures and the variety of projects, independent contractors have a broad portfolio of unconventional solution strategies. This knowledge base is very attractive for clients, even if a freelance external employee is initially more expensive than their permanent colleague. Freelancers can bring positive impulses to the project due to their diverse experience, which can overcome a standstill.

Unfortunately, for some time now, companies have no longer been making an independent effort to recruit the specialists they need. The task of recruitment has been outsourced to external recruitment agencies almost everywhere. These so-called recruiters now push their way between companies and independent contractors.

I actually find the idea of having my own agent who takes care of my order acquisition very appealing. It’s like in the film and music industry. You have an agent who has your back and gives you regular feedback. This gives you a picture of the technologies that are in demand and in which you can develop further. This allows you to improve your own market relevance and ensures regular commissions. This would actually be an ideal win-win situation for everyone involved. Unfortunately, what actually happens in reality is something completely different.

Instead of recruiters building a good relationship with their professionals and encouraging their development, these recruiters act like harmful parasites. They harm both the freelancers and the companies looking to fill vacancies. Because business is not really about finding the most suitable candidate for a company. It’s all about offering candidates who fit the profile you’re looking for at the lowest possible hourly rate. Whether these candidates can really do the things they claim to be able to do is often questionable.

The approach of recruitment agencies is very similar. They try to generate a large pool of current applicant profiles. These profiles are then searched for keywords using automated AI text recognition systems. From the proposed candidates, those with the lowest hourly rate are contacted for a preliminary interview. Those who show no major anomalies in this preliminary interview are then suggested to the company for an interview appointment. The recruitment company’s profit is enormous, because it pockets the difference between the hourly rate paid by the client and the hourly rate paid to the self-employed person. In some cases, this can be up to 40%.

But that’s not all these parasitic intermediaries have to offer. They often delay the payment date for the invoice issued. They also try to shift the entire entrepreneurial risk onto the freelancer. This is done by demanding pointless liability insurance policies that are not relevant to the advertised position. As a result, companies are then given vacancies for supposedly skilled workers who are more likely to be classified as unskilled workers.

Now you might ask yourself why companies continue to work with intermediaries. One reason is the current political situation. Since around 2010, for example, there have been laws in Germany aimed at preventing bogus self-employment. Companies that work directly with freelancers are often put under pressure by pension insurance companies. This creates a great deal of uncertainty and does not serve to protect freelancers. It only protects the business model of the intermediary companies.

I have now gotten into the habit of hanging up immediately and without comment when I notice certain basic patterns. Such phone calls are a waste of time and lead to nothing except annoyance at the audacity of the recruitment agency. The most important indication of a dubious recruiter is that the person on the phone is suddenly completely different from the one who first contacted you. If this person also has a very strong Indian accent, you can be 100% sure that you have been connected to a call center. Even if the number shows England as the area code, the people are actually based somewhere in India or Pakistan. None of this underlines their seriousness.

I have registered on various job portals over the many years of my career. My conclusion: you can save yourself the time. 95% of all contacts made via these portals are recruiters of the kind described above. These people then pull the trick of getting you to save them as a contact. However, it is naive to believe that these so-called network requests are really about direct contact. The purpose of this action is to obtain your contact list, because many portals such as XING and LinkedIn have settings that let contacts see the contacts in your own list or offer them via the network function. These contact lists can be worth a lot of money: you can find department heads or other professionals who are definitely worth writing to. I have therefore deactivated access to the friends list for friends in all social networks. I also reject all connection requests from people with the title Recruiter without exception. My presence in social networks now only serves to protect my profile name against identity theft. I no longer respond to most requests to send a CV, and I do not enter my personal information on jobs, studies and employers in these network profiles. If you would like to contact me, you can do so via my homepage.

Another habit I’ve picked up over the years is to never talk about my salary expectations first. If the other person can’t give me a specific figure that they’re willing to pay for my services, they’re just looking for data. So that’s another reason to end the conversation abruptly. It’s also none of their business what my hourly rate was in previous projects. They only use this information to push the price down. If you are a bit sensitive and don’t want to give a rude answer, simply quote a very high hourly rate or daily rate.

As we can see, it is not that difficult to recognize the real black sheep very quickly by their behaviour. My advice: as soon as one of the patterns described above occurs, save your time and, above all, your nerves and simply end the conversation. From experience, I can say that if brokers behave as described, no placement will ever come of it. It is then better to concentrate your energy on realistic contacts. Because there are also really good placement companies. They are interested in long-term cooperation and behave completely differently: they provide support and advice on how to improve your CV and advise companies on how to formulate realistic job offers.

Unfortunately, I fear that the situation will continue to deteriorate from year to year. The impact of economic development and the widespread availability of new technologies will also continue to increase the pressure on the labor market. Neither companies nor contractors will have any further opportunities in the future if they do not adapt to the new times and take different paths.

Bottleneck Pull Requests

Confident use of source control management (SCM) systems such as Git is essential for programmers (development) and system administrators (operations). This group of tools has a long tradition in software development and enables development teams to work together on a code base. Four questions are answered: When was the change made? Who made the change? What was changed? Why was something changed? It is therefore a pure collaboration tool.

With the advent of the open source code hosting platform GitHub, so-called pull requests were introduced. Pull requests are a workflow in GitHub that allows developers to provide code changes for repositories to which they only have read access. Only after the owner of the original repository has reviewed and approved the proposed changes does he integrate them. This is also how the name comes about: a developer copies the original repository into his GitHub workspace, makes changes and requests that the owner of the original repository adopt them. The owner can then accept the changes, adapt them himself if necessary, or reject them with a reason.

Anyone who thinks that GitHub was particularly innovative here is mistaken. This process is very old hat in the open source community. Originally, this procedure was called the Dictatorship Workflow. IBM’s commercial SCM Rational Synergy, first published in 1990, is based precisely on the Dictatorship Workflow. With the class of distributed version management tools, to which Git also belongs, the Dictatorship Workflow is quite easy to implement, so it was obvious that GitHub would make this process available to its users, just under a much more appealing name. Anyone who works with the free DevOps solution GitLab, for example, will know pull requests as merge requests. The most common Git servers now support the pull request process. Without going into the technical details of implementing pull requests, we will focus our attention on the usual problems that open source projects face.

Developers who want to participate in an open source project are called maintainers. Almost every project has a short guide on how to support the project and which rules apply. For people who are learning to program, open source projects are ideal for improving their own skills quickly and significantly. For the open source project, however, this means having maintainers with a wide range of skills and experience. If no control mechanism is established, the code base will erode in a very short time.

If the project is quite large and there are a lot of maintainers working on the code base, it is hardly possible for the owner of the repository to process all pull requests in a timely manner. To counteract this bottleneck, the Dictatorship Workflow was expanded into the Dictatorship – Lieutenant Workflow. An intermediate instance was introduced that distributes the review of pull requests across several shoulders. This intermediate layer, the so-called Lieutenants, consists of particularly active maintainers with an already established reputation. The Dictator therefore only needs to review the Lieutenants’ pull requests. This is an immense reduction in workload that ensures there is no backlog of features due to unprocessed pull requests. After all, improvements and extensions should be included in the code base as quickly as possible so that they can be made available to users in the next release.

This approach is still the standard in open source projects to ensure quality. You can never be sure who is involved in the project; there may even be one or two saboteurs. This idea is not so far-fetched: companies whose commercial product faces strong competition from the free open source sector could come up with unfair ideas here if there were no regulations. In addition, maintainers cannot be disciplined the way team members in companies can, for example. It is difficult to threaten a maintainer who is resistant to advice and does not adhere to the project’s conventions, despite repeated requests, with a pay cut. The only option is to exclude this person from the project.

Even if the problem of disciplining team members described above does not exist in commercial projects, there are difficulties in these environments that need to be overcome as well. These problems date back to the early days of version control tools. The first representatives of this species were not distributed solutions but centralized ones. CVS and Subversion (SVN) only ever keep the latest revision of the code base on the local development computer, and without a connection to the server you effectively cannot work. This is different with Git. Here you have a copy of the repository on your own computer, so you can do your work locally in a separate branch, and when you are finished you bring these changes into the main development branch and then transfer them to the server. The ability to create branches offline and merge them locally has a decisive influence on the stability of your own work if the repository gets into an inconsistent state, because, in contrast to centralized SCM systems, you can continue working without having to wait for the main development branch to be repaired.

These inconsistencies arise very easily. All it takes is forgetting a file when committing, and team members can no longer compile the project locally and are hampered in their work. The concept of Continuous Integration (CI) was established to overcome this problem. It is not, as is often wrongly assumed, about integrating different components into an application. The aim of CI is to keep the commit stage – the code repository – in a consistent state. For this purpose, build servers were established which regularly check the repository for changes and then build the artifact from the source code. A very popular build server that has been established for many years is Jenkins, which originally emerged as a fork of the Hudson project. Build servers now take on many other tasks, which is why it makes a lot of sense to call this class of tools automation servers.

With this brief overview of the history of software development, we now understand the problems of open source projects and commercial software development. We have also discussed the history of the pull request. In commercial projects, it often happens that teams are forced by project management to work with pull requests. For a project manager without technical background knowledge, it makes a lot of sense to establish pull requests in his project as well. After all, he has the idea that this will improve code quality. Unfortunately, this is not the case. The only thing that happens is that a feature backlog is provoked and the team is forced to work harder without improving productivity. The pull request must be evaluated by a competent person. This causes unpleasant delays in large projects.

Now I often see the argument that pull requests can be automated. This means that the build server takes the branch with the pull request and tries to build it, and if the compilation and automated tests are successful, the server tries to incorporate the changes into the main development branch. Maybe I’m seeing something wrong, but where is the quality control? It’s a simple continuous integration process that maintains the consistency of the repository. Since pull requests are primarily found in the Git environment, a temporarily inconsistent repository does not mean a complete stop to development for the entire team, as is the case with Subversion.

Another interesting question is how to deal with semantic merge conflicts in an automatic merge. These are not a serious problem per se. This will certainly lead to the rejection of the pull request with a corresponding message to the developer so that the problem can be solved with a new pull request. However, unfavorable branch strategies can lead to disproportionate additional work.

I see no added value in using pull requests in commercial software projects, which is why I advise against them in this context. Apart from complicating the CI/CD pipeline and increasing the resource consumption of the automation server, which now does the work twice, nothing has been gained. The quality of a software project is improved by introducing automated unit tests and a test-driven approach to implementing features. Here it is necessary to continuously monitor and improve the test coverage of the project. Static code analysis and activating compiler warnings bring better results with significantly less effort.
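
To illustrate what such an automated unit test looks like, here is a minimal sketch assuming JUnit 5; the DiscountCalculator class is a hypothetical example, not taken from a real project:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // Hypothetical implementation under test
    static class DiscountCalculator {
        double applyDiscount(double price, int percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("percent must be between 0 and 100");
            }
            return price - (price * percent / 100.0);
        }
    }

    @Test
    void applyDiscountReducesPrice() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0, 10), 0.001);
    }

    @Test
    void invalidPercentageIsRejected() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertThrows(IllegalArgumentException.class, () -> calculator.applyDiscount(100.0, 150));
    }
}
Java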

Personally, I believe that companies that rely on pull requests either use them for complicated CI or completely distrust their developers and deny that they do a good job. Of course, I am open to a discussion on the topic, perhaps an even better solution can be found. I would therefore be happy to receive lots of comments with your views and experiences about dealing with pull requests.

Configuration files in software applications

Why do we even need the option to save application configurations in text files? Isn’t a database sufficient for this purpose? The answer to this question is quite trivial. The information on how an application can connect to a database is difficult to save in the database itself.

Now you could certainly argue that you can achieve such things with an embedded database such as SQLite. That may be correct in principle. Unfortunately, this solution is not really practical for highly scalable applications. And you don’t always have to use a sledgehammer to crack a nut. Saving important configuration parameters in text files has a long tradition in software development. However, various text formats such as INI, XML, JSON and YAML have now become established for this use case. For this reason, the question arises as to which format is best to use for your own project.

INI Files

One of the oldest formats are the well-known INI files. They store information according to the key = value principle. If a key appears multiple times in such an INI file, the final value is always overwritten by the last entry that appears in the file.

; Example of an INI File
[Section-name] 
key=value ; inline

text="text configuration with spaces and \' quotes"
string='can be also like this'
char=passwort

# numbers & digits
number=123
hexa=0x123
octa=0123
binary=0b1111
float=123.12

# boolean values
value-1=true
value-0=false
INI

As we can see in the small example, the syntax in INI files is kept very simple. The [section] name is used primarily to group individual parameters and improves readability. Comments can be marked with either ; or #. Otherwise, there is the option of defining various text and number formats, as well as Boolean values.

Web developers know INI files primarily from the PHP configuration, the php.ini, in which important properties such as the size of the file upload can be specified. INI files are also still common under Windows, although the registry was introduced for this purpose in Windows 95.
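
For the sake of completeness, here is a minimal sketch of how such a file could be read from Java, assuming the ini4j library and the example above saved as config.ini:

import java.io.File;
import org.ini4j.Ini;

public class IniReader {

    public static void main(String[] args) throws Exception {
        // Load the INI file and access keys inside the [Section-name] section
        Ini ini = new Ini(new File("config.ini"));
        String value = ini.get("Section-name", "key");
        String number = ini.get("Section-name", "number");

        System.out.println("key = " + value);
        System.out.println("number = " + number);
    }
}
Java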

Properties

Another very tried and tested solution is so-called property files. This solution is particularly common in Java programs, as Java already has a simple class that can handle properties. The key=value format is borrowed from INI files. Comments are also introduced with #.

# PostgreSQL
hibernate.dialect.database = org.hibernate.dialect.PostgreSQLDialect
jdbc.driverClassName = org.postgresql.Driver 
jdbc.url = jdbc:postgresql://127.0.0.1:5432/together-test
Plaintext
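
Reading such a file in plain Java takes only a few lines. A minimal sketch using the standard java.util.Properties class (the file name is an assumption):

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

public class DatabaseConfig {

    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();

        // Load the key=value pairs from the .properties file
        try (InputStream input = new FileInputStream("database.properties")) {
            properties.load(input);
        }

        // All values are returned as plain strings
        String url = properties.getProperty("jdbc.url");
        String driver = properties.getProperty("jdbc.driverClassName");

        System.out.println("Connecting to " + url + " using " + driver);
    }
}
Java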

In order to ensure type safety when reading .properties files in Java programs, the TP-CORE library has an extended implementation. Although the properties are read in as strings, the values can be accessed in a type-safe way. A detailed description of how the PropertyReader class can be used can be found in the documentation.

.properties files can also be used as filters for substitutions in the Maven build process. Of course, properties are not limited to Maven and Java; the concept can also be used in languages such as Dart, Node.js, Python and Ruby. To ensure the greatest possible compatibility of the files between the different languages, exotic notation options should be avoided.

XML

XML has also been a widely used option for many years to store configurations in an application in a changeable manner. Compared to INI and property files, XML offers more flexibility in defining data. A very important aspect is the ability to define fixed structures using a grammar. This allows validation even for very complex data. Thanks to the two checking mechanisms of well-formedness and data validation against a grammar, possible configuration errors can be significantly reduced.

Well-known application scenarios for XML can be found, for example, in Java Enterprise projects (JEE) with web.xml or in the Spring Framework and Hibernate configuration. The power of XML even allows it to be used as a Domain Specific Language (DSL), as the Apache Maven build tool does.

Thanks to many freely available libraries, there is an implementation for almost every programming language to read XML files and access specific data. For example, PHP, a language popular with web developers, has a very simple and intuitive solution for dealing with XML with the Simple XML extension.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1" 
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/assembly/ApplicationContext.xml</param-value>
    </context-param>
    <context-param>
        <param-name>javax.faces.PROJECT_STAGE</param-name>
        <param-value>${jsf.project.stage}</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <listener>
        <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>*.xhtml</url-pattern>
    </servlet-mapping>

    <welcome-file-list>
        <welcome-file>index.xhtml</welcome-file>
    </welcome-file-list>
</web-app>
XML
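
To give an impression of how the grammar check mentioned above can be used programmatically, here is a minimal sketch that validates a configuration file against an XML Schema using the standard javax.xml.validation API; the file names are placeholders:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XmlConfigValidator {

    public static void main(String[] args) throws Exception {
        // Build a validator from the XSD grammar
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("web-app_3_1.xsd"));
        Validator validator = schema.newValidator();

        // Throws an exception if the configuration violates the grammar
        validator.validate(new StreamSource(new File("web.xml")));
        System.out.println("Configuration is valid.");
    }
}
Java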

JSON

JavaScript Object Notation, or JSON for short, is a relatively young technology, although it has been around for several years. JSON, too, has a corresponding implementation for almost every programming language. The most common use case for JSON is data exchange in microservices, the reason being JSON’s compactness. Compared to XML-based web services such as XML-RPC or SOAP, the data stream to be transferred with JSON is much smaller due to the notation.
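
A minimal sketch of reading configuration values from a JSON file, assuming the widely used Jackson library and a hypothetical config.json:

import java.io.File;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonConfigReader {

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Parse the whole file into a tree of JsonNode objects
        JsonNode root = mapper.readTree(new File("config.json"));

        // Navigate to nested entries; path() returns a missing node instead of null
        String host = root.path("database").path("host").asText("localhost");
        int port = root.path("database").path("port").asInt(5432);

        System.out.println("Database at " + host + ":" + port);
    }
}
Java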

There is also a significant difference between JSON and XML in the area of validation. Basically, there is no way to define a grammar as in XML with DTD or Schema; the official JSON homepage [1] offers nothing of the kind. There is a proposal for a JSON grammar on GitHub [2], but there are no corresponding implementations that would allow this technology to be used in projects.

A further development of JSON is JSON5 [3], which was started in 2012 and has been officially published as a specification in version 1.0.0 [4] since 2018. The purpose of this development was to significantly improve the readability of JSON for people. Important functions such as the ability to write comments were added here. JSON5 is fully compatible with JSON as an extension. To get a brief impression of JSON5, here is a small example:

{
  // comments
  unquoted: 'and you can quote me on that', 
  singleQuotes: 'I can use "double quotes" here',
  lineBreaks: "Look, Mom! \
No \\n's!",
  hexadecimal: 0xdecaf,
  leadingDecimalPoint: .8675309, andTrailing: 8675309.,
  positiveSign: +1,
  trailingComma: 'in objects', andIn: ['arrays',],
  "backwardsCompatible": "with JSON",
}
JSON5

YAML

Many modern applications, such as the open source metrics tool Prometheus, use YAML for configuration. The very compact notation is strongly reminiscent of the Python programming language. The current version of YAML is 1.2.

The advantage of YAML over other specifications is its extreme compactness. At the same time, version 1.2 has a grammar for validation. Despite its compactness, the focus of YAML 1.2 is on good readability for machines and people alike. I leave it up to each individual to decide whether YAML has achieved this goal. On the official homepage you will find all the resources you need to use it in your own project. This also includes an overview of the existing implementations. The design of the YAML homepage also gives a good foretaste of the clarity of YAML files. Attached is a very compact example of a Prometheus configuration in YAML:

global:
  scrape_interval:     15s
  evaluation_interval: 15s 

rule_files:
  # - "first.rules"
  # - "second.rules"

#IP: 127.0.0.1
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['127.0.0.1:8080']

  # SPRING BOOT WEB APP
  - job_name: spring-boot-sample 
    scrape_interval: 60s
    scrape_timeout: 50s
    scheme: "http"
    metrics_path: '/actuator/prometheus' 
    static_configs:
     - targets: ['127.0.0.1:8888']
    tls_config:
     insecure_skip_verify: true
YAML
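
The following sketch shows how such a configuration could be read from Java, assuming the SnakeYAML library and that the file above is saved as prometheus.yml:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Map;

import org.yaml.snakeyaml.Yaml;

public class PrometheusConfigReader {

    public static void main(String[] args) throws Exception {
        Yaml yaml = new Yaml();

        try (InputStream in = new FileInputStream("prometheus.yml")) {
            // YAML mappings are loaded as nested java.util.Map instances
            Map<String, Object> config = (Map<String, Object>) yaml.load(in);

            Map<String, Object> global = (Map<String, Object>) config.get("global");
            System.out.println("scrape_interval: " + global.get("scrape_interval"));
        }
    }
}
Java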

Conclusion

All of the techniques presented here have been tried and tested in practical use in many projects. Of course, there may be some preferences for special applications such as REST services. For my personal taste, I prefer the XML format for configuration files. This is easy to process in the program, extremely flexible and, with clever modeling, also compact and extremely readable for people.



Modern Times

A strong motivation to automate everything, even automation itself, is the common understanding and therefore the driving force of most DevOps teams. There seems to be a dire necessity to automate everything. Let’s have a look at typical continuous stupidities during a transformation from pure configuration management to DevOps engineering.

In my role as Configuration and Release Manager, I saw in almost every project I joined gaps in the build structure or in the software architecture that I had to fix by optimizing the build jobs. But often you can’t fix symptoms like long-running build scripts with just a few clicks. In this post I will give a brief introduction to common problems in software projects that you need to overcome before you seriously think about implementing a DevOps culture.

  1. Build logic can’t fix a broken architecture. A huge number of SCM merge conflicts occur because of missing encapsulation of business logic. A function that is spread across many modules or services makes it very likely that a file will be touched by more than one developer.
  2. The necessity of orchestrated builds is a hint of architectural problems. Transitive dependencies, missing encapsulation and a heavy dependency chain are typical reasons for running into the chicken-and-egg problem. Design your artifacts to be as independent as possible.
  3. Build logic is developed by developers, not by administrators. People focused on operations have different concepts for maintaining artifact builds than software developers. A good example of a build-structure anti-pattern is webMethods from Software AG. It does not provide a repository server like Sonatype Nexus to share dependencies; the build always points to the dependencies inside a webMethods installation. This practice violates the basic idea of build automation, as described in the book ‘Practices of an Agile Developer’ from 2006.
  4. Not everything at once. Split the build jobs into specific goals, such as create artifact, run acceptance tests, create API documentation and generate reports. If one of the last steps fails, you don’t need to repeat everything. The execution time of the build is dramatically reduced and it is easier to maintain the build infrastructure.
  5. Don’t give too much flexibility to your build infrastructure. This point is strongly related to the first topic. A build manager with little discipline will create extremely complex scripts that nobody is able to understand. The JavaScript task runner Grunt is an example of how build logic can get messy and unreadable. This is one of the reasons why my favorite build tool for Java projects is Maven: it enforces understandable builds.
  6. There is no requirement to automate the automation. By definition, complex automation has higher costs than simple tasks. Always think beforehand about the benefits of your automation activities to see whether it makes sense to spend time and money on them.
  7. We do what we can, but can we do what we do? Or, in the words of Grady Booch: “A fool with a tool is still a fool.” Understand the requirements of your project and choose your tools based on that. If you don’t have the resources, even the most professional solution cannot support you. Once you have understood your problem, you are able to learn new, more advanced processes.
  8. Build logic has to run first on the local development environment. If your build does not run on your local development machine, don’t call it build logic; it is just a hack. Build logic has to be platform and IDE independent.
  9. Don’t mix up source repositories. Organizing the sources into several folders inside one huge directory just creates a complex build without any flexibility. Sources should be structured by technology or as separate, independent modules.

Many of the points I mentioned can be understood by comparing them with the current situation in almost every project. The solution to fix things in a healthy manner is in most cases not that complicated; it just needs a bit of attention and good planning. The most important advice I can give is to follow the KISS principle: keep it simple, stupid. This means following the standard process as much as possible without modifications. You don’t need to reinvent the wheel; there are reasons why a standard becomes a standard. Here is a short plan you can follow.

  • First: understand the problem.
  • Second: investigate a standard solution for the process.
  • Third: develop a plan to apply the solution to the existing process landscape. This implies kicking out tools that do not support standard processes.

If you follow your own plan step by step, without jumping further ahead than the next point, you will see positive results quite fast.

By the way: if you would like guidance on the way to a successful DevOps process, don’t hesitate to contact me. I offer hands-on consulting and also training to build up a powerful DevOps team.

Working with textfiles on the Linux shell

Linux is becoming more and more popular as an operating system for IT professionals. One of the reasons for this trend is its server solutions. Stability and low resource consumption are some of the important characteristics behind this choice. If you have already played around with a Microsoft server, you will miss the graphical desktop on a Linux server. After logging into a Linux server, you just see the command prompt waiting for your input.

In this short article I introduce some helpful Linux programs for working with files on the command line. This allows you to gather information, for example from log files. Before I start, I’d like to recommend a simple and powerful editor named joe.

Ctrl + C – Abort the current editing of a file without saving changes
Ctrl + KX – Exit the current editing and save the file
Ctrl + KF – Find text in the current file
Ctrl + V – Paste clipboard into document (CMD + V for Mac)
Ctrl + Y – Delete current line where cursor is

To install joe on a Debian-based Linux distribution, you just need to type:

sudo apt-get install joe

1. When you need to find content in a huge text file, GREP will be your best friend. GREP allows you to search for text patterns in files.

grep <pattern> file.log
    -n : show the line numbers of matches
    -i : case insensitive
    -v : invert matches
    -E : extended regex
    -c : count the number of matching lines
    -l : list filenames that contain a match
Bash

2. When you need to analyze network packets, NGREP is the tool of your choice.

ngrep -I file.pcap
    -d : specify the network interface
    -i : case insensitive
    -x : print in alternate hexdump 
    -t : print timestamp
    -I : read a pcap file
Bash

3. When you need to see the changes between two versions of a file, DIFF will do the job.

diff version1.txt version2.txt
     a : lines added
     c : lines changed
     d : lines deleted
     # : line numbers
     < : line from file 1
     > : line from file 2
Bash

4. Sometimes it is necessary to bring the entries in a file into order. SORT will help you with this task.

sort file.log
     -o : write the result to a file
     -r : reverse order
     -n : numerical sort
     -k : sort by column
     -c : check if the file is ordered
     -u : sort and remove duplicates
     -f : ignore case
     -h : human-readable numeric sort
Bash

5. If you have to replace strings inside a huge text, for example find and replace, you can do that with SED, the stream editor.

sed s/regex/replace/g
      s : substitute (search & replace)
      g : replace all occurrences in a line (global)
      d : delete matching lines
      w : write the result to a file
     -e : add a script/command to execute
     -n : suppress automatic output
Bash

6. Parsing fields using delimiters in text files can be done with CUT.

cut -d ":" -f 2 file.log
     -d : use the field delimiter 
     -f : field numbers
     -c : specific characters position
Bash

7. Extracting lines that occur only once, or filtering out duplicate lines, in a text file can be achieved with UNIQ.

uniq file.txt
     -c : count the numbers of duplicates 
     -d : print duplicates
     -i : case insensitive
Bash

8. AWK is a programming language designed to manipulate data.

awk '{print $2}' file.log
Bash

Test First?

java Aktuell 2024.03

When I started test-driven programming over 10 years ago, I was aware of many different concepts in theory. But this approach of first writing test cases and then implementing them was somehow not something I got on with well. To be honest, this is still the case today. So I found an adaptation of Kent Beck’s TDD paradigm that works for me. But first things first. Perhaps my approach will be quite helpful for one or the other as well.

I originally come from the world of highly scalable web applications, to which all the great theories from university cannot easily be applied in practice. The main reason for this is the high complexity of such applications. On the one hand, various additional systems such as in-memory caches, databases and identity and access management (IAM) are part of the overall system. On the other hand, many modern frameworks such as OR mappers hide complexity behind different access layers. As developers, we need to master all of these things. That is why there are robust, practice-proven solutions that are well known but rarely used. Kent Beck is one of the most important voices for the practical use of automated software testing.

If we want to get involved with the concept of TDD, it is important not to put too much weight on every character. Not everything is set in stone. What is important is the result at the end of the day. For this reason, it is essential to keep the objective of all efforts in mind in order to achieve personal added value. So let’s start by looking at what we want to achieve in the first place.

Success proves us right

When I first started out as a developer, I needed constant feedback on whether what I was putting together really worked. I mostly generated this feedback by scattering countless console outputs throughout my implementation on the one hand, and on the other hand by trying to integrate everything into a user interface and then ‘clicking through’ it manually. Basically a very cumbersome test setup, which then has to be removed again at the end. If bug fixes had to be made later, the whole procedure started all over again. Everything was somehow unsatisfactory and far removed from a productive way of working. Somehow this had to be improved without having to reinvent yourself every time.

Looking back, my original approach has exactly two significant weaknesses. The more obvious one is the commenting in and out of debug information on the console.

But the second point is much more serious. Because all the knowledge acquired about this particular implementation is not preserved. It is therefore in danger of fading over time and ultimately being lost. However, such specialized knowledge is extremely valuable for many subsequent process steps in software development. By this I explicitly mean the topic of quality. Refactoring, code reviews, bug fixes and change requests are just some of the possible examples where in-depth detailed knowledge is required.

For me personally, there is also the fact that monotonously repetitive work quickly tires me out, and I would like to avoid it. Clicking through an application again and again with the same test procedure is far away from what constitutes a fulfilling working day for me. I want to discover new things. But I can only do that if I’m not trapped in the past.

But they dare to do something

But before I go into how I have spiced up my day-to-day development work with TDD, I have to say a few words about responsibility and courage. In conversations, others frequently told me that I am right, but that they cannot act on my recommendations because the project manager or some other superior does not give them a green light.

Such an attitude is extremely unprofessional in my eyes. I don’t ask a marketing manager which algorithm performs best; he simply has no idea what I’m talking about, because it is not his area of responsibility. A project manager who speaks out against test-driven work in the development team has likewise missed his job. Nowadays, test frameworks are so well integrated into the build environment that even inexperienced people can prepare for TDD in a matter of moments. It is therefore not necessary to make a big deal of it. I can promise that even the first attempts will not take any longer than the original approach. On the contrary, there will be a noticeable increase in productivity very quickly.

The first stage of evolution

As already mentioned, logging is a central part of test-driven development for me. Whenever it makes sense, I try to output the status of objects or variables on the console. If we only use the means provided by the programming language for this, it means that we have to comment out this output once the work is done and comment it back in later when searching for errors. A redundant and error-prone procedure.

If, on the other hand, we use a logging framework right from the start, we can confidently leave the debug information in the code and simply deactivate it later in productive operation via the configured log level.
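To make this concrete, here is a minimal sketch of the idea, assuming Java with the SLF4J logging facade; the class, method and parameter names are invented purely for illustration:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PriceCalculator {

    private static final Logger LOGGER = LoggerFactory.getLogger(PriceCalculator.class);

    public double applyDiscount(double price, double rate) {
        // Instead of a System.out.println() that has to be commented out later,
        // the debug output simply stays in the code ...
        LOGGER.debug("applyDiscount called with price={} rate={}", price, rate);
        double result = price - (price * rate);
        // ... and is silenced in production by raising the log level to INFO or higher.
        LOGGER.debug("calculated discounted price: {}", result);
        return result;
    }
}
```

The statements remain in the code permanently; whether they actually show up in the output is decided solely by the logging configuration.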

I also use logging as a tracer. This means that each constructor of a class writes a corresponding log entry at log level INFO when it is called. This allows me to see the order in which objects are instantiated. From time to time it has also made me aware of a single object being instantiated excessively often. This is helpful for performance and memory optimization measures.
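Such a tracer is nothing more than a single log statement per constructor; again a sketch with SLF4J and an invented class name:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderRepository {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderRepository.class);

    public OrderRepository() {
        // Tracer: every instantiation leaves a mark in the log, which makes the
        // order and the frequency of object creation visible.
        LOGGER.info("instance of OrderRepository created");
    }
}
```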

Exceptions caught during exception handling are logged as errors or warnings, depending on the context. This is a very helpful tool for tracking down problems later during operation.

For a database access, for example, I write a log entry at level DEBUG showing how the associated SQL statement was assembled. If this SQL leads to an exception because it contains an error, the exception is logged at level ERROR. If, on the other hand, a simple search query with correct SQL syntax returns an empty result set, this event is classified as either DEBUG or WARNING, depending on requirements. If it is a login request with an incorrect user name or password, for example, I tend to opt for WARNING, as this may have security relevance during operation.
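Put together, the log levels just described could look roughly like this; the sketch assumes plain JDBC with SLF4J, and the table and column names are invented:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoginDao {

    private static final Logger LOGGER = LoggerFactory.getLogger(LoginDao.class);

    private final Connection connection;

    public LoginDao(Connection connection) {
        this.connection = connection;
    }

    public boolean authenticate(String user, String passwordHash) {
        String sql = "SELECT id FROM account WHERE login = ? AND password = ?";
        // DEBUG: document how the SQL statement was assembled.
        LOGGER.debug("executing SQL: {} [login={}]", sql, user);
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, user);
            stmt.setString(2, passwordHash);
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    return true;
                }
                // WARNING: syntactically correct query, but an empty result set;
                // a failed login attempt can be security relevant.
                LOGGER.warn("login failed for user {}", user);
                return false;
            }
        } catch (SQLException ex) {
            // ERROR: the statement itself was faulty or the database rejected it.
            LOGGER.error("SQL execution failed: {}", sql, ex);
            return false;
        }
    }
}
```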

In the overall context, I tend to configure the logging very verbosely for test case execution and limit myself to pure console output. During operation, the logging information is written to a log file instead.
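Where the output goes and how talkative it is, is purely a matter of configuration. With SLF4J this normally lives in the configuration file of the chosen binding (for example a logback-test.xml); the following sketch expresses the same two profiles programmatically with the JDK's own java.util.logging, just to make the idea explicit:

```java
import java.io.IOException;
import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class LoggingProfiles {

    // Verbose console logging, intended for running the test suite.
    public static void configureForTests() {
        Logger root = Logger.getLogger("");
        for (Handler handler : root.getHandlers()) {
            root.removeHandler(handler);   // drop the default handlers
        }
        ConsoleHandler console = new ConsoleHandler();
        console.setLevel(Level.FINE);      // roughly "debug": very talkative
        root.addHandler(console);
        root.setLevel(Level.FINE);
    }

    // Quieter file logging, intended for productive operation.
    public static void configureForProduction() throws IOException {
        Logger root = Logger.getLogger("");
        FileHandler file = new FileHandler("application.log", true);
        file.setFormatter(new SimpleFormatter());
        file.setLevel(Level.INFO);
        root.addHandler(file);
        root.setLevel(Level.INFO);
    }
}
```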

The chicken or egg

Once we have laid the foundations for an additional feedback loop with logging, the next step is to decide how to proceed. As already mentioned, I find it very difficult to first write a test case and then find a suitable implementation for it. Many other developers who start with TDD face the same problem.

One thing I can already anticipate is the problem of making sure that an implementation is testable. Once I have the test case, I immediately see whether what I am creating is really testable. Experienced TDD developers quickly internalize what testable code has to look like. The most important point here is that methods should always have a return value that is preferably not null. This can be achieved, for example, by returning an empty list instead of null.

The requirement to have a return value is due to the way unit test frameworks work. A test case compares the return value of a method with an expected value. The test assertion comes in different flavors and can check for: equal, unequal, true or false. Of course, there are further variations as well. For example, methods that have no return value can be tested via the exceptions they throw. All these details become clear very quickly once you start using TDD, so everyone can get going immediately without lengthy preparation.
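The following sketch, assuming JUnit 5, shows both variants: an assertion against a non-null return value and a void method that is verified via its exception. The small CustomerService class is only an invented stand-in to keep the example self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class CustomerServiceTest {

    // Minimal stand-in for the class under test, just to keep the sketch compilable.
    static class CustomerService {
        private final List<String> customers = List.of("Miller", "Smith");

        List<String> findByName(String name) {
            List<String> result = new ArrayList<>();
            for (String customer : customers) {
                if (customer.equalsIgnoreCase(name)) {
                    result.add(customer);
                }
            }
            return result;  // an empty list instead of null
        }

        void delete(String name) {
            if (name == null) {
                throw new IllegalArgumentException("name must not be null");
            }
        }
    }

    private final CustomerService service = new CustomerService();

    @Test
    void searchWithoutMatchReturnsEmptyList() {
        List<String> result = service.findByName("does-not-exist");
        assertNotNull(result);         // never null ...
        assertTrue(result.isEmpty());  // ... the empty list expresses "no hit"
    }

    @Test
    void knownCustomerIsFound() {
        assertEquals(List.of("Miller"), service.findByName("miller"));
    }

    @Test
    void voidMethodIsTestableViaItsException() {
        assertThrows(IllegalArgumentException.class, () -> service.delete(null));
    }
}
```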

When reading the book Test Driven Development by Example by Kent Beck, we also quickly find an explanation as to why the test cases should be written first. It is a psychological factor. It is supposed to help us cope better with the usual stress that arises in a project. It gives us a clear mental picture of the status and progress of the current work. And it guides us, in an iterative process, to expand and improve the existing solution step by step via the various test cases.

For those who, like me, have no concrete idea of the final result at the start of an implementation, this approach is difficult to follow. The intended relaxation effect turns into the opposite. Since we humans are all different, we have to find out what makes us tick in order to achieve the best possible result. It is the same with learning strategies: some people process information better visually, others more haptically, and still others extract everything important from the spoken word. So let's try not to bend ourselves against our nature, only to produce mediocre or poor results.

Drawing the first line

A topic only becomes clear to me while I'm working on it. So I try my hand at an implementation until I need some initial feedback. That is when I write the first test. This approach automatically gives rise to questions, each of which is worth its own test case. Can I find all available results? What happens if the result set is empty? How can the result set be narrowed down? These are all points that can be noted on a piece of paper and ticked off step by step. I had the idea of keeping such a to-do list on a piece of paper long before I read about it in the book by Kent Beck mentioned above. It helps me to capture quick thoughts without being distracted from what I am currently doing. It also gives me a sense of accomplishment at the end of the day.
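As a variant of the paper list, the same questions can also be noted directly as test stubs that are activated one by one; a sketch assuming JUnit 5, with invented names:

```java
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

// Each open question from the to-do list becomes its own test case
// and is "ticked off" by implementing it and removing @Disabled.
class ResultSetQuestionsTest {

    @Test
    void canIFindAllAvailableResults() {
        // currently being answered
    }

    @Test
    @Disabled("not answered yet")
    void whatHappensIfTheResultSetIsEmpty() {
    }

    @Test
    @Disabled("not answered yet")
    void howCanTheResultSetBeNarrowedDown() {
    }
}
```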

Since I don't wait until I've implemented everything before writing the first test, this results in an iterative way of working. I also notice very quickly whether my design is sufficiently testable, as I receive immediate feedback. The outcome is my own interpretation of TDD, characterized by a permanent switch between implementing and writing tests.

As a result of my early TDD attempts, I already noticed a speed-up in my way of working during the first week. I also became more confident. But the way I program also started to change very early on. I noticed that my code became more compact and robust. These were effects that only became fully apparent over time, during activities such as refactoring and extensions. Failing test cases have saved me from unpleasant surprises.

Start without overzealousness

If we decide to use TDD in an existing project, it is a bad idea to start by writing test cases for existing functionality. Apart from the time that has to be planned for this, the result will not live up to the high expectations.

One of the problems is that you first have to familiarize yourself with each piece of functionality, which is very time-consuming. The quality of the resulting test cases is also inadequate, simply because of the missing experience: while that experience is still being built up, the test cases are not yet optimal, and code may also have to be rewritten to make it testable at all. This creates a lot of risks that are problematic for day-to-day project business.

A proven procedure for introducing TDD is simply to use it for the implementation you are currently working on. The current state of the problem at hand is documented by automated tests. Since you are already on familiar territory, you do not have to work your way into a new topic and can concentrate fully on formulating meaningful tests. Quite apart from the fact that writing test cases for other people's functionality means taking responsibility for their work without being asked.

Existing functionality is only supplemented with test cases when errors are corrected. For the correction, you have to deal with the implementation details anyway, so there is already sufficient knowledge of how the functionality should behave. The resulting tests also document the correction and ensure that the behavior does not change in the future during optimization work.

If you follow this procedure in a disciplined manner, you will not lose yourself in hectic activism, which is the opposite of productivity. In addition, you quickly acquire the knowledge of how to implement effective and meaningful tests. Only once sufficient experience has been gained, and possibly extensive refactorings are planned, should you consider how test coverage can be gradually improved for the entire project.

Quality level

Just because test cases exist does not mean that they are meaningful. Nor does a high test coverage prove that a program is error-free. A high test coverage only ensures that a program behaves as expected within the scope of what the tests cover.

So how can you ensure that the existing tests are really an enrichment and have good informative value? The first and, in my opinion, most important point is to keep test cases as short as possible. In concrete terms, this means that a test answers exactly one explicit question, e.g. 'What happens if the result set is empty?'. The test method is then named after that question. The added value of this approach becomes apparent when the test case fails: if the test is very short, it is often possible to tell from the test method's name alone what the problem is, without having to spend a lot of time familiarizing yourself with the test case.
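A sketch of such a one-question test, again assuming JUnit 5; the ProductSearch class is only an invented stand-in so that the example compiles:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;

class ProductSearchTest {

    // Minimal stand-in for the production class.
    static class ProductSearch {
        List<String> findByCategory(String category) {
            return List.of();  // no matching products
        }
    }

    // One explicit question, one short test named after it. If it fails,
    // the method name alone already points to the problem.
    @Test
    void whatHappensIfTheResultSetIsEmpty() {
        List<String> result = new ProductSearch().findByCategory("unknown-category");
        assertTrue(result.isEmpty());
    }
}
```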

Another important point in the TDD procedure is to check the test coverage of the implemented functionality, both for lines of code and for branches. If, for example, I cannot simulate the occurrence of a single condition in an IF statement, that condition can be deleted without hesitation.

Of course, your own project will also have plenty of dependencies on external libraries. It can happen that a method from such a library throws an exception that cannot be simulated by any test case. This is exactly why you should strive for high test coverage but not despair if 100% cannot be achieved. Especially when introducing TDD, a test coverage of more than 85% is a good target. As the development team gains experience, this value can be raised to 95%.

Finally, however, it should be noted that you should not get carried away. Things can quickly become excessive, and then all the advantages gained are lost again. The point is not to write tests that in turn test other tests. This is where the cat bites its own tail. The same applies to third-party libraries: no tests are written for these either. Kent Beck is very clear about this: "Even if there are good reasons to distrust other people's code, don't test it. External code requires more of your own implementation logic."

Lessons learned

The lessons learned while trying to achieve the highest possible test coverage are the ones that will shape how you program in the future. The code becomes more compact and robust.

Productivity increases simply because error-prone and monotonous work is avoided through automation. There are no additional work steps, because old habits are replaced by newer, better ones.

One effect that I have observed time and again: when individual members of a team opt for TDD, their successes are quickly noticed. Within a few weeks, the entire team had adopted TDD, each individual according to their own abilities. Some with test first, others in the way I have just described. In the end, it is the result that counts, and it was consistently excellent. When the work becomes easier and at the end of the day each individual has the feeling of having achieved something, this gives the team an enormous motivation boost, which in turn benefits the project and the working atmosphere. So what are you waiting for? Try it out for yourself right away.

Goodbye privacy, goodbye liberty

The new terms of service for Microsoft services released in October 2023 caused an outcry in the IT world. The reason was a paragraph stating that all Microsoft services are now powered by artificial intelligence. This A.I. is supposed to be used to detect copyright violations, covering things like music, movies, graphics, e-books and software. In case the A.I. detects copyright violations on your system, those files are supposed to be deleted automatically from the 'system'. At this time it is not clear whether this rule applies to your own local disk storage or just to the files in the Microsoft cloud. Microsoft also declared that users who violate the copyright rule will be suspended from all Microsoft services.

This exclusion comes in different flavors. The first question that comes to my mind is what will happen to paid subscriptions like Skype. Will they block me and refund my unused credit? An even worse scenario would be losing all my credit and digital property as well, such as access to games and other purchases. Or will paid subscriptions not be affected at all? So far this part is not clear.

If you are an Apple user, you may think this will not affect you, but you could well be using a Microsoft service without knowing it is from Microsoft. Not every product carries the company's name. Think about it, because who knows whether those products are spying around on your system. Applications like Skype, Teams, the Edge browser and Visual Studio Code are also available for other platforms such as Apple and Linux.

Microsoft also owns the source code hosting platform GitHub and LinkedIn, a social network for professionals. With Office 365 you can use the entire Microsoft Office suite in the web browser as a cloud solution, and all your documents are stored in the Microsoft cloud. The same cloud where US government institutions like the CIA, NSA and many others keep their files. Well, it certainly seems like a secure place for all the thoughts you put into an office document.

This small detail about office documents leads us to a little side note in Microsoft's new terms of service: the fight against hate speech. Whatever that means. Public insults and defamation have always been prosecuted by law; this is not a trivial offense but a criminal one. So it is not clear to me what all this talk about hate speech is supposed to mean. Maybe it is an attempt to introduce public censorship of free expression.

But back to the side note about hate speech in Microsoft's terms of service. Microsoft wrote something along the lines of: if we detect hate speech, we will warn the user, and if the violations occur several times, the user's Microsoft account will be deactivated.

If you think this is just something Microsoft is doing now, be assured that many other companies are working on introducing similar services. The communication platform Zoom, for example, has also included A.I. techniques to observe user communication for training purposes.

With all this news, one big question still needs to be answered: what can I do myself? The solution is simple. Move back from the digital universe into the real world. Turn your brain back on. Use pen and paper, pay in cash, leave your smartphone at home and never put it on the bedside table. If you don't use it, turn it off. Meet your friends in person whenever possible and don't bring your smartphone. There will be no government, no president and no messiah to bring about change. It's up to us.

The dark side of artificial intelligence

As a technician, I am quite quickly fascinated by all sorts of things that somehow blink and beep, no matter how useless they may be. Electronic gadgets attract me like moths to the light. For a while now, a new generation of toys has been available to the masses: artificial intelligence applications, more precisely artificial neural networks. The freely available applications are already doing remarkable things, and this is only the beginning of what could become possible in the future. Many people have not yet realized the scope of A.I.-based applications. This is not surprising, because what is happening in the A.I. sector will change our lives forever. We can rightly say that we are living in a time that is making history. It will be up to us to decide whether the coming changes will be good or whether they will turn out to be a dystopia.

When I chose artificial intelligence as a specialization in my studies many years ago, the field was still characterized by so-called expert systems. These rule-based systems were highly specialized for their domain and were designed for corresponding experts, whom they were supposed to support in making decisions. Meanwhile, we also have the necessary hardware to create much more general systems. Applications like ChatGPT are based on neural networks, which allows a very high flexibility in usage. The disadvantage, however, is that we as developers can hardly understand what output a neural network will produce for a given input. A circumstance that leads most programmers I know to take a rather negative attitude, because they are no longer masters of the algorithm and can only act on the principle of trial and error.

Nevertheless, the power of neural networks is astounding. The days when one could make fun of clumsy, automated, software-supported translations seem to be over. From my own experience I remember how tedious it was to have Google Translate render a sentence from German into Spanish. To get a usable result, you either had to use the English-to-Spanish option or, if you only speak rudimentary vacation English, formulate very simple German sentences that were at least correct in content. The time saved with automatically translated texts is considerable, even though you still have to proofread them and adjust some wording if necessary.

As much as I appreciate being able to work with such powerful tools, we have to be aware that there is also a downside. The more we do our daily tasks with A.I.-based tools, the more we lose the ability to do these tasks manually in the future. For programmers, this means that through the use of A.I.-based IDEs they will, over time, lose their ability to express themselves in source code. Of course, this is not a process that happens overnight; it is gradual. Once this dependency has been created, the question arises whether the tools we have grown fond of will remain free of charge or whether existing subscriptions will be subject to drastic price increases. After all, it should be clear to us that commercially used tools that significantly improve our productivity are usually not available at low prices.

I also think that the Internet as we know it today will change considerably in the future. Many of the free services that have been financed by advertising will disappear in the medium term. Let's take the StackOverFlow service as an example, a very popular knowledge platform in developer circles. If, in the future, programming questions are researched by asking ChatGPT or other neural networks instead of StackOverFlow, its visitor numbers will sink continuously. The knowledge base that ChatGPT uses, in turn, is based on data from public forums like StackOverFlow. So in the foreseeable future StackOverFlow will try to make its service inaccessible to A.I. systems. There could certainly also be an agreement involving compensation payments, so that the lost advertising revenue is offset. As technicians, we do not need to be told that an offering like StackOverFlow incurs considerable costs for operation and development. It then remains to be seen how users will accept the offering in the future. If no new data is added to StackOverFlow, its knowledge base will also become uninteresting for A.I. systems. I therefore suspect that by around 2030, high-quality content on the Internet will primarily be available only for a charge.

If we look at the forecast of the medium-term demand for programmers, we come to the question of whether it will still be a good recommendation in the future to study computer science or to start an apprenticeship as a programmer. I actually see a positive future here and would encourage anyone who sees this education as a vocation and not just as a means to make a living. In my opinion, we will continue to need many innovative minds. Only those who, instead of dealing with basics and concepts, prefer to quickly learn the current framework in order to keep up with the emerging hype of the market will certainly achieve only limited success in the future. However, I had already made these observations before the wide availability of A.I. systems. I am therefore firmly convinced that quality will always prevail in the long run.

I consider it a virtue to approach all kinds of topics as critically and attentively as possible. Nevertheless, I must say that some fears about dealing with A.I. are quite unfounded. You have already seen some of my possible visions of the future in this article. Claims that A.I. will one day take over our world by subtly influencing uninitiated users and motivating them to take action are, in my opinion, pure fantasy for the period up to 2030 and, given the current state of knowledge, unfounded. Much more realistically, I see the problem that if resourceful marketing people litter the Internet with inferior, unrevised A.I.-generated articles to boost their SEO ranking, and this in turn becomes the new knowledge base of the neural networks, the quality of future A.I.-generated texts will be significantly reduced.

The A.I. systems that have been freely available so far have one decisive difference compared to humans: they lack the motivation to do something on their own initiative. Only through an extrinsic request by the user does the A.I. begin to work on a question. It becomes interesting when an A.I. dedicates itself to self-selected questions and researches them independently. In that case the probability is very high that the A.I. will develop a consciousness very quickly. If such an A.I. then also runs on a high-performance quantum computer, we will not have sufficient reaction time to recognize dangerous developments and intervene. We should therefore definitely keep Dürrenmatt's play "The Physicists" in mind, because the spirits I once summoned I may not get rid of again so quickly.

Basically, I have to admit that the topic of A.I. continues to fascinate me and I am very curious about future developments. Nevertheless, I think it is important not to close our eyes to the dark side of artificial intelligence and to start an objective discourse in order to exploit the existing potential of this technology as harmlessly as possible.