Featuritis

You don’t have to be a software developer to recognize a good application. But from my own experience, I’ve often seen programs that were promising and innovative at the start mutate into unwieldy behemoths once they reach a certain number of users. Since I’ve been making this observation regularly for several decades now, I’ve wondered what the reasons for this might be.

The phenomenon of software programs, or solutions in general, becoming overloaded with details was termed “featuritis” by Brooks in his classic book, “The Mythical Man-Month.” Considering that the first edition of the book was published in 1975, it’s fair to say that this is a long-standing problem. Perhaps the most famous example of featuritis is Microsoft’s Windows operating system. Of course, there are countless other examples of improvements that make things worse.

Windows users who had grown accustomed to Windows XP were confronted with its wonderful successor Vista, appeased again by Windows 7, nearly suffered a heart attack with Windows 8 and 8.1, and were then calmed down once more when Windows 10 first appeared. At least for a short time, until the forced updates quickly brought them back down to earth. And don’t even get me started on Windows 11. The old saying about Windows was that every other version is junk and should be skipped. Well, that hasn’t been true since Windows 7. For me, Windows 10 was the deciding factor in abandoning Microsoft completely, and like many others, I moved to a new operating system. Some switched to Apple, and those who, like me, couldn’t afford or didn’t want the expensive hardware opted for a Linux system. This shows how a lack of insight can quickly lead to the loss of significant market share. Since Microsoft isn’t drawing any conclusions from these developments, this seems to be of little concern to the company. For other companies, such events can quickly push them to the brink of collapse, and beyond.

One motivation for adding more and more features to an existing application is the so-called product life cycle, which is represented by the BCG matrix in Figure 1.

With a product’s launch, it’s not yet certain whether the market will accept it. If users embrace it, it quickly rises to star status and reaches its maximum market position as a cash cow. Once market saturation is reached, it degrades to a slow seller, the poor dog. So far, so good. Unfortunately, the prevailing management view is that if no growth is generated compared to the previous quarter, market saturation has already been reached. This leads to the nonsensical assumption that users must be forced to accept an updated version of the product every year. Of course, the only way to motivate a purchase is to print a bulging list of new features on the packaging.

Since well-designed features can’t simply be churned out on an assembly line, a redesign of the graphical user interface is thrown in as a free bonus every time. Ultimately, this gives the impression of having something completely new, as it requires a period of adjustment to discover the new placement of familiar functions. It’s not as if the redesign actually streamlines the user experience or increases productivity. The arrangement of input fields and buttons always seems haphazardly thrown together.

But don’t worry, I’m not calling for an update boycott; I just want to talk about how things can be improved. Because one thing is certain: thanks to artificial intelligence, the market for software products will change dramatically within just a few years. I don’t expect complex and specialized applications to be produced by AI algorithms anytime soon. However, I do expect that plenty of poorly generated AI code, which the developer doesn’t understand, will be injected into their codebases, leading to unstable applications. This is why I’m returning to the idea of clean, handcrafted, efficient, and reliable software, because I’m sure there will always be a market for it.

I simply don’t want an internet browser that has mutated into a communication hub, offering chat, email, cryptocurrency payments, and who knows what else, in addition to simply displaying web pages. I want my browser to start quickly, respond promptly when I click something, and display website content correctly without delay. If I ever want to do something else with my browser, it would be nice if I could actively enable that through a plugin.

Now, regarding the problem just described, the argument is often made that the many features are intended to reach a broad user base. Especially if an application has all possible options enabled from the start, it quickly engages inexperienced users who don’t have to first figure out how the program actually works. I can certainly understand this reasoning. It’s perfectly fine for a manufacturer to focus exclusively on inexperienced users. However, there is a middle ground that considers all user groups equally. This solution isn’t new and is very well-known: the so-called product lines.

In the past, manufacturers always defined target groups such as private individuals, businesses, and experts. These user groups were then often assigned product names like Home, Enterprise, and Ultimate. This led to everyone wanting the Ultimate version. This phenomenon is called Fear Of Missing Out (FOMO). Therefore, the names of the product groups and their assigned features are psychologically poorly chosen. So, how can this be done better?

An expert focuses their work on specific core functions that allow them to complete tasks quickly and without distractions. For me, this implies product lines like Essentials, Pure, or Core.

If the product is then intended for use by multiple people within the company, it often requires additional features such as external user management like LDAP or IAM. This specialized product line is associated with terms like Enterprise, Company, Business, and so on.

The cluttered end result, actually intended for noobs, has all sorts of things activated right from installation. If people don’t care about the application’s startup and response times, then go for it. Go all out. Throw in everything you can! Here, names like Ultimate, Full, and Maximized Extended are suitable for labeling the product line. The only important thing is that professionals recognize it as the cluttered version.

Those who cleverly manage these product lines and provide as many functions as possible via so-called modules, which can be installed later, enable high flexibility even in expert mode, where users might appreciate the occasional additional feature.

If you build tracking into the module system beforehand to determine how professional users upgrade their version, you’ll already have a good idea of what could be added to the next version of Essentials. However, you shouldn’t rely solely on download counts as the decision criterion for this tracking. I often try things out myself and delete extensions faster than the installation took if I think they’re useless.

I’d like to give a small example from the DevOps field to illustrate the problem I just described. There’s the well-known GitLab, which was originally a pure code repository hosting project. The name still reflects this today. An application that requires 8 GB of RAM on a server in its basic installation just to make a Git repository accessible to other developers is unusable for me, because this software has become a jack-of-all-trades over time. Slow, inflexible, and cluttered with all sorts of unnecessary features that are better implemented using specialized solutions.

In contrast to GitLab, there’s another, less well-known solution called SCM-Manager, which focuses exclusively on managing code repositories. I personally use and recommend SCM-Manager because its basic installation is extremely compact. Despite this, it offers a vast array of features that can be added via plugins.

I tend to be suspicious of solutions that claim to be an all-in-one solution. To me, that’s always the same: trying to do everything and nothing. There’s no such thing as a jack-of-all-trades, or as we say in Austria, a miracle worker!

When selecting programs for my workflow, I focus solely on their core functionality. Are the basic features promised by the marketing truly present and as intuitive as possible? Is there comprehensive documentation that goes beyond a simple “Hello World”? Does the developer focus on continuously optimizing core functions and consider new, innovative concepts? These are the questions that matter to me.

Especially in commercial environments, programs are often used that don’t deliver on their marketing promises. Instead of choosing what’s actually needed to complete tasks, companies opt for applications whose descriptions are crammed with buzzwords. That’s why I believe that companies that refocus on their core competencies and use highly specialized applications for them will be the winners of tomorrow.


Mismanagement and Alpha Geeks

When I recently picked up Camille Fournier’s book “The Manager’s Path,” I was immediately reminded of Tom DeMarco. He wrote the classic “Peopleware” and later published “Adrenaline Junkies and Template Zombies.” It’s a catalog of stereotypes you might encounter in software projects, with advice on how to deal with them. After several decades in the business, I can confirm every single word from my own experience. And it’s still relevant today, because it’s people who make projects, and we all have our quirks.

For projects to be successful, it’s not just technical challenges that need to be overcome. Interpersonal relationships also play a crucial role. One important factor in this context, which often receives little attention, is project management. There are shelves full of excellent literature on how to become a good manager. The problem, unfortunately, is that few who hold this position actually fulfill it, and even fewer are interested in developing their skills further. The result of poor management is worn-down and stressed teams, extreme pressure in daily operations, and often also delayed delivery dates. It’s no wonder, then, that this impacts product quality.

One of the first sayings I learned in my professional life was: “Anyone who believes that project managers manage projects also believes that a Zitronenfalter folds lemons” (a German pun: the Zitronenfalter, literally “lemon folder,” is actually the brimstone butterfly). So it seems to be a very old piece of wisdom. But what is the real problem with poor management? Anyone who has to fill a managerial position has a duty to thoroughly examine the candidate’s skills and suitability. It’s easy to be impressed by empty phrases and a list of big industry names on a CV without questioning actual performance. In my experience, I’ve primarily encountered project managers who lacked the technical expertise necessary to make important decisions. It wasn’t uncommon for managers in IT projects to dismiss me with the words, “I’m not a technician, sort this out amongst yourselves.” This is obviously disastrous when the person who’s supposed to make the decisions can’t make them for lack of the necessary knowledge. An IT project manager doesn’t need to know which algorithm terminates faster; evaluations can be used to inform such decisions. However, a basic understanding of programming is essential. Anyone who doesn’t know what an API is, or why incompatible versions prevent modules that are later combined into a software product from working together, has no business acting as a decision-maker. A fundamental understanding of software development processes and the programming paradigms in use is equally indispensable for project managers who don’t work directly with the code.

I therefore advocate for vetting not only the developers you hire for their skills, but also the managers who are to be brought into a company. For me, external project management is an absolute no-go when selecting my projects; it almost always leads to chaos and frustration for everyone involved, which is why I reject such projects. Managers who are not integrated into the company and whose performance is judged solely on the success of a single project do not, in my experience, deliver high-quality work. Internal managers, on the other hand, can, just like developers, develop and expand their skills through mentoring, training, and workshops. The result is a healthy, relaxed working environment and successful projects.

The title of this article points to toxic stereotypes in the project business. I’m sure everyone has encountered one or more of these stereotypes in their professional environment. There’s a lot of discussion about how to deal with these individuals. However, I would like to point out that hardly anyone is born a “monster.” People are the way they are, a result of their experiences. If a colleague learns that looking stressed and constantly rushed makes them appear more productive, they will perfect this behavior over time.

Camille Fournier aptly described this with the term “The Alpha Geek.” Someone who has made their role in the project indispensable and has an answer for everything. They usually look down on their colleagues with disdain, but can never truly complete a task without others having to redo it. Unrealistic estimates for extensive tasks are just as typical as downplaying complex issues. Naturally, this is the darling of all project managers who wish their entire team consisted of these “Alpha Geeks.” I’m quite certain that if this dream could come true, it would be the ultimate punishment for the project managers who create such individuals in the first place.

To avoid cultivating “alpha geeks” within your company, it’s essential to prevent personality cults and avoid elevating personal favorites above the rest of the team. Naturally, it’s also crucial to review work results consistently. Anyone who marks a task as completed that then turns out to need rework should be assigned that task again until the result is satisfactory.

Personally, I share Tom DeMarco’s view on the dynamics of a project. While productivity can certainly be measured by the number of tasks completed, other factors also play a crucial role. Experience has taught me that, as mentioned earlier, it’s essential to ensure employees complete all tasks thoroughly before taking on new ones. Colleagues who downplay a task or offer unrealistically low estimates should be assigned that very task. Furthermore, there are colleagues who, while having relatively low output, contribute significantly to team harmony.

When I talk about people who build a healthy team, I don’t mean those who simply hand out sweets every day. I’m referring to those who possess valuable skills and mentor their colleagues. These individuals typically enjoy a high level of trust within the team, which is why they often achieve excellent results as mediators in conflicts. It’s not the people who try to be everyone’s darling with false promises, but rather those who listen and take the time to find a solution. They are often the go-to people for everything and frequently have a quiet, unassuming demeanor. Because they have solutions and often lend a helping hand, they themselves receive only average performance ratings in typical process metrics. A good manager quickly recognizes these individuals because they are generally reliable. They are balanced and appear less stressed because they proceed calmly and consistently.

Of course, much more could be said about the stereotypes in a software project, but I think the points already made provide a good basic understanding of what I want to express. An experienced project manager can address many of the problems described as they arise. This naturally requires solid technical knowledge and some interpersonal skills.

Of course, we must also be aware that experienced project managers don’t just appear out of thin air. They need to be developed and supported, just like any other team member. This definitely includes rotations through all technical departments, such as development, testing, and operations. Paradigms like pair programming are excellent for this. The goal isn’t to turn a manager into a programmer or tester, but rather to give them an understanding of the daily processes. This also strengthens confidence in the skills of the entire team, and mentalities like “you have to control and push lazy and incompetent programmers to get them to lift a finger” don’t even arise. In projects that consistently deliver high quality and meet their deadlines, there’s rarely a desire to introduce every conceivable process metric.


Blockchain – an introduction

The blockchain concept is a fundamental component of various crypto payment methods such as Bitcoin and Ethereum. But what exactly is blockchain technology, and what other applications does this concept have? Essentially, blockchain is structured like a backward-linked list. Each element in the list points to its predecessor. So, what makes blockchain so special?

Blockchain extends the list concept by adding various constraints. One of these constraints is ensuring that no element in the list can be altered or removed. This is relatively easy to achieve using a hash function. We encode the content of each element in the list into a hash using a hash algorithm. A wide range of hash functions is available today, with SHA-512 being a current standard. This hash algorithm is implemented in the standard library of almost every programming language and is easy to use. Specifically, this means that the SHA-512 hash is generated from all the data in a block. For all practical purposes this hash is unique, so it serves as an identifier (ID) to locate a block. One entry in the block is a reference to its predecessor: the hash value, i.e., the ID, of the preceding block. When implementing a blockchain, it is essential that the hash value of the predecessor is included in the calculation of the hash value of the current block. This detail ensures that elements in the blockchain can only be modified with great difficulty. To manipulate one element, all subsequent elements must be changed as well. In a blockchain with a very large number of blocks, such an undertaking entails an enormous computational effort that is very difficult, if not impossible, to accomplish.
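
To make the chaining concrete, here is a minimal sketch in Java; it assumes a block that carries nothing but a string payload, and all class and method names are illustrative rather than taken from any real library:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    // Minimal sketch: a block whose ID is the SHA-512 hash of its payload
    // combined with the predecessor's hash. All names are illustrative.
    record Block(String previousHash, String payload, String hash) {

        static Block create(String previousHash, String payload) throws Exception {
            MessageDigest sha512 = MessageDigest.getInstance("SHA-512");
            // Including the predecessor's hash in the input is what chains the blocks:
            // altering any earlier block changes every hash that follows it.
            byte[] input = (previousHash + payload).getBytes(StandardCharsets.UTF_8);
            return new Block(previousHash, payload, HexFormat.of().formatHex(sha512.digest(input)));
        }
    }

A genesis block can then simply use, for instance, an empty string as its predecessor hash.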

This chaining provides us with a complete transaction history, which also explains why crypto payment methods are not anonymous. The effort required to uniquely identify a transaction participant can be enormous, and it increases exponentially if that participant also uses various obfuscation methods with different wallets that are not linked to one another by other transactions.

Of course, the mechanism just described still has significant weaknesses. Transactions, i.e., newly added blocks, can only be considered verified and secure once enough successors have been added to the blockchain to make tampering sufficiently difficult. For Bitcoin and similar cryptocurrencies, a transaction is commonly considered secure once five subsequent blocks have been added, which together with the block containing the transaction gives the well-known six confirmations.

To avoid having just one entity storing the transaction history—that is, all the blocks of the blockchain—a decentralized approach comes into play. This means there is no central server acting as an intermediary. Such a central server could be manipulated by its operator. With sufficient computing power, this would allow for the rebuilding of even very large blockchains. In the context of cryptocurrencies, this is referred to as chain reorganization. This is also the criticism leveled at many cryptocurrencies. Apart from Bitcoin, no other decentralized and independent cryptocurrency exists. If the blockchain, with all its contained elements, is made public and each user has their own instance of this blockchain locally on their computer, where they can add elements that are then synchronized with all other instances of the blockchain, then we have a decentralized approach.

The technology for decentralized communication without an intermediary is called peer-to-peer (P2P). P2P networks are particularly vulnerable in their early stages, when there are only a few participants. With a great deal of computing power, one could easily create a large number of so-called zombie peers that influence the network’s behavior. Especially in times when cloud providers like AWS and Google Cloud Platform offer virtually unlimited resources for relatively little money, this is a significant problem. This point should not be overlooked, particularly where there is a high financial incentive for fraudsters.

There are also various competing concepts within P2P. To implement a stable and secure blockchain, it is necessary to use only solutions that do not require supporting backbone servers. The goal is to prevent the establishment of a master chain. Therefore, questions must be answered regarding how individual peers can find each other and which protocol they use to synchronize their data. By protocol, we mean a set of rules, a fixed framework for how interaction between peers is regulated. Since this point is already quite extensive, I refer you to my 2022 presentation for an introduction to the topic.

Another feature of blockchain blocks is that their validity can be verified easily and quickly. This simply requires generating the SHA-512 hash of a block’s entire contents. If this hash matches the block’s ID, the block is valid. Time-sensitive or time-critical transactions, such as those relevant to payment systems, can also be supported with minimal effort. No complex time servers are needed as intermediaries. Each block carries a timestamp. This timestamp must, however, take into account the location where it is created, i.e., specify the time zone. To obscure the location of the transaction participants, all times from the different time zones can simply be converted to UTC.
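
Under the same assumptions as the block sketch above, the validity check is nothing more than recomputing the hash:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    // Sketch: a block is valid if hashing its contents reproduces its claimed ID.
    public class BlockValidator {

        static boolean isValid(String previousHash, String payload, String claimedHash) throws Exception {
            MessageDigest sha512 = MessageDigest.getInstance("SHA-512");
            byte[] input = (previousHash + payload).getBytes(StandardCharsets.UTF_8);
            String recomputed = HexFormat.of().formatHex(sha512.digest(input));
            return recomputed.equals(claimedHash);
        }
    }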

To ensure that the time is correctly set on the system, a time server can be queried for the current time when the software starts, and a correction message can be displayed if there are any discrepancies.

Of course, time-critical transactions are subject to a number of challenges. It must be ensured that a transaction was carried out within a defined time window. This is a problem that so-called real-time systems have to deal with. The double-spending problem also needs to be prevented—that is, the same amount being sent twice to different recipients. In a decentralized network, this requires confirmation of the transaction by multiple participants. Classic race conditions can also pose a problem. Race conditions can be managed by applying the Immutable design pattern to the block elements.

To prevent the blockchain from being disrupted by spam attacks, we need a mechanism that makes creating a single block expensive. We achieve this by bringing computing power into play. The participant creating a block must solve a puzzle that requires a certain amount of computing time. If spammers want to flood the network with many blocks, the computing power they need grows exorbitantly, making it impossible to generate an unlimited number of blocks in a short time. The centerpiece of this cryptographic puzzle is called a nonce, which stands for “number used only once.” The nonce mechanism in the blockchain is often referred to as Proof of Work (PoW) and is what Bitcoin miners use to verify blocks.

The nonce is a (pseudo)random number for which a hash must be generated, and this hash must meet certain criteria, for example, two or three leading zeros. To prevent arbitrary hashes from being inserted into the block, the number that solves the puzzle is stored directly in the block. A nonce that has already been used cannot be used again, as this would circumvent the puzzle. When the hash is regenerated from the stored nonce, it must still meet the requirements, such as the leading zeros, for the block to be accepted.
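
A naive proof-of-work loop might look like this sketch, which assumes the rule is simply a number of leading zeros in the hex-encoded hash; real schemes such as Bitcoin’s use numeric difficulty targets instead:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    // Sketch: search for a nonce so that the hash of (blockData + nonce)
    // starts with the required number of leading zeros.
    public class ProofOfWork {

        static long mine(String blockData, int leadingZeros) throws Exception {
            MessageDigest sha512 = MessageDigest.getInstance("SHA-512");
            String target = "0".repeat(leadingZeros);
            long nonce = 0;
            while (true) {
                byte[] input = (blockData + nonce).getBytes(StandardCharsets.UTF_8);
                String hash = HexFormat.of().formatHex(sha512.digest(input));
                if (hash.startsWith(target)) {
                    return nonce; // stored in the block so everyone can re-check the puzzle
                }
                nonce++;
            }
        }

        public static void main(String[] args) throws Exception {
            // Three leading hex zeros take a few thousand attempts on average.
            System.out.println(mine("payload|previousHash", 3));
        }
    }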

Since finding a valid nonce becomes increasingly difficult as the number of blocks in a blockchain grows, it is necessary to change the rules for such a nonce cyclically, for example, every 2048 blocks. This also means that the rules for a valid nonce must be assigned to the corresponding blocks. Such a set of rules for the nonce can easily be formulated using a regular expression (regex).
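
Expressed as a regex, a per-chunk rule such as “the hash derived from the nonce must begin with at least three zeros” (a hypothetical rule format) remains a one-liner:

    import java.util.regex.Pattern;

    // Sketch: a chunk-level nonce rule as a regex over the hex-encoded hash.
    public class NonceRule {

        // Hypothetical rule: the hash must begin with at least three zeros.
        private static final Pattern RULE = Pattern.compile("^0{3}[0-9a-f]+$");

        static boolean accepts(String hexHash) {
            return RULE.matcher(hexHash).matches();
        }
    }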

We’ve now learned a considerable amount about the ruleset for a blockchain. So it’s time to consider performance. If we were to simply store all the individual blocks of the blockchain in a list, we would quickly run out of memory. While it’s possible to store the blocks in a local database, this would negatively impact the blockchain’s speed, even with an embedded solution like SQLite. A simple solution would be to divide the blockchain into equal parts, called chunks. A chunk would have a fixed length of 2048 valid blocks, and the first block of a new chunk would point to the last block of the previous chunk. Each chunk could also contain a central rule for the nonce and store metadata such as minimum and maximum timestamps.

To briefly recap our current understanding of the blockchain ruleset, we’re looking at three different levels. The largest level is the blockchain itself, which contains fundamental metadata and configurations. Such configurations include the hash algorithm used. The second level consists of so-called chunks, which contain a defined set of block elements. As mentioned earlier, chunks also contain metadata and configurations. The smallest element of the blockchain is the block itself, which comprises an ID, the described additional information such as a timestamp and nonce, and the payload. The payload is a general term for any data object that is to be made verifiable by the blockchain. For Bitcoin and other cryptocurrencies, the payload is the information about the amount being transferred from Wallet A (source) to Wallet B (destination).
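
Expressed as types, the three levels could be sketched like this (all field names are illustrative):

    import java.util.List;

    // Sketch of the three structural levels: chain -> chunks -> blocks.
    record Chain(String hashAlgorithm, List<Chunk> chunks) {}

    record Chunk(String nonceRule, long minTimestamp, long maxTimestamp, List<ChainBlock> blocks) {}

    record ChainBlock(String hash, String previousHash, long timestamp, long nonce, String payload) {}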

Blockchain technology is also suitable for many other application scenarios. For example, the hash values of open-source software artifacts could be stored in a blockchain. This would allow users to download binary files from untrusted sources and verify them against the corresponding blockchain. The same principle could be applied to the signatures of antivirus programs. Applications and other documents could also be transmitted securely in governmental settings. The blockchain would function as a kind of “postal stamp.” Accounting, including all receipts for goods and services purchased and sold, is another conceivable application.

Depending on the use case, an extension of the blockchain would be the unique signing of a block by its creator. This utilizes the classic PKI (Public Key Infrastructure) approach with public and private keys. The signer stores their public key in the block and creates a signature over the payload using their private key; this signature is then also stored in the block.
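
Using the JDK’s standard crypto APIs, such a signature takes only a few lines; the payload string below is just a stand-in:

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    // Sketch: sign a block's payload with a private key so that anyone holding
    // the public key stored in the block can verify the creator.
    public class BlockSigning {

        public static void main(String[] args) throws Exception {
            KeyPair keys = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
            byte[] payload = "Wallet A -> Wallet B: 5 units".getBytes(StandardCharsets.UTF_8);

            Signature signer = Signature.getInstance("Ed25519");
            signer.initSign(keys.getPrivate());
            signer.update(payload);
            byte[] signature = signer.sign(); // stored in the block next to the public key

            Signature verifier = Signature.getInstance("Ed25519");
            verifier.initVerify(keys.getPublic());
            verifier.update(payload);
            System.out.println("valid: " + verifier.verify(signature));
        }
    }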

Two freely available implementations worth studying are BitcoinJ and, for Ethereum, Web3j. Of course, it’s also possible to create your own universally applicable blockchain implementation using the principles just described. The pitfalls, naturally, lie in the details, some of which I’ve already touched upon in this article. Fundamentally, however, blockchain isn’t rocket science and is quite manageable for experienced developers. Anyone considering trying their hand at their own implementation now has sufficient basic knowledge to delve deeper into the necessary details of the various technologies involved.


Privacy

I constantly encounter statements like, “I use Apple because of the data privacy,” or “There are no viruses under Linux,” and so on and so forth. In real life, I just chuckle to myself and refrain from replying. These people are usually devotees of a particular brand, which they worship and would even defend with their lives. Therefore, I save my energy for more worthwhile things, like writing this article.

My aim is to use as few technical details and jargon as possible so that people without a technical background can also access this topic. Certainly, some skeptics might demand proof to support my claims. To them, I say that there are plenty of keywords for each statement that you can use to search for yourself and find plenty of primary sources that exist outside of AI and Wikipedia.

When one ponders what freedom truly means, one often encounters statements like: “Freedom is doing what you want without infringing on the freedom of others.” This definition also includes the fact that confidential information should remain confidential. However, efforts to maintain this confidentiality existed long before the availability of electronic communication devices. It is no coincidence that there is an age-old art called cryptography, which renders messages transmitted via insecure channels incomprehensible to the uninitiated. The fact that the desire to know other people’s thoughts is very old is also reflected in the saying that the two oldest professions of humankind are prostitution and espionage. Therefore, one might ask: Why should this be any different in the age of communication?

Particularly thoughtless individuals approach the topic with the attitude that they have nothing to hide anyway, so why should they bother with their own privacy? I personally belong to the group of people who consider this attitude very dangerous, as it opens the floodgates to abuse by power-hungry groups. Everyone has areas of their life that they don’t want dragged into the public eye. These might include specific sexual preferences, infidelity to a partner, or a penchant for gambling—things that can quickly shatter a seemingly perfect facade of moral integrity.

In East Germany, many people believed they were too insignificant for the notorious domestic intelligence service, the Stasi, to take an interest in them. The opening of the Stasi files after German reunification demonstrated just how wrong they were. In this context, I would also like to point to the existing legal framework in the EU, which boasts achievements such as hate speech laws, chat monitoring, and data retention. The private sector, too, has ample reason to learn more about every individual: it allows companies to manipulate people effectively and nudge them into purchasing services and products. One corporate goal is to determine the optimal price for products and services and thus maximize profit, and this is achieved with psychological methods. Or do you really believe that products like a phone that can take photos are truly worth the prices charged for them? So we see: there are plenty of reasons why personal data can be highly valuable. Let’s therefore take a look at the many technological half-truths circulating in public. I’ve heard many of these half-truths from technology professionals themselves, who never questioned them.

Before I delve into the details, I’d like to make one essential point. There is no such thing as secure and private communication when electronic devices are involved. Anyone wanting to have a truly confidential conversation would have to go to an open field in strong winds, with a visibility of at least 100 meters, and cover their mouth while speaking. Of course, I realize that microphones could be hidden there as well. This statement is meant to be illustrative and demonstrates how difficult it is to create a truly confidential environment.

Let’s start with the popular brand Apple. Many Apple users believe their devices are particularly secure. This is true only insofar as strangers attempting to gain unauthorized access to the devices face significant obstacles. At the same time, the operating systems incorporate numerous mechanisms that allow applications and content on the devices, on phones for example, to be blocked.

Microsoft is no different and goes several steps further. Ever since the internet became widely available, there has been much speculation about what telemetry data Windows sends back to its maker. Windows 11 takes things to a whole new level, recording every keystroke and taking a screenshot every few seconds. Supposedly, this data is only stored locally on the computer. You can believe that if you like, but even if it were true, it’s a massive security vulnerability: any hacker who compromises a Windows 11 computer can read this data and gain access to online banking and all sorts of other accounts.

Furthermore, Windows 11 refuses to run on supposedly outdated processors. That Windows has always been resource-hungry is nothing new; the real reason for locking out older CPUs, however, lies elsewhere. Newer-generation CPUs include a so-called security feature that allows a computer to be uniquely identified and deactivated via the internet. The key terms here are the Pluton security processor and the Trusted Platform Module (TPM 2.0).

The extent of Microsoft’s desire to collect all possible information about its users is also demonstrated by the changes to its terms and conditions around 2022. These included a new section granting Microsoft permission to use all data obtained through its products to train artificial intelligence. Furthermore, Microsoft reserves the right to exclude users from all Microsoft products if hate speech is detected.

But don’t worry, Microsoft isn’t the only company with such clauses in its terms and conditions. Meta, better known for its products Facebook and WhatsApp, and the communication platform Zoom operate similarly. The list of such applications is, of course, much longer. Everyone is invited to imagine for themselves the possibilities that the practices just described open up.

I’ve already mentioned Apple as problematic in the area of security and privacy. But Android, Google’s operating system for smart TVs and phones, also offers enormous scope for criticism. It’s not entirely without reason that you can no longer remove the batteries from these phones. Android behaves just like Windows and sends all sorts of telemetry data to its parent company. Add to that the scandal involving the manufacturer Samsung, which came to light in 2025: a hidden Israeli program called AppCloud on its devices, whose purpose can only be guessed at. Perhaps it’s also worth remembering the pagers that exploded in 2024 in the hands of many people Israel had declared enemies. It’s no secret in the security community that Israel is at the forefront of cybersecurity and cyberattacks.

Another issue with phones is the use of so-called messengers. Besides well-known ones like WhatsApp and Telegram, there are also a few niche solutions like Signal and Session. All these applications claim end-to-end encryption for secure communication. It’s true that hackers have difficulty accessing information when they only intercept network traffic. But what happens to a message after it has been successfully transmitted and decrypted on the target device is a different matter entirely. How else can Meta’s terms and conditions, with the clauses already mentioned, be explained?

Considering all the aforementioned facts, it’s no wonder that platforms such as Apple, Windows, and Android have implemented forced updates. Of course, not everything is about total control. Planned obsolescence, which makes devices age prematurely so that they are replaced by newer models, is another motive.

Of course, there are also plenty of options that promise their users exceptional security. First and foremost is the free and open-source operating system Linux. There are many different Linux distributions, and not all of them prioritize security and privacy equally. The Ubuntu distribution, published by Canonical, regularly receives criticism; around 2013, for example, its Unity desktop drew considerable backlash for mixing advertising, such as Amazon shopping results, into local searches. The notion that there are no viruses under Linux is also a myth. They certainly exist, and there is even a well-known antivirus scanner for Linux called ClamAV; its use is simply less widespread because there are far fewer home installations than with Windows. Furthermore, Linux users are still often perceived as somewhat nerdy and less likely to click on suspicious links. But those who install all the great applications like Skype, Dropbox, AI agents, and so on under Linux don’t actually have any improved security compared to the Big Tech industry’s products.

The situation is similar with so-called de-Googled smartphones. Here, too, the limited choice of supported hardware is a problem. Everyday usability also often reveals limitations, starting within families and among friends, who are often reliant on WhatsApp and similar apps. Even online banking can present significant challenges, as banks, for security reasons, only offer their apps through the verified Google Play Store.

As you can see, this topic is quite extensive, and I haven’t even listed all the points, nor have I delved into them in great depth. I hope, however, that I’ve been able to raise awareness, at least to the point that smartphones shouldn’t be taken everywhere, and that more time should be spent in real life with other people, free from all these technological devices.

RTFM – usable documentation

An old master craftsman used to say: “He who writes, remains.” His primary intention was to obtain accurate measurements and weekly reports from his journeymen. He needed this information to issue correct invoices, which was crucial to the success of his business. This analogy can also be readily applied to software development. It wasn’t until Ruby, the programming language developed in Japan by Yukihiro Matsumoto, had English-language documentation that Ruby’s global success began.

We can see, therefore, that documentation can be of considerable importance to the success of a software project. It’s not simply a repository of information within the project where new colleagues can find necessary details. Of course, documentation is a rather tedious subject for developers. It constantly needs to be kept up-to-date, and they often lack the skills to put their own thoughts down on paper in a clear and organized way for others to understand.

I myself first encountered the topic of documentation many years ago through the book “Software Engineering” by Johannes Siedersleben. Ed Yourdon is quoted there as saying that before methods like UML, documentation often took the form of a Victorian novel. During my professional life, I’ve encountered a few such Victorian novels myself. The frustrating thing was that after battling through the textual desert, and “battling” really is the only way to describe the feeling, you still didn’t have the information you were looking for. To paraphrase Goethe’s Faust: “So here I stand, poor fool, no wiser than before.”

Here we already see a first criticism of poor documentation: inappropriate length and a lack of information. We must recognize that writing isn’t something everyone is born with. After all, you became a software developer, not a book author. For the concept of “successful documentation,” this means that you shouldn’t force anyone to do it and instead look for team members who have a knack for it. This doesn’t mean, however, that everyone else is exempt from documentation tasks. Their input is essential for quality. Proofreading, pointing out errors, and suggesting additions are all necessary tasks that can easily be shared.

It’s highly advisable to provide occasional rhetorical training for the team or individual team members. The focus should be on precise, concise, and understandable expression. This also involves organizing your thoughts so they can be put down on paper and follow a clear and coherent structure. The resulting improved communication has a very positive impact on development projects.

Up-to-date documentation that is easy to read and contains important information quickly becomes a living document, regardless of the format chosen. This is also a fundamental concept for successful DevOps and agile methodologies. These paradigms rely on good information exchange and address the avoidance of information silos.

One point that really bothers me is the statement: “Our tests are the documentation.” Not all stakeholders can program, so they are unable to understand the test cases. Furthermore, while tests demonstrate the behavior of functions, they don’t inherently demonstrate correct usage, and variations of workable solutions are usually missing as well. For test cases to have documentary character, specific tests must be developed precisely for this purpose. In my opinion, this approach has two significant advantages. First, the implementation documentation stays up-to-date, because changes will cause the test case to fail. Second, the developer becomes aware of how their implementation is actually used and can correct a flawed design in a timely manner.
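
For illustration, such a documentation-oriented test could look like the following sketch, assuming a hypothetical PriceCalculator class and JUnit 5; the point is that the test reads as a usage example rather than as an edge-case probe:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    // Hypothetical class under test, included so the example is self-contained.
    class PriceCalculator {
        private double vatRate;

        void setVatRate(double rate) { this.vatRate = rate; }

        double grossPrice(double netPrice) { return netPrice * (1 + vatRate); }
    }

    // The test doubles as documentation: the display name and the body spell
    // out the intended call sequence for a typical use case.
    class PriceCalculatorDocumentationTest {

        @Test
        @DisplayName("A net price of 100.00 with 20% VAT yields a gross price of 120.00")
        void documentsTypicalUsage() {
            PriceCalculator calculator = new PriceCalculator();
            calculator.setVatRate(0.20);
            assertEquals(120.00, calculator.grossPrice(100.00), 0.001);
        }
    }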

Of course, there are now countless technical solutions that are suitable for different groups of people, depending on their perspective on the system. Issue and bug tracking systems, such as the commercial JIRA or the open-source Redmine, map entire processes. They allow testers to assign identified problems and errors in the software to a specific release version. Project managers can use release management to prioritize fixes, and developers document the implemented fixes. That’s the theory. In practice, I’ve seen in almost every project how the comment function in these systems is misused as a chat to describe the change status. The result is a bug report with countless useless comments, and any real, relevant information is completely missing.

Another widespread technical solution in development projects is the use of enterprise wikis. They enhance simple wikis with navigation and allow the creation of closed spaces where only explicitly authorized user groups receive granular permissions such as read or write. Besides the widely used commercial solution Confluence, there’s also a free alternative called BlueSpice, which is based on MediaWiki. Wikis allow collaborative work on a document, and individual pages can be exported as PDFs in various formats. To ensure that the wiki pages remain usable, it’s important to maintain clean and consistent formatting. Tables should fit their content onto a single A4 page without unwanted line breaks. This improves readability. There are also many instances where bulleted lists are preferable to tables for the sake of clarity.

This brings us to another very sensitive topic: graphics. It’s certainly true that a picture is often worth a thousand words. But not always! When working with graphics, be aware that images often take considerable time to create and can usually only be adapted with significant effort. Several conclusions follow that make life easier. A standard program (and format) should be used for creating graphics, and expensive graphics programs like Photoshop and CorelDRAW should be avoided. Graphics created for wiki pages should be attached to the page in their original, editable form. A separate repository can also be set up for this purpose to allow reuse in other projects.

If an image offers no added value, it’s best to omit it. Here’s a small example: It’s unnecessary to create a graphic depicting ten stick figures with a role name or person underneath. Here, it is advisable to create a simple list, which is also easier to supplement or adapt.

But you should also avoid overloaded graphics. True to the motto “more is better,” overly detailed information tends to cause confusion and can lead to misinterpretations. A recommended book is “Documenting and Communicating Software Architectures” by Stefan Zörner. In this book, he effectively demonstrates the importance of different perspectives on a system and which groups of people are addressed by a specific viewpoint. I would also like to take this opportunity to share his seven rules for good documentation:

  1. Write from the reader’s perspective.
  2. Avoid unnecessary repetition.
  3. Avoid ambiguity; explain notation if necessary.
  4. Use standards such as UML.
  5. Include the reasons (why).
  6. Keep the documentation up-to-date, but never too up-to-date.
  7. Review the usability.

Anyone tasked with writing the documentation, or ensuring its progress and accuracy, should always be aware that it contains important information and presents it correctly and clearly. Concise and well-organized documentation can be easily adapted and expanded as the project progresses. Adjustments are most successful when the affected area is as cohesive as possible and appears only once. This centralization is achieved through references and hyperlinks, so that changes in the original document are reflected in the references.

Of course, there is much more to say about documentation; it’s the subject of numerous books, but that would go beyond the scope of this article. My main goal was to raise awareness of this topic, as paradigms like Agile and DevOps rely on a good flow of information.


Passwords, but secure?

Does someone really need to write about passwords again? – Of course not, but I’ll do it anyway. The topic of secure passwords is a perennial one for a reason. In this constant game of cat and mouse between hackers and users, there’s only one viable strategy: staying on top of things. Faster computers and the availability of AI systems are constantly reshuffling the deck. In cryptography, there’s a well-established rule, known as Kerckhoffs’s principle, that simply keeping information secret isn’t sufficient protection. Rather, the algorithm used for keeping it secret should be disclosed, and its security should be mathematically demonstrable.

Security researchers are currently observing a trend toward using artificial intelligence to guess supposedly secure passwords. So far, one rule has been established when dealing with passwords: the longer a password, the more difficult it is to guess. We can test this fact with a simple combination lock. A three-digit combination lock has exactly 1,000 possible combinations. Now, the effort required to manually try all the numbers from 000 to 999 is quite manageable and, with a little skill, can be solved in less than 30 minutes. If you change the combination lock from three to five digits, this work multiplies, and finding the solution in less than 30 minutes becomes more a matter of luck, especially if the combination is in the lower number range. Security is further increased if each digit allows not only numbers from 0 to 9, but also letters, both upper and lower case.
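
The growth of the search space is easy to verify yourself, for example with a few lines of Java; the 62-symbol case assumes digits plus upper- and lowercase letters:

    import java.math.BigInteger;

    // Sketch: how the search space grows with length and alphabet size.
    public class SearchSpace {

        public static void main(String[] args) {
            System.out.println(BigInteger.TEN.pow(3));         // 3 digits, 0-9: 1000
            System.out.println(BigInteger.TEN.pow(5));         // 5 digits, 0-9: 100000
            System.out.println(BigInteger.valueOf(62).pow(5)); // 5 positions, 62 symbols: 916132832
        }
    }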

This small and simple example shows how the ‘vicious circle’ works. Faster computers can try possible combinations in ever shorter times, so the number of possible combinations must be increased enormously with the least possible effort. While in the early 2000s eight characters with numbers and letters were sufficient, today it should ideally be 22 characters with numbers, upper- and lowercase letters, and special characters. Proton’s AI assistant Lumo makes the following recommendation:

  • Length at least 22 characters
  • Mixture: Uppercase/lowercase letters, numbers, special characters, underscore

A practical example of a secure password would be: R3gen!Berg_2025$Flug.

Here we see the first vulnerability. No one can remember such passwords. At work, someone might give you a password policy that you have to follow – oh well, that’s a shame, live with it! But don’t worry, there’s a life hack for everything.

That’s why it’s still common for employees to keep their passwords in close proximity to their PCs. Yes, they still keep them on little slips of paper under the keyboard or as Post-it notes on the edge of the screen. As an IT technician, when I want to log into a coworker’s PC while they’re not at their desk, I still glance over the edge of the screen and then look under the keyboard.

How do I know it’s the password? Sure! I look for a sequence of uppercase and lowercase letters, numbers, and special characters. If there were a Post-it stuck to the edge of my screen with, for example, the inscription “Wed Foot Care 10:45,” I wouldn’t even recognize it as a password at first.

So, as a password, “Wed Foot Care 10:45” would be 19 characters long, with upper and lower case letters, numbers, and special characters. Perfect! And at first glance, it wouldn’t even be recognizable as a password. By the way: the note should have as little dust or patina on it as possible.

Everyday working life also has its charming peculiarities, such as having to change your password monthly, with the new password not allowed to match any used in the past few months. Here, too, employees came up with solutions such as password01, password02, and so on, until all 12 months were covered. So the verification process was extended, and a new password then had to differ from its predecessor in a certain number of characters.

But even in our private lives, we shouldn’t take the topic of secure passwords lightly. The services we regularly log in to have become an important part of many people’s lives. Online banking and social media are important points here. The number of online accounts is constantly growing. Of course, it’s clear that you shouldn’t recycle your passwords. So you should use multiple passwords. How best to go about this—how many and how to structure them—is something everyone has to decide for themselves, of course, in a way that suits them personally. But we’re not memory masters, and the less often we need a particular password, the harder it is for us to remember it. Password managers can help.

Password managers

The good old filing cabinet. By the way, battery life: infinite. Even if that might seem unworthy of a computer nerd, it’s still possibly the most effective way to store passwords at home.

With today’s number of passwords, management software is certainly attractive, but there’s a risk: if someone gains control of that software, they have you, as our American friends colloquially say, “by the balls”, in other words, in a stranglehold. This applies especially to cloud solutions that seem so convenient at first glance.

For Linux and Windows, however, there is a solution you can install locally to manage the many passwords of your online accounts. This software is called KeePass; it is open source and can be used legally and free of charge even in commercial settings. This so-called password store keeps the passwords encrypted on your hard drive. Of course, it’s quite tedious to copy and paste login details from the password manager on every website. A small browser plugin called KeePass Tusk can help here. It’s available for common browsers, including Brave, Firefox, and Opera. Even if other people are looking over your shoulder, your password is never displayed in plain text, and after copying and pasting, it is automatically removed from the clipboard after a short time.

It’s a completely different story when you’re on the go and have to work on someone else’s computer. In your personal life, it’s a good idea to adapt passwords to the circumstances, depending on where you use them. Let’s say you want to log into your email account on a PC, but you may not be able to guarantee that you’re not being watched at all times.

At this point, it would certainly be counterproductive to dig out a cheat sheet with a password written on it that follows all the recommended guidelines, uppercase and lowercase letters, numbers, special characters, if possible including Japanese and Cyrillic, and then type it in character by character with your index finger, hunt-and-peck style.

If you’re not too bad at typing, meaning you can type a bit faster, you should use a password that you can enter in 1 to 1.5 seconds. This will overwhelm the average observer, especially if you use the modifier keys discreetly: draw attention to your right hand while typing and occasionally, inconspicuously, press the Shift or Alt key with your left hand (on an advanced keyboard layout also labeled ‘Kölsch’ instead of ‘Alt’, a German beer joke).

Perhaps, at a cautious assessment, the leaking of your personal Tetris high score list doesn’t constitute a security-relevant loss. Access to online banking is a completely different matter. It’s therefore certainly sensible to use a separate password for financial transactions, a different one for less critical logins, and a simple one for “run-of-the-mill” registrations.

If you have the option to create alias email addresses, that is also very useful, since logging in usually requires not only a password but also an email address. A unique email address created only for the corresponding site can not only increase security but also give you the opportunity to make yourself unreachable if you wish. Every now and then, for example, I receive advertisements even though I explicitly opted out of advertising. Strangely enough, these are usually the same outfits that, for example, don’t stick to the payment terms they promised before registration. So I simply take the most effective route and delete the alias email address → and that’s it!

Memorability

I’d also like to say a few words about the memorability of passwords. As we’ve seen in this article, it’s a good idea to use a different password for each online account wherever possible. This way, our logins to Facebook and other social media accounts aren’t affected if Sony’s PlayStation Store gets hacked again and all its customer data is stolen. Of course, there is now multi-factor authentication and many other security solutions, but operators don’t always offer them. Moreover, the motto in hacker circles is: every problem has a solution.

To create a memorable password that meets all security criteria, we’ll use a simple approach. Our password consists of a very complex static part that, as far as possible, avoids any personal reference. As a mnemonic, we can use a mental image, as in the initial example: a combination of words (“Regen”, “Berg”) and a year, complemented by another word (“Flug”). It’s also very popular to replace letters with similar-looking numbers, such as an E with a 3 or an I with a 1. To avoid shrinking the search space by predictably turning every E into a 3, we don’t do this for all of them. This results in a static password part that might look like this: R3gen!Berg_2025$Flug. This static part is easy to remember. If we now need a password for our X login, we supplement the static part with a dynamic segment that applies only to our X account. The static part can simply be followed by a special character like # and then the reference to the login, for example: R3gen!Berg_2025$Flug#sOCIAL.med1a-X. As mentioned several times, this is an idea that everyone can adapt to their own needs.
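
As a toy illustration of this scheme (real passwords should of course never be hard-coded anywhere), the composition could be sketched like this:

    // Toy sketch of the static + dynamic password scheme described above.
    // Never hard-code real passwords; this only illustrates the composition.
    public class PasswordScheme {

        private static final String STATIC_PART = "R3gen!Berg_2025$Flug";

        static String passwordFor(String serviceTag) {
            // '#' separates the memorized static part from the per-service part.
            return STATIC_PART + "#" + serviceTag;
        }

        public static void main(String[] args) {
            System.out.println(passwordFor("sOCIAL.med1a-X")); // password for the X login
        }
    }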

In conclusion

At work, you should always be aware that whoever logs into your account is also acting on your behalf. That is, under your identity.

It’s understandable that things sometimes run much more smoothly if a colleague can quickly log in as you. The likelihood of this coming back to haunt you is certainly low, as long as they handle your password carefully.

Of course, you shouldn’t underestimate the issue of passwords in general, but even if you lose a password: Life on the planet as we know it won’t change significantly. At least not because of that. I promise!


Marketing with artificial intelligence

Nothing is as certain as change. This wisdom applies to virtually every area of our lives. The internet is also in a constant state of flux. However, the many changes in the technology sector are happening so rapidly that it’s almost impossible to keep up. Anyone who has based their business model on marketing through online channels is already familiar with the problem. Marketing will also continue to experience significant changes in the future, influenced by the availability of artificial intelligence.

Before we delve into the details in a little more detail, I would like to point out right away that by no means has everything become obsolete. Certainly, some agencies will not be able to continue to assert themselves in the future if they focus on traditional solutions. Therefore, it is also important for contractors to understand which marketing concepts can be implemented that will ultimately achieve their goals. Here, we believe that competence and creativity will not be replaced by AI. Nevertheless, successful agencies will not be able to avoid the targeted use of artificial intelligence. Let’s take a closer look at how internet user behavior has changed since the launch of ChatGPT around 2023.
More and more people are turning to AI systems to obtain information. This naturally leads to a decline in the use of traditional search engines like Google. Search engines per se are unlikely to disappear, as AI models also require an indexed database on which to operate. It’s more likely that people will no longer access search engines directly, but will instead have a personal AI assistant that runs all search queries for them. This also suggests that the number of freely available websites may decline significantly, as a lack of visitors will make them hardly profitable. What will replace them?
Following current trends, it can be assumed that well-known and possibly new platforms such as Instagram, Facebook, and X will continue to gain market power. Short texts, graphics, and videos already dominate the internet. All of these facts already require a profound rethinking of marketing strategies.

As the saying goes, those pronounced dead live longer. It would therefore be wrong to completely neglect traditional websites and the associated SEO. Be aware of the business strategy you are pursuing with your internet and social media presence. As an agency, we specifically help our clients review and optimize existing strategies or develop entirely new ones.
We clarify questions such as whether you want to sell goods or services, or whether you want to be perceived as a center of expertise on a specific topic. Here we follow the classic approach from search engine optimization, which is intended to generate qualified traffic. It is of little use to receive thousands of impressions if only a small fraction of the visitors are interested in the topic. The previously defined marketing goals are then promoted with cleverly placed posts on websites and social media.
Of course, every marketing strategy stands or falls with the quality of the products or services offered. Once customers feel they have received a bad product or poor service, a negative campaign can spread explosively. It is therefore highly desirable to receive honest reviews from real customers on various platforms.
Countless dubious agencies offer their clients a guaranteed number of followers, clicks, or reviews. Such results quickly evaporate once the service is no longer paid for. Besides, generic posts created by bots are easy to spot, and many people now simply ignore them, so the effort is pointless. Furthermore, real reviews and comments are an important tool for assessing the true external perception of your business. If you are constantly being told how great you are, you might be tempted to believe it. Some stars have experienced this firsthand.

We therefore rely on the regular publication of high-quality content aligned with the marketing objective in order to generate attention. We use this attention to encourage user interaction, which in turn leads to greater visibility. Our AI models help us identify current trends early so that we can incorporate them into our campaigns.
Based on our experience, artificial intelligence allows us to create and schedule high-frequency publications over a relatively long campaign period. The time at which a post or comment goes live also influences its success.
There are isolated voices predicting the end of agencies. The reasoning is usually that, thanks to AI, many small business owners can now handle all the tasks that make up marketing themselves. We don’t share this view. Many entrepreneurs simply don’t have the time to manage marketing across all channels on their own. That’s why we rely on a healthy mix of manual work and automation in many steps, because we believe that success doesn’t just happen in a test tube. We use our tools and experience to achieve high-quality, individual results.


Beyond code: Why soft skills become irreplaceable for developers in the AI era

AI tools such as GitHub Copilot, ChatGPT, and other code generators are changing the developer role. Many programmers wonder which skills will be in demand in the future. AI will not replace developers. But developers without soft skills will replace themselves.

“The best developers of 2030 won’t be better coders, but better translators between humans and machines.” Andrej Karpathy, ex-OpenAI

In June 2025, Microsoft cut 9,000 jobs [1]. Companies such as Microsoft, Google, and IBM are restructuring their teams, and AI tools are often part of the strategy. One reason for these waves of layoffs is the broad availability of powerful AI tools. According to a study by McKinsey [2], AI systems can accelerate up to 60% of the developer workload. If AI can do up to 80% of the coding, what makes me irreplaceable? More and more people are asking themselves this central question because they are directly affected by the fourth industrial revolution, or will be in the foreseeable future.

Unlike earlier revolutions, there is no ‘retraining as a web designer’ this time. AI tools such as Devin or ChatGPT automate not only tasks but entire job profiles, and faster than most of those affected can react. Studies suggest that up to 30% of all developer roles will not merely be transformed by 2030, but replaced outright by artificial intelligence.

This trend can be found in almost all professions, including the classic trades. On YouTube you can find videos of small, cute robots delivering orders in Moscow, or of robots printing out entire houses. New patents mix steel fibers into concrete to increase its stability and replace classic rebar. There are also machines that lay floor tiles. The list of activities that can be taken over by machines and AI is long.

Anyone who internalizes this forecast may well become anxious and worried. Not merely surviving this new era but emerging as one of its winners requires a high degree of flexibility. That is why one of the most important qualities we have to develop is a flexible mind. For although AI is very powerful, its limits are real. If we ask ourselves what defines us as humans, we find an important quality: creativity. How can we use it for future success? So that the advice ‘be creative’ does not degenerate into a platitude, let me first look at the path that will most likely lead nowhere.

Junior developers often ask me which framework, which programming language, which operating system, and so on, they should learn. Even in the old days, these were the wrong questions. It’s not about following trends, but about a calling. If programming is to be a calling, it is first of all about understanding what the code you write really does. With a profound understanding of the source code, performance improvements can be found quickly, and so can optimizations in the area of security. Locating and eliminating errors are also hallmarks of good developers. It is precisely in these areas that human creativity is superior to artificial intelligence. The consequence, of course, is to expand exactly these skills.

Anyone who was only busy chasing the latest fashions was not among the sought-after specialists even in the ‘old’ days. Pure code monkeys, whose activity consists primarily of copying and pasting code snippets without really grasping what they mean, were always easy to replace. Especially now that AI is supposed to increase productivity, it is important to decide quickly and reliably where a proposed implementation needs adjustments, so that there are no unpleasant surprises when the application goes into production. This also means that AI is a tool that needs to be used efficiently. To stay on the winning side in the future, it is essential to significantly improve your own productivity by working with AI. Companies expect that, with AI support, their employees can handle four to five times the current workload.

To work effectively with artificial intelligence, your own communication skills are essential. Only if you have clearly structured your thoughts can you formulate them correctly and precisely. A significant increase in performance is only achieved if the first instruction already produces the desired result. Anyone who has to explain to the language model time and again how a request is meant, for example because it contains ambiguities, will gain little time through AI.

You could say that the developer of the future needs some management skills. In addition to formulating clear tasks, a great deal of self-management is required in order to allocate suitable resources for optimal results. It is not only artificial intelligence that threatens your job, but also strong competition from the Asian region, where well-trained, motivated, and capable people are now available in large numbers.

So we see that very turbulent times lie ahead; the world will turn a little faster. Anyone who perceives these changes not as a threat but as a challenge has a good chance of being fit for the not-too-distant future. Anyone who sets the course now is well prepared for what is coming and has nothing to fear.


How to buy and pay with Bitcoin

For many, Bitcoin (BTC) is purely a speculative asset for making money. However, the cryptocurrency is also ideal for making payments. You don’t need any in-depth technical knowledge to pay with Bitcoin, and Bitcoin can be bought in relatively small amounts, for example 10 euros. Everything you need to get started is explained in this article in an easy-to-understand manner.

To buy your first Bitcoin, you need a regular bank account, €20, and about 10 minutes of time. Depending on the bank, it can take up to a day for the transferred euros to be credited as Bitcoin. Incidentally, all of elmar-dott.com’s services can also be paid for using Bitcoin.

Before we make our first transaction, we need to create a wallet. ‘Wallet’ is simply the English word for purse, so a Bitcoin wallet is nothing more than a digital purse. The program with which you create and manage a wallet is very similar to a typical banking app. Wallets can easily be set up on computers, smartphones, and tablets (Android & iPhone/iPad). There are also hardware wallets that work similarly to a USB stick and store the bitcoins on the device.

The most important difference between a bank account and a wallet is that the bitcoins stored in your wallet actually belong to you. No bank or other institution has access to this wallet. You can compare the bitcoins stored in your wallet with the cash in your physical wallet. So let’s first look at how to create your own wallet. For this we use the free open-source software Electrum. The Electrum Bitcoin Wallet is developed in Python 3 and is available for Linux, Windows, macOS, and Android.

1st Step: Create a wallet

After the app has been downloaded and started, we can get started and create our Bitcoin wallet. First, we give our wallet a name and press Next. We are then asked which wallet type we would like to create. Here we leave it at the default. We then have to create a seed. The seed is 12 randomly created words that we can expand with our own terms/character strings using the Options button. The specified terms (seed) are extremely important and must be kept safe. It is best to write them on a piece of paper. The seed allows full access to your personal wallet. With the seed you can easily transfer your wallet to any device. A secure password is then assigned and the wallet file is encrypted. We have now created our own Bitcoin wallet, with which we can send and receive Bitcoin.
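
Purely to illustrate what such a seed looks like, here is a tiny Python sketch that picks 12 words at random. This is not Electrum’s actual seed algorithm, which derives the words from carefully generated entropy according to a standardized procedure, and the toy wordlist below is far too short for real security.

```python
# Illustration only: what a 12-word seed looks like. Real wallets use a
# standardized wordlist of 2048 words and derive the words from entropy.

import secrets

WORDLIST = ["apple", "river", "stone", "cloud", "tiger", "lamp",
            "forest", "music", "silver", "candle", "orbit", "maple",
            "anchor", "breeze", "copper", "dawn"]  # toy list, far too small

seed_words = [secrets.choice(WORDLIST) for _ in range(12)]
print(" ".join(seed_words))  # write the words on paper, never store digitally
```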

In this way, you can create as many wallets as you like, and many people use two or more wallets at the same time. This approach is called proxy pay: it conceals the actual recipient and is intended to prevent transfer services from refusing transactions to undesirable recipients.

In order to convert your own euros into bitcoin, a so-called broker is required. You transfer euros or other currencies to this broker and receive bitcoin in return. The bitcoin is first transferred to a wallet managed by the broker. From this wallet you can already send bitcoin to any other wallet. As long as the bitcoin is still in the broker’s wallet, however, the broker can block the wallet or steal the bitcoin on it. Only when we transfer the purchased bitcoin to a self-managed wallet, as we created in step 1, are the coins in our possession and no outsider has access to them.

The problem that can arise is that these broker services, also called crypto exchanges, can keep a list of bitcoin wallets to which they do not send transactions. To avoid this, you transfer your Bitcoins from the wallet of the Bitcoin exchange where you bought your coins to your own wallet. You can also use multiple wallets to receive payments. This strategy makes it difficult to track payment flows. The money that has been received in various wallets can now be easily transferred to a central wallet where you can save your coins. It is important to know that fees are also charged when sending Bitcoin. Just like with a checking account.

Understanding Bitcoin Transaction Fees

Every time a transaction is made, it is stored in a block. Blocks have a limited size of 1 MB, which limits the number of transactions per block. Since only a limited number of transactions fit in a block, users compete to have their transactions included in the next block. This is where Bitcoin transaction fees come in: users offer fees to make their transactions more attractive to miners. The higher the fee, the more likely the transaction will be confirmed quickly. The amount of the fee depends on several factors:

  • Network load: When load is high, fees increase because more users want to prioritize their transactions.
  • Transaction size: Larger transactions require more space in the block and therefore incur higher fees.
  • Market conditions: General demand for Bitcoin and market volatility can affect fees.

Most wallets calculate fees automatically based on these factors. However, some wallets offer the ability to manually adjust fees to either save costs or achieve faster confirmation.

Bitcoin transaction fees are not fixed and can vary widely. Bitcoin transactions can be confirmed within minutes to hours, depending on the amount of the fees. Bitcoin fees are not calculated based on the value of the transaction (i.e. how much Bitcoin you send), but rather based on the size of the transaction in bytes. The fee you pay is given in Satoshis per byte (sat/byte). A Satoshi is the smallest unit of Bitcoin (1 BTC = 100 million Satoshis).
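
As a quick plausibility check, here is a small Python sketch of this arithmetic. The transaction size, fee rate, and exchange rate are invented example values, not live data; current figures come from the sites mentioned below.

```python
# Estimating a Bitcoin transaction fee from its size and the fee rate.
# All numbers are assumptions for illustration, not market data.

SATOSHIS_PER_BTC = 100_000_000   # 1 BTC = 100 million satoshis

tx_size_bytes = 250              # typical simple transaction (assumption)
fee_rate_sat_per_byte = 20       # e.g. looked up on bitinfocharts.com (assumption)
btc_price_eur = 60_000           # e.g. looked up on coincodex.com (assumption)

fee_sat = tx_size_bytes * fee_rate_sat_per_byte
fee_btc = fee_sat / SATOSHIS_PER_BTC
fee_eur = fee_btc * btc_price_eur

print(f"Fee: {fee_sat} sat = {fee_btc:.8f} BTC = {fee_eur:.2f} EUR")
# Fee: 5000 sat = 0.00005000 BTC = 3.00 EUR
```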

You can find out how many satoshis you get for €1 on coincodex.com, and the current transaction fee can be found on bitinfocharts.com.

Notes on the anonymity of Bitcoin

When you pay with Bitcoin, you send coins from your wallet to a recipient wallet. This transaction is publicly visible. Basically, when you create a wallet using software such as Electrum, the owner of the wallet is not stored. Nevertheless, conclusions about the owner of a wallet can be drawn from the transactions. Using multiple wallets can make it more difficult to assign them to a real person and conceal money flows. But 100% anonymity cannot be guaranteed. Only cash offers absolute anonymity.

Nevertheless, Bitcoin has some advantages over cash. If you travel a lot and don’t want to keep your money in your bank account, you can easily carry very large amounts with you without them being found and confiscated when crossing borders. You are also fairly well protected against theft. If you save your wallet in an encrypted file on various data storage devices, you can easily restore it using the seed.

2nd Step: Buy Bitcoin

Before we can start using Bitcoin, we first need to get our hands on some. We can do this quite easily by buying Bitcoin. Since a whole Bitcoin can be worth several thousand euros depending on the exchange rate, it makes sense to buy fractions of a Bitcoin. As already mentioned, the smallest unit of a Bitcoin is the satoshi (1 BTC = 100 million satoshis). The easiest way to buy BTC is via a Bitcoin exchange. A very easy-to-use option is Wallet of Satoshi for Android & iPhone.

With this app you can buy, receive, and send Bitcoin. Once you have installed Wallet of Satoshi on your smartphone and set up the wallet, you can buy satoshis immediately via the menu, with a bank transfer starting at just 20 euros.
A very practical detail is that Wallet of Satoshi also lets you buy Bitcoin using other currencies such as US dollars. This is excellent for international business relationships, where you no longer have to deal with all sorts of exchange rates. Since I consider Bitcoin an alternative means of payment, it makes sense for me to leave only an amount of 200 to 500 euros in the Wallet of Satoshi; anything above that is transferred to the Electrum wallet. This is purely a precautionary measure, because Wallet of Satoshi is based on the Lightning Network and is run by a private provider. True to the motto: better safe than sorry. This strategy also saves transaction fees, which can add up to a considerable amount, especially for micropayments of a few euros.

3rd Step: Pay with Bitcoin

In order to pay with Bitcoin, you need a valid wallet address. This address is usually a long, cryptic character string. Since things can quickly go wrong when entering it manually, this address is often given as a QR code.
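
If you want to hand out your own address as a QR code, this can be done with a few lines of Python. The sketch below uses the third-party qrcode package (installed with pip install qrcode[pil]); the address string is a made-up placeholder, and the bitcoin: URI prefix is a common convention I am assuming here.

```python
# Turning a wallet address into a scannable QR code image.
# Requires: pip install qrcode[pil]

import qrcode

address = "bc1qexampleexampleexampleexample"  # placeholder, not a real wallet
img = qrcode.make(f"bitcoin:{address}")       # payment-URI style content (assumption)
img.save("wallet-address.png")                # scan this image with the sending app
```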

To make a payment, for example, via the Wallet of Satoshi to any Bitcoin wallet, either the character string or, better yet, the QR code is required. To do this, open the application, press the send button and then use the camera to scan the QR code of the wallet where the Bitcoin should go.

If you send Bitcoin via the Wallet of Satoshi, for example, all transactions are completely transparent. That is why you can also send Bitcoin to an anonymous wallet instead. In step 1, I showed in detail how the Electrum wallet is created. Now let’s look at how to find the wallet’s receiving address. To do this, we go to the Wallet entry in the Electrum menu and select the Information item. We then get a display like the one in the following screenshot.

The master public key is the character string that identifies our wallet and from which the addresses to which bitcoins can be sent are derived. If you press the QR symbol at the bottom right of the field, you will receive the corresponding QR code, which can be saved as an image file. If you now make transfers from a service such as the Wallet of Satoshi, the service does not know who the owner of the receiving wallet is. Finding this out requires complex analysis.

Stability in the crisis – business continuity & disaster recovery

The saying ‘better to have than to have needed’ is something we have all experienced first-hand, whether in our professional or private lives. ‘If only we hadn’t clicked on the malicious link in that email,’ or something similar, goes through our heads. But once the damage has been done, it is too late to take precautions.

What is usually just annoying in our private lives can very quickly become a threat to our existence in the business world. For this reason, it is important to set up a safety net in good time for the event of a potential loss. Unfortunately, many companies do not pay adequate attention to the issue of disaster recovery and business continuity, which then leads to high financial losses in an emergency.

The list of possible threat scenarios is long, and some scenarios are more likely to occur than others. It is therefore important to carry out a realistic risk assessment that weighs the individual options. This helps to prevent the resulting costs from escalating.

The Corona pandemic was a life-changing experience for many people. The state-imposed hygiene rules in particular presented many companies with enormous challenges. The keyword here is home office. In order to get the situation under control, employees were sent home to work from there. Since there is no established culture and even less of an existing infrastructure for home working in German-speaking countries in particular, this had to be created quickly under great pressure. Of course, this was not without friction.

But it does not always have to be a drastic event. Even a mundane power failure or a power surge can cause considerable damage. It does not take a building fire or a flood to bring everything to an immediate standstill; a hacker attack also falls into the category of serious threats. These examples should suffice to illustrate the problem, so let’s turn to the question of what good precautions can be taken.

The easiest and most effective measure to implement is a comprehensive data backup. To ensure that no data is lost, it helps to list and categorize the various data. Such a table should contain the storage paths to be backed up, the approximate storage usage, a prioritization by confidentiality, and the category of the data. Categories include project data, tenders, email correspondence, financial accounting, supplier lists, payroll statements, and so on. In the context of data protection, it is of course clear that not everyone in the company is authorized to read this information, which is why sensitive data must be protected by encryption. Depending on the protection class, this can be a simple password for a compressed archive, a cryptographically encrypted directory, or an encrypted hard drive. How often a data backup should be carried out depends on how often the original data changes: the more often the data changes, the shorter the backup intervals should be. Another point is the target storage of the backup. A fully encrypted archive that is stored locally in the company can certainly be uploaded to cloud storage after a successful backup, although this can be very expensive for large amounts of data and is therefore not necessarily suitable for small and medium-sized enterprises (SMEs). Ideally, there are several replicas of a backup stored in different places.
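
Such a table can even live next to the backup scripts in machine-readable form. Here is a minimal Python sketch; the paths, categories, and intervals are invented examples, not a prescription.

```python
# A minimal backup manifest as described above. All entries are examples.

backup_manifest = [
    {"path": "/srv/projects",   "category": "project data",
     "confidentiality": "internal",             "interval_hours": 24},
    {"path": "/srv/accounting", "category": "financial accounting",
     "confidentiality": "confidential",         "interval_hours": 12},
    {"path": "/srv/payroll",    "category": "payroll statements",
     "confidentiality": "strictly confidential", "interval_hours": 24},
]

# Handle the most frequently changing data (shortest interval) first.
for entry in sorted(backup_manifest, key=lambda e: e["interval_hours"]):
    print(f'{entry["path"]}: {entry["category"]} '
          f'({entry["confidentiality"]}), every {entry["interval_hours"]}h')
```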

Of course, it is of little use to create extensive backups only to find out in an emergency that they are faulty. That is why verification of the backup is extremely important. Professional data backup tools contain a mechanism that compares the written data with the original. The Linux command rsync also uses this mechanism. A simple copy & paste does not meet the requirement. But a look at the file size of the backup is also important. This quickly shows whether information is missing. Of course, there is much more that can be said about backups, but that would go too far at this point. It is important to develop the right understanding of the topic.
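
For a home-grown backup, such a verification step can be as simple as comparing checksums. Below is a minimal Python sketch under the assumption of plain directory-to-directory copies; the paths are placeholders.

```python
# Verifying a backup by comparing SHA-256 checksums of original files
# with their copies. Directory paths are placeholders.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source: Path, backup: Path) -> bool:
    ok = True
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        copy = backup / src_file.relative_to(source)
        if not copy.exists() or sha256_of(src_file) != sha256_of(copy):
            print(f"MISMATCH: {src_file}")
            ok = False
    return ok

if __name__ == "__main__":
    verify_backup(Path("/srv/projects"), Path("/mnt/backup/projects"))
```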

If we take a look at companies’ IT infrastructure, we quickly realize that software installations are predominantly provisioned manually. If we consider that a computer system may suddenly no longer be able to perform its service due to a hardware error, it is important to have a suitable emergency strategy at hand. The time-consuming work after a hardware failure is reinstalling the programs once the device has been replaced. For cost reasons, a fully redundant infrastructure makes little sense for many companies. A proven solution comes from the DevOps field and is called Infrastructure as Code (IaC). This is mainly about providing services such as email or databases via scripts. For the business continuity and disaster recovery approach, it is sufficient if the automated installation or update is initiated manually. You should not rely on proprietary solutions from cloud providers, but use freely available tools. A possible scenario is a price increase by the cloud provider, or changes to the terms and conditions that are unacceptable to the company, making a quick migration necessary. If the automation solution is based on a special technology that other providers cannot offer, such a change becomes extremely difficult.
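
To give a feeling for the idea, here is a deliberately tiny Python sketch of an idempotent installation script. Real setups would use dedicated, vendor-neutral IaC tools such as Ansible; the package names and the Debian/Ubuntu-based system are assumptions for illustration.

```python
# Idea of Infrastructure as Code, reduced to a tiny idempotent installer.
# Must be run with root privileges on a Debian/Ubuntu system (assumption).

import shutil
import subprocess

REQUIRED_PACKAGES = ["postfix", "mariadb-server"]  # example services (assumption)

def ensure_installed(package: str) -> None:
    # "dpkg -s" exits non-zero if the package is missing, so re-running
    # the script is safe: already-installed packages are simply skipped.
    check = subprocess.run(["dpkg", "-s", package],
                           capture_output=True, text=True)
    if check.returncode == 0:
        print(f"{package}: already installed")
        return
    subprocess.run(["apt-get", "install", "-y", package], check=True)
    print(f"{package}: installed")

if __name__ == "__main__":
    if shutil.which("apt-get") is None:
        raise SystemExit("This sketch assumes a Debian/Ubuntu system.")
    for pkg in REQUIRED_PACKAGES:
        ensure_installed(pkg)
```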

Employee flexibility should also be taken into account. Purchasing notebooks instead of desktop computers allows for a high level of mobility. This, of course, includes permission to take the laptop home and log into the company network from there. Teams that were already familiar with home office at the beginning of 2020 were able to continue their work from home almost seamlessly, which gave the companies in question a huge competitive advantage. It can also be assumed that large, representative company headquarters will become less important in the course of the digital transformation. Teams will then organize themselves flexibly and remotely using modern communication tools. Current studies show that such a setup increases productivity in most cases. A colleague with a cold who still feels able to work can simply do so from home, without colleagues running the risk of being infected.

We can already see how far this topic can be taken. The challenge, however, is to carry out a gradual transformation. The result is a decentralized structure that works with redundancies. It is precisely these redundancies that provide room for maneuver in the event of a disruption, compared to a centralized structure. Redundancies naturally add a cost factor: equipping employees with a laptop instead of a stationary desktop PC is somewhat more expensive. However, the price difference between the two solutions is no longer as dramatic as it was at the turn of the millennium, and the advantages outweigh the disadvantages. The transformation toward maintaining business capability in the event of disruptions does not mean rushing out and buying new equipment for all employees at once. Once you have determined what is necessary and useful for the company, new purchases can be prioritized: colleagues whose equipment has been written off and is due for replacement receive equipment in accordance with the new company guidelines. The same model is then followed in all other areas. This step-by-step optimization allows for a good learning process and ensures that every completed step has actually been implemented correctly.