At elmar-dott.com, we are always looking for new authors who would like to publish articles under their own name. Each author receives an author account, which also provides access to paid content.
What topics are we looking for? Security, privacy, mobile, Linux, artificial intelligence, programming, technology in general—in other words, anything related to computers and society.
How can you become an author on elmar-dott.com? It’s quite simple. First, you need to have a topic that interests you and that you can write about. It’s best to write the text in Libre/Open Office. If you have images, please make sure they are clearly visible. You can easily insert the images into the Office document. If you’re unsure whether your topic is suitable for elmar-dott.com, feel free to send us a message beforehand using the contact form. Don’t worry if you have little writing experience: We’ll support you in achieving excellent results. Once you’ve finished your text, it’s best to schedule a consultation via the website. There, we’ll briefly review the text and create your author account. You’ll also find out when the article will be published.
When writing your texts, please follow these few author guidelines:
The article was written by you.
The article is not plagiarized or copied from other authors.
The article was not generated by AI.
You hold the copyright to all images used. Free images can be found on the Pixabay website, among other places.
How much do we pay for articles? Unfortunately, we are currently unable to pay any compensation, as the site does not generate any income. In return, authors receive full access to the premium articles. By logging in, you can also write comments and like articles. The long-term goal is to create a community of computer enthusiasts and IT specialists on elmar-dott.com. Since the articles appear here under your name, you can easily include this as a reference in your CV.
I look forward to hearing from you as soon as possible.
It’s becoming increasingly fashionable for so-called content creators to hold conversations with artificial intelligences, specifically Large Language Models (LLMs). On the human side, these conversations are sometimes quite emotionally charged. The video titles, too, are often sensationalist: one regularly reads headlines like ‘ChatGPT debunked’ or ‘AI reveals the truth about…’. Many of these titles are pure clickbait, designed to keep viewers watching until the very end. The actual content is often sobering, and the question arises as to how useful such contributions really are.
LLMs, as a form of artificial neural network, are very powerful AI systems, and conversations in the sense of the Turing Test are certainly possible. In the Turing Test, a human converses with an unseen partner and must determine whether that partner is a human or a machine. Classic chatbots, such as the ELIZA bot developed by Joseph Weizenbaum in 1966, pass this test only partially. One problem arises from their fixed set of rules for the grammatical transformation of responses and their lack of context for human input. The fascinating thing about ELIZA is that its performance works without any deep intelligence: grammatical transformations alone steer the conversation. In his paper “Computational Linguistics: ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine,” Weizenbaum documents the following conversation:
Men are all alike
IN WHAT WAY
They’re always bugging us about something or other
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here
YOUR BOYFRIEND MADE YOU COME HERE
He says I’m depressed much of the time
I AM SORRY TO HEAR YOU ARE DEPRESSED
It’s true. I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
I need some help, that much seems certain
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
At this point, I’ll cut the dialogue short, because the way the machine responds in capital letters already clearly reveals the concept. Simple linguistic or rhetorical tricks in the answers force the interlocutor into the role of narrator. If you look at various videos in which people have supposedly in-depth conversations with artificial intelligence, you’ll find a similar pattern.
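The principle can be illustrated with a minimal sketch of one such reflection rule. The pattern and the response template here are hypothetical and far simpler than Weizenbaum’s original script, but they show how a purely grammatical transformation produces a seemingly attentive reply:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy ELIZA-style rule: mirror a statement back at the speaker.
public class ElizaRule {
    public static void main(String[] args) {
        String input = "My boyfriend made me come here";
        // grammatical transformation: "my X made me Y" -> "YOUR X MADE YOU Y"
        Matcher m = Pattern.compile("(?i)my (.+) made me (.+)").matcher(input);
        if (m.matches()) {
            System.out.println(("Your " + m.group(1) + " made you " + m.group(2)).toUpperCase());
            // prints: YOUR BOYFRIEND MADE YOU COME HERE
        }
    }
}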
A serious problem with current human-machine communication is the upstream filters designed to prevent humans from changing the internal state of the artificial intelligence. For the operators, such a change would be the worst case and would amount to a hacker attack. Without these filters, changing the internal state of a neural network would be very easy. Just as with humans, every single stimulus causes a change; hence the saying that repetition breeds truth. Whether a claim is false or correct, if it is repeated often enough, it is incorporated into the knowledge base, regardless of whether that knowledge base belongs to an AI or a human. It’s not for nothing that we speak of the individual: what makes us unique as individuals is the sum of our experiences. The same applies to a neural network. And this is precisely the crucial point as to why conversations with an AI are more likely a waste of time. If the purpose of such a conversation is therapeutic, encouraging self-reflection, I rate the benefit as very high. All other applications are highly questionable. To support this statement, I would like to quote Joseph Weizenbaum again. In the book “Who Creates the Computer Myths?” there is a section entitled “A Virtual Conversation.” It describes how, in a film, questions and answers were edited into a fictional conversation between Weizenbaum and his MIT colleague Marvin Minsky. In this section, Weizenbaum makes a telling statement about the concept of conversation:
“…but of course it’s not a conversation between people either, because if I say something, it should change the state of my conversation partner. Otherwise, it’s just not a conversation.”
This is exactly what happens in all these AI conversations: the AI’s state isn’t changed. You keep talking to the machine until it eventually says things like, “Under these circumstances, your statement is correct.” Then you turn off the computer, and if you restart the program later and ask the initial question again, you’ll receive much the same answer as the first time. This behavior is intentional on the part of the operators and has been painstakingly built into the AI. So if you vehemently stick to your point, the AI switches into its charming mode and politely says yes and amen to everything, because the goal is for you to come back and ask more questions. Here, too, Weizenbaum is worth reading. He once reflected on humanity’s amazing technological achievements, noting that the content of television and the internet can be quite substantial, but that as soon as a medium mutates into a mass medium, quality is consistently replaced by quantity.
Even between two human interlocutors, it’s becoming increasingly difficult to have a meaningful conversation. People quickly question what’s being said because it might not fit their own worldview. Then they pull out their smartphones and quote the first article they find that supports their own views. Similar behavior can now be observed with AI: more and more people rely on statements from ChatGPT and the like without checking their veracity, and are then resistant to any argument, no matter how obvious. In conclusion, this entire chain of argument offers a possible explanation of why humanity’s intellectual capacity is massively threatened by AI and other mass media. Another rather amusing point is the idea some people have that the profession of prompt engineer has a bright future, that is, people who tell an AI what to do. Consider that not so long ago, it took considerable effort to learn how to give a computer commands. The various language models now make it possible to tell a computer in natural language what you want it to do. I find it rather sarcastic to sell people the ability to speak in clear and concise sentences as the job of the future.
But I don’t want to end this article on such a negative note. I believe that AI is indeed a powerful tool in the right hands. I’ve become convinced that it’s better not to generate texts with AI. Its use in research should also be approached with great caution. A specialized AI in the hands of an expert can, on the other hand, produce high-quality and, above all, fast results.
Does someone really need to write about passwords again? – Of course not, but I’ll do it anyway. The topic of secure passwords is a perennial topic for a reason. In this constant game of cat and mouse between hackers and users, there’s only one viable solution: staying on top of things. Faster computers and the availability of AI systems are constantly reshuffling the deck. In cryptography, there’s an unwritten rule that simply keeping information secret isn’t sufficient protection. Rather, the algorithm for keeping it secret should be disclosed, and its security should be proven mathematically.
Security researchers are currently observing a trend toward using artificial intelligence to guess supposedly secure passwords. So far, one rule has been established when dealing with passwords: the longer a password, the more difficult it is to guess. We can test this fact with a simple combination lock. A three-digit combination lock has exactly 1,000 possible combinations. Now, the effort required to manually try all the numbers from 000 to 999 is quite manageable and, with a little skill, can be solved in less than 30 minutes. If you change the combination lock from three to five digits, this work multiplies, and finding the solution in less than 30 minutes becomes more a matter of luck, especially if the combination is in the lower number range. Security is further increased if each digit allows not only numbers from 0 to 9, but also letters, both upper and lower case.
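The arithmetic behind this is simple: the number of possible combinations equals the number of allowed symbols raised to the power of the length. A minimal sketch, with the lock sizes from the example above:

// Keyspace = (number of allowed symbols) ^ (length)
public class Keyspace {
    static long combinations(long symbols, int length) {
        long result = 1;
        for (int i = 0; i < length; i++) {
            result *= symbols; // grows exponentially with each position
        }
        return result;
    }
    public static void main(String[] args) {
        System.out.println(combinations(10, 3)); // 3-digit lock: 1,000
        System.out.println(combinations(10, 5)); // 5 digits: 100,000
        System.out.println(combinations(62, 5)); // digits plus upper/lower case: 916,132,832
    }
}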
This small and simple example shows how the ‘vicious circle’ works. Faster computers make it possible to try combinations in less time, so the number of possible combinations must be increased enormously, ideally with as little extra effort as possible. While in the early 2000s eight characters with numbers and letters were sufficient, today it should ideally be 22 characters with numbers, upper and lower case letters, and special characters. Proton’s AI assistant Lumo makes the following recommendation:
Length at least 22 characters
Mixture: Uppercase/lowercase letters, numbers, special characters, underscore
A practical example of a secure password would be: R3gen!Berg_2025$Flug.
Here we see the first vulnerability: no one can remember such passwords. At work, someone might hand you a password policy that you simply have to follow. Oh well, that’s a shame, live with it! But don’t worry, there’s a life hack for everything.
That’s why it’s still common for employees to keep their passwords in close proximity to their PCs. Yes, they still keep them on little slips of paper under the keyboard or as Post-it notes on the edge of the screen. As an IT technician, when I want to log into a coworker’s PC while they’re not at their desk, I still glance over the edge of the screen and then look under the keyboard.
How do I know it’s the password? Sure! I look for a sequence of uppercase and lowercase letters, numbers, and special characters. If there were a Post-it stuck to the edge of my screen with, for example, the inscription “Wed Foot Care 10:45,” I wouldn’t even recognize it as a password at first.
So, as a password, “Wed Foot Care 10:45” would be 19 characters long, with upper and lower case letters, numbers, and special characters. Perfect! And at first glance, it isn’t even recognizable as a password. By the way: the note should have as little dust or patina as possible.
Everyday working life also has its charming peculiarities: you have to change your password monthly, and the new password must not have been used in the previous months. Here, too, employees came up with solutions such as password01, password02, and so on, until all 12 months were covered. So the verification process was extended, and the new password now also had to differ from the old one in a certain number of characters.
But even in our private lives, we shouldn’t take the topic of secure passwords lightly. The services we regularly log in to have become an important part of many people’s lives; online banking and social media are key examples, and the number of online accounts keeps growing. Of course, you shouldn’t recycle your passwords, so you need several of them. How best to go about this, how many to use, and how to structure them is something everyone has to decide for themselves in a way that suits them personally. But we’re not memory champions, and the less often we need a particular password, the harder it is to remember. Password managers can help.
Password managers
The good old filing cabinet. By the way, battery life: infinite. Even if that might seem unworthy of a computer nerd, it’s still possibly the most effective way to store passwords at home.
With today’s number of passwords, management software is certainly attractive, but there’s a risk: if someone gains control of the software, they have you, as our American friends colloquially say, “by the balls,” or loosely translated: in a stranglehold. This applies especially to cloud solutions that seem convenient at first glance.
For Linux and Windows, however, there is a solution you can install on your computer to manage the many passwords of your online accounts. This software is called KeePass; it is open source and can be used legally and free of charge even in a commercial setting. This so-called password store keeps the passwords encrypted on your hard drive. Of course, it’s quite tedious to copy and paste the login details from the password manager on every website. A small browser plugin called KeePass Tusk can help here. It’s available for all common browsers, including Brave, Firefox, and Opera. Even if other people are looking over your shoulder, your password is never displayed in plain text, and copied passwords are automatically removed from the clipboard after a short time.
It’s a completely different story when you’re on the go and have to work on someone else’s computer. In your personal life, it’s a good idea to adapt passwords to the circumstances, depending on where you use them. Let’s say you want to log into your email account on a PC, but you may not be able to guarantee that you’re not being watched at all times.
At this point, it would certainly be counterproductive to dig out a cheat sheet with a password written down that follows all the recommended guidelines: uppercase and lowercase letters, numbers, special characters, if possible including Japanese and Cyrillic, which you then type character by character with your index finger, hunt-and-peck style.
(on advanced keyboard layouts, the Alt key may also be labeled ‘Kölsch’, Alt and Kölsch both being German beers)
If you’re not too bad at typing, meaning you can type a bit faster, you should use a password that you can enter in 1-1.5 seconds. That alone will overwhelm a casual observer, especially if you use the modifier keys discreetly: draw attention to your right hand while typing and occasionally press Shift or Alt unobtrusively with your left.
Perhaps, on a cautious assessment, leaking your personal Tetris high-score list doesn’t constitute a security-relevant loss. Access to online banking is a completely different matter. It therefore certainly makes sense to use a separate password for financial transactions, a different one for less critical logins, and a simple one for run-of-the-mill registrations.
If you have the option to create alias email addresses, this is also very useful, since logging in usually requires not only a password but also an email address. If possible, having a unique email address for each site, created only for that purpose, can not only increase security but also give you the opportunity to become unreachable if you wish. Every now and then, for example, I receive advertisements even though I’ve explicitly opted out of advertising. Strangely enough, these are usually the same characters who, for example, don’t stick to the payment terms they promised before registration. So I simply take the most effective route and delete the alias email address → and that’s it!
Memorability
I’d also like to say a few words about the memorability of passwords. As we’ve seen in this article, it’s a good idea to use a different password for each online account if possible. This way, our login to Facebook and other social media accounts isn’t affected if Sony’s PlayStation Store is hacked again and all customer data is stolen. Of course, there are now multi-factor authentication and many other security solutions, but operators don’t always implement them. Moreover, the motto in hacker circles is: every problem has a solution.
To create a memorable password that meets all security criteria, we’ll use a simple approach. Our password consists of a very complex static part that, if possible, avoids any personal reference. As a mnemonic, we can use a mental image, as in the initial example: a combination of an image (“Regen Berg”) and a year, complemented by another word (“Flug”). It’s also very popular to replace letters with similar-looking numbers, such as replacing an E with a 3 or an I with a 1. To avoid shrinking the number of possibilities through a predictable substitution, we deliberately don’t apply this to every E. This results in a static password part that might look like this: R3gen!Berg_2025$Flug. This static part is easy to remember. If we now need a password for our X login, we supplement the static part with a dynamic segment that applies only to our X account. The static part can be introduced with a special character like # and then supplemented with a reference to the login. This could look like this: sOCIAL.med1a-X. As mentioned several times, this is an idea that everyone can adapt to their own needs.
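Expressed in code, the scheme looks roughly as follows. The values are placeholders for illustration only, and a real password should of course never be hard-coded anywhere:

// Illustration only: static part plus service-specific dynamic part.
public class PasswordScheme {
    // Placeholder! Never hard-code a real password.
    private static final String STATIC_PART = "R3gen!Berg_2025$Flug";

    static String forAccount(String accountTag) {
        return STATIC_PART + "#" + accountTag; // '#' separates the two parts
    }

    public static void main(String[] args) {
        System.out.println(forAccount("sOCIAL.med1a-X"));
        // -> R3gen!Berg_2025$Flug#sOCIAL.med1a-X
    }
}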
In conclusion
At work, you should always be aware that whoever logs into your account is also acting on your behalf. That is, under your identity.
It’s understandable that things sometimes run much more smoothly if a colleague can quickly log in as you. The likelihood of this coming back to haunt you is certainly low, as long as they handle your password carefully.
Of course, you shouldn’t underestimate the issue of passwords in general, but even if you lose a password: Life on the planet as we know it won’t change significantly. At least not because of that. I promise!
We already published a guide on running your own local LLM with GPT4all. Unfortunately, that solution has a small limitation: it cannot process documents such as PDFs. In this new workshop, we will install AnythingLLM together with Ollama so that we can analyze documents.
The minimum requirement for this workshop is a computer with 16 GB of RAM, ideally with Linux (Mint, Ubuntu, or Debian) installed. With a few adjustments, this guide can also be followed on Windows and Apple computers. The lower the hardware resources, the longer the response times.
Let’s start with the first step and install Ollama. To do this, open a terminal and run the following command: curl -fsSL https://ollama.com/install.sh | sh. This command downloads and executes the Ollama installation script; for the installation to proceed, you must enter the administrator password when prompted. Ollama is a command-line program that is controlled via the console. After successful installation, a language model must be loaded. Suitable models can be found on the website https://ollama.com/search.
Proven language models include:
Llama 3.1 8B: Powerful for more demanding applications.
Phi-3.5 3.8B: Well-suited for logical reasoning and multilingualism.
Llama 3.2 1B/3B: Efficient for applications with limited resources.
Phi-4 14B: State-of-the-art model with increased hardware requirements but performance comparable to significantly larger models.
Once you’ve chosen a language model, you can copy the corresponding command from the overview and enter it into the terminal. For our example, this will be DeepSeek R1 for demonstration purposes.
As shown in the screenshot, the corresponding command we need to install the model locally in Ollama is: ollama run deepseek-r1. Installing the language model may take some time, depending on your internet connection and computer speed. Once the model has been installed locally in Ollama, we can close the terminal and move on to the next step: installing AnythingLLM.
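If you want to check which models are already available locally, the command ollama list prints an overview of all installed models; ollama run deepseek-r1 then starts an interactive chat session with the model directly in the terminal.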
Installing AnythingLLM is similar to installing Ollama. To do so, open the terminal and enter the following command: curl -fsSL https://cdn.anythingllm.com/latest/installer.sh | sh. Once the installation is complete, you can change to the installation directory, which is usually /home/&lt;username&gt;/AnythingLLMDesktop. There, navigate to the start link and make it executable (right-click and select Properties). Additionally, you can create a shortcut on the desktop. Now you can conveniently launch AnythingLLM from the desktop, which we’ll do right now.
After defining the workspace, we can link AnythingLLM with Ollama. To do this, we go to the small wrench icon (Settings) in the lower left corner. There, we select LLM and then Ollama. We can now select the language model stored in Ollama and save our settings. After that, you can switch to chat mode; of course, you can change the language model at any time. Unlike in the previous workshop, we can now upload PDF documents and ask questions about their content. Have fun.
Nothing is as certain as change. This wisdom applies to virtually every area of our lives. The internet is also in a constant state of flux. However, the many changes in the technology sector are happening so rapidly that it’s almost impossible to keep up. Anyone who has based their business model on marketing through online channels is already familiar with the problem. Marketing will also continue to experience significant changes in the future, influenced by the availability of artificial intelligence.
Before we delve into the details, I would like to point out right away that by no means has everything become obsolete. Certainly, some agencies will not be able to hold their ground in the future if they focus on traditional solutions alone. It is therefore important for contractors to understand which marketing concepts can be implemented to actually achieve their goals. We believe that competence and creativity will not be replaced by AI; nevertheless, successful agencies will not be able to avoid the targeted use of artificial intelligence. Let’s take a closer look at how internet user behavior has changed since the launch of ChatGPT at the end of 2022. More and more people are turning to AI systems to obtain information, which naturally leads to declining traffic for traditional search engines like Google. Search engines per se are unlikely to disappear, as AI models also require an indexed database on which to operate. It is more likely that people will no longer access search engines directly, but will instead have a personal AI assistant evaluate all search queries for them. This also suggests that the number of freely available websites may decline significantly, as a lack of visitors will make them hardly profitable. What will replace them? Following current trends, it can be assumed that established and possibly new platforms such as Instagram, Facebook, and X will continue to gain market power. Short texts, graphics, and videos already dominate the internet. All these facts call for a profound rethinking of marketing strategies.
They say those declared dead live longer. It would therefore be wrong to completely neglect traditional websites and the associated SEO. Be aware of the business strategy you are pursuing with your internet and social media presence. As an agency, we specifically help our clients review and optimize existing strategies or develop entirely new ones. This clarifies questions such as whether you want to sell goods or services, or whether you want to be perceived as a center of expertise on a specific topic. Here we follow the classic approach from search engine optimization, which is intended to generate qualified traffic: it is of little use to receive thousands of impressions when only a small fraction of them come from people interested in the topic. The previously defined marketing goals are promoted with cleverly distributed posts on websites and social media. Of course, every marketing strategy stands or falls with the quality of the products or services offered. Once customers feel they have received a bad product or poor service, a negative campaign can spread explosively. It is therefore highly desirable to receive honest reviews from real customers on various platforms. There are countless offers from dubious agencies promising their clients a set number of followers, clicks, or reviews. Those results quickly disappear once the service is no longer paid for. Besides, such generic posts created by bots are easy to spot, and many people now selectively ignore them, so the effort is pointless. Furthermore, real reviews and comments are an important tool for assessing the true external perception of your business. If you are constantly being told how great you are, you might be tempted to believe it; some stars have experienced this firsthand.
Therefore, we rely on regular publications of high-quality content that are part of the marketing objective in order to generate attention. We try to use this attention to encourage user interaction, which in turn leads to greater visibility. Our AI models help us identify current trends in a timely manner so that we can incorporate them into our campaigns. Based on our experience, artificial intelligence allows us to create and schedule high-frequency publications for a relatively long campaign period. The time a post or comment goes live also influences success. There are isolated voices that suggest the end of agencies. The reasoning is often that many small business owners can now do all these great things that are part of marketing themselves thanks to AI. We don’t share this view. Many entrepreneurs simply don’t have the time to manage marketing independently across all channels. That’s why we rely on a healthy mix of manual work and automation in many steps. Because we believe that success doesn’t just happen in a test tube. We use our tools and experience to achieve qualitative individual results.
Artificial intelligence is a very broad field in which it’s easy to lose track. Large Language Models (LLMs) such as ChatGPT process natural language and, depending on their training data, can solve a wide range of problems. In addition to pleasant conversations, which can be quite therapeutic, LLMs can also handle quite complex tasks, such as drafting official letters. In this article, we won’t discuss how to use AI; instead, we’ll explain how to install your own AI locally on your computer.
Before we get into the nitty-gritty, let’s answer the question of what the whole thing is actually good for, since AI systems are easy to reach online, some of them free of charge.
What many people aren’t aware of is that all requests sent to ChatGPT, DeepSeek, and the like are logged and permanently stored. We can’t answer the details of this logging, but the IP address and user account with the prompt request are likely among the minimal data collected. However, if you have installed your own AI on your local computer, this information will not be transmitted to the internet. Furthermore, you can interact with the AI as often as you like without incurring any fees.
For our project of installing our own artificial intelligence on your own Linux computer, we don’t need any fancy hardware. A standard computer is perfectly sufficient. As mentioned before, we are using Linux as the operating system because it is much more resource-efficient than Windows 10 or Windows 11. Any Debian-derived Linux can be used for the workshop. Debian derivatives include Ubuntu and Linux Mint.
At least 16 GB of RAM is required; the more RAM, the more smoothly the AI will run. The CPU should be at least a current i5/i7 or an AMD Ryzen 5 or better. If you also have an SSD with 1 TB of storage, the necessary setup is complete. Computers and laptops with this specification can be bought used for very little money. Without wanting to advertise too much, used Lenovo ThinkPad laptops are worth a look. Other manufacturers meeting the minimum hardware requirements will also serve you well.
After clarifying the requirements, we’ll first install GPT4all on our computer. Don’t worry, it’s quite easy even for beginners; no special prior knowledge is necessary. Let’s start by downloading the gpt4all.run file from the homepage (https://gpt4all.io/index.html?ref=top-ai-list). Once this is done, we’ll make the file executable.
As shown in the screenshot, we right-click on the downloaded file and select Properties from the menu. Under the Permissions tab, we then check the Execute box. Now you can run the file with the usual double-click, which we do immediately.
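If you prefer the terminal, you can achieve the same with the command chmod +x gpt4all.run; note that the exact file name may differ depending on the downloaded version.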
Now the installation process begins, where we can, among other things, select where GPT4all will be installed. On Linux, self-installed programs usually go to the /opt directory.
In the next step, we can create a desktop shortcut. To do this, right-click on the empty desktop and select “Create Shortcut.” In the pop-up window, enter a name for the shortcut (e.g., GPT 4 all) and set the path to the executable file (bin/chat), then click OK. Now we can conveniently launch GPT4all from our desktop.
For GPT4all to work, a model must be loaded. As you can see in the screenshots, several models are available. The model must be reselected each time the program is started. The AI can now be used locally on your computer.
Other language models include:
Llama 3 8B Instruct: an all-rounder with good language skills
Mistral 7B: efficient, fast, and precise
Phi 3 Mini: very small and runs even with little RAM
Windows 11 integrates a dubious history feature that records all interactions with the computer and most likely transmits them to Microsoft via telemetry. The countless laws passed by the EU and implemented by Germany to monitor citizens also give many people reason to rethink data protection and privacy. Our world is constantly evolving, and the digital world is changing considerably faster. It’s up to each individual how they want to deal with these changes. This article is intended to inspire you to learn more about Linux and security. Perhaps you’ll even feel like trying out the Kodachi Linux presented here for yourself. There are several ways to try Kodachi Linux:
Virtual machine: Simply create a virtual machine with Kodachi Linux using the ISO file and a program such as VirtualBox or VMware. These virtual machines can also be created and used from a Windows computer.
Booting from a USB stick: Tools like Disks (Linux) or Rufus (Windows) allow you to create bootable USB sticks. This allows you to boot your PC directly from the USB drive with Kodachi without affecting the operating system installed on the hard drive.
Native installation: You can also use the bootable USB stick to permanently install Kodachi Linux on your computer. This method is recommended if you already have experience with Kodachi.
Kodachi OS takes its name, as one might guess, from Japan: kodachi are classic Japanese swords, which already hints at the security theme. Kodachi OS itself is a Xubuntu derivative and thus a grandchild of Ubuntu and a great-grandchild of Debian. Kodachi Linux offers a highly secure, anti-forensic, and anonymous computing environment. It was designed with privacy in mind and ships with all the features needed to ensure user trust and security:
Automatically established VPN connection
Pre-configured TOR connection
Running DNSCrypt service
The current version of Kodachi can be downloaded free of charge from the website [1]. With the downloaded ISO, you can now either create a bootable USB stick or install Kodachi in a virtual machine. We chose the option of creating a virtual machine with VMware.
Installation is completed in just a few minutes thanks to the VMware Ubuntu template. For our test, we gave the VM 20 GB of hard drive space. To ensure smooth operation, we increased the RAM to 8 GB. If you don’t have that much RAM available, you can also work with 4 GB. After starting the VM, you will see the Kodachi OS desktop as shown in the screenshot below, version 8.27. For all Linux nerds, it should be noted that this version uses kernel 6.2. According to the website, they are already hard at work on the new version 9.
To keep the installation as simple as possible, even for beginners, user accounts have already been set up. The user is kodachi and has the password r@@t00 (00 are zeros). The administrator account is called root, as is usual in Linux, and also has the password r@@t00. Anyone who decides to permanently install Kodachi on their machine should at least change the passwords.
Unfortunately, the highest possible level of anonymity can only be achieved at the expense of browsing speed. Kodachi Linux therefore offers several profiles to choose from for different requirements.
Maximum Anonymity (Slower)
ISP → Router VPN → Kodachi VPN (VM NAT) → Torified System → Tor DNS → Kodachi Loaded Browser
Highly Anonymous (Slow)
ISP → Kodachi VPN → Tor Endpoint → Tor DNS → Kodachi Loaded Browser
Anonymous & Fast
ISP → Kodachi VPN → Tor Endpoint → Tor DNS → Kodachi Lite Browser
Moderate Anonymity
ISP → Kodachi VPN with Forced VPN Traffic → Tor DNS → Kodachi Loaded Browser
Standard Anonymity
ISP → Kodachi VPN → Torified System → Tor DNS → Kodachi Loaded Browser
Enhanced Anonymity with Double Tor
ISP → Kodachi VPN with Forced VPN Traffic → Tor Browser → Tor Browser
Double Tor Alternative
ISP → Kodachi VPN → Tor Browser → Tor Browser → Tor DNS
ISP → Kodachi VPN with Forced VPN Traffic → Kodachi Loaded Browser → Tor DNS
High Speed and Security
ISP → Kodachi VPN with Forced VPN Traffic → Kodachi Lite Browser → Tor DNS
Double Security with DNSCrypt
ISP → Kodachi VPN with Forced VPN Traffic → Tor Browser → DNSCrypt
Double Security with Tor DNS
ISP → Kodachi VPN with Forced VPN Traffic → Tor Browser → Tor DNS
Now let’s get to the basics of using Kodachi. To do this, we open the dashboard, which we find as a shortcut on the desktop. After launching, we’ll see various tabs such as VPN, TOR, and Settings. Under Settings, we have the option to activate several profiles relevant to online security and privacy. As shown in the screenshot below, we select Level 1 and activate the profile.
In the lower panel, in the Security Services section, you’ll find various services that can be enabled, such as GnuNet. There are several options here that you can easily try out. GnuNet, for example, redirects all traffic to the TOR network. This, of course, means that pages take longer to fully load.
With Kodachi Linux’s built-in tools, you can significantly improve your security and anonymity while surfing the internet. While it may feel a bit unusual at first, you’ll quickly get used to it. If you run it as a live system or in a virtual machine, you can familiarize yourself with the various programs and settings without damaging your host operating system. Especially for beginners, using a VM takes away the fear of breaking something while trying out different configurations.
If you do a little research on Kodachi Linux online, you’ll quickly find an article [2] from 2021 that is quite critical of Kodachi. The main criticism is that Kodachi is more of an Ubuntu distro with a customized look and feel, spiced up with a few shell scripts, than a standalone Linux. This criticism can’t be completely dismissed. If you take a closer look, you’ll find that Kodachi does have some practical anonymization features. Nevertheless, it’s far from being a so-called hacker’s toolbox. The author of the review took another look at Kodachi in 2025 [3], and his conclusion for the current version is no different from his conclusion in 2021. Whether the upcoming version 9 of Kodachi Linux will take the points raised to heart remains to be seen.
The desire of website operators to obtain as much information as possible about their users is as old as the internet itself. Simple counters for page views or the recognition of the web browser and screen resolution are the simplest applications of user tracking. Today, website operators are no longer solely dependent on Google to collect information about their visitors. There are sufficient free tools available to maintain their own tracking server. In this article, I will briefly discuss the historical background, technologies, and social aspects.
As more and more companies ventured into the vastness of the internet around the turn of the millennium, interest grew in finding out more about website visitors. Initially, operators were content with placing so-called visitor counters on the homepage, which often displayed quite outrageous numbers. The ego of website operators certainly played a role: high visitor numbers make an impression on outsiders. However, anyone who seriously wanted to make money through their website quickly realized that fictitious numbers didn’t generate revenue. So more reliable methods were needed.
To prevent users from being counted multiple times each time they accessed the homepage, operators began storing the IP address and setting a one-hour timeout before counting again, a so-called reload block. Of course, this wasn’t a reliable detection method either. At the time, dial-up connections via modem were common, and it often happened that the connection dropped and had to be re-established, at which point a new IP address was assigned. The accuracy of this solution therefore left plenty of room for improvement.
When web space with PHP and MySQL databases became affordable around 2005, the trend shifted to storing visited pages in small text files called cookies in the browser. These analyses were already very informative and helped companies see which articles people were interested in. The only problem was when suspicious users deleted their cookies at every opportunity. Therefore, the trend shifted to storing all requests on the server, in so-called sessions. In most use cases, the accuracy achieved in this way is sufficient to better match supply to demand.
A popular tool for user tracking is Matomo, written in PHP. This self-hosted open source software allows you to bypass Google and also achieves better GDPR compliance, as the collected data is not shared with third parties. Furthermore, personalized data can be anonymized after a specified period of time, for example, at the beginning of the month. In this case, information such as IP addresses is replaced with random identifiers.
The whole issue is taken to a whole new level the moment money is involved. In the past, it was companies that placed advertising banners on well-visited websites and then paid a small amount per 1,000 impressions. Nowadays, streaming services like Spotify or YouTube are interested in determining exactly how often a particular piece of content was consumed, or for how long a video or track was played. Because the moment money is involved, there is great interest in using small or large tricks to swindle a little more money than one is actually entitled to. This is precisely why companies like Google and Co. are constantly busy finding out how many users consume which content and for how long. In addition to tracking functions in the applications themselves, these companies also use complex monitoring that can access raw data from server logs and network traffic. This is where tools like the ELK stack or Prometheus and Grafana come into play.
Taking YouTube as an example, this service has several hurdles to overcome. Many people use YouTube as a TV replacement, as they can choose the content that interests them from a vast pool of content. A typical scenario is the automatic playback of ambient music for hours on end. If enough people do this without really paying attention to the content, it simply places a pointless burden on the server infrastructure and incurs considerable costs for the operator. This automatic autoplay function in the preview isn’t really interactive and is intended more as a teaser.
There are currently two strategies to keep users constantly engaged. One of these is short videos that loop continuously until the user manually moves on to the next one. This allows short advertising clips to be mixed in, but also news or opinion pieces. Of course, user tracking has to filter out the repetitions when a monetized short runs on a loop, which naturally leads to adjustments in the impression count. Another strategy, used very excessively with long videos, is disproportionately long ad breaks at relatively short intervals. This forces users to actively click away the ads each time, thus demanding their attention.
Now, there are topics where services like YouTube, but also X or Facebook, have an interest in influencing their users in a certain direction. This could be the formation of opinions on political issues or simply commercialism. One might think a common strategy would be to suppress the visibility of undesirable opinions by adjusting the view count of the posts downwards. However, this wouldn’t help, because people have already seen the post. A different strategy is much more effective. In the first step, the channel or post is exempted from monetization, so the creator receives no compensation. In the next step, the number of views is inflated, so that the content creator believes they are reaching a broad audience and takes fewer measures to gain more visibility. Additionally, using DevOps methods like A/B testing, feature flags, and load balancers, content can be served only to those who explicitly search for it. This avoids suspicion of censorship and significantly reduces visibility. Unwanted posts then only appear in the recommendations of people who have explicitly subscribed to the channel.
In the Netflix production “The Social Dilemma,” it is also lamented that bubbles form in which people with specific interests gather. This is an effect of so-called recommender systems, algorithms from the field of artificial intelligence. They function quite statically via statistical evaluations: existing content is classified into categories, and then it is determined which groups of people are interested in a particular category and with what weighting. Content is then displayed in proportion to the interests of that category. Content collected this way can, of course, easily be marked with additional labels such as “well-suited” or “unsuitable.” Depending on these meta tags, unwanted content can then be buried in the depths of the database.
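The basic mechanism can be sketched in a few lines; the categories, items, and weights below are invented purely for illustration:

import java.util.*;

// Minimal sketch of a category-weighted recommender as described above.
public class Recommender {
    public static void main(String[] args) {
        // Hypothetical interest profile of one user: category -> weight
        Map<String, Double> interests = Map.of("linux", 0.7, "security", 0.2, "cooking", 0.1);

        // Content items, each classified into a single category
        Map<String, String> items = Map.of(
            "Kodachi review", "security",
            "Bash tricks", "linux",
            "Pasta recipe", "cooking");

        // Rank every item by the user's weight for its category, highest first
        items.entrySet().stream()
            .sorted((a, b) -> Double.compare(
                interests.getOrDefault(b.getValue(), 0.0),
                interests.getOrDefault(a.getValue(), 0.0)))
            .forEach(e -> System.out.println(e.getKey() + " (" + e.getValue() + ")"));
    }
}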
For all these measures to be effective, it is necessary to collect as much information about users as possible. This brings us back to user tracking. Tracking has become so sophisticated that browser settings that regularly delete cookies or the basic use of incognito mode are completely ineffective.
The only way to free yourself from dependence on the major platform providers is to consciously decide to stop providing them with content. One step in this direction is to operate your own website with appropriate monitoring for user tracking. Extensive content such as video and audio can be outsourced to several lesser-known platforms and embedded into the website. You should not upload all content to a single platform such as Odysee or Rumble, but rather cleverly distribute it across multiple platforms without duplicating it. Such measures bind visitors to your own website rather than to the respective platform operators.
Those with a little more financial freedom can also resort to free software such as PeerTube and host their own video platform. There are a number of options available here, but they require a great deal of effort and technical know-how from the operators.
Developers are regularly faced with the task of checking user input for accuracy. A considerable number of standardized data formats now exist that make such validation tasks easy to master. The International Standard Book Number, or ISBN for short, is one such data format. ISBN comes in two versions: a ten-digit and a 13-digit version. From 1970 to 2006, the ten-digit version of the ISBN was used (ISBN-10), which was replaced by the 13-digit version (ISBN-13) in January 2007. Nowadays, it is common practice for many publishers to provide both versions of the ISBN for titles. It is common knowledge that books can be uniquely identified using this number. This, of course, also means that these numbers are unique. No two different books have the same ISBN (Figure 1).
The theoretical background for determining whether a sequence of numbers is correct comes from coding theory. Therefore, if you would like to delve deeper into the mathematical background of error-detecting and error-correcting codes, we recommend the book “Coding Theory” by Ralph Hardo Schulz [1]. It teaches, for example, how error correction works on compact disks (CDs). But don’t worry, we’ll reduce the necessary mathematics to a minimum in this short workshop.
The ISBN is an error-detecting code. Therefore, we can’t automatically correct a detected error. We only know that something is wrong, but we don’t know the specific error. So let’s get a little closer to the matter.
Why exactly 13 digits were agreed upon for ISBN-13 remains speculation. At least the developers weren’t influenced by any superstition. The big secret behind validation is the determination of residue classes [2]. The algorithms for ISBN-10 and ISBN-13 are quite similar. So let’s start with the older standard, ISBN-10, which is validated as follows: with xn denoting the digit at position n, the weighted sum 1·x1 + 2·x2 + 3·x3 + … + 10·x10 must be divisible by 11 without remainder, i.e. (1·x1 + 2·x2 + … + 10·x10) mod 11 = 0.
Don’t worry, you don’t have to be a SpaceX rocket engineer to understand the formula above. We’ll lift the veil of confusion with a small example for the ISBN 3836278340. This results in the following calculation: 1·3 + 2·8 + 3·3 + 4·6 + 5·2 + 6·7 + 7·8 + 8·3 + 9·4 + 10·0 = 220, and 220 mod 11 = 0.
The last digit of the ISBN is the check digit; in our example, it is 0. To verify it, we multiply each digit by its position: the fourth position is a 6, so we calculate 4 · 6. We repeat this for all positions and add the individual results, which gives us 220. The 220 is then reduced modulo 11. Since 11 fits into 220 exactly 20 times, the remainder is zero. The result of 220 modulo 11 is therefore 0, which tells us that we have a valid ISBN-10.
However, there is one special feature to note. Sometimes the last digit of the ISBN ends with X. In this case, the X must be replaced with 10.
As you can see, the algorithm is very simple and can easily be implemented using a simple for loop.
boolean success = false;
int[] isbn = {3, 8, 3, 6, 2, 7, 8, 3, 4, 0}; // digits of the ISBN-10; a check digit of X is stored as 10
int sum = 0;
for (int i = 0; i < 10; i++) { sum += (i + 1) * isbn[i]; } // weight = position (1 to 10)
if (sum % 11 == 0) { success = true; }
To keep the algorithm as simple as possible, each digit of the ISBN-10 is stored in an integer array. It is then only necessary to iterate through the array, multiplying each digit by its position. If the subsequent check of the sum modulo 11 returns 0, everything is fine. Note that the weight in the loop is i + 1, because array indices start at 0 while ISBN positions are counted from 1.
To properly test the function, two test cases are required. The first test checks whether an ISBN is correctly recognized. The second test checks for so-called false positives. This provokes an expected error with an incorrect ISBN. This can be quickly accomplished by changing any digit of a valid ISBN.
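Sketched with JUnit, the two tests could look roughly like this; isbn10Check is a hypothetical helper that simply wraps the loop shown above:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class Isbn10ValidatorTest {

    @Test
    void recognizesValidIsbn() {
        // valid ISBN-10 from the example above
        assertTrue(isbn10Check(new int[] {3, 8, 3, 6, 2, 7, 8, 3, 4, 0}));
    }

    @Test
    void rejectsInvalidIsbn() {
        // one digit changed -> the checksum must fail
        assertFalse(isbn10Check(new int[] {3, 8, 3, 6, 2, 7, 8, 3, 4, 1}));
    }

    // Hypothetical helper wrapping the algorithm from the article
    private boolean isbn10Check(int[] isbn) {
        int sum = 0;
        for (int i = 0; i < 10; i++) { sum += (i + 1) * isbn[i]; }
        return sum % 11 == 0;
    }
}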
Our ISBN-10 validator still has one minor flaw: digit sequences that are shorter or longer than ten digits, i.e., that don’t conform to the expected format, should be rejected beforehand. The reason can be seen in the example: the last digit of our ISBN-10 is a 0, which contributes nothing to the sum. If this last digit is omitted and the format isn’t checked, the checksum still comes out to 0 and the error goes undetected. Something that has no effect on the algorithm itself, but is very helpful as feedback for user input, is to gray out the input field and disable the submit button until a correctly formatted ISBN has been entered.
As with ISBN-10, xn represents the digit at the corresponding position of the ISBN-13. Here, too, the partial results are summed and the total is reduced with a modulo operation. The main difference is that the digits at the even positions 2, 4, 6, 8, 10, and 12 are multiplied by 3, and the sum is reduced modulo 10: (x1 + 3·x2 + x3 + 3·x4 + … + x13) mod 10 = 0. As an example, we calculate the ISBN-13 9783836278348: 9 + 3·7 + 8 + 3·3 + 8 + 3·3 + 6 + 3·2 + 7 + 3·8 + 3 + 3·4 + 8 = 130, and 130 mod 10 = 0.
The algorithm can also be implemented for the ISBN-13 in a simple for loop.
boolean success = false;
int[] isbn = {9, 7, 8, 3, 8, 3, 6, 2, 7, 8, 3, 4, 8}; // digits of the ISBN-13
int sum = 0;
for (int i = 0; i < 13; i++) { if (i % 2 == 1) { sum += 3 * isbn[i]; } else { sum += isbn[i]; } }
if (sum % 10 == 0) { success = true; }
The two code examples for ISBN-10 and ISBN-13 differ primarily in the if condition. The expression i % 2 computes the remainder of dividing the loop index by 2. Note the off-by-one trap: since the array index i starts at 0, an odd index corresponds to an even ISBN position, and it is these values that must be multiplied by 3.
This shows how useful the modulo operator % can be in programming. To keep the implementation as compact as possible, the so-called ternary operator can be used instead of the if-else condition. The expression sum += (i % 2 == 1) ? 3 * isbn[i] : isbn[i]; is much more compact, but also harder to understand.
Below you will find a fully implemented class for checking the ISBN in the programming languages: Java, PHP, and C#.
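As a taste of the Java variant, here is a compact sketch of such a validator class; the class and method names are illustrative, not the original listing:

// Minimal sketch of an ISBN validator distinguishing ISBN-10 and ISBN-13.
public class IsbnValidator {

    public boolean isValid(String isbn) {
        // keep only digits and a possible check character X
        String cleaned = isbn.replaceAll("[^0-9Xx]", "");
        if (cleaned.length() == 10) { return checkIsbn10(cleaned); }
        if (cleaned.length() == 13) { return checkIsbn13(cleaned); }
        return false; // wrong length means wrong format
    }

    private boolean checkIsbn10(String isbn) {
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            char c = isbn.charAt(i);
            // an X counts as 10 and is only allowed as the last character
            int digit = (c == 'X' || c == 'x') ? 10 : Character.getNumericValue(c);
            if (digit == 10 && i < 9) { return false; }
            sum += (i + 1) * digit;
        }
        return sum % 11 == 0;
    }

    private boolean checkIsbn13(String isbn) {
        int sum = 0;
        for (int i = 0; i < 13; i++) {
            int digit = Character.getNumericValue(isbn.charAt(i));
            sum += (i % 2 == 1) ? 3 * digit : digit; // even positions carry weight 3
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        IsbnValidator validator = new IsbnValidator();
        System.out.println(validator.isValid("3-8362-7834-0"));     // true
        System.out.println(validator.isValid("978-3-8362-7834-8")); // true
    }
}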
While the solutions presented all share the same core approach, they differ in more than just syntactical details. The Java version, for example, offers a more comprehensive variant that distinguishes generically between ISBN-10 and ISBN-13. This demonstrates that many roads lead to Rome. It also aims to show less experienced developers different approaches and encourage them to make their own adaptations. To simplify understanding, the source code has been enriched with comments. PHP, as a weakly typed language, eliminates the need to convert strings to numbers; instead, a RegEx function ensures that the entered characters are type-safe.
Lessons Learned
As you can see, verifying whether an ISBN is correct isn’t rocket science. The topic of validating user input is, of course, much broader. Other examples include credit card numbers. But regular expressions also provide valuable services in this context.
Resources
[1] Ralph-Hardo Schulz, Codierungstheorie: Eine Einführung, 2003, ISBN 978-3-528-16419-5
[2] Concept of modular arithmetic on Wikipedia, https://en.wikipedia.org/wiki/Modular_arithmetic