Installing the Artificial Intelligence GPT4all on Linux

Artificial intelligence is a very broad field in which it’s easy to lose track. Large Language Models (LLMs), such as ChatGPT, process natural language and can solve various problems depending on the data set they were trained on. In addition to pleasant conversations, which can be quite therapeutic, LLMs can also handle quite complex tasks. One such scenario would be drafting official letters. In this article, we won’t discuss how you can use AI, but we’ll explain how you can install your own AI locally on your computer.

Before we get into the nitty-gritty, we’ll answer the question of what the whole thing is actually useful for, given that AI systems are easily accessible and some are available online for free.

What many people aren’t aware of is that all requests sent to ChatGPT, DeepSeek, and the like are logged and permanently stored. We can’t say exactly what this logging covers, but the IP address and user account together with the prompt are likely the minimum that is collected. However, if you have installed your own AI on your local computer, this information is never transmitted to the internet. Furthermore, you can interact with the AI as often as you like without incurring any fees.

For our project of installing our own artificial intelligence on a Linux computer, we don’t need any fancy hardware. A standard computer is perfectly sufficient. As mentioned before, we are using Linux as the operating system because it is much more resource-efficient than Windows 10 or Windows 11. Any Debian-derived Linux can be used for this workshop; Debian derivatives include Ubuntu and Linux Mint.

At least 16 GB of RAM is required, and the more RAM, the more smoothly the AI will run. The CPU should be at least a current Intel i5/i7 or an AMD Ryzen 5 or better. If you also have an SSD with 1 TB of storage, the necessary setup is complete. Computers and laptops with these specifications can be bought used for very little money. Without wanting to advertise too much: used Lenovo ThinkPad laptops are worth browsing, but devices from other manufacturers that meet the minimum hardware requirements will serve just as well.

After clarifying the necessary requirements, we’ll first install GPT4all on our computer. Don’t worry, it’s quite easy, even for beginners; no special prior knowledge is necessary. Let’s start by downloading the gpt4all.run file from the homepage (https://gpt4all.io/index.html?ref=top-ai-list). Once this is done, we’ll make the file executable.

As shown in the screenshot, we right-click on the downloaded file and select Properties from the menu. Under the Permissions tab, we then check the Execute box. Now you can run the file with the usual double-click, which we do immediately.
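If you prefer the terminal over the file manager, the same permission change can be done with chmod. The filename below is an assumption; substitute the name of the file you actually downloaded (the touch line is only a placeholder so the snippet runs even without the real download):

```shell
# Hypothetical installer name - adjust it to match your download.
installer="gpt4all.run"

# Placeholder so the snippet is self-contained; remove it when you
# have the real installer file.
[ -f "$installer" ] || touch "$installer"

# Add the execute permission, then verify that 'x' appears in the listing.
chmod +x "$installer"
ls -l "$installer"
```

Afterwards the file can be started either with a double-click or directly from the terminal with ./gpt4all.run.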

Now the installation process begins, where we can, among other things, select where GPT4all will be installed. On Linux, self-installed programs usually go to the /opt directory.

In the next step, we can create a desktop shortcut. To do this, right-click on the empty desktop and select “Create Shortcut.” In the pop-up window, enter a name for the shortcut (e.g., GPT 4 all) and set the path to the executable file (bin/chat), then click OK. Now we can conveniently launch GPT4all from our desktop.

For GPT4all to work, a model must be loaded. As you can see in the screenshots, several models are available. The model must be reselected each time the program is started. The AI can now be used locally on your computer.


Recover privacy with Kodachi Linux on the Internet

Windows 11 integrates an ominous history feature that records all interactions with the computer and most likely transmits them to Microsoft via telemetry. The countless laws passed by the EU and implemented by Germany to monitor citizens are also giving many people cause to rethink data protection and privacy. Our world is constantly evolving, and the digital world is changing considerably faster. It’s up to each individual how they want to deal with these changes. This article is intended to inspire you to learn more about Linux and security. Perhaps you’ll even feel like trying out the Kodachi Linux presented here for yourself. There are several ways you can try Kodachi Linux.

  • Virtual machine: Simply create a virtual machine with Kodachi Linux using the ISO file and VirtualBox or VMware. You can also create and use these virtual machines from a Windows computer.
  • Booting from a USB stick: Tools like Disks (Linux) or Rufus (Windows) allow you to create bootable USB sticks. This allows you to boot your PC directly from the USB drive with Kodachi without affecting the operating system installed on the hard drive.
  • Native installation: You can also use the bootable USB stick to permanently install Kodachi Linux on your computer. This method is recommended if you already have experience with Kodachi.

Kodachi OS is a Linux distribution with a customized XFCE desktop. A kodachi is actually a classic Japanese samurai sword, so the name already suggests a reference to security. Kodachi OS itself is a Xubuntu derivative and thus a grandchild of Ubuntu and a great-grandchild of Debian Linux. Kodachi Linux offers a highly secure, anti-forensic, and anonymous computing environment. It was designed with privacy in mind, with all the necessary features to ensure user trust and security.

  • Automatically established VPN connection
  • Pre-configured TOR connection
  • Running DNSCrypt service

The current version of Kodachi can be downloaded free of charge from the website [1]. With the downloaded ISO, you can now either create a bootable USB stick or install Kodachi in a virtual machine. We chose the option of creating a virtual machine with VMware.

Installation is completed in just a few minutes thanks to the VMware Ubuntu template. For our test, we gave the VM 20 GB of hard drive space. To ensure smooth operation, we increased the RAM to 8 GB. If you don’t have that much RAM available, you can also work with 4 GB. After starting the VM, you will see the Kodachi OS desktop as shown in the screenshot below, version 8.27. For all Linux nerds, it should be noted that this version uses kernel 6.2. According to the website, they are already hard at work on the new version 9.

To keep the installation as simple as possible, even for beginners, user accounts have already been set up. The user is kodachi and has the password r@@t00 (00 are zeros). The administrator account is called root, as is usual in Linux, and also has the password r@@t00. Anyone who decides to permanently install Kodachi on their machine should at least change the passwords.



Unfortunately, the highest possible level of anonymity can only be achieved at the expense of browsing speed. Kodachi Linux therefore offers several profiles to choose from for different requirements.

  1. Maximum Anonymity (Slower)
    • ISP → Router VPN → Kodachi VPN (VM NAT) → Torified System → Tor DNS → Kodachi Loaded Browser
  2. Highly Anonymous (Slow)
    • ISP → Kodachi VPN → TOR Endpoint → Tor DNS → Kodachi Loaded Browser
  3. Anonymous & Fast
    • ISP → Kodachi VPN → TOR Endpoint → Tor DNS → Kodachi Lite Browser
  4. Moderate Anonymity
    • ISP → Kodachi VPN with Forced VPN Traffic → Tor DNS → Kodachi Loaded Browser
  5. Standard Anonymity
    • ISP → Kodachi VPN → Torified System → Tor DNS → Kodachi Loaded Browser
  6. Enhanced Anonymity with Double TOR
    • ISP → Kodachi VPN with Forced VPN Traffic → Tor Browser → Tor Browser
  7. Double TOR Alternative
    • ISP → Kodachi VPN → Tor Browser → Tor Browser → Tor DNS
  8. Highly Secure Setup
    • ISP → Host machine (XMR VPN) → Kodachi VPN (VM) with forced VPN traffic → Kodachi browser → DNScrypt
  9. Secure and fast
    • ISP → Kodachi VPN with forced VPN traffic → Kodachi loaded browser → DNScrypt
  10. Increased security
    • ISP → Kodachi VPN with forced VPN traffic → Kodachi loaded browser → Tor DNS
  11. High speed and security
    • ISP → Kodachi VPN with forced VPN traffic → Kodachi lite browser → Tor DNS
  12. Double security with DNScrypt
    • ISP → Kodachi VPN with forced VPN traffic → Tor browser → DNScrypt
  13. Double security with Tor DNS
    • ISP → Kodachi VPN with forced VPN traffic → Tor browser → Tor DNS

Now let’s get to the basics of using Kodachi. To do this, we open the dashboard, which we find as a shortcut on the desktop. After launching, we’ll see various tabs such as VPN, TOR, and Settings. Under Settings, we have the option to activate several profiles relevant to online security and privacy. As shown in the screenshot below, we select Level 1 and activate the profile.

In the lower panel, in the Security Services section, you’ll find various services that can be enabled, such as GnuNet. There are several options here that you can easily try out. GnuNet, for example, redirects all traffic to the TOR network. This, of course, means that pages take longer to fully load.

With Kodachi Linux’s built-in tools, you can significantly improve your security and anonymity while surfing the internet. It may feel a bit unusual at first, but you’ll quickly get used to it. If you use it as a live system or in a virtual machine, you can familiarize yourself with the various programs and settings without damaging the operating system installed on your machine. Especially for beginners, using a VM takes away the fear of breaking something while trying out different configurations.

If you do a little research on Kodachi Linux online, you’ll quickly find an article [2] from 2021 that is quite critical of Kodachi. The main criticism is that Kodachi is more of an Ubuntu distro with a customized look and feel, spiced up with a few shell scripts, than a standalone Linux. This criticism can’t be completely dismissed. If you take a closer look at it, you’ll find that Kodachi does have some practical anonymization features. Nevertheless, it’s far from being a so-called hacker’s toolbox. The author of the review took another look at Kodachi in 2025 [3], and his conclusion for the current version is no different from his conclusion in 2021. Whether the upcoming version 9 of Kodachi Linux will take the points raised to heart remains to be seen.

Resources

Abonnement / Subscription

[English] This content is only available to subscribers.

[Deutsch] Diese Inhalte sind nur für Abonnenten verfügbar.


User Tracking and Shadowbanning

The desire of website operators to obtain as much information as possible about their users is as old as the internet itself. Simple counters for page views or the recognition of the web browser and screen resolution are the simplest applications of user tracking. Today, website operators are no longer solely dependent on Google to collect information about their visitors. There are sufficient free tools available to maintain their own tracking server. In this article, I will briefly discuss the historical background, technologies, and social aspects.

As more and more companies ventured into the vastness of the internet around the turn of the millennium, interest grew in finding out more about website visitors. Initially, operators were content with placing so-called visitor counters on the homepage, and these counters often displayed quite outrageous numbers. The ego of website operators certainly played a role, since high visitor numbers have an external effect and make a certain impression on visitors. However, anyone who seriously wanted to make money through their website quickly realized that fictitious numbers didn’t generate revenue. So more reliable methods were needed.

To prevent users from being counted multiple times each time they accessed the homepage, they began storing the IP address and setting a one-hour timeout before counting again. This was then called a reload block. Of course, this wasn’t a reliable detection method. At that time, connections over the telephone network were common via modem, and it often happened that the connection would drop, requiring a new connection. Then, a new IP address was also assigned. The accuracy of this solution therefore had a lot of potential for improvement.
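The reload block described above can be sketched in a few lines. This is a toy reconstruction, not code from any historical counter; the class and method names are our own invention:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Toy reload block: a visit is counted only if the IP address was
// not seen within the timeout window. Names are invented for
// illustration; real counters of the era worked along these lines.
public class ReloadBlock {
    private final Map<String, Instant> lastSeen = new HashMap<>();
    private final Duration timeout;
    private long counter = 0;

    ReloadBlock(Duration timeout) {
        this.timeout = timeout;
    }

    boolean countVisit(String ip, Instant now) {
        Instant previous = lastSeen.get(ip);
        lastSeen.put(ip, now);
        if (previous == null
                || Duration.between(previous, now).compareTo(timeout) >= 0) {
            counter++;
            return true;   // counted as a new visit
        }
        return false;      // reload within the block window
    }

    long total() {
        return counter;
    }

    public static void main(String[] args) {
        ReloadBlock block = new ReloadBlock(Duration.ofHours(1));
        Instant t0 = Instant.parse("2005-01-01T12:00:00Z");
        System.out.println(block.countVisit("1.2.3.4", t0));                  // true
        System.out.println(block.countVisit("1.2.3.4", t0.plusSeconds(600))); // false
        System.out.println(block.total());                                    // 1
    }
}
```

The sketch also shows the weakness described in the text: a dial-up user who reconnects gets a fresh IP address and is simply counted again.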

When web space with PHP and MySQL databases became affordable around 2005, the trend shifted to storing visited pages in small text files called cookies in the browser. These analyses were already very informative and helped companies see which articles people were interested in. The only problem was when suspicious users deleted their cookies at every opportunity. Therefore, the trend shifted to storing all requests on the server, in so-called sessions. In most use cases, the accuracy achieved in this way is sufficient to better match supply to demand.

A popular tool for user tracking is Matomo, written in PHP. This self-hosted open source software allows you to bypass Google and also achieves better GDPR compliance, as the collected data is not shared with third parties. Furthermore, personalized data can be anonymized after a specified period of time, for example, at the beginning of the month. In this case, information such as IP addresses is replaced with random identifiers.

The whole issue is immediately taken to a whole new level when money is involved. In the past, it was companies that placed advertising banners on well-visited websites and then paid a small amount per 1,000 impressions. Nowadays, streaming services like Spotify or YouTube have an interest in determining exactly how often a particular piece of content was played and for how long. Because the moment money is involved, there is a great incentive to use small or large tricks to extract a little more money than one is actually entitled to. This is precisely why companies like Google and Co. are constantly busy finding out how many users consume their content and for how long. In addition to tracking functions in the applications, these companies also use complex monitoring that can access raw data from server logs and network traffic. This is where tools like the ELK stack or Prometheus and Grafana come into play.

Taking YouTube as an example, this service has several hurdles to overcome. Many people use YouTube as a TV replacement, as they can choose the content that interests them from a vast pool of content. A typical scenario is the automatic playback of ambient music for hours on end. If enough people do this without really paying attention to the content, it simply places a pointless burden on the server infrastructure and incurs considerable costs for the operator. This automatic autoplay function in the preview isn’t really interactive and is intended more as a teaser.

There are currently two strategies to keep users constantly engaged. One is short videos that run in a continuous loop until the user manually moves on to the next one. This allows short advertising videos to be mixed in, but also news or opinion pieces. Of course, user tracking has to filter out the repetitions when a monetized short loops continuously, which naturally leads to adjustments of the displayed impression count. Another strategy, used very excessively with long videos, is disproportionately long ad breaks at relatively short intervals. This forces users to actively click away the ads each time, thus demanding their attention.

Now, there are topics where services like YouTube, but also X or Facebook, have an interest in influencing their users in a certain direction. This could be the formation of opinions on political issues or simply commercial interests. One might think a common strategy would be to suppress the visibility of undesirable opinions by adjusting the view count of the posts downwards. However, this wouldn’t be beneficial, because people have already seen the post. A different strategy is therefore much more effective. In the first step, the channel or post is exempted from monetization, so the operator receives no additional compensation. In the next step, the number of views is inflated, so that the content creator believes they are reaching a broad audience and takes fewer measures to gain more visibility. Additionally, using DevOps methods like A/B testing, feature flags, and load balancers, posts can be shown only to those who explicitly search for them. This avoids suspicion of censorship while significantly reducing visibility. Of course, unwanted posts then only appear in the recommendations of people who have explicitly subscribed to the channels.

In the Netflix production “The Social Dilemma,” it is also lamented that bubbles are forming in which people with specific interests gather. This is an effect of so-called recommender systems. These recommenders are algorithms from the field of artificial intelligence. They function quite statically via statistical evaluations. Existing content is classified into categories, and then it is examined which groups of people are interested in a particular category and with what weighting. Content is then displayed accordingly, in proportion to the interests from that category. The content collected in this way can, of course, easily be marked with additional labels such as “well-suited” or “unsuitable.” Depending on the meta tags, unwanted content can then be buried in the depths of the database.
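The statistical principle behind such recommenders can be reduced to a weighted match between a user’s category interests and a content item’s category labels. The following is a deliberately tiny sketch of that idea, not any platform’s actual algorithm; all category names and weights are invented:

```java
import java.util.Map;

// Toy recommender: the score of a content item for a user is the sum of
// (user interest weight x content category weight) over all categories.
// Categories and weights are invented for illustration only.
public class ToyRecommender {

    static double score(Map<String, Double> userInterests,
                        Map<String, Double> contentCategories) {
        double s = 0.0;
        for (Map.Entry<String, Double> e : contentCategories.entrySet()) {
            // Categories the user has never shown interest in contribute 0.
            s += userInterests.getOrDefault(e.getKey(), 0.0) * e.getValue();
        }
        return s;
    }

    public static void main(String[] args) {
        Map<String, Double> user   = Map.of("linux", 0.9, "cooking", 0.1);
        Map<String, Double> videoA = Map.of("linux", 1.0);   // a Linux tutorial
        Map<String, Double> videoB = Map.of("cooking", 0.8); // a recipe video
        System.out.println(score(user, videoA) > score(user, videoB)); // true
    }
}
```

A real platform would additionally apply meta tags such as “unsuitable” to down-rank items, as described above, and learn the weights from behavioral data instead of hard-coding them.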

For all these measures to be effective, it is necessary to collect as much information about users as possible. This brings us back to user tracking. Tracking has become so sophisticated that browser settings that regularly delete cookies or the basic use of incognito mode are completely ineffective.

The only way to free yourself from dependence on the major platform providers is to consciously decide to no longer provide them with content. One step in this direction would be to operate your own website with appropriate monitoring for user tracking. Extensive content such as video and audio can be outsourced to several unknown platforms and embedded into the website. In this case, you should not upload all content to a single platform such as Odysee or Rumble, but rather cleverly distribute the content across multiple platforms without duplicating them. Such measures bind visitors to your own website and not to the respective platform operators.

Those with a little more financial freedom can also resort to free software such as PeerTube and host their own video platform. There are a number of options available here, but they require a great deal of effort and technical know-how from the operators.


Successful validation of ISBN numbers

Developers are regularly faced with the task of checking user input for accuracy. A considerable number of standardized data formats now exist that make such validation tasks easy to master. The International Standard Book Number, or ISBN for short, is one such data format. ISBN comes in two versions: a ten-digit and a 13-digit one. From 1970 to 2006, the ten-digit version of the ISBN was used (ISBN-10), which was replaced by the 13-digit version (ISBN-13) in January 2007. Nowadays, many publishers provide both versions of the ISBN for their titles. It is common knowledge that books can be uniquely identified by this number, which of course means that no two different books share the same ISBN (Figure 1).

The theoretical background for determining whether a sequence of numbers is correct comes from coding theory. Therefore, if you would like to delve deeper into the mathematical background of error-detecting and error-correcting codes, we recommend the book “Coding Theory” by Ralph Hardo Schulz [1]. It teaches, for example, how error correction works on compact disks (CDs). But don’t worry, we’ll reduce the necessary mathematics to a minimum in this short workshop.

The ISBN is an error-detecting code. Therefore, we can’t automatically correct a detected error. We only know that something is wrong, but we don’t know the specific error. So let’s get a little closer to the matter.

Why exactly 13 digits were agreed upon for ISBN-13 remains speculation; at least the developers weren’t influenced by any superstition. The big secret behind the validation is the computation of residue classes [2]. The algorithms for ISBN-10 and ISBN-13 are quite similar, so let’s start with the older standard, ISBN-10, which is checked as follows:

1·x1 + 2·x2 + 3·x3 + 4·x4 + 5·x5 + 6·x6 + 7·x7 + 8·x8 + 9·x9 + 10·x10 ≡ 0 (mod 11)

Don’t worry, you don’t have to be a SpaceX rocket engineer to understand the formula above. We’ll lift the veil of confusion with a small example for ISBN 3836278340. This results in the following calculation:

(1*3) + (2*8) + (3*3) + (4*6) + (5*2) + (6*7) + (7*8) + (8*3) + (9*4) + (10*0) = 220
220 modulo 11 = 0

The last digit of the ISBN is the check digit; in our example, it is 0. To compute the weighted sum, we multiply each digit by its position. The fourth position holds a 6, so we calculate 4 · 6. We repeat this for all ten positions and add the individual results together, which gives 220. The 220 is then divided by 11 using the remainder operation modulo 11. Since 11 fits exactly 20 times into 220, the remainder is zero. This remainder of 0 tells us that we have a valid ISBN-10.

However, there is one special feature to note. Sometimes the last digit of the ISBN ends with X. In this case, the X must be replaced with 10.
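This special case can be handled while converting the input string into the integer array the algorithm works on. A small sketch (the helper name toDigits is our own choice, not part of the article’s later listings):

```java
// Converts an ISBN-10 string into digit values; a trailing 'X'
// (or 'x') stands for the value 10. Helper name is our own choice.
public class IsbnDigits {

    static int[] toDigits(String isbn10) {
        int[] digits = new int[10];
        for (int i = 0; i < 10; i++) {
            char c = isbn10.charAt(i);
            if (i == 9 && (c == 'X' || c == 'x')) {
                digits[i] = 10;      // check digit 'X' represents 10
            } else {
                digits[i] = c - '0'; // map '0'..'9' to 0..9
            }
        }
        return digits;
    }

    public static void main(String[] args) {
        int[] d = toDigits("355400301X");
        System.out.println(d[9]); // 10
    }
}
```

Note that an X is only valid in the last position; everywhere else the character check should reject it.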

As you can see, the algorithm is very simple and can easily be implemented using a simple for loop.

boolean success = false;
int[] isbn = {3, 8, 3, 6, 2, 7, 8, 3, 4, 0}; // example: ISBN 3836278340
int sum = 0;

for (int i = 0; i < 10; i++) {
    sum += (i + 1) * isbn[i]; // position weights run from 1 to 10
}

if (sum % 11 == 0) {
    success = true;
}

To keep the algorithm as simple as possible, each digit of the ISBN-10 is stored in an integer array. Based on this preparation, it is only necessary to iterate through the array, weighting each digit by its position. If the sum check using modulo 11 then returns 0, everything is fine.

To properly test the function, two test cases are required. The first test checks whether an ISBN is correctly recognized. The second test checks for so-called false positives. This provokes an expected error with an incorrect ISBN. This can be quickly accomplished by changing any digit of a valid ISBN.
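Sketched with plain assertions instead of a test framework (isValidIsbn10 is a hypothetical helper wrapping the weighted mod-11 check from above), the two test cases look like this:

```java
// The two test cases described in the text: one valid ISBN-10 and
// one with a single digit changed. isValidIsbn10 is our own helper.
public class IsbnTest {

    static boolean isValidIsbn10(int[] isbn) {
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += (i + 1) * isbn[i]; // weights 1..10 as in the formula
        }
        return sum % 11 == 0;
    }

    public static void main(String[] args) {
        int[] valid   = {3, 8, 3, 6, 2, 7, 8, 3, 4, 0}; // ISBN 3836278340
        int[] invalid = {3, 8, 3, 6, 2, 7, 8, 3, 4, 1}; // last digit changed
        System.out.println(isValidIsbn10(valid));   // true
        System.out.println(isValidIsbn10(invalid)); // false
    }
}
```

Changing any single digit of a valid ISBN is enough to provoke the expected failure, which is exactly what the false-positive test relies on.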

Our ISBN-10 validator still has one minor flaw: digit sequences that are shorter or longer than 10 digits, i.e., that do not conform to the expected format, should be rejected beforehand. The example shows why. The last digit of our ISBN-10 is a 0, so its contribution to the weighted sum is also 0. If the last digit is omitted and no format check is performed, the error goes undetected. Something that has no effect on the algorithm, but is very helpful as feedback for user input, is to gray out the input field and disable the submit button until the input matches the ISBN format.

The algorithm for ISBN-13 is similarly simple.

x1 + 3·x2 + x3 + 3·x4 + x5 + 3·x6 + x7 + 3·x8 + x9 + 3·x10 + x11 + 3·x12 + x13 ≡ 0 (mod 10)

As with ISBN-10, xn represents the digit at the corresponding position of the ISBN-13. Here, too, the partial results are summed, but this time only the digits at the even positions 2, 4, 6, 8, 10, and 12 are multiplied by 3, and the sum is checked modulo 10. As an example, we calculate the ISBN-13 9783836278348.

9 + (3*7) + 8 + (3*3) + 8 + (3*3) + 6 + (3*2) + 7 + (3*8) + 3 + (3*4) + 8 = 130
130 modulo 10 = 0

The algorithm can also be implemented for the ISBN-13 in a simple for loop.

boolean success = false;
int[] isbn = {9, 7, 8, 3, 8, 3, 6, 2, 7, 8, 3, 4, 8}; // example: ISBN 9783836278348
int sum = 0;

for (int i = 0; i < 13; i++) {
    if (i % 2 == 1) {       // odd array index = even ISBN position
        sum += 3 * isbn[i];
    } else {
        sum += isbn[i];
    }
}

if (sum % 10 == 0) {
    success = true;
}

The two code examples for ISBN-10 and ISBN-13 differ primarily in the if condition. The expression i % 2 computes the remainder of the array index divided by 2. Since array indices start at 0, the even ISBN positions 2, 4, 6, and so on correspond to the odd indices 1, 3, 5, and so on; it is these values that must be multiplied by 3.

This shows how useful the modulo operation % can be in programming. To keep the implementation as compact as possible, the so-called ternary operator can be used instead of the if-else condition. The expression sum += (i % 2 == 1) ? 3 * isbn[i] : isbn[i]; is much more compact, but also more difficult to understand.

Below you will find a fully implemented class for checking the ISBN in the programming languages: Java, PHP, and C#.

Abonnement / Subscription

[English] This content is only available to subscribers.

[Deutsch] Diese Inhalte sind nur für Abonnenten verfügbar.

While the solutions presented in the examples all share the same core approach, they differ in more than just syntactical details. The Java version, for example, offers a more comprehensive variant that distinguishes more generically between ISBN-10 and ISBN-13. This demonstrates that many roads lead to Rome. It also aims to show less experienced developers different approaches and encourage them to make their own adaptations. To simplify understanding, the source code has been enriched with comments. PHP, as a dynamically typed language, eliminates the need to convert strings to numbers; instead, a RegEx check is used to ensure that the entered characters are valid.

Lessons Learned

As you can see, verifying whether an ISBN is correct isn’t rocket science. The topic of validating user input is, of course, much broader; other examples include credit card numbers. Regular expressions also provide valuable services in this context.

Resources

  • [1] Ralph-Hardo Schulz, Codierungstheorie: Eine Einführung, 2003, ISBN 978-3-528-16419-5
  • [2] Concept of modular arithmetic on Wikipedia, https://en.wikipedia.org/wiki/Modular_arithmetic

Beyond code: Why soft skills become irreplaceable for developers in the AI era

AI tools such as GitHub Copilot, ChatGPT, and other code generators are changing the developer’s role. Many programmers wonder which skills will be in demand in the future. AI does not replace developers. But developers without soft skills replace themselves.

“The best developers of 2030 will not be better coders – but better translators between humans and machines.” – Andrej Karpathy, ex-OpenAI

In June 2025, Microsoft cut 9,000 jobs [1]. Companies such as Microsoft, Google, and IBM are restructuring their teams, and AI tools are often part of the strategy. One reason for these waves of layoffs is the broad availability of powerful AI tools. According to a study by McKinsey [2], AI systems can accelerate up to 60% of the developer workload. If AI can do up to 80% of the coding, what makes me irreplaceable? More and more people are asking themselves this central question, because they are directly affected by the fourth industrial revolution or will be in the foreseeable future.

Unlike earlier revolutions, there is no ‘retraining in web design’ this time. AI tools such as Devin or ChatGPT automate not only individual tasks but entire job profiles, and faster than most of those affected can react. Studies suggest that up to 30% of all developer roles will not merely be transformed by 2030, but replaced by artificial intelligence.

This trend can be found in almost all professions, including the classic crafts. On YouTube, you can find videos of small, cute robots delivering orders in Moscow, or of robots printing entire houses. New patents that add steel shavings to concrete increase its stability and replace classic rebar. Machines that lay floor tiles can also be seen. The list of activities that can be carried out by AI and robots is long.

If you internalize this forecast, you may well become afraid and worried. Not merely surviving in this new era, but emerging as one of its winners, requires a high degree of flexibility. That is why one of the most important qualities we have to develop is a flexible mind. For although AI is very powerful, it also has its limits. If we think about what defines us as humans, we find an important quality: creativity. How can we use it for future success? So that the advice ‘use your creativity’ doesn’t remain a platitude, let’s first look at the path that most likely leads nowhere.

Junior developers often ask me which framework, which programming language, which operating system, and so on they should learn. Even in the old days, these were the wrong questions. It’s not about following trends, but about a calling. If programming is to be a calling, it is first of all about understanding what the code you write really does. With a profound understanding of the source code, performance improvements can be found quickly; the same applies to optimizations in the area of security. Locating and eliminating errors are also hallmarks of good developers. It is precisely in these areas that human creativity is superior to artificial intelligence. The obvious consequence is to expand exactly these skills.

Anyone who was only busy chasing the latest fashions was not among the sought-after specialists even in the ‘old’ days. Pure code monkeys, whose activity consists primarily of copying and pasting without really grasping what the code snippets mean, were always easy to replace. Especially now that AI is supposed to increase productivity, it is important to decide quickly and confidently where a proposed implementation needs adjustments, so that there are no unpleasant surprises when the application goes into production. This also means that AI is a tool that needs to be used efficiently. To stay on the winning side in the future, it is essential to significantly improve your own productivity by working with AI. Companies expect their employees to handle four to five times the current workload with the support of AI.

To work effectively with artificial intelligence, your own communication skills are essential. Only if you have clearly structured your thoughts can you formulate them correctly and specifically. A significant increase in performance is only achieved if the first instruction already produces the desired result. Anyone who has to explain to the language model every time how their requests are meant, for example because they contain ambiguities, will gain little time through AI.

You can basically say that the developer of the future will need some management skills. In addition to formulating clear tasks, a great deal of self-management will be required in order to allocate suitable resources for optimal results. Because it is not only artificial intelligence that threatens your job, but also strong competition from the Asian region, where well-trained, motivated, and high-performing people are now available in large numbers.

So we see that very turbulent times lie ahead, and the world will turn a little faster. Anyone who perceives these changes not as a threat but as a challenge has a good chance of being fit for the not-too-distant future. Anyone who sets the course now is well prepared for what is to come and has nothing to fear.

Resources

Abonnement / Subscription

[English] This content is only available to subscribers.

[Deutsch] Diese Inhalte sind nur für Abonnenten verfügbar.