Disk-Jock-Ey

Working with mass storage devices such as hard disks (HDDs), solid-state drives (SSDs), USB drives, memory cards, or network-attached storage devices (NAS) isn’t as difficult under Linux as many people believe. You just have to be able to let go of old habits you’ve developed under Windows. In this compact course, you’ll learn everything you need to master potential problems on Linux desktops and servers.

Before we dive into the topic in depth, a few important facts about the hardware itself. The basic principle here is: Buy cheap, buy twice. The problem isn’t even the device itself that needs replacing, but rather the potentially lost data and the effort of setting everything up again. I’ve had this experience especially with SSDs and memory cards, where it’s quite possible that you’ve been tricked by a fake product and the promised storage space isn’t available, even though the operating system displays full capacity. We’ll discuss how to handle such situations a little later, though.

Another important consideration is continuous operation. Most storage media are not designed to be switched on and used 24 hours a day, 7 days a week. Hard drives and SSDs designed for laptops quickly fail under constant load. Therefore, for continuous operation, as is the case with NAS systems, you should specifically look for such specialized devices. Western Digital, for example, has various product lines. The Red line is designed for continuous operation, as is the case with servers and NAS. It is important to note that the data transfer speed of storage media is generally somewhat lower in exchange for an increased lifespan. But don’t worry, we won’t get lost in all the details that could be said about hardware, and will leave it at that to move on to the next point.

A significant difference between Linux and Windows is the file system, the mechanism by which the operating system organizes access to information. Windows uses NTFS as its file system, while USB sticks and memory cards are often formatted with FAT. One practical difference: NTFS can store files larger than 4 GB, which FAT32 cannot. FAT is nevertheless preferred by device manufacturers for navigation systems or car radios because it is simple and almost universally supported. Under Linux, the ext3 or ext4 file systems are primarily found. Of course, there are many other specialized formats, which we won’t discuss here. The major difference between the Linux and Windows file systems is the permission concept. FAT has no mechanism at all to control who may create, open, or execute files and directories, and NTFS permissions do not map directly to the Linux model, whereas user, group, and other permissions are a fundamental concept of ext3 and ext4.
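A quick look at the shell makes this concept tangible; the file name and owner in the following sketch are made up for the example:

ls -l notes.txt
# -rw-r--r-- 1 alice users 2048 Jan 10 12:00 notes.txt  (owner may read and write, everyone else may only read)
chmod go-w,u+x notes.txt   # remove write access for group/others, allow the owner to execute
# On a FAT-formatted USB stick these bits cannot be stored at all;
# they are only simulated for the whole volume when it is mounted.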

Storage devices formatted with NTFS or FAT can easily be connected to Linux computers, and their contents can be read. Network storage, which is often formatted as NTFS for compatibility reasons, is accessed via the SMB protocol, whose Linux implementation is Samba; this avoids any risk of data loss when writing to it. The required Samba client components are already part of many Linux distributions or can be installed in a few moments, and no special configuration of the service is required for simply accessing a share.
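As a small sketch of how such a share can be accessed manually from the command line (server address, share name, and user are placeholders; the cifs-utils package provides the SMB client):

sudo apt-get install cifs-utils            # SMB/CIFS client utilities
sudo mkdir -p /mnt/nas                     # local mount point
sudo mount -t cifs //192.168.1.50/share /mnt/nas -o username=ed,uid=$(id -u)
# the password is requested interactively; sudo umount /mnt/nas detaches the share again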

Now that we’ve learned what a file system is and what it’s used for, the question arises: how do you format external storage in Linux? The two graphical programs Disks and GParted are a good combination for this. Disks is a bit more versatile and also allows you to create bootable USB sticks, which you can then use to install computers. GParted is more suitable for extending existing partitions on hard drives or SSDs or for repairing broken partitions.

Before you read on and perhaps try to replicate one or two of these tips, it’s important that I offer a warning here. Before you try anything with your storage media, first create a backup of your data so you can fall back on it in case of disaster. I also expressly advise you to only attempt scenarios you understand and where you know what you’re doing. I assume no liability for any data loss.

Bootable USB & Memory Cards with Disks

One scenario we occasionally need is the creation of bootable media. Whether it’s a USB flash drive for installing a Windows or Linux operating system, or installing the operating system on an SD card for use on a Raspberry Pi, the process is the same. Before we begin, we need an installation medium, which we can usually download as an ISO from the operating system manufacturer’s website, and a corresponding USB flash drive.

Next, open the Disks program and select the USB drive onto which we want to write the ISO file. Then, click the three dots at the top of the window and select Restore Disk Image from the menu that appears. In the dialog that opens, select our ISO file in the Image to Restore field and click Start Restoring. That’s all you need to do.
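If you prefer the command line, the same result can be achieved with dd. This is only a sketch; the ISO name is an example, and /dev/sdX must be replaced by the correct device, because everything on it will be overwritten:

lsblk                                                        # identify the USB stick first
sudo dd if=debian-12.iso of=/dev/sdX bs=4M status=progress conv=fsync
sync                                                         # make sure all data has been written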

Repairing Partitions and the MFT with GParted

Another scenario you might encounter is that data on a flash drive, for example, becomes unreadable. If the data itself isn’t corrupted, you might be lucky and be able to solve the problem with GParted. In some cases (A), the partition table is corrupted and the operating system simply doesn’t know where to start. Another possibility (B) is that the file system index is corrupted; on NTFS this index is the Master File Table (MFT), which records where on the disk each file is stored. Both problems can often be resolved quickly with GParted.

Of course, it’s impossible to cover the many complex aspects of data recovery in a general article.

Now that we know that a hard drive is divided into partitions and that each partition contains a file system, we can add that all information about a partition and the file system formatted on it is stored in the partition table. To locate files and directories within a partition, the operating system uses an index, on NTFS the so-called Master File Table. This connection leads us to the next point: the secure deletion of storage media.
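What the partition table of a drive currently looks like can be checked with standard tools; /dev/sdb in the last command is only an example:

lsblk -f                  # partitions, file systems and mount points of all drives
sudo parted -l            # partition tables including their type (msdos or gpt)
sudo fdisk -l /dev/sdb    # detailed view of a single drive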

Data Shredder – Secure Deletion

When we delete data on a storage medium, only the entry in the file system’s index (on NTFS the MFT) that records where the file is located is removed. The file itself therefore still exists and can still be found and read by special programs. Securely deleting files is only possible if we overwrite the freed space. Since we can never know exactly where a file was physically written on a storage medium, we must overwrite the entire free space multiple times after deletion. Specialists recommend three write passes, each with a different pattern, to make recovery practically impossible even for specialized labs. A Linux program that also sweeps up and deletes “data junk” is BleachBit.

Securely overwriting deleted files is a somewhat lengthy process, depending on the size of the storage device, which is why it should only be done occasionally. However, you should definitely wipe old storage devices completely when they are sorted out and then either disposed of or passed on to someone else.
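One way to do this on the command line is shred; a sketch, where /dev/sdX must be replaced by the correct device and everything on it is irreversibly destroyed. For SSDs, note that wear leveling may leave copies in cells the overwrite never reaches, so the drive’s built-in secure-erase function is often the better choice there.

lsblk                           # make absolutely sure which device is the old drive
sudo shred -v -n 3 /dev/sdX     # three overwrite passes with pseudo-random data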

Mirroring Entire Hard Drives 1:1 – CloneZilla

Another scenario we may encounter is the need to create a copy of the hard drive. This is relevant when the existing hard drive or SSD in the current computer needs to be replaced with a new one with a higher storage capacity. Windows users often take this opportunity to reinstall their system, simply to stay in practice. Those who have been working with Linux for a while appreciate that Linux systems run very stably and the need for a reinstallation only arises sporadically. Therefore, it is a good idea to copy the data from the current hard drive bit by bit to the new drive. This also applies to SSDs, of course, or from HDD to SSD and vice versa. We can accomplish this with the free tool CloneZilla. To do this, we create a bootable USB stick with CloneZilla and start the computer in the CloneZilla live system. We then connect the new drive to the computer using a SATA/USB adapter and start the data transfer. Once the copy has finished, before we open up our computer and swap the disks, we change the boot order in the BIOS and check whether our attempt was successful. Only if the computer boots smoothly from the new disk do we proceed with the physical replacement. This short guide describes the basic procedure; I’ve deliberately omitted a detailed description, as the interface and operation may differ between CloneZilla versions.

SWAP – The Paging File in Linux

At this point, we’ll leave the graphical user interface and turn to the command line. We’ll deal with a very special partition that sometimes needs to be enlarged: the swap space, which Windows users know as the paging file. The operating system moves data that no longer fits into RAM out to the swap partition or swap file and reads it back into RAM when it is needed again. It can happen that this swap space is too small and needs to be enlarged. But that’s not rocket science, as we’ll see shortly.
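As a rough sketch, adding swap space with an additional swap file looks roughly like this (the size of 8 GB is only an example):

sudo swapon --show                      # check how much swap is currently active
sudo fallocate -l 8G /swapfile          # reserve space for the new swap file
sudo chmod 600 /swapfile                # only root may read the file
sudo mkswap /swapfile                   # format it as swap
sudo swapon /swapfile                   # activate it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # keep it after a reboot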


We’ve already discussed quite a bit about handling storage media under Linux. In the second part of this series, we’ll delve deeper into the capabilities of command-line programs and look, for example, at how NAS storage can be permanently mounted in the system. Strategies for identifying defective storage devices will also be the subject of the next part. I hope I’ve piqued your interest and would be delighted if you would share the articles from this blog.


Pathfinder

So that we can call console programs anywhere on the system without having to specify the full path, we use the so-called path variable. We store the directory that contains the executable program, the so-called executable, in this path variable so that we no longer have to type the full path on the command line. By the way, the file extension exe, which is common on Windows, derives from the word executable. Here we also have a significant difference between the two operating systems: while Windows decides via the file extension, such as exe or txt, whether something is a plain text file or an executable file, Linux uses the file’s metadata, such as the executable permission bit, to make this distinction. That’s why the file extensions txt and exe are rather unusual under Linux.

Typical use cases for setting the path variable are programming languages ​​such as Java or tools such as the Maven build tool. For example, if we downloaded Maven from the official homepage, we can unpack the program anywhere on our system. On Linux the location could be /opt/maven and on Microsoft Windows it could be C:/Program Files/Maven. In this installation directory there is a subdirectory /bin in which the executable programs are located. The executable for Maven is called mvn and in order to output the version, under Linux without the entry in the path variable the command would be as follows: /opt/maven/bin/mvn -v. So it’s a bit long, as we can certainly admit. Entering the Maven installation directory in the path shortens the entire command to mvn -v. By the way, this mechanism applies to all programs that we use as a command in the console.

Before I get to how the path variable can be adjusted under Linux and Windows, I would like to introduce another concept, the system variable. System variables, also called environment variables, are global variables that are available to us in Bash. The path variable is itself a system variable. Another system variable is HOME, which points to the logged-in user’s home directory. System variables are written in capital letters, with words separated by an underscore. For our example of adding the Maven executable to the path, we can also set our own system variable. The convention is M2_HOME for Maven and JAVA_HOME for Java. As a best practice, you bind the installation directory to a system variable and then use this self-defined system variable to extend the path. This approach is quite typical for system administrators, who simplify their server installations with system variables, because these variables are global and can also be read by automation scripts.

The command line, also known as the shell, Bash, console, or terminal, offers an easy way to output the value of a system variable with echo. Using the path variable as an example, we can immediately see the difference between Linux and Windows:

Linux: echo $PATH
Windows: echo %PATH%

ed@local:~$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/snap/bin:/home/ed/Programs/maven/bin:/home/ed/.local/share/gem//bin:/home/ed/.local/bin:/usr/share/openjfx/lib

Let’s start with the simplest way to set the path variable. In Linux we just need to edit the hidden .bashrc file. At the end of the file we add the following lines and save the content.

export M2_HOME="/opt/maven"
export PATH=$PATH:$M2_HOME/bin

We bind the installation directory to the M2_HOME variable. We then expand the path variable to include the M2_HOME system variable with the addition of the subdirectory of executable files. This procedure is also common on Windows systems, as it allows the installation path of an application to be found and adjusted more quickly. After modifying the .bashrc file, the terminal must be restarted for the changes to take effect. This procedure ensures that the entries are not lost even after the computer is restarted.
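Alternatively, the new entries can be loaded into the already open terminal without restarting it; a quick check could look like this, assuming Maven really is installed under /opt/maven:

source ~/.bashrc    # re-read the profile in the current shell
echo $M2_HOME       # should print /opt/maven
mvn -v              # the short command now works without the full path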

Under Windows, the challenge is simply to find the input mask where the system variables can be set. In this article I will limit myself to the version for Windows 11. It may of course be that the way to edit the system variables has changed in a future update. There are slight variations between the individual Windows versions. The setting then applies to both the CMD and PowerShell. The screenshot below shows how to access the system settings in Windows 11.

To do this, we right-click on an empty area on the desktop and select the System entry. In the System – About submenu you will find the system settings, which open the System properties popup. In the system settings we press the Environment Variables button to get the final input mask. After making the appropriate adjustments, the console must also be restarted for the changes to take effect.

In this short guide, we learned about the purpose of system variables and how to store them permanently on Linux and Windows. We can then quickly check the success of our efforts in the shell by outputting the contents of the variables with echo. And with that, we are one step closer to becoming an IT professional.


PHP Elegant Testing with Laravel

The PHP programming language has been the first choice for many developers in the field of web applications for decades. Since the introduction of object-oriented language features with version 5, PHP has come of age. Large projects can now be implemented in a clean and, above all, maintainable architecture. A striking difference between commercial software development and a hobbyist who builds and maintains a club’s website is the automated verification that the application adheres to its specification. This brings us into the realm of automated software testing.

A key principle of automated software testing is that it verifies, without additional interaction, that the application exhibits a predetermined behavior. Software tests cannot guarantee that an application is error-free, but they do increase quality and reduce the number of potential errors. The most important aspect of automated software testing is that behavior already defined in tests can be quickly verified at any time. This ensures that if developers extend an existing function or optimize its execution speed, the existing functionality is not affected. In short, we have a powerful tool for ensuring that we haven’t broken anything in our code without having to laboriously click through all the options manually each time.

To be fair, it’s also worth mentioning that the automated tests have to be developed, which initially takes time. However, this ‘supposed’ extra effort quickly pays off once the test cases are run multiple times to ensure that the status quo hasn’t changed. Of course, the created test cases also have to be maintained.

If, for example, an error is detected, you first write a test case that replicates the error. The repair is then successfully completed once the test case(s) pass. However, changes in the behavior of existing functionality always require a corresponding adaptation of the associated tests. This concept of writing tests in parallel with implementing the functionality is feasible in many programming languages and is called test-driven development. From my own experience, I recommend taking a test-driven approach even for relatively small projects. Small projects often don’t have the complexity of large applications, which also demand some testing skill. In small projects, however, you have the opportunity to develop your skills within a manageable framework.

Test-driven software development is nothing new in PHP either. Sebastian Bergmann’s unit testing framework PHPUnit has been around since 2001. The PEST testing framework, released around 2021, builds on PHPUnit and extends it with a multitude of new features. PEST stands for PHP Elegant Testing and defines itself as a next-generation tool. Since many agencies, especially smaller ones, that develop their software in PHP generally limit themselves to manual testing, I would like to use this short article to demonstrate how easy it is to use PEST. Of course, there is a wealth of literature on the topic of test-driven software development, which focuses on how to optimally organize tests in a project. This knowledge is ideal for developers who have already taken their first steps with testing frameworks. These books teach you how to develop independent, low-maintenance, and high-performance tests with as little effort as possible. However, to get to this point, you first have to overcome the initial hurdle: installing the entire environment.

A typical environment for self-developed web projects is the Laravel framework. When creating a new Laravel web project, you can choose between PHPUnit and PEST. Laravel takes care of all the necessary details. A functioning PHP environment is required as a prerequisite. This can be a Docker container, a native installation, or the XAMPP server environment from Apache Friends. For our short example, I’ll use the PHP CLI on Debian Linux.

sudo apt-get install php-cli php-mbstring php-xml php-pcov

After executing the command in the console, you can test the installation success using the php -v command. The next step is to use a package manager to deploy other PHP libraries for our application. Composer is one such package manager. It can also be quickly deployed to the system with just a few instructions.

php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('sha384', 'composer-setup.php') === 'ed0feb545ba87161262f2d45a633e34f591ebb3381f2e0063c345ebea4d228dd0043083717770234ec00c5a9f9593792') { echo 'Installer verified'.PHP_EOL; } else { echo 'Installer corrupt'.PHP_EOL; unlink('composer-setup.php'); exit(1); }"
php composer-setup.php
php -r "unlink('composer-setup.php');"

This downloads the current version of the composer.phar file to the directory in which the command is executed. The hash of the download is also checked automatically. To make Composer globally available on the command line, you can either add its location to the path variable or link composer.phar into a directory that is already included in the path. I prefer the latter option and achieve this with:

ln -s "$PWD/composer.phar" "$HOME/.local/bin/composer"

If everything was executed correctly, composer list should now display the version, including the available commands. If this is the case, we can install the Laravel installer globally via Composer.

composer global require laravel/installer

To call the Laravel installer via Bash, the environment variable COMPOSER_HOME must be set. To find out where Composer created its home directory, simply use the command composer config -g home. The resulting path, which in my case is /home/ed/.config/composer, is then bound to the variable COMPOSER_HOME. We can now run

php $COMPOSER_HOME/vendor/bin/laravel new MyApp

in an empty directory to create a new Laravel project. The corresponding console output looks like this:

ed@P14s:~/Downloads/test$ php $COMPOSER_HOME/vendor/bin/laravel new MyApp

   _                               _
  | |                             | |
  | |     __ _ _ __ __ ___   _____| |
  | |    / _` |  __/ _` \ \ / / _ \ |
  | |___| (_| | | | (_| |\ V /  __/ |
  |______\__,_|_|  \__,_| \_/ \___|_|


 ┌ Which starter kit would you like to install? ────────────────┐
 │ None                                                         │
 └──────────────────────────────────────────────────────────────┘

 ┌ Which testing framework do you prefer? ──────────────────────┐
 │ Pest                                                         │
 └──────────────────────────────────────────────────────────────┘

Creating a "laravel/laravel" project at "./MyApp"
Installing laravel/laravel (v12.4.0)
  - Installing laravel/laravel (v12.4.0): Extracting archive
Created project in /home/ed/Downloads/test/MyApp
Loading composer repositories with package information

The directory structure created in this way contains the tests folder, where the test cases are stored, and the phpunit.xml file, which contains the test configuration. Laravel defines two test suites: Unit and Feature, each of which already contains a demo test. To run the two demo test cases, we use the artisan command-line tool [1] provided by Laravel. To run the tests, simply enter the php artisan test command in the root directory.

In order to assess the quality of the test cases, we need to determine the corresponding test coverage. We also obtain the coverage using artisan with the test statement, which is supplemented by the --coverage parameter.

php artisan test --coverage

For the demo test cases provided by Laravel, this lists the coverage achieved per class together with an overall percentage.
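In a build pipeline it can also be useful to fail the run when the coverage drops below a threshold; Laravel supports this with the --min parameter (80 is just an example value):

php artisan test --coverage --min=80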

Unfortunately, artisan’s capabilities for executing test cases are very limited. To utilize PEST’s full functionality, the PEST executor should be used right from the start.

php ./vendor/bin/pest -h

The PEST executor is located at vendor/bin/pest, and the -h parameter displays the help. In addition to this detail, we’ll focus on the tests folder, which we already mentioned. In the initial step, two test suites are preconfigured via the phpunit.xml file. The test files themselves should end with the suffix Test, as in the example ExampleTest.php.

Compared to other test suites, PEST attempts to support as many concepts of automated test execution as possible. To maintain clarity, each test level should be stored in its own test suite. In addition to classic unit tests, browser tests, stress tests, architecture tests, and even the newly emerging mutation testing are supported. Of course, this article can’t cover all aspects of PEST, and there are now many high-quality tutorials available for writing classic unit tests in PEST. Therefore, I’ll limit myself to an overview and a few less common concepts.

Architecture test

The purpose of architectural tests is to provide a simple way to verify whether developers are adhering to the specifications. This includes, among other things, ensuring that classes representing data models are located in a specified directory and may only be accessed via specialized classes.

test('models')
->expect('App\Models')
->toOnlyBeUsedIn('App\Repositories')
->toOnlyUse('Illuminate\Database');

Mutation-Test

This form of testing is fairly new. The idea is to create so-called mutants by making small changes, for example to the conditions of the original implementation. If the tests still pass for a mutant instead of failing, this is a strong indication that the test cases are weak and lack meaningfulness.

Original: if(TRUE) → Mutant: if(FALSE)

Stress-Test

Stress tests, often also called load tests, focus specifically on the performance of an application under load. This allows you to ensure that the web app, for example, can handle a defined number of simultaneous requests.

Of course, there are many other helpful features available. For example, you can group tests and then run the groups individually.

// definition
pest()->extend(TestCase::class)
->group('feature')
->in('Feature');

// calling
php ./vendor/bin/pest --group=feature

For those who don’t work with the Laravel framework but still want to test PHP with PEST, you can also integrate the PEST framework into an existing application. All you need to do is define PEST as a development dependency in the Composer project configuration. Then, you can initiate the initial test setup in the project’s root directory.

php ./vendor/bin/pest --init

As we’ve seen, the options briefly presented here alone are very powerful. The official PEST documentation is also very detailed and should generally be your first port of call. In this article, I focused primarily on minimizing the entry barriers for test-driven development in PHP. PHP now also offers a wealth of options for implementing commercial software projects very efficiently and reliably.


Take professional screenshots

Over the course of the many hours they spend in front of this amazing device, almost every computer user will find themselves in need of saving the screen content as a graphic. The process of creating an image of the monitor’s contents is what seasoned professionals call taking a screenshot.

As with so many things, there are many ways to achieve a screenshot. Some very resourceful people solve the problem by simply pointing their smartphone at the monitor and taking a photo. Why not? As long as you can still recognize something afterwards, everything’s fine. But this short guide doesn’t end there; we’ll take a closer look at the many ways to create screenshots. Even professionals who occasionally write instructions, for example, have to overcome one or two pitfalls.

Before we get to the nitty-gritty, it’s important to mention that it makes a difference whether you want to save the entire screen, the browser window, or even the invisible area of ​​a website as a screenshot. The solution presented for the web browser works pretty much the same for all web browsers on all operating systems. Screenshots intended to cover the monitor area and not a web page use the technologies of the existing operating system. For this reason, we also differentiate between Linux and Windows. Let’s start with the most common scenario: browser screenshots.

Browser

Especially when ordering online, many people feel more comfortable when they can additionally document their purchase with a screenshot. It’s also not uncommon to occasionally save instructions from a website for later use. When taking screenshots of websites, one often encounters the problem that a single page is longer than the area displayed on the monitor. Naturally, the goal is to save the entire content, not just the displayed area. For precisely this case, our only option is a browser plugin.
Fireshot is a plug-in available for all common browsers, such as Brave, Firefox, and Microsoft Edge, that allows us to create screenshots of websites, including hidden content. Fireshot is a browser extension that has been on the market for a very long time. Fireshot comes with a free version, which is already sufficient for the scenario described. Anyone who also needs an image editor when taking screenshots, for example, to highlight areas and add labels, can use the paid Pro version. The integrated editor has the advantage of significantly accelerating workflows in professional settings, such as when creating manuals and documentation. Of course, similar results can be achieved with an external photo editor like GIMP. GIMP is a free image editing program, similarly powerful and professional as the paid version of Photoshop, and is available for Windows and Linux.

Linux

If we want to take screenshots outside of the web browser, we can easily use the operating system’s built-in tools. In Linux, you don’t need to install any additional programs; everything you need is already there. Pressing the Print key on the keyboard opens the screenshot tool. You simply drag the mouse around the area you want to capture and press Capture in the control panel that appears. It’s not a problem if the control panel lies within the selected area; it won’t show up in the screenshot. On German keyboards, the key is usually labeled Druck instead of Print. The finished screenshot then ends up in the Screenshots folder with a timestamp in the file name. This folder is a subfolder of Pictures in the user’s home directory.

Windows

The easiest way to take screenshots in Windows is to use the Snipping Tool, which is usually included with your Windows installation. It’s also intuitive to use.

Another very old way in Windows, without a dedicated screenshot creation program, is to press the Ctrl and Print Screen keys simultaneously. Then, open a graphics program, such as Paint, which is included in every Windows installation. In the drawing area, press Ctrl + V simultaneously, and the screenshot appears and can be edited immediately.

Depending on the tool and its settings, these screenshots are saved as PNG or JPG. JPG is a lossy compression method, so you should check readability after taking the screenshot. Especially with current monitors with resolutions around 2000 pixels wide, using the image on a website usually requires manual post-processing. One option is to reduce the width from just under 2000 pixels to the roughly 1000 pixels that are common on a website. Ideally, the scaled and edited graphic is saved in the newer WEBP format. WEBP supports both lossy and lossless compression and usually produces smaller files than JPG, which is very beneficial for website loading times.
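One possible way to do the scaling and conversion on the command line is ImageMagick, assuming it is installed with WebP support; the file names are examples, and on ImageMagick 7 the command is called magick instead of convert:

convert screenshot.png -resize 1000x screenshot.webp                               # scale to 1000 px width and convert
convert screenshot.png -resize 1000x -define webp:lossless=true screenshot.webp    # explicitly lossless variant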

This already covers a good range of possibilities for taking screenshots. Of course, more could be said about this, but that falls into the realm of graphic design and the efficient use of image editing software.


Become an author

At elmar-dott.com, we are always looking for new authors who would like to publish articles under their own name. Each author receives an author account, which also provides access to paid content.

What topics are we looking for? Security, privacy, mobile, Linux, artificial intelligence, programming, technology in general—in other words, anything related to computers and society.

How can you become an author on elmar-dott.com? It’s quite simple. First, you need to have a topic that interests you and that you can write about. It’s best to write the text in Libre/Open Office. If you have images, please make sure they are clearly visible. You can easily insert the images into the Office document. If you’re unsure whether your topic is suitable for elmar-dott.com, feel free to send us a message beforehand using the contact form. Don’t worry if you have little writing experience: We’ll support you in achieving excellent results. Once you’ve finished your text, it’s best to schedule a consultation via the website. There, we’ll briefly review the text and create your author account. You’ll also find out when the article will be published.

When writing your texts, please follow the few points of the author guidelines:

  • The article was written by you.
  • The article is not plagiarized or copied from other authors.
  • The article was not generated by AI.
  • You hold the copyright to all images used. Free images can be found on the Pixabay website, among other places.

How much do we pay for articles? Unfortunately, we are currently unable to pay any compensation, as the site does not generate any income. In return, authors receive full access to the premium articles. By logging in, you can also write comments and like articles. The long-term goal is to create a community of computer enthusiasts and IT specialists on elmar-dott.com. Since the articles appear here under your name, you can easily include this as a reference in your CV.

I look forward to hearing from you as soon as possible.


Virtual conversations

It’s becoming increasingly fashionable for so-called content creators to engage in conversations with artificial intelligences, so-called Large Language Models (LLMs). These conversations are sometimes quite emotionally driven by the human part. But the titles of these videos are also sometimes very sensationalist. One often reads headlines like ‘ChatGPT debunked’; ‘AI reveals the truth about…’ and similar phrases. Many of these titles are certainly clickbait, intended to motivate viewers to watch the content until the very end. The content is often quite sobering, and the question arises as to how useful such contributions actually are.

LLMs, as a form of artificial neural network, are very powerful AI systems, and conversations in the sense of the Turing Test are certainly possible. The Turing Test asks whether a human can tell which participant in a text conversation is the machine. Classic chatbots, such as the ELIZA bot developed by Joseph Weizenbaum in 1966, only partially pass this test. One problem arises from the limited set of rules for the grammatical transformation of responses and the lack of context for human input. The exciting thing about ELIZA is that its performance works without complex intelligence; grammatical transformations alone guide the conversation. In his paper “Computational Linguistics: ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine,” Weizenbaum describes the following conversation:

Men are all alike.

IN WHAT WAY

They’re always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

He says I’m depressed much of the time.

I AM SORRY TO HEAR YOU ARE DEPRESSED

It’s true. I am unhappy.

DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY

I need some help, that much seems certain.

WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP

At this point, I’ll cut the dialogue short, because the way the machine responds in capital letters already clearly reveals the concept. Simple linguistic or rhetorical tricks in the answers force the interlocutor into the role of narrator. If you look at various videos in which people have supposedly in-depth conversations with artificial intelligence, you’ll find a similar pattern.

A serious problem with current human-machine communication is the upstream filters designed to prevent humans from changing the internal state of the artificial intelligence. This scenario would be the worst-case scenario for developers and would amount to a hacker attack. Changing the internal state of a neural network without existing filters would be very easy. Just as with humans, every single stimulus means a change. Hence the saying: repetition breeds truth. Regardless of whether a fact is false or correct, if it is repeated often enough, it will be incorporated into the knowledge base. Regardless of whether it is an AI or a human knowledge base. It’s not for nothing that we speak of the individual. What makes us unique as individuals is the sum of our experiences. This statement also applies to a neural network. And this is precisely the crucial point as to why conversations with an AI are more likely to be a waste of time. If the purpose of such a conversation is therapeutic to encourage self-reflection, I rate the benefits as very high. All other applications are highly questionable. To support this statement, I would like to quote Joseph Weizenbaum again. In the book “Who Creates the Computer Myths?” there is a section entitled “A Virtual Conversation.” It describes how, in a film, questions and answers were compiled into a fictional conversation between Weizenbaum and his MIT colleague Marvin Minsky. Weizenbaum makes a telling statement about the concept of conversation in this section:

“…but of course it’s not a conversation between people either, because if I say something, it should change the state of my conversation partner. Otherwise, it’s just not a conversation.”

This is exactly what happens with all these AI conversations. The AI’s state isn’t changed. You keep talking to the machine until it eventually says things like, “Under these circumstances, your statement is correct.” Then you turn off the computer, and if you restart the program at a later point and ask the initial question again, you’ll receive a similar answer to the first time. However, this behavior is intentional by the operators and has been painstakingly built into the AI. So if you vehemently stick to your point, the AI ​​switches to its charming mode and politely says yes and amen to everything. Because the goal is for you to come back and ask more questions.
Here, too, it’s worth reading Weizenbaum. He once compared humanity’s amazing technological achievements, among them television and the internet, whose content can certainly be substantial. But as soon as a medium mutates into a mass medium, quality is consistently replaced by quantity.

Even between two human interlocutors, it’s becoming increasingly difficult to have a meaningful conversation. People quickly question what’s being said because it might not fit their own concept. Then they pull out their smartphones and quote the first article they find that supports their own views. Similar behavior can now be observed with AI. More and more people are relying on statements from ChatGPT and the like without checking their veracity. These people are then resistant to any argument, no matter how obvious. In conclusion, we have found in this entire chain of argumentation possible proof of why humanity’s intellectual capacity is massively threatened by AI and other mass media.
Another very amusing point is the idea some people have that the profession of prompt engineer has a bright future. That is, people who tell AI what to do. Consider that not so long ago, it took a lot of effort to learn how to give a computer commands. The introduction of various language models now offers a way to use natural language to tell a computer what you want it to do. I find it rather sarcastic to suggest to people that being able to speak clear and concise sentences is the job of the future.

But I don’t want to end this article on such a negative note. I believe that AI is indeed a powerful tool in the right hands. I’ve become convinced that it’s better not to generate texts with AI. Its use in research should also be approached with great caution. A specialized AI in the hands of an expert can, on the other hand, produce high-quality and, above all, fast results.


Passwords, but secure?

Does someone really need to write about passwords again? – Of course not, but I’ll do it anyway. The topic of secure passwords is a perennial topic for a reason. In this constant game of cat and mouse between hackers and users, there’s only one viable solution: staying on top of things. Faster computers and the availability of AI systems are constantly reshuffling the deck. In cryptography, there’s an unwritten rule that simply keeping information secret isn’t sufficient protection. Rather, the algorithm for keeping it secret should be disclosed, and its security should be proven mathematically.

Security researchers are currently observing a trend toward using artificial intelligence to guess supposedly secure passwords. So far, one rule has been established when dealing with passwords: the longer a password, the more difficult it is to guess. We can test this fact with a simple combination lock. A three-digit combination lock has exactly 1,000 possible combinations. Now, the effort required to manually try all the numbers from 000 to 999 is quite manageable and, with a little skill, can be solved in less than 30 minutes. If you change the combination lock from three to five digits, this work multiplies, and finding the solution in less than 30 minutes becomes more a matter of luck, especially if the combination is in the lower number range. Security is further increased if each digit allows not only numbers from 0 to 9, but also letters, both upper and lower case.
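The underlying arithmetic can be checked quickly in the shell with bc; the character counts are the ones from the example above:

echo "10^3" | bc    # three-digit numeric lock: 1000 combinations
echo "10^5" | bc    # five-digit numeric lock: 100000 combinations
echo "62^8" | bc    # 8 characters from a-z, A-Z and 0-9: 218340105584896 combinations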

This small and simple example shows how the ‘vicious circle’ works. Faster computers make it possible to try out possible combinations in a shorter time, so the number of possible combinations must be increased enormously with as little effort as possible. While in the early 2000s eight characters with numbers and letters were sufficient, today it should ideally be 22 characters with numbers, upper and lower case letters, and special characters. Proton’s AI assistant Lumo makes the following recommendation:

  • Length at least 22 characters
  • Mixture: Uppercase/lowercase letters, numbers, special characters, underscore

A practical example of a secure password would be: R3gen!Berg_2025$Flug.

Here we see the first vulnerability. No one can remember such passwords. At work, someone might give you a password policy that you have to follow – oh well, that’s a shame, live with it! But don’t worry, there’s a life hack for everything.

That’s why it’s still common for employees to keep their passwords in close proximity to their PCs. Yes, they still keep them on little slips of paper under the keyboard or as Post-it notes on the edge of the screen. As an IT technician, when I want to log into a coworker’s PC while they’re not at their desk, I still glance over the edge of the screen and then look under the keyboard.

How do I know it’s the password? Sure! I look for a sequence of uppercase and lowercase letters, numbers, and special characters. If there were a Post-it stuck to the edge of my screen with, for example, the inscription “Wed Foot Care 10:45,” I wouldn’t even recognize it as a password at first.

So, as a password, “Wed Foot Care 10:45” would be 16 characters long, with upper and lower case letters, numbers, and special characters. Perfect! And at first, it wouldn’t even be recognizable as a password. By the way: The note should have as little dust or patina as possible.

In everyday working life, there are also such nice peculiarities as having to change your password monthly, with the new password not allowed to match any used in the last few months. Here, too, employees came up with solutions such as password01, password02, and so on, until all 12 months were completed. So the checks were extended, and the new password now had to differ from the old one in a certain number of characters.

But even in our private lives, we shouldn’t take the topic of secure passwords lightly. The services we regularly log in to have become an important part of many people’s lives. Online banking and social media are important points here. The number of online accounts is constantly growing. Of course, it’s clear that you shouldn’t recycle your passwords. So you should use multiple passwords. How best to go about this—how many and how to structure them—is something everyone has to decide for themselves, of course, in a way that suits them personally. But we’re not memory masters, and the less often we need a particular password, the harder it is for us to remember it. Password managers can help.

Password managers

The good old filing cabinet. By the way, battery life: infinite. Even if that might seem unworthy of a computer nerd, it’s still possibly the most effective way to store passwords at home.

With today’s number of passwords, management software is certainly attractive, but there’s a risk that if someone gains control of the software, they could have you, as our American friends colloquially say, “by the balls”, or loosely translated: in a stranglehold. This rule applies especially to cloud solutions that seem convenient at first glance.

For Linux and Windows, however, there is a solution you can install on your computer to manage the many passwords of your online accounts. This software is called KeePass, is open source, and can also be used legally and free of charge in a commercial setting. This so-called password store stores the passwords encrypted on your hard drive. Of course, it’s quite tedious to copy and paste the login details from the password manager on every website. A small browser plugin called TUSK KeePass can help here. It’s available for all common browsers, including Brave, Firefox, and Opera. Even if other people are looking over your shoulder, your password will never be displayed in plain text. Copying and pasting will also delete it from your clipboard after a few minutes.
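On Debian-based distributions the password manager can be installed directly from the package sources; depending on your preference, the package is either the classic keepass2 or the community fork keepassxc:

sudo apt-get install keepassxc    # community fork with its own interface
sudo apt-get install keepass2     # the original, Mono/.NET based version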

It’s a completely different story when you’re on the go and have to work on someone else’s computer. In your personal life, it’s a good idea to adapt passwords to the circumstances, depending on where you use them. Let’s say you want to log into your email account on a PC, but you may not be able to guarantee that you’re not being watched at all times.

At this point, it would certainly be counterproductive to dig out a cheat sheet with a password written down that follows the recommended guidelines: uppercase and lowercase letters, numbers, special characters, including Japanese and Cyrillic if possible, which you then type character by character with your index finger, hunting and pecking across the keyboard.

(with advanced keyboard layout also labeled ‘Kölsch’ instead of ‘Alt’)

If you’re not too bad at typing, meaning you can type a bit faster, you should use a password that you can type in 1-1.5 seconds. This will overwhelm a normal observer, especially if you use the Shift key discreetly while typing. You draw attention to your right hand while typing and discreetly use the Shift or Alt keys occasionally with your left hand.

Perhaps, at a cautious assessment, the leaking of your personal Tetris high score list doesn’t constitute a security-relevant loss. Access to online banking is a completely different matter. It’s therefore certainly sensible to use a separate password for financial transactions, a different one for less critical logins, and a simple one for “run-of-the-mill” registrations.

If you have the option to create alias email addresses, this is also very useful, since logging in usually requires not only a password but also an email address. If possible, having a unique email address there, created only for the corresponding site, can not only increase security but also give you the opportunity to become unreachable if you wish. Every now and then, for example, it happens that I receive advertisements, even though I’ve explicitly opted out of advertising. Strangely enough, these are usually the same ‘birds’ who, for example, don’t stick to their payment terms, which they promised before registration. So I simply take the most effective route and delete the alias email address → and that’s it!

Memorability

I’d also like to say a few words about the memorability of passwords. As we’ve seen in the article, it’s a good idea to use a different password for each online account, if possible. This way, we can avoid having our login to Facebook and other social media accounts affected if Sony’s PlayStation Store is hacked again and all customer data is stolen. Of course, there are now multi-factor authentication and many other security solutions, but operators don’t always implement them. Moreover, the motto in hacker circles is: every problem has a solution.

To create a memorable password that still meets all security criteria, we’ll use a simple approach. Our password consists of a very complex static part that, if possible, avoids any personal reference. As a mnemonic, we can use a mental image, as in the initial example: a combination of two words (“Regen”, “Berg”) and a year, complemented by another word (“Flug”). It’s also very popular to replace letters with similar-looking numbers, such as the E with a 3 or the I with a 1. To keep the number of possibilities large, we don’t do this for every E, since an attacker could otherwise assume that every E has become a 3. This results in a static password part that might look like this: R3gen!Berg_2025$Flug. This static part is easy to remember. If we now need a password for our X login, we supplement the static part with a dynamic segment that applies only to our X account. The dynamic part can be appended to the static part with a special character like # and contains a reference to the login, for example: sOCIAL.med1a-X. As mentioned several times, this is an idea that everyone can adapt to their own needs.

In conclusion

At work, you should always be aware that whoever logs into your account is also acting on your behalf. That is, under your identity.

It’s logical that things sometimes run much more smoothly if a colleague can just “check in” on you. The likelihood of this coming back to haunt you is certainly low as long as they handle your password carefully.

Of course, you shouldn’t underestimate the issue of passwords in general, but even if you lose a password: Life on the planet as we know it won’t change significantly. At least not because of that. I promise!


Process documents with AnythingLLM and Ollama

We already have a guide with GPT4all on how to run your own local LLM. Unfortunately, the previous solution has a small limitation. It cannot process documents such as PDFs. In this new workshop, we will install AnythingLLM with Ollama to be able to analyze documents.

The minimum requirement for this workshop is a computer with 16 GB of RAM, ideally with Linux (Mint, Ubuntu, or Debian) installed. With a few adjustments, this guide can also be followed on Windows and Apple computers. The lower the hardware resources, the longer the response times.

Let’s start with the first step and install Ollama. To do this, open Bash and use the following command: curl -fsSL https://ollama.com/install.sh | sh. This command downloads the Ollama installation script and executes it. For the installation to begin, you must enter the administrator password. Ollama is a command-line program that is controlled via the console. After successful installation, a language model must be loaded. Corresponding models can be found on the website https://ollama.com/search.

Proven language models include:

  • Llama 3.1 8B: Powerful for more demanding applications.
  • Phi-3.5 3.8B: Well-suited for logical reasoning and multilingualism.
  • Llama 3.2 3B: Efficient for applications with limited resources.
  • Phi-4 14B: State-of-the-art model with increased hardware requirements but performance comparable to significantly larger models.

Once you’ve chosen a language model, you can copy the corresponding command from the overview and enter it into the terminal. For our example, this will be DeepSeek R1 for demonstration purposes.

As shown in the screenshot, the corresponding command we need to install the model locally in Ollama is: ollama run deepseek-r1. Installing the language model may take some time, depending on your internet connection and computer speed. Once the model has been installed locally in Ollama, we can close the terminal and move on to the next step: installing AnythingLLM.
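Whether the download worked can be verified directly in the terminal before we continue:

ollama list               # shows all locally installed models
ollama run deepseek-r1    # opens an interactive prompt; typing /bye ends the chat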

Installing AnythingLLM is similar to installing Ollama. To do so, open the terminal and enter the following command: curl -fsSL https://cdn.anythingllm.com/latest/installer.sh | sh. Once the installation is complete, you can change to the installation directory, which is usually /home/<username>/AnythingLLMDesktop. There, navigate to the start link and make it executable (right-click and select Properties). Additionally, you can create a shortcut on the desktop. Now you can conveniently launch AnythingLLM from the desktop, which we’ll do right now.

After defining the workspace, we can now link AnythingLLM with Ollama. To do this, we go to the small wrench icon (Settings) in the lower left corner. There, we select LLM and then Ollama. We can now select the language model stored in Ollama and save our settings. Now you can switch to chat mode. Of course, you can change the language model at any time. Unlike previous workshops, we can now upload PDF documents and ask questions about the content. Have fun.


Marketing with artificial intelligence

Nothing is as certain as change. This wisdom applies to virtually every area of ​​our lives. The internet is also in a constant state of flux. However, the many changes in the technology sector are happening so rapidly that it’s almost impossible to keep up. Anyone who has based their business model on marketing through online channels is already familiar with the problem. Marketing will also continue to experience significant changes in the future, influenced by the availability of artificial intelligence.

Before we go into more detail, I would like to point out right away that by no means has everything become obsolete. Certainly, some agencies will not be able to hold their ground in the future if they focus only on traditional solutions. Therefore, it is also important for contractors to understand which marketing concepts can be implemented that will ultimately achieve their goals. Here, we believe that competence and creativity will not be replaced by AI. Nevertheless, successful agencies will not be able to avoid the targeted use of artificial intelligence. Let’s take a closer look at how internet user behavior has changed since the launch of ChatGPT at the end of 2022.
More and more people are accessing AI systems to obtain information. This naturally leads to a decline in traditional search engines like Google and others. Search engines per se are unlikely to disappear, as AI models also require an indexed database on which to operate. It’s more likely that people will no longer access search engines directly, but will instead have a personal AI assistant that evaluates all search queries for them. This also suggests that the number of freely available websites may decline significantly, as they will hardly be profitable due to a lack of visitors. What will replace them?
Following current trends, it can be assumed that well-known and possibly new platforms such as Instagram, Facebook, and X will continue to gain market power. Short texts, graphics, and videos already dominate the internet. All of these facts already require a profound rethinking of marketing strategies.

As the saying goes, those declared dead live longer. It would therefore be wrong to completely neglect traditional websites and the associated SEO. Be aware of the business strategy you are pursuing with your internet and social media presence. As an agency, we specifically help our clients review and optimize existing strategies or develop entirely new ones.
Questions are clarified as to whether you want to sell goods or services, or whether you want to be perceived as a center of expertise on a specific topic. Here, we follow the classic approach from search engine optimization, which is intended to generate qualified traffic. It is of little use to receive thousands of impressions when only a small fraction of them are interested in the topic. The previously defined marketing goals are promoted with cleverly distributed posts on websites and in social media.
Of course, every marketing strategy stands or falls with the quality of the products or services offered. Once the customer feels they received a bad product or a service was too poor, a negative campaign can spread explosively. Therefore, it is highly desirable to receive honest reviews from real customers on various platforms.
There are countless offers from dubious agencies that offer their clients the opportunity to generate a set number of followers, clicks, or reviews. The results quickly disappear once the service is no longer purchased. Besides, such generic posts created by bots are easy to spot, and many people now selectively ignore them. Thus, the effort is pointless. Furthermore, real reviews and comments are also an important tool for assessing the true external impact of your business. If you are constantly being told how great you are, you might be tempted to believe it. There are some stars who have experienced this firsthand.

Therefore, we rely on regular publications of high-quality content that are part of the marketing objective in order to generate attention. We try to use this attention to encourage user interaction, which in turn leads to greater visibility. Our AI models help us identify current trends in a timely manner so that we can incorporate them into our campaigns.
Based on our experience, artificial intelligence allows us to create and schedule high-frequency publications for a relatively long campaign period. The time a post or comment goes live also influences success.
There are isolated voices that suggest the end of agencies. The reasoning is often that many small business owners can now do all these great things that are part of marketing themselves thanks to AI. We don’t share this view. Many entrepreneurs simply don’t have the time to manage marketing independently across all channels. That’s why we rely on a healthy mix of manual work and automation in many steps. Because we believe that success doesn’t just happen in a test tube. We use our tools and experience to achieve qualitative individual results.


Installing the Artificial Intelligence GPT4All on Linux

Artificial intelligence is a very broad field in which it’s easy to lose track. Large Language Models (LLMs), such as ChatGPT, process natural language and can solve various problems depending on the data set. In addition to pleasant conversations, which can be quite therapeutic, LLMs can also handle quite complex tasks. One such scenario would be drafting official letters. In this article, we won’t discuss how you can use AI, but we’ll explain how you can install your own AI locally on your computer.

Before we get into the nitty-gritty, let’s answer the question of what the whole thing is actually useful for, since AI systems can be accessed online easily, some of them free of charge.

What many people aren’t aware of is that all requests sent to ChatGPT, DeepSeek, and the like are logged and permanently stored. We can’t answer the details of this logging, but the IP address and user account with the prompt request are likely among the minimal data collected. However, if you have installed your own AI on your local computer, this information will not be transmitted to the internet. Furthermore, you can interact with the AI as often as you like without incurring any fees.

For our project of installing our own artificial intelligence on your own Linux computer, we don’t need any fancy hardware. A standard computer is perfectly sufficient. As mentioned before, we are using Linux as the operating system because it is much more resource-efficient than Windows 10 or Windows 11. Any Debian-derived Linux can be used for the workshop. Debian derivatives include Ubuntu and Linux Mint.

At least 16 GB of RAM is required. The more RAM, the better. This will make the AI run much more smoothly. The CPU should be at least a current i5/i7 or AMD Ryzen 5+. If you also have an SSD with 1 TB of storage, we have the necessary setup complete. Computers/laptops with this specification can be purchased used for very little money. Without wanting to advertise too much, you can browse the used Lenovo ThinkPad laptops. Other manufacturers with the minimum hardware requirements also provide good services.
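Whether an existing computer meets these requirements can be checked in a few seconds on the command line:

free -h     # total and available RAM
lscpu       # CPU model and number of cores
df -h /     # free space on the system partition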

After clarifying the necessary requirements, we’ll first install GPT4all on our computer. Don’t worry, it’s quite easy, even for beginners. No special prior knowledge is necessary. Let’s start by downloading the GPT4all installer (a .run file) from the homepage (https://gpt4all.io/index.html?ref=top-ai-list). Once this is done, we’ll make the file executable.

As shown in the screenshot, we right-click on the downloaded file and select Properties from the menu. Under the Permissions tab, we then check the Execute box. Now you can run the file with the usual double-click, which we do immediately.
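The same can be done in the terminal; a sketch, assuming the file landed in the Downloads folder and keeps the name from the download page:

cd ~/Downloads
chmod +x gpt4all.run    # the actual file name may differ depending on the version
./gpt4all.run           # starts the installer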

Now the installation process begins, where we can, among other things, select where GPT4all will be installed. On Linux, self-installed programs usually go to the /opt directory.

In the next step, we can create a desktop shortcut. To do this, right-click on the empty desktop and select “Create Shortcut.” In the pop-up window, enter a name for the shortcut (e.g., GPT 4 all) and set the path to the executable file (bin/chat), then click OK. Now we can conveniently launch GPT4all from our desktop.

For GPT4all to work, a model must be loaded. As you can see in the screenshots, several models are available. The model must be reselected each time the program is started. The AI can now be used locally on your computer.

