High-performance hardware under Linux for local AI applications


Anyone wanting to experiment with local LLMs will quickly discover their limitations. Not everyone has a massively upgraded desktop PC with 2 TB of RAM and a CPU that could fry an egg under full load. A laptop with 32 GB of RAM, or in my case a Lenovo P14s with 64 GB of RAM, is more typical. Even this generous configuration often fails to load a more demanding AI model, since 128 GB of RAM is a common requirement for many of them. And you can't upgrade the RAM in current laptops, because the chips are soldered directly onto the motherboard. The same problem applies to the graphics card, of course. That's why I've made it a habit, when buying a laptop, to configure it with almost all the available options, hoping to be set for five to eight years. The quality of the Lenovo ThinkPad series in particular hasn't disappointed me in this regard. My current system is about two years old and still running reliably.

I’ve been using Linux as my operating system for years, and I’m currently running Debian 13. Compared to Windows, Linux and Unix distributions are significantly more resource-efficient and don’t use their resources for graphical animations and complex gradients, but rather provide a powerful environment for the applications they’re used in. Therefore, my urgent advice to anyone wanting to try local LLMs is to get a powerful computer and run Linux on it. But let’s take it one step at a time. First, let’s look at the individual hardware components in more detail.

Let’s start with the CPU. LLMs, CAD applications, and even computer games all perform calculations that can be processed very effectively in parallel. For parallel calculations, the number of available CPU cores is a crucial factor. The more cores, the more parallel calculations can be performed.

Of course, the processors need to be able to access the data for these calculations quickly. This is where RAM comes into play: the more RAM is available, the more data can be held ready for computation. Affordable laptops with 32 GB of RAM are already available, although the purchase price rises steeply with larger configurations. While there are certainly some high-end gaming devices in the consumer market, I wouldn't recommend them due to their typically short lifespan and comparatively high price.

The next logical step in the hardware chain is the storage drive. Even SATA SSDs significantly accelerate data transfer to RAM, but there is still room for improvement: fourth-generation (PCIe Gen 4) NVMe drives with 2 TB of capacity or more can reach read speeds of up to 7,000 MB/s.
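To get a feel for what those read speeds mean in practice, here is a rough back-of-the-envelope comparison of how long loading a large model file from disk into RAM takes at typical sequential read rates. The speed figures are ballpark assumptions, not measurements.

```python
# Rough comparison: time to load a 14 GB model file (e.g. a 7B model
# in BF16) from disk into RAM at typical sequential read speeds.
# The MB/s figures are ballpark assumptions, not benchmarks.

MODEL_SIZE_GB = 14

drives = {
    "SATA SSD": 550,         # MB/s, typical sequential read
    "NVMe PCIe Gen 3": 3500, # MB/s
    "NVMe PCIe Gen 4": 7000, # MB/s
}

def load_time_seconds(size_gb: float, speed_mb_s: float) -> float:
    """Time to read size_gb at speed_mb_s (using 1 GB = 1000 MB)."""
    return size_gb * 1000 / speed_mb_s

for name, speed in drives.items():
    print(f"{name}: {load_time_seconds(MODEL_SIZE_GB, speed):.1f} s")
```

On these numbers, a Gen 4 NVMe drive loads the same model roughly twelve times faster than a SATA SSD, which you feel every time a model is (re)loaded.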

We have some issues with graphics cards in laptops. Due to their size and the required performance, the graphics cards built into laptops are more of a compromise than a true highlight. A good graphics card would be ideal for parallel calculations, such as those performed in LLMs (Large Language Models). As a solution, we can connect the laptop to an external graphics card. Thanks to miners in the crypto community, considerable experience has already been gained in this area. However, to connect an external graphics card to the laptop, you need a port that can handle that amount of data. USB 3 is far too slow for our purposes and would severely limit the advantages of the external graphics card due to its low data rate.

The solution to our problem is Thunderbolt. Thunderbolt ports look like USB-C, but are significantly faster. You can identify Thunderbolt by the small lightning bolt symbol (see Figure 1) on the cables or connectors. These are not the power supply connections. To check if your computer has Thunderbolt, you can use a simple Linux shell command.

ed@local:~$ lspci | grep -i thunderbolt
00:07.0 PCI bridge: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port #0
00:07.2 PCI bridge: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port #2
00:0d.0 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 USB Controller
00:0d.2 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI #0
00:0d.3 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI #1

In my case, the output shows that two Thunderbolt 4 ports are available.
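To put some numbers behind the claim that USB 3 is too slow for an eGPU, here is a small sketch comparing nominal link bandwidths. The figures are nominal bit rates; real-world throughput is lower due to protocol overhead.

```python
# Nominal link bandwidth of common laptop interconnects.
# Figures are nominal bit rates; actual throughput is lower
# due to protocol and encoding overhead.

links = {
    "USB 3.2 Gen 1": 5,     # Gbit/s
    "USB 3.2 Gen 2": 10,    # Gbit/s
    "Thunderbolt 3/4": 40,  # Gbit/s
    "PCIe 4.0 x16": 256,    # Gbit/s, desktop slot, for comparison
}

def gbit_to_mb_per_s(gbit_s: float) -> float:
    """Convert Gbit/s to MB/s (1 Gbit = 125 MB)."""
    return gbit_s * 125

for name, rate in links.items():
    print(f"{name}: {gbit_to_mb_per_s(rate):.0f} MB/s")
```

Thunderbolt's 40 Gbit/s is still well below a desktop x16 slot, which is why eGPUs lose some performance compared to an internal card, but it is enough to make an external GPU worthwhile.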

To connect an external graphics card, we need a mounting system onto which a PCI card can be inserted. ANQUORA offers a good solution here with the ANQ-L33 eGPU Enclosure. The board can accommodate a graphics card with up to three slots. It costs between €130 and €200. A standard ATX power supply is also required. The required power supply wattage depends on the graphics card’s power consumption. It’s advisable not to buy the cheapest power supply, as the noise level might bother some users. The open design of the board provides ample flexibility in choosing a graphics card.
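For sizing the ATX power supply, a simple rule of thumb is to take the graphics card's TDP, add some overhead for the board and fans, and leave generous headroom. The overhead and headroom values below are my own illustrative assumptions, not vendor recommendations.

```python
# Hedged sketch: estimating ATX power supply wattage for an eGPU.
# The 50 W board overhead and 50 % headroom are illustrative
# assumptions, not vendor specifications.

def recommended_psu_watts(gpu_tdp_w: float, headroom: float = 0.5) -> int:
    """GPU TDP plus board/fan overhead, with safety headroom,
    rounded up to the next 50 W step."""
    base = gpu_tdp_w + 50            # ~50 W for board electronics and fans
    total = base * (1 + headroom)    # headroom for load spikes
    return int(-(-total // 50) * 50) # ceiling to a 50 W step

print(recommended_psu_watts(220))  # mid-range card -> 450
print(recommended_psu_watts(350))  # high-end card -> 600
```

A power supply running well below its rated load also tends to run quieter, which matters with the open enclosure design.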

Selecting a graphics card is a whole other topic. Since I use Linux as my operating system, I need a graphics card that is supported by Linux. For accelerating LLMs, a graphics card with as many GPU cores as possible and a correspondingly large amount of internal memory is necessary. To make the purchase worthwhile and actually notice a performance boost, the card should be equipped with at least 8 GB of RAM. More is always better, of course, but the price of the card will then increase exorbitantly. It’s definitely worth checking the used market.

If you add up all the costs, the investment for an external GPU amounts to at least 500 euros. Naturally, this only includes an inexpensive graphics card. High-end graphics cards can easily exceed the 500-euro price point on their own. Anyone who would like to share their expertise in the field of graphics cards is welcome to contribute an article.

To avoid starting your shopping spree blindly and then being disappointed with the result, it's highly advisable to consider beforehand what you want to do with the local LLM. Coding assistance requires less processing power than generating images and audio. Those who use LLMs professionally can save considerably by running self-hosted models on a high-end graphics card compared to the subscription costs of cloud-based AI coding services. The resource requirements of an LLM depend primarily on its number of parameters: the more parameters, the more accurate the responses and the more computing power is required. Accuracy is further differentiated by the numerical precision of the weights:

  • FP32 (Single-Precision Floating Point): Standard precision, requires the most memory (32 bits / 4 bytes per parameter).
  • FP16 (Half-Precision Floating Point): Halves the memory requirement compared to FP32, but can slightly reduce accuracy (16 bits / 2 bytes per parameter).
  • BF16 (Brain Floating Point): Another 16-bit format, often preferred in deep learning because it keeps the dynamic range of FP32 (16 bits / 2 bytes per parameter).
  • INT8/INT4 (Integer Quantization): Even lower precision, drastically reduces memory requirements and speeds up inference, but can lead to a greater loss of accuracy (8 bits / 1 byte or 4 bits / half a byte per parameter).

Other factors influencing the hardware requirements for LLM include:

  • Batch Size: The number of input requests processed simultaneously.
  • Context Length: The maximum length of text that the model can consider in a query. Longer context lengths require more memory because the entire context must be held in memory.
  • Model Architecture: Different architectures have different memory requirements.
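Context length and batch size drive memory use mainly through the KV cache, which stores a key and a value vector per layer for every token in flight. The sketch below estimates its size; the layer count and hidden dimension are assumptions roughly matching a 7B-class transformer, for illustration only.

```python
# Hedged sketch: KV cache memory as a function of context length and
# batch size. The architecture numbers (32 layers, hidden dim 4096)
# are assumptions roughly matching a 7B-class transformer.

def kv_cache_gb(context_len: int, batch_size: int,
                n_layers: int = 32, hidden_dim: int = 4096,
                bytes_per_value: int = 2) -> float:
    """KV cache in GB: 2 tensors (K and V) per layer and token,
    each of size hidden_dim, stored at bytes_per_value (FP16/BF16)."""
    total_bytes = (2 * n_layers * hidden_dim
                   * context_len * batch_size * bytes_per_value)
    return total_bytes / 1e9

print(f"{kv_cache_gb(4096, 1):.1f} GB")   # one request, 4k context
print(f"{kv_cache_gb(32768, 4):.1f} GB")  # four requests, 32k context
```

The cache grows linearly in both context length and batch size, which is why long-context, multi-user setups need so much more VRAM than a single short-context chat.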

To estimate the memory consumption of a model, you can use the following calculation: Parameters * Accuracy = Memory consumption for the model.

7,000,000,000 parameters * 2 bytes/parameter (BF16) = 14,000,000,000 bytes = 14 GB
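The same rule of thumb can be written as a small helper covering the common precisions, so you can quickly compare quantization options for a given model size.

```python
# The rule of thumb from the text: parameters * bytes per parameter.
# Only weight memory is estimated; activations and the KV cache
# come on top at runtime.

BYTES_PER_PARAM = {
    "FP32": 4.0,
    "FP16": 2.0,
    "BF16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,
}

def model_memory_gb(n_params: float, precision: str) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return n_params * BYTES_PER_PARAM[precision] / 1e9

print(model_memory_gb(7e9, "BF16"))  # 14.0, as in the calculation above
print(model_memory_gb(7e9, "INT4"))  # 3.5
```

This also shows why quantization matters so much in practice: the same 7B model shrinks from 14 GB in BF16 to 3.5 GB in INT4, fitting comfortably into an 8 GB card.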

When considering hardware recommendations, you should refer to the model’s documentation. This usually only specifies the minimum or average requirements. However, there are general guidelines you can use.

  • Small models (up to 7 billion parameters): A GPU with at least 8 GB of VRAM should be sufficient, especially if you are using quantization.
  • Medium-sized models (7-30 billion parameters): A GPU with 16 GB to 24 GB of VRAM is recommended.
  • Large models (over 30 billion parameters): Multiple GPUs, each with at least 24 GB of VRAM, or a single GPU with a very large amount of VRAM (e.g., 48 GB, 80 GB) are required.
  • CPU-only: For small models and simple experiments, the CPU may suffice, but inference will be significantly slower than on a GPU. Here, a large amount of RAM (32 GB or more) is crucial.
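The guidelines above can be condensed into a small lookup function. The thresholds follow the rough tiers listed here; treat them as starting points, not hard limits.

```python
# The VRAM sizing tiers from the text as a lookup function.
# Thresholds are the rough guidelines above, not hard limits.

def vram_recommendation(n_params_billion: float) -> str:
    if n_params_billion <= 7:
        return "single GPU, >= 8 GB VRAM (quantization helps)"
    if n_params_billion <= 30:
        return "single GPU, 16-24 GB VRAM"
    return "multiple GPUs with >= 24 GB each, or one 48-80 GB GPU"

print(vram_recommendation(7))   # small model
print(vram_recommendation(13))  # medium model
print(vram_recommendation(70))  # large model
```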

We can see that running LLMs locally is quite realistic if you have the necessary hardware. It doesn't always have to be a supercomputer; however, most off-the-shelf machines from typical electronics retailers aren't really suitable. With this article, I hope to have laid the groundwork for your own experiments.


This entry was posted in Articles by Elmar Dott.
About Elmar Dott

Elmar Dott has been implementing large web applications as a freelance consultant in international projects for over 20 years. His focus is on DevOps, configuration management, software architectures & release management. As a trainer, he shares his knowledge in training courses and also speaks regularly on current topics at conferences.
