It doesn’t always have to be Kali Linux!

Kali Linux [1] and Parrot Linux [2] are considered the first choice among Linux distributions when it comes to security and penetration testing. Many relevant programs are already preinstalled on these distributions and can be used out of the box, so to speak.

However, due to their specialization, Kali and Parrot are not necessarily the most suitable Linux distributions for everyday use. For daily work, Ubuntu (for beginners) and Debian (for advanced users) are more common. For this reason, Kali and Parrot are usually set up and used as virtual machines with VirtualBox or VMware Player. This is a very practical approach, especially if you want to take a look at a distribution before installing it natively on your computer.

In my opinion, the so-called distribution hopping that some Linux users engage in is more of a hindrance to getting used to a system well enough to work with it efficiently. Which Linux you choose depends primarily on your own taste and on what you want to do with it. Developers and system administrators will likely lean toward Debian, the distribution from which many others were derived. Windows switchers often enjoy Linux Mint, and the list goes on.

If you want to feel like a hacker, you can opt for a Kali installation. Often, though, the actual motives are privacy and anonymous surfing on the Internet. I have already introduced Kodachi Linux, which specializes in anonymous surfing. Of course, it must be made very clear that there is no truly anonymous communication on the Internet. However, you can massively reduce the number of possible eavesdroppers with a few easy-to-implement measures. I have addressed the topic of privacy in several articles on this blog, even if that is an unpopular opinion for many. But a Linux VM used for anonymous surfing on top of an Apple or Windows host operating system defeats its own purpose.

The first point in the “privacy” section is the internet browser. No matter which one you use, and however much the various manufacturers emphasize privacy protection, the reality resembles the fairy tale “The Emperor’s New Clothes”. Most users know the Tor/Onion network by name. Behind it is the Tor Browser, which you can easily download from the Tor Project website [3]. After downloading and unpacking the archive, the Tor Browser can be started using the start script on the console.

./Browser/start-tor-browser

Anyone using the Tor network can visit URLs ending in .onion. A large number of these sites are known as the so-called dark web and should be surfed with great caution. You can come across very disturbing and illegal content here, but you can also fall victim to phishing attacks and the like. Without going into too much detail about exactly how the Tor network works, you should be aware that you are not completely anonymous here either. Even if the big tech companies are largely ignored, authorities certainly have resources and options, especially when it comes to illegal actions. There are enough examples of this in the relevant press.

If you now think about how the Internet works in broad terms, you will find the next important point: proxy servers. A proxy server is an intermediary that, similar to the Tor network, does not send your requests directly to the target website, but via a third-party server that forwards the request and returns the answer. For example, if you access the Google website via a proxy, Google only sees the IP address of the proxy server. Your own provider, in turn, only sees that you sent a request to that proxy; it does not see in its log files that the proxy then made a request to Google. Only the proxy server appears on both sides, at the provider and at the target website. As a rule, proxy operators claim not to store logs containing their clients’ original IP addresses; unfortunately, there is no guarantee for such statements. To further reduce the probability of being identified, you can chain several proxy connections in series. The console program proxychains makes this easy to implement and is quickly installed on Debian-based distributions using the APT package manager.

sudo apt-get install proxychains4

Using it is just as easy. The behavior of proxychains is specified via the configuration file /etc/proxychains4.conf. If you change the working mode from strict_chain to random_chain, a different combination of proxy servers is randomly assembled for each connection. At the end of the configuration file you enter the individual proxy servers; some examples are included in the file. To use proxychains, you simply call it on the console, followed by the application (e.g. the browser) that should establish its Internet connection via the proxies.

proxychains4 firefox

The configuration file also contains hints such as the following excerpt, which becomes relevant if the proxied application needs to connect to localhost:

## RFC6890 Loopback address range
## if you enable this, you have to make sure remote_dns_subnet is not 127
## you'll need to enable it if you want to use an application that
## connects to localhost.
# localnet 127.0.0.0/255.0.0.0
# localnet ::1/128

The real challenge is finding suitable proxy servers. To get started, you can find a large selection of free proxies worldwide at [4].

Using proxies alone for connections to the Internet offers only limited anonymity. For two computers to communicate, an IP address is required, and the Internet access provider can link that address to the geographical location where the connection is registered. In addition, within the local network (for example a public WiFi), the network card announces its so-called MAC address, which directly identifies a device. Since you cannot install a new network card every time you want a different MAC address, there is a small, simple tool called macchanger. Like proxychains, it is easily installed via APT. After installation you can enable it at startup and decide whether to always use the same spoofed MAC address or a randomly generated one each time.
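As a minimal sketch of what such a setup looks like in practice: the interface name eth0 is an assumption (check yours with `ip link`), and the function is only defined here, not executed.

```shell
# Hedged sketch: randomize the MAC address of a network interface with macchanger.
# The interface name "eth0" is an assumption; adjust it to your system.
randomize_mac() {
  iface="${1:-eth0}"
  sudo ip link set "$iface" down   # the interface must be down to change its MAC
  sudo macchanger -r "$iface"      # -r assigns a fully random MAC address
  sudo ip link set "$iface" up
}
# Usage (run manually, requires root): randomize_mac eth0
```

Running `macchanger -s eth0` afterwards shows the current and the permanent MAC side by side, so you can verify the change took effect.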

Of course, the measures presented so far are only of any use if the connection to the Internet is encrypted. This happens via the so-called Secure Sockets Layer (SSL), or rather its modern successor, Transport Layer Security (TLS). If you do not connect via a VPN and the websites you access only use http instead of https, anyone with a packet sniffer (e.g. the Wireshark program) can record the communication and read its content in plain text. This is how passwords and confidential messages are spied on in public networks (WiFi). We can safely assume that Internet providers run all of their customers’ traffic through so-called packet filters in order to detect suspicious activity. With https connections, these filters cannot look inside the packets.
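You can verify this difference yourself with the classic command-line sniffer tcpdump. The sketch below only defines a helper; the interface name wlan0 is an assumption, and it should of course only be used on networks you own or are authorized to analyze.

```shell
# Hedged sketch: capture unencrypted HTTP traffic and print payloads as ASCII.
# The interface name "wlan0" is an assumption; requires root privileges.
capture_http() {
  iface="${1:-wlan0}"
  sudo tcpdump -i "$iface" -A 'tcp port 80'   # -A prints packet contents as ASCII
}
# Capturing 'tcp port 443' (HTTPS) the same way shows only encrypted bytes.
```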

Now you might come up with the idea of illegally connecting to someone else’s network using all the measures described so far. After all, nobody knows you are there, and all Internet activity is attributed to the connection’s owner. For this reason, I would like to point out expressly that in pretty much all countries such actions are punishable by law, and if you are caught you can quickly end up in prison. If you would like to learn more about WiFi security in order to protect your own network from illegal access, you will find a detailed workshop on Aircrack-ng in the members’ area (subscription).

The next item on the privacy list is email. For most people, running their own email server is simply not feasible: the effort is enormous and hardly cost-effective. That is why offers from Google, Microsoft and the like to provide an email service are gladly accepted. Anyone who does not use such a service via a local client and does not cryptographically encrypt the emails they send can be sure that the email provider scans and reads them. Without exception! Since configuring a mail client with working encryption is a geek topic, just like running your own mail server, the options here are very limited. One of the few solutions is the Swiss provider Proton [5], which also offers free email accounts. Proton promotes the protection of its customers’ privacy and implements this through strict encryption. Everyone has to decide for themselves whether they should still send confidential messages via email at all. Of course, this also applies to the popular messengers, which are now widely used for telephony as well.
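The cryptographic encryption mentioned above is typically done with OpenPGP via GnuPG. A minimal sketch, assuming the recipient address is a hypothetical placeholder and their public key has already been imported into your keyring:

```shell
# Hedged sketch: encrypt a message with GnuPG (OpenPGP) before handing it to a
# mail client. The address "alice@example.org" and the file name "message.txt"
# are hypothetical placeholders.
encrypt_mail() {
  gpg --encrypt --armor --recipient "alice@example.org" message.txt
  # produces message.txt.asc, ASCII-armored text that fits into any mail body
}
```

Note that OpenPGP encrypts only the message body; subject lines and metadata (sender, recipient, timestamps) remain readable for the provider.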

Many people have googled themselves to find out what digital traces they have left behind on the Internet. Of course, this only scratches the surface, as HR departments at larger companies and corporations use more effective methods. Maltego is a very professional tool for this, but there are also powerful open-source tools that can reveal a great deal. There is also a corresponding workshop for subscribers on this subject. Because once you find your traces, you can also start to cover them up.

As you can see, the topic of privacy and anonymity is very extensive and is only covered superficially in this short article. Nevertheless, the depth of information is sufficient to get a first impression of the matter. It is not nearly enough to set up a system like Kali if you do not know the basics needed to use the tools correctly. If you do not put the different pieces of the puzzle together accurately, the hoped-for gain in privacy through anonymity on the Internet will fail to materialize. This article also explains, on a technical level, my personal view of why there is no such thing as secure, anonymous electronic communication. Anyone who wants to familiarize themselves with the topic will achieve success more quickly with a sensible strategy and a system of their own that is gradually expanded than with a ready-made all-round tool like Kali Linux.

Resources

Abonnement / Subscription

[English] This content is only available to subscribers.

[Deutsch] Diese Inhalte sind nur für Abonnenten verfügbar.

Risk Cloud & Serverless

The cloud is one of the most innovative developments since the turn of the millennium and enables us to make widespread use of neural networks such as the popular Large Language Models (LLMs). This technological leap can only be surpassed by quantum computing. But enough of the buzzwords for SEO optimization; let’s take a look behind the scenes instead. Let’s start with what the cloud actually is, putting all the marketing terms aside.

The best way to imagine the cloud is as a gigantic supercomputer made up of many small computers like building blocks. This theoretically allows you to combine any amount of CPU power, RAM and hard drive space. On this supercomputer, which runs in a data center, virtual machines can now be provided that simulate a real computer with freely definable hardware. In this way, the physical hardware resources can be optimally distributed among the provided virtual machines.

When it comes to the cloud, we roughly distinguish between three operating levels: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). The image below gives an idea of how these levels are divided.

To put it simply, with IaaS the provider only supplies the hardware: CPU, RAM, hard drive and Internet connection. Via administration software such as Kubernetes, you then create your own virtual machines or containers and install the operating systems and services yourself. The entire responsibility for security and network routing lies with the customer. PaaS, on the other hand, already provides a rudimentary virtual machine including the selected operating system. What you install on this system above the operating-system level is up to you; but here too, security is largely in the hands of the customer. For most hosting providers, typical PaaS products are so-called virtual servers. Users have the least freedom with SaaS. Here you usually only have permission to use software through a user account. Very typical SaaS products are email accounts, but also so-called managed servers, which are mostly used to host your own websites. Here the versions of the programming language and the database are specified by the server operator.

Managed servers in particular have a long tradition. They emerged at the turn of the millennium to provide an immediately usable environment for dynamic PHP websites with a MySQL database connection. The situation is similar with the serverless products that have recently become fashionable. Depending on your level of experience, you can now buy corresponding products from the major providers AWS, Google and Microsoft Azure.

The idea is to no longer operate your own servers for the services and thus outsource the entire hardware, operation and security effort to the cloud operators. In principle, this isn’t a bad idea, especially when it comes to small companies or startups that don’t have a lot of financial resources at their disposal or simply lack the administrative know-how for networks, Linux and server security.

Of course, serverless offerings that are completely managed externally quickly reach their limits. Especially if you want to deploy your own individually developed serverless software in the cloud with as little effort as possible, you will come across many a stumbling block. A frequent problem is flexible expandability when requirements change. You can certainly buy products from the various providers’ portfolios and combine them like a building-block set, but the costs incurred can quickly add up.

Basically, there is nothing wrong with a pay-per-use model, i.e. paying for what you actually use. At first glance, this is not a bad solution for people and organizations with small budgets. But here too, it is the little details that can quickly grow into serious problems.

If you choose any cloud provider, you are well advised to avoid its proprietary management and automation products and to use established, vendor-neutral products instead wherever possible. If you commit yourself to one provider with all the consequences, switching to another provider will only be possible with great effort. Changes to the terms and conditions or continuously increasing costs are possible reasons for such a forced change. Therefore: test carefully before you bind yourself forever.

But careless use of resources in cloud systems, e.g. due to incorrect configurations or unfavorable deployment strategies, can also lead to an explosion in costs. Here you are well advised to set limits and alerts wherever the option exists, so that once you reach a certain amount you are informed that only a certain quota remains. Especially with highly available services that suddenly receive an enormous number of new users, such limits can quickly lead to the service being cut off from the network. It is therefore always a good idea to use two cloud environments, one for development and a separate one for the production system, in order to minimize the offline risk.

Similar to stock market trading, you can also define limits for cloud services like AWS. Stop-loss orders on the stock market prevent you from selling a stock too cheaply or buying it too expensively. With the pay-per-use model, it is not much different in the cloud: you need to set appropriate limits with your provider to prevent bills from exceeding your available budget. These limits are dynamic, too; the framework conditions change constantly, so the limits must be regularly adjusted to current needs. To identify bottlenecks early, a robust monitoring system should be in place. In Kubernetes, for example, the minimum resources guaranteed to a workload are defined by its requests, while the upper bound of available resources is defined by its limits. Tools like IBM’s Kubecost can largely automate cost monitoring in Kubernetes clusters.
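The requests/limits mechanism can be sketched with a single kubectl invocation. This is a minimal example, assuming a deployment named "myapp" exists; the resource values are illustrative, not recommendations.

```shell
# Hedged sketch: set resource requests (guaranteed minimum) and limits (hard
# ceiling) on a Kubernetes deployment. "myapp" is an assumed deployment name.
set_app_resources() {
  kubectl set resources deployment myapp \
    --requests=cpu=250m,memory=256Mi \
    --limits=cpu=1,memory=1Gi
}
# A container exceeding its memory limit is killed (OOMKilled); exceeding the
# CPU limit only throttles it.
```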

For cloud development environments, you should also keep a close eye on your own development and DevOps team. If an NPM Docker container of over 2 GB is created on the fly every time for a simple JavaScript Angular app, this strategy should definitely be questioned. Even if the cloud can allocate seemingly infinite resources dynamically, that doesn’t mean that this happens for free.

Of course, the issue of security is also an important factor. You can certainly trust the cloud operator when he says that everything is encrypted and that access to customer data and business secrets is not possible. One can also assume that the information processed in most ventures rarely contains anything interesting, let alone explosive, that could be of interest to large cloud operators. If you still want to be on the safe side, you should write off the idea of serverless completely and consider running your own cloud. Thanks to modern and free software, this is now easier than expected.

I have learned from personal experience that, given the complexity of modern web applications, efficient monitoring with Grafana and Prometheus, or other solutions such as the ELK Stack or Splunk, is essential. But some DevOps teams have difficulties with data collection and proper evaluation. IT decision-makers in particular are called upon to gain a technical overview so as not to fall for the well-sounding marketing promises of cloud and serverless.


The Future of Build Management

It is not just high-level languages, which must convert source code into machine code to make it executable, that require build tools. Such tools now also exist for modern scripting languages like Python, Ruby, and PHP, as their scope of responsibility continues to expand. Looking back at the beginnings of this tool category, one inevitably encounters make, the first official representative of what we now call a build tool. Make’s main task was to generate machine code and package the files into a library or executable. Build tools can therefore be considered automation tools, so it is only logical that they also take over many other recurring tasks in a developer’s daily work. For example, one of the most important innovations responsible for Maven’s success was the management of dependencies on other program libraries.

Another class of automation tools that has almost disappeared is the installer. Products like Inno Setup and Wise Installer were used to automate the installation process for desktop applications. These installation routines are a special form of deployment. The deployment process, in turn, depends on various factors. First and foremost, the operating system used is, of course, a crucial criterion. But the type of application also has a significant influence. Is it, for example, a web application that requires a defined runtime environment (server)? We can already see here that many of the questions being asked now fall under the umbrella of DevOps.

As a developer, it is no longer enough to simply write program code and implement functions. Anyone wanting to build a web application must first get the corresponding server running on which the application will execute. Fortunately, there are now many solutions that significantly simplify the provisioning of a working runtime. But especially for beginners, it is not always easy to grasp the whole topic. I still remember questions in relevant forums asking where to download Java Enterprise, only for the askers to discover that it comes included with an application server.

Where automation solutions were lacking in the early 2000s, the challenge today is choosing the right tool. There’s an analogy here from the Java universe. When the Gradle build tool appeared on the market, many projects migrated from Maven to Gradle. The argument was that it offered greater flexibility. Often, the ability to define orchestrated builds was needed—that is, the sequence in which subprojects are created. Instead of acknowledging that this requirement represented an architectural shortcoming and addressing it, complex and difficult-to-manage build logic was built in Gradle. This, in turn, made customizations difficult to implement, and many projects were migrated back to Maven.

In DevOps automation, so-called pipelines have become established. Pipelines can also be understood as processes, and processes in turn can be standardized. The best example of a standardized process is the build lifecycle defined in Maven, also known as the default lifecycle. This process defines 23 sequential phases, which, broadly speaking, perform the following tasks:

  • Resolving and deploying dependencies
  • Compiling the source code
  • Compiling and running unit tests
  • Packaging the files into a library or application
  • Deploying the artifact locally for use in other local development projects
  • Running integration tests
  • Deploying the artifacts to a remote repository server.
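On the command line, this lifecycle is driven by naming a phase; Maven then executes every phase up to and including the one named. The commands below are standard Maven invocations, collected in a function purely for illustration:

```shell
# Each invocation runs the default lifecycle from the start up to the named phase:
lifecycle_examples() {
  mvn test      # resolve dependencies, compile sources and tests, run unit tests
  mvn package   # ...and additionally bundle the result into a JAR/WAR
  mvn install   # ...and place the artifact in the local repository
  mvn deploy    # ...and upload it to a remote repository server
}
```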

This process has proven highly effective in countless Java projects over the years. However, if you run this process as a pipeline on a CI server like Jenkins, you won’t see much. The individual steps of the build lifecycle are interdependent and cannot be triggered individually. It’s only possible to exit the lifecycle prematurely. For example, after packaging, you can skip the subsequent steps of local deployment and running the integration tests.

A weakness of the build process described here becomes apparent when creating web applications. Web frontends usually contain CSS and JavaScript code, which should also be optimized automatically. To convert variables defined in SCSS into plain CSS, a SASS preprocessor must be run. Furthermore, it is very useful to compress CSS and JavaScript files as much as possible; this minification (often combined with obfuscation) improves the loading times of web applications. There are countless libraries for CSS and JavaScript that can be managed with the NPM tool, and NPM in turn provides development tools like Grunt that enable CSS processing and optimization.
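As a minimal sketch of such a frontend optimization step, assuming hypothetical file names and using common NPM command-line tools (sass, csso, terser) fetched on demand via npx:

```shell
# Hedged sketch: compile SCSS to CSS, then compress CSS and JavaScript.
# The file names are hypothetical placeholders.
optimize_frontend() {
  npx sass styles.scss styles.css                       # run the SASS preprocessor
  npx csso styles.css --output styles.min.css           # compress the CSS
  npx terser app.js --compress --mangle -o app.min.js   # minify the JavaScript
}
```

In practice, steps like these are wired into an NPM script or a Grunt/Gulp task so the whole frontend pipeline runs with a single command.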

We can see how complex the build process of modern applications can become. Compilation is only a small part of it. An important feature of modern build tools is the optimization of the build process. An established solution for this is creating incremental builds. This is a form of caching where only changed files are compiled or processed.

Jenkins Pipelines

But what needs to be done during a release? This process is only needed once an implementation phase is complete, to prepare the artifact for distribution. While it’s possible to include all the steps involved in a release in the build process, this would lead to longer build times. Longer local build times disrupt the developer’s workflow, making it more efficient to define a separate process for this.

An important condition for a release is that all used libraries must also be in their final release versions. If this isn’t the case, it cannot be guaranteed that subsequent releases of this version are identical. Furthermore, all test cases must run correctly, and a failure will abort the process. Additionally, a corresponding revision tag should be set in the source control repository. The finished artifacts must be signed, and API documentation must be created. Of course, the rules described here are just a small selection, and some of the tasks can even be parallelized. By using sophisticated caching, creating a release can be accomplished quickly, even for large monoliths.

For Maven, for example, no complete release process comparable to the build lifecycle has been defined. Instead, the community has developed a special plugin that semi-automates the simple tasks that arise during a release.
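The plugin in question is the Maven Release Plugin, and its typical two-step invocation covers most of the release conditions described above:

```shell
# The Maven Release Plugin semi-automates a release in two steps:
release_sketch() {
  mvn release:prepare   # checks for SNAPSHOT dependencies, runs the tests,
                        # bumps the version and creates the SCM tag
  mvn release:perform   # checks out the tag, builds and deploys the artifacts
                        # (signing them if the project is configured to do so)
}
```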

If we take a closer look at the topic of documentation and reporting, we find ample opportunities to describe a complete process. Creating API documentation would be just one minor aspect. Far more compelling about standardized reporting are the various code inspections, some of which can even be performed in parallel.

Of course, deployment is also essential. Due to the diversity of potential target environments, a different strategy is appropriate here. One possible approach would be broad support for configuration tools like Ansible, Chef, and Puppet. Virtualization technologies such as Docker and LXC containers are also standard in the age of cloud computing. The main task of deployment would then be provisioning the target environment and deploying the artifacts from a repository server. A wealth of different deployment templates would significantly simplify this process.

If we consistently extrapolate from these assumptions, we conclude that there can be different types of projects. These would be classic development projects, from which artifacts for libraries and applications are created; test projects, which in turn contain the created artifacts as dependencies; and, of course, deployment projects for providing the infrastructure. The area of ​​automated deployment is also reflected in the concepts of Infrastructure as Code and GitOps, which can be taken up and further developed here.


Spring Cleaning for Docker

Anyone interested in this somewhat specialized article doesn’t need an explanation of what Docker is and what this virtualization tool is used for. Therefore, this article is primarily aimed at system administrators, DevOps engineers, and cloud developers. For those who aren’t yet completely familiar with the technology, I recommend our Docker course: From Zero to Hero.

In a scenario where we regularly create new Docker images and instantiate various containers, our hard drive is put under considerable strain. Depending on their complexity, images can easily reach several hundred megabytes to gigabytes in size. To prevent creating new images from feeling like downloading a three-minute MP3 with a 56k modem, Docker uses a build cache. However, if there’s an error in the Dockerfile, this build cache can become quite bothersome. Therefore, it’s a good idea to clear the build cache regularly. Old container instances that are no longer in use can also lead to strange errors. So, how do you keep your Docker environment clean?

While docker rm <container-name> and docker rmi <image-id> will certainly get you quite far, in build environments like Jenkins or in server clusters this strategy can become a time-consuming and tedious task. But first, let’s get an overview of the overall situation. The command docker system df will help us with this.

root:/home# docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          15        9         5.07GB    2.626GB (51%)
Containers      9         7         11.05MB   5.683MB (51%)
Local Volumes   226       7         6.258GB   6.129GB (97%)
Build Cache     0         0         0B        0B

Before I delve into the details, one important note: the commands presented are very efficient and will irrevocably delete the corresponding data. Therefore, try these commands in a test environment before using them on production systems. Furthermore, I have found it helpful to keep the commands used to instantiate containers under version control in a text file.

The most obvious step in a Docker system cleanup is deleting unused containers. Specifically, this means that the delete command permanently removes all container instances that are not running (i.e., not active). If you want to make a clean sweep on a Jenkins build node before a deployment, you can first terminate all containers running on the machine with a single command.

Abonnement / Subscription

[English] This content is only available to subscribers.

[Deutsch] Diese Inhalte sind nur für Abonnenten verfügbar.

The -f parameter suppresses the confirmation prompt, making it ideal for automated scripts. Deleting containers frees up relatively little disk space. The main resource drain comes from downloaded images, which can also be removed with a single command. However, before images can be deleted, it must first be ensured that they are not in use by any containers (even inactive ones). Removing unused containers offers another practical advantage: it releases ports blocked by containers. A port in a host environment can only be bound to a container once, which can quickly lead to error messages. Therefore, we extend our script to include the option to delete all Docker images not currently used by containers.
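As a hedged sketch of the steps just described, using standard Docker commands (the exact commands in the subscriber sections may differ in detail); the function is only defined here, not executed:

```shell
# Hedged sketch: stop everything, then remove stopped containers and unused images.
docker_cleanup() {
  docker ps -q | xargs -r docker stop   # stop every running container first
  docker container prune -f             # remove all stopped container instances
  docker image prune -a -f              # remove all images unused by any container
}
```

The order matters: images can only be pruned once no container, not even a stopped one, references them anymore.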

Abonnement / Subscription

[English] This content is only available to subscribers.

[Deutsch] Diese Inhalte sind nur für Abonnenten verfügbar.

Another consequence of our efforts concerns Docker layers: for performance reasons, especially in CI environments, you should avoid rebuilding them unnecessarily. Docker volumes, on the other hand, are less problematic. When you remove volumes, only the references inside Docker are deleted; the folders and files on the host that were mounted into containers remain unaffected. The -a parameter extends the prune to all unused volumes, including named ones.

docker volume prune -a -f

Another area affected by our cleanup efforts is the build cache. Especially if you’re experimenting with creating new Dockerfiles, it can be very useful to manually clear the cache from time to time. This prevents incorrectly created layers from persisting in the builds and causing unusual errors later in the instantiated container. The corresponding command is:

docker buildx prune -f

The most radical option is to release all unused resources. There is also an explicit shell command for this.

docker system prune -a --volumes -f

We can, of course, also use the commands just presented in CI build environments like Jenkins or GitLab CI. However, this might not necessarily lead to the desired result. A proven approach for Continuous Integration/Continuous Deployment is to set up your own Docker registry to which you deploy your self-built images. This provides a good backup and caching system for the Docker images in use. Once correctly created, images can be conveniently distributed to different server instances via the local network without having to be constantly rebuilt locally. It has also proven useful to dedicate a build node specifically optimized for Docker images/containers, so that the created images can be thoroughly tested before use. Even on cloud instances like Azure and AWS, you should prioritize good performance and resource efficiency: costs can quickly escalate and seriously disrupt a stable project.
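A self-hosted registry can be sketched with the official registry image. The registry address localhost:5000 and the image name "myapp" are assumptions for illustration; the function is only defined, not executed:

```shell
# Hedged sketch: run a self-hosted registry and push a locally built image to it.
registry_sketch() {
  docker run -d -p 5000:5000 --name registry registry:2   # start the registry
  docker tag myapp:latest localhost:5000/myapp:latest     # retag for the registry
  docker push localhost:5000/myapp:latest                 # upload the image
}
# Other hosts in the network can then pull localhost:5000/myapp via the
# registry host's address (plain-HTTP registries must be whitelisted as
# "insecure-registries" in the Docker daemon configuration).
```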

In this article, we have seen that in-depth knowledge of the tools used offers several opportunities for cost savings. The motto “We do it because we can” is particularly unhelpful in a commercial environment and can quickly degenerate into an expensive waste of resources.


Cookbook: Maven Source Code Samples

Our Git repository contains an extensive collection of various code examples for Apache Maven projects. Everything is clearly organized by topic.

Back to table of contents: Apache Maven Master Class

  1. Token Replacement
  2. Compiler Warnings
  3. Executable JAR Files
  4. Enforcements
  5. Unit & Integration Testing
  6. Multi Module Project (JAR / WAR)
  7. BOM – Bill Of Materials (Dependency Management)
  8. Running ANT Tasks
  9. License Header – Plugin
  10. OWASP
  11. Profiles
  12. Maven Wrapper
  13. Shade Uber JAR (Plugin)
  14. Java API Documentation (JavaDoc)
  15. Java Sources & Test Case packaging into JARs
  16. Docker
  17. Assemblies
  18. Maven Reporting (site)
  19. Flatten a POM
  20. GPG Signer

Apache Maven Master Class

Apache Maven (Maven for short) was first released on March 30, 2002, as an Apache Top-Level Project under the free Apache 2.0 License. This license also allows free use by companies in a commercial environment without paying license fees.

The word Maven comes from Yiddish and means something like “collector of knowledge.”

Maven is a pure command-line program developed in the Java programming language. It belongs to the category of build tools and is primarily used in Java software development projects. In the official documentation, Maven describes itself as a project management tool, as its functions extend far beyond creating (compiling) binary executable artifacts from source code. Maven can be used to generate quality analyses of program code and API documentation, to name just a few of its diverse applications.

Benefits


  Online Course (yearly subscription / 365 days)

Maven Master Class
3.47 Milli-Bitcoin (mBTC)

Target groups

This online course is suitable for both beginners with no prior knowledge and experienced experts. Each lesson is self-contained and can be individually selected. Extensive supplementary material explains concepts and is supported by numerous references. This allows you to use the Apache Maven Master Class course as a reference. New content is continually being added to the course. If you choose to become an Apache Maven Master Class member, you will also have full access to exclusive content.

Developer

  • Maven Basics
  • Maven on the Command Line
  • IDE Integration
  • Archetypes: Creating Project Structures
  • Test Integration (TDD & BDD) with Maven
  • Test Containers with Maven
  • Multi-Module Projects for Microservices

Build Manager / DevOps

  • Release Management with Maven
  • Deploy to Maven Central
  • Sonatype Nexus Repository Manager
  • Maven Docker Container
  • Creating Docker Images with Maven
  • Encrypted Passwords
  • Process & Build Optimization

Quality Manager

  • Maven Site – The Reporting Engine
  • Determine and evaluate test coverage
  • Static code analysis
  • Review coding style specifications

In-Person Live Training – Build Management with Apache Maven

Index & Abbreviations

[A]

[B]

[C]

[D]

[E]

[F]

[G]

[H]

[I]

[J]

[K]

[L]

[M]

[N]

[O]

[P]

[Q]

[R]

[S]

[T]

[U]

[V]

[W]

[X]

[Y]

[Z]

Return to the table of contents: Apache Maven Master Class
