Apache Maven (Maven for short) was first released on March 30, 2002, as an Apache Top-Level Project under the free Apache 2.0 License. This license also allows free use by companies in a commercial environment without paying license fees.
The word Maven comes from Yiddish and means something like “collector of knowledge.”
Maven is a pure command-line program developed in the Java programming language. It belongs to the category of build tools and is primarily used in Java software development projects. In the official documentation, Maven describes itself as a project management tool, as its functions extend far beyond creating (compiling) binary executable artifacts from source code. Maven can be used to generate quality analyses of program code and API documentation, to name just a few of its diverse applications.
This online course is suitable for both beginners with no prior knowledge and experienced experts. Each lesson is self-contained and can be individually selected. Extensive supplementary material explains concepts and is supported by numerous references. This allows you to use the Apache Maven Master Class course as a reference. New content is continually being added to the course. If you choose to become an Apache Maven Master Class member, you will also have full access to exclusive content.
Developer
Maven Basics
Maven on the Command Line
IDE Integration
Archetypes: Creating Project Structures
Test Integration (TDD & BDD) with Maven
Test Containers with Maven
Multi-Module Projects for Microservices
Build Manager / DevOps
Release Management with Maven
Deploy to Maven Central
Sonatype Nexus Repository Manager
Maven Docker Container
Creating Docker Images with Maven
Encrypted Passwords
Process & Build Optimization
Quality Manager
Maven Site – The Reporting Engine
Determine and evaluate test coverage
Static code analysis
Review coding style specifications
In-Person Live Training – Build Management with Apache Maven
If you want to use your Git repository for collaborative editing of source code, you need a Git server. The Git server enables multiple developers to collaborate on the same code base. Installing the Git client on a Linux server is a first step towards your own server solution, but it is far from sufficient. To allow multiple people to access a code repository, we need access authorization. After all, the repository will be reachable via the Internet, and we want to use user management to prevent unauthorized people from reading and changing the contents of the repositories.
There are many excellent and convenient solutions for operating a Git server that should be preferred to a native server solution. The administration of a native Git server requires Linux knowledge and is carried out exclusively via the command line. Solutions such as the SCM-Manager have a graphical user interface and come with many useful tools for administering the server. These tools are not available with a native installation.
Why should you install Git as a native server at all? This question is quite easy to answer: it makes sense when the server on which the code repository is to be made available has only a few hardware resources. RAM in particular is always a bit of a problem in this context. This is often the case with rented virtual private servers (VPS) or a small Raspberry Pi. So we can see that running a native Git server can be a reasonable choice.
As a prerequisite, we need a Linux server on which we can install the Git server. This can be a Debian or Ubuntu server. If you use CentOS or another Linux distribution, you must use your distribution’s package manager instead of APT to install the software.
In the first step, we start by updating the packages and installing the Git client.
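A minimal sketch of these first steps on a Debian or Ubuntu system could look like the following; the dedicated git system user in the second step is an assumption based on the setup described in this workshop.

sudo apt-get update                                 # step 1: refresh the package index
sudo apt-get install git                            # step 1: install the Git client
sudo adduser --disabled-password --gecos "" git     # step 2 (assumed): create the git user whose home directory will hold the repositories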
Now, in the third step, we can create our Git repositories in the newly created home directory of the git user. These differ from a local workspace in that they have no source code checked out; they are so-called bare repositories.
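A possible way to create such a bare repository, assuming the repository name project.git that is used later in this workshop:

sudo -u git git init --bare /home/git/project.git   # step 3: create a bare repository in the git user's home directory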
Unfortunately, we are not quite finished with our project yet. In the fourth step, we have to set the user authorization for the created repository. This is done by storing the public key for SSH access on the Git server. To do this, we copy the contents of our public key file into the file /home/git/.ssh/authorized_keys, each key on a separate line. If you later want to deny an existing user access, simply comment out their public key line with a #.
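A sketch of this step, assuming the developer’s public key has already been transferred to the server as id_rsa.pub:

sudo mkdir -p /home/git/.ssh                                    # make sure the .ssh directory exists
cat id_rsa.pub | sudo tee -a /home/git/.ssh/authorized_keys     # step 4: append the public key on its own line
sudo chown -R git:git /home/git/.ssh                            # the git user must own its SSH configuration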
If everything has been done correctly, you can access the repository using the following command line command: git clone ssh://git@<IP>/~/<repo>
Replace <IP> with the actual server IP. For our example, the correct value for <repo> is project.git, the directory we created for the Git repository.
Multiple repositories can be created on the native Git server. It is important to note that all authorized users have read and write access to all repositories created in this way. This can only be restricted by creating multiple users on the operating system of the Linux server that provides our Git repositories, to whom the repositories are then assigned.
We see that a native Git server installation can be implemented quickly, but it is not sufficient for commercial software development. If you like to experiment, you can create a virtual machine and try out this workshop in it.
Ruby has been a well-established programming language for many years and can also be recommended to beginners. Ruby follows the object-oriented paradigm and contains many concepts to support OOP well. In addition, the Ruby on Rails framework makes it very easy to develop complex web applications.
The most difficult hurdle to overcome when getting started with Ruby is the installation of the entire development environment. For this reason, I have written this short tutorial on getting started with Ruby. So let’s start with the installation right away.
My operating system is Debian 12 Linux, and Ruby can be installed very easily with the command sudo apt-get install ruby-full. This procedure can be applied to all Debian-based Linux distributions such as Ubuntu. You can then use ruby -v in Bash to check that the installation succeeded.
If we now follow the tutorial on the Ruby on Rails homepage and want to install the Rails framework via gem install rails, we already encounter the first problem: no Ruby libraries (gems) can be installed due to missing permissions. We could now come up with the idea of installing the libraries as superuser with sudo. Unfortunately, this solution is only temporary and prevents the libraries from being found correctly later in the development environment. It is better to create a folder for the gems in the user’s home directory and make it available via an environment variable.
export GEM_HOME=/home/<user>/.ruby-gems
export PATH=$PATH:/home/<user>/.ruby-gems/bin
The two lines above must be added at the end of the .bashrc file so that the changes remain persistent. It is important that <user> is replaced with the correct user name. The success of this action can be checked via gem environment; the GEM PATHS section of its output should now include the newly created directory.
With this setting, Ruby GEMs can now be installed without difficulty. Let’s try this out right away and install the Ruby on Rails framework, which supports us in the development of web applications: gem install rails. This should now run without error messages and with the command rails -v we can see if we were successful.
In the next step we can now create a new Rails project. Here I use the example from the Ruby on Rails documentation and write in the bash: rails new blog. This creates a corresponding directory called blog with the project files. After we have changed to the directory, we still need to install all dependencies. This is done via: bundle install.
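For reference, the commands of this step in one sequence:

rails new blog     # generate the project skeleton in the directory blog
cd blog            # change into the project directory
bundle install     # install all dependencies declared in the Gemfile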
Here we encounter another problem. The installation cannot be completed because there seems to be a problem with the psych library. The real problem, however, is that the YAML development library is missing at the operating-system level. This can be fixed very quickly by installing the corresponding package.
sudo apt-get install libyaml-dev
The problem with psych in Ruby on Rails has existed for a while and has been solved with the YAML installation so that the bundle install command now also runs successfully. Now we are also able to start the server for the Rails application: bin/rails server.
ed@:~/blog$ bin/rails server
=> Booting Puma
=> Rails 7.1.3.3 application starting in development
=> Run `bin/rails server --help` for more startup options
Puma starting in single mode...
* Puma version: 6.4.2 (ruby 3.1.2-p20) ("The Eagle of Durango")
* Min threads: 5
* Max threads: 5
* Environment: development
* PID: 12316
* Listening on http://127.0.0.1:3000
* Listening on http://[::1]:3000
Use Ctrl-C to stop
If we now call up the URL http://127.0.0.1:3000 in the web browser, we see our Rails web application.
With these steps, we have now created a functioning Ruby environment on our system. Now it’s time to decide on a suitable development environment. If you only occasionally adapt a few scripts, VIM and Sublime Text are sufficient as editors. For complex software projects, a full IDE should be used, as this simplifies the work considerably. The best recommendation is the paid IDE RubyMine from JetBrains. If you support Ruby open source projects as a developer, you can apply for a free license.
A freely available Ruby IDE is VSCode from Microsoft. However, a few plugins have to be integrated first, and VSCode is not very intuitive for my taste. The Ruby integrations for the classic Java IDEs Eclipse and NetBeans are quite outdated and can only be made to work with a great deal of effort.
With this we have already discussed all the important points that are necessary to set up a functioning Ruby environment on your own system. I hope that this little workshop has significantly lowered the entry barrier to learning Ruby. If you like this article, please like it and recommend it to your friends.
README files have a long tradition in software projects. These originally plain text files contained license information and instructions on how to compile the corresponding artifact from the source code, or important notes on installing the program. There is no real standard for how to structure such a README file.
Since GitHub (acquired by Microsoft in 2018) started its triumphant march as a free code hosting platform for open source projects, it has offered from early on the feature of displaying the README file as the start page of a repository. All that is required is to create a simple text file called README.md in the root directory of the repository.
To structure README files more clearly, a simple formatting option was needed. Markdown notation was quickly chosen, because it is easy to use and can be rendered quite efficiently. This makes the overview pages easier for people to read, and they can serve as project documentation.
It is possible to link several such markdown files together as project documentation. So you get a kind of mini WIKI that is included in the project and also versioned via Git.
The whole thing became so successful that self-hosting solutions such as GitLab or the commercial BitBucket have also adopted this function.
Now, however, the question arises as to what content is best written in such a README file so that it also represents real added value for outsiders. The following points have become established over the course of time:
Short description of the project
Conditions under which the source code may be used (license)
How to use the project (e.g. instructions for compiling or how to include the library in own projects)
Who are the authors of the project and how to contact them
What to do if you want to support the project
Meanwhile, so-called badges (stickers) are very popular. These often reference external services such as the free Continuous Integration Server TravisCI. These help to assess the quality of the project.
On GitHub there are also various templates for README files. However, you also have to look a little at the actual circumstances of your own project and judge which information is really relevant for users. But such templates help a lot to find out if you might have missed a point.
The fact that pretty much every manufacturer of source control management server solutions has integrated the function of displaying the README.md file as the project start page of a code repository means that a README.md is also a useful thing for commercial projects.
Even if the Markdown syntax is easy to learn, it can be more comfortable to use a dedicated Markdown editor for extensive editing of such files. You should make sure that the preview is rendered correctly and that not only simple syntax highlighting is offered.
In any case, it is worth taking a look at https://www.readme-templates.com for further inspiration.
In the previous part of the article treasure chest, I described how the database connection for the TP-CORE library is established. I also gave an insight into the internal structure of the ConfigurationDO. Now, in the second part, I explain the ConfigurationDAO and its corresponding service. With all this knowledge you will be able to include the application configuration feature of TP-CORE in your own project to build your own configuration registry.
Let’s briefly summarize the architectural design of the TP-CORE library and where the fragments of the features are located. TP-CORE is organized as a layered architecture, as shown in the graphic below.
As you can see, there are three relevant packages (layers) we have to pay attention to. The business layer resides, like all other layers, in a package of the same name. The whole API of TP-CORE is defined by interfaces and stored in the business layer. The implementations of the defined interfaces are placed in the application layer. Domain objects are simple data classes and are placed in the domain layer. Another important pattern heavily used in the TP-CORE library is the Data Access Object (DAO).
Nowadays microservices and RESTful applications are state of the art, yet the services defined in TP-CORE are deliberately not REST services. This design decision is based on the idea that TP-CORE is a dependency and not a standalone service. Maybe in the future, after I have received more feedback about how and where this library is used, I will rethink the current concept. For now we treat TP-CORE as what it is: a library. For usage in your project this implies that you can replace, overwrite, extend or wrap the basic implementation of the ConfigurationDAO to fit your specific needs.
To keep the portability of changing the DBMS, Hibernate (HBM) is used as the JPA implementation and O/R mapper. The Spring configuration for Hibernate uses the EntityManager instead of the Session to send requests to the DBMS. Since version 5, Hibernate uses the JPA 2 standard to formulate queries.
As I already mentioned, the application configuration feature of TP-CORE is implemented as a DAO. The domain object and the database connection were the topic of the first part of this article. Now I discuss how to give access to the domain object through the ConfigurationDAO and its implementation ConfigurationHbmDAO. In general, the domain object ConfigurationDO or a list of domain objects is the return value of the DAO. Actions like create are void and simply throw an exception in case of a failure. For better style, the return type is defined as Boolean, which also simplifies writing unit tests.
Sometimes it can be necessary to overwrite a basic implementation. A common scenario is a protected delete. For example, a requirement may exist that a special entry is protected against unwanted deletion. The easiest solution is to overwrite delete with a statement that refuses every request to delete a domain object with a specific UUID. Only adding a new method like protectedDelete() is not a good idea, because a developer could accidentally use the default delete method, and the protected objects would no longer be protected. To avoid this problem you should prefer the possibility of overwriting the GenericDAO methods.
As the default query to fetch an object, the identifier defined as primary key (PK) is used. A simple expression fetching an object is written in the find method of the GenericHbmDAO. In the specialization ConfigurationHbmDAO, more complex queries are formulated. To keep a good design it is important to avoid any native SQL. Listing 1 shows such fetch operations.
These few lines of source code are quite easy to read. The query we formulated for getAllConfigurationSetEntries() returns a list of ConfigurationDO objects from the same module with an equal version of a configSet. A module is, for example, the TP-CORE library itself, an ACL, and so on. The configSet is a namespace that describes configuration entries that belong together like a bundle and are used in a service such as e-mail. The version is related to the service: if changes are needed in the future, the version number has to be increased. Let’s take a closer look at how the e-mail example works in particular.
We assume that an e-mail service in the module TP-CORE contains the configuration entries mailer.host, mailer.port, user and password. First we define module=core, configSet=email and version=1. If we now call getAllConfigurationSetEntries(core, 1, email); the result is a list of four domain objects with the entries for mailer.host, mailer.port, user and password. If a newer version of the e-mail service needs more configuration entries, a new version is defined. It is very important that the already existing entries for the mail service are duplicated in the database with the new version number. Of course, as a consequence, the registry table will grow continuously, but with a stable and well-planned development process those changes do not occur that often. The TP-CORE library contains a simple SMTP mailer which uses the ConfigurationDAO. If you wish to investigate its usage in the MailClient real-world example, you can have a look at the official documentation in the TP-CORE GitHub wiki.
The benefit of duplicating all existing entries of a service when the service configuration changes is that a history is created. In the case of updating a whole application, it is now possible to compare the entries of a service by version to decide whether changes exist that affect the application. In practical usage this feature is very helpful, but it does not prevent updates from changing our actual configuration by accident. To solve this problem the domain object has two different entries for the configuration value: default and configuration.
The application configuration follows the convention-over-configuration paradigm. By definition, each service needs a fixed default value for every existing configuration entry. Those default values cannot be changed themselves, but when the value in the ConfigurationDO is set, the defaultValue entry is ignored. If an application has to be updated, it is also necessary to support a procedure that captures all custom changes of the updated configuration set and restores them in the new service version. The basic functionality (API) for application configuration in TP-CORE release 3.0 is:
The following listing gives you an idea of how an implementation in your own service could look. This snippet is taken from the JavaMailClient and shows how the internal processing of the fetched ConfigurationDO objects is managed.
private void processConfiguration() {
    List<ConfigurationDO> configurationEntries =
            configurationDAO.getAllConfigurationSetEntries("core", 1, "email");
    for (ConfigurationDO entry : configurationEntries) {
        String value;
        if (StringUtils.isEmpty(entry.getValue())) {
            value = entry.getDefaultValue();
        } else {
            value = entry.getValue();
        }
        if (entry.getKey().equals(cryptoTools.calculateHash("mailer.host", HashAlgorithm.SHA256))) {
            configuration.replace("mailer.host", value);
        } else if (entry.getKey().equals(cryptoTools.calculateHash("mailer.port", HashAlgorithm.SHA256))) {
            configuration.replace("mailer.port", value);
        } else if (entry.getKey().equals(cryptoTools.calculateHash("user", HashAlgorithm.SHA256))) {
            configuration.replace("mailer.user", value);
        } else if (entry.getKey().equals(cryptoTools.calculateHash("password", HashAlgorithm.SHA256))) {
            configuration.replace("mailer.password", value);
        }
    }
}
Another piece of the application configuration functionality is located in the service layer. The ConfigurationService operates from the module perspective. The current methods resetModuleToDefault() and filterMandatoryFieldsOfConfigSet() already give a good impression of what that means.
If you take a look at the MailClientService, you will notice the method updateDatabaseConfiguration(). You may wonder why this method is not part of the ConfigurationService. Of course, that intention is not wrong in general, but in this specific implementation the update functionality is specialized to the MailClient configuration. The basic idea of the configuration layer is to combine several DAO objects into a composed functionality. The orchestration layer is the correct place to combine services into a complex process.
Summary
The implementation of the application configuration inside the small library TP-CORE allows you to define an application-wide configuration registry. This also works when the application has a distributed architecture like microservices. The usage is quite simple and can easily be extended to your own needs. The real-world usage in the MailClient and FeatureToggle implementations of TP-CORE proves that the idea works well.
I hope this article was helpful and that you would like to use TP-CORE in your own project. Feel free to do so; the Apache 2 license places no restriction on commercial usage. If you have some suggestions, feel free to leave a comment or give a thumbs up.
Over the years, different techniques for storing configuration settings for applications have become established. We can choose between a database, property files, XML or YAML, just to give a few impressions of the options. But before we jump into all the technical details of a possible implementation, we need to become a bit familiar with some requirements.
I have touched this topic many times in my professional life. Problems occur periodically after an application has been updated. My peak of frustration I reached with Windows 10: after every major update, many settings for security and privacy switched back to default, apps I had already uninstalled messed up my system again, and so on. This was reason enough for me to choose an alternative and stop suffering. Since I switched to Ubuntu Mate I’m fine, because those problems have disappeared.
Several times I also had to maintain legacy projects and needed to migrate data to newer versions, a difficult and complex procedure. Because of those activities I asked myself how this problem could be handled in a proper way. My answer can be found in the open source project TP-CORE. The application configuration feature is my way of avoiding the effect of overwriting important configuration entries during an update procedure.
TP-CORE is a freely available library with some useful functionality written in Java. The source code is available on GitHub and the binaries are published on Maven Central. To use TP-CORE in your project you can add it as a dependency.
The application configuration feature is implemented as ConfigurationDAO and uses a database. My decision for a database approach was driven by the requirement of having a history. Of course, this choice also has some limitations: obviously, the configuration for the database connection itself needs to be stored somewhere else.
TP-CORE uses Spring and Hibernate (JPA) to support several DBMS like PostgreSQL, Oracle or MariaDB. My personal preference is PostgreSQL, so as a next step we can discuss how to set up our database environment. The easiest way to run a PostgreSQL server is to use the official Docker image. If you need a brief overview of how to deal with Docker and PostgreSQL, you may like to check my article: Learn to walk with Docker and PostgreSQL. Below is a short listing of how the PostgreSQL container could be instantiated in Docker.
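A sketch of such a container start; the container name and password are assumptions and should be adapted to your environment.

docker run -d --name postgres \
  --restart always \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=s3cr3t \
  postgres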
You may need to make some changes to the listing above to fit it to your system. After your DBMS is running well, we have to create the schema and the user with a proper password. In our case the schema is called together, the user is also called together and the password will be together too.
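One way to create the user and schema, assuming the container from the sketch above is named postgres:

# create the login role and, inside the default database, a schema owned by it
docker exec -it postgres psql -U postgres -c "CREATE ROLE together LOGIN PASSWORD 'together';"
docker exec -it postgres psql -U postgres -c "CREATE SCHEMA together AUTHORIZATION together;"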
To establish the connection from your application to the PostgreSQL DBMS, we use an XML configuration for the Spring Framework. The GitHub repository of TP-CORE already contains a working configuration file called spring-dao.xml. The Spring configuration includes some other useful features like transactions and a connection pool. All necessary dependencies are already included. You just need to replace the connection variables with the correct entries for your environment.
In the next step you need to tell your application how to instantiate the Spring context using the configuration file spring-dao.xml. Depending on your application type you have two possibilities. For a standard Java application, you can create the Spring context from spring-dao.xml at the beginning of your main method.
The creation of the database tables is managed by Hibernate during the application start. When you explore the GitHub repository of the TP-CORE project, you will find the file database.properties in the directory /src/main/filters. This file contains further connection strings for other database systems. In case you wish to compile TP-CORE yourself, you can modify database.properties to your preferred configuration. The fully processed configuration file with all token replacements can be found in the target directory.
In the next paragraph we will have a closer look at the domain object ConfigurationDO.
What most of the columns you see in the image above are used for is quite clear. As a first point we need to clarify what makes an entry unique. Of course, the UUID as primary key fits this requirement. In our case the UUID is the primary key and is auto-generated by the application when a new row is created. But always using a non-human-readable id as key to grab a value in an application is heavily error-prone and uncomfortable. For this use case I decided on a combination of configuration key, module name and service version to define a unique key entry.
To understand the benefit of this construction I will give a simple example. Imagine your application has functionality for sending e-mails. This functionality requires several configuration entries like host, user and password to connect to an SMTP server. To group all those entries together in one bundle we have the CONFIG_SET. If your application has a modular architecture like microservices, it can also be helpful to organize the configuration entries by module or service name. For this reason the MODULE_NAME was also included in this data structure. Both entries can be used like namespaces to fetch relevant information more efficiently.
Now it is possible that changes to the functionality create new configuration entries or make some entries obsolete. To enable a history and allow backward compatibility, the data structure was extended by SERVICE_VERSION.
Every entry contains a mandatory default value and an optional configuration value. The application can overwrite the default value by filling the configuration value field. This allows updates without affecting the custom configuration, as long as the developer respects the rule to not fill in configuration values and always uses the default entry. This definition is the convention-over-configuration paradigm.
The flags deprecated and mandatory for a configuration key are very explicit and descriptive. The column comment doesn’t need any further explanation either.
If one or more configuration entries for a service change, the whole configuration set has to be duplicated with the new service version. As an example, you can have a look at the MailClient functionality of TP-CORE to see how the application configuration is used.
A very important detail is that the configuration key is stored in the DBMS as an SHA-512 hash. This is a simple protection against direct manipulation of the configuration in the DBMS, outside of the application. For sure this is not strong security, but at least it makes things a bit more uncomfortable. In the application code a human-readable key name is used; the mapping is automatic, and we don’t need to worry about it.
Summary
In this first part I explained why I needed my own implementation of an application registry to store configuration settings. The solution I prefer uses a database, and I showed how to enable the database configuration in your own project. We also took a brief look at the data structure and how the domain object works.
As an IT service provider, we often have to support our clients in reinstalling old Windows systems. The most frequent challenge we face in this activity is backing up old files and restoring them on the new system. Not only private persons but also companies use the e-mail client Thunderbird. So we decided to publish this short guide on how your Thunderbird profile can be backed up and restored. To prevent data loss, you should do backups regularly in case your hardware or operating system crashes completely.
Backup
Connect a pen drive or hard disk (USB medium) to your computer.
Create a directory of your choice on the USB medium to back up your profile. Choose a name that lets you recognize the content later (e.g. 2022-01-19_Thunderbird-profile).
Keep the “Explorer” window open and make sure that the directory is active.
Open Thunderbird on the computer you want to back up.
To find the old profile, click on the “three bars” in the top right-hand corner.
In the next window that opens, click on “Open Folder”.
A new “Explorer” window pops up and shows you all the files in your profile.
Mark all files: click on the first file, then hold down the <Shift> key on your keyboard and press the “Arrow down” key until the grey scroll bar in the window reaches the bottom. Once all files are selected (marked in blue), click with the right mouse button on any file and select the menu item “Copy”.
Go back to your other “Explorer window“, click the right mouse button and then “Paste“.
Once the copying process is complete, you can close your Thunderbird.
Restore
Connect your USB medium to the computer (destination) to which you want to transfer your Thunderbird profile.
Open the “Explorer” and create the following directories: Data ➡️ Thunderbird ➡️ Post-Office xxx (C:\Data\Thunderbird\Post-Office xxx\ where “xxx” has to be replaced with your Thunderbird profile name).
Once your directories are created, you can copy your profile data from the USB medium into the newly created directory “Post-Office xxx”.
After completing the copying process, you still need to set up your new profile directory in Thunderbird.
Press the keys <Windows Key>+<R> on your keyboard. The “Run” dialogue opens. Enter the following command: thunderbird -p as you can see in the screenshot and press “OK“.
In the newly opened Popup “Thunderbird – Choose User Profile“, click on “Create Profile…” to start the wizard.
In the 1st window of the “Profile Wizard – Welcome” click on “Next“.
In the 2nd window of the “Profile Wizard – Finish”, enter the “Profile Name” (Post-Office xxx) as shown under “1” in the screenshot. Under “2”, select the profile path by clicking on “Choose Folder” (C:\Data\Thunderbird\Post-Office xxx).
To finish the process, you just have to press the “Finish” button.
You can now start Thunderbird normally from the start bar – all your emails & settings are restored. If you have questions or suggestions you can write us an e-mail or leave a comment. If you like this guide feel free to share this article
Maybe, like me, you have bought a Raspberry Pi 4 with 4 GB RAM and are thinking about what nice things you could do with it. From the beginning I had the idea to use it as a lightweight home server. Of course you could easily use a mini computer with more power, but obviously with more energy consumption too, which is not a nice idea for a device that runs 24/7. As long as you don’t plan to mine your own bitcoins or host a highly frequented shop system, a Pi should be sufficient.
I wanted to increase the security of my network. For this reason I found the application AdGuard, which blocks a lot of spyware from internet services on every device connected to the network where AdGuard is running. Sounds great, and it is not that difficult to do. Let me share my experience with you.
First, let’s have a look at the overall system and prerequisites. Behind the router from my Internet Service Provider I connected my own network router, an Archer C50, directly by wire. My Raspberry Pi 4 with 4 GB RAM runs Ubuntu Linux Server x64 (ARM architecture) as the operating system. The memory card is a 64 GB SanDisk Ultra. In case you need a lot of storage, you can connect an external SSD or HDD with a USB 3 to SATA adapter. Be aware that you use storage made for permanent usage; Western Digital, for example, has a label called NAS, which is made for this purpose. If you use standard desktop drives, they could break quite soon. The Pi is connected to the router directly by LAN cable.
The first step is to install the Docker service on Ubuntu. This is a simple command: apt-get install docker. If you want to get rid of sudo, you need to add your user to the docker group and restart the Docker service. If you want to get a bit more familiar with Docker, you can check my video Docker basics in less than 10 minutes.
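A sketch of the installation, assuming a current Ubuntu server where the engine package is named docker.io:

sudo apt-get update
sudo apt-get install docker.io        # install the Docker engine
sudo usermod -aG docker $USER         # optional: allow docker without sudo (log out and back in afterwards)
sudo systemctl restart docker         # restart the Docker service

The macvlan network discussed in the next paragraphs could then be created like this; the subnet, ip-range and gateway values are the ones derived below from my router’s DHCP settings and have to be replaced with yours:

docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --ip-range=192.168.0.4/32 \
  --gateway=192.168.0.1 \
  -o parent=eth0 \
  lan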
Before you just copy and paste the listing above, you need to change the IP addresses to the ones your network is using. For the whole installation, this is the most difficult part. First, the network type we create is macvlan, bound to the network card eth0; eth0 is the standard for the Pi 4. The name of the network we are going to create is lan. To get the correct values for subnet, ip-range and gateway you need to log into your router administration.
To understand the settings, we need a bit of theory. But don’t worry, it is not much and not that complicated. Usually your router is reachable at an IP address similar to 192.168.0.1. This is a static address, and we want something similar for AdGuard on the Pi. The Pi itself is in my case reachable at 192.168.0.12, but this IP cannot be used for AdGuard. The plan is to make the AdGuard web interface accessible at the IP 192.168.0.2. OK, let’s do it. First we have to open the DHCP settings in our router administration. In the screenshot you can see my configuration. After you have made your changes, don’t forget to reboot the router so they take effect.
I configured the dynamic IP range from 192.168.0.5 to 192.168.0.199. This means the first four addresses before 192.168.0.5 can be used to connect devices with a static IP. Here we also see the entry for our default gateway. With this information we are able to return to our network configuration. The subnet IP is like the gateway, just with the last IP segment changed to a zero. We limited the IP range to 192.168.0.4 because it is one number less than where the dynamic IP range starts. That’s all we need to know to create our network in Docker on the Pi.
Now we need to create, in the home directory of our Pi, the places where AdGuard can store its configuration and data. This can be done with a simple command in the SSH shell.
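For example, with directory names that are just an assumption for this sketch:

mkdir -p ~/adguard/conf ~/adguard/data   # configuration and data storage for AdGuard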
The container we create is called adguard, and we connect this image to our newly created network lan with the IP address 192.168.0.2. Then we have to make the ports available that AdGuard needs to do its job. Finally, we mount the two volumes for the configuration and data directories inside the container. As restart policy we set the container to always; this ensures that the service comes up again after the server or Docker has been rebooted.
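A sketch of the container start, assuming the directories created above and the official adguard/adguardhome image; because the container gets its own address on the macvlan network, its ports (53 for DNS, 3000 for the initial setup, 80 for the web interface) are reachable directly at 192.168.0.2 without extra port mappings.

docker run -d --name adguard \
  --restart always \
  --network lan --ip 192.168.0.2 \
  -v ~/adguard/data:/opt/adguardhome/work \
  -v ~/adguard/conf:/opt/adguardhome/conf \
  adguard/adguardhome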
After executing the docker run command, you can reach the AdGuard setup page in your browser at http://192.168.0.2:3000. Here you can do the initial setup, create a login user and so on. After the first setup you can reach the web interface at http://192.168.0.2.
The IP address 192.168.0.2 now needs to be entered into the DNS server field of the DHCP settings. Save the entries and restart your router so all changes take effect. When the router is up, open any web page from the internet in your browser to check that everything is working fine. After this you can log into the AdGuard web console and see whether data appears on the dashboard. If it does, you are done and your home or office network is protected.
If you think this article was helpful and you like it, you can support my work by sharing this post or leaving a like. If you have some suggestions, feel free to drop a comment.
For businesses it is sometimes important to have a central place where employees and clients can interact. NextCloud is a simple and extendable PHP solution with a huge set of features that you can host yourself to keep full control of your data: a classical groupware solution ready for your own cloud.
If you want to install NextCloud on your own server, you first need a working PHP installation with an HTTP server like Apache. A database management system is also mandatory; you can choose between MySQL, MariaDB and PostgreSQL servers. The classical way of installing and configuring all those components takes a lot of time, and the maintenance is very difficult. To overcome all this we use a modern approach with the virtualization tool Docker.
The system setup is as follows: Ubuntu x64 server, PostgreSQL database, pgAdmin DBMS management and NextCloud.
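A rough sketch of this setup with Docker; the official postgres and nextcloud images are used, while the container names, network, password and host directories are assumptions for illustration (pgAdmin can be added the same way with the dpage/pgadmin4 image).

docker network create nextcloud-net
docker run -d --name nextcloud-db --network nextcloud-net \
  -e POSTGRES_DB=nextcloud -e POSTGRES_USER=nextcloud -e POSTGRES_PASSWORD=s3cr3t \
  -v ~/nextcloud/db:/var/lib/postgresql/data \
  postgres
docker run -d --name nextcloud --network nextcloud-net \
  -p 8080:80 \
  -v ~/nextcloud/data:/var/www/html \
  nextcloud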
If you have any questions, feel free to leave a comment. If you need help to install and operate your own NextCloud installation securely, don’t hesitate to contact us via the contact form. If you like the video, leave a thumbs up and share it.
After some years, the virtualization tool Docker has proven its importance for the software industry. Usually when you hear something about virtualization, you might think it is a topic for administrators that will not affect you much as a developer. But wait, you might not be right: having some basic knowledge about Docker as a developer will help you in your daily business.
Step 1: create the container and initialize the database
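The container creation could look like the following sketch; the password is an assumption, and the restart policy is discussed right below.

docker run -d --name pg-dbms \
  --restart no \
  -e POSTGRES_PASSWORD=s3cr3t \
  postgres:11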
If you wish your PostgreSQL to always be up after you restart your system, you should change the restart policy from no to always. After you have created the instance pg-dbms of your PostgreSQL 11 Docker image, you need to check whether it was successful. You can do this with the command:
docker ps -a
Step 2: copy the initialized database directory to a local directory on your host system
The biggest problem with the current container is that all data will be lost when you erase the container. This means we need to find a way to save this data permanently. The easiest way is to copy the data directory from the container to a directory on the host system. The copy command needs two parameters, source and destination. For the source you need to specify the container from which you want to grab the files; in our case the container is named pg-dbms. The destination is a PostgreSQL folder in the home directory of the user ed. If you use Windows instead of Linux, it works the same way; just adapt the directory path and try to avoid whitespace. When the files appear in the defined directory, you are done with this step.
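A sketch of this copy step; /var/lib/postgresql/data is the data directory of the official PostgreSQL image, and /home/ed/postgres is the assumed destination on the host.

docker cp pg-dbms:/var/lib/postgresql/data /home/ed/postgres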
Step 3: stop the current container
docker stop pg-dbms
In case you wish to start a container, just replace the word stop with the word start. We no longer need the container we created to grab the initial files for the PostgreSQL DBMS, so we can erase it; but before we can do that, the running container has to be stopped.
Step 4: start the current container
docker start pg-dbms
After the container is stopped we are able to erase it.
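For example:

docker rm pg-dbms   # remove the stopped container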
Step 5: recreate the container with an external volume
Now we can link the directory with the exported initial database to a newly created PostgreSQL container. That’s all. The big benefit of this activity is that every database we now create in PostgreSQL, together with its data, lives outside the Docker container on our local machine. This allows a much simpler backup and prevents losing information when a container has to be updated.
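A sketch of the recreated container, assuming the files copied in step 2 now live in /home/ed/postgres/data:

docker run -d --name pg-dbms \
  --restart always \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=s3cr3t \
  -v /home/ed/postgres/data:/var/lib/postgresql/data \
  postgres:11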
If you have other images instead of PostgreSQL from which you need to grab files to reuse them, you can use this tutorial too; just adapt the image and the paths you need. The procedure is almost the same. If you would like to learn more about Docker, you can also watch my video Docker Basics in less than 10 minutes. If you like this short tutorial, share it with your friends and colleagues. To stay informed, don’t forget to subscribe to my newsletter.