Date vs. Boolean

When we design data models and their corresponding tables, Boolean sometimes appears as a data type. In general such flags are not really problematic, but there may be a better option for the data design. Let me give you a short example of what I mean.

Assume we have to design a simple domain to store articles, like a blog system or any other content management solution. Besides the content of the article and the name of the author, we need a flag that tells the system whether the article is visible to the public: something like published as a Boolean. But there is also a requirement to schedule a date for publishing an article. In most database designs I have observed for these circumstances, both a Boolean published and a Date publishingDate are used. In my opinion this design is a bit redundant and also error prone. As a quick conclusion, I advise you to use just a Date instead of a Boolean from the beginning. The scenario described above can also be transferred to many other domains.

Now that we have an idea why we should replace the Boolean with a Date data type, let's focus on the details of how we could reach this goal.

Dealing with standard SQL suggests that replacing one Database Management System (DBMS) with another should not be a big issue. The reality is unfortunately a bit different. Not every available date data type, like Timestamp, is really recommendable to use. From experience I prefer the simple java.util.Date to avoid future problems and other surprises. The format stored in the database table looks like 'YYYY-MM-dd HH:mm:ss.0'. A single space separates the date and the time, and the trailing .0 denotes fractional seconds. The time zone itself is not part of this value; it is described separately by an offset to UTC. The Central European Time zone CET, for example, has an offset of one hour, written as UTC+01:00 in the international format. To handle the offset separately I got good results with java.util.TimeZone, which works well together with Date.

Before we continue, I will show you a little Java code snippet for the O/R mapper Hibernate and how you could create those table columns.

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import org.hibernate.annotations.CreationTimestamp;

@Entity
@Table(name = "ARTICLE")
public class ArticleDO {

    @CreationTimestamp
    @Column(name = "CREATED")
    // TIMESTAMP stores date and time, matching 'YYYY-MM-dd HH:mm:ss.0'
    @Temporal(TemporalType.TIMESTAMP)
    private Date created;

    @Column(name = "PUBLISHED")
    @Temporal(TemporalType.TIMESTAMP)
    private Date published;

    @Column(name = "DEFAULT_TIMEZONE")
    private String defaultTimezone;

    //Constructor
    public ArticleDO() {
        TimeZone.setDefault(Constraints.SYSTEM_DEFAULT_TIMEZONE);
        this.defaultTimezone = "UTC+00:00";
        try {
            // default 'zero' value instead of null: marks an unpublished article
            this.published = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.S")
                    .parse("0000-00-00 00:00:00.0");
        } catch (ParseException ex) {
            this.published = new Date(0L);
        }
    }

    public Date isPublished() {
        return published;
    }

    public void setPublished(Date publicationDate) {
        if (publicationDate != null) {
            this.published = publicationDate;
        } else {
            // publish immediately when no schedule date is given
            this.published = new Date(System.currentTimeMillis());
        }
    }
}


//SQL
INSERT INTO ARTICLE (CREATED, PUBLISHED, DEFAULT_TIMEZONE)
    VALUES ('1984-04-01 12:00:01.0', '0000-00-00 00:00:00.0', 'UTC+00:00');

Let's get a bit closer to the listing above. First we see the @CreationTimestamp annotation. It means that when the ArticleDO object gets created, the variable created is initialized with the current time. This value should never change, because an article is created only once but changed several times. The timezone is stored as a String. In the constructor you can see how the system timezone could be handled, but be careful: this value should not be trusted too much. If you have a user like me who travels a lot, you will see the same system time in every place I stay, because I usually never change it. As the default timezone I define the correct String for UTC+00:00. I do the same for the variable published. A Date can also be created from a String, which we use to set our default zero value. The setter for published offers the option to define a future date or to use the current time in case the article is published immediately. At the end of the listing I demonstrate a simple SQL insert for a single record.

But do not rush too fast. We also need to pay some attention to how we deal with the UTC offset, because in huge systems I have several times observed problems that occurred because developers relied only on default values.

The timezone in general is part of the internationalization concept. For managing the offset adjustments correctly we can choose between different strategies. Like in so many other cases, there is no clear right or wrong; everything depends on the circumstances and necessities of your application. If a website is only used nationwide, for example by a small business, and no time critical events are involved, everything becomes very easy. In this case it is unproblematic to manage the timezone settings automatically by the DBMS. But keep in mind that there are countries, like Mexico, with more than just one timezone. For an international system with clients spread around the globe it can be useful to set up every single DBMS in the cluster to UTC+00:00 and manage the offset in the application and the connected clients.
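
A minimal sketch of the second strategy could look like the following: the persisted value stays in UTC and only its representation is shifted to the client's timezone for display. The class and method names are my own choice for this illustration, not part of the example project.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimezoneFormatter {

    // the DBMS and all persisted Date values stay in UTC+00:00
    private static final String STORAGE_PATTERN = "yyyy-MM-dd HH:mm:ss.S";

    public static String formatForClient(final Date published, final String clientZone) {
        SimpleDateFormat formatter = new SimpleDateFormat(STORAGE_PATTERN);
        // only the representation is shifted, the stored value stays untouched
        formatter.setTimeZone(TimeZone.getTimeZone(clientZone));
        return formatter.format(published);
    }

    public static void main(final String[] args) {
        Date stored = new Date();
        System.out.println("UTC    : " + formatForClient(stored, "UTC"));
        System.out.println("Berlin : " + formatForClient(stored, "Europe/Berlin"));
    }
}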

Another issue we need to overcome is the question of how the date value of a single record should be initialized by default, because null values should be avoided. A full explanation of why returning null is not a good programming style is given in books like 'Effective Java' and 'Clean Code'. Dealing with NullPointerExceptions is something I don't really need. A best practice which works well for me is a default date-time value of '0000-00-00 00:00:00.0'. Like this I avoid unwanted publishing, and the meaning is very clear, for everybody.

As you can see, there are good reasons why Boolean data types should be replaced by Date. In this little article I demonstrated how easily you can deal with Date and timezones in Java and Hibernate. It should also not be a big thing to convert this example to other programming languages and frameworks. If you have your own solution, feel free to leave a comment, and share this article with your colleagues and friends.

Working with JSON in Java RESTful Services using Jackson

For a long time now, the JavaScript Object Notation [1] has been established as a lightweight standard that replaces XML for information exchange between heterogeneous systems. Both technologies, XML and JSON, close the gap of returning simple and complex data from a remote method invocation (RMI) when different programming languages are involved. Each of these technologies has its own benefits and disadvantages. A well designed XML document is human readable but, compared to JSON, needs more payload when it is sent through the network. For almost every programming language plenty of implementations exist to deal with XML and also with JSON. We don't need to reinvent the wheel and implement our own solution for handling JSON objects. But choosing the right library is not as easy as it might seem.

The most popular JSON library in Java projects is the one already mentioned in the title: Jackson [2], because of its huge functionality. Another important point for choosing Jackson over other libraries is that it is also used by the Jersey REST framework [3]. Before we start our journey with the Java frameworks Jersey and Jackson, I'd like to share some thoughts about things I often observe in huge projects during my professional life. For this reason I always proclaim: don't mix up different implementation libraries for the same technology, because it is a huge quality and security concern.

The general purpose of using JSON in RESTful applications is to transmit data between a server and a client via HTTP. To achieve that, we need to solve two challenges. First, on the server side, we need to create a valid JSON representation from a Java object, which we can send to the client. This process is called serialization. On the client side we do the second step, which is exactly the opposite of what we did on the server: de-serialization, the creation of a valid object from a JSON String.

In this article we use Java as the programming language on the server side and also on the client side to deal with JSON objects. But keep in mind that REST allows you to have different programming languages on the server and for the client. Java is always a good choice to implement your business logic on the server. The client side is often made with JavaScript; PHP, .NET and other programming languages are possible as well.

In the next step we will have a look at the project architecture. All artifacts are organized in one Apache Maven multi-module project. It is a good recommendation to follow this structure in your own projects too. The three artifacts we create are: api, server and client.

  • API: contains shared objects which are needed on the server and also on the client side, like domain objects and interfaces.
  • Server: producer of the RESTful service, depends on API.
  • Client: consumer of the RESTful service, depends on API.

Inside these artifacts a layer architecture is applied. This means that access to objects of a layer is only allowed in the direction of the underlying layers; in short: from top to bottom. The layer structure is organized by packages. Not every artifact contains every layer, only the ones which are implemented there. The following picture gives a better understanding of the whole architecture that is used.

The first piece of code I'd like to show are the JSON dependencies we will need, in the notation for Maven projects.

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>${version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>${version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>${version}</version>
</dependency>

Listing 1

With respect to the size of this article, I only focus on how the JSON object is used in RESTful applications; it is not a full workshop about RESTful (micro) services. As code base we reuse my open source GitHub project TP-ACL [4], an access control list. For our example I decided to slice the role functionality apart from the whole code base.

First of all we need a Java object which we can serialize to a JSON String. This domain object is the class RolesDO and is located in the domain layer inside the API module. The roles object contains a name, a description and a flag that indicates whether a role is allowed to be deleted.

@Entity
@Table(name = "ROLES")
public class RolesDO implements Serializable {

    private static final long serialVersionUID = 50L;

    @Id
    @Column(name = "NAME")
    private String name;

    @Column(name = "DESCRIPTION")
    private String description;

    @Column(name = "DELETEABLE")
    private boolean deleteable;

    public RolesDO() {
        this.deleteable = true;
    }

    public RolesDO(final String name) {
        this.name = name;
        this.deleteable = true;
    }

    //Getter & Setter
}

Listing 2
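
For a first impression of the serialization target: an instance created with new RolesDO("Guest") will, with Jackson's default settings and the usual getters behind the //Getter & Setter comment, roughly produce the JSON String {"name":"Guest","description":null,"deleteable":true}.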

So far, so good. As the next step we need to serialize the RolesDO in the server module as a JSON String. This is done in the RolesHbmDAO, which is stored in the implementation layer within the server module. The opposite direction, the de-serialization, is also implemented in the same class. But slowly, not everything at once; let's first have a look at the code.

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class RolesHbmDAO {

    public transient EntityManager mainEntityManagerFactory;

    public String serializeAsJson(final RolesDO role)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.writeValueAsString(role);
    }

    public RolesDO deserializeJsonAsObject(final String json)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, RolesDO.class);
    }

    public List<RolesDO> deserializeJsonAsList(final String json)
            throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, new TypeReference<List<RolesDO>>() { });
    }

    public List<RolesDO> listProtectedRoles() {

        CriteriaBuilder builder = mainEntityManagerFactory.getCriteriaBuilder();
        CriteriaQuery<RolesDO> query = builder.createQuery(RolesDO.class);

        Root<RolesDO> root = query.from(RolesDO.class);
        // protected roles are the ones which are not allowed to be deleted
        query.where(builder.isFalse(root.get("deleteable")));
        query.orderBy(builder.asc(root.get("name")));

        return mainEntityManagerFactory.createQuery(query).getResultList();
    }
}

Listing 3

The implementation is not so difficult to understand, but at this point the first question may appear: why is the de-serialization located in the server module and not in the client module? When the client sends JSON to the server module, we need to transform it into a real Java object. Simple as that.

Usually the Data Access Object (DAO) pattern contains all functionality for database operations. These CRUD (create, read, update and delete) functions we will skip here. If you would like to learn more about how the DAO pattern works, you can also check my project TP-CORE [4] on GitHub. So we go ahead to the REST service implemented in the class RoleService. Here we just pick the function fetchRole().

@Service
public class RoleService {

    @Autowired
    private RolesHbmDAO rolesDAO;

    @GET
    @Path("/{role}")
    @Produces({MediaType.APPLICATION_JSON})
    public Response fetchRole(final @PathParam("role") String roleName) {
        Response response = null;
        try {
            RolesDO role = rolesDAO.find(roleName);
            if (role != null) {
                String json = rolesDAO.serializeAsJson(role);
                response = Response.status(Response.Status.OK)
                        .type(MediaType.APPLICATION_JSON)
                        .entity(json)
                        .encoding("UTF-8")
                        .build();
            } else {
                response = Response.status(Response.Status.NOT_FOUND).build();
            }

        } catch (Exception ex) {
            LOGGER.log("ERROR CODE 500 " + ex.getMessage(), LogLevel.DEBUG);
            response = Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();
        }
        return response;
    }
}

Listing 4

The big secret lies in the lines where we stick the things together. First the RolesDO is fetched, and in the next line the DAO calls the serializeAsJson() method with the RolesDO as parameter. The result will be a JSON representation of the RolesDO. If the role exists and no exception occurs, the service is ready for consumption. In case of any problem the service sends an HTTP error code instead of the JSON.

Complex services which combine single services into a process take place in the orchestration layer. At this point we can switch to the client module to learn how the JSON String gets transformed back into a Java domain object. In the client we don't have the RolesHbmDAO, so we can't use its deserializeJsonAsObject() method, and of course we also don't want to create duplicated code, which forbids us to copy and paste the function into the client module.

As the counterpart to fetchRole() on the server side, we use getRole() in the client. The purpose of both implementations is identical; the different naming helps to avoid confusion.

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class Role {
    private final String API_PATH
            = "/acl/" + Constraints.REST_API_VERSION + "/role";
    private WebTarget target;

    public RolesDO getRole(String role) throws JsonProcessingException {
        Response response = target
                .path(API_PATH).path(role)
                .request()
                .accept(MediaType.APPLICATION_JSON)
                .get(Response.class);
        LOGGER.log("(get) HTTP STATUS CODE: " + response.getStatus(), LogLevel.INFO);

        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(response.readEntity(String.class), RolesDO.class);
    }
}

Listing 5

As a conclusion, we have now seen that the serialization and de-serialization of JSON objects with the Jackson library is not that difficult. In most cases we just need three methods:

  • serialize a Java object to a JSON String
  • create a Java object from a JSON String
  • de-serialize a list of objects inside a JSON String to a Java object collection

These three methods were already introduced in Listing 3 for the DAO. To prevent duplicated code we should separate this functionality into its own Java class. This is known as the design pattern Wrapper [5], also known as Adapter. To reach the best flexibility I implemented the JacksonJsonTools from TP-CORE as a generic class.

package org.europa.together.application;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

public class JacksonJsonTools<T> {

    public String serializeAsJsonObject(final T object)
            throws JsonProcessingException {
        try {
            ObjectMapper mapper = new ObjectMapper();
            return mapper.writeValueAsString(object);
        } catch (JsonProcessingException ex) {
            System.err.println(ex.getOriginalMessage());
            throw ex;
        }
    }

    public T deserializeJsonAsObject(final String json, final Class<T> object)
            throws JsonProcessingException {
        try {
            ObjectMapper mapper = new ObjectMapper();
            return mapper.readValue(json, object);
        } catch (JsonProcessingException ex) {
            System.err.println(ex.getOriginalMessage());
            throw ex;
        }
    }

    public List<T> deserializeJsonAsList(final String json, final Class<T> object)
            throws JsonProcessingException {
        try {
            ObjectMapper mapper = new ObjectMapper();
            // the element class is needed because the generic type T is erased at runtime
            return mapper.readValue(json, mapper.getTypeFactory()
                    .constructCollectionType(List.class, object));
        } catch (JsonProcessingException ex) {
            System.err.println(ex.getOriginalMessage());
            throw ex;
        }
    }
}

Listing 6
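
To show how the wrapper is meant to be used, here is a small sketch based on the signatures above; the RolesDO from Listing 2 serves as the example type, and its getName() getter is assumed from the //Getter & Setter comment.

import java.util.List;

public class JsonToolsExample {

    public static void main(final String[] args) throws Exception {
        JacksonJsonTools<RolesDO> jsonTools = new JacksonJsonTools<>();

        // Java object -> JSON String (serialization)
        String json = jsonTools.serializeAsJsonObject(new RolesDO("Guest"));

        // JSON String -> Java object (de-serialization)
        RolesDO role = jsonTools.deserializeJsonAsObject(json, RolesDO.class);

        // JSON array -> typed collection
        List<RolesDO> roles =
                jsonTools.deserializeJsonAsList("[" + json + "]", RolesDO.class);

        System.out.println(json + " -> " + role.getName() + " / " + roles.size());
    }
}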

This and many more useful implementations with a very stable API can be found in my project TP-CORE, free to use.

Resources:
[1] https://www.json.org/json-en.html
[2] https://github.com/FasterXML/jackson
[3] https://eclipse-ee4j.github.io/jersey/
[4] https://github.com/ElmarDott/TP-CORE/wiki/%5BCORE-02%5D-generic-Data-Access-Object—DAO
[5] https://en.wikipedia.org/wiki/Adapter_pattern

Working with text files on the Linux shell

Linux is becoming more and more popular as an operating system for IT professionals. One of the reasons for this movement are the server solutions: stability and low resource consumption are some of the important characteristics for this choice. If you have already played around with a Microsoft server, you will miss the graphical desktop on a Linux server. After logging into a Linux server you just see the command prompt waiting for your input.

In this short article I introduce some helpful Linux programs for working with files on the command line. This allows you to gather information, for example from log files. Before I start, I'd like to recommend a simple and powerful editor named joe.

Ctrl + C – Abort the current editing of a file without saving changes
Ctrl + KX – Exit the current editing and save the file
Ctrl + KF – Find text in the current file
Ctrl + V – Paste clipboard into document (CMD + V for Mac)
Ctrl + Y – Delete current line where cursor is

To install joe on a Debian based Linux distribution you just need to type:

sudo apt-get install joe

1. When you need to find content in a huge text file, GREP will be your best friend. GREP allows you to search for text patterns in files.

$ grep <pattern> file.log
    -n : show the line number of each match
    -i : case insensitive
    -v : invert the matches
    -E : extended regex
    -c : count the number of matching lines
    -l : print only the names of files that contain matches

2. When you need to analyze network packets, NGREP is the tool of your choice.

$ ngrep -I file.pcap
    -d : specify the network interface
    -i : case insensitive
    -x : print in alternate hexdump
    -t : print timestamp
    -I : read a pcap file

3. When you need to see the changes between two versions of a file, DIFF will do the job.

$ diff version1.txt version2.txt
    The letters and symbols in the output are markers, not options:
     a : lines added
     c : lines changed
     d : lines deleted
     # : line numbers
     < : line from file 1
     > : line from file 2

4. Sometimes it is necessary to bring the entries of a file into an order. SORT is going to help you with this task.

$ sort file.log
     -o : write the result to a file
     -r : reverse order
     -n : numerical sort
     -k : sort by column (key)
     -c : check whether the input is already ordered
     -u : sort and remove duplicates
     -f : ignore case
     -h : human readable numeric sort (2K, 1G, ...)

5. If you have to replace strings inside of a huge text, like find and replace, you can do that with SED, the stream editor.

$ sed 's/regex/replace/g' file.log
      s : substitute command
      g : replace all matches in a line, not only the first one
      d : delete command
      w : write the result to a file
     -e : add a script expression (command) to execute
     -n : suppress automatic output

6. Parsing fields separated by delimiters in text files can be done with CUT.

 $ cut -d ":" -f 2 file.log
     -d : define the field delimiter
     -f : select the field numbers
     -c : select specific character positions

7. UNIQ filters repeated lines in a text file, for example to extract the entries that occur just once. Keep in mind that it only compares adjacent lines, so the input is usually sorted first.

$ sort file.txt | uniq
     -c : count the number of occurrences
     -d : print only the duplicated lines
     -u : print only the lines that occur just once
     -i : case insensitive

8. AWK is a programming language designed to manipulate text and data, for example to print single columns of a file.

$ awk '{print $2}' file.log

A brief overview of Java frameworks

When you look up the word framework in Merriam-Webster, you find the following explanations:

  • a basic conceptional structure
  • a skeletal, openwork, or structural frame

You might think that libraries and frameworks are the same thing, but this is not correct. Your source code calls the functionality of a library directly. When you use a framework, it is exactly the opposite: the framework calls specific functions of your business logic. This concept is also known as Inversion of Control (IoC).
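
To make the difference tangible, here is a tiny self-contained sketch of my own; the mini "framework" and its handler interface are invented only for this demonstration.

import java.util.Arrays;
import java.util.List;

public class InversionOfControlDemo {

    // Library style: our code is in control and calls the library directly.
    static int libraryStyle(final List<Integer> values) {
        return values.stream().mapToInt(Integer::intValue).sum();
    }

    // Framework style (IoC): we only provide the callback, the "framework"
    // decides when and how often it is invoked.
    interface RequestHandler {
        void onRequest(String payload);
    }

    static void miniFramework(final RequestHandler handler) {
        // the framework owns the control flow
        for (String request : Arrays.asList("GET /articles", "GET /roles")) {
            handler.onRequest(request);
        }
    }

    public static void main(final String[] args) {
        System.out.println(libraryStyle(Arrays.asList(1, 2, 3)));
        miniFramework(payload -> System.out.println("handled: " + payload));
    }
}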

For web applications we can distinguish between client-side and server-side frameworks. The difference is that the client usually runs in a web browser, which means the available programming languages are limited to JavaScript. Depending on the web server we are able to choose between different programming languages; the most popular languages for the web are PHP and Java. All web languages have one thing in common: they produce HTML as output, which can be displayed in a web browser.

In this article I give a short overview of the most common Java frameworks, which can also be used in desktop applications. If you wish a fast introduction to Java server applications, you can check out my article about Java EE and Jakarta.

If you plan to use one or more of the discussed frameworks in your Java application, you just need to include them as a Maven or Gradle dependency.

Before I continue, I wish to tell you that these frameworks are made to help you solve problems in your daily business as a developer. Every problem has multiple solutions; for this reason it is more important to learn the concepts behind the frameworks than just how to use a specific framework. During the last two decades of my programming career I saw the rise and fall of plenty of frameworks. Examples of frameworks that almost nobody remembers today are the Google Web Toolkit and JBoss Seam.

The most used framework in Java for writing and executing unit tests is JUnit. An often used alternative to JUnit is TestNG. Both solutions work quite similarly. The basic idea is to execute a function with defined parameters and compare the output with the expected result. When the output matches the expectation, the test passes. JUnit and TestNG support the Test Driven Development (TDD) paradigm.
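
A minimal sketch of this idea with JUnit 5; the add() function under test is invented for the demonstration.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class CalculatorTest {

    // hypothetical function under test, only for the demonstration
    static int add(final int a, final int b) {
        return a + b;
    }

    @Test
    void addShouldSumBothParameters() {
        // execute the function with defined parameters ...
        int result = add(2, 3);
        // ... and compare the output with the expected result
        assertEquals(5, result);
    }
}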

If you need to emulate in your test case the behavior of an external system that is not available at the moment your tests are running, then Mockito is your best friend. Mockito works perfectly together with JUnit and TestNG.
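
A small sketch of such an emulation with Mockito and JUnit 5; the CurrencyGateway interface is invented for the demonstration.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.jupiter.api.Test;

public class ExchangeRateTest {

    // hypothetical external system we cannot reach during the test run
    interface CurrencyGateway {
        double euroToDollar();
    }

    @Test
    void conversionShouldUseTheEmulatedGateway() {
        CurrencyGateway gateway = mock(CurrencyGateway.class);
        when(gateway.euroToDollar()).thenReturn(1.10);

        // the business logic under test only sees the mocked behavior
        double price = 100 * gateway.euroToDollar();
        assertEquals(110.0, price, 0.001);
    }
}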

Behavior Driven Development (BDD) is an evolution of unit testing where you are able to define the circumstances under which the customer will accept the integrated functionality. The category of BDD integration tests is called acceptance tests. Integration tests are not a replacement for unit tests; they are an extension of them. The frameworks JGiven and Cucumber are very similar; both are, like Mockito, an extension for the unit test frameworks JUnit and TestNG.

For dealing with relational databases in Java we can choose between several persistence frameworks. Those frameworks allow you to define your database structure in Java objects without writing a single line of SQL; the mapping between Java objects and database tables happens in the background. Another great benefit of using an O/R mapper like Hibernate, iBatis or EclipseLink is the possibility to replace the underlying database server. But this achievement is not as easy to reach as it seems in the beginning.

In the next section I introduce a technique that the Spring Framework made popular: Dependency Injection (DI). It allows loose coupling between modules and an easier replacement of components without a new compile. Google's solution for DI is called Guice, and Java Enterprise brings its own standard named CDI.
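
A minimal sketch of constructor injection in the Spring style; the MailClient collaborator is invented for the demonstration.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// hypothetical collaborator; in a real project this would be its own component
interface MailClient {
    void send(String message);
}

// the Spring container creates the service and injects the collaborator;
// the class never instantiates its dependency itself (loose coupling)
@Service
public class ArticleService {

    private final MailClient mailClient;

    @Autowired
    public ArticleService(final MailClient mailClient) {
        this.mailClient = mailClient;
    }

    public void publish(final String title) {
        mailClient.send("published: " + title);
    }
}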

Graphical User Interfaces (GUI) are another category of frameworks. Which framework is useful depends on the chosen technology, like JavaFX or JSF; most of the provided controls are similar. Common libraries for JSF GUI components are PrimeFaces, BootsFaces or ButterFaces. OmniFaces is a framework that offers standardized solutions for common JSF problems, like caching and so on. Collections of JavaFX controls can be found in ControlsFX and BootstrapFX.

If you have to deal with Event Stream Processing (ESP), you should have a look at Hazelcast or Apache Kafka. ESP means that the system reacts to constantly generated data. An event refers to each single data point, which can be persisted in a database, and the stream represents the output of the events.
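
To give a rough impression of how events land in such a stream, here is a small producer sketch for Apache Kafka. The broker address and the topic name are pure assumptions for the demonstration.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventProducer {

    public static void main(final String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        config.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        config.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // every generated data point becomes an event appended to the stream (topic)
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(config)) {
            producer.send(new ProducerRecord<>("article-events", "ARTICLE-42", "published"));
        }
    }
}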

In December 2021 an often used technology came out of the shadows because of the Log4Shell vulnerability in Log4J. Log4J, together with the Simple Logging Facade for Java (SLF4J), is one of the most used dependencies in the software industry, so you can imagine how critical this information was and which important role logging has for software development. Another logging framework is Logback, which is the one I use.
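
A short sketch of the facade idea: the application code only talks to SLF4J, while Logback or Log4J are plugged in as the actual backend. The class is invented for the demonstration.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ArticlePublisher {

    private static final Logger LOGGER = LoggerFactory.getLogger(ArticlePublisher.class);

    public void publish(final String title) {
        // the code only depends on the SLF4J facade; the concrete backend
        // (Logback, Log4J, ...) is chosen via the classpath configuration
        LOGGER.info("article '{}' published", title);
    }
}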

Another very helpful dependency for professional software development is FF4J. It allows you to define feature toggles, also known as feature flags, to enable and disable functionality of a software program by configuration.

  • JUnit, TestNG – TDD, unit testing
  • Mockito – TDD, mocking objects
  • JGiven, Cucumber – BDD, acceptance testing
  • Hibernate, iBatis, EclipseLink – JPA, O/R mapper
  • Spring Framework, Google Guice – dependency injection
  • PrimeFaces, BootsFaces, ButterFaces – JSF user interfaces
  • ControlsFX, BootstrapFX – JavaFX user interfaces
  • Hazelcast, Apache Kafka – event stream processing
  • SLF4J, Logback, Log4J – logging
  • FF4J – feature flags

This list could be much longer; I just tried to focus on the most used frameworks that are relevant for Java programmers. Feel free to leave a comment to suggest something I may have forgotten. If you share this article on social media with your friends or colleagues, I will appreciate it.

How to reduce the size of a PDF document

When you own a big collection of PDF files, the used storage space can grow quite large. Sometimes I own PDF documents with more than 100 MB. Nowadays such storage capacities are not a big issue, but if you want to back up those files to other media like USB pen drives or a DVD, it would be great to reduce the file size of your PDF collection.

Long ago I worked with a little script that allowed me to reduce the file size of a PDF document significantly. This script called an interactive tool named PDF Sam with some command line parameters. Unfortunately, many years ago this option of PDF Sam became commercial, so I needed a new solution.

Before I get closer to my approach, I will discuss some basic information about what happens in the background. First of all, when your PDF blows up to a huge file, the reason is usually the included graphics. If you scan your handwritten notes to save them in one single archive, you should be aware that every scan is an image file. By default the PDF processor already optimizes those files. This is why the file size almost doesn't shrink when you try to compress a PDF with a tool like zip.

Scanned images can be optimized with a graphic tool like Gimp before they are included in a PDF document. Actions you can perform are reducing the image quality and increasing the contrast. Especially for scanned handwritten notes these steps are important. If the contrast is very low and you plan to print those documents, it can happen that they are not readable. Another problem in this case is that you can't apply a text search over the document. A solution to this problem is the usage of an OCR tool to transform the text in the images back into real text.

Let's briefly summarize the previous thoughts: when we try to reduce the file size of a PDF, we need to reduce the quality of the included images. This can be done by reducing the number of dots per inch (dpi). Make sure that the image is still readable after the compression. As long as you do not plan a high quality print, like a magazine or a book, nothing will be affected.

When we want to reduce plenty of PDF files in a short time, we can't do all those actions by hand; we need an automated solution. To reach that goal it is important that the tool we use supports the command line. Then we can create a simple batch job to perform the task without any hands-on work.

We have several options to optimize the images inside a PDF. Whether it is a good idea to perform all of them depends on the purpose of the usage.

  1. change the image file to the PNG format
  2. reduce the graphic dimensions to the real printable area
  3. reduce the DPI
  4. change the image color profile to gray-scale

As an Ubuntu Linux user I already have everything I need. And now comes the part where I explain my well working solution.

Ghostscript

GPL Ghostscript is used for PostScript/PDF preview and printing. Usually as a back-end to a program such as ghostview, it can display PostScript and PDF documents in an X11 environment.

If you don't have Ghostscript installed on your system, you can do this very quickly:

sudo apt-get update
sudo apt-get -y install ghostscript

Before you execute any script or command, make sure you do not overwrite the existing files with the output. In case something goes wrong, you would lose all originals for trying other options. Before you start to experiment, back up your files or generate the compressed PDFs in a separate folder.

gs -sDEVICE=pdfwrite \
   -dCompatibilityLevel=1.4 \
   -dPDFSETTINGS=/default \
   -dNOPAUSE -dQUIET -dBATCH -dDetectDuplicateImages \
   -dCompressFonts=true \
   -r150 \
   -sOutputFile=output.pdf input.pdf

The important parameter is -r150, which reduces the output resolution to 150 dpi. In the manual you can check for more parameters to compress the result even more strongly. The given command can be placed in a script, surrounded by a FOR loop that fetches all PDF files in a directory and writes the reduced versions into another directory.
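
If you prefer to stay in Java instead of writing a shell script, the same batch idea can be sketched with a ProcessBuilder that calls Ghostscript once per document; the directory names are assumptions for the demonstration, and a plain FOR loop in Bash works just as well.

import java.io.File;
import java.io.IOException;

public class PdfBatchCompressor {

    public static void main(final String[] args) throws IOException, InterruptedException {
        File input = new File("originals");    // assumed folder with the source PDFs
        File output = new File("compressed");  // the reduced copies go to a separate folder
        output.mkdirs();

        File[] files = input.listFiles((dir, name) -> name.toLowerCase().endsWith(".pdf"));
        if (files == null) {
            return;
        }
        for (File pdf : files) {
            // same Ghostscript call as above, executed once per document
            Process gs = new ProcessBuilder("gs", "-sDEVICE=pdfwrite",
                    "-dCompatibilityLevel=1.4", "-dPDFSETTINGS=/default",
                    "-dNOPAUSE", "-dQUIET", "-dBATCH", "-dDetectDuplicateImages",
                    "-dCompressFonts=true", "-r150",
                    "-sOutputFile=" + new File(output, pdf.getName()).getPath(),
                    pdf.getPath())
                    .inheritIO()
                    .start();
            gs.waitFor();
        }
    }
}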

I used the command on an original file with 260 MB and 640 pages. After the operation was done, the size was reduced to around 36 MB. The shrunken file is almost 7 times smaller than the original, a huge difference. As you can see in the screenshot, the quality of the pictures is almost identical.

As an alternative, in case you don't want to get closer to the command line, there is an online PDF compression tool available for free use, in German and English.

PDF Workbench

Linux systems have many powerful tools to deal with PDF documents. For example, the LibreOffice suite has a button to generate a proper PDF file for every document. But sometimes you wish to create a PDF from the printing dialog of any other application on your system. With the CUPS PDF printer driver you can enable this functionality on your system.

sudo apt-get install printer-driver-cups-pdf

As I already explained, OCR allows you to extract text from graphics to make a document searchable. When you work with this type of software, be aware that the results are good, but you can't avoid mistakes. Even when you perform OCR on a scanned book page, you will find several errors. OCRFeeder is a free and very powerful solution for Linux systems.

Another powerful helper is the tool PDF Arranger, which allows you to add pages to or remove pages from an existing PDF. You are also able to change the order of the pages.

Resources

  • https://www.ghostscript.com
  • https://wiki.gnome.org/Apps/OCRFeeder
  • https://github.com/pdfarranger/pdfarranger
  • https://docupub.com/pdfcompress/