Linux is becoming more and more popular as an operating system for IT professionals. One of the reasons for this trend is its server solutions: stability and low resource consumption are among the key characteristics behind this choice. If you have already played around with a Microsoft server, you will miss the graphical desktop on a Linux server. After logging in to a Linux server you just see the command prompt waiting for your input.
In this short article I introduce some helpful Linux programs for working with files on the command line. They allow you to gather information, for example from log files. Before I start, I'd like to recommend a simple and powerful editor named joe. Its most important shortcuts are:
Ctrl + C – Abort the current editing of a file without saving changes
Ctrl + KX – Exit the current editing and save the file
Ctrl + KF – Find text in the current file
Ctrl + Shift + V – Paste the terminal clipboard into the document (Cmd + V for Mac)
Ctrl + Y – Delete current line where cursor is
To install joe on a Debian-based Linux distribution you just need to type:
sudo apt-get install joe
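Once joe is installed, you open a file by passing its name as an argument; the path below is just an example:

joe /var/log/syslog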
1. When you need to find content in a huge text file, GREP will be your best friend. GREP allows you to search for text patterns in files; a short example follows the option list.
grep <pattern> file.log
-n : show the line number of each matching line
-i : case insensitive
-v : invert matches
-E : extended regex
-c : count number of matches
-l : list only the names of files that match the pattern
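A quick, hypothetical example (the pattern and the file name are made up for illustration):

# case-insensitive search that also prints line numbers
grep -in "connection refused" /var/log/syslog

# just count how many lines match
grep -c "connection refused" /var/log/syslog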
2. When you need to analyze network packets, NGREP is the tool of your choice (see the example after the options).
ngrep -I file.pcap
-d : specify the network interface
-i : case insensitive
-x : print in alternate hexdump
-t : print timestamp
-I : read a pcap file
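A sketch of both modes; the interface name, the pattern, and the capture file name are only assumptions:

# live capture on eth0, case-insensitive match for "login" on port 80 (needs root)
sudo ngrep -d eth0 -i "login" port 80

# the same kind of match against a stored pcap file, with timestamps
ngrep -t -I capture.pcap "login"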
3. When you need to see the changes between two versions of a file, DIFF will do the job; a sample run follows the legend below.
diff version1.txt version2.txt
a : lines added
c : lines changed
d : lines deleted
# : the line numbers involved in each file
< : line from file 1
> : line from file 2
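A hypothetical run could look like this (the file contents are invented for illustration); "2c2" means line 2 of file 1 was changed into line 2 of file 2:

diff version1.txt version2.txt
2c2
< timeout = 30
---
> timeout = 60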
4. Sometimes it is necessary to put the entries in a file into a certain order. SORT is going to help you with this task (an example follows the options).
sort file.log
-o : write the result to a file
-r : reverse order
-n : numerical sort
-k : sort by column
-c : check whether the input is ordered
-u : sort and remove duplicates
-f : ignore case
-h : compare human-readable numbers (e.g. 2K, 1G)
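Two small examples; the file names and the column number are assumptions:

# numeric sort on the second column, removing duplicate lines
sort -n -k 2 -u file.log

# reverse sort, writing the result to a new file
sort -r -o sorted.log file.log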
5. If you have to replace strings inside a huge text, as in find and replace, you can do that with SED, the stream editor; examples follow the option list.
sed 's/regex/replace/g' file.log
s : substitute (search and replace)
g : replace every match in a line, not only the first
d : delete matching lines
w : write the result to a file
-e : add an expression/command to execute
-n : suppress the automatic output
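Two hedged examples; the patterns and the file names are made up:

# replace every occurrence of http with https and print the result
sed 's/http/https/g' config.txt

# delete all lines that contain DEBUG
sed '/DEBUG/d' file.log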
6. Parsing fields separated by delimiters in text files can be done with CUT (see the examples below).
cut -d ":" -f 2 file.log
-d : use the field delimiter
-f : field numbers
-c : select by character position
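Two short examples; /etc/passwd is a real colon-delimited file, the second file name is an assumption:

# extract the user names (first field) from the colon-delimited /etc/passwd
cut -d ":" -f 1 /etc/passwd

# take only the first 10 characters of every line
cut -c 1-10 file.log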
7. To filter out duplicate lines in a text file you can use UNIQ. Note that it only detects adjacent duplicates, so the input is usually sorted first; see the examples below.
uniq file.txt
-c : count the numbers of duplicates
-d : print duplicates
-i : case insensitive
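Typical usage combines sort and uniq, since uniq only compares neighbouring lines; the file name is just an example:

# count how often each line occurs
sort file.txt | uniq -c

# show only the lines that appear more than once
sort file.txt | uniq -d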
8. AWK is a programming language designed for manipulating data.
awk '{print $2}' file.log
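Two small hedged examples; the field positions, the ERROR keyword, and the file name are assumptions:

# print the second field only for lines whose first field is ERROR
awk '$1 == "ERROR" {print $2}' file.log

# sum up the values in the third column
awk '{sum += $3} END {print sum}' file.log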