How to remove duplicate lines from files preserving their order

How to remove duplicate lines of a file in Linux without sorting or changing their order (awk one-liner explained).
This article is also available in Greek.
I just released stup, a tool for easily keeping organized daily notes in the terminal. You can find it on GitHub here.

Suppose you have a text file and you need to remove all of its duplicate lines.

TL;DR

To remove duplicate lines while preserving their order in the file, use:

awk '!visited[$0]++' your_file > deduplicated_file
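
Note that redirecting the output back into the input file would truncate it before awk reads it. To de-duplicate in place, go through a temporary file (your_file.tmp is just a placeholder name):

awk '!visited[$0]++' your_file > your_file.tmp && mv your_file.tmp your_file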

How it works

The script keeps an associative array whose indices are the unique lines of the file and whose values are their occurrence counts. For each line of the file, if the line’s count is zero, the script increments it by one and prints the line; otherwise it just increments the count without printing the line.
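
If it helps, the one-liner can be expanded into an equivalent, more verbose awk program; this is a sketch of the same logic, not a different algorithm:

awk '{
    # first occurrence: the count is still empty/zero, so print the line
    if (visited[$0] == 0) {
        print $0
    }
    # count this occurrence either way
    visited[$0] = visited[$0] + 1
}' your_file > deduplicated_file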

I was not familiar with awk and I wanted to understand how this is accomplished with such a short script (awkward). I did my research and here is what is going on:

  • the awk “script” !visited[$0]++ is executed for each line of the input file
  • visited[] is a variable of type associative array (a.k.a. Map); we don’t have to initialize it, since awk does that for us the first time we access it
  • the $0 variable holds the contents of the line currently being processed
  • visited[$0] accesses the value stored in the map with key equal to $0 (the line being processed), i.e. the occurrences (which we set below)
  • the ! negates the occurrences value, turning it into a boolean
  • the ++ operation increases the variable’s value (visited[$0]) by one
    • If the value is empty, awk automatically converts it to 0 (a number) and then increases it.
    • Note: this is a post-increment, so it runs after the variable’s current value has been used in the expression (see the small demo after this list).
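
A quick way to observe the post-increment behavior (an illustrative snippet, unrelated to the script itself):

$ awk 'BEGIN { x = 5; print x++; print x }'
5
6

x++ evaluates to the old value (5) and only then increases x to 6.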

Summing up, the whole expression evaluates to:

  • true if the occurrences are zero/empty string
  • false if the occurrences are greater than zero
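
We can watch the expression change value across repeated occurrences of the same line (an illustrative run):

$ printf 'A\nA\nA\n' | awk '{ print !visited[$0]++ }'
1
0
0

Only the occurrences for which the expression prints 1 would be echoed by the original one-liner.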

awk statements consist of a pattern-expression and an associated action.

<pattern/expression> { <action> }

If the pattern succeeds, the associated action is executed. If we don’t provide an action, awk prints the line by default.

An omitted action is equivalent to { print $0 }

Our script consists of one awk statement with an expression, omitting the action. So this:

awk '!visited[$0]++' your_file > deduplicated_file

is equivalent to this:

awk '!visited[$0]++ { print $0 }' your_file > deduplicated_file

For every line of the file, if the expression succeeds, the line is printed to the output. Otherwise, the action is not executed and nothing is printed.
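
For contrast, here is a pattern paired with an explicit action (an illustrative example, unrelated to de-duplication). NR holds the number of the current line, so this prints every line except the first:

awk 'NR > 1 { print $0 }' your_file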

Why not use the uniq command?

The uniq command removes only adjacent duplicate lines. Demonstration:

$ cat test.txt
A
A
A
B
B
B
A
A
C
C
C
B
B
A
$ uniq < test.txt
A
B
A
C
B
A
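
For comparison, running the awk one-liner on the same test.txt keeps only the first occurrence of each line, in the original order:

$ awk '!visited[$0]++' test.txt
A
B
C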

Other approaches

Using the sort command

We can also use the following sort command to remove the duplicate lines, but the original line order is not preserved:

sort -u your_file > sorted_deduplicated_file
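
For example, with a hypothetical fruits.txt, the first-seen order (banana, apple, cherry) is lost and the output comes back alphabetical:

$ cat fruits.txt
banana
apple
banana
cherry
$ sort -u fruits.txt
apple
banana
cherry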

Using cat, sort and cut

The previous approach produces a de-duplicated file whose lines are sorted by content. By piping a few commands together, we can overcome this issue:

cat -n your_file | sort -uk2 | sort -nk1 | cut -f2-

How it works

Suppose test.txt now contains the following lines:

abc
ghi
abc
def
xyz
def
ghi
klm

cat -n test.txt prepends the line number to each line.

1	abc
2	ghi
3	abc
4	def
5	xyz
6	def
7	ghi
8	klm

sort -uk2 sorts the lines based on the second column (-k2 option) and keeps only the first occurrence of lines with the same second-column value (-u option).

1	abc
4	def
2	ghi
8	klm
5	xyz

sort -nk1 sorts the lines based on their first column (-k1 option), treating the column as a number (-n option).

1	abc
2	ghi
4	def
5	xyz
8	klm

Finally, cut -f2- prints each line from the second column to its end (-f2- option; note the - suffix, which instructs it to include the rest of the line).

abc
ghi
def
xyz
klm
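
If you use this pipeline often, you can wrap it in a small shell function (dedupe is a name chosen here for illustration, not an existing command):

dedupe() {
    cat -n "$1" | sort -uk2 | sort -nk1 | cut -f2-
}

dedupe your_file > deduplicated_file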

That’s all.

Comments and feedback
For feedback, comments, typos, etc., please use this issue.
Thanks for visiting!