 

How to delete the last n lines of a file? [duplicate]

Tags:

linux

bash

I was wondering if someone could help me out.

I'm writing a bash script and I want to delete the last 12 lines of a specific file.

I have had a look around and have come up with the following:

head -n -12 /var/lib/pgsql/9.6/data/pg_hba.conf | tee /var/lib/pgsql/9.6/data/pg_hba.conf >/dev/null

But this wipes the file completely.

All I want to do is permanently delete the last 12 lines of that file so I can overwrite it with my own rules.

Any help on where I'm going wrong?


2 Answers

There are a number of methods, depending on your exact situation. For small, well-formed files (say, less than 1M, with regular sized lines), you might use Vim in ex mode:

ex -snc '$-11,$d|x' smallish_file.txt
  • -s -> silent; this is batch processing, so no UI necessary (faster)
  • -n -> No need for an undo buffer here
  • -c -> the command list
  • '$-11,$d' -> Address the range from 11 lines before the last line ($-11) through the last line ($), 12 lines in total, and delete it. Note the single quotes so that the shell does not expand the $ sequences before ex sees them.
  • x -> "write and quit"
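A quick sanity check, using seq to generate a throwaway sample file (the file name is just an example):

```shell
# Build a 100-line sample file, then drop its last 12 lines with ex
command -v ex >/dev/null 2>&1 || exit 0  # skip on systems without ex
seq 100 > smallish_file.txt
ex -snc '$-11,$d|x' smallish_file.txt
wc -l < smallish_file.txt    # 88
tail -n 1 smallish_file.txt  # 88
```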

For a similar, perhaps more authentic throw-back to '69, the ed line-editor could do this for you:

ed -s smallish_file.txt <<< $'-11,$d\nwq'
  • Note the $'...' ANSI-C quoting, which turns the \n into a real newline; this differs from the plain single quotes used in the ex command above.
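The same check works with ed. Note that wq as a single command is a GNU ed extension; classic ed wants w and q as separate commands:

```shell
command -v ed >/dev/null 2>&1 || exit 0  # skip on systems without ed
seq 100 > smallish_file.txt
# After loading, ed's current line is the last line, so -11,$ spans 12 lines
ed -s smallish_file.txt <<< $'-11,$d\nwq'
wc -l < smallish_file.txt  # 88
```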

If Vim/ex and Ed are scary, you could use sed with some shell help:

sed -i "$(($(wc -l < smallish_file.txt) - 11)),\$d" smallish_file.txt
  • -i -> in place: write the change back to the file
  • The line count less 11, so the range runs from there to the end ($) and covers 12 lines. Note the escaped dollar symbol (\$) so the shell does not expand it.
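To see the arithmetic work out, again on a generated sample file:

```shell
seq 100 > smallish_file.txt
# 100 lines - 11 = 89, so the expression becomes "89,$d": delete lines 89-100
sed -i "$(($(wc -l < smallish_file.txt) - 11)),\$d" smallish_file.txt
wc -l < smallish_file.txt    # 88
tail -n 1 smallish_file.txt  # 88
```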

But using the above methods will not be performant for larger files (say, more than a couple of megs). For larger files, use the intermediate/temporary file method, as the other answers have described. A sed approach:

tac some_file.txt | sed '1,12d' | tac > tmp && mv tmp some_file.txt
  • tac to reverse the line order
  • sed to remove the last (now first) 12 lines
  • tac to reverse back to the original order
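A quick check of the round trip, on a sample file generated with seq:

```shell
seq 100 > some_file.txt
tac some_file.txt | sed '1,12d' | tac > tmp && mv tmp some_file.txt
wc -l < some_file.txt    # 88
head -n 1 some_file.txt  # 1  (original order preserved)
```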

More efficient than sed is a head approach:

head -n -12 larger_file.txt > tmp_file && mv tmp_file larger_file.txt
  • -n NUM -> show only the first NUM lines. With a negative NUM, as here, it prints all but the last NUM lines.
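Verifying on a generated file (negative counts are a GNU head feature; BSD/macOS head lacks them):

```shell
seq 100 > larger_file.txt
head -n -12 larger_file.txt > tmp_file && mv tmp_file larger_file.txt
wc -l < larger_file.txt  # 88
```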

But for real efficiency, perhaps for really large files or where a temporary file would be unwarranted, truncate the file in place. Unlike the other methods, which all overwrite the entire old file with the entire new content, this one will be near instantaneous no matter the size of the file.

# In readable form:
BYTES=$(tail -12 really_large.txt | wc -c)
truncate -s -$BYTES really_large.txt

# Inline, perhaps as part of a script
truncate -s -$(tail -12 really_large.txt | wc -c) really_large.txt

The truncate command makes a file exactly the specified size in bytes. If the file is too short, it is extended; if it is too long, the excess is chopped off very efficiently. It does this at the filesystem level, updating metadata rather than rewriting content. The magic here is in calculating where to chop:

  • -s -NUM -> Note the dash/negative; says to reduce the file by NUM bytes
  • $(tail -12 really_large.txt | wc -c) -> returns the number of bytes to be removed
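Putting it together on a small sample (truncate is part of GNU coreutils; it is not present on stock macOS):

```shell
seq 100 > really_large.txt
# The last 12 lines of seq 100 are "89".."100": 11*3 + 4 = 37 bytes
BYTES=$(tail -n 12 really_large.txt | wc -c)
truncate -s -"$BYTES" really_large.txt
wc -l < really_large.txt    # 88
tail -n 1 really_large.txt  # 88
```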

So, you pays your money and takes your choice. Choose wisely!

answered Sep 17 '25 by hunteke

The tee in your pipeline truncates pg_hba.conf for writing as soon as the pipeline starts, typically before head has finished reading it, which is why the file ends up empty. Write to a temporary file instead:

head -n -12 test.txt > tmp.txt && cp tmp.txt test.txt
answered Sep 17 '25 by user unknown