Possible Duplicate:
Command line: piping find results to rm
I recently frigged up my external hard drive with my photos on it (most are on DVD anyway, but..) with some partition friggery.
Fortunately I was able to put things back together with PhotoRec, another Unix partition utility, and PDisk.
PhotoRec returned over one thousand folders chock-full of everything from .txt files to important .NEFs.
So I tried to make the sorting easier by using Unix, since the OS X Finder would simply crumble under such a request as selecting and deleting a billion .txt files.
But I ran into trouble when I tried to find and delete the .txt files, or to find and move all the JPEGs recursively into a new folder called jpegs. I am a Unix noob, so I need some assistance, please.
Here is what I did in bash (I am in the directory where ls lists all the folders and files I need to act upon):
find . -name *.txt | rm
or
sudo find . -name *.txt | rm -f
It gives me some error about needing to unlink the files. Whatever.
I need to find all .txt files recursively and delete them, preferably verbosely.
Thanks.
$ find . -name "*.txt" -type f -delete
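Since you asked for verbose output: adding -print before -delete makes find echo each path as it removes it (both OS X's BSD find and GNU find support this; the order matters, since the expression is evaluated left to right for each file):
$ find . -name "*.txt" -type f -print -delete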
You can't pipe filenames to rm; rm takes its arguments from the command line, not from standard input, so the piped output is simply ignored. You need to use xargs instead. Also, remember to quote the file pattern "*.txt", or the shell will expand it before find ever sees it.
find . -name "*.txt" | xargs rm
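PhotoRec's recovered names are usually tame, but if any filenames contain spaces, the plain pipe above will break. A more robust sketch, assuming the -print0/-0 extensions (present in both GNU and OS X find/xargs), with rm -v for the verbose output you wanted:
$ find . -name "*.txt" -type f -print0 | xargs -0 rm -v
For the second half of your question, moving all the JPEGs recursively into a new folder called jpegs can be done with -exec. This is a sketch that assumes the recovered files end in .jpg (adjust the -iname pattern if some are .jpeg); the ! -path test keeps find from trying to move files already inside the destination:
$ mkdir jpegs
$ find . -iname "*.jpg" -type f ! -path "./jpegs/*" -exec mv {} jpegs/ \;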