I want to get a list of folders at the current level (not including their subfolders) and simply print the folder name and a count of the number of files in the folder (preferably filtering to *.jpg if possible).
Is this possible in the standard bash shell? ls -l prints about everything but the file count :)
The easiest way to count files in a directory on Linux is to use the ls command and pipe its output to wc -l. The wc command prints byte, word, and line counts; with -l it counts newlines, and since ls prints one entry per line when its output is piped, that line count is the number of entries in the directory.
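For example, a minimal sketch of that approach (note that it also counts directories, and that counting ls output lines miscounts filenames that contain newlines):

$ ls | wc -l            # everything in the current directory
$ ls -1 *.jpg | wc -l   # only the *.jpg files (prints an error if none match)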
The ls command is used to list files or directories in Linux and other Unix-based operating systems. Just as you would browse in File Explorer or Finder with a GUI, the ls command lists all files and directories in the current directory by default and lets you interact with them from the command line.
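For example, ls -d */ lists just the directories at the current level: the shell expands the */ glob to directory names, and -d makes ls list the directories themselves instead of their contents (the directory names here are illustrative):

$ ls -d */
Desktop/  Downloads/  Music/  Pictures/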
I've come up with this one:
find . -maxdepth 1 -type d | while IFS= read -r dir; do
    count=$(find "$dir" -maxdepth 1 -iname '*.jpg' | wc -l)
    echo "$dir ; $count"
done
Drop the second -maxdepth 1 if the search within the directories for jpg files should be recursive into sub-directories. Note that this only considers the names of the files; you could rename a file, hiding that it is a jpg picture. You can use the file command to make a guess based on the content instead (the following also searches recursively):
find . -mindepth 1 -maxdepth 1 -type d | while IFS= read -r dir; do
    count=$(find "$dir" -type f -print0 | xargs -0 file -b --mime-type |
            grep -c 'image/jpeg')
    echo "$dir ; $count"
done
However, that is much slower, since it has to read part of each file and interpret what it contains (if it is lucky, it finds a magic id at the start of the file). The -mindepth 1 prevents it from printing . (the current directory) as another directory that it searches.
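If the name-based count is enough, a sketch in plain bash does the same per-directory counting with globbing alone, avoiding find and the pipeline entirely (assumes bash; nullglob makes a pattern with no matches expand to nothing instead of to itself):

shopt -s nullglob
for dir in */; do
    jpgs=("$dir"*.jpg)           # array of matching files in $dir
    echo "$dir ; ${#jpgs[@]}"    # directory name and match count
done

Unlike -iname, the glob match is case-sensitive; shopt -s nocaseglob would make it case-insensitive.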
I found this question after I'd already figured out my own similar script. It seems to fit your conditions and is very flexible so I thought I'd add it as an answer.
Advantages:
- Adjustable depth (0 for files directly in ., 1 for first level subdirectories, etc.)
- Runs only a single find command, so it's a bit faster on large directories

Raw code:
find -P . -type f | rev | cut -d/ -f2- | rev | \
cut -d/ -f1-2 | cut -d/ -f2- | sort | uniq -c
Wrapped into a function and explained:
fc() {
    # Usage: fc [depth >= 0, default 1]
    # 1. List all files, not following symlinks.
    #    (Add filters like -maxdepth 1 or -iname '*.jpg' here.)
    # 2. Cut off filenames in bulk. Reverse and chop to the
    #    first / (remove filename). Reverse back.
    # 3. Cut everything after the specified depth, so that each line
    #    contains only the relevant directory path.
    # 4. Cut off the preceding '.' unless that's all there is.
    # 5. Sort and group to unique lines with count.
    find -P . -type f \
        | rev | cut -d/ -f2- | rev \
        | cut -d/ -f1-$((${1:-1}+1)) \
        | cut -d/ -f2- \
        | sort | uniq -c
}
Produces output like this:
$ fc 0
   1668 .
$ fc # depth of 1 is default
      6 .
      3 .ssh
     11 Desktop
     44 Downloads
   1054 Music
    550 Pictures
Of course, with the number first it can be piped to sort:
$ fc | sort
      3 .ssh
      6 .
     11 Desktop
     44 Downloads
    550 Pictures
   1054 Music
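As the comment in the function suggests, filters can be added to the find stage. For instance, a hypothetical jpg-only variant (fcj is an illustrative name, not part of the original answer):

fcj() {
    # Same pipeline as fc, but count only files matching *.jpg
    # (case-insensitive via -iname).
    find -P . -type f -iname '*.jpg' \
        | rev | cut -d/ -f2- | rev \
        | cut -d/ -f1-$((${1:-1}+1)) \
        | cut -d/ -f2- \
        | sort | uniq -c
}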