I'm looking for a way to limit the amount of output produced by all command-line programs in Linux, and preferably to be told when the output has been limited.

I'm working on a server over a connection with a laggy display. Occasionally I accidentally run a command that writes a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files, and then I have to wait a while for all the output to be printed.

So is there a way to automatically pipe all output through a command like head or wc to prevent too much output from having to be printed to the terminal?
The -n option tells head to limit the number of lines of output; alternatively, the -c option limits the output by number of bytes.
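For example (large_file.log is just a placeholder name; the K suffix is specific to GNU head):

$ head -n 20 large_file.log    # print only the first 20 lines
$ head -c 1K large_file.log    # print only the first 1024 bytes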
How do I limit my ls results? The ls | cut -f 1,n file command you suggested would output the first and nth fields of each line in file, and would completely ignore the output of ls, since cut reads from the named file rather than from the pipe.
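A simpler way to trim ls output is to pipe it into head or wc directly (20 here is an arbitrary cutoff):

$ ls | head -n 20    # show only the first 20 entries
$ ls | wc -l         # or just report how many entries there are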
ulimit is a shell built-in that lets you view or limit the system resources that individual users consume. Limiting resource usage is valuable in environments with multiple users or with system performance problems.
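For example (the numbers are illustrative; limits set this way apply only to the current shell and its children, and in bash the values for -v are in 1024-byte units):

$ ulimit -a              # list all current limits
$ ulimit -v 1048576      # cap virtual memory for this shell at 1 GiB

Note that ulimit constrains resources such as memory and file size, not the amount of text a command prints to the terminal.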
In UNIX/Linux, filters are commands that read from the standard input stream (stdin), perform some operation, and write to the standard output stream (stdout). Both streams can be redirected and connected with pipes as needed. Common filter commands include grep, more, and sort.
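As an illustration, filters can be chained with pipes (server.log and the ERROR pattern are placeholders):

$ grep 'ERROR' server.log | sort | uniq -c | sort -rn | head -n 10    # ten most frequent matching lines
$ grep 'ERROR' server.log | more                                      # or just page through the matches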
I don't know about the general case, but for each well-known command (cat, ls, find?) you could do the following:
So along these lines (utterly untested):
$ mkdir -p ~/bin                         # ~/bin must exist and be on your PATH
$ ln -s "$(which cat)" ~/bin/old_cat     # keep the real cat reachable under a new name
function trunc_cat () {
    old_cat "$@" | head -n 100           # forward all arguments, keep only the first 100 lines
}
alias cat=trunc_cat
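This assumes old_cat ends up on your PATH. When you do want the untruncated output, a leading backslash bypasses the alias for a single invocation:

$ \cat big_file.txt    # runs the real cat, skipping the alias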
Making aliases of your most common commands would be a good start. Something like

alias lm="ls -al | more"

Note that aliases cannot take arguments (the $@ in alias cam="cat $@ | more" expands when the alias is defined, not when it is used), so for commands that need arguments, define a function instead:

cam () { cat "$@" | more; }
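With the function in place, usage looks like this (the file name is just an example):

$ cam /var/log/syslog    # pages the output instead of dumping it all at once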