How to monitor progress of ssh file transfer and output to a log file

I'm writing a bash script to periodically transfer data to a remote system. I have a local command that generates a stream, and a remote command that consumes it, so I'm doing something like this:

generate_data | ssh remoteserver.example.com consume_data

(Where I have ssh keys set up so I can do this non-interactively.) This works fine. However, since this will be an automated process (running as a cron job) and may sometimes be transferring large amounts of data over limited bandwidth, I'd like to be able to place periodic progress updates in my log file. I had thought to use pv (pipe viewer) for this, and this is the best I could come up with:

generate_data | pv -fb | ssh remoteserver.example.com consume_data

Again, it works... but pv was really written with terminal output in mind, so I end up with a mess in the log that looks like

2.06MB^M2.19MB^M2.37MB^M 2.5MB^M2.62MB^M2.87MB^M3MB^M3.12MB^M3.37MB

I'd prefer log messages along the lines of

<timestamp> 2.04MB transferred...
<timestamp> 3.08MB transferred...

If anybody has any clever ideas of how to do this, either with different arguments to pv or via some other mechanism, I'd be grateful.

EDIT: Thanks for the answers so far. Of course there are a lot of possible home-brewed solutions; I was hoping to find something that would work "out of the box." (Not that I'm ruling out home-brewed; it may be the easiest thing in the end. Since pv already does 98% of what I need, I'd prefer not to re-invent it.)

PostScript: Here's the line I ended up using, in hopes it might help someone else at some point.

{ generate_data | pv -fbt -i 60 2>&3 | ssh remoteserver consume_data; } 3>&1 | awk -v RS='\r' '{print $1 " transferred in " $2; fflush();}' >> logfile
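To get the `<timestamp> ... transferred` lines the question asks for, a plain POSIX shell loop can stamp each pv record with date, with no dependence on gawk extensions. This is only a sketch: the pv stream is simulated with printf, and the `<bytes> <elapsed>` record layout assumes `pv -bt` output as in the line above.

```shell
# Simulate the CR-separated "<bytes> <elapsed>" records that `pv -fbt` emits,
# then stamp each one with the current time (POSIX sh, no gawk needed).
printf '2.04MB 0:01:00\r3.08MB 0:02:00\r' |
tr '\r' '\n' |
while IFS=' ' read -r bytes elapsed; do
  printf '%s %s transferred in %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$bytes" "$elapsed"
done
```

In the real pipeline, the printf would be replaced by pv's redirected stderr (the `2>&3 ... 3>&1` trick above).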
asked May 16 '11 by eaj

4 Answers

If you want to stick with pv, you could postprocess its output a little. At a minimum, turn the CRs into LFs.

{ generate_data | pv -bft 2>&3 | consume_data >/dev/null; } 3>&1 | tr '\015' '\012'

Use awk for fancier processing.

{ generate_data | pv -bft 2>&3 | consume_data >/dev/null; } 3>&1 |
awk -vRS='\r' '{print $2, $1 " transferred"}'

Do however keep in mind that the standard text processing utilities only flush their output at the end of each line if they're printing to a terminal. So if you pipe pv to some other utility whose output goes to a pipe or file, there will be a non-negligible delay due to buffering. If you have GNU awk or some other implementation that has the fflush function (it's common but not standard), make it flush its output on every line:

{ generate_data | pv -bft 2>&3 | consume_data >/dev/null; } 3>&1 |
awk -vRS='\r' '{print $2, $1 " transferred"; fflush()}'
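If your awk lacks fflush, another option is to force line buffering on the downstream command itself. This sketch assumes GNU coreutils is available (stdbuf is a GNU tool, not POSIX), again with the pv stream simulated by printf:

```shell
# stdbuf -oL (GNU coreutils) makes tr line-buffer its output even when it
# goes to a pipe or file, so each sample hits the log as soon as it arrives.
printf '2.06MB\r2.19MB\r2.37MB\r' | stdbuf -oL tr '\r' '\n' >> transfer.log
```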
answered Nov 09 '22 by Gilles 'SO- stop being evil'

Here's a small Ruby script that I believe does what you want. With Ruby's overhead I only get about 1 MB per second copying a file to the local filesystem, but you mentioned the pipe will be bandwidth-limited, so this may be OK. I pulled the number_to_human_size function from Rails (Action View).

#!/usr/bin/ruby                                                                                                                           
require 'rubygems'
require 'active_support/core_ext/numeric/bytes' # on old ActiveSupport, plain: require 'active_support'

# File vendor/rails/actionpack/lib/action_view/helpers/number_helper.rb, line 87                                                          
def number_to_human_size(size)
  case
    when size < 1.kilobyte then '%d Bytes' % size
    when size < 1.megabyte then '%.1f KB'  % (size / 1.0.kilobyte)
    when size < 1.gigabyte then '%.1f MB'  % (size / 1.0.megabyte)
    when size < 1.terabyte then '%.1f GB'  % (size / 1.0.gigabyte)
    else                        '%.1f TB'  % (size / 1.0.terabyte)
  end.sub('.0', '')
rescue
  nil
end

UPDATE_FREQ = 2
count = 0
time1 = Time.now

STDIN.binmode   # binary-safe pass-through
STDOUT.binmode
while (b = STDIN.getbyte)   # getbyte avoids getc's Integer-to-String change in Ruby >= 1.9
  count += 1
  STDOUT.putc b
  time2 = Time.now
  if time2 - time1 > UPDATE_FREQ
    time1 = time2
    STDERR.puts "#{time2} #{number_to_human_size(count)} transferred..."
  end
end
answered Nov 09 '22 by Dan


You might have a look at bar: http://clpbar.sourceforge.net/

Bar is a simple tool to copy a stream of data and print a display for the user on stderr showing (a) the amount of data passed, (b) the throughput of the data transfer, and (c) the transfer time, or, if the total size of the data stream is known, the estimated time remaining, what percentage of the data transfer has been completed, and a progress bar.

Bar was originally written for the purpose of estimating the amount of time needed to transfer large amounts (many, many gigabytes) of data across a network. (Usually in an SSH/tar pipe.)

answered Nov 09 '22 by jlliagre


The source code is at http://code.google.com/p/pipeviewer/source/checkout, so you can edit a little C and keep using pv!

EDIT: Get the source, then edit line 578 of display.c, where it has this code:

display = pv__format(&state, esec, sl, tot);
if (display == NULL)
    return;

if (opts->numeric) {
    write(STDERR_FILENO, display, strlen(display)); /* RATS: ignore */
} else if (opts->cursor) {
    pv_crs_update(opts, display);
} else {
    write(STDERR_FILENO, display, strlen(display)); /* RATS: ignore */
    write(STDERR_FILENO, "\r", 1);
}

you can change "\r" to "\n" and recompile. Each update then lands on its own line, which is more useful in a log. You could also reformat that entire output string if you wanted.
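Rather than editing by hand, the one-character change can be scripted. This is a sketch that assumes the write(STDERR_FILENO, "\r", 1) call appears verbatim in display.c as quoted above:

```shell
# Flip the trailing carriage return to a newline in pv's display.c,
# keeping a .bak backup of the original file before recompiling.
sed -i.bak 's/write(STDERR_FILENO, "\\r", 1);/write(STDERR_FILENO, "\\n", 1);/' display.c
```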

answered Nov 09 '22 by jds8288