
How to speedup perforce auto resolve?

I would like to know how to speed up Perforce auto resolve when doing an integration (merge yours and theirs if no conflicts exist).

Currently it takes hours for ~5000 files when running through a proxy server, even when the proxy server has the files pre-cached.

Also, the P4V interface gives no hint about the progress of the task; you can't tell whether it will finish in a second or next year.

asked Jan 29 '10 by sorin

2 Answers

5000 files isn't very many to resolve for a moderately powerful server.

Are your files binary files of significant size? If your 5000 files are binaries, autoresolve will checksum them on your local HDD to compare against the checksum on the server (not the proxy, which is just relaying the information or files to you), and this can slow you down.

If you know beforehand that you are attempting a one-way resolve (eat yours on your hdd or eat theirs from the server), you can use the 'accept yours' or 'accept theirs' options to autoresolve and skip the checksum operation. From the command-line, that'd be "p4 resolve" with either the "-ay" or "-at" option respectively.
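As a sketch, the one-way command-line forms look like this (the depot path is a placeholder for your own branch):

```shell
# Accept your local copy of every file opened for resolve ("eat yours"):
p4 resolve -ay //depot/project/...

# Or accept the source revision from the server wholesale ("eat theirs"):
p4 resolve -at //depot/project/...

# For comparison, the normal automatic merge that stops only on real conflicts:
p4 resolve -am //depot/project/...
```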

You can also contact your Perforce server administrator and have them log the server actions. Maybe actions running during your integrate and resolve are holding file locks, causing you to spin and wait until the locks are released. See the reference for 'p4 monitor show -a'.
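To illustrate, the monitoring commands look roughly like this (note that seeing other users' command arguments with 'p4 monitor show -a' generally requires admin permission, so your administrator may need to run it for you):

```shell
# List active server processes, including their command arguments,
# to spot long-running commands that may be holding locks:
p4 monitor show -a

# Also useful: see which clients have files open under the paths
# you are trying to resolve:
p4 opened -a //depot/project/...
```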

For example, in our office, it's common on a Monday morning for everyone in the office to integrate up to their private branches and resolve.

answered Oct 17 '22 by Epu


I'm also having a similar issue working with a proxy on the other side of the globe. I've run some experiments and the problem doesn't appear to be affected by the size of the file or the resolve method (accept-theirs, etc.) at least for smallish files.

I'm guessing there is some round-trip cost per file, because the total resolve time is fairly constant regardless of whether I break the command down into individual resolve commands per file, batch them in groups of files, or resolve the entire changelist. In my case, the overhead is about 1 second per file for > 10k files.
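A rough sketch of that experiment, assuming a placeholder depot path (the per-file loop parses the local path from the dry-run output, which may need adjusting for your environment):

```shell
# Dry run: list the files that still need resolving, without touching them:
p4 resolve -n //depot/project/...

# Variant 1: resolve everything with a single command:
time p4 resolve -am //depot/project/...

# Variant 2: one resolve command per file; in my tests the total time
# was roughly the same, suggesting a fixed per-file round-trip cost:
p4 resolve -n //depot/project/... | awk '{print $1}' | while read -r f; do
  p4 resolve -am "$f"
done
```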

I'm currently working around the issue by logging into a VM co-located with the remote server and performing the resolve from there. You can then submit from the VM and then sync down normally. Since I need to first run tests locally prior to submitting, I shelve the files on the VM and then unshelve them on my local machine. This is not terribly fast either but seems better.
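The workaround, sketched end to end (the changelist number 12345 is a placeholder, as is the depot path; adapt both to your branch spec):

```shell
# On the VM co-located with the server: integrate and resolve there,
# where the per-file round trips are cheap:
p4 integrate //depot/main/... //depot/dev/...
p4 resolve -am

# Shelve the resolved files in a pending changelist instead of submitting,
# so they can be tested locally first:
p4 shelve -c 12345

# On the local machine: pull the shelved files down, run tests,
# then submit from whichever workspace holds the final files:
p4 unshelve -s 12345
```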

So, not a fix for the problem but a viable workaround in my case that saves hours.

answered Oct 17 '22 by user2977198