How to avoid read timeout when requesting all files? (Google Drive API)

I have a Drive application that requests all files that aren't trashed, but sometimes it throws an IOException with a read timeout. Is there a way to avoid this?

This is the error I get:

An error occurred: java.net.SocketTimeoutException: Read timed out

Maybe my exponential backoff is implemented wrong.

Here's the code I use to get the files:

private static List<File> retrieveAllNoTrashFiles(Drive service) throws IOException, InterruptedException {
    List<File> result = new ArrayList<File>();
    Files.List request = service.files().list().setQ("trashed = false").setMaxResults(1000);
    do {
        try {
            FileList files = executeRequest(service, request);
            if (files == null) {
                // All retries failed; stop paging.
                break;
            }
            result.addAll(files.getItems());
            request.setPageToken(files.getNextPageToken());
        } catch (IOException e) {       // here I sometimes get the read timeout
            System.out.println("An error occurred: " + e);
            request.setPageToken(null);
        }
    } while (request.getPageToken() != null
            && request.getPageToken().length() > 0);

    return result;
}

private static FileList executeRequest(Drive service, Files.List request) throws IOException, InterruptedException {
    Random randomGenerator = new Random();
    for (int n = 0; n < 5; ++n) {
        try {
            return request.execute();
        } catch (GoogleJsonResponseException e) {
            if (e.getDetails().getCode() == 403
                    && (e.getDetails().getErrors().get(0).getReason().equals("rateLimitExceeded")
                    || e.getDetails().getErrors().get(0).getReason().equals("userRateLimitExceeded"))) {
                // Apply exponential backoff.
                Thread.sleep((1 << n) * 1000 + randomGenerator.nextInt(1001));
            }
            // else {
            //     // Other error, re-throw.
            //     throw e;
            // }
        } catch (SocketTimeoutException e) {
            // Read timed out: back off and retry as well.
            Thread.sleep((1 << n) * 1000 + randomGenerator.nextInt(1001));
        }
    }
    System.err.println("There has been an error, the request never succeeded.");
    return null;
}
asked Dec 15 '22 by DavidVdd


2 Answers

I had the same experience some days ago. The answer to my question was found here: https://code.google.com/p/google-api-java-client/wiki/FAQ. When creating your Drive instance, you can call the setHttpRequestInitializer method, pass a new HttpRequestInitializer as an argument, and implement its initialize method. In there, you can increase the read timeout and the connect timeout.

Here is some sample code:

Drive drive = new Drive.Builder(this.httpTransport, this.jsonFactory, this.credential)
        .setHttpRequestInitializer(new HttpRequestInitializer() {

            @Override
            public void initialize(HttpRequest httpRequest) throws IOException {
                credential.initialize(httpRequest);
                httpRequest.setConnectTimeout(300 * 60000);  // 300 minutes connect timeout
                httpRequest.setReadTimeout(300 * 60000);     // 300 minutes read timeout
            }
        })
        .setApplicationName("My Application")
        .build();
answered Dec 26 '22 by fasholaide


Exponential backoff just gives you more chances to retry, in the hope that the next attempt succeeds, but it doesn't address the root of the problem.

I noticed that you set the max result count to 1000. Since a read timeout is generally caused by long processing time on the server side, I would suggest decreasing this number to 500 or 200.

I encountered the same problem when fetching changes with a max result count of 1000. When I changed the size to 200 or 500, it worked smoothly.

Because the acceptable maximum depends on the server's current situation and changes dynamically, you can also design a backoff strategy that updates the max result count automatically. For example, start with 1000; if an error is encountered, change to 500; if there are still errors, change to 200; and if there are no errors for a while, increase to 500 again. A rough sketch of this idea is shown below.
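
A minimal sketch of that adaptive page-size idea, reusing the Drive v2 types from the question (Drive, Files.List, FileList, File). The method name, the 1000/500/200 ladder, and the "5 clean pages before growing again" rule are illustrative assumptions, not a fixed recipe:

// Illustrative sketch: shrink the page size after read timeouts, grow it back after
// several successful pages. Ladder values and thresholds are arbitrary starting points.
private static List<File> retrieveWithAdaptivePageSize(Drive service) throws IOException {
    int[] pageSizes = {1000, 500, 200};
    int level = 0;                  // index into pageSizes; higher level = smaller pages
    int successesAtLevel = 0;
    List<File> result = new ArrayList<File>();
    Files.List request = service.files().list().setQ("trashed = false");

    String pageToken = null;
    boolean morePages = true;
    while (morePages) {
        request.setMaxResults(pageSizes[level]).setPageToken(pageToken);
        try {
            FileList files = request.execute();
            result.addAll(files.getItems());
            pageToken = files.getNextPageToken();
            morePages = (pageToken != null && pageToken.length() > 0);
            // After a few clean pages, try a larger page size again.
            if (++successesAtLevel >= 5 && level > 0) {
                level--;
                successesAtLevel = 0;
            }
        } catch (SocketTimeoutException e) {
            if (level < pageSizes.length - 1) {
                // Timed out: retry the same page with a smaller page size.
                level++;
                successesAtLevel = 0;
            } else {
                throw e;            // already at the smallest size, give up
            }
        }
    }
    return result;
}

Combining this with the question's exponential backoff (sleeping before each retry) keeps the client from hammering the server while it is slow.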

Hope this helps~

answered Dec 26 '22 by Harper