Backend Error [503]
Your API key may not be working because you're using it for the wrong project. Be sure you're using the key for the project that you created it for, especially if you created multiple projects at the same time. If it's still not working, consider creating a new API key entirely.
The API provides the ability to retrieve feeds related to videos, users, and playlists. It also provides the ability to manipulate these feeds, for example by creating new playlists, adding videos as favorites, and sending messages.
Some time ago, maybe a couple of months, the Google upload servers started timing out on uploads much more often than they did in the past. That is the error you are seeing. There is nothing wrong with your code, other than that you are probably just reporting the error rather than handling it.
Most likely you are using the .Upload method. I say this because a 503 surfaces as an "A task was canceled." error when the .UploadAsync method is used. I use .UploadAsync and .ResumeAsync in my upload program.
When you get an error like this while using the .Upload method, it indicates that the server is too busy to handle your request within the timeout period. Your program should recognize this error and call the .Resume method to resume the upload.
Alternatively, you can increase the timeout from the default 100 seconds to something higher using this statement:
YouTube.HttpClient.Timeout = TimeSpan.FromMinutes(HTTP_CLIENT_TIMEOUT_MINUTES);
where YouTube is the variable name of your YouTubeService object.
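For context, here is a minimal sketch of where that statement fits, assuming the Google.Apis.YouTube.v3 .NET client; the credential parameter and the "my-uploader" application name are placeholders, not part of the original answer:

using System;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.YouTube.v3;

static YouTubeService BuildService(UserCredential credential)
{
    var youTube = new YouTubeService(new BaseClientService.Initializer
    {
        HttpClientInitializer = credential,   // your OAuth2 credential
        ApplicationName = "my-uploader"       // placeholder name
    });

    // The default HttpClient timeout is 100 seconds; raise it right after
    // the service object is created and before starting any upload.
    youTube.HttpClient.Timeout = TimeSpan.FromMinutes(2);
    return youTube;
}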
In my experience, increasing the timeout is not as effective as handling the error and requesting that the upload be resumed. For example, if you set the timeout to five minutes, then your program will still fail if no response is returned after five minutes. Yes, that can happen. I usually set the timeout to two minutes and then resume the upload if an error occurs. Almost always, the upload will resume correctly.
Occasionally, the upload might immediately time out again. For this reason, I count my resumes and reset the resume counter whenever a ProgressChanged event reports an IUploadProgress status of Uploading. I have a limit of three resume retries and have never gone over that limit.
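A minimal sketch of that pattern, assuming the Google.Apis.YouTube.v3 .NET client; the helper name, the empty video metadata, and the three-retry limit are illustrative rather than the answerer's actual program:

using System;
using System.IO;
using System.Threading.Tasks;
using Google.Apis.Upload;
using Google.Apis.YouTube.v3;
using Google.Apis.YouTube.v3.Data;

static async Task UploadWithResumeAsync(YouTubeService youTube, string path)
{
    var video = new Video();   // snippet/status metadata omitted for brevity
    using var fileStream = new FileStream(path, FileMode.Open, FileAccess.Read);

    var insertRequest = youTube.Videos.Insert(video, "snippet,status", fileStream, "video/*");

    int resumeRetries = 0;
    const int maxResumeRetries = 3;   // illustrative limit

    insertRequest.ProgressChanged += progress =>
    {
        // A chunk was accepted, so the connection recovered: reset the counter.
        if (progress.Status == UploadStatus.Uploading)
            resumeRetries = 0;
    };

    var result = await insertRequest.UploadAsync();
    while (result.Status == UploadStatus.Failed && resumeRetries < maxResumeRetries)
    {
        // A 503 or timeout shows up here as a failed (or canceled) upload;
        // resume from the last confirmed byte instead of starting over.
        resumeRetries++;
        result = await insertRequest.ResumeAsync();
    }
}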
Based on this thread, try to handle this error on your end with some form of exponential back-off and retry.
Example: This method implements an exponential backoff strategy to resume a failed upload.
import http.client
import random
import time

import httplib2
from googleapiclient.errors import HttpError

# Maximum number of retries before giving up (value used in Google's sample code).
MAX_RETRIES = 10

# Always retry on these transport-level exceptions...
RETRIABLE_EXCEPTIONS = (httplib2.HttpLib2Error, IOError,
                        http.client.NotConnected, http.client.IncompleteRead,
                        http.client.ImproperConnectionState,
                        http.client.CannotSendRequest, http.client.CannotSendHeader,
                        http.client.ResponseNotReady, http.client.BadStatusLine)

# ...and on these transient HTTP server errors.
RETRIABLE_STATUS_CODES = [500, 502, 503, 504]


def resumable_upload(insert_request):
    response = None
    error = None
    retry = 0
    while response is None:
        try:
            print("Uploading file...")
            status, response = insert_request.next_chunk()
            if response is not None:
                if 'id' in response:
                    print("Video id '%s' was successfully uploaded." % response['id'])
                else:
                    exit("The upload failed with an unexpected response: %s" % response)
        except HttpError as e:
            if e.resp.status in RETRIABLE_STATUS_CODES:
                error = "A retriable HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
            else:
                raise
        except RETRIABLE_EXCEPTIONS as e:
            error = "A retriable error occurred: %s" % e

        if error is not None:
            print(error)
            retry += 1
            if retry > MAX_RETRIES:
                exit("No longer attempting to retry.")

            # Exponential backoff: sleep a random interval of up to 2**retry seconds.
            max_sleep = 2 ** retry
            sleep_seconds = random.random() * max_sleep
            print("Sleeping %f seconds and then retrying..." % sleep_seconds)
            time.sleep(sleep_seconds)
            error = None
You can also upload videos more reliably by using the resumable upload protocol for Google APIs. This protocol lets you resume an upload operation after a network interruption or other transmission failure, saving time and bandwidth.