Resumable File Upload on a Multi-Node Environment

From what I understand of chunked file uploads, chunks are stored in memory so that the upload can be resumed from that point if a failure occurs. I assume that in a multi-node environment this makes it necessary to use a "sticky session", so that the same client is always routed to the node holding its chunks in memory. Apart from this, though, we have no need for sticky sessions, so we'd prefer to avoid them.

Is there any way (using, e.g., Hazelcast or any other in-memory data grid) to distribute the chunks across the nodes of a cluster so that the upload can later be resumed even if the client is connected to a different node? In case it matters, we're using Spring Boot (latest).

Roy Stark asked Nov 22 '22

1 Answer

HTTP chunked transfer encoding (aka chunking) is a way to send a single HTTP message broken down into multiple chunks: the sender still sends one message, it just streams the body as a sequence of chunks.
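
To make the distinction concrete, here is a minimal sketch of a client streaming one upload as a chunked request with the JDK's HttpURLConnection (the URL and chunk size are purely illustrative):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ChunkedUploadExample {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/upload");      // hypothetical endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setChunkedStreamingMode(8192);                  // body sent in 8 KB chunks
            try (OutputStream out = conn.getOutputStream()) {
                out.write(new byte[32 * 1024]);                  // still one request, one connection
            }
            System.out.println(conn.getResponseCode());
            // If the connection drops midway, the whole request fails;
            // the server keeps no partial state to resume from.
        }
    }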

Importantly, a chunked transfer occurs within a single HTTP connection. That means you don't need "sticky sessions", but it also means you cannot resume a chunked transfer after a failure, which is what you're asking for.

It sounds like what you actually want is resumable uploads. You could implement resumable uploads using the tus protocol. This example Java server and this Spring Boot example server may be useful starting points; a rough sketch of the idea follows.
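
If you do roll your own, the key idea is to keep each upload's progress somewhere every node can reach, rather than in one node's memory. As a minimal sketch (not the tus implementations linked above), here is a hypothetical Spring Boot controller that keeps the current offset for each upload in a Hazelcast IMap and appends incoming chunks to shared storage; the route, map name, and storage path are all assumptions for illustration:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.*;

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.file.Path;

    @RestController
    @RequestMapping("/files")
    public class ResumableUploadController {

        // Every node sees the same offsets because the map lives in the Hazelcast
        // cluster, so the client can resume against any node.
        private final IMap<String, Long> offsets;

        // Chunk bytes are appended here; this path must be storage all nodes share
        // (NFS, a mounted object store, etc.) -- an assumption of this sketch.
        private final Path storageDir = Path.of("/shared/uploads");

        public ResumableUploadController() {
            // In a real app you would inject a configured HazelcastInstance instead.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            this.offsets = hz.getMap("upload-offsets");
        }

        // HEAD /files/{id} -> tell the client how many bytes the cluster already has.
        @RequestMapping(path = "/{id}", method = RequestMethod.HEAD)
        public ResponseEntity<Void> offset(@PathVariable String id) {
            long offset = offsets.getOrDefault(id, 0L);
            return ResponseEntity.ok()
                    .header("Upload-Offset", String.valueOf(offset))
                    .build();
        }

        // PATCH /files/{id} -> append the next chunk at the declared offset.
        @PatchMapping("/{id}")
        public ResponseEntity<Void> patch(@PathVariable String id,
                                          @RequestHeader("Upload-Offset") long clientOffset,
                                          @RequestBody byte[] chunk) throws IOException {
            long serverOffset = offsets.getOrDefault(id, 0L);
            if (clientOffset != serverOffset) {
                // Client and server disagree on progress; client should re-sync via HEAD.
                return ResponseEntity.status(409).build();
            }
            try (RandomAccessFile file = new RandomAccessFile(storageDir.resolve(id).toFile(), "rw")) {
                file.seek(serverOffset);
                file.write(chunk);
            }
            long newOffset = serverOffset + chunk.length;
            offsets.put(id, newOffset);
            return ResponseEntity.noContent()
                    .header("Upload-Offset", String.valueOf(newOffset))
                    .build();
        }
    }

With the offset kept in the data grid, a client that reconnects to a different node can issue a HEAD request, learn how far the upload got, and continue PATCHing from there. Note that Hazelcast here only tracks progress; the bytes themselves still have to land somewhere every node can read, unless you also put the chunk data into the grid (which gets expensive in memory for large files).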

grantn answered Jan 31 '23