Akka-HTTP: File Upload

I'm trying to implement a simple file upload using akka http. My attempt looks as follows:

    import akka.actor.ActorSystem
    import akka.event.{LoggingAdapter, Logging}
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.model.{HttpResponse, HttpRequest}
    import akka.http.scaladsl.model.StatusCodes._
    import akka.http.scaladsl.server.Directives._
    import akka.stream.{ActorMaterializer, Materializer}
    import com.typesafe.config.Config
    import com.typesafe.config.ConfigFactory
    import scala.concurrent.{ExecutionContextExecutor, Future}
    import akka.http.scaladsl.model.StatusCodes
    import akka.http.scaladsl.model.HttpEntity
    import java.io._
    import akka.stream.io._

    object UploadTest extends App {
      implicit val system = ActorSystem()
      implicit val executor = system.dispatcher
      implicit val materializer = ActorMaterializer()

      val config = ConfigFactory.load()
      val logger = Logging(system, getClass)

      val routes = {
        pathSingleSlash {
          (post & extractRequest) { 
            request => {
              val source = request.entity.dataBytes
              val outFile = new File("/tmp/outfile.dat")
              val sink = SynchronousFileSink.create(outFile)
              source.to(sink).run()
              complete(HttpResponse(status = StatusCodes.OK))
            }
          }
        }
      }

      Http().bindAndHandle(routes, config.getString("http.interface"), config.getInt("http.port"))

    }

There are several issues with this code:

  1. Files larger than the configured entity size cannot be uploaded: Request Content-Length 24090745 exceeds the configured limit of 8388608
  2. Executing two uploads in a row results in a "dead letters encountered" error.

What is the best way to overcome the size limitations and how can I properly close the file such that a subsequent upload will overwrite the existing file (ignoring concurrent uploads for the moment)?

asked Oct 03 '15 by Daniel Bauer


1 Answer

For point 2, I think source.to(sink).run() executes the write asynchronously; it materialises a Future, so your HTTP request may complete before the file has been fully written. If you start a second upload at the client as soon as the first request returns, the first stream may not have finished writing to the file.

You could use the onComplete or onSuccess directive to complete the HTTP request only once that future has completed (see the sketch after the links below):

http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/http/directives/alphabetically.html

http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/scala/http/routing-dsl/directives/future-directives/onSuccess.html
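
Here is a minimal sketch of that idea. It uses the newer FileIO.toPath sink (which materializes a Future[IOResult]) rather than the SynchronousFileSink from the milestone release in the question, assumes the implicit materializer and dispatcher from the question are in scope, and the file path is just an example:

    import java.nio.file.Paths

    import akka.http.scaladsl.model.StatusCodes
    import akka.http.scaladsl.server.Directives._
    import akka.stream.scaladsl.FileIO

    import scala.util.{Failure, Success}

    val uploadRoute =
      (post & extractRequest) { request =>
        // Stream the request entity straight into the file; the materialized
        // Future[IOResult] completes only when the sink has finished writing.
        val writeDone = request.entity.dataBytes
          .runWith(FileIO.toPath(Paths.get("/tmp/outfile.dat")))

        // Complete the HTTP request only after the write has finished, so a
        // subsequent upload sees a closed file it can safely overwrite.
        onComplete(writeDone) {
          case Success(ioResult) => complete(s"Wrote ${ioResult.count} bytes")
          case Failure(ex)       => complete(StatusCodes.InternalServerError -> ex.getMessage)
        }
      }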

EDIT:

For the content-length problem, one thing you can do is increase that limit in application.conf. The default is:

    akka.http.server.parsing.max-content-length = 8m

See http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/http/configuration.html
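
If you would rather not edit application.conf, you can also override the setting programmatically when creating the ActorSystem. A sketch, assuming the standard akka.http.server.parsing.max-content-length key (the 64m value and the system name are just illustrative):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    // Raise the entity size limit for this app only and fall back to the
    // regular application.conf / reference.conf for everything else.
    val uploadConfig = ConfigFactory
      .parseString("akka.http.server.parsing.max-content-length = 64m")
      .withFallback(ConfigFactory.load())

    implicit val system = ActorSystem("upload-system", uploadConfig)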

answered Oct 17 '22 by mattinbits