I was trying the following POC to see how to get high concurrency:
implicit def executionContext = context.system.dispatchers.lookup("async-futures-dispatcher")
implicit val timeout = 10 seconds
val contestroute = path("contestroute") {
  get {
    respondWithMediaType(`application/json`) {
      dynamic {
        onSuccess(
          Future {
            val start = System.currentTimeMillis()
            // NB: this should be handled by the 200 threads defined below, so why
            // does the overall time take so long? Why doesn't it really utilize all
            // the threads I have given it? How do I update the code so it
            // utilizes the 200 threads?
            Thread.sleep(5000)
            val status = s"timediff ${System.currentTimeMillis() - start}ms ${Thread.currentThread().getName}"
            status
          }) { time =>
          complete(s"status: $time")
        }
      }
    }
  }
}
My config:
async-futures-dispatcher {
  # Dispatcher is the name of the event-based dispatcher
  type = Dispatcher
  # What kind of ExecutionService to use
  executor = "thread-pool-executor"
  # Configuration for the thread pool
  thread-pool-executor {
    # minimum number of threads to cap factor-based core number to
    core-pool-size-min = 200
    # No of core threads ... ceil(available processors * factor)
    core-pool-size-factor = 20.0
    # maximum number of threads to cap factor-based number to
    core-pool-size-max = 200
  }
  # Throughput defines the maximum number of messages to be
  # processed per actor before the thread jumps to the next actor.
  # Set to 1 for as fair as possible.
  throughput = 100
}
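With core-pool-size-min and core-pool-size-max both set to 200, the factor shouldn't matter: as I read the comments above, the factor-based value is capped between the min and the max, so the pool should end up pinned at 200 threads. A quick sanity-check sketch of that arithmetic (the object and value names are just mine, for illustration):

// Sanity check of the expected core pool size, assuming the capping rule
// described in the config comments: ceil(processors * factor), clamped to
// the [core-pool-size-min, core-pool-size-max] range.
object PoolSizeCheck extends App {
  val processors = Runtime.getRuntime.availableProcessors()
  val factor     = 20.0
  val minSize    = 200
  val maxSize    = 200
  val scaled     = math.ceil(processors * factor).toInt
  val poolSize   = math.min(math.max(scaled, minSize), maxSize)
  println(s"processors=$processors, scaled=$scaled, expected core pool size=$poolSize")
}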
However, when I run Apache Bench like this:
ab -n 200 -c 50 http://LAP:8080/contestroute
the results I get are:
Server Software:        Apache-Coyote/1.1
Server Port:            8080

Document Path:          /contestroute
Document Length:        69 bytes

Concurrency Level:      150
Time taken for tests:   34.776 seconds
Complete requests:      150
Failed requests:        0
Write errors:           0
Non-2xx responses:      150
Total transferred:      37500 bytes
HTML transferred:       10350 bytes
Requests per second:    4.31 [#/sec] (mean)
Time per request:       34776.278 [ms] (mean)
Time per request:       231.842 [ms] (mean, across all concurrent requests)
Transfer rate:          1.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        5  406 1021.3      7    3001
Processing: 30132 30466  390.8  30308  31231
Waiting:    30131 30464  391.8  30306  31231
Total:      30140 30872  998.9  30353  33228
Am I missing something big? What do I need to change so that my Spray routes and futures utilize all the threads I've given them?
(To add: I'm running on top of Tomcat with Servlet 3.0.)
In your example all Spray operations and blocking operations happen in the same execution context. You need to split them into two contexts. Also, I don't see a reason to use dynamic here; just complete should be enough:
import java.util.concurrent.Executors
import scala.concurrent.{ ExecutionContext, Future }
import scala.concurrent.duration._

implicit val timeout = 10.seconds

// Execution context for blocking ops
val blockingExecutionContext = {
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2000))
}

// Execution context for Spray
import context.dispatcher

override def receive: Receive = runRoute(contestroute)

val contestroute = path("contestroute") {
  get {
    complete {
      Future {
        val start = System.currentTimeMillis()
        // The blocking sleep now runs on the dedicated blocking pool,
        // so Spray's own dispatcher stays free to accept new requests.
        Thread.sleep(5000)
        val status = s"timediff ${System.currentTimeMillis() - start}ms ${Thread.currentThread().getName}"
        status
      }(blockingExecutionContext)
    }
  }
}
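For context, the snippet above is meant to live in the body of a spray-routing service actor, which is where context.dispatcher and runRoute come from. A minimal skeleton, assuming spray-routing's HttpServiceActor and with a trivial route standing in for contestroute (under the Tomcat / Servlet 3.0 setup from the question the actor would be hooked up through spray-servlet rather than started with spray-can):

import spray.routing.HttpServiceActor

// Minimal skeleton showing where the vals above would sit; the real
// contestroute from the answer replaces the trivial route used here.
class ContestServiceActor extends HttpServiceActor {
  val contestroute = path("contestroute") {
    get {
      complete("ok")
    }
  }

  override def receive: Receive = runRoute(contestroute)
}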
After that you can test it with
ab -n 200 -c 200 http://LAP:8080/contestroute
and you'll see that all 200 blocking operations run concurrently on their own threads.
Results:
Concurrency Level:      200
Time taken for tests:   5.096 seconds
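If you'd rather keep the pool size in configuration instead of hard-coding Executors.newFixedThreadPool(2000), one possible variant (just a sketch, assuming the async-futures-dispatcher block from the question is still in your config) is to look that dispatcher up and use it only for the blocking futures:

// Sketch: reuse the configured dispatcher as the blocking execution context.
// system.dispatchers.lookup returns a MessageDispatcher, which is an
// ExecutionContext, so it can be passed to Future.apply directly.
val blockingExecutionContext: ExecutionContext =
  context.system.dispatchers.lookup("async-futures-dispatcher")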