If I set threadsafe: true in my app.yaml file, what are the rules that govern when a new instance will be created to serve a request, versus when a new thread will be created on an existing instance?
If I have an app which performs something computationally intensive on each request, does multi-threading buy me anything? In other words, is an instance a multi-core instance or a single core?
Or, are new threads only spun up when existing threads are waiting on IO?
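For context, the setting in question is a single line in app.yaml. A minimal Python 2.7 configuration with concurrent requests enabled might look like this (handler and module names are illustrative):

```yaml
runtime: python27
api_version: 1
threadsafe: true   # allow multiple concurrent requests per instance

handlers:
- url: /.*
  script: main.app
```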
Instances are the computing units that App Engine uses to automatically scale your application. At any given time, your application can be running on one instance or many instances, with requests being spread across all of them.
The App Engine standard environment is based on container instances running on Google's infrastructure. Containers are preconfigured with one of several available runtimes. The standard environment makes it easy to build and deploy an application that runs reliably even under heavy load and with large amounts of data.
The following set of rules is currently used to determine whether a given instance can accept a new request:

if processing more than N concurrent requests (today N=10): false
elif exceeding the soft memory limit: false
elif exceeding the instance class CPU limit: false
elif warming up: false
else: true
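The decision above can be restated as a plain-Python sketch. The function name and the instance dictionary are illustrative only, not App Engine's actual internals:

```python
MAX_CONCURRENT_REQUESTS = 10  # "today N=10"

def can_accept_request(instance):
    """Mirror of the quoted scheduler rules: reject if busy, over
    memory, over CPU, or still warming up; otherwise accept."""
    if instance["active_requests"] >= MAX_CONCURRENT_REQUESTS:
        return False
    if instance["memory_mb"] > instance["soft_memory_limit_mb"]:
        return False
    if instance["cpu_usage_mhz"] > instance["cpu_limit_mhz"]:
        return False
    if instance["warming_up"]:
        return False
    return True

# A saturated instance vs. one with spare request slots.
busy = {"active_requests": 10, "memory_mb": 100, "soft_memory_limit_mb": 128,
        "cpu_usage_mhz": 500, "cpu_limit_mhz": 600, "warming_up": False}
idle = dict(busy, active_requests=2)
print(can_accept_request(busy), can_accept_request(idle))  # False True
```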
The following total CPU/core limits currently apply to each instance class:

CLASS 1: 600 MHz, 1 core
CLASS 2: 1.2 GHz, 1 core
CLASS 4: 2.4 GHz, 1 core
CLASS 8: 4.8 GHz, 2 cores
So only a B8 instance can process up to 2 fully CPU-bound requests in parallel.
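Selecting that class is also done in app.yaml. A sketch, assuming a module using basic scaling (the max_instances value is arbitrary):

```yaml
instance_class: B8   # 4.8 GHz across 2 cores, i.e. 2.4 GHz per core
basic_scaling:
  max_instances: 1
```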
Setting threadsafe: true (Python) or <threadsafe>true</threadsafe> (Java) on instance classes below 8 will not allow more than one CPU-bound request to be processed in parallel on a single instance.
If your requests are not fully CPU bound, i.e. they spend time doing I/O, the Python and Java runtimes will spawn new threads to handle incoming requests, up to 10 concurrent requests, when threadsafe: true is set.
Also note that even though the Go runtime is single-threaded, it does support concurrent requests: it will spawn one goroutine per request and yield control between goroutines while they are performing I/O.
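The benefit for I/O-bound handlers can be demonstrated outside App Engine with plain Python threads. Here time.sleep stands in for a datastore or URL fetch call; the GIL is released while a thread blocks, so the waits overlap:

```python
import threading
import time

def fake_io_request(results, i):
    # Simulate an I/O-bound request handler: the GIL is released
    # while this thread sleeps, so other threads keep running.
    time.sleep(0.2)
    results[i] = "done"

start = time.monotonic()
results = [None] * 10
threads = [threading.Thread(target=fake_io_request, args=(results, i))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# Ten overlapping 0.2 s "requests" finish in far less than the
# 2 s they would take serially -- the same reason threadsafe: true
# helps I/O-bound apps even on a single core.
print("%.2fs for 10 concurrent requests" % elapsed)
```

A fully CPU-bound handler would see no such speedup from CPython threads, which matches the answer above: below class 8 there is only one core to share.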
Also worth reading, from the link suggested by Kyle Finley:
Jeff Schnitzer: Is there still a hard limit of 10 threads?
Yes, but probably not for the reason you expect. The primary issue we run into is memory management. If we raised the default to 100, many apps would then see out-of-memory deaths (more than they do now), and these deaths show up differently for python/java/go. The right path forward is more intelligent algorithms wrt memory, providing configurability, and so on. This is an example of the kinds of projects we work on for the scheduler, but as with any team we have to prioritize our projects. I'd recommend filing this (or any other desired scheduler enhancements) on the public issue tracker so they can get feedback/data/votes.