Vert.x is a tool-kit for building reactive applications on the JVM.
I want to use Vert.x for a JVM-based, auto-scalable RESTful backend API.
So far, what I've found from the documentation is that by default Vert.x takes the number of cores in your machine, let's say you have N cores, and creates N threads, one per core; each thread is an event bus, and each thread contains Vert.x instances. The question is: how does Vert.x control the number of instances? Based on load pressure?
This thing about control over the number of verticles running within a given thread I still don't get. Please help to clarify this. Let's assume the machine has 4 cores and I have written two classes extending AbstractVerticle:
1) let one be some DB data retriever (let's call it RETRIEVER or "R")
2) another one let's say is some converter (let's call it CONVERTER or "C")
Now I either run them with the vertx command, or compile and run:
$ java -jar retriever.jar
$ java -jar converter.jar
So since we have 4 cores, on start Vert.x will create 4 threads (one per core).
question#1:
how many retriever & converter instances will we have by default in each thread? I guess it's one instance per thread, right? So in total, for 4 cores, we'll have 4 instances of the retriever and 4 instances of the converter? Correct?
question#2:
in case of increasing load pressure, with a growing number of calls to RETRIEVER ("R") and CONVERTER ("C") (say from 1,000 to 1,000,000 calls), will Vert.x automatically manage the number of "R" and "C" instances required to handle the increased number of calls to our system?
On Stack Overflow there is a question with a similar problem: Can I set a capacity on the Vert.x HTTP request queue?
Jordan Halterman suggests: "Note also that you can scale your HTTP server across multiple verticle instances in order to handle more requests. In this case you can either use static variables or shared data to share a semaphore across the instances."
question#3:
But how exactly do you scale your verticle instances to handle more requests? I couldn't find this in the documentation.
I'm grateful in advance for help!
A Vert.x instance will create multiple threads. Inside that Vert.x instance you have multiple verticles (verticle instances, more accurately) running.
All instances are indeed thread-safe. However, this does not mean that you can deploy multiple instances of just any verticle: a multi-instance deployment means that multiple verticle objects are created, and if your verticle is stateful, the states of those instance objects are independent of each other.
A Vert.x instance maintains N event loop threads, where N defaults to twice the number of cores. If you wanted to utilize all of your cores, you would deploy 2 verticles per core.
Instead of a single event loop, each Vertx instance maintains several event loops. By default we choose the number based on the number of available cores on the machine, but this can be overridden. This means a single Vertx process can scale across your server, unlike Node.js.
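As a rough illustration (assuming the Vert.x 3/4 core API), overriding the event loop pool size when creating the Vertx instance looks roughly like this:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class CustomEventLoops {
    public static void main(String[] args) {
        // The event loop pool size is derived from the number of available
        // cores by default; it can be overridden explicitly if needed.
        VertxOptions options = new VertxOptions().setEventLoopPoolSize(4);
        Vertx vertx = Vertx.vertx(options);
    }
}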
You misunderstood the docs.
First, there is a single Event Bus (and it is shared between Vert.x instances when Vert.x is started in cluster mode). Its role is to allow a message passing style of communication between your verticles.
See The Event Bus section.
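To make that concrete, here is a minimal sketch (assuming the Vert.x 4 event bus API; the address "data.requests" and the class name are just examples) of verticles communicating over the single event bus:

import io.vertx.core.AbstractVerticle;

public class RetrieverVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Register a consumer on the shared event bus.
        vertx.eventBus().consumer("data.requests", message ->
            // ... fetch the data, then reply to the sender
            message.reply("result for " + message.body()));
    }
}

// Elsewhere (e.g. in the CONVERTER verticle) you would send a message
// to the same address and handle the reply:
// vertx.eventBus().request("data.requests", "query-123",
//         reply -> System.out.println(reply.result().body()));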
Then there are different types of threads in Vert.x: event loop threads and worker threads. By default, Vert.x creates twice as many event loop threads as there are cores on the machine, and a pool of 20 worker threads. Event loop threads are used to handle asynchronous events (a file buffer was read, a message has been received, etc.). Worker threads are used to execute the blocking parts of your application.
See Multi-Reactor pattern, The Golden Rule and Running Blocking Code
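For instance, a rough sketch (assuming Vert.x 3/4; someBlockingDatabaseCall is a placeholder for your own blocking code) of keeping a blocking call off the event loop with executeBlocking:

import io.vertx.core.AbstractVerticle;

public class BlockingExample extends AbstractVerticle {
    @Override
    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            // Offload the blocking part (e.g. a JDBC call) to a worker thread
            // so the event loop thread is never blocked.
            vertx.executeBlocking(promise -> {
                String result = someBlockingDatabaseCall(); // runs on a worker thread
                promise.complete(result);
            }, res -> {
                // back on the event loop thread
                req.response().end((String) res.result());
            });
        }).listen(8080);
    }

    private String someBlockingDatabaseCall() {
        return "data"; // placeholder for a blocking operation
    }
}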
A verticle is the Vert.x unit of deployment. There are three types of verticles, but the two you should know are "standard" verticles and "worker" verticles. Standard verticles are assigned a single event loop thread when they are deployed. Whatever type of event you handle in your verticle, it will be handled by this single event loop thread. Worker verticles are guaranteed to be executed by a single worker thread at a time. It may not be the same worker thread each time, but two worker threads will never handle a worker verticle's events in parallel.
See Verticles
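A small sketch of the difference at deployment time (assuming the Vert.x 3/4 DeploymentOptions API; the verticle class names are placeholders):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class DeployWorker {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Standard verticle: its handlers run on a single event loop thread.
        vertx.deployVerticle("com.example.RetrieverVerticle");

        // Worker verticle: its handlers run on threads from the worker pool,
        // but never on two worker threads at the same time.
        vertx.deployVerticle("com.example.ConverterVerticle",
                new DeploymentOptions().setWorker(true));
    }
}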
Eventually, to scale a Vert.x application, you deploy multiple instances of your verticles. For standard verticles, each instance will get a different event loop assigned so you will scale across your cores.
See Number of Verticle Instances
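In code, that scaling is typically just a matter of setting the instance count at deployment time; a rough sketch (the verticle class name is again a placeholder):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class ScaleAcrossCores {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        int cores = Runtime.getRuntime().availableProcessors();

        // Deploy one instance of the verticle per core. Each instance gets
        // its own event loop, and incoming work is spread between them.
        vertx.deployVerticle("com.example.RetrieverVerticle",
                new DeploymentOptions().setInstances(cores));
    }
}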
Vert.x doesn't automatically adjust the number of Verticles for you. This is something you could build with Vert.x monitoring tools though.
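As a purely illustrative sketch of what such self-built scaling could look like (the load metric, threshold and class name below are all made up; you would plug in your own monitoring data, e.g. from vertx-dropwizard-metrics):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;

public class NaiveAutoScaler extends AbstractVerticle {
    private static final int MAX_EXTRA_INSTANCES = 8;
    private int extraInstances = 0;

    @Override
    public void start() {
        // Check the load every 5 seconds and deploy another RETRIEVER
        // instance when it crosses a threshold.
        vertx.setPeriodic(5000, id -> {
            if (inFlightRequests() > 1000 && extraInstances < MAX_EXTRA_INSTANCES) {
                vertx.deployVerticle("com.example.RetrieverVerticle",
                        new DeploymentOptions().setInstances(1));
                extraInstances++;
            }
        });
    }

    private long inFlightRequests() {
        return 0; // hypothetical metric lookup goes here
    }
}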
I believe this answers your three questions.