Can somebody share their experience with scaling ActiveMQ vertically? I'm particularly interested in how performance is affected by:
1. the NIO transport instead of the default TCP transport
2. disabling the dedicated task runner (org.apache.activemq.UseDedicatedTaskRunner=false)
3. disabling tight encoding
4. KahaDB as the persistence store
The items you mention above are all recommendations for scaling ActiveMQ as listed on the "How do I configure 10s of 1000s of Queues in a single broker?" page. I've used each of these tactics in various situations at customer sites and found that they help considerably.
Compared with the TCP transport, the NIO transport is good for using fewer threads when there is a high number of connections into a broker. This efficiency can improve the overall performance of the broker.
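For illustration, here's a minimal sketch of starting an embedded broker with an NIO transport connector via the BrokerService API (the bind address and port are just placeholders, not taken from any particular setup):

```java
import org.apache.activemq.broker.BrokerService;

public class NioBrokerExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // Bind an NIO-based transport connector instead of the default TCP one.
        // With nio://, a pool of selector threads services many connections,
        // rather than dedicating a thread to each connection as tcp:// does.
        broker.addConnector("nio://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```

As far as I know, clients can keep connecting with a plain tcp:// URI; the nio:// scheme only changes how the broker side services the sockets.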
I almost always recommend setting org.apache.activemq.UseDedicatedTaskRunner=false, simply because it helps considerably with performance.
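For reference, a rough sketch of the two usual ways to apply that flag, assuming an embedded broker; the ACTIVEMQ_OPTS route mentioned in the comment is what you'd use with a standalone broker:

```java
import org.apache.activemq.broker.BrokerService;

public class TaskRunnerFlagExample {
    public static void main(String[] args) throws Exception {
        // For a standalone broker, the flag is normally passed to the JVM at startup,
        // e.g. via ACTIVEMQ_OPTS: -Dorg.apache.activemq.UseDedicatedTaskRunner=false
        // When embedding a broker, setting the system property before any ActiveMQ
        // classes load has the same effect: tasks run on a pooled task runner instead
        // of each getting its own dedicated thread, which keeps thread counts down.
        System.setProperty("org.apache.activemq.UseDedicatedTaskRunner", "false");

        BrokerService broker = new BrokerService();
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```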
Disabling tight encoding is a subtle change, and the benefit can be difficult to see; it depends on the types of messages that you're sending.
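If you want to experiment with it, tight encoding is an OpenWire wire-format option that can be toggled on the connection URI (wireFormat.tightEncodingEnabled); a minimal client-side sketch, with the broker URL as a placeholder, might look like this:

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TightEncodingExample {
    public static void main(String[] args) throws Exception {
        // Tight encoding trades a more compact wire format for extra CPU spent
        // marshalling/unmarshalling; whether disabling it helps depends on the
        // types and sizes of the messages you send.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?wireFormat.tightEncodingEnabled=false");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
    }
}
```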
KahaDB outperforms any of the other stores for persistent messaging with ActiveMQ, especially on the trunk. There is a fix that currently exists only on the trunk and provides a dramatic increase in persistent messaging performance; it will be part of 5.3.1 and 5.4.
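For completeness, a minimal sketch of explicitly wiring KahaDB into an embedded broker (the data directory is a placeholder; in current 5.x releases KahaDB is the default store anyway):

```java
import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class KahaDbBrokerExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Use KahaDB as the persistence store for durable messages.
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        kahaDb.setDirectory(new File("activemq-data/kahadb")); // placeholder path
        broker.setPersistenceAdapter(kahaDb);

        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```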
I know that this info isn't concrete, but I hope it still helps.
Bruce