
Warning message repeated in WildFly after scale-down using KUBE_PING

I am attempting to run WildFly in full-ha mode on Kubernetes using the KUBE_PING JGroups discovery protocol. Everything starts up fine: I can scale the cluster up, and the nodes recognize one another without any issues.

The problem occurs when I attempt to scale down the cluster. ActiveMQ Artemis continually complains that it can't connect to the disconnected node, even though JGroups has acknowledged that the old node has left the cluster.

I'm wondering what I might be doing wrong in my JGroups configuration. I have attached some log messages, as well as my JGroups configuration for KUBE_PING.

Just to make sure I've given as much info as possible: I'm running the most recent official WildFly Docker image, 15.0.1.Final, which runs on JDK 11.

Thanks in advance for any help!

JGroups confirmation of node disconnect

wildfly-kube 12:48:36,514 INFO  [org.apache.activemq.artemis.core.server] (Thread-22 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5@10f88645)) AMQ221027: Bridge ClusterConnectionBridge@379d51e3 [name=$.artemis.internal.sf.my-cluster.7ee91868-337b-11e9-9849-ce422226aad5, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.7ee91868-337b-11e9-9849-ce422226aad5, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=314721ae-337b-11e9-9cfa-0e8a9828b1cb], temp=false]@195607a8 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@379d51e3 [name=$.artemis.internal.sf.my-cluster.7ee91868-337b-11e9-9849-ce422226aad5, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.7ee91868-337b-11e9-9849-ce422226aad5, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=314721ae-337b-11e9-9cfa-0e8a9828b1cb], temp=false]@195607a8 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-116-0-4], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1699294977[nodeUUID=314721ae-337b-11e9-9cfa-0e8a9828b1cb, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6, address=jms, server=ActiveMQServerImpl::serverUUID=314721ae-337b-11e9-9cfa-0e8a9828b1cb])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-116-0-4], discoveryGroupConfiguration=null]] is connected
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:38,905 WARN  [org.apache.activemq.artemis.core.server] (Thread-5 (ActiveMQ-client-global-threads)) AMQ222095: Connection failed with failedOver=false
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:43,758 ERROR [org.jgroups.protocols.TCP] (TQ-Bundler-7,ejb,wildfly-kube-b6f69fb9-b2hd5) JGRP000034: wildfly-kube-b6f69fb9-b2hd5: failure sending message to wildfly-kube-b6f69fb9-nshvn: java.net.SocketTimeoutException: connect timed out
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,759 INFO  [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN000094: Received new cluster view for channel ejb: [wildfly-kube-b6f69fb9-b2hd5|2] (1) [wildfly-kube-b6f69fb9-b2hd5]
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,772 INFO  [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN100001: Node wildfly-kube-b6f69fb9-nshvn left the cluster
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,777 INFO  [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN000094: Received new cluster view for channel ejb: [wildfly-kube-b6f69fb9-b2hd5|2] (1) [wildfly-kube-b6f69fb9-b2hd5]
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,779 INFO  [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN100001: Node wildfly-kube-b6f69fb9-nshvn left the cluster
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,787 INFO  [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN000094: Received new cluster view for channel ejb: [wildfly-kube-b6f69fb9-b2hd5|2] (1) [wildfly-kube-b6f69fb9-b2hd5]
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,788 INFO  [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN100001: Node wildfly-kube-b6f69fb9-nshvn left the cluster
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,791 INFO  [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN000094: Received new cluster view for channel ejb: [wildfly-kube-b6f69fb9-b2hd5|2] (1) [wildfly-kube-b6f69fb9-b2hd5]
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,792 INFO  [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN100001: Node wildfly-kube-b6f69fb9-nshvn left the cluster

Repeated ActiveMQ Artemis warnings (every 3 seconds)

wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 13:02:11,825 WARN  [org.apache.activemq.artemis.core.server] (Thread-55 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5@866e807)) AMQ224091: Bridge ClusterConnectionBridge@39836857 [name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5], temp=false]@39425add targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@39836857 [name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5], temp=false]@39425add targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1432944139[nodeUUID=7ee91868-337b-11e9-9849-ce422226aad5, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-116-0-4, address=jms, server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6], discoveryGroupConfiguration=null]] is unable to connect to destination. Retrying
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 13:02:14,897 WARN  [org.apache.activemq.artemis.core.server] (Thread-68 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5@866e807)) AMQ224091: Bridge ClusterConnectionBridge@39836857 [name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5], temp=false]@39425add targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@39836857 [name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5], temp=false]@39425add targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1432944139[nodeUUID=7ee91868-337b-11e9-9849-ce422226aad5, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-116-0-4, address=jms, server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6], discoveryGroupConfiguration=null]] is unable to connect to destination. Retrying

JGroups configuration

<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
    <channels default="ee">
        <channel name="ee" stack="tcp" cluster="ejb"/>
    </channels>
    <stacks>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp">
                <property name="logical_addr_cache_expiration">360000</property>
            </transport>
            <protocol type="kubernetes.KUBE_PING">
                <property name="namespace">${KUBERNETES_CLUSTER_NAMESPACE:default}</property>
                <property name="labels">${KUBERNETES_CLUSTER_LABEL:cluster=nyc}</property>
                <property name="port_range">0</property>
            </protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2">
                <property name="use_mcast_xmit">false</property>
            </protocol>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS">
                <property name="join_timeout">30000</property>
                <property name="print_local_addr">true</property>
                <property name="print_physical_addrs">true</property>
            </protocol>
            <protocol type="MFC"/>
            <protocol type="FRAG3"/>
        </stack>
    </stacks>
</subsystem>

ActiveMQ Artemis configuration

<subsystem xmlns="urn:jboss:domain:messaging-activemq:5.0">
    <server name="default">
        <cluster user="my_admin" password="my_password"/>
        <security-setting name="#">
            <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
        </security-setting>
        <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10" redistribution-delay="1000"/>
        <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
        <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
            <param name="batch-delay" value="50"/>
        </http-connector>
        <in-vm-connector name="in-vm" server-id="0">
            <param name="buffer-pooling" value="false"/>
        </in-vm-connector>
        <http-acceptor name="http-acceptor" http-listener="default"/>
        <http-acceptor name="http-acceptor-throughput" http-listener="default">
            <param name="batch-delay" value="50"/>
            <param name="direct-deliver" value="false"/>
        </http-acceptor>
        <in-vm-acceptor name="in-vm" server-id="0">
            <param name="buffer-pooling" value="false"/>
        </in-vm-acceptor>
        <broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
        <discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
        <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
        <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
        <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
        <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
        <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
        <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
    </server>
</subsystem>

UPDATE: One thing I would add: if the container shuts down gracefully, Artemis handles the disconnect properly. Adding a preStop hook to the container definition in my Kubernetes deployment, so that WildFly is shut down cleanly before the container is terminated, lets the node leave the cluster gracefully.
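Roughly, the hook looks something like the excerpt below from the Deployment's pod template. This is only a sketch: the container name, image tag, grace period, and jboss-cli.sh path are illustrative values based on the official WildFly image layout, not copied verbatim from my deployment.

spec:
  terminationGracePeriodSeconds: 60      # give WildFly time to shut down cleanly
  containers:
    - name: wildfly-kube                 # illustrative container name
      image: jboss/wildfly:15.0.1.Final
      lifecycle:
        preStop:
          exec:
            # Ask the running server to shut down so it leaves the JGroups
            # view and closes its Artemis cluster bridges before SIGTERM.
            command: ["/opt/jboss/wildfly/bin/jboss-cli.sh", "--connect", "command=:shutdown"]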

Jonathan Putney


1 Answer

ActiveMQ Artemis only uses JGroups (or any other discovery mechanism) to discover other brokers for the purpose of clustering them together. Once another broker is discovered, the brokers establish direct TCP connections between themselves, and from that point JGroups no longer plays any role. That is why JGroups seeing the node leave the cluster is irrelevant here.

The failing cluster bridge is enough to tell ActiveMQ Artemis that the broker has left the cluster. The question at that point is what the surviving broker should do about the dead node. By default it will attempt to reconnect indefinitely, on the assumption that the node will come back at some point. That is a reasonable expectation in a traditional deployment, but not so much in the cloud. The behavior is controlled by the reconnect-attempts attribute on the cluster-connection. Set reconnect-attempts to something you think is reasonable (e.g. 10) and the bridge will give up after that many attempts and stop logging the warning.
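For example, applied to the cluster-connection from the configuration above, that would look something like this (10 is just an illustrative value; pick whatever retry budget fits how long a replacement pod normally takes to appear):

<!-- Example only: same cluster-connection as in the question, with a bounded retry count. -->
<cluster-connection name="my-cluster"
                    address="jms"
                    connector-name="http-connector"
                    discovery-group="dg-group1"
                    reconnect-attempts="10"/>

You should also be able to set it with the management CLI, e.g. something like /subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:write-attribute(name=reconnect-attempts, value=10), followed by a reload.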


Justin Bertram