DatabaseLessLeasing has failed and Server is not in majority cluster partition

I'm facing a DatabaseLessLeasing issue. Ours is a middleware application: we don't have any database, and our application runs on WebLogic Server. We have 2 servers in one cluster. Both servers are up and running, but we use only one server to do the processing. When the primary server fails, the whole server and its services migrate to the secondary server. This works fine.

But at the end of last year we had an issue where the secondary server's hardware went down and the secondary server was not available, and we got the error below. When we went to Oracle, they suggested adding one more server, or a highly available database to hold the cluster leasing information that identifies the master server. As of now we don't have that option, as adding a new server means a budget issue and the client is not ready for it.

Our WebLogic cluster configuration is (a WLST sketch of these settings follows the list):

  1. One cluster with 2 managed servers
  2. Cluster messaging mode: Multicast
  3. Migration basis: Consensus
  4. Load algorithm: Round Robin
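
For reference, a minimal WLST (Jython) sketch of these settings, assuming an edit session against the Admin Server; the cluster name and connection details are hypothetical placeholders, and the setters are the standard ClusterMBean attributes:

```python
# WLST (Jython) sketch -- run against the Admin Server in an edit session.
# 'MyCluster' and the connect arguments are hypothetical; substitute your own.
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()

cd('/Clusters/MyCluster')
cmo.setClusterMessagingMode('multicast')    # cluster messaging mode: Multicast
cmo.setMigrationBasis('consensus')          # migration basis: Consensus (DatabaseLessLeasing)
cmo.setDefaultLoadAlgorithm('round-robin')  # load algorithm: Round Robin

save()
activate()
```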

This is the log I found:

Critical Health BEA-310006 Critical Subsystem DatabaseLessLeasing has failed. Setting server state to FAILED. Reason: Server is not in the majority cluster partition

Critical WebLogicServer BEA-000385 Server health failed. Reason: health of critical service 'DatabaseLessLeasing' failed

Notice WebLogicServer BEA-000365 Server state changed to FAILED

**Note:** I remember one thing: the server was not down when this happened. Both servers were running, but all of a sudden the server tried to restart and was unable to; the restart failed. I saw the status showing as failedToRestart, and the application went down.

Can anyone please help me with this issue?

Thank you

asked May 17 '13 by user1771540


1 Answer

Consensus leasing requires a majority of servers to continue functioning. Any time there is a network partition, the servers in the majority partition continue to run, while those in the minority partition fail: they can neither contact the cluster leader nor elect a new one, because they do not have a majority of the servers. If the partition results in an equal division of servers, the partition that contains the cluster leader survives while the other one fails.

Because of this, if automatic server migration is enabled, the servers are required to contact the cluster leader and renew their leases periodically. Servers shut themselves down if they are unable to renew their leases, and the failed servers are then automatically migrated to the machines in the majority partition.

The server that got partitioned (and is not part of the majority cluster) goes into the FAILED state. This behavior is in place to avoid split-brain scenarios, where there are two partitions of a cluster and both think they are the real cluster. When a cluster gets segmented, the larger segment survives and the smaller segment shuts itself down. When servers cannot reach the cluster master, they determine whether they are in the larger partition. If they are, they elect a new cluster master; if not, they all shut down when their leases expire.

Two-node clusters are problematic in this case: when the cluster gets partitioned, which partition is the largest? When the cluster master goes down in a two-node cluster, the remaining server has no way of knowing whether it is in the majority. In that case, if the remaining server is the cluster master, it continues to run; if it is not the master, it shuts down.
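To make that survival rule concrete, here is a small illustrative Python sketch of the decision each partition effectively makes. This is not WebLogic code, just the logic described above under the stated assumptions:

```python
# Illustrative sketch of the consensus-leasing survival rule described above.
# Not WebLogic code -- just the decision logic, under the stated assumptions.

def partition_survives(partition, total_servers, has_cluster_leader):
    """Return True if the servers in `partition` keep running."""
    if len(partition) * 2 > total_servers:    # strict majority: survive
        return True
    if len(partition) * 2 == total_servers:   # exact split: leader's side wins
        return has_cluster_leader
    return False                              # minority: shut down at lease expiry

# Two-node cluster: a 1/1 split means only the cluster master's side survives.
print(partition_survives(['ms1'], 2, has_cluster_leader=True))   # True
print(partition_survives(['ms2'], 2, has_cluster_leader=False))  # False
```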

Usually this error shows up when there are only 2 managed servers in one cluster.

To solve this kind of issue, create another server: since the cluster has only 2 nodes, either server will fall out of the majority cluster partition if it loses connectivity or drops cluster broadcast messages, and there are no other servers in the cluster to keep a majority.

For consensus leasing, it is always recommended to create a cluster with at least 3 nodes; that way you can ensure some stability.

In that scenario, even if one server falls out of the cluster, the other two still function correctly because they remain in the majority cluster partition. The third one will rejoin the cluster or eventually be restarted.

In a scenario where only 2 servers are part of the cluster, one falling out of the cluster results in both servers being restarted, as neither is part of a majority cluster partition; this ultimately results in a very unstable environment.
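If budget ever allows a third node, a WLST sketch along these lines would add a managed server to the cluster; the server name, host, port, and cluster name are hypothetical placeholders:

```python
# WLST (Jython) sketch -- adds a hypothetical third managed server 'ms3'
# to the existing cluster so consensus leasing has a true majority.
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()

cmo.createServer('ms3')                     # cmo is the DomainMBean here
cd('/Servers/ms3')
cmo.setListenAddress('host3.example.com')   # hypothetical host
cmo.setListenPort(7003)                     # hypothetical port
cmo.setCluster(getMBean('/Clusters/MyCluster'))

save()
activate()
```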

Another possible scenario is that there was a communication issue between the managed servers. Look out for messages like "lost .* message(s)" (in the case of unicast it is something like "Lost 2 unicast message(s)."). This may be caused by temporary network issues.
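If you'd rather read the message-loss counters from the runtime MBeans than grep the logs, a sketch like this might help; the server and cluster names are hypothetical, and the attributes are on the ClusterRuntimeMBean:

```python
# WLST (Jython) sketch -- reads cluster messaging counters from the runtime tree.
# 'ms1' and 'MyCluster' are hypothetical names.
connect('weblogic', 'password', 't3://adminhost:7001')
domainRuntime()

cd('/ServerRuntimes/ms1/ClusterRuntime/MyCluster')
print('Alive servers: %d' % cmo.getAliveServerCount())
print('Multicast messages lost: %d' % cmo.getMulticastMessagesLostCount())
print('Resend requests: %d' % cmo.getResendRequestsCount())
```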

answered Sep 27 '22 by Prasad