 

Unable to gossip with any seeds but continuing since node is in its own seed list

To remove a node from a two-node cluster in AWS, I ran:

nodetool removenode <Host ID>

After this, I expected to get my cluster back once I configured cassandra.yaml and cassandra-rackdc.properties correctly on both nodes. I did that, but I still cannot restore the cluster.

nodetool status displays only one node.

The significant part of the Cassandra system.log is:

INFO  [main] 2017-08-14 13:03:46,409 StorageService.java:553 - Cassandra version: 3.9
INFO  [main] 2017-08-14 13:03:46,409 StorageService.java:554 - Thrift API version: 20.1.0
INFO  [main] 2017-08-14 13:03:46,409 StorageService.java:555 - CQL supported versions: 3.4.2 (default: 3.4.2)
INFO  [main] 2017-08-14 13:03:46,445 IndexSummaryManager.java:85 - Initializing index summary manager with a memory pool size of 198 MB and a resize interval of 60 minutes
INFO  [main] 2017-08-14 13:03:46,459 MessagingService.java:570 - Starting Messaging Service on /172.15.81.249:7000 (eth0)
INFO  [ScheduledTasks:1] 2017-08-14 13:03:48,424 TokenMetadata.java:448 - Updating topology for all endpoints that have changed
WARN  [main] 2017-08-14 13:04:17,497 Gossiper.java:1388 - Unable to gossip with any seeds but continuing since node is in its own seed list
INFO  [main] 2017-08-14 13:04:17,499 StorageService.java:687 - Loading persisted ring state
INFO  [main] 2017-08-14 13:04:17,500 StorageService.java:796 - Starting up server gossip

Content of files:

cassandra.yaml : https://pastebin.com/A3BVUUUr

cassandra-rackdc.properties: https://pastebin.com/xmmvwksZ

system.log : https://pastebin.com/2KA60Sve

netstat -atun https://pastebin.com/Dsd17i0G

Both nodes show the same error in their logs.

All required ports are open.

Any suggestions?

Asked by Avinash, Aug 09 '17 07:08
1 Answer

It's usually best practice to have just one seed node per DC when you have only two nodes in your datacenter. You shouldn't make every node a seed node in this case.

I noticed that in your configuration node1 has - seeds: "node1,node2" and node2 has - seeds: "node2,node1". By default, a node will start without contacting any other seeds if it finds its own IP address as the first element of the - seeds: ... section in the cassandra.yaml configuration file. That's exactly what you see in your logs:

... Unable to gossip with any seeds but continuing since node is in its own seed list ...

I suspect that in your case node1 and node2 are starting without contacting each other, since each identifies itself as a seed node.

Try using just node1 as the seed node in both instances' configurations and restart your cluster. If node1 is down and node2 is up, change the - seeds: ... section in node1's configuration to point to node2's IP address and boot only node1.
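As a sketch, the seed_provider section would then look like this on both nodes. The IP address below is a placeholder taken from your log (172.15.81.249); substitute node1's actual private IP:

```yaml
# cassandra.yaml on BOTH node1 and node2 (IP is a placeholder for node1):
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # Only node1 acts as a seed; node2 will contact it on startup.
      - seeds: "172.15.81.249"
```

With this setup, node2 gossips with node1 on startup instead of short-circuiting because it found itself first in the seed list.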

If your nodes can't find each other because of a firewall misconfiguration, a good approach is to verify whether a specific port is reachable from the other machine. For example, you can use nc to check whether a certain port is open:

nc -vz node1 7000

References and Links

See the list of ports Cassandra uses here: http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureFireWall.html

See also detailed documentation on running multiple nodes, with plenty of sample commands: http://docs.datastax.com/en/cassandra/2.1/cassandra/initialize/initializeMultipleDS.html

Answered by Oresztesz, Oct 23 '22 20:10