Determining how full a Cassandra cluster is

Tags:

cassandra

I just imported a lot of data into a 9-node Cassandra cluster, and before I create a new ColumnFamily with even more data, I'd like to be able to determine how full my cluster currently is (in terms of memory usage). I'm not too sure what I need to look at. I don't want to import another 20-30GB of data and then realize I should have added 5-6 more nodes.

In short, I have no idea if I have too few/many nodes right now for what's in the cluster.

Any help would be greatly appreciated :)

$ nodetool -h 192.168.1.87 ring
Address         DC          Rack        Status State   Load            Owns    Token                                       
                                                                               151236607520417094872610936636341427313     
192.168.1.87    datacenter1 rack1       Up     Normal  7.19 GB         11.11%  0                                           
192.168.1.86    datacenter1 rack1       Up     Normal  7.18 GB         11.11%  18904575940052136859076367079542678414      
192.168.1.88    datacenter1 rack1       Up     Normal  7.23 GB         11.11%  37809151880104273718152734159085356828      
192.168.1.84    datacenter1 rack1       Up     Normal  4.2 GB          11.11%  56713727820156410577229101238628035242      
192.168.1.85    datacenter1 rack1       Up     Normal  4.25 GB         11.11%  75618303760208547436305468318170713656      
192.168.1.82    datacenter1 rack1       Up     Normal  4.1 GB          11.11%  94522879700260684295381835397713392071      
192.168.1.89    datacenter1 rack1       Up     Normal  4.83 GB         11.11%  113427455640312821154458202477256070485     
192.168.1.51    datacenter1 rack1       Up     Normal  2.24 GB         11.11%  132332031580364958013534569556798748899     
192.168.1.25    datacenter1 rack1       Up     Normal  3.06 GB         11.11%  151236607520417094872610936636341427313

-

# nodetool -h 192.168.1.87 cfstats
  Keyspace: stats
  Read Count: 232
  Read Latency: 39.191931034482764 ms.
  Write Count: 160678758
  Write Latency: 0.0492021849459404 ms.
  Pending Tasks: 0
    Column Family: DailyStats
    SSTable count: 5267
    Space used (live): 7710048931
    Space used (total): 7710048931
    Number of Keys (estimate): 10701952
    Memtable Columns Count: 4401
    Memtable Data Size: 23384563
    Memtable Switch Count: 14368
    Read Count: 232
    Read Latency: 29.047 ms.
    Write Count: 160678813
    Write Latency: 0.053 ms.
    Pending Tasks: 0
    Bloom Filter False Postives: 0
    Bloom Filter False Ratio: 0.00000
    Bloom Filter Space Used: 115533264
    Key cache capacity: 200000
    Key cache size: 1894
    Key cache hit rate: 0.627906976744186
    Row cache: disabled
    Compacted row minimum size: 216
    Compacted row maximum size: 42510
    Compacted row mean size: 3453

-

[default@stats] describe;
Keyspace: stats:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
    Options: [replication_factor:3]
  Column Families:
    ColumnFamily: DailyStats (Super)
      Key Validation Class: org.apache.cassandra.db.marshal.BytesType
      Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
      Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type/org.apache.cassandra.db.marshal.UTF8Type
      Row cache size / save period in seconds / keys to save : 0.0/0/all
      Row Cache Provider: org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider
      Key cache size / save period in seconds: 200000.0/14400
      GC grace seconds: 864000
      Compaction min/max thresholds: 4/32
      Read repair chance: 1.0
      Replicate on write: true
      Built indexes: []
      Column Metadata:
       (removed)
      Compaction Strategy: org.apache.cassandra.db.compaction.LeveledCompactionStrategy
      Compression Options:
        sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
Asked Dec 23 '11 by Pierre



1 Answer

Obviously, there are two types of memory -- disk and RAM. I'm going to assume you're talking about disk space.

First, find out how much space you're currently using per node. Check the on-disk usage of the Cassandra data directory (by default /var/lib/cassandra/data) with du -ch /var/lib/cassandra/data, then compare that to the size of the disk, which df -h will show you. In the df output, only look at the entry for the partition your Cassandra data is on (check the Mounted on column).
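Put together, something like this (assuming the default data directory; adjust the path if yours differs):

$ du -ch /var/lib/cassandra/data | tail -1    # total size of the data directory
$ df -h /var/lib/cassandra/data               # Use% of the partition it lives on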

Using those stats, you should be able to calculate how full (in %) the Cassandra data partition is. Generally you don't want to get too close to 100%, because Cassandra's normal compaction processes temporarily use more disk space. If you don't leave enough headroom, a node can get caught with a full disk, which can be painful to resolve (as a side note, I occasionally keep a "ballast" file of a few GB around that I can delete just in case I need to free some extra space). I've generally found that not exceeding about 70% disk usage is on the safe side for the 0.8 series.
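For what it's worth, a sketch of that ballast-file trick (the path and size here are just examples):

$ dd if=/dev/zero of=/var/lib/cassandra/ballast bs=1M count=4096   # reserve ~4GB of disk
$ rm /var/lib/cassandra/ballast                                    # delete it later to buy emergency headroom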

If you're using a newer version of Cassandra, I'd recommend giving the Leveled Compaction strategy a shot to reduce temporary disk usage during compaction. Instead of potentially needing twice as much disk space, the new strategy will at most use 10x a small, fixed sstable size (5MB by default).
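For reference, the DailyStats column family shown above is already on leveled compaction; switching a column family over from cassandra-cli looks roughly like this (the option names are from memory, so double-check them against your version's documentation):

[default@stats] update column family DailyStats with compaction_strategy = 'LeveledCompactionStrategy' and compaction_strategy_options = {sstable_size_in_mb: 10};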

You can read more about how compaction temporarily increases disk usage in this excellent blog post from DataStax: http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra It also explains the compaction strategies.

So, to do a little capacity planning, you can figure out how much more space you'll need. With a replication factor of 3 (what you're using above), adding 20-30GB of raw data means 60-90GB on disk after replication. Split across your 9 nodes, that's roughly 7-10GB more per node. Does adding that much disk usage per node push you too close to full disks? If so, you might want to consider adding more nodes to the cluster.
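Spelled out, the back-of-the-envelope arithmetic is:

20-30 GB raw data  x  replication factor 3  =  60-90 GB on disk
60-90 GB spread over 9 nodes                =  roughly 7-10 GB extra per node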

One other note: your nodes' loads aren't very even, ranging from about 2GB up to 7GB. If you're using the ByteOrderedPartitioner instead of the RandomPartitioner, that can cause uneven load and "hotspots" in your ring; consider switching to the random partitioner if possible. The other possibility is that you have extra data hanging around that needs to be taken care of (hinted handoffs and snapshots come to mind). Consider cleaning that up by running nodetool repair and nodetool cleanup on each node, one at a time (be sure to read up on what those do first!).
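For example, against the first node in the ring above, then repeated for each of the other nodes in turn:

$ nodetool -h 192.168.1.87 cleanup
$ nodetool -h 192.168.1.87 repair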

Hope that helps.

Answered by Andrew