Cassandra: NoSpamLogger log Maximum memory usage reached

Every 15 minutes I see this log entry:

2018-07-30 10:29:57,529 INFO [pool-1-thread-2] NoSpamLogger.java:91 log Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB

I've been reading through this question, but I can't see anything wrong with my tables: NoSpamLogger.java Maximum memory usage reached Cassandra

I have 4 large tables:

iot_data/derived_device_data histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                          (micros)          (micros)           (bytes)
50%             0.00              0.00              0.00               642                12
75%             0.00              0.00              0.00             17084               642
95%             0.00              0.00              0.00            263210             11864
98%             0.00              0.00              0.00           1629722             61214
99%             0.00              0.00              0.00           1955666             88148
Min             0.00              0.00              0.00               150                 0
Max             0.00              0.00              0.00           4055269            152321


iot_data/derived_device_data_by_year histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                          (micros)          (micros)           (bytes)
50%             0.00              0.00              0.00             51012              1597
75%             0.00              0.00              0.00           2346799             61214
95%             0.00              0.00              0.00          52066354           1629722
98%             0.00              0.00              0.00          52066354           1629722
99%             0.00              0.00              0.00          52066354           1629722
Min             0.00              0.00              0.00              6867               216
Max             0.00              0.00              0.00          52066354           1629722

iot_data/device_data histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                          (micros)          (micros)           (bytes)
50%             0.00             29.52              0.00              2299               149
75%             0.00             42.51              0.00            182785              9887
95%             0.00             61.21              0.00           2816159            152321
98%             0.00             61.21              0.00           4055269            219342
99%             0.00             61.21              0.00          17436917           1131752
Min             0.00             17.09              0.00                43                 0
Max             0.00             61.21              0.00          74975550           4866323

iot_data/device_data_by_week_sensor histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                          (micros)          (micros)           (bytes)
50%             0.00             35.43              0.00              8239               446
75%             0.00             51.01              0.00            152321              8239
95%             0.00             61.21              0.00           2816159            152321
98%             0.00             61.21              0.00           4055269            219342
99%             0.00             61.21              0.00          12108970            785939
Min             0.00             20.50              0.00                43                 0
Max             0.00             61.21              0.00          74975550           4866323
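
For reference, per-table histograms like these are typically collected with nodetool, e.g. (table name given only as an example):

nodetool tablehistograms iot_data device_data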

Although I know that the derived_device_data and derived_device_data_by_year tables need some refactoring, none of them is close to the 100MB partition mark. Why am I getting this log entry?

EDIT: I noticed the same log entry on my test systems, which run almost without data but with the same configuration as production: 12GB RAM, Cassandra 3.11.2.

Asked Jul 30 '18 by Alex Tbk

1 Answer

You may need to check the value of vm.max_map_count and the settings for swap. If swap is enabled, it can hurt the performance of both systems. The default value of vm.max_map_count may also be too low, which affects both Cassandra and Elasticsearch (see the recommendation for ES).
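
A minimal way to check and adjust both, assuming a typical Linux host (the values below are commonly recommended examples, not taken from the question):

# current mmap limit; 1048575 is the value commonly recommended for Cassandra
sysctl vm.max_map_count

# raise it persistently
echo "vm.max_map_count = 1048575" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# check whether swap is in use, then disable it
free -m
sudo swapoff --all   # also remove/comment the swap line in /etc/fstab to keep it off after reboot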

Also, you may need to explicitly set the heap size for Cassandra and file_cache_size_in_mb - with 12GB of RAM, Cassandra will use 1/4 of it, i.e. 3GB, for the heap, and file_cache_size_in_mb defaults to the smaller of 1/4 of the heap or 512MB (which is exactly the limit shown in your log) - that could be too low.
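
As a sketch only (the exact numbers depend on your workload and are not from the original answer), the heap is set in conf/cassandra-env.sh and the chunk cache in conf/cassandra.yaml, followed by a restart of the node:

# conf/cassandra-env.sh - set the heap explicitly instead of relying on auto-sizing
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="800M"

# conf/cassandra.yaml - give the chunk cache more room than the 512MB default
file_cache_size_in_mb: 1024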

P.S. Because it's logged at INFO level, it's considered harmless. See https://issues.apache.org/jira/browse/CASSANDRA-12221 and https://issues.apache.org/jira/browse/CASSANDRA-11681.

Answered by Alex Ott