Cassandra read error

tl;dr: I solved the issue by upgrading to Cassandra 3.2. This bug was apparently causing the problem.


I'm running a two-node cluster of Cassandra with versions [cqlsh 5.0.1 | Cassandra 3.0.1 | CQL spec 3.3.1 | Native protocol v4].

There is one table that I can't read; I get the following error:

cqlsh:kepler> select type from md_data limit 1;
Traceback (most recent call last):
  File "/local/chernals/dsc-cassandra-3.0.1/bin/cqlsh.py", line 1258, in perform_simple_statement
    result = future.result()
  File "/local/chernals/dsc-cassandra-3.0.1/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py", line 3122, in result
    raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

I can read other tables without any problem.

The schema of that table is:

CREATE TABLE kepler.md_data (
    name text,
    tag text,
    id timeuuid,
    parameter frozen<parameter>,
    blob_value blob,
    real_value float,
    telegram map<text, text> static,
    text_value text,
    type text,
    PRIMARY KEY ((name, tag, id), parameter)
) WITH CLUSTERING ORDER BY (parameter ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';
CREATE INDEX parameter_idx ON kepler.md_data (parameter);

Could there be an issue with such a schema and the different versions of Cassandra/cqlsh I'm running?

Note that when the table is empty I can "read" it: the result set is empty, but the select statement succeeds.
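
Side note for anyone hitting the same error: the client-side ReadFailure hides the actual replica-side exception. Standard cqlsh tracing (generic cqlsh functionality, not something specific to this setup) can help surface it; if the query fails before the trace prints, the same exception shows up in the node's log, as in the cassandra -f output further down.

cqlsh:kepler> TRACING ON;
cqlsh:kepler> select type from md_data limit 1;
cqlsh:kepler> TRACING OFF;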

Edit:

This is a strange problem, and I'm having a hard time reproducing it consistently. I moved to a test setup with only one node. It seems to be linked to the number of rows present in the table.

cqlsh:kepler> select type from md_data;
Traceback (most recent call last):
  File "/local/chernals/dsc-cassandra-3.0.1/bin/cqlsh.py", line 1258, in perform_simple_statement
    result = future.result()
  File "/local/chernals/dsc-cassandra-3.0.1/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py", line 3122, in result
    raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

cqlsh:kepler> TRUNCATE TABLE md_data;
cqlsh:kepler> select type from md_data;

 name | tag | id | parameter | blob_value | real_value | telegram | text_value | type
------+-----+----+-----------+------------+------------+----------+------------+------

(0 rows)
cqlsh:kepler>

Edit: Error message from cassandra -f

WARN  11:07:00 Uncaught exception on thread Thread[SharedPool-Worker-3,5,main]: {}
java.lang.AssertionError: null
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator$IndexState.updateBlock(AbstractSSTableIterator.java:463) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:268) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:158) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:352) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:298) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1721) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2375) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_66]
    at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$TraceSessionFutureTask.run(AbstractTracingAwareExecutorService.java:136) [apache-cassandra-3.0.1.jar:3.0.1]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.0.1.jar:3.0.1]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]

1 Answer

I solved the issue by upgrading to Cassandra 3.2. This bug was apparently causing the problem.
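
For context only (this is not from the original answer, and the paths are illustrative for a tarball install like the one in the question), a per-node upgrade to 3.2 typically looks like this:

# flush memtables and stop the node from accepting new requests
nodetool drain
# stop the old 3.0.1 process and start the new 3.2 binaries
# (path is illustrative; adjust to your install layout)
pkill -f CassandraDaemon
/local/chernals/apache-cassandra-3.2/bin/cassandra
# confirm the new version, then rewrite SSTables in the current format
nodetool version          # should report ReleaseVersion: 3.2
nodetool upgradesstables
# repeat on each node of the cluster, one at a time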
