 

Cassandra CQLSH OperationTimedOut error=Client request timeout. See Session.execute[_async](timeout)

I want to transfer data from one Cassandra cluster (reached via 192.168.0.200) to another Cassandra cluster (reached via 127.0.0.1). The data is only 523 rows, but each row is about 1 MB. I am using the COPY TO and COPY FROM commands, and I get the following error when I issue COPY TO:

Error for (8948428671687021382, 9075041744804640605):
OperationTimedOut - errors={
'192.168.0.200': 'Client request timeout. See Session.execute[_async](timeout)'},
last_host=192.168.0.200 (will try again later attempt 1 of 5).

I tried to change the ~/.cassandra/cqlshrc file to:

[connection]
client_timeout = 5000

But this hasn't helped.

asked Oct 10 '16 by Rishabh Poddar

4 Answers

You may want to increase the request timeout (default: 10 seconds), not the connection timeout.

Try:

cqlsh --request-timeout=6000

or add:

[connection]
request_timeout = 6000

to your ~/.cassandra/cqlshrc file.
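For a one-off export, the flag can also be combined with the COPY command on the command line. A sketch, assuming the source cluster and keyspace1.tablename from the question (the output path is a placeholder, and the timeout value is in seconds):

cqlsh 192.168.0.200 --request-timeout=6000 -e "COPY keyspace1.tablename TO 'data.csv' WITH HEADER = true;"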

answered Oct 23 '22 by Luca Gibelli


Regarding the COPY timeout, the correct way is to use the PAGETIMEOUT option, as already pointed out:

COPY keyspace.table TO '/dev/null' WITH PAGETIMEOUT=10000;

Setting --request-timeout=6000 with cqlsh does not help in that situation.
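Since the rows in this question are large (about 1 MB each), it may also help to shrink the page size alongside raising the page timeout, so each page fits comfortably within the timeout. A sketch, where the table name, path, and exact values are assumptions to tune (PAGETIMEOUT is in seconds; PAGESIZE defaults to 1000 rows per page):

COPY keyspace1.tablename TO '/path/to/data.csv' WITH PAGESIZE=100 AND PAGETIMEOUT=60;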

answered Oct 23 '22 by remigiusz boguszewicz


It's not clear which version of Cassandra you're using here, so I'm going to assume 3.0.x.

The COPY command is good but not always the best choice (e.g. if you have a lot of data). For this case, though, you might want to check some of your timeout settings in Cassandra.

The COPY docs also describe a PAGETIMEOUT setting, which may help you.

Moving data between two clusters can be done in a number of other ways. You could use any of the following:

  1. The sstableloader
  2. One of the drivers, such as the Java driver
  3. Using Spark to copy data from one cluster to another
  4. Using OpsCenter to clone a cluster
  5. The Cassandra bulk loader (I've known a number of people to use this)

Of course, #3 and #4 need DSE Cassandra, but it's just to give you an idea. I wasn't sure if you were using Apache Cassandra or DataStax Enterprise Cassandra.
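For option 1, a typical sstableloader invocation might look like this. The paths and addresses are placeholders; it streams the SSTables found in the given table directory to the target cluster:

# Run against a snapshot or a copy of the table's data directory;
# the last two path components must be <keyspace>/<table>.
sstableloader -d 127.0.0.1 /var/lib/cassandra/data/keyspace1/tablename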

Anyway, hope this helps!

answered Oct 23 '22 by markc


In addition to the above, consider the following:

1. Check tombstones
In Cassandra, tombstones degrade read performance and can cause this error: OperationTimedOut: errors={'127.0.0.1': 'Client request timeout. See Session.execute_async'}, last_host=127.0.0.1
Note: inserting data with null values in columns creates tombstones, so avoid null inserts into the table. There are options to avoid this, such as unset values (https://docs.datastax.com/en/latest-csharp-driver-api/html/T_Cassandra_Unset.htm) and the ignoreNulls property in the Spark connector (https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md).
You can check your table's statistics with the following command:
nodetool tablestats keyspace1.tablename

2. Remove tombstones
If you're working on a single node, you can let tombstones be purged immediately by altering the table:
ALTER TABLE keyspace1.tablename WITH gc_grace_seconds = 0;

3. read_request_timeout_in_ms
Increase this value in the cassandra.yaml file to raise the timeout for read requests.
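For reference, the relevant lines in cassandra.yaml might look like this. The values are examples in milliseconds, not recommendations, and each node must be restarted for changes to take effect; range_request_timeout_in_ms is included because COPY performs range scans:

# cassandra.yaml (excerpt)
read_request_timeout_in_ms: 10000
range_request_timeout_in_ms: 20000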

answered Oct 23 '22 by Hitesh Shahani