Sometimes, when I perform a DELETE, it doesn't work.
My config: [cqlsh 5.0.1 | Cassandra 3.0.3 | CQL spec 3.4.0 | Native protocol v4]
cqlsh:my_db> SELECT * FROM conversations WHERE user_id=120 AND conversation_id=2 AND peer_type=1;
user_id | conversation_id | peer_type | message_map
---------+-----------------+-----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
120 | 2 | 1 | {0: {real_id: 68438, date: 1455453523, sent: True}, 1: {real_id: 68437, date: 1455453520, sent: True}, 2: {real_id: 68436, date: 1455453517, sent: True}, 3: {real_id: 68435, date: 1455453501, sent: True}, 4: {real_id: 68434, date: 1455453500, sent: True}, 5: {real_id: 68433, date: 1455453499, sent: True}, 6: {real_id: 68432, date: 1455453498, sent: True}, 7: {real_id: 68431, date: 1455453494, sent: True}, 8: {real_id: 68430, date: 1455453480, sent: True}}
(1 rows)
cqlsh:my_db> DELETE message_map FROM conversations WHERE user_id=120 AND conversation_id=2 AND peer_type=1;
cqlsh:my_db> SELECT * FROM conversations WHERE user_id=120 AND conversation_id=2 AND peer_type=1;
user_id | conversation_id | peer_type | message_map
---------+-----------------+-----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
120 | 2 | 1 | {0: {real_id: 68438, date: 1455453523, sent: True}, 1: {real_id: 68437, date: 1455453520, sent: True}, 2: {real_id: 68436, date: 1455453517, sent: True}, 3: {real_id: 68435, date: 1455453501, sent: True}, 4: {real_id: 68434, date: 1455453500, sent: True}, 5: {real_id: 68433, date: 1455453499, sent: True}, 6: {real_id: 68432, date: 1455453498, sent: True}, 7: {real_id: 68431, date: 1455453494, sent: True}, 8: {real_id: 68430, date: 1455453480, sent: True}}
(1 rows)
cqlsh doesn't return any error on the DELETE statement, but it's as if the delete wasn't taken into account.
Do you know why?
NB: This is my table definition:
CREATE TABLE be_telegram.conversations (
    user_id bigint,
    conversation_id int,
    peer_type int,
    message_map map<int, frozen<message>>,
    PRIMARY KEY (user_id, conversation_id, peer_type)
) WITH CLUSTERING ORDER BY (conversation_id ASC, peer_type ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';
A DELETE statement removes one or more columns from one or more rows in a table, or it removes the entire row if no columns are specified. Cassandra applies selections within the same partition key atomically and in isolation.
When a column is deleted, it is not removed from disk immediately. The deleted column is marked with a tombstone and then removed after the configured grace period has expired. The optional timestamp defines the new tombstone record.
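For reference, a minimal sketch of those DELETE forms against the table from the question (the USING TIMESTAMP value below is illustrative, in microseconds):

-- Delete a single column (the message_map collection) from one row;
-- this writes a column tombstone.
DELETE message_map FROM conversations
WHERE user_id = 120 AND conversation_id = 2 AND peer_type = 1;

-- Delete the entire row by specifying no columns; this writes a row tombstone.
DELETE FROM conversations
WHERE user_id = 120 AND conversation_id = 2 AND peer_type = 1;

-- The optional timestamp defines the tombstone record; writes with a later
-- timestamp than the tombstone survive the delete.
DELETE message_map FROM conversations
USING TIMESTAMP 1455453600000000
WHERE user_id = 120 AND conversation_id = 2 AND peer_type = 1;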
The way Cassandra deletes data differs from the way a relational database deletes data. A relational database might spend time scanning through data looking for expired data and throwing it away, or an administrator might have to partition expired data by month, for example, to clear it out faster. Data in a Cassandra column can have an optional expiration date called TTL (time to live).
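As a sketch, a TTL is set per write in CQL; the value here (86400 seconds = one day) and the inserted field values are only examples:

-- Insert one entry whose cells expire automatically after one day.
INSERT INTO conversations (user_id, conversation_id, peer_type, message_map)
VALUES (120, 3, 1, {0: {real_id: 68500, date: 1455453600, sent: true}})
USING TTL 86400;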
Marking data with a tombstone signals Cassandra to retry sending a delete request to a replica that was down at the time of the delete. If the replica comes back up within the grace period, it eventually receives the delete request. However, if a node is down longer than the grace period, the node can miss the delete because the tombstone disappears after gc_grace_seconds. Cassandra always attempts to replay missed updates when the node comes back up again. After a failure, it is a best practice to run node repair to fix inconsistencies across all of the replicas when bringing a node back into the cluster. If the node doesn't come back within gc_grace_seconds, remove the node, wipe it, and bootstrap it again.
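For what it's worth, the grace period is a per-table setting; your definition above shows the default, gc_grace_seconds = 864000 (10 days). On Cassandra 3.x you can read it back from system_schema (adjust the keyspace name to wherever the table actually lives):

SELECT gc_grace_seconds FROM system_schema.tables
WHERE keyspace_name = 'my_db' AND table_name = 'conversations';

-- The setting can be changed per table if needed:
ALTER TABLE conversations WITH gc_grace_seconds = 864000;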
In your case, the compaction strategy is size-tiered, so please try running the compaction process.
Compaction
Periodic compaction is essential to a healthy Cassandra database because Cassandra does not insert/update in place. As inserts/updates occur, instead of overwriting the rows, Cassandra writes a new timestamped version of the inserted or updated data in another SSTable. Cassandra manages the accumulation of SSTables on disk using compaction.
Cassandra also does not delete in place because the SSTable is immutable. Instead, Cassandra marks data to be deleted using a tombstone. Tombstones exist for a configured time period defined by the gc_grace_seconds value set on the table. During compaction, there is a temporary spike in disk space usage and disk I/O because the old and new SSTables co-exist.
Compaction merges the data in each SSTable by partition key, selecting the latest version of the data for storage based on its timestamp. Because rows are sorted by partition key within each SSTable, Cassandra can merge the data performantly, without random I/O. After evicting tombstones and removing deleted data, columns, and rows, the compaction process consolidates the SSTables into a single new file. After compaction, Cassandra uses the new consolidated SSTable instead of multiple old SSTables, fulfilling read requests more efficiently than before. The old SSTable files are deleted as soon as any pending reads finish using them, and the disk space they occupied becomes available for reuse.
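If you want to watch this happen, these standard nodetool commands report compaction activity and per-table SSTable statistics (the keyspace/table names match the question; tablestats was called cfstats before Cassandra 3.0):

nodetool compactionstats                    # active and pending compaction tasks
nodetool tablestats my_db.conversations     # SSTable count and tombstone statistics per table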
so try this:
nodetool <options> compact -- <keyspace> ( <table> ... )
options are:
( -h | --host ) <host name> | <ip address>
( -p | --port ) <port number>
( -pw | --password ) <password>
( -u | --username ) <user name>
-- separates an option and argument that could be mistaken for an option.
keyspace is the name of a keyspace.
table is one or more table names, separated by a space.
This command starts the compaction process on tables that use the SizeTieredCompactionStrategy or DateTieredCompactionStrategy. You can specify a keyspace for compaction. If you do not specify a keyspace, the nodetool command uses the current keyspace. You can specify one or more tables for compaction. If you do not specify a table(s), compaction of all tables in the keyspace occurs. This is called a major compaction. If you do specify a table(s), compaction of the specified table(s) occurs. This is called a minor compaction. A major compaction consolidates all existing SSTables into a single SSTable. During compaction, there is a temporary spike in disk space usage and disk I/O because the old and new SSTables co-exist. A major compaction can cause considerable disk I/O.
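For example, a run against the keyspace and table from the question might look like this (the host and JMX port are illustrative defaults, not values from your setup):

nodetool -h 127.0.0.1 -p 7199 compact my_db conversations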