I'm curious about the following flow: we have a big collection/table with a lot of data, and a select-oriented query over it that takes 3 seconds to run. At the same time the environment is highly concurrent, and every second about 100 new records land in the database.
So let's say that just before the query starts there are 1000 items that satisfy it. The query takes 3 seconds, and during each of those seconds 50 new items matching the query are added to the db. My question is: what result does this query return (is it still 1000, or 1150, or something in between), and how does this depend on the database engine (SQL, NoSQL)?
It's not a question about the exact number, but a bit more: why it will be that number.
The question is admittedly a bit broad, so let's limit the DBs to MySQL, Postgres, MongoDB and Cassandra.
Speaking generally (mainly because you didn't initially name a specific database), a database's concurrency behaviour is configurable and falls under the category of performance tuning.
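For the SQL engines you listed, one of the main knobs is the transaction isolation level. Here is a minimal sketch, assuming a hypothetical `orders` table (Postgres syntax; in MySQL/InnoDB you would issue `SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;` before starting the transaction):

```sql
-- Postgres: run the long query inside a transaction with an
-- explicit isolation level.
START TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- Under REPEATABLE READ the transaction reads from a snapshot
-- taken at its first query, so rows inserted by concurrent
-- sessions while it runs are not counted.
SELECT COUNT(*) FROM orders WHERE status = 'open';

COMMIT;
```

Beyond isolation levels, the other big lever is lock granularity: how much of the database a single lock covers.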
Some common lock granularities are:

- DATABASE level (one lock for the entire database)
- TABLE level (one lock per table)
- PAGE level (one lock per storage page)
- ROW level (one lock per row)
So, if you used ROW level locking, you would likely get all 1150 results, at the expense of higher locking overhead. If you used TABLE level locking instead, you would get the 1000 results very quickly, but at the expense of your data stream being blocked from writing to the db for the full 3 seconds.
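To make the TABLE level trade-off concrete, here is a rough two-session sketch (MySQL `LOCK TABLES` syntax; the `orders` table is again hypothetical):

```sql
-- Session A: take a table-wide read lock and run the slow query.
LOCK TABLES orders READ;
SELECT COUNT(*) FROM orders
WHERE status = 'open';  -- sees exactly the 1000 pre-existing rows

-- Session B, concurrently: every write now blocks, because a READ
-- lock lets other sessions read but excludes writers. The
-- 100-records-per-second stream stalls:
--   INSERT INTO orders (...) VALUES (...);  -- waits ~3s here

-- Session A: release the lock; Session B's queued inserts drain.
UNLOCK TABLES;
```

With ROW level locking the inserts are not blocked, and a scan can still pick up rows inserted ahead of its current position, which is why the count tends to drift toward 1150.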