I want to insert n records into a single table. There may be many concurrent users who insert/update/select data from this table. What is the better way to insert, say, 1000 records into such a table?
Does this depend on something, or is there a single way that is always optimal? Does it depend on whether transactions are used while inserting the data? Are there other, better ways to perform such an insert?
The database I use is MS SQL Server, but I'm also interested in how this works in other databases like Oracle.
This totally depends on which RDBMS you are using.

In Oracle, writes never block reads, so you can safely insert your data all at once. Note, though, that this will degrade performance, since concurrent queries will need to fetch the old data out of the UNDO tablespace, which requires extra reads.
In SQL Server, writes do block reads on the affected rows / pages / tables (depending on lock escalation), unless you set the TRANSACTION ISOLATION LEVEL to SNAPSHOT.
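As a minimal T-SQL sketch (the database name `MyDb` and table `MyTable` are hypothetical), enabling SNAPSHOT isolation so readers are not blocked by a concurrent bulk insert might look like this:

```sql
-- One-time database-level setting (hypothetical database MyDb):
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the reading session: see a consistent snapshot of the data,
-- without being blocked by concurrent inserts/updates.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM MyTable;
COMMIT;
```

Without the `ALLOW_SNAPSHOT_ISOLATION` setting, the `SET TRANSACTION ISOLATION LEVEL SNAPSHOT` statement will fail at query time.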
In all transactional engines that allow concurrent writing and reading, the engine needs to store both the old and the new data somewhere so that both versions are available at the same time.
In Oracle, the old data is copied into the UNDO tablespace.

In SQL Server, it gets copied into tempdb (only when SNAPSHOT isolation is enabled; otherwise the rows are just locked).
This always requires some resources (memory or disk), and you can run out of them if your UPDATE query affects lots of rows.
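One common way to limit that resource usage is to split a large insert into several short transactions, so locks are held briefly and the version store (tempdb / UNDO) never has to hold one huge change set. A hedged T-SQL sketch (the table `MyTable` and its columns are hypothetical):

```sql
-- Hypothetical target table:
-- CREATE TABLE MyTable (Id INT PRIMARY KEY, Payload VARCHAR(100));

-- Insert 1000 rows in batches of 10, each batch in its own short
-- transaction, instead of one statement touching all 1000 rows.
DECLARE @i INT = 0;
WHILE @i < 1000
BEGIN
    BEGIN TRANSACTION;
    INSERT INTO MyTable (Id, Payload)
    SELECT @i + n, 'row ' + CAST(@i + n AS VARCHAR(10))
    FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS v(n);
    COMMIT;
    SET @i = @i + 10;
END;
```

The batch size is a trade-off: larger batches mean fewer round trips and commits, smaller batches mean shorter lock hold times for concurrent readers and writers.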