 

PostgreSQL slow insert of a single row with RETURNING (taking 500ms in production)

Bit stumped here. I have a small table called request_to_print. This is the table structure.

(screenshot of the table structure omitted)

It has a single primary key of request_internal_id.

The table has 747 rows in it.

When an insert takes place, I've noticed it's taking 500ms+, which seems excessive. Most inserts complete in around 1ms, even on far larger tables.

2020-10-06T10:38:29.284+11:00   Executed DbCommand (566ms) [Parameters=[@p0='', @p1='94ac21e6-bdd5-409c-90f6-4e014d1de763', @p2=NULL (DbType = DateTime), @p3='2020-10-05T23:38:28' (Nullable = true) (DbType = DateTime), @p4='E001'], CommandType='Text', CommandTimeout='30']
2020-10-06T10:38:29.284+11:00   INSERT INTO request_to_print (card_number, customer_internal_id, request_sent_date, requested_date, requesting_store)
2020-10-06T10:38:29.284+11:00   VALUES (@p0, @p1, @p2, @p3, @p4)
2020-10-06T10:38:29.284+11:00   RETURNING request_internal_id;

I have run this command in our test environment under similar conditions. Because it's an insert, there's not a lot to analyze or explain; it's just putting a row of data in. Here's what I got from tracing the insert (a sketch of how such a plan can be captured follows the output):

[
  {
    "Plan": {
      "Node Type": "ModifyTable",
      "Operation": "Insert",
      "Parallel Aware": false,
      "Relation Name": "request_to_print",
      "Schema": "public",
      "Alias": "request_to_print",
      "Startup Cost": 0,
      "Total Cost": 0.02,
      "Plan Rows": 1,
      "Plan Width": 304,
      "Actual Startup Time": 0.087,
      "Actual Total Time": 0.088,
      "Actual Rows": 1,
      "Actual Loops": 1,
      "Output": [
        "request_internal_id"
      ],
      "Shared Hit Blocks": 2,
      "Shared Read Blocks": 0,
      "Shared Dirtied Blocks": 0,
      "Shared Written Blocks": 0,
      "Local Hit Blocks": 0,
      "Local Read Blocks": 0,
      "Local Dirtied Blocks": 0,
      "Local Written Blocks": 0,
      "Temp Read Blocks": 0,
      "Temp Written Blocks": 0,
      "I/O Read Time": 0,
      "I/O Write Time": 0,
      "Plans": [
        {
          "Node Type": "Result",
          "Parent Relationship": "Member",
          "Parallel Aware": false,
          "Startup Cost": 0,
          "Total Cost": 0.02,
          "Plan Rows": 1,
          "Plan Width": 304,
          "Actual Startup Time": 0.03,
          "Actual Total Time": 0.031,
          "Actual Rows": 1,
          "Actual Loops": 1,
          "Output": [
            "uuid_generate_v4()",
            "'2705000000835'::character varying(100)",
            "'aaaaaaaa-d0c9-46bc-b83e-cec6a22d6e82'::uuid",
            "now()",
            "'E001'::character varying(10)",
            "NULL::timestamp without time zone"
          ],
          "Shared Hit Blocks": 0,
          "Shared Read Blocks": 0,
          "Shared Dirtied Blocks": 0,
          "Shared Written Blocks": 0,
          "Local Hit Blocks": 0,
          "Local Read Blocks": 0,
          "Local Dirtied Blocks": 0,
          "Local Written Blocks": 0,
          "Temp Read Blocks": 0,
          "Temp Written Blocks": 0,
          "I/O Read Time": 0,
          "I/O Write Time": 0
        }
      ]
    },
    "Planning Time": 0.03,
    "Triggers": [],
    "Execution Time": 0.107
  }
]
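
A plan in this shape can be captured by running the statement directly under EXPLAIN, for example (parameter values inlined by hand; note that EXPLAIN ANALYZE really performs the insert, hence the rollback):

BEGIN;

EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
INSERT INTO request_to_print (card_number, customer_internal_id, request_sent_date, requested_date, requesting_store)
VALUES ('2705000000835', 'aaaaaaaa-d0c9-46bc-b83e-cec6a22d6e82', NULL, now(), 'E001')
RETURNING request_internal_id;

ROLLBACK;  -- discard the test row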

Can anyone assist as to why such a seemingly innocent insert into a small table is taking so long? For context, I am using Entity Framework Core 3.1 in a .NET Core application to perform the inserts (via the Npgsql library).

asked Oct 06 '20 by JamesMatson
People also ask

How can I improve my insert performance?

To optimize insert speed, combine many small operations into a single large operation. Ideally, you make a single connection, send the data for many new rows at once, and delay all index updates and consistency checking until the very end.
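
As a rough sketch of that idea (the table items and its columns are made up purely for illustration):

-- One transaction and one commit, instead of a commit per row:
BEGIN;
INSERT INTO items (name, qty) VALUES ('widget', 1);
INSERT INTO items (name, qty) VALUES ('gadget', 2);
INSERT INTO items (name, qty) VALUES ('gizmo',  3);
COMMIT;

-- Going further, a single multi-row INSERT (or COPY) also avoids the
-- per-statement round trips:
INSERT INTO items (name, qty) VALUES ('widget', 1), ('gadget', 2), ('gizmo', 3);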

How many INSERTs can Postgres handle per second?

For single-row INSERTs: if your app and database are in different regions and latency is, say, 5ms, then you can expect to see around 100 INSERTs per second (1000 milliseconds / (5ms + 5ms)). In the same region on AWS, with let's say 1ms latency, this number can go up to ~500 INSERTs per second.

What does Postgres return on insert?

In an INSERT , the data available to RETURNING is the row as it was inserted. This is not so useful in trivial inserts, since it would just repeat the data provided by the client. But it can be very handy when relying on computed default values.
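
A minimal sketch of that last case (hypothetical table; the question's table similarly generates its primary key via a default):

-- The primary key is generated server-side, so RETURNING is the easy way
-- to get it back without a second query (uuid_generate_v4() requires the
-- uuid-ossp extension):
CREATE TABLE print_request (
    id          uuid DEFAULT uuid_generate_v4() PRIMARY KEY,
    card_number text NOT NULL
);

INSERT INTO print_request (card_number)
VALUES ('2705000000835')
RETURNING id;  -- hands back the generated uuid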

How do I add multiple rows in PostgreSQL?

To INSERT multiple rows in PostgreSQL: first, specify the name of the table you want to insert data into after the INSERT INTO keywords. Second, list the required columns, or all columns of the table, in parentheses after the table name. Third, supply a comma-separated list of rows after the VALUES keyword.
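
For illustration, reusing a couple of column names from the question's table (the remaining columns are assumed to be nullable or defaulted):

INSERT INTO request_to_print (card_number, requesting_store)  -- 1) table, 2) column list
VALUES                                                        -- 3) one parenthesised row per insert
    ('1001', 'E001'),
    ('1002', 'E002'),
    ('1003', 'E001')
RETURNING request_internal_id;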


1 Answer

The execution plan shows that the operation completed in about 0.1ms, so I guess that was not one of the slow executions.

Here are some ideas to investigate this:

  • Use auto_explain to log the plans of slow executions. If you can afford the substantial overhead, set auto_explain.log_analyze and auto_explain.log_buffers to on so the plan tells you where the time was spent (a configuration sketch follows this list).

  • Temporarily reduce deadlock_timeout to a low value such as 100ms and set log_lock_waits to on. Then you'll see in the log if the statement is blocked behind a lock.

  • Use operating system tools to determine if the I/O system is totally overloaded.

  • Read the log file for unusual messages.
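
A minimal sketch of those settings, assuming superuser access and a session-level experiment (normally they would go into postgresql.conf):

LOAD 'auto_explain';                          -- or add it to shared_preload_libraries

SET auto_explain.log_min_duration = '200ms';  -- log the plan of anything slower than this
SET auto_explain.log_analyze = on;            -- include actual timings (substantial overhead)
SET auto_explain.log_buffers = on;            -- include buffer usage

SET deadlock_timeout = '100ms';               -- lock waits longer than this get reported...
SET log_lock_waits = on;                      -- ...in the server log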

answered Nov 15 '22 by Laurenz Albe