 

PostgreSQL backend process high memory usage issue

Tags:

postgresql

We are evaluating PostgreSQL for a multitenant database. Currently we are running some tests on a single-database, multiple-schema model (basically, all tenants have the same set of database objects under their own schema within the same database). The application will maintain a connection pool that is shared among all tenants/schemas.
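As a rough sketch of the model (illustrative only; the names are made up, and we assume the pooled connection selects the active tenant via search_path, which may differ from how the application actually routes queries):

    -- each tenant gets its own schema containing an identical set of objects
    create schema tenant1;
    create table tenant1.table1 (f1 integer, f2 varchar(100));
    create view tenant1.view1 as select * from tenant1.table1;

    -- a pooled connection is pointed at the active tenant before running its queries
    set search_path to tenant1;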

e.g. if the database has 500 tenants/schemas and each tenant has 200 tables/views, the total number of tables/views will be 500 * 200 = 100,000.

Since the connection pool will be used by all tenants, eventually each connection will hit all the tables/views.

In our tests, as a connection touches more views, the memory usage of its backend process grows quite fast, and most of it is private memory. That memory is held until the connection is closed.

We have a test case in which one backend process uses more than 30 GB of memory and eventually gets an out-of-memory error.

To help understand the issue, I wrote code to create a simplified test case (full source at the end of this post):

  - MTDB_destroy: used to clear the tenant schemas
  - MTDB_Initialize: used to create a multitenant DB
  - MTDB_RunTests: the simplified test case itself; basically selects from all tenant views one by one

The tests I've done were on PostgreSQL 9.0.3 on CentOS 5.4.

To make sure I have a clean environment, I re-created the database cluster and left most configuration settings at their defaults. (The only thing I HAD to change was to increase "max_locks_per_transaction", since MTDB_destroy needs to drop many objects.)
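For reference, that is a single line in postgresql.conf; the exact value below is only an illustration (it just needs to be large enough for MTDB_destroy's DROP ... CASCADE statements to take a lock on every object they drop):

    max_locks_per_transaction = 256    # default is 64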

This is what I do to reproduce the issue:

  1. create a new database
  2. create the three functions using the code attached
  3. connect to the newly created db and run the initialization scripts

    -- Initialize

    select MTDB_Initialize('tenant', 100, 100, true);

    -- not sure if vacuum analyze is useful here; I just ran it

    vacuum analyze;

    -- check the tables/views created

    select table_schema, table_type, count(*) from information_schema.tables where table_schema like 'tenant%' group by table_schema, table_type order by table_schema, table_type;

  4. open another connection to the newly created db and run the test scripts

    -- get backend process id for current connection

    SELECT pg_backend_pid();

    -- open a Linux console and watch VIRT, RES and SHR for that pid, e.g. with top -p <backend pid>

    -- run tests

    select MTDB_RunTests('tenant', 1);

Observations:

  1. when the connection for running the tests was first created:

    VIRT = 182 MB, RES = 6240 KB, SHR = 4648 KB

  2. after running the tests once (took 175 seconds):

    VIRT = 1661 MB, RES = 1.5 GB, SHR = 55 MB

  3. after re-running the tests (took 167 seconds):

    VIRT = 1661 MB, RES = 1.5 GB, SHR = 55 MB

  4. after re-running the tests again (took 165 seconds):

    VIRT = 1661 MB, RES = 1.5 GB, SHR = 55 MB

As we scale up the number of tables, memory usage in the tests goes up as well.

Can anyone help explain what's happening here? Is there a way to control the memory usage of a PostgreSQL backend process?

Thanks.

Samuel

-- MTDB_destroy: drop every schema whose name starts with the given prefix; returns the number of schemas dropped
create or replace function MTDB_destroy (schemaNamePrefix varchar(100))
returns int as $$
declare
   curs1 cursor(prefix varchar) is select schema_name from information_schema.schemata where schema_name like prefix || '%';
   schemaName varchar(100);
   count integer;
begin
   count := 0;
   open curs1(schemaNamePrefix);
   loop
      fetch curs1 into schemaName;
      if not found then exit; end if;           
      count := count + 1;
      execute 'drop schema ' || schemaName || ' cascade;';
   end loop;  
   close curs1;
   return count;
end $$ language plpgsql;
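
-- example usage: drop everything the initializer below creates
-- select MTDB_destroy('tenant');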

-- MTDB_Initialize: create numberOfSchemas schemas, each with numberOfTablesPerSchema tables and, optionally, a ten-way self-join view per table; returns the number of tables created
create or replace function MTDB_Initialize (schemaNamePrefix varchar(100), numberOfSchemas integer, numberOfTablesPerSchema integer, createViewForEachTable boolean)
returns integer as $$
declare   
   currentSchemaId integer;
   currentTableId integer;
   currentSchemaName varchar(100);
   currentTableName varchar(100);
   currentViewName varchar(100);
   count integer;
begin
   -- clear
   perform MTDB_Destroy(schemaNamePrefix);

   count := 0;
   currentSchemaId := 1;
   loop
      currentSchemaName := schemaNamePrefix || ltrim(currentSchemaId::varchar(10));
      execute 'create schema ' || currentSchemaName;

      currentTableId := 1;
      loop
         currentTableName := currentSchemaName || '.' || 'table' || ltrim(currentTableId::varchar(10));
         execute 'create table ' || currentTableName || ' (f1 integer, f2 integer, f3 varchar(100), f4 varchar(100), f5 varchar(100), f6 varchar(100), f7 boolean, f8 boolean, f9 integer, f10 integer)';
         if (createViewForEachTable = true) then
            currentViewName := currentSchemaName || '.' || 'view' || ltrim(currentTableId::varchar(10));
            execute 'create view ' || currentViewName || ' as ' ||
                     'select t1.* from ' || currentTableName || ' t1 ' ||
             ' inner join ' || currentTableName || ' t2 on (t1.f1 = t2.f1) ' ||
             ' inner join ' || currentTableName || ' t3 on (t2.f2 = t3.f2) ' ||
             ' inner join ' || currentTableName || ' t4 on (t3.f3 = t4.f3) ' ||
             ' inner join ' || currentTableName || ' t5 on (t4.f4 = t5.f4) ' ||
             ' inner join ' || currentTableName || ' t6 on (t5.f5 = t6.f5) ' ||
             ' inner join ' || currentTableName || ' t7 on (t6.f6 = t7.f6) ' ||
             ' inner join ' || currentTableName || ' t8 on (t7.f7 = t8.f7) ' ||
             ' inner join ' || currentTableName || ' t9 on (t8.f8 = t9.f8) ' ||
             ' inner join ' || currentTableName || ' t10 on (t9.f9 = t10.f9) ';                    
         end if;
         currentTableId := currentTableId + 1;
         count := count + 1;
         if (currentTableId > numberOfTablesPerSchema) then exit; end if;
      end loop;   

      currentSchemaId := currentSchemaId + 1;
      if (currentSchemaId > numberOfSchemas) then exit; end if;     
   end loop;
   return count;
END $$ language plpgsql;

-- MTDB_RunTests: for the given number of rounds, select from every tenant view one by one; returns the total number of view queries executed
create or replace function MTDB_RunTests(schemaNamePrefix varchar(100), rounds integer)
returns integer as $$
declare
   curs1 cursor(prefix varchar) is select table_schema || '.' || table_name from information_schema.tables where table_schema like prefix || '%' and table_type = 'VIEW';
   currentViewName varchar(100);
   count integer;
begin
   count := 0;
   loop
      rounds := rounds - 1;
      if (rounds < 0) then exit; end if;

      open curs1(schemaNamePrefix);
      loop
         fetch curs1 into currentViewName;
         if not found then exit; end if;
         execute 'select * from ' || currentViewName;
         count := count + 1;
      end loop;
      close curs1;
   end loop;
   return count;  
end $$ language plpgsql;


2 Answers

Are these connections idle in transaction, or just idle? It sounds like unfinished transactions are holding onto memory, or maybe you've got a memory leak of some sort.
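For example, on 9.0 something like this shows sessions sitting in an open transaction (on 9.2 and later, look at the pid, state and query columns instead):

    select procpid, usename, xact_start, current_query
    from pg_stat_activity
    where current_query = '<IDLE> in transaction';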


Scott Marlowe


For people who find this thread when searching around (as I did): I ran into what appeared to be the same problem in a different context. Idle processes slowly consumed more and more memory until the OOM killer took them out (causing periodic DB crashes).

We traced the problem back to very long-running PHP scripts that kept one connection open for a long time. We were able to get the memory under control by periodically closing the connection and reconnecting.

From what I've read, Postgres does a lot of caching, so if you have one session hitting a lot of different tables/queries, this cached data can continue to grow and grow.

-Ken
