Oracle preferred column lengths

Does the choice of a column's declared length (for example, whether it is a power of 2) somehow influence database performance?

In other words, what is the difference between the performance of the following two tables:

TBL1:
  - CLMN1 VARCHAR2(63)
  - CLMN2 VARCHAR2(129)
  - CLMN3 VARCHAR2(250)

and

TBL2:
  - CLMN1 VARCHAR2(64)
  - CLMN2 VARCHAR2(128)
  - CLMN3 VARCHAR2(256)

Should we always attempt to make a column's length a power of 2, or does only the maximum size matter?

Some developers claim that the declared lengths of the columns matter, because they influence how Oracle distributes and saves the data on disk and how it shares its cache in memory. Can someone prove or disprove this?

Asked Jan 17 '13 by Andremoniy
1 Answer

There is no difference in performance, and there are no hidden optimizations for power-of-2 lengths.

The only thing that does make a difference in how things are stored is the actual data. 100 characters stored in a VARCHAR2(2000) column are stored exactly the same way as 100 characters stored in a VARCHAR2(500) column.
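To see this for yourself, you can compare the actual stored sizes with the VSIZE function. A minimal sketch (the table and column names are only for illustration):

create table t_len_test (
  short_col varchar2(500),
  long_col  varchar2(2000)
);

-- store the same 100-character value in both columns
insert into t_len_test (short_col, long_col)
values (rpad('X', 100, 'X'), rpad('X', 100, 'X'));

-- VSIZE returns the number of bytes Oracle actually stores for a value;
-- both columns report the same size, regardless of the declared maximum
select vsize(short_col) as short_bytes,
       vsize(long_col)  as long_bytes
  from t_len_test;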

Think of the length as a business constraint, not as part of the data type. The only thing that should drive your decision about the length is the business constraint on the data that is put in there.

Edit: the only situation where the length does make a difference is when you need an index on that column. Older Oracle versions (< 10) had a limit on the key length, and that was checked when creating the index.

Even though it's possible in Oracle 11, it might not be the wisest choice to have an index on a value with 4000 characters.
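If such a wide value really needs to be searchable, a common alternative is to index only a prefix of it with a function-based index. A sketch with made-up names:

create table documents (body varchar2(4000));

-- index only the first 100 characters instead of the full value
create index ix_documents_body_prefix
    on documents (substr(body, 1, 100));

-- a query has to repeat the indexed expression so the optimizer can use the index
select *
  from documents
 where substr(body, 1, 100) = substr(:search_value, 1, 100)
   and body = :search_value;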

Edit 2:

So I was curious and set up a simple test:

create table narrow (id varchar(40));
create table wide (id varchar(4000));

Then I filled both tables with strings of 40 'X' characters (see the sketch below for one way to do that). If there was indeed a (substantial) difference in storage, it should show up somehow when retrieving the data, right?
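
A sketch of how the tables could have been filled (not necessarily the exact script used): seed one row and double it 20 times to reach 2^20 rows, then copy the result into the second table.

insert into narrow (id) values (rpad('X', 40, 'X'));

begin
  -- double the row count 20 times: 2^20 = 1048576 rows
  for i in 1 .. 20 loop
    insert into narrow (id) select id from narrow;
  end loop;
end;
/

-- copy the same rows into the wide table
insert into wide (id) select id from narrow;
commit;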

Both tables have exactly 1048576 rows.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set autotrace traceonly statistics
SQL> select count(*) from wide;


Statistics
----------------------------------------------------------
          0  recursive calls
          1  db block gets
       6833  consistent gets
          0  physical reads
          0  redo size
        349  bytes sent via SQL*Net to client
        472  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select count(*) from narrow;


Statistics
----------------------------------------------------------
          0  recursive calls
          1  db block gets
       6833  consistent gets
          0  physical reads
          0  redo size
        349  bytes sent via SQL*Net to client
        472  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

So the full table scan did exactly the same amount of work for both tables. What happens when we actually select the data?

SQL> select * from wide;

1048576 rows selected.


Statistics
----------------------------------------------------------
          4  recursive calls
          2  db block gets
      76497  consistent gets
          0  physical reads
          0  redo size
   54386472  bytes sent via SQL*Net to client
     769427  bytes received via SQL*Net from client
      69907  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1048576  rows processed

SQL> select * from narrow;

1048576 rows selected.


Statistics
----------------------------------------------------------
          4  recursive calls
          2  db block gets
      76485  consistent gets
          0  physical reads
          0  redo size
   54386472  bytes sent via SQL*Net to client
     769427  bytes received via SQL*Net from client
      69907  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1048576  rows processed

SQL>

There is a slight difference in consistent gets, but that could be due to caching.
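
To rule out caching and look at the storage footprint directly, you could also compare the allocated segment sizes. A sketch (the exact numbers depend on tablespace and extent settings):

-- both segments should be (roughly) the same size after the identical fill
select segment_name, blocks, bytes
  from user_segments
 where segment_name in ('NARROW', 'WIDE');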

Answered Oct 22 '22 by a_horse_with_no_name