CASE 1: I have a table with 30 columns and I query using 4 columns in the where clause.
CASE 2: I have a table with 6 columns and I query using 4 columns in the where clause.
What is the difference in performance in both cases?
For example, I have these tables:

CREATE TABLE A (b varchar(10), c varchar(10), d varchar(10), e varchar(10), f varchar(10), g varchar(10), h varchar(10));
SELECT b,c,d FROM A WHERE f='foo';

CREATE TABLE B (b varchar(10), c varchar(10), d varchar(10), e varchar(10), f varchar(10));
SELECT b,c,d FROM B WHERE f='foo';
Both tables have the same structure; the only difference is the number of columns. The column used in the WHERE condition is the same, and the columns in the SELECT are the same. One table simply has some extra columns that are used in neither the SELECT nor the WHERE condition. In that case, is there any difference in the performance of the two queries?
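A runnable sketch of the two tables and queries (SQLite via Python is used here only for convenience; the row contents and the 10%-match filter are invented for illustration):

```python
import sqlite3

# Sketch of the question's two tables (hypothetical data; SQLite used
# here only because it ships with Python -- the question is engine-agnostic).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (b TEXT, c TEXT, d TEXT, e TEXT, f TEXT, g TEXT, h TEXT)")
conn.execute("CREATE TABLE B (b TEXT, c TEXT, d TEXT, e TEXT, f TEXT)")

# Same rows in the 5 shared columns; A just carries two extra unused columns.
for i in range(1000):
    row = (f"b{i}", f"c{i}", f"d{i}", f"e{i}", "foo" if i % 10 == 0 else f"f{i}")
    conn.execute("INSERT INTO A VALUES (?,?,?,?,?,?,?)", row + ("g", "h"))
    conn.execute("INSERT INTO B VALUES (?,?,?,?,?)", row)

# Both queries return identical result sets; only the physical work differs.
rows_a = conn.execute("SELECT b,c,d FROM A WHERE f='foo'").fetchall()
rows_b = conn.execute("SELECT b,c,d FROM B WHERE f='foo'").fetchall()
assert rows_a == rows_b
```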
Answer: Yes. The size of the row (dba_tables.avg_row_len) and the size of the data blocks affect SQL retrieval speed, and this is especially important for data warehouse and decision support system "fact" tables, which often have many dozens of columns.
Yes the number of columns will - indirectly - influence the performance. The data in the columns will also affect the speed.
It could be negligible, but if you are near the boundary between being able to cache the entire table in memory or not, a few extra columns could make a big difference to the execution speed.
MySQL has a hard limit of 4096 columns per table, but the effective maximum may be less for a given table.
Does the total number of columns in a table impact performance (if the same subset of columns is selected, and if there are no indices on the table)?
Yes, marginally. With no indexes at all, both queries (against Table A and Table B) will do table scans. Given that Table B has fewer columns than Table A, the rows per page (density) will be higher on B, and so B will be marginally quicker, as fewer pages need to be fetched.
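The table-scan claim can be checked directly in the query plans. A minimal SQLite sketch (empty copies of the two tables are enough; the plan itself shows the scan):

```python
import sqlite3

# Empty copies of the two tables, no indexes: the plan for each query is a
# full table scan. (SQLite; detail strings vary slightly between versions.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (b TEXT, c TEXT, d TEXT, e TEXT, f TEXT, g TEXT, h TEXT)")
conn.execute("CREATE TABLE B (b TEXT, c TEXT, d TEXT, e TEXT, f TEXT)")

plans = {}
for table in ("A", "B"):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    plans[table] = conn.execute(
        f"EXPLAIN QUERY PLAN SELECT b,c,d FROM {table} WHERE f='foo'"
    ).fetchone()[3]
    print(table, "->", plans[table])  # e.g. "SCAN A" or "SCAN TABLE A"
```

The plan only tells you *that* both queries scan; the density argument (fewer pages for B) is about how much data each scan has to read.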
However, given that your queries are of the form:
SELECT b,c,d FROM X WHERE f='foo';
the performance of the query will be dominated by the indexing on column f, rather than by the number of columns in the underlying tables.
For the OP's exact queries, the fastest performance will result from the following indexing:
A(f) INCLUDE (b,c,d)
B(f) INCLUDE (b,c,d)
Irrespective of the number of columns in Table A or Table B, with the above indexes in place, performance should be identical for both queries (assuming the same number of rows and similar data in both tables), given that SQL will hit the indexes which are now of similar column widths and row densities, without needing any additional data from the original table.
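INCLUDE is SQL Server syntax; engines without it (SQLite is used below because it makes a self-contained sketch) can approximate the covering index by putting b, c and d into the index key, which still lets the optimizer answer the query from the index alone:

```python
import sqlite3

# SQLite has no INCLUDE clause, so the covering index B(f) INCLUDE (b,c,d)
# is approximated with a composite-key index on (f, b, c, d).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE B (b TEXT, c TEXT, d TEXT, e TEXT, f TEXT)")
conn.execute("CREATE INDEX ix_b_f_bcd ON B (f, b, c, d)")

detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT b,c,d FROM B WHERE f='foo'"
).fetchone()[3]
print(detail)  # e.g. "SEARCH B USING COVERING INDEX ix_b_f_bcd (f=?)"
```

"COVERING INDEX" in the plan is SQLite's way of saying the base table is never touched, which is exactly the property the INCLUDE indexes above achieve in SQL Server.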
Does the number of columns in the select affect query performance?
The main benefit of returning fewer columns in a SELECT is that SQL might be able to avoid reading from the table / cluster altogether, and instead retrieve all of the selected data from an index (either from indexed columns and / or from included columns, in the case of a covering index).
Obviously, the columns used in the predicate (the WHERE filter), i.e. f in your example, MUST be among the index's key columns, and the data distribution must be sufficiently selective, in order for an index to be used in the first place.
There is also a secondary benefit to returning fewer columns from a SELECT, as this reduces I/O overhead, especially if there is a slow network between the database server and the app consuming the data. In other words, it is good practice to only ever return the columns you actually need, and to avoid using SELECT *.
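A small sketch of this effect (SQLite again, with a composite index on (f,b,c,d) standing in for a covering index, since SQLite has no INCLUDE): the narrow SELECT is satisfied by the index alone, while SELECT * also needs column e and so must go back to the table:

```python
import sqlite3

# Narrow SELECT vs SELECT * against the same index. The narrow query can be
# answered entirely from the index; SELECT * needs column e, which is not in
# the index, so the table must be visited as well.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE B (b TEXT, c TEXT, d TEXT, e TEXT, f TEXT)")
conn.execute("CREATE INDEX ix_b ON B (f, b, c, d)")

narrow = conn.execute("EXPLAIN QUERY PLAN SELECT b,c,d FROM B WHERE f='foo'").fetchone()[3]
wide = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM B WHERE f='foo'").fetchone()[3]
print(narrow)  # covering: no table access needed
print(wide)    # index search, but rows are also fetched from the table
```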
Edit
Some other plans:
B(f) with no other key or INCLUDE columns, or with an incomplete set of INCLUDE columns (i.e. one or more of b, c or d missing): SQL Server will likely need to do a Key or RID Lookup, because even if the index is used, it will still need to "join" back to the table to retrieve the missing columns in the SELECT clause. (The lookup type depends on whether the table has a clustered PK or not.)
B(f,b,c,d): This will still be very performant, as the index will be used and the table avoided, but it won't be quite as good as the covering index, because the density of the index tree will be lower due to the additional key columns in the index.
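The B(f)-only case translates directly to SQLite: the index on f is used to find matching rows, but the plan is not marked "covering", so b, c and d must be fetched from the table via SQLite's rowid lookup, analogous to SQL Server's Key/RID Lookup:

```python
import sqlite3

# Index on f alone: used for the search, but not covering, so every matching
# row still requires a lookup back into the base table for b, c and d.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE B (b TEXT, c TEXT, d TEXT, e TEXT, f TEXT)")
conn.execute("CREATE INDEX ix_f ON B (f)")

detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT b,c,d FROM B WHERE f='foo'"
).fetchone()[3]
print(detail)  # e.g. "SEARCH B USING INDEX ix_f (f=?)" -- note: no "COVERING"
```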
Test it and see!
There will be a performance difference; however, 99% of the time you won't notice it - usually you won't even be able to detect it!
You can't even guarantee that the table with fewer columns will be quicker - if it's bothering you, then try it and see.
Technical rubbish: (from the perspective of Microsoft SQL Server)
With the assumption that in all other respects (indexes, row counts, the data contained in the 6 common columns etc...) the tables are identical, then the only real difference will be that the larger table is spread over more pages on disk / in memory.
SQL Server only attempts to read the data it absolutely requires; however, it will always load an entire page at a time (8 KB). Even if exactly the same amount of data is required as the output of the query, more I/O is needed when that data is spread over more pages.
That said, SQL Server is incredibly efficient in its data access, so you are very unlikely to see a noticeable impact on performance except in extreme circumstances.
Besides, it is also likely that your query will be run against an index rather than the table anyway, and so with indexes of exactly the same size, the difference is likely to be zero.
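To put rough numbers on the page argument, a back-of-envelope sketch (the ~20 bytes per column is an invented average; real row layouts also include per-row headers and variable-length overhead):

```python
# Illustrative only: assume ~20 bytes of data per column and ignore
# per-row storage overhead.
PAGE_BYTES = 8 * 1024   # SQL Server page size
COL_BYTES = 20          # assumed average column width

rows_per_page_wide = PAGE_BYTES // (30 * COL_BYTES)    # 30-column table
rows_per_page_narrow = PAGE_BYTES // (6 * COL_BYTES)   # 6-column table

print(rows_per_page_wide, rows_per_page_narrow)  # 13 vs 68 rows per page
```

Under these assumed widths, a full scan of the narrow table touches roughly a fifth as many pages for the same row count.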