
How can I improve performance in the following scenario?

I have a table with a certain number of columns. I applied an algorithm and was able to divide the existing table into five tables: base, card_type, country, cvv, … (the image of the resulting tables is omitted here).

The original STSI table had the attributes id, name, phone, email, branch, country, ac_no, credit_card, card_type, cvv. After applying the algorithm, the base table has id, name, email, branch, ac_no, credit_card, phone. The remaining attributes, card_type, country and cvv, each get a separate table. Take the cvv table, for example: its attributes are id and cvv, where id refers back to the primary key of the base table.

This reduced the number of rows in the new tables: cvv has 7829 rows instead of the 9000 in STSI, because the NULLs in STSI are simply not stored. So performance improved with respect to space, but I have not been able to reduce the query time.
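For concreteness, here is a minimal sketch of the decomposition in MySQL DDL; the column types are assumptions, since they are not given above:

-- Sketch only: all column types are assumed.
CREATE TABLE base (
    id          INT PRIMARY KEY,
    name        VARCHAR(100),
    phone       VARCHAR(20),
    email       VARCHAR(100),
    branch      VARCHAR(50),
    ac_no       VARCHAR(30),
    credit_card VARCHAR(20)
);

-- One table per formerly-nullable attribute; a row exists only
-- where STSI actually had a value, which is why cvv ends up with
-- 7829 rows instead of 9000.
CREATE TABLE cvv (
    id  INT PRIMARY KEY,
    cvv CHAR(3),
    FOREIGN KEY (id) REFERENCES base (id)
);
-- card_type and country follow the same pattern.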

I expected the new tables to have lower query times, since they have relatively fewer rows, but I am not seeing any performance gain. I have tried indexing, but it did not result in any improvement. What can I do to reduce query time on the new tables?
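To see what the optimizer is doing, EXPLAIN shows the access path MySQL chooses (a sketch; the output columns vary slightly by MySQL version):

EXPLAIN SELECT id, cvv FROM cvv;
-- type = ALL   means a full table scan;
-- type = index means the query is answered by scanning an index.
-- With no WHERE clause, every row must be read either way, so an
-- index cannot reduce the row count for this particular query.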

PS: The queries are:

select id, cvv from stsi;  -- 0.0005 seconds
select id, cvv from cvv;   -- 0.0005 seconds

I was hoping the second query would take less time!

asked Apr 15 '15 by gates

1 Answer

At 0.5 ms, the limiting factor is likely the system's baseline response time (disk reads, CPU processing, etc.) and not the query itself. No amount of query optimization is going to reduce that overhead.

As a general rule, for simple select queries (select val1, val2 from table), the biggest drivers of performance are the underlying system configuration (mainly disk configuration and memory availability) and the database design.

Good indexing can help query response time by reducing the amount of data that has to be read to produce results. In your example, placing an index on the cvv table consisting of id and cvv would likely yield faster responses as your dataset grows.
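For example, a covering index on (id, cvv) lets MySQL answer the query from the index alone; this is a sketch using the table name from the question:

ALTER TABLE cvv ADD INDEX idx_id_cvv (id, cvv);

-- With this "covering" index, the query
--   SELECT id, cvv FROM cvv;
-- can be satisfied entirely from the index (EXPLAIN reports
-- "Using index" in the Extra column), without reading row data.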

I assume, based on your bolding, that your question stems from the fact that STSI has more rows than cvv, so you expected cvv to be faster. The reality is that you are likely hitting the first constraint (system configuration), not database design.

Half a millisecond is damn fast. I don't know that you should expect to see anything faster on consumer-grade hardware, even if you were comparing a 9000-row table to a 9-row table.
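If you want to compare the two queries anyway, time them with the query cache bypassed and with the profiler's microsecond resolution (a sketch; SHOW PROFILES works on MySQL 5.x but is deprecated in newer versions):

SET profiling = 1;
SELECT SQL_NO_CACHE id, cvv FROM stsi;
SELECT SQL_NO_CACHE id, cvv FROM cvv;
SHOW PROFILES;  -- per-query durations with microsecond resolution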

answered Sep 28 '22 by TDavis