Is MySQL database access speed limited primarily by the db, or by the language used to access it?

I need to update a large db quickly. It may be easier to code in a scripting language but I suspect a C program would do the update faster. Anybody know if there have been comparative speed tests?

asked May 19 '11 by asdfasd

3 Answers

It wouldn't, not meaningfully. Update speed depends on:

  • database configuration (storage engine used, server settings)
  • server hardware, especially the HDD subsystem
  • network bandwidth between source and target machine
  • amount of data transferred

I suspect you're worried about that last part - the amount of data transferred.

Any scripting language will be fast enough to deliver the data. If you have a large amount of data that you need to parse or transform quickly, then yes, C would definitely be the language of choice. But if you're just sending simple string data to the db, there's no point: writing a simple C program for an UPDATE isn't hard - it's roughly on par with using PHP's mysql_ functions in complexity - but it won't run noticeably faster either.
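To illustrate how little the language contributes, here is a minimal sketch of issuing a parameterized UPDATE from a scripting language. It uses Python's built-in sqlite3 module as a local stand-in for a MySQL connection (an assumption for testability; a MySQL driver exposes the same connect / cursor / execute / commit shape):

```python
import sqlite3

# In-memory database as a stand-in for a MySQL server connection.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, status TEXT)")
cur.execute("INSERT INTO users (id, status) VALUES (1, 'pending')")

# The update itself is one parameterized statement; the client language
# adds almost nothing compared to the round trip and server-side work.
cur.execute("UPDATE users SET status = ? WHERE id = ?", ("active", 1))
conn.commit()

status = cur.execute("SELECT status FROM users WHERE id = 1").fetchone()[0]
print(status)  # active
```

The equivalent C program would be longer only because of connection boilerplate; the statement sent over the wire, and the work the server does with it, would be identical.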

answered Oct 21 '22 by N.B.


Are you concerned about speed because you're already dealing with a situation where speed is a problem, or are you just planning ahead?

I can say comfortably that DB interactions are generally constrained by IO, network bandwidth, memory, database traffic, SQL complexity, database configuration, indexing issues, and the quantity of data being selected far more than by the choice of a scripting language versus C.

When you run into bottlenecks, they'll almost always be solved by a better algorithm, smarter use of indexes, faster IO devices, more caching... those sorts of things (beginning with algorithms).
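As a concrete example of the kind of change that matters far more than language choice, here is a hedged sketch comparing row-at-a-time statements against a single batched update inside one transaction (again using Python's sqlite3 as a local stand-in; the names and row counts are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER)")
cur.executemany("INSERT INTO items (id, qty) VALUES (?, 0)",
                [(i,) for i in range(1000)])
conn.commit()

# Naive approach: one execute (and commit) per row is dominated by
# per-statement and per-commit overhead, whatever the language.
# Better approach: batch all rows into one call inside one transaction.
cur.executemany("UPDATE items SET qty = ? WHERE id = ?",
                [(i * 2, i) for i in range(1000)])
conn.commit()

updated = cur.execute(
    "SELECT COUNT(*) FROM items WHERE qty = id * 2").fetchone()[0]
print(updated)  # 1000
```

Rewriting the naive loop in C leaves the per-statement overhead in place; batching removes it in any language.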

The fourth component of LAMP is a scripting language, after all. When fine-tuning, memcached becomes an option, as do persistent interpreters (such as mod_perl in a web environment, for example).

answered Oct 21 '22 by DavidO


The majority of the cost in database transactions lies on the database side. Parsing and compiling your SQL statement and evaluating the query execution plan cost far more than any difference attributable to the language of the client that sent it.

It's rare for the application's CPU usage during database-intensive work to be a bigger factor than the database server's CPU use or the disk speed of that server.

Unless your applications are long-running and don't wait on the database, I wouldn't worry about benchmarking them. If they do need benchmarking, you should do it yourself. Data use cases vary wildly and you need your own numbers.
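Doing your own benchmarking can be as simple as timing your real update pattern with `time.perf_counter`. A minimal sketch (sqlite3 stands in for the database, and the table, column names, and row count are assumptions; the point is the harness, not the numbers):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, code TEXT, v INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?, 0)",
                [(i, f"c{i}") for i in range(2000)])
conn.commit()

def time_updates(column):
    """Time 2000 single-row updates keyed on the given column.
    (String-formatting the column name is fine for a benchmark harness,
    but values always go through placeholders.)"""
    start = time.perf_counter()
    for i in range(2000):
        key = i if column == "id" else f"c{i}"
        cur.execute(f"UPDATE t SET v = 1 WHERE {column} = ?", (key,))
    conn.commit()
    return time.perf_counter() - start

t_indexed = time_updates("id")    # primary key: index lookup per row
t_scan = time_updates("code")     # no index: full table scan per row
print(f"indexed: {t_indexed:.4f}s  unindexed: {t_scan:.4f}s")
```

Run against your own schema and data volumes, a harness like this answers the question directly, which no general-purpose language comparison can.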

answered Oct 21 '22 by Jeff Ferland