 

Testing custom ORM solution performance overhead - how to?

I have created a prototype of a custom ORM tool using aspect-oriented programming (PostSharp), achieving persistence ignorance (before compile time). Now I am trying to find out how much overhead it introduces compared to using a plain DataReader and ADO.NET. I made a test case: insert, read, and delete data (about 1000 records) in MS SQL Server 2008 and MySQL Community Edition. I ran this test multiple times using pure ADO.NET and my custom tool.

I expected the results to depend on many factors (memory, swapping, CPU, other processes), so I ran the tests many times (20-40). But the results were really unexpected: they just differed too much between runs. If there had been only a few extreme values, I could have ignored them (maybe swapping occurred or something like that), but the values varied so much that I am sure I cannot trust this kind of testing. Almost half the time my ORM showed 10% better performance than pure ADO.NET; the other times it was 10% worse.

Is there any way I can make these tests reliable? I do not have a powerful computer with lots of memory, but maybe I can somehow make MS SQL and MySQL (or ADO.NET) behave as consistently as possible during the tests? And what about the record count: which is more reliable, using a small number of records and running more iterations, or the other way around?

Asked Dec 30 '09 by JustAMartin


3 Answers

Have you seen ORMBattle.NET? See the FAQ there; it has some ideas related to measuring the performance overhead introduced by a particular ORM tool. The test suite is open source.

Concerning your results:

  • Some ORM tools automatically batch statement sequences (i.e. send several SQL statements together). If this feature is implemented well in the ORM, it is easy to beat plain ADO.NET by 2-4 times on CRUD operations when the ADO.NET test does not involve batching. The tests on ORMBattle.NET cover both cases (see the sketch after this list).
  • A lot depends on how you establish transaction boundaries there. Please refer to ORMBattle.NET FAQ for details.
  • CRUD tests aren't the best performance indicator at all. It is usually pretty easy to get close to peak performance here, since the RDBMS must do much more work than the ORM in this case.
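
To illustrate the batching point, here is a rough sketch in plain ADO.NET (not taken from the ORMBattle.NET suite; table and column names are made up) of the difference between one round trip per row and several statements per round trip:

```csharp
using System;
using System.Data.SqlClient;
using System.Text;

static class BatchingSketch
{
    // One INSERT per command: one network round trip per row.
    public static void InsertOneByOne(SqlConnection conn, int count)
    {
        for (int i = 0; i < count; i++)
        {
            using (var cmd = new SqlCommand(
                "INSERT INTO TestItems (Name) VALUES (@name)", conn))
            {
                cmd.Parameters.AddWithValue("@name", "item" + i);
                cmd.ExecuteNonQuery();
            }
        }
    }

    // Several INSERTs concatenated into one command text:
    // roughly what a batching ORM sends per round trip.
    public static void InsertBatched(SqlConnection conn, int count, int batchSize)
    {
        for (int i = 0; i < count; i += batchSize)
        {
            var sql = new StringBuilder();
            using (var cmd = new SqlCommand())
            {
                cmd.Connection = conn;
                for (int j = i; j < Math.Min(i + batchSize, count); j++)
                {
                    sql.AppendFormat(
                        "INSERT INTO TestItems (Name) VALUES (@name{0});", j);
                    cmd.Parameters.AddWithValue("@name" + j, "item" + j);
                }
                cmd.CommandText = sql.ToString();
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```

If your ADO.NET baseline uses the first pattern while the ORM effectively does the second (or vice versa), the comparison measures round trips more than ORM overhead.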

P.S. I'm one of the ORMBattle.NET authors, so if you're interested in details / possible contributions, you can contact me directly (or join the ORMBattle.NET Google Group).

Answered by Alex Yakunin


I would run the test for a longer duration and with many more iterations, as small differences will average out over time and you should get a clearer picture. Also, make sure you eliminate any external factors that may be affecting your test, such as other processes running, not enough free memory, cold start vs. warm start, network usage, etc.

Also, make sure that your database file and log file have enough free space allocated so you aren't waiting for the DB to grow the file during certain tests.
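
A minimal harness along these lines might look like the sketch below (the helper and test-method names are hypothetical; the actions would be your ADO.NET and ORM test cases). It does an untimed warm-up pass and reports the median, which is less sensitive to outliers than the mean:

```csharp
using System;
using System.Diagnostics;

static class Benchmark
{
    // Runs the action several times after a warm-up pass and reports
    // median, min and max wall-clock time.
    public static void Measure(string name, Action action, int iterations)
    {
        action(); // warm-up: JIT, connection pool, plan cache, warm buffers

        var timings = new double[iterations];
        for (int i = 0; i < iterations; i++)
        {
            var sw = Stopwatch.StartNew();
            action();
            sw.Stop();
            timings[i] = sw.Elapsed.TotalMilliseconds;
        }

        Array.Sort(timings);
        Console.WriteLine("{0}: median {1:F1} ms, min {2:F1} ms, max {3:F1} ms",
            name, timings[iterations / 2], timings[0], timings[iterations - 1]);
    }
}

// Usage (hypothetical test methods):
// Benchmark.Measure("ADO.NET insert", () => AdoNetTests.Insert1000Rows(), 50);
// Benchmark.Measure("Custom ORM insert", () => OrmTests.Insert1000Rows(), 50);
```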

Answered by John Lemp


First of all, you need to find out where the variance comes from: the ORM layer itself, or the database?

Many times the source of such variance is the database itself. Databases are very complex systems, with many active internal processes that can affect the results of performance measurements. To achieve reproducible results you'll have to place your database under 'laboratory' conditions and make sure nothing unexpected happens. What that means varies from vendor to vendor, and you need to know some pretty advanced topics to tackle something like this. For instance, on a SQL Server database the typical sources of variation are (a sketch for forcing a known cache state follows the list):

  • cold cache vs. warm cache (both data and procedures)
  • log and database growth events
  • maintenance jobs
  • ghost cleanup
  • lazy writer
  • checkpoints
  • external memory pressure
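
For example, here is a sketch (my own addition, not from the answer) of forcing a known cold-cache state on SQL Server before each measured run. CHECKPOINT flushes dirty pages, DBCC DROPCLEANBUFFERS empties the buffer pool, and DBCC FREEPROCCACHE clears cached plans; the last two require elevated permissions and should only ever be run against a test server:

```csharp
using System.Data.SqlClient;

static class SqlServerLabConditions
{
    // Puts SQL Server into the same (cold) state before every measured run.
    // Requires sysadmin rights; never run this against a production server.
    public static void ForceColdCache(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "CHECKPOINT; DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE;", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```

The opposite approach, warming the cache with an untimed run and measuring only warm iterations, is equally valid; what matters is that every run starts from the same state.
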
Answered by Remus Rusanu