 

Transferring lots of objects with Guid IDs to the client

I have a web app that uses Guids as the PK in the DB for an Employee object and an Association object.

One page in my app returns a large amount of data showing all Associations all Employees may be a part of.

So right now, I am sending to the client essentially a bunch of objects that look like:

{association_id: guid, employees: [guid1, guid2, ..., guidN]}

It turns out that many employees belong to many associations, so I am sending down the same Guids for those employees over and over again in these different objects. For example, it is possible that I am sending down 30,000 total guids across all associations in some cases, of which there are only 500 unique employees.

I am wondering if it is worth me building some kind of lookup index that I also send to the client like

{ 1: Guid1, 2: Guid2 ... } 

and replacing all of the Guids in the objects I send down with those ints,

or if simply gzipping the response will compress it enough that this extra effort is not worth it?

Note: please don't get caught up in the details of if I should be sending down 30,000 pieces of data or not -- this is not my choice and there is nothing I can do about it (and I also can't change Guids to ints or longs in the DB).

Davis Dimitriov asked Mar 15 '12 14:03

3 Answers

You wrote the following at the end of your question:

Note: please don't get caught up in the details of if I should be sending down 30,000 pieces of data or not -- this is not my choice and there is nothing I can do about it (and I also can't change Guids to ints or longs in the DB).

I think that is your main problem. Without solving it you might reduce the size of the transferred data by a factor of ten, for example, but the underlying problem would remain. So let us think about the question: why does so much data need to be sent to the client (the web browser)?

The data on the client side is needed to display information to the user. No monitor is large enough to show 30,000 items on one page, and no user can grasp that much information at once. So I am sure you display only a small part of the information, and in that case you should send only the small part that you actually display.

You don't describe how the guids will be used on the client side. If you need the information during row editing, for example, you can transfer the data only when the user starts editing; in that case you need to transfer the data for just one association.

If you need to display the guids directly, you still can't display all the information at once, so you can send the information for one page only. When the user scrolls or clicks the "next page" button, you can send the next portion of data. In this way you can dramatically reduce the size of the transferred data.
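To illustrate the paging idea, here is a minimal sketch of a server-side helper (the names and page size are illustrative, not from the original app) that returns one page of associations per request instead of the whole payload:

```javascript
// Hypothetical server-side helper: return only one page of associations
// per request instead of the full multi-thousand-guid payload.
function getPage(items, pageNumber, pageSize) {
  const start = pageNumber * pageSize;
  return {
    page: pageNumber,
    total: items.length,                       // lets the client render paging controls
    items: items.slice(start, start + pageSize),
  };
}

// The client asks for the next page only when the user scrolls or
// clicks "next page".
const associations = Array.from({ length: 95 }, (_, i) => ({ id: i }));
const firstPage = getPage(associations, 0, 20);
// firstPage.items.length === 20, firstPage.total === 95
```

The same function works unchanged for a "load more on scroll" design; only the trigger on the client differs.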

If you have no possibility to redesign that part of the application, you can implement your original suggestion: by replacing a GUID such as "{7EDBB957-5255-4b83-A4C4-0DF664905735}" or "7EDBB95752554b83A4C40DF664905735" with a number like 123, you reduce its size from 34 characters (a quoted 32-character string) to 3. If you additionally send an array of "guid mapping" elements like

123:"7EDBB95752554b83A4C40DF664905735",

you can reduce the original data size of 30000 * 34 = 1,020,000 characters (about 1 MB) to 500 * 39 + 30000 * 3 = 19,500 + 90,000 = 109,500 characters (about 107 KB), using the 500 unique employees from your example. So you reduce the data to roughly a tenth of its size. Enabling compression of dynamic data on the web server can reduce the size further.

In any case you should examine why your page is so slow. If the application runs in a LAN, then transferring even 1 MB of data can be quick enough; the page is probably slow while placing the data on it. I mean the following: if you modify an element on the page, the positions of all existing elements have to be recalculated. If you instead work with disconnected DOM objects first and then place the whole portion of data on the page in one step, you can improve the performance dramatically. You didn't post which technology you use in your web application, so I don't include any examples; if you use jQuery, for example, I could give an example that makes clearer what I mean.
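As a framework-neutral sketch of the "work disconnected, insert once" idea (the markup and element id below are invented for illustration): build the markup for all rows in memory first, then touch the live DOM a single time, so the browser reflows once instead of once per row.

```javascript
// Build all the markup off-DOM as a string...
function renderAssociations(associations) {
  const rows = associations.map(
    (a) => `<li>${a.association_id} (${a.employees.length} employees)</li>`
  );
  return `<ul>${rows.join('')}</ul>`;
}

// ...then, in the browser, perform a single DOM write, e.g.:
// document.getElementById('assoc-list').innerHTML = renderAssociations(data);
// With jQuery the equivalent would be one $('#assoc-list').html(...) call
// rather than 30,000 individual append() calls.
```

The same effect can be achieved with a detached `DocumentFragment`; the key point is one insertion, not many.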

Oleg answered Nov 15 '22 06:11


The lookup index you propose is nothing other than a "custom" compression scheme. As amdmax stated, this will improve performance if you have many repeated GUIDs, but so will gzip.

IMHO, the extra effort of writing the custom coding will not be worth it.

Oleg correctly states that it might be worth fetching the data only when the user needs it. But this of course depends on your specific requirements.

gzm0 answered Nov 15 '22 07:11


"... or if simply gzipping the response will compress it enough that this extra effort is not worth it?"

The answer is: yes, it will.

Compressing the data removes the redundant parts as well as possible (depending on the algorithm); the redundancy is only restored on decompression.

To be sure, just generate the data both uncompressed and compressed and compare the results. You can count the duplicate GUIDs to calculate how big your data block would be with the dictionary compression method, but I suspect gzip will do better because it also compresses the syntactic elements like braces, colons, etc. inside your data object.

Tarion answered Nov 15 '22 08:11