
Performance of Firebase with large data sets

Tags:

firebase

I'm testing Firebase for a project that may have a reasonably large number of keys, potentially millions.

I've tested loading a few tens of thousands of records using Node, and load performance appears good. However, the "Forge" web UI becomes unusably slow and renders every single record when I expand my root node.

Is Firebase not designed for this volume of data, or am I doing something wrong?

asked Apr 26 '13 by winjer



1 Answer

It's simply the limitations of the Forge UI. It's still fairly rudimentary.

The real-time functions in Firebase are not only suited for, but designed for large data sets. The fact that records stream in real-time is perfect for this.

Performance is, as with any large data app, only as good as your implementation. So here are a few gotchas to keep in mind with large data sets.

DENORMALIZE, DENORMALIZE, DENORMALIZE

If a data set will be iterated, and its records can be counted in thousands, store it in its own path.

This is bad for iterating large data sets:

/users/uid
/users/uid/profile
/users/uid/chat_messages
/users/uid/groups
/users/uid/audit_record

This is good for iterating large data sets:

/user_profiles/uid
/user_chat_messages/uid
/user_groups/uid
/user_audit_records/uid
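The split above can be sketched as a plain transform. This is not Firebase's API, just a hypothetical helper (the path names and record shape are illustrative) that turns one nested user record into the flat, top-level paths recommended here:

```javascript
// Sketch: split a nested user record into top-level paths so each list
// can be iterated on its own. Paths and record shape are hypothetical.
function denormalizeUser(uid, user) {
  return {
    ["user_profiles/" + uid]: user.profile,
    ["user_chat_messages/" + uid]: user.chat_messages,
    ["user_groups/" + uid]: user.groups,
    ["user_audit_records/" + uid]: user.audit_record
  };
}

// Each entry could then be written to its own location with the Firebase
// client (left as a comment since it needs a live connection):
//   var updates = denormalizeUser("uid1", user);
//   Object.keys(updates).forEach(function (path) {
//     new Firebase("https://example.firebaseio.com/" + path).set(updates[path]);
//   });
```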

Avoid 'value' on large data sets

Use child_added, since value must load the entire record set to the client.
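To make the contrast concrete, here is a sketch. The event names 'value' and 'child_added' are the real Firebase events; the ref URL and message shape are made up for illustration:

```javascript
// ref.on("value", cb)        -> cb receives the ENTIRE record set at once
// ref.on("child_added", cb)  -> cb receives one child at a time, streamed
//
// A per-child handler only ever touches one record. The argument mimics a
// Firebase snapshot: a key plus a val() accessor for the record body.
function formatMessage(snap) {
  return snap.key + ": " + snap.val().text;
}

// Attaching it (requires the firebase client and a live ref):
//   var ref = new Firebase("https://example.firebaseio.com/user_chat_messages/uid");
//   ref.on("child_added", function (snap) {
//     console.log(formatMessage(snap));
//   });
```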

Watch for hidden value operations on children

When you listen with child_added, you are essentially calling value on every child record. So if those children contain large lists, all of that data has to load before the callback fires. Hence the DENORMALIZE section above.
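One common mitigation, sketched here with hypothetical paths: keep a shallow index of child keys alongside the full records, iterate the index with child_added (transferring only keys), and fetch each full record on demand:

```javascript
// Build a shallow index (key -> true) from a map of full records.
// Iterating the index with child_added transfers keys, not record bodies.
function buildIndex(records) {
  var index = {};
  Object.keys(records).forEach(function (key) {
    index[key] = true;
  });
  return index;
}

// Hypothetical layout: full bodies live at /chat_messages/$id, while
// /user_chat_message_index/uid holds only { $id: true } entries.
//   new Firebase(".../user_chat_message_index/uid").set(buildIndex(messages));
// Then fetch a single body only when it is actually needed:
//   new Firebase(".../chat_messages/" + id).once("value", showMessage);
```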

answered Sep 20 '22 by Kato