"modern" because that definition may change over time (and specifically I mean desktop browsers)
"handle" because that may vary depending on machine configurations/memory, but specifically I mean a general use case.
This question came to mind over a particular problem I'm trying to solve involving large datasets.
Essentially, whenever a change is made to a particular dataset I get the full dataset back and I have to render this data in the browser.
So, for example, over a websocket I get a push event telling me a dataset has changed, and then I have to render that dataset in HTML: grab an existing DOM element that serves as a template, clone it, populate the clone with data from the set using class names or other element identifiers, and append it back to the DOM.
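For concreteness, the current per-update render looks roughly like this (a simplified sketch; the element ids, class names, and payload fields are placeholders rather than my real markup, and `socket` stands in for however the websocket is wired up):

```javascript
// Sketch of the current approach: on every push, rebuild the whole set by
// cloning a template row, filling it in by class name, and appending it.
const template = document.getElementById('row-template');
const container = document.getElementById('dataset-container');

function renderDataset(items) {
  container.innerHTML = '';                      // throw away the previous render
  for (const item of items) {
    const row = template.cloneNode(true);        // deep clone of the template element
    row.removeAttribute('id');                   // avoid duplicate ids in the document
    row.querySelector('.name').textContent = item.name;
    row.querySelector('.value').textContent = item.value;
    container.appendChild(row);                  // one DOM insertion per item
  }
}

// Called whenever the websocket pushes a full copy of the dataset.
socket.addEventListener('message', (event) => {
  renderDataset(JSON.parse(event.data).items);
});
```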
Keep in mind that any object (JSON) in this dataset may have as many as 1,000+ child objects, and there may be as many as 10,000+ parent objects, so there may be instances where the returned dataset contains upwards of 1,000,000 to 10,000,000 data points, or more.
Now the fun part comes when I have to render this stuff. For each data point there may be 3 or 4 tags used to render and style the data, and there may be event listeners on any of these tags (possibly attached to the parent container via delegation to lighten things up).
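The delegation I'm describing is just one listener on the container instead of one per row, along these lines (sketch only; the class name and handler are made up):

```javascript
// One click listener on the parent container covers every row and tag inside it.
const container = document.getElementById('dataset-container');

container.addEventListener('click', (event) => {
  // closest() walks up from whatever tag was clicked to the row that owns it.
  const row = event.target.closest('.row');
  if (!row || !container.contains(row)) return;
  console.log('clicked row', row.dataset.id);
});
```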
To sum it all up, there can be a lot of incoming information that needs to be rendered and I'm trying to figure out the best way to handle this scenario.
Ideally, you'd just want to render the changes for that single data point that has changes rather than re-rendering the whole set, but this may not be an option due to how the backend was designed.
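If the payload at least carried stable ids, the ideal version would patch only the rows that changed, something like this (a rough sketch; it assumes each item has `id` and `version` fields, each rendered row carries matching `data-id`/`data-version` attributes, and `renderRow` stands in for whatever builds a single row element):

```javascript
// Patch only the rows whose data actually changed, keyed by id.
function patchDataset(items) {
  const container = document.getElementById('dataset-container');
  // Index the existing rows once so each lookup is O(1) even for huge sets.
  const rowsById = new Map(
    Array.from(container.children, (row) => [row.dataset.id, row])
  );
  for (const item of items) {
    const row = rowsById.get(String(item.id));
    if (!row) {
      container.appendChild(renderRow(item));                  // brand-new item
    } else if (row.dataset.version !== String(item.version)) {
      row.querySelector('.value').textContent = item.value;    // item changed
      row.dataset.version = String(item.version);
    }
    // unchanged rows are skipped entirely, so there is nothing to re-layout
  }
}
```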
My main concern here is to understand the limitations of the browser/DOM and to look at this problem through the lens of the frontend. There are certainly changes that should happen on the backend (data design, caching, pagination), but that isn't the focus here.
This isn't a typical use case for HTML/the DOM, and I know there are limitations, but what exactly are they? Are we still capped at about 3,000-4,000 elements?
I've got a number of related subquestions that I'm actively researching, but I thought it would be nice to share some thoughts with the rest of the Stack Overflow community and try to pool information together about this issue.
What is "reasonable" amount of DOM elements that a modern browser can handle before it starts becoming slow/non-responsive?
How can I benchmark the number of DOM elements a browser can handle?
What are some strategies for handling large datasets that need to be rendered, besides pagination? (One batching idea I'm experimenting with is sketched just after these questions.)
Are templating frameworks like Mustache and Handlebars more performant for rendering HTML from data/JSON on the frontend than using jQuery or regular expressions?
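On the strategies question, one thing I'm already experimenting with is batching all writes through a DocumentFragment so the live DOM is touched once per update instead of once per row (a sketch; `renderRow` again stands in for whatever builds a single row element):

```javascript
// Build the entire batch off-DOM in a DocumentFragment, then insert it with a
// single appendChild so layout and paint are triggered once per update.
function renderBatched(items) {
  const container = document.getElementById('dataset-container');
  const fragment = document.createDocumentFragment();
  for (const item of items) {
    fragment.appendChild(renderRow(item));   // rows are assembled off-DOM
  }
  container.innerHTML = '';                  // clear the previous render
  container.appendChild(fragment);           // one insertion, one reflow
}
```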
The answer is: 1, or millions. I'm going to copy/paste an answer from a similar question on SO.
To be honest, if you really need an absolute answer to this question, then you might want to reconsider your design.
No answer given here will be right, as it depends on many factors that are specific to your application, e.g. heavy vs. light CSS use, size of the divs, amount of actual graphics rendering required per div, target browser/platform, number of DOM event listeners, etc.
Just because you can doesn't mean that you should! :-)
See: how many div's can you have before the dom slows and becomes unstable?
This really is an unanswerable question, with too many factors at too many angles. I will say this, however: in a single page load, I used a JavaScript setInterval at 1 ms to continually add new divs to a page, with the ID incrementing by 1. My Chrome browser just passed 20,000 divs and is using 600 MB of RAM.
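For anyone who wants to repeat that test, it was along these lines (a reconstruction, not the exact code I ran):

```javascript
// Keep appending divs with incrementing ids and watch when the page starts to
// stutter; pair it with the browser's task manager to watch memory climb.
let id = 0;
const timer = setInterval(() => {
  const div = document.createElement('div');
  div.id = String(++id);
  div.textContent = 'div #' + id;
  document.body.appendChild(div);
  if (id % 1000 === 0) console.log(id + ' divs in the DOM');
}, 1);
// Run clearInterval(timer) in the console to stop the test.
```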
This is a question for which only a statistically savvy answer could be accurate and comprehensive.
Why
The appropriate equation is this, where N is the number of nodes, bytesN is the total bytes required to represent them in the DOM, the node index range is n ∈ [0, N), bytesOverhead is the amount of memory used for a node with the absolute minimum attribute configuration and no innerHTML, and bytesContent is the amount of memory used to fill such a minimal node.

bytesN = Σ_{n ∈ [0, N)} (bytesContent_n + bytesOverhead_n)
The value requested in the question is the maximum value of N in the worst case handheld device, operating system, browser, and operating conditions. Solving for N for each permutation is not trivial. The equation above reveals three dependencies, each of which could drastically alter the answer.
Dependencies

- (1) bytesContent per node, which depends on how much inner text and how many attribute values each node carries, and on the bytes per character of the encoding.
- (2) bytesOverhead per node, the browser's internal cost of a minimal node.
- (3) The memory actually available to the DOM on the target device under normal operating conditions.
The Rigorous Solution
One could run tests to determine (1) and (2) for each of the common HTTP user agents used on handheld devices. The distribution of user agents for any given site can be obtained by configuring the web server's logging mechanism to record HTTP_USER_AGENT if it doesn't by default, then stripping all but that field from the log and counting the instances of each value.
The number of bytes per character would need to be tested for both attribute values and UTF-8 inner text (or whatever the encoding is) to get a clear pair of factors for calculating (1).
The memory available would need to be tested too under a variety of common conditions, which would be a major research project by itself.
The particular value of N chosen would have to be ZERO to handle the actual worst case, so instead one would choose a value covering a certain percentage of typical cases of content, node structures, and run-time conditions. For instance, one might take a sample of cases using some form of randomized in situ study (within normal environmental conditions) and find the N that satisfies 95% of those cases.
Perhaps a set of cases could be tested in the above ways and the results placed in a table. Such a table would represent a direct answer to your question.
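As a toy illustration of that sampling step (the numbers are invented; in a real study each entry would be the measured node ceiling for one sampled device/browser/content combination):

```javascript
// Pick the N that ~95% of sampled cases can tolerate, i.e. the 5th percentile
// of the measured per-case node ceilings.
const sampledMaxNodes = [120000, 45000, 300000, 80000, 60000, 150000, 52000];

function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor((sorted.length - 1) * p)];
}

console.log('N satisfying ~95% of sampled cases:', percentile(sampledMaxNodes, 0.05));
```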
I'm guessing it would take a well-educated mobile software engineer with a flair for mathematics, especially statistics, five full-time weeks to get reasonable results.
A More Practical Estimation
One could guess at the worst-case scenario. With a few full days of research and a few proof-of-concept apps, this proposal could be refined. Absent the time to do that, here's a good first guess.
Consider a cell phone that permits 1 GByte for the DOM because normal operating conditions use 3 GBytes out of the 4 GBytes for the above-mentioned purposes. To get a ballpark figure, one might assume the following average memory consumption per node:

- 2 bytes per character for 40 characters of inner text per node
- 2 bytes per character for 4 attribute values of 10 characters each
- 1 byte per character for 4 attribute names of 4 characters each
- 160 bytes of per-node overhead in the browser's internal representation

In this case Nworst_case, the worst-case maximum number of nodes, is

Nworst_case = 1,024 × 1,024 × 1,024 / (2 × 40 + 2 × 4 × 10 + 1 × 4 × 4 + 160)
            = 1,073,741,824 / 336
            ≈ 3,195,660
I would not, however, build a document in a browser with three million DOM nodes if it could be at all avoided. Consider employing the more common practice below.
Common Practice
The best solution is to stay far below what Nworst_case might be and simply reduce the total number of nodes to the degree possible using standard HTTP design techniques.
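In practice that usually means rendering only the rows the user can currently see and recycling everything else, along the lines of this windowing sketch (assumes fixed-height rows; the ids, `ROW_HEIGHT`, `renderRow`, and `currentItems` are placeholders):

```javascript
// Render only the slice of items visible in the scroll viewport and fake the
// rest of the height with a spacer, so node counts stay small no matter how
// large the dataset grows.
const ROW_HEIGHT = 24;                                          // assumed fixed row height in px
const viewport = document.getElementById('dataset-viewport');   // scrollable wrapper
const spacer = document.getElementById('dataset-spacer');       // gives the scrollbar its full length
const container = document.getElementById('dataset-container'); // holds only the visible rows

function renderWindow(items) {
  spacer.style.height = items.length * ROW_HEIGHT + 'px';
  const first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
  const visibleCount = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
  const fragment = document.createDocumentFragment();
  for (const item of items.slice(first, first + visibleCount)) {
    fragment.appendChild(renderRow(item));                      // placeholder row builder
  }
  container.style.transform = `translateY(${first * ROW_HEIGHT}px)`;
  container.innerHTML = '';
  container.appendChild(fragment);
}

// currentItems stands in for whatever array holds the latest dataset.
viewport.addEventListener('scroll', () => renderWindow(currentItems));
```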