Choosing efficient selectors based on computational complexity

Given the CSS processing specifics, most notably RTL matching and selector efficiency, how should selectors be written purely from the perspective of rendering engine performance?

This should cover general aspects and include using or avoiding pseudo-classes, pseudo-elements and relationship selectors.

Asked Jun 27 '12 by Oleg


1 Answer

At runtime, an HTML document is parsed into a DOM tree containing N elements with an average depth D, and a total of S CSS rules are applied from the stylesheets.

  1. Elements' styles are applied individually, meaning there is a direct relationship between N and overall complexity. Worth noting, this can be somewhat offset by browser optimizations such as reference caching and recycling of styles from identical elements. For instance, the following list items will have the same CSS properties applied (assuming no pseudo-classes such as :nth-child are involved):

    <ul class="sample">
      <li>one</li>
      <li>two</li>
      <li>three</li>
    </ul>
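
    A hedged aside: a hypothetical rule such as the following defeats that recycling, because per-position state forces each li to be matched individually:

    .sample li:nth-child(odd) {} /* style now depends on sibling position */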
    
  2. Selectors are matched right-to-left to determine a rule's eligibility - i.e. if the right-most key does not match a particular element, there is no need to process the selector any further and it is discarded. This means that the right-most key should match as few elements as possible. Below, the p key in the first selector will match more elements, including paragraphs outside of the target container (which, of course, will not have the rule applied, but will still cost extra iterations of eligibility checking for that particular selector):

    .custom-container p {}          /* right-most key p matches every paragraph on the page */
    .container .custom-paragraph {} /* right-most key matches only the intended elements */
    
  3. Relationship selectors: the descendant selector requires up to D elements to be iterated over. For instance, successfully matching .container .content may take only one step if the elements are in a parent-child relationship, but the DOM tree may need to be traversed all the way up to html before an element can be confirmed a mismatch and the rule safely discarded. This applies to chained descendant selectors as well, with some allowances.

    On the other hand, a > child selector, a + adjacent-sibling selector or :first-child still requires one additional element to be evaluated, but each has an implied depth of one and will never require further tree traversal.
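
    As a hedged sketch of the difference (class names hypothetical):

    /* descendant: confirming a mismatch may require walking ancestors all the way up to html */
    .container .content {}

    /* child: only the immediate parent is checked, so traversal depth is capped at one */
    .container > .content {}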

  4. The behavior definition of pseudo-elements such as :before and :after implies they are not part of the RTL matching paradigm. The logic behind this assumption is that there is no pseudo-element per se until a rule instructs one to be inserted before or after an element's content (which in turn requires extra DOM manipulation, but no additional computation to match the selector itself).
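
    For instance, a minimal sketch (selector and content are hypothetical):

    /* no :before box exists until this rule creates one, so there is
       nothing for the right-to-left matcher to test against */
    .note:before { content: "note: "; }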

  5. I couldn't find any information on pseudo-classes such as :nth-child() or :disabled. Verifying an element's state requires additional computation, but from the rule-parsing perspective it would only make sense for them to be excluded from RTL processing.
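
    As a hedged illustration (selectors hypothetical), both of the following add a state check on top of the structural match:

    li:nth-child(odd) {} /* the element's position among its siblings must be verified */
    input:disabled {}    /* the element's disabled state must be queried */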

Given these relationships, the computational complexity O(N*D*S) should be lowered primarily by minimizing the depth of CSS selectors and addressing point 2 above. This will result in quantifiably stronger improvements than minimizing the number of CSS rules or HTML elements alone^.

Shallow, preferably one-level, specific selectors are processed faster. This is taken to a whole new level by Google (programmatically, not by hand!); for instance, there is rarely a three-key selector, and most of the rules in search results look like:

#gb {}
#gbz, #gbg {}
#gbz {}
#gbg {}
#gbs {}
.gbto #gbs {}
#gbx3, #gbx4 {}
#gbx3 {}
#gbx4 {}
/*...*/
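
A hedged before/after sketch of the same idea (selectors and class names hypothetical):

/* deeper: three keys, and the right-most key a matches every link on the page */
#nav ul li a {}

/* shallower: a single specific key, no tree traversal required */
.nav-link {}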

^ - while this is true from a rendering engine performance standpoint, there are always additional factors such as traffic overhead, DOM parsing, etc.


Answered Oct 03 '22 by Oleg