When you place semicolons between CSS rules, the rule following the semicolon will be ignored. This can lead to some very strange results. MDN has a JSFiddle that shows this effect rather clearly.
This is the initial state, and this is after the first rule has a semicolon at its end.
Fortunately, it is essentially universal practice to exclude semicolons from between one's CSS blocks.
My question is: why is this the case? I've heard it is because it saves space (in this case, exactly one character per CSS rule). But this reasoning, while true, seems a tad strange. I couldn't find specifics on how much space each character in a CSS file occupies, but if it's analogous to JS, this SO post tells us that each character is approximately 16 bits, or 2 bytes, meaning we would save 2 bytes per rule.
According to this list of average connection speeds by country, the global average connection speed is 5.1 Megabits/second. Since we save exactly 1 character per rule by not allowing semicolons, and each character is 16 bits, the average number of rules it takes to save one second is:

5,100,000 (bits/second) / 16 (bits saved/rule) = 318,750 (rules/second)
And so, based on the global average connection speed, it would take over 300,000 rules to save us one second of time.
Surely there must exist more efficient methods of saving download time for the user, and there do, such as minification/uglification of CSS/JS, or shortening the names of CSS properties: since these are much longer than 1 character and can appear many times, shortening them could save orders of magnitude more bytes than chopping off a trailing semicolon.
More important than the bytes saved, in my opinion, is how confusing this can get for the developer. Many of us are trained by habit to follow closing braces with a semicolon.
returnType/functionDec functionName(arguments){
//...function body
};
is a VERY common pattern found in a great many languages (including JavaScript), and it is absolutely possible to imagine a developer typing
cssRuleA{
/*style Rules */
};
cssRuleB{
/* Style Rules*/
};
as an accidental result of this habit. The console will log no errors, and the developer will have no indication that a mistake has been made beyond styles not showing up correctly. The absolute WORST part of this is that even though cssRuleA is what causes the error, it will work just fine; cssRuleB will be the rule that fails to display correctly, even if there is nothing wrong with it. The fact that the rule at fault is not the rule that visibly breaks can especially cause issues in large projects, where style/UI issues can have many different possible roots.
Does there exist some factor inherent in CSS that makes this convention make more sense? Is there something in a white paper I missed that explains why CSS behaves this way? Personally, I tried to see whether it is faster to exclude semicolons from the perspective of finite automata/grammars, but I couldn't definitively determine whether it is or not.
In CSS, semicolons are needed to separate each statement ... However, the final semicolon in a set of statements is optional (but most developers recommend using it). In your case, there was only one statement, so it is technically the last and thus, the semicolon is technically not needed.
CSS declaration blocks can sometimes be nested, so opening and closing braces must be matched. Such blocks are naturally called declaration blocks, and the declarations inside them are separated by a semicolon, ';' (U+003B SEMICOLON).
In CSS, the colon separates a property from its value, and the semicolon denotes that that particular declaration is over. For example: position: relative; The statement above is saying that you want the CSS to look at the position property, and you want it to be of a relative nature.
Because preprocessor definitions are substituted before the compiler acts on the source code, any errors introduced by #define are difficult to trace. If there is a ; at the end, it is considered part of the VALUE.
So conditional statements like if..else and looping statements like while, for, and do-while don't require a semicolon.
In computer programming, the semicolon is often used to separate multiple statements (for example, in Perl, Pascal, and SQL; see Pascal: Semicolons as statement separators). In other languages, semicolons are called terminators and are required after every statement (such as in PL/I, Java, and the C family).
In CSS, rules are defined by either blocks, or statements, but not both at the same time. A block is a chunk of code that is surrounded by a pair of curly braces. A statement is a chunk of code that ends with a semicolon.
An empty "rule" is not a valid CSS rule, because it cannot be parsed as either a qualified rule or an at-rule. So it stands to reason that a lone ; between two blocks is invalid, for the same reason that a block that doesn't contain a prelude (either a selector-list, or an at-keyword followed by an optional prelude) is invalid: because it cannot be parsed into anything meaningful.
Only at-rules may take the form of statements and therefore be terminated by a semicolon (examples include @charset and @import); qualified rules never do. So when a malformed rule is encountered, if the parser isn't already parsing an at-rule, then it is treated as a qualified rule and everything up to and including the next matching set of curly braces is consumed and discarded, including the semicolon. This is described succinctly in section 2.2 of css-syntax-3 (it says the text is non-normative, but that's only because the normative rules are defined in the grammar itself).
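To illustrate (a hypothetical pair of rules; the only thing that matters is the stray semicolon):

```css
/* At-rules are statements, so they are terminated by a semicolon. */
@import url("base.css");

/* The stray ';' after .first opens a malformed qualified rule; the
   parser discards everything up to and including the next matching
   {} block. .first still applies, but .second is silently dropped. */
.first  { color: red; };
.second { color: blue; }
```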
And the reason error handling takes such an eager approach in CSS is mostly due to selector error handling: if it were conservative, browsers might end up inadvertently parsing the following rule as something completely unexpected. For example, if IE6, which doesn't understand >, were to ignore just the p > in p > span {...} and regard everything starting with span as valid, the rule would end up matching any span element in IE6, whilst matching only the appropriate subset of elements in supporting browsers. (In fact, a similar issue does exist in IE6 with chained class selectors: .foo.bar is treated as .bar.) You could think of this, therefore, not as liberal error handling, but as conservative application of CSS rules. Better not to apply a rule when in doubt than to apply it with unexpected results.
Whoever told you it was for performance reasons is just making it up.