 

What is the difference between (char)0 and '\0' in C?

What is the difference between using (char)0 and '\0' to denote the terminating null character in a character array?

asked Sep 28 '11 by Sathya


People also ask

What is the difference between '0' and '\0' in C?

Every C string is terminated with '\0' (the null character), but storing the character '0' in a string is not the same thing: '0' has the value 48 in the ASCII table, whereas '\0' has the value 0.
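
A quick sketch of that difference (the value 48 assumes an ASCII execution character set):

    #include <stdio.h>

    int main(void)
    {
        printf("'0'  has value %d\n", '0');   /* 48 in ASCII */
        printf("'\\0' has value %d\n", '\0'); /* always 0 */
        return 0;
    }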

What is the '\0' character in C?

'\0' is referred to as the null character or null terminator. It is the character equivalent of the integer 0 (zero), as it refers to nothing. In C it is generally used to mark the end of a string.

What is the meaning of '\0'?

'\0' is the null termination character. It is used to mark the end of a string; without it, there is no way to know how long a group of characters (a string) runs. In C and C++, string-handling code looks for the null character to find the end of a string.
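
For example, strlen and %s both stop at the first '\0' they find (a minimal sketch):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char s[] = { 'h', 'i', '\0', 'x' };  /* explicit null terminator */
        printf("%zu\n", strlen(s));          /* 2: counting stops at '\0' */
        printf("%s\n", s);                   /* prints "hi"; the 'x' is never reached */
        return 0;
    }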

Is NULL and '\0' the same?

No. In C, NULL is a null pointer constant, used with pointers, while '\0' is the null character, an integer constant with value 0 used to terminate strings. Both compare equal to zero, but they belong in different contexts.


2 Answers

They're both a 0, but (char) 0 is a char, while '\0' is (unintuitively) an int. This type difference should not usually affect your program if the value is 0.

I prefer '\0', since that is the constant intended for that.
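
A small sketch of the type difference; the exact size of int is platform-dependent, commonly 4:

    #include <stdio.h>

    int main(void)
    {
        /* In C (unlike C++) a character literal has type int. */
        printf("sizeof((char)0) = %zu\n", sizeof((char)0)); /* 1 */
        printf("sizeof('\\0')   = %zu\n", sizeof('\0'));    /* sizeof(int), often 4 */
        printf("equal values: %d\n", (char)0 == '\0');      /* 1 (true) */
        return 0;
    }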

answered Oct 08 '22 by Matthew Flaschen


The backslash notation in a character literal allows you to specify the numeric value of a character instead of using the character itself. So '\1'[*] means "the character whose numeric value is 1", '\2' means "the character whose numeric value is 2", etc. Almost. Due to a quirk of C, character literals actually have type int, and indeed int is used to handle characters in other contexts too, such as the return value of fgetc. So '\1' means "the numeric value as an int, of the character whose numeric value is 1", and so on.
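
To see those octal escapes as plain numbers (the 65 assumes ASCII):

    #include <stdio.h>

    int main(void)
    {
        /* Each literal is an int whose value is the octal number
           after the backslash. */
        printf("%d %d %d\n", '\0', '\1', '\2');  /* 0 1 2 */
        printf("%d\n", '\101');                  /* 65, i.e. 'A' in ASCII */
        return 0;
    }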

Since characters are numeric values in C, "the character whose numeric value is 1" actually is (char)1, and the extra decoration in '\1' has no effect on the compiler - '\1' has the same type and value as 1 in C. So the backslash notation is more needed in string literals than it is in character literals, for inserting non-printable characters that don't have their own escape code.
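
A minimal sketch of why escapes matter more inside string literals: an embedded null can't be typed any other way, and it ends the string as far as the library is concerned:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *s = "ab\0cd";    /* '\0' embedded mid-string */
        printf("%zu\n", strlen(s));  /* 2: the embedded null terminates it */
        printf("col1\tcol2\n");      /* '\t' embeds a non-printable tab */
        return 0;
    }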

Personally, I prefer to write 0 when I mean 0, and let implicit conversions do their thing. Some people find that very difficult to understand. When working with those people, it's helpful to write '\0' when you mean a character with value 0, that is to say in cases where you expect your 0 is soon going to implicitly convert to char type. Similarly, it can help to write NULL when you mean a null pointer constant, 0.0 when you mean a double with value 0, and so on.
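
In code, those spellings of "zero in the appropriate type" look like this (a sketch; all four values are zeros, converted implicitly or spelled out):

    #include <stddef.h>

    int main(void)
    {
        char c1 = 0;      /* the int 0 converts implicitly to char */
        char c2 = '\0';   /* same value, spelled as a character */
        char *p = NULL;   /* null pointer constant */
        double d = 0.0;   /* zero, spelled as a double */
        (void)c1; (void)c2; (void)p; (void)d;  /* silence unused warnings */
        return 0;
    }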

Whether it makes any difference to the compiler, and whether it needs a cast, depends on context. Since '\0' has exactly the same type and value as 0, it needs to be cast to char in exactly the same circumstances. So while '\0' and (char)0 differ in type, for exactly equivalent expressions you should compare either (char)'\0' with (char)0, or '\0' with 0. NULL has implementation-defined type: it sometimes needs to be cast to a pointer type, since it may have integer type. 0.0 has type double, so it is certainly different from 0. Still, float f = 1.0; is identical to float f = 1; and to float f = 1.0f;, whereas 1.0 / i, where i is an int, usually has a different value from 1 / i.
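
A sketch of the contexts where the spelling does and doesn't matter:

    #include <stdio.h>

    int main(void)
    {
        int i = 4;
        float f1 = 1.0, f2 = 1, f3 = 1.0f;  /* all three store the same float */
        printf("%f %f %f\n", f1, f2, f3);   /* 1.000000 three times */
        printf("%f\n", 1.0 / i);            /* 0.250000: double division */
        printf("%d\n", 1 / i);              /* 0: integer division truncates */
        return 0;
    }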

So, any general rule about whether to use '\0' or 0 is purely for the convenience of readers of your code - it's all the same to the compiler. Pick whichever you (and your colleagues) prefer the look of, or perhaps define a macro ASCII_NUL.

[*] or '\01' - since the backslash introduces an octal number, not decimal, it's sometimes wise to make this a bit more obvious by ensuring it starts with a 0. That makes no difference for 0, 1, 2, of course. I say "sometimes" because a backslash can be followed by at most 3 octal digits, so you can't write \0101 instead of \101 to remind the reader that it's an octal value. It's all quite awkward, and it leads to even more decoration: \x41 for a capital A, and you could therefore write '\x0' for 0 if you want.
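
The octal and hex decorations side by side (the printed letters assume ASCII):

    #include <stdio.h>

    int main(void)
    {
        printf("%c %c\n", '\101', '\x41');  /* A A: octal 101 == hex 41 == 65 */
        printf("%d\n", '\x0');              /* 0, the same value as '\0' */
        return 0;
    }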

answered Oct 08 '22 by Steve Jessop