I’m learning Java through Introduction to Java Programming, 9th Edition by Daniel Liang. In chapter 9, "Strings", I’ve encountered this piece of code:
public static int hexCharToDecimal(char ch) {
    if (ch >= 'A' && ch <= 'F')
        return 10 + ch - 'A';
    else
        return ch - '0';
}
Can someone explain what just happened here? How is it possible to add/subtract chars from integers, and what's the meaning behind it?
Regardless of how Java actually stores the char data type, one thing is certain: the character 'A' subtracted from the character 'A' is 0, which, interpreted as a character, is the null character '\0'. In memory, this means every bit is 0.
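A minimal sketch of that claim (the class and method are just illustrative scaffolding so it compiles on its own):

public class CharDiffDemo {
    public static void main(String[] args) {
        // 'A' - 'A' evaluates to an int with value 0;
        // cast back to char, it is the null character '\u0000'.
        int diff = 'A' - 'A';
        System.out.println(diff == 0);               // prints true
        System.out.println((char) diff == '\u0000'); // prints true
    }
}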
Character arithmetic means using characters in arithmetic operations such as addition and subtraction, and it is commonly used when manipulating strings. The idea is the same in C and in Java: when a character is used in an arithmetic operation, it is automatically converted to its integer value, i.e. the code point (ASCII value) of the character.
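The same promotion happens in Java; here is a small hypothetical snippet showing a digit character being converted to its numeric value by subtracting '0':

char digit = '7';
int value = digit - '0';   // '7' is code point 55, '0' is 48, so value is 7
System.out.println(value); // prints 7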
From the Docs: "The char data type is a single 16-bit Unicode character."

A char is represented by its code point value, from '\u0000' (or 0) up to '\uffff' (or 65,535). You can see all of the English alphabetic code points on an ASCII table.

Note that 0 == '\u0000' and 65,535 == '\uffff', as well as everything in between; they are corresponding values.

A char is actually just stored as a number (its code point value). We have syntax to represent characters like char c = 'A';, but it's equivalent to char c = 65;, and 'A' == 65 is true.
So in your code, the chars are being represented by their decimal values to do arithmetic (whole numbers from 0 to 65,535).
For example, the char 'A' is represented by its code point 65 (its decimal value in an ASCII table):

System.out.print('A');        // prints A
System.out.print((int)('A')); // prints 65 because you cast it to an int
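As another illustrative sketch of the equivalence mentioned above (variable names are just for demonstration):

char fromLiteral = 'A'; // assigned with a character literal
char fromNumber = 65;   // assigned with its code point
System.out.println(fromLiteral == fromNumber); // prints true
System.out.println('A' == 65);                 // prints true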
As a note, a short is a 16-bit signed integer, so even though a char is also 16 bits, the maximum integer value of a char (65,535) exceeds the maximum integer value of a short (32,767). Therefore, a cast from char to short cannot always work. And the minimum integer value of a char is 0, whereas the minimum integer value of a short is -32,768.
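For instance (a hypothetical value chosen to show the overflow), casting a char above 32,767 to a short wraps around to a negative number:

char big = '\uffff';             // code point 65,535, the maximum char
System.out.println((int) big);   // prints 65535
System.out.println((short) big); // prints -1, the value no longer fits in a short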
For your code, let's say that the char was 'D'. Note that 'D' == 68, since its code point is 68.

return 10 + ch - 'A';

This returns 10 + 68 - 65, so it will return 13.
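So, assuming the hexCharToDecimal method from your question is in scope, a quick check would be:

System.out.println(hexCharToDecimal('D')); // prints 13, i.e. 10 + 68 - 65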
Now let's say the char was 'Q' == 81.

if (ch >= 'A' && ch <= 'F')

This is false since 'Q' > 'F' (81 > 70), so it would go into the else block and execute:

return ch - '0';

This returns 81 - 48, so it will return 33.
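Again, with the method from your question in scope:

System.out.println(hexCharToDecimal('Q')); // prints 33, i.e. 81 - 48 (the method does not validate non-hex characters)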
Your function returns an int type, but if it were instead to return a char, or have the int cast to a char afterward, then the returned value 33 would represent the '!' character, since 33 is its code point value. Look the character up in an ASCII or Unicode table to verify that '!' == 33 (compare decimal values).
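A small sketch of that last point, reusing the 'Q' example from above:

int result = 'Q' - '0';            // 81 - 48 = 33
System.out.println(result);        // prints 33
System.out.println((char) result); // prints !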