import java.io.UnsupportedEncodingException;

public class TestChar {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String cnStr = "龙";
        String enStr = "a";
        byte[] cnBytes = cnStr.getBytes("UTF-8");
        byte[] enBytes = enStr.getBytes("UTF-8");
        System.out.println("bytes size of Chinese:" + cnBytes.length);
        System.out.println("bytes size of English:" + enBytes.length);

        // In Java, a char takes two bytes; the question is:
        char cnc = '龙'; // will '龙' take two or three bytes?
        char enc = 'a';  // will 'a' take one or two bytes?
    }
}
Output:
bytes size of Chinese:3
bytes size of English:1
Here, my JVM's default charset is UTF-8. From the output, we can see that the Chinese character '龙' takes 3 bytes in UTF-8 and the English character 'a' takes one byte. My question is:
In Java, a char takes two bytes. So for char cnc = '龙'; and char enc = 'a';, does cnc take only two bytes instead of 3? And does enc take two bytes instead of one?
The code point of 龙 is 40857 (U+9F99). That fits inside the two bytes of a char, so a char variable always occupies two bytes (one UTF-16 code unit), no matter which character it holds.
It takes 3 bytes to encode in UTF-8 because UTF-8 reserves certain bit patterns as length markers, so not every 2-byte value can be written as a 2-byte UTF-8 sequence; any code point above U+07FF needs 3 bytes.
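You can check this yourself by comparing the code point of the char with the byte counts under different charsets. This is a small sketch (the class name CharSizeCheck is just for illustration):

import java.nio.charset.StandardCharsets;

public class CharSizeCheck {
    public static void main(String[] args) {
        char cnc = '龙';
        char enc = 'a';

        // A char is a 16-bit UTF-16 code unit, so both variables occupy two bytes in memory.
        System.out.println("code point of 龙: " + (int) cnc); // 40857 (U+9F99)
        System.out.println("code point of a: " + (int) enc);  // 97

        // Encoding the same characters to different charsets gives different byte counts.
        System.out.println("UTF-16BE bytes for 龙: "
                + "龙".getBytes(StandardCharsets.UTF_16BE).length); // 2
        System.out.println("UTF-8 bytes for 龙: "
                + "龙".getBytes(StandardCharsets.UTF_8).length);    // 3
        System.out.println("UTF-8 bytes for a: "
                + "a".getBytes(StandardCharsets.UTF_8).length);     // 1
    }
}

The key point: getBytes("UTF-8") tells you the size of the encoded output, not the size of a char in memory, which is always two bytes.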