I made the following "simulation":
byte[] b = new byte[256];
for (int i = 0; i < 256; i++) {
    b[i] = (byte) (i - 128);
}
byte[] transformed = new String(b, "cp1251").getBytes("cp1251");
for (int i = 0; i < b.length; i++) {
    if (b[i] != transformed[i]) {
        System.out.println("Wrong : " + i);
    }
}
For cp1251, this outputs only one wrong byte, at position 25. For KOI8-R, all is fine. For cp1252, there are 4 or 5 differences.
What is the reason for this and how can this be overcome?
I know it is wrong to represent byte arrays as strings in any encoding, but it is a requirement of a payment provider's protocol, so I don't have a choice.
Update: representing it in ISO-8859-1 works, and I'll use that for the byte[] part and cp1251 for the textual part, so the question remains only out of curiosity.
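(ISO-8859-1 works because it maps every byte value 0x00-0xFF one-to-one to the code points U+0000-U+00FF, so nothing is unmappable. A minimal check along the lines of the simulation above; the class name is just for this sketch:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Latin1RoundTrip {
    public static void main(String[] args) {
        byte[] b = new byte[256];
        for (int i = 0; i < 256; i++) {
            b[i] = (byte) (i - 128);
        }
        // ISO-8859-1 decodes each byte to the char with the same numeric
        // value, so decoding and re-encoding loses nothing.
        byte[] t = new String(b, StandardCharsets.ISO_8859_1)
                .getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(Arrays.equals(b, t)); // prints: true
    }
})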
Some of the bytes are not supported in the target set; they are replaced with the ? character. When you convert back, ? is normally converted to the byte value 63, which isn't what it was before.
What is the reason for this

The reason is that character encodings are not necessarily bijective, and there is no good reason to expect them to be. Not all bytes or byte sequences are legal in all encodings, and illegal sequences are usually decoded to some sort of placeholder character like '?' or U+FFFD, which of course does not produce the same bytes when re-encoded.

Additionally, some encodings may map several distinct legal byte sequences to the same string.
It appears that both cp1251 and cp1252 have byte values that do not correspond to defined characters; i.e. they are "unmappable".
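A quick way to see which single-byte values a given charset cannot decode is to scan all 256 of them and look for the replacement character. A sketch (the class name is just for the example; exact results depend on the JDK's charset tables, but on a typical JDK this reports 0x98 for cp1251, nothing for KOI8-R, and the five bytes Windows-1252 leaves unassigned, matching the differences observed in the question):

import java.nio.charset.Charset;

public class UnmappableBytes {
    public static void main(String[] args) {
        for (String name : new String[] {"cp1251", "KOI8-R", "cp1252"}) {
            Charset cs = Charset.forName(name);
            StringBuilder bad = new StringBuilder();
            for (int i = 0; i < 256; i++) {
                String s = new String(new byte[] {(byte) i}, cs);
                // Undecodable bytes come back as U+FFFD, the replacement char.
                if (s.charAt(0) == '\uFFFD') {
                    bad.append(String.format(" 0x%02X", i));
                }
            }
            System.out.println(name + ":" + (bad.length() == 0 ? " none" : bad.toString()));
        }
    }
}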
The javadoc for String(byte[], String) says this:

The behavior of this constructor when the given bytes are not valid in the given charset is unspecified. The CharsetDecoder class should be used when more control over the decoding process is required.
Other constructors say this:
This method always replaces malformed-input and unmappable-character sequences with this charset's default replacement string.
If you see this kind of thing happening in practice, it indicates that either you are using the wrong character set or you've been given some bad data. Either way, it is probably not a good idea to carry on as if there were no problem.
I've been trying to figure out if there is a way to get a CharsetDecoder to "preserve" unmappable characters, and I don't think it is possible unless you are willing to implement a custom decoder/encoder pair. But I've also concluded that it does not make sense to even try. It is (theoretically) wrong to map those unmappable characters to real Unicode code points. And if you do, how is your application going to handle them?
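If silently substituting replacement characters is unacceptable, a CharsetDecoder can instead be told to fail fast, so bad data is detected rather than quietly mangled. A sketch using CodingErrorAction.REPORT (the class and method names are just for the example):

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;

public class StrictDecode {
    // Decodes bytes, throwing instead of substituting U+FFFD.
    static String decodeStrict(byte[] bytes, String charsetName)
            throws CharacterCodingException {
        CharsetDecoder dec = Charset.forName(charsetName).newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        return dec.decode(ByteBuffer.wrap(bytes)).toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(decodeStrict(new byte[] {0x41, 0x42}, "cp1251")); // prints: AB
        try {
            decodeStrict(new byte[] {(byte) 0x98}, "cp1251");
        } catch (CharacterCodingException e) {
            System.out.println("bad input detected: " + e); // thrown instead of '?'
        }
    }
}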
Actually there shall be one difference, at index 24: the byte 0x98 is converted to a char of value 0xFFFD; that's the "Unicode replacement character", used for untranslatable bytes. When converted back, you get a question mark (value 63).

In CP1251, the code 0x98 is not assigned to any character and cannot be part of a proper string, which is why Java deems it "untranslatable".
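This is easy to verify directly by decoding just that byte; a sketch (the class name is just for the example, output as observed on a typical JDK):

import java.nio.charset.Charset;

public class ReplacementDemo {
    public static void main(String[] args) {
        Charset cp1251 = Charset.forName("cp1251");
        // 0x98 has no assigned character in cp1251.
        String s = new String(new byte[] {(byte) 0x98}, cp1251);
        System.out.printf("decoded to U+%04X%n", (int) s.charAt(0)); // U+FFFD
        byte[] back = s.getBytes(cp1251);
        System.out.println("re-encoded as " + back[0]);              // 63, i.e. '?'
    }
}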