
Java UTF8 encoding

Tags:

java

utf-8

I have a scenario in which some special characters are stored in a database (Sybase) in the system's default encoding, and I have to fetch this data and send it to a third party in UTF-8 encoding using a Java program.

There is a precondition that the data sent to the third party must not exceed a defined maximum size. Since a single character may take 2 or 3 bytes once converted to UTF-8, my logic dictates that after getting the data from the database I must encode it as UTF-8 and then split the string so that no chunk exceeds the limit; a rough sketch of the split I have in mind follows.
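This is only a sketch of the splitting idea (splitByUtf8Bytes and maxBytes are names I made up for illustration, not an existing API); it counts the UTF-8 bytes per code point so that no chunk exceeds the limit and no character is cut in half:

import java.util.ArrayList;
import java.util.List;

// Illustrative helper, assuming maxBytes >= 4 (the largest UTF-8 sequence)
static List<String> splitByUtf8Bytes(String text, int maxBytes) {
    List<String> chunks = new ArrayList<>();
    int start = 0;   // start index of the current chunk
    int bytes = 0;   // UTF-8 byte count of the current chunk
    for (int i = 0; i < text.length(); ) {
        int cp = text.codePointAt(i);
        // UTF-8 needs 1 byte below U+0080, 2 below U+0800, 3 below U+10000, else 4
        int cpBytes = cp < 0x80 ? 1 : cp < 0x800 ? 2 : cp < 0x10000 ? 3 : 4;
        if (bytes + cpBytes > maxBytes) {   // this code point starts a new chunk
            chunks.add(text.substring(start, i));
            start = i;
            bytes = 0;
        }
        bytes += cpBytes;
        i += Character.charCount(cp);       // advance by 1 or 2 chars (surrogate pairs)
    }
    chunks.add(text.substring(start));
    return chunks;
}

With that in place, the following are my observations: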

When a special character such as a Chinese or Greek character, or any other character outside the ASCII range, is converted to UTF-8, a single character may be represented by more than one byte.
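For example, a quick check of the byte counts (StandardCharsets is from java.nio.charset):

System.out.println("A".getBytes(StandardCharsets.UTF_8).length);   // 1 byte
System.out.println("é".getBytes(StandardCharsets.UTF_8).length);   // 2 bytes
System.out.println("中".getBytes(StandardCharsets.UTF_8).length);  // 3 bytes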

So how can I be sure that the conversion is proper? For the conversion I am using the following:

// storing the data fetched from the database in a String (pseudocode)
String s = getDataFromDatabase();

// encoding all the data into a UTF-8 byte array
byte[] b = s.getBytes("UTF-8");

// creating a new String, as my split logic works on the String
String newString = new String(b, "UTF-8");

But when I output this newString to the console I get ? for the special characters.

So I have some doubts:

  • If my conversion logic is wrong, how can I correct it?
  • After converting to UTF-8, can I double-check whether the conversion is OK, i.e. whether it is the correct message to send to the third party? I assume that if the message is not user-readable after conversion, then there is some problem with the conversion. (A rough check I have in mind is sketched right after this list.)
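The rough double-check I have in mind for the second doubt is a round trip, something like this (just a sketch; it only verifies the Java-side encoding step and cannot tell whether the data was already read from the database with the wrong charset):

import java.nio.charset.StandardCharsets;

// Encode to UTF-8 and decode back; if the result equals the original,
// nothing was dropped or replaced during the conversion itself.
byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
String roundTripped = new String(utf8, StandardCharsets.UTF_8);
boolean conversionOk = s.equals(roundTripped);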

I would like to hear some points of view from the experts out there.

Please do let me know if any further info is needed from my side.



1 Answer

You say you're writing the Unicode to a text file, but that requires a conversion from Unicode.

But a conversion to what? That depends on how you open the file.

For example, System.out.println(myUnicodeString) will convert the Unicode to the encoding that System.out was constructed with, most likely your platform's default encoding. If you're running Windows, then this is likely to be windows-1252.
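If the console itself expects UTF-8, one way to test this is to wrap System.out in a UTF-8 PrintStream (a minimal sketch; whether the ? marks disappear still depends on the terminal's charset and font):

import java.io.PrintStream;

// Wrap System.out so output is encoded as UTF-8 (autoflush on).
// This constructor declares UnsupportedEncodingException.
PrintStream utf8Out = new PrintStream(System.out, true, "UTF-8");
utf8Out.println(myUnicodeString);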

If you tell Java to use UTF-8 encoding when it writes to a file, you'll get a file containing UTF-8:

PrintWriter pw = new PrintWriter(
        new OutputStreamWriter(new FileOutputStream("filename.txt"), "UTF-8"));
pw.println(myUnicodeString);
pw.flush();