
Java: Readers and Encodings

Tags: java, io, encoding

Java's default encoding is ASCII, yes? (See my edit below.)

When a text file is encoded in UTF-8, how does a Reader know that it has to use UTF-8?

The Readers I'm talking about are:

  • FileReaders
  • BufferedReaders from Sockets
  • A Scanner from System.in
  • ...

EDIT

It turns out the encoding depends on the OS, which means that the following is not true on every OS:

'a' == 97
Asked by Martijn Courteaux on Dec 11 '09


2 Answers

How does a Reader know that it has to use UTF-8?

You normally specify that yourself in an InputStreamReader. It has a constructor taking the character encoding. E.g.

Reader reader = new InputStreamReader(new FileInputStream("c:/foo.txt"), "UTF-8");

All other readers (as far as I know) use the platform default character encoding, which may well not be the correct encoding (such as -cough- CP-1252).

In theory you can also detect the character encoding automatically based on the byte order mark (BOM). This distinguishes the various Unicode encodings from other encodings. Java SE unfortunately doesn't have any API for this, but you can homebrew one which can be used to replace InputStreamReader as in the example above:

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PushbackInputStream;
import java.io.Reader;

public class UnicodeReader extends Reader {
    private static final int BOM_SIZE = 4;
    private final InputStreamReader reader;

    /**
     * Construct UnicodeReader
     * @param in Input stream.
     * @param defaultEncoding Default encoding to be used if BOM is not found,
     * or <code>null</code> to use system default encoding.
     * @throws IOException If an I/O error occurs.
     */
    public UnicodeReader(InputStream in, String defaultEncoding) throws IOException {
        byte[] bom = new byte[BOM_SIZE];
        String encoding;
        int unread;
        PushbackInputStream pushbackStream = new PushbackInputStream(in, BOM_SIZE);
        int n = pushbackStream.read(bom, 0, bom.length);

        // Read ahead four bytes and check for BOM marks.
        // Check the four-byte (UTF-32) BOMs before the two-byte (UTF-16) ones,
        // because the UTF-32LE BOM starts with the UTF-16LE BOM.
        if ((bom[0] == (byte) 0x00) && (bom[1] == (byte) 0x00) && (bom[2] == (byte) 0xFE) && (bom[3] == (byte) 0xFF)) {
            encoding = "UTF-32BE";
            unread = n - 4;
        } else if ((bom[0] == (byte) 0xFF) && (bom[1] == (byte) 0xFE) && (bom[2] == (byte) 0x00) && (bom[3] == (byte) 0x00)) {
            encoding = "UTF-32LE";
            unread = n - 4;
        } else if ((bom[0] == (byte) 0xEF) && (bom[1] == (byte) 0xBB) && (bom[2] == (byte) 0xBF)) {
            encoding = "UTF-8";
            unread = n - 3;
        } else if ((bom[0] == (byte) 0xFE) && (bom[1] == (byte) 0xFF)) {
            encoding = "UTF-16BE";
            unread = n - 2;
        } else if ((bom[0] == (byte) 0xFF) && (bom[1] == (byte) 0xFE)) {
            encoding = "UTF-16LE";
            unread = n - 2;
        } else {
            encoding = defaultEncoding;
            unread = n;
        }

        // Push back any bytes that were read ahead but are not part of a BOM.
        if (unread > 0) {
            pushbackStream.unread(bom, (n - unread), unread);
        }

        // Use the detected encoding, or fall back to the given default.
        if (encoding == null) {
            reader = new InputStreamReader(pushbackStream);
        } else {
            reader = new InputStreamReader(pushbackStream, encoding);
        }
    }

    public String getEncoding() {
        return reader.getEncoding();
    }

    public int read(char[] cbuf, int off, int len) throws IOException {
        return reader.read(cbuf, off, len);
    }

    public void close() throws IOException {
        reader.close();
    }
}
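
For illustration, a minimal usage sketch of the class above; the file name is just a placeholder, and UTF-8 is the assumed fallback for when no BOM is present:

UnicodeReader unicodeReader = new UnicodeReader(new FileInputStream("c:/foo.txt"), "UTF-8");
System.out.println("Detected encoding: " + unicodeReader.getEncoding());

BufferedReader reader = new BufferedReader(unicodeReader);
try {
    for (String line; (line = reader.readLine()) != null;) {
        System.out.println(line);
    }
} finally {
    reader.close();
}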

Edit, as a reply to your edit:

So the encoding depends on the OS, which means that the following is not true on every OS:

'a' == 97

No, this is not true. The ASCII encoding (which contains 128 characters, 0x00 through 0x7F) is the basis of all other character encodings. Only the characters outside the ASCII charset risk being displayed differently in another encoding. The ISO-8859 encodings cover the characters in the ASCII range with the same codepoints. The Unicode encodings cover the characters in the ISO-8859-1 range with the same codepoints.
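
A quick way to see this in Java itself (char values are UTF-16 code units, so 'a' is always U+0061, i.e. 97, regardless of the platform's default file encoding):

char c = 'a';
System.out.println((int) c); // prints 97 on every platform
System.out.println(c == 97); // prints true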

You may find each of those blogs an interesting read:

  1. The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) (more theoretical of the two)
  2. Unicode - How to get the characters right? (more practical of the two)
Answered by BalusC


Java's default encoding depends on your OS. For Windows it's normally "windows-1252"; for Unix it's typically "ISO-8859-1" or "UTF-8".
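
You can check what your JVM actually picked up, for example:

// Both normally report the same value, e.g. windows-1252, ISO-8859-1 or UTF-8.
System.out.println(java.nio.charset.Charset.defaultCharset());
System.out.println(System.getProperty("file.encoding"));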

A reader knows the correct encoding because you tell it the correct encoding. Unfortunately, not all readers let you do this (for example, FileReader doesn't), so often you have to use an InputStreamReader.
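
As an example, a sketch of the usual workaround (the file name is a placeholder); the same idea applies to a Scanner over System.in, which also accepts a charset name:

// FileReader gives no way to pass a charset here, so wrap a FileInputStream instead.
BufferedReader in = new BufferedReader(
        new InputStreamReader(new FileInputStream("input.txt"), "UTF-8"));

// Scanner has a constructor that accepts a charset name as well.
Scanner scanner = new Scanner(System.in, "UTF-8");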

Answered by kdgregory