 

How can I be sure of the file encoding?

I have a PHP file that I created with Vim, but I'm not sure what its encoding is.

When I check the encoding from the terminal with the command file -bi foo (my operating system is Ubuntu 11.04), it gives me the following result:

text/html; charset=us-ascii

But when I open the file with gedit, it says its encoding is UTF-8.

Which one is correct? I want the file to be encoded in UTF-8.

My guess is that there's no BOM in the file, and that file -bi reads the file, doesn't find any multi-byte UTF-8 characters, and so assumes it's ASCII, even though it's really encoded in UTF-8.
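
For what it's worth, here is a minimal way to reproduce what I'm seeing (filenames and contents are made up):

    printf 'hello\n' > demo.php            # pure ASCII content
    file -bi demo.php                      # reports charset=us-ascii
    printf 'caf\xc3\xa9\n' >> demo.php     # append the UTF-8 bytes for "café"
    file -bi demo.php                      # now reports charset=utf-8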

asked by ecantu on Jun 13 '12

People also ask

How do I know if my file is UTF-16 or UTF-8?

There are a few options you can use: check the Content-Type to see if it includes a charset parameter, which would indicate the encoding (e.g. Content-Type: text/plain; charset=utf-16); or check if the uploaded data has a BOM (the first few bytes of the file, which map to the Unicode character U+FEFF: FE FF or FF FE for UTF-16, EF BB BF for UTF-8).
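
A minimal way to check for a BOM from the shell (the filename is hypothetical):

    head -c 3 suspect.txt | od -An -tx1    # ef bb bf        -> UTF-8 BOM
    head -c 2 suspect.txt | od -An -tx1    # fe ff or ff fe  -> UTF-16 BE or LE BOM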

How can I tell if a file is ISO-8859-1?

If you find a byte with its high-order bit set, where the bytes both immediately before and immediately after it don't have their high-order bit set, the file is not valid UTF-8 and is most likely ISO-8859-1 (in UTF-8, bytes above 127 always occur in multi-byte sequences).
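
A rough shell version of that test, assuming GNU grep with PCRE support (the filename is hypothetical): list the lines containing bytes above 127, then see whether the file still decodes as strict UTF-8.

    LC_ALL=C grep -nP '[\x80-\xFF]' suspect.txt      # show lines with high-bit bytes
    iconv -f UTF-8 -t UTF-8 suspect.txt > /dev/null \
        && echo "valid UTF-8" \
        || echo "not valid UTF-8, likely ISO-8859-1 (or similar)"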

How can I tell if a file is UTF-16?

Just scan the file: if you find any NUL byte ("\0"), it is almost certainly UTF-16. Most text files (source code especially) contain ASCII characters, and in UTF-16 those are represented with a leading 0 byte.
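
A sketch of that heuristic (the filename is hypothetical); od prints the file as hex bytes, so a "00" token indicates a NUL byte:

    if od -An -tx1 suspect.txt | grep -q ' 00'; then
        echo "contains NUL bytes, probably UTF-16 (or binary)"
    else
        echo "no NUL bytes, not UTF-16"
    fi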


3 Answers

$ file --mime my.txt 
my.txt: text/plain; charset=iso-8859-1
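
If you only care about the charset and not the MIME type, file also accepts --mime-encoding (the filename and output shown here are illustrative):

$ file --mime-encoding my.txt
my.txt: iso-8859-1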
answered by Green Lei

Well, first of all, note that ASCII is a subset of UTF-8, so if your file contains only ASCII characters, it's correct to say that it's encoded in ASCII and it's correct to say that it's encoded in UTF-8.

That being said, file typically only examines a short segment at the beginning of the file to determine its type, so it might be declaring it us-ascii if there are non-ASCII characters but they are beyond the initial segment of the file. On the other hand, gedit might say that the file is UTF-8 even if it's ASCII because UTF-8 is gedit's preferred character encoding and it intends to save the file with UTF-8 if you were to add any non-ASCII characters during your edit session. Again, if that's what gedit is saying, it wouldn't be wrong.
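
One quick way to convince yourself of the subset point: converting a pure-ASCII file from US-ASCII to UTF-8 changes nothing (filenames are made up).

    printf 'hello world\n' > ascii.txt
    iconv -f US-ASCII -t UTF-8 ascii.txt | cmp - ascii.txt && echo "byte-for-byte identical"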

Now to your question:

  1. Run this command:

    tr -d \\000-\\177 < your-file | wc -c
    

    The tr command deletes every byte in the ASCII range (octal 000-177), and wc -c counts whatever is left. If the output is "0", the file contains only ASCII characters: it's ASCII (and also valid UTF-8). End of story.

  2. Run this command:

    iconv -f utf-8 -t ucs-4 < your-file >/dev/null
    

    If you get an error, the file does not contain valid UTF-8 (or at least, some part of it is corrupted).

    If you get no error, the file is extremely likely to be UTF-8. That's because UTF-8 has properties that make it very hard to mistake typical text in any other commonly used character encoding for valid UTF-8. (See the combined sketch of both checks below.)
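
A small wrapper combining both checks; the script name and messages are mine, the tests are exactly the two above:

    #!/bin/sh
    # check-utf8.sh (hypothetical name): classify a file as ASCII, UTF-8, or neither
    f="$1"
    if [ "$(tr -d '\000-\177' < "$f" | wc -c)" -eq 0 ]; then
        # nothing left after deleting the ASCII range: pure ASCII
        echo "$f: pure ASCII (and therefore also valid UTF-8)"
    elif iconv -f utf-8 -t ucs-4 < "$f" > /dev/null 2>&1; then
        # decodes cleanly as UTF-8
        echo "$f: valid UTF-8 (almost certainly UTF-8)"
    else
        echo "$f: not valid UTF-8 (another encoding, or corrupted)"
    fi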

answered by Celada

(on Linux)

$ chardet <filename>

It also reports a confidence level (from 0 to 1) for its guess.
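
For example (the filename and output are illustrative; on newer systems the command may be installed as chardetect, from the python3-chardet package):

$ chardet foo.php
foo.php: utf-8 with confidence 0.99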

answered by Arthur Zennig