Environment: GCC/G++ on Linux
I have a file with a non-ASCII name in the file system and I want to open it.
I have the filename as a wchar_t*, but I don't know how to open the file with it. (My trusted fopen only takes a char* filename.)
Please help. Thanks a lot.
There are two possible answers:
If you want to make sure all Unicode filenames are representable, you can hard-code the assumption that the filesystem uses UTF-8 filenames. This is the "modern" Linux desktop-app approach. Just convert your strings from wchar_t (UTF-32) to UTF-8 with library functions (iconv would work well) or your own implementation (but look up the specs so you don't get it horribly wrong like Shelwien did), then use fopen.
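For illustration, here is a minimal sketch of that route, assuming glibc's iconv (glibc accepts "WCHAR_T" as an encoding name); fopen_wide is just a hypothetical helper name:

#include <iconv.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

FILE* fopen_wide( const wchar_t* wpath, const char* mode ) {
  iconv_t cd = iconv_open( "UTF-8", "WCHAR_T" );         // glibc-specific encoding name
  if( cd==(iconv_t)-1 ) return NULL;

  size_t inleft  = (wcslen(wpath)+1) * sizeof(wchar_t);  // include the terminator
  size_t outlen  = inleft;                // UTF-8 never needs more bytes than UTF-32
  char*  utf8    = malloc( outlen );
  if( !utf8 ) { iconv_close(cd); return NULL; }

  char*  in      = (char*)wpath;
  char*  out     = utf8;
  size_t outleft = outlen;
  size_t r = iconv( cd, &in, &inleft, &out, &outleft );
  iconv_close( cd );
  if( r==(size_t)-1 ) { free(utf8); return NULL; }       // unconvertible name

  FILE* f = fopen( utf8, mode );
  free( utf8 );
  return f;
}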
If you want to do things the more standards-oriented way, you should use wcsrtombs to convert the wchar_t string to a multibyte char string in the locale's encoding (which hopefully is UTF-8 anyway on any modern system) and then use fopen. Note that this requires that you have previously set the locale with setlocale(LC_CTYPE, "") or setlocale(LC_ALL, "").
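A minimal sketch of this route, with fopen_wide_locale as a hypothetical helper name (the setlocale call has to come first):

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

FILE* fopen_wide_locale( const wchar_t* wpath, const char* mode ) {
  mbstate_t st; memset( &st, 0, sizeof(st) );
  const wchar_t* src = wpath;

  size_t n = wcsrtombs( NULL, &src, 0, &st );   // measure the multibyte length
  if( n==(size_t)-1 ) return NULL;              // not representable in this locale

  char* path = malloc( n+1 );
  if( !path ) return NULL;

  src = wpath; memset( &st, 0, sizeof(st) );
  wcsrtombs( path, &src, n+1, &st );            // convert, including the terminator

  FILE* f = fopen( path, mode );
  free( path );
  return f;
}

int main( void ) {
  setlocale( LC_ALL, "" );                      // adopt the user's locale, hopefully UTF-8
  FILE* f = fopen_wide_locale( L"ex\u00e4mple.txt", "r" );  // hypothetical file name
  if( f ) fclose( f );
  return 0;
}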
And finally, not exactly an answer but a recommendation:
Storing filenames as wchar_t strings is probably a horrible mistake. You should instead store filenames as abstract byte strings, and only convert those to wchar_t just-in-time for displaying them in the user interface (if it's even necessary for that; many UI toolkits use plain byte strings themselves and do the interpretation as characters for you). This way you eliminate a lot of possible nasty corner cases, and you never encounter a situation where some files are inaccessible due to their names.
(Files can have anything you want inside them.)
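As a sketch of that approach, you might keep the raw bytes from the filesystem and widen them only when displaying; widen_for_display is a hypothetical helper, and it assumes setlocale() has already been called:

#include <stdlib.h>
#include <wchar.h>

// Returns a freshly malloc'd wide copy of the name, or NULL if the bytes are
// not valid in the current locale (in which case keep showing the raw bytes).
wchar_t* widen_for_display( const char* raw_name ) {
  size_t n = mbstowcs( NULL, raw_name, 0 );     // measure only
  if( n==(size_t)-1 ) return NULL;              // undecodable: keep the byte string
  wchar_t* wide = malloc( (n+1)*sizeof(wchar_t) );
  if( wide ) mbstowcs( wide, raw_name, n+1 );   // convert, including the terminator
  return wide;                                  // caller frees it after display
}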
With respect to filenames, Linux does not really have a string encoding to worry about. Filenames are byte strings that just need to be null-terminated.
This doesn't mean that Linux filenames are UTF-8, but it does mean that they are not compatible with wide-character strings, since a wide character can contain a zero byte that is not the terminator.
But UTF-8 preserves the no-nulls-except-at-the-end model, so I have to believe that the practical approach is "convert to UTF-8" for filenames.
The content of files is a matter for standards above the Linux kernel level, so here there isn't anything Linux-y that you can or want to do. The content of files will be solely the concern of the programs that read and write them. Linux just stores and returns the byte stream, and it can have all the embedded nuls you want.
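A tiny illustration of the filename point above: fopen passes the name bytes straight through to the kernel, so any null-free byte sequence works as a name. The bytes below happen to be the UTF-8 encoding of a hypothetical name "héllo.txt", but no encoding is enforced:

#include <stdio.h>

int main( void ) {
  const char name[] = "h\xC3\xA9llo.txt";   // just bytes to the kernel
  FILE* f = fopen( name, "w" );
  if( f ) fclose( f );
  return 0;
}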
Convert the wchar_t string to a UTF-8 char string, then use fopen.
typedef unsigned int uint;
typedef unsigned short word;
typedef unsigned char byte;
int UTF16to8( wchar_t* w, char* s ) {
  // Encode a zero-terminated string of 16-bit wide characters as UTF-8.
  // Caveats: handles only the BMP (no surrogate pairs, no 4-byte UTF-8) and
  // assumes 16-bit wide chars; on Linux wchar_t is 32 bits, so the (word*)
  // cast does not read it correctly there.
  uint c;
  word* p = (word*)w;
  byte* q = (byte*)s; byte* q0 = q;
  while( 1 ) {
    c = *p++;
    if( c==0 ) break;
    if( c<0x080 ) *q++ = c;                                     // 1 byte: ASCII
    else if( c<0x800 ) *q++ = 0xC0+(c>>6), *q++ = 0x80+(c&63);  // 2-byte sequence
    else *q++ = 0xE0+(c>>12), *q++ = 0x80+((c>>6)&63), *q++ = 0x80+(c&63); // 3-byte sequence
  }
  *q = 0;
  return q-q0;
}

int UTF8to16( char* s, wchar_t* w ) {
  // Decode a zero-terminated UTF-8 string into 16-bit wide characters.
  // Same caveats as above: BMP only, 16-bit wide chars assumed.
  uint cache=0, wait=0, c;
  byte* p = (byte*)s;
  word* q = (word*)w; word* q0 = q;
  while( 1 ) {
    c = *p++;
    if( c==0 ) break;
    if( c<0x80 ) cache=c, wait=0;                // ASCII byte
    else if( c<0xC0 ) {                          // continuation byte 10xxxxxx
      if( wait ) cache = (cache<<6)+(c&63), wait--;
    }
    else if( c<0xE0 ) cache=c&31, wait=1;        // 2-byte lead 110xxxxx
    else cache=c&15, wait=2;                     // 3-byte lead 1110xxxx
    if( wait==0 ) *q++ = cache;
  }
  *q = 0;
  return q-q0;
}