How can I print a string like this: €áa¢cée£
on the console/screen? I tried this:
    #include <iostream>
    #include <string>

    using namespace std;

    wstring wStr = L"€áa¢cée£";

    int main (void)
    {
        wcout << wStr << " : " << wStr.length() << endl;
        return 0;
    }
which does not work. Even more confusing: if I remove € from the string, the output comes out as ?a?c?e? : 7, but with € in the string, nothing gets printed after the € character.
If I write the same code in Python:
    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    wStr = u"€áa¢cée£"
    print u"%s" % wStr
it prints out the string correctly on the very same console. What am I missing in C++ (well, I'm just a noob)? Cheers!!

Update 1: based on n.m.'s first suggestion
    #include <iostream>
    #include <string>

    using namespace std;

    string wStr = "€áa¢cée£";
    char *pStr = 0;

    int main (void)
    {
        cout << wStr << " : " << wStr.length() << endl;
        pStr = &wStr[0];
        for (unsigned int i = 0; i < wStr.length(); i++) {
            cout << "char " << i+1 << " # " << *pStr << " => " << pStr << endl;
            pStr++;
        }
        return 0;
    }
First of all, it reports 14 as the length of the string: €áa¢cée£ : 14. Is it because it's counting 2 bytes per character? (See the sketch after this update for what is actually being counted.) And this is all I get:
    char 1 # ? => €áa¢cée£
    char 2 # ? => ??áa¢cée£
    char 3 # ? => ?áa¢cée£
    char 4 # ? => áa¢cée£
    char 5 # ? => ?a¢cée£
    char 6 # a => a¢cée£
    char 7 # ? => ¢cée£
    char 8 # ? => ?cée£
    char 9 # c => cée£
    char 10 # ? => ée£
    char 11 # ? => ?e£
    char 12 # e => e£
    char 13 # ? => £
    char 14 # ? => ?
as the last cout output. So the actual problem still remains, I believe. Cheers!!
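Here is a quick sketch of where the 14 comes from (not part of the original attempt; it assumes a C++11 compiler and gcc's default UTF-8 execution charset): std::string::length() counts bytes, not characters, and UTF-8 is a variable-width encoding. € takes 3 bytes, each of á, ¢, é and £ takes 2 bytes, and each ASCII character takes 1 byte, so 3+2+1+2+1+2+1+2 = 14.

    #include <cstddef>
    #include <iostream>
    #include <string>

    int main()
    {
        std::string s = "€áa¢cée£"; // stored as UTF-8 bytes by gcc

        std::cout << s.length() << " bytes" << std::endl; // 14

        // Count code points instead: every UTF-8 byte that is NOT a
        // continuation byte (pattern 10xxxxxx) starts a new character.
        std::size_t codePoints = 0;
        for (unsigned char c : s)
            if ((c & 0xC0) != 0x80)
                ++codePoints;
        std::cout << codePoints << " characters" << std::endl; // 8
        return 0;
    }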
Update 2: based on n.m.'s second suggestion
    #include <clocale>
    #include <iostream>
    #include <string>

    using namespace std;

    wchar_t wStr[] = L"€áa¢cée£";
    int iStr = sizeof(wStr) / sizeof(wStr[0]); // length of the string
    wchar_t *pStr = 0;

    int main (void)
    {
        setlocale(LC_ALL, "");
        wcout << wStr << " : " << iStr << endl;
        pStr = &wStr[0];
        for (int i = 0; i < iStr; i++) {
            wcout << *pStr << " => " << static_cast<void*>(pStr) << " => " << pStr << endl;
            pStr++;
        }
        return 0;
    }
And this is what I get as my result:
    €áa¢cée£ : 9
    € => 0x1000010e8 => €áa¢cée£
    á => 0x1000010ec => áa¢cée£
    a => 0x1000010f0 => a¢cée£
    ¢ => 0x1000010f4 => ¢cée£
    c => 0x1000010f8 => cée£
    é => 0x1000010fc => ée£
    e => 0x100001100 => e£
    £ => 0x100001104 => £
      => 0x100001108 =>
Why is it reported as 9 rather than 8? Or is this what I should expect? Cheers!!
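For what it's worth, a quick sketch (my own, not from the original post) of where the extra element comes from: an array initialized from a string literal includes the terminating L'\0', so sizeof-based counting gives 8 visible characters + 1, while wcslen() stops at the terminator:

    #include <cwchar>
    #include <iostream>

    int main()
    {
        wchar_t wStr[] = L"€áa¢cée£";

        // sizeof counts every element in the array, including the
        // terminating L'\0' appended by the literal: 8 + 1 = 9.
        std::wcout << sizeof(wStr) / sizeof(wStr[0]) << std::endl; // 9

        // wcslen stops at the terminator, so it reports the visible length.
        std::wcout << std::wcslen(wStr) << std::endl; // 8
        return 0;
    }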
You can also print wide strings with the printf family:

    #include <cstdio>
    #include <string>

    int main()
    {
        std::wstring str1 = L"SomeText";
        std::wstring str2(L"OtherText!");
        printf("Wide String1 - %ls\n", str1.c_str());
        // Note: avoid mixing narrow and wide output on the same stream in real code.
        wprintf(L"Wide String2 - %ls\n", str2.c_str());
        return 0;
    }

For printf, %s takes a narrow char string and %ls takes a wide char string. (The original snippet passed a wide string to wprintf via %s; that is a Microsoft-specific convention, while %ls is the portable form.)
These are the two classes you will actually use. std::string is used for ASCII and UTF-8 strings. std::wstring is used for wide-character strings, whose encoding is platform-dependent (UTF-16 on Windows, usually UTF-32 on Linux and macOS). Since C++11 there are also std::u16string and std::u32string for explicit UTF-16 and UTF-32 storage; like std::string and std::wstring, they are specializations of std::basic_string.
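A minimal sketch (my own, assuming a C++11 compiler) of the literal prefix that pairs with each class:

    #include <string>

    int main()
    {
        std::string    s8  = "€áa¢cée£";  // stored as UTF-8 bytes by gcc
        std::wstring   ws  = L"€áa¢cée£"; // wchar_t: UTF-16 or UTF-32, platform-dependent
        std::u16string s16 = u"€áa¢cée£"; // UTF-16 code units (C++11)
        std::u32string s32 = U"€áa¢cée£"; // UTF-32: one element per code point
        return 0;
    }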
std::to_wstring in C++: this function converts a numerical value to a wide string, i.e. it formats a value of an arithmetic type (int, long, long long, their unsigned variants, float, double, long double) and returns the result as a std::wstring.
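For example (std::to_wstring lives in <string> since C++11):

    #include <iostream>
    #include <string>

    int main()
    {
        std::wcout << std::to_wstring(42) << std::endl;      // 42
        std::wcout << std::to_wstring(3.14159) << std::endl; // 3.141590 (formatted like %f)
        return 0;
    }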
UTF-16 is a specific Unicode encoding. std::wstring is a string implementation that uses wchar_t as its underlying type for storing each element (in contrast, regular std::string uses char). The encoding used with wchar_t does not necessarily have to be UTF-16; it can also be UTF-32, which is what gcc uses on Linux, where wchar_t is 32 bits wide.
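A one-line check (my own sketch) to see which width wchar_t has on your platform:

    #include <iostream>

    int main()
    {
        // Typically 2 (UTF-16 code units) on Windows and 4 (UTF-32) on Unix-likes.
        std::cout << "sizeof(wchar_t) = " << sizeof(wchar_t) << std::endl;
        return 0;
    }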
Drop the L before the string literal. Use std::string, not std::wstring.
UPD: There's a better (correct) solution: keep wchar_t, wstring and the L, and call setlocale(LC_ALL, "") at the beginning of your program.
You should call setlocale(LC_ALL, "") at the beginning of your program anyway. This instructs your program to work with your environment's locale instead of the default "C" locale. Your environment has a UTF-8 locale, so everything should work.
Without calling setlocale(LC_ALL, ""), the program works with UTF-8 sequences without "realizing" that they are UTF-8. If a correct UTF-8 sequence is printed on the terminal, it will be interpreted as UTF-8 and everything will look fine. That's what happens if you use string and char: gcc uses UTF-8 as the default encoding for strings, and the ostream happily prints them without applying any conversion; it thinks it has a sequence of ASCII characters.
But when you use wchar_t, everything breaks: gcc uses UTF-32, the correct re-encoding is not applied (because the locale is "C"), and the output is garbage.
When you call setlocale(LC_ALL, ""), the program knows it should recode UTF-32 to UTF-8, and everything is fine and dandy again.
This all assumes that we only ever want to work with UTF-8. Using arbitrary locales and encodings is beyond the scope of this answer.
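Putting it together, a minimal working sketch of the wide-character version (essentially the asker's Update 2 code, trimmed down):

    #include <clocale>
    #include <iostream>

    int main()
    {
        // Adopt the environment's locale (e.g. a UTF-8 one) instead of the
        // default "C" locale, so wcout recodes wide characters to UTF-8.
        std::setlocale(LC_ALL, "");
        std::wcout << L"€áa¢cée£" << std::endl;
        return 0;
    }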