
Why Isn't dchar the Standard Character Type in D?

Just browsing the digitalmars.D.learn forum and D-related questions on StackOverflow, it seems to me that a major source of mistakes for a beginner D programmer (me included) is the difference in usage and abilities of char, wchar, dchar, and the associated string types. This leads to problems such as the following:

  • error instantiating redBlackTree template
  • Cannot Slice Take!R from std.range in D?
  • std.algorithm.joiner(string[],string) - why result elements are dchar and not char?
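
The third of those, for example, comes down to roughly this (a small sketch; std.algorithm.joiner and std.range.ElementType are standard Phobos):

    import std.algorithm : joiner;
    import std.range : ElementType;

    void main()
    {
        // Joining an array of strings: you might expect elements of type char back...
        auto r = ["foo", "bar"].joiner(", ");

        // ...but the elements are dchar, because Phobos iterates strings
        // by decoded code point rather than by char.
        static assert(is(ElementType!(typeof(r)) == dchar));
    }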

I know it must be for backwards compatibility reasons and familiarity for developers coming from C++ or C, but I think a fairly compelling argument can be made that this possible gain is offset by the problems experienced by those same developers when they try something non-trivial with a char or string and expect it to work as it would in C/C++, only to have it fail in difficult-to-debug ways.

To stave off a lot of these problems, I've seen experienced members of the D development community time and time again tell inexperienced coders to use dchar, which raises the question: why isn't char a 32-bit Unicode character by default, with 8-bit ASCII characters relegated to achar or something similar, to be touched only if necessary?

Asked Nov 13 '12 by Meta

2 Answers

Personally, I wish that char didn't exist and that instead of char, wchar, and dchar, we had something more like utf8, utf16, and utf32. Then everyone would be immediately forced to realize that char was not what should be used for individual characters, but that's not the way it went. I'd say that it's almost certainly the case that char was simply taken from C/C++ and then the others were added to improve Unicode support. After all, there's nothing fundamentally wrong with char. It's just that so many programmers have the mistaken understanding that char is always a character (which isn't necessarily true even in C/C++). But Walter Bright has a very good understanding of Unicode and seems to think that everyone else should as well, so he tends to make decisions with regards to Unicode which work extremely well if you understand Unicode but don't work quite as well if you don't (and most programmers don't). D pretty much forces you to come to at least a basic understanding of Unicode, which isn't all bad, but it does trip some people up.
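
A small sketch of what that means in practice (plain D, nothing project-specific): a char holds a UTF-8 code unit, so a single non-ASCII character does not fit in one char, whereas a dchar always holds a full code point.

    void main()
    {
        string s = "é";          // one character (one code point)...
        assert(s.length == 2);   // ...but two UTF-8 code units (two chars)

        dchar d = 'é';           // a dchar always holds a complete code point
        assert(d == 0x00E9);
    }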

But the reality of the matter is that while it makes good sense to use dchar for individual characters, it generally doesn't make sense to use it for strings. Sometimes, that's what you need, but UTF-32 requires way more space than UTF-8 does. That could affect performance and definitely affects the memory footprint of your programs. And a lot of string processing doesn't need random access at all. So, having UTF-8 strings as the default makes far more sense than having UTF-32 strings be the default.
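
To make the space argument concrete, here's a rough illustration (sizes assume the usual 1/2/4-byte code units for char/wchar/dchar):

    import std.stdio;

    void main()
    {
        // The same six-character text in each encoding:
        string  s8  = "résumé";
        wstring s16 = "résumé";
        dstring s32 = "résumé";

        writeln(s8.length  * char.sizeof);   // 8 bytes in UTF-8
        writeln(s16.length * wchar.sizeof);  // 12 bytes in UTF-16
        writeln(s32.length * dchar.sizeof);  // 24 bytes in UTF-32
    }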

The way strings are managed in D generally works extremely well. It's just that the name char carries the wrong connotation for many people, and the language unfortunately defaults character literals to char rather than dchar in many cases.

"I think a fairly compelling argument can be made that this possible gain is offset by the problems experienced by those same developers when they try something non-trivial with a char or string and expect it to work as it would in C/C++, only to have it fail in difficult-to-debug ways."

The reality of the matter is that strings in C/C++ work the same way that they do in D, only they don't protect you from being ignorant or stupid, unlike in D. char in C/C++ is always 8 bits and is typically treated as a UTF-8 code unit by the OS (at least in *nix land - Windows does weird things for the encoding for char and generally requires you to use wchar_t for Unicode). Certainly, any Unicode strings that you have in C/C++ are in UTF-8 unless you explicitly use a string type which uses a different encoding. std::string and C strings all operate on code units rather than code points. But the average C/C++ programmer treats them as if each of their elements were a whole character, which is just plain wrong unless you're only using ASCII, and in this day and age, that's often a very bad assumption.
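
The same is true in D at the array level, for what it's worth: indexing and .length operate on code units, just as they do for char* in C (a small sketch):

    void main()
    {
        string s = "café";
        assert(s.length == 5);   // 5 UTF-8 code units, but only 4 characters
        assert(s[0] == 'c');
        // s[3] is merely the first byte of 'é', not a whole character.
    }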

D takes the route of actually building proper Unicode support into the language and into its standard library. This forces you to come to at least a basic understanding of Unicode and often makes it harder to screw up, while giving those who do understand it extremely powerful tools for managing Unicode strings not only correctly but efficiently. C/C++ just sidesteps the issue and lets programmers step on Unicode land mines.
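
For example, the range primitives in Phobos decode strings for you, and std.utf lets you opt back into the raw code-unit view when you want it (a small sketch):

    import std.range : walkLength;
    import std.utf : byCodeUnit;

    void main()
    {
        string s = "café";

        // Range-based iteration decodes: walkLength counts code points.
        assert(s.walkLength == 4);

        // Asking for the raw UTF-8 view is explicit:
        assert(s.byCodeUnit.walkLength == 5);
    }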

Answered by Jonathan M Davis


I understood the question as "Why isn't dchar used in strings by default?"

dchar is a UTF-32 code unit. You rarely want to deal with UTF-32 code units, because they waste too much space, especially if you deal only with ASCII strings.

Using UTF-8 code units (the corresponding D type is char) is much more space-efficient.

A D string is an immutable(char)[], i.e. an array of UTF-8 code units.
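
In other words, the three built-in string aliases are just arrays of the three character types:

    static assert(is(string  == immutable(char)[]));
    static assert(is(wstring == immutable(wchar)[]));
    static assert(is(dstring == immutable(dchar)[]));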

Yes, dealing with UTF-32 code units can arguably boost the speed of your application if you constantly do random access on strings. But if you know you are going to do that with some particular text, use the dstring type in that case. That said, you should now understand why D treats strings as ranges of dchar.
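
A short sketch of that last point: iterating a string with the range primitives (or with foreach over dchar) yields decoded code points, while a dstring gives you cheap random access per character.

    import std.range;

    void main()
    {
        string s = "née";
        assert(s.front == 'n');   // front decodes to a dchar
        s.popFront();
        assert(s.front == 'é');   // a whole code point, even though it spans two chars

        dstring d = "née";
        assert(d[1] == 'é');      // random access by character
    }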

Answered by DejanLekic