
JavaScript strings - UTF-16 vs UCS-2?

I've read in some places that JavaScript strings are UTF-16, and in other places they're UCS-2. I did some searching around to try to figure out the difference and found this:

Q: What is the difference between UCS-2 and UTF-16?

A: UCS-2 is obsolete terminology which refers to a Unicode implementation up to Unicode 1.1, before surrogate code points and UTF-16 were added to Version 2.0 of the standard. This term should now be avoided.

UCS-2 does not define a distinct data format, because UTF-16 and UCS-2 are identical for purposes of data exchange. Both are 16-bit, and have exactly the same code unit representation.

Sometimes in the past an implementation has been labeled "UCS-2" to indicate that it does not support supplementary characters and doesn't interpret pairs of surrogate code points as characters. Such an implementation would not handle processing of character properties, code point boundaries, collation, etc. for supplementary characters.

via: http://www.unicode.org/faq/utf_bom.html#utf16-11

So my question is, is it because the JavaScript string object's methods and indexes act on 16-bit data values instead of characters what make some people consider it UCS-2? And if so, would a JavaScript string object oriented around characters instead of 16-bit data chunks be considered UTF-16? Or is there something else I'm missing?
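
To make the question concrete, here is a small sketch (the sample character and variable name are just illustrative) of how the built-in string methods count and index 16-bit code units rather than characters:

    // U+1D306 (TETRAGRAM FOR CENTRE) lies outside the Basic Multilingual Plane,
    // so it is stored as a surrogate pair of two 16-bit code units.
    var s = '\uD834\uDF06';

    console.log(s.length);                      // 2 -- counts code units, not characters
    console.log(s.charCodeAt(0).toString(16));  // "d834" -- high surrogate
    console.log(s.charCodeAt(1).toString(16));  // "df06" -- low surrogate
    console.log(s.charAt(0) === '\uD834');      // true -- a lone surrogate, not a full character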

Edit: As requested, here are some sources saying JavaScript strings are UCS-2:

http://blog.mozilla.com/nnethercote/2011/07/01/faster-javascript-parsing/

http://terenceyim.wordpress.com/tag/ucs2/

EDIT: For anyone who may come across this, be sure to check out this link:

http://mathiasbynens.be/notes/javascript-encoding

asked Jan 03 '12 by patorjk


People also ask

Is UCS-2 the same as UTF-16?

UCS-2 is obsolete and has been superseded by UTF-16, which can encode the full Unicode range. UCS-2 is fixed-width, using two bytes per character; UTF-16 is variable-width, using two bytes for characters in the Basic Multilingual Plane and four bytes (a surrogate pair) for supplementary characters. For BMP characters the two encodings are byte-for-byte identical.

Are JavaScript strings UTF-16?

The String type is generally used to represent textual data in a running ECMAScript program, in which case each element in the String is treated as a UTF-16 code unit value.

Does Windows use UTF-16 or UCS-2?

Windows uses UTF-16; it previously used UCS-2, with UTF-16 support added in Windows 2000. UTF-16 is a variable-width Unicode encoding that uses two or four bytes per character.

Should I use UTF-8 or UTF-16?

If your data is mostly in Western languages and you want to reduce the amount of storage needed, go with UTF-8: for those languages it will take about half the storage of UTF-16.
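
As a rough sketch of that trade-off (using the TextEncoder API, which is not mentioned above and is assumed to be available, as it is in modern browsers and Node.js), you can compare a string's UTF-8 byte count with its UTF-16 size of two bytes per code unit:

    // TextEncoder always produces UTF-8; JavaScript strings use 2 bytes per code unit.
    var encoder = new TextEncoder();

    ['hello', 'こんにちは'].forEach(function (s) {
      var utf16Bytes = s.length * 2;            // 16 bits per code unit
      var utf8Bytes = encoder.encode(s).length; // actual UTF-8 byte count
      console.log(s + ': UTF-16 ' + utf16Bytes + ' bytes, UTF-8 ' + utf8Bytes + ' bytes');
    });
    // 'hello'      -> UTF-16 10 bytes, UTF-8 5 bytes   (ASCII text favours UTF-8)
    // 'こんにちは' -> UTF-16 10 bytes, UTF-8 15 bytes  (CJK text favours UTF-16)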


2 Answers

JavaScript (strictly speaking, ECMAScript) pre-dates Unicode 2.0, so in some cases you may find references to UCS-2 simply because that was correct at the time the reference was written. Can you point us to specific citations of JavaScript being "UCS-2"?

The ECMAScript specifications for versions 3 and 5, at least, both explicitly declare a String to be a collection of unsigned 16-bit integers, and state that if those integer values are meant to represent textual data, then they are UTF-16 code units. See section 8.4 of the ECMAScript Language Specification.


EDIT: I'm no longer sure my answer is entirely correct. See the excellent article mentioned above, http://mathiasbynens.be/notes/javascript-encoding, which in essence says that while a JavaScript engine may use UTF-16 internally (and most do), the language itself effectively exposes strings as if they were UCS-2.
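
A short sketch of that distinction (note that codePointAt, String.fromCodePoint, and the spread operator are ES2015 additions, so they post-date this answer):

    var pair = '\uD83D\uDE00';                        // U+1F600, stored as a surrogate pair

    // Code-unit (UCS-2-like) view: length and indexing see two 16-bit halves.
    console.log(pair.length);                         // 2
    console.log(pair.charCodeAt(0).toString(16));     // "d83d" -- high surrogate
    console.log(pair[1] === '\uDE00');                // true -- a lone low surrogate

    // Code-point-aware (UTF-16) view: iteration and codePointAt see one character.
    console.log([...pair].length);                    // 1
    console.log(pair.codePointAt(0).toString(16));    // "1f600"
    console.log(String.fromCodePoint(0x1F600) === pair); // true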

answered by dgvid


It's UTF-16/UCS-2. It can hold surrogate pairs, but charAt/charCodeAt return 16-bit code units rather than Unicode code points. If you want to handle surrogate pairs yourself, I suggest a quick read through this.
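
For instance, here is a minimal sketch of combining the two 16-bit values that charCodeAt returns into the code point they encode (toCodePoint is a hypothetical helper, not a built-in):

    // Combine a high/low surrogate pair into the supplementary code point it encodes.
    function toCodePoint(high, low) {
      return (high - 0xD800) * 0x400 + (low - 0xDC00) + 0x10000;
    }

    var s = '\uD834\uDF06';                               // U+1D306
    var cp = toCodePoint(s.charCodeAt(0), s.charCodeAt(1));
    console.log(cp.toString(16));                         // "1d306"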

answered by Daniel Moses