I understand that Oracle supports multiple character sets, but how can I determine whether the 11g system where I work has that functionality enabled?
If the database underlies an SAP system, you can log on at the OS level with the <sid>adm user and run disp+work. The generated output will show whether the system is Unicode or not.
Oracle uses the database character set for: data stored in SQL CHAR datatypes (CHAR, VARCHAR2, CLOB, and LONG); identifiers such as table names, column names, and PL/SQL variables; and entering and storing SQL and PL/SQL source code.
The term “multibyte character” is defined by ISO C to denote a sequence of one or more bytes that encodes a single character, no matter what encoding scheme is employed. All multibyte characters are members of the “extended character set,” and a regular single-byte character is just a special case of a multibyte character.
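To see what a multibyte character looks like inside the database, a quick SQL check can help. This is only a sketch: it assumes the database character set is AL32UTF8, and the values in the comments are what that encoding would typically produce, not guaranteed output.

-- Illustrative only: assumes the database character set is AL32UTF8.
-- LENGTH counts characters, LENGTHB counts bytes, and DUMP shows the
-- stored byte sequence, so a multibyte character becomes visible.
SELECT LENGTH('é')     AS char_count,   -- 1 character
       LENGTHB('é')    AS byte_count,   -- 2 bytes in AL32UTF8
       DUMP('é', 1016) AS raw_bytes     -- e.g. Typ=96 Len=2 CharacterSet=AL32UTF8: c3,a9
  FROM dual;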
The database character set of an Oracle database can be determined by running the following query in SQL*Plus or PDSQL:

SELECT * FROM NLS_DATABASE_PARAMETERS WHERE parameter = 'NLS_CHARACTERSET';

Note that the Oracle character set must be compatible with the client code page that ClearQuest is using.
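For a side-by-side view, the following query returns both the database and the national character set from the same data dictionary view; the values shown in the comments are only typical examples of what a Unicode-enabled database returns, not guaranteed output.

SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
-- Typical Unicode-enabled results:
-- NLS_CHARACTERSET         AL32UTF8
-- NLS_NCHAR_CHARACTERSET   AL16UTF16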
SELECT *
FROM v$nls_parameters
WHERE parameter LIKE '%CHARACTERSET';
will show you the database and the national character set. The database character set controls the encoding of data in CHAR and VARCHAR2 columns. If the database supports Unicode in those columns, the database character set should be AL32UTF8 (or UTF8 in some rare cases). The national character set controls the encoding of data in NCHAR and NVARCHAR2 columns. If the database character set does not support Unicode, you may still be able to store Unicode data in columns of these national data types, but that generally adds complexity to the system: applications may have to change to support the national character set.
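As an illustration of that added complexity, here is a minimal sketch of storing Unicode in an NVARCHAR2 column when the database character set itself is not Unicode. The table and column names are made up for this example.

-- Hypothetical table used only for illustration.
CREATE TABLE translations (
  id      NUMBER PRIMARY KEY,
  text_n  NVARCHAR2(100)   -- stored in the national character set (e.g. AL16UTF16)
);

-- The N'' literal marks the string as national character data so it is not
-- forced through the (non-Unicode) database character set.
INSERT INTO translations (id, text_n) VALUES (1, N'Grüße aus München');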