UTF8 national character data type support WIP patch and list of open issues.

Started by Boguk, Maksym over 12 years ago, 58 messages
#1 Boguk, Maksym
maksymb@fast.au.fujitsu.com
1 attachment(s)

Hi,

As part of my job I started developing in-core support for the UTF8
National Character types (national character / national character
varying).
I have attached the current WIP patch (against HEAD) for community review.

Target usage: the ability to store UTF8 national characters in selected
fields inside a single-byte encoded database.
For example, if I have a ru-RU.koi8r encoded database with mostly Russian
text inside, it would be nice to be able to store Japanese text in one
field without converting the whole database to UTF8 (converting such a
database to UTF8 could easily almost double its size, even if only one
field in the whole database uses any symbols outside the ru-RU.koi8r
encoding).
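To make the size argument concrete, here is a small illustration (not part of the patch) of why converting a KOI8-R database to UTF8 can nearly double its size: Cyrillic characters take one byte in KOI8-R but two bytes in UTF8.

```python
# Illustration only: byte sizes of the same Russian text in the two encodings.
russian = "Привет, мир"  # "Hello, world" in Russian: 9 Cyrillic chars + ", "

koi8_bytes = russian.encode("koi8_r")  # single-byte encoding
utf8_bytes = russian.encode("utf-8")   # multibyte encoding

print(len(koi8_bytes))  # 11 bytes: one byte per character
print(len(utf8_bytes))  # 20 bytes: two bytes per Cyrillic character
```

For mostly-Cyrillic data the UTF8 representation is close to twice the size, which is the cost the NATIONAL types are meant to avoid for the rest of the database.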

What has been done:

1) Addition of the new string data types NATIONAL CHARACTER and NATIONAL
CHARACTER VARYING.
These types differ from the char/varchar data types in one important
respect: NATIONAL string types always have UTF8 encoding, independent of
the database encoding.
Of course, that leads to encoding conversion overhead when comparing
NATIONAL string types with common string types (which is expected and
unavoidable).
2) Some ECPG support for these types.
3) A documentation patch (not finished).
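The encoding conversion overhead mentioned in point 1 can be sketched in miniature (a Python illustration, not the patch's C code): a NATIONAL value stored as UTF8 bytes and an ordinary value stored in the database encoding cannot be compared byte-for-byte, so one side has to be converted first.

```python
# Illustration only: comparing UTF8-encoded NATIONAL data with data in the
# database encoding (KOI8-R here) requires an encoding conversion.
national_value = "тест".encode("utf-8")   # NATIONAL column: always UTF8
plain_value = "тест".encode("koi8_r")     # ordinary column: database encoding

print(national_value == plain_value)      # False: the raw bytes differ

# Convert the ordinary value to UTF8 before comparing:
converted = plain_value.decode("koi8_r").encode("utf-8")
print(converted == national_value)        # True
```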

What needs to be done:

1) A full set of string functions and operators for the NATIONAL types
(we cannot use the generic text functions because they assume that the
strings are in the database encoding).
Only a basic set is implemented so far.
2) Some way to define the default collation for the NATIONAL types.
3) Some way to input UTF8 characters into NATIONAL types via SQL (there
is a serious open problem here; it is described below).
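As a side note on todo item 2, a quick illustration of why the default collation choice matters (a Python sketch, not from the patch): sorting UTF8 data by raw code points does not match linguistic order. For example, Russian 'ё' (U+0451) sorts after 'ж' (U+0436) by code point, although alphabetically it belongs between 'е' and 'ж'.

```python
# Illustration only: code point order vs. Russian alphabetical order.
words = ["ёж", "еда", "жук"]

# A "C"-style sort compares code points, pushing "ёж" to the end:
print(sorted(words))  # ['еда', 'жук', 'ёж']

# Russian alphabetical order would instead be: еда, ёж, жук
```

So a byte-wise or code point collation is a workable default, but a linguistically correct one needs explicit collation support.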

The most serious open problem is that the patch in its current state does
not allow input/output of UTF8 symbols that cannot be represented in the
database encoding into NATIONAL fields.
This happens because the encoding conversion from client_encoding to the
database encoding takes place before the syntax analysis/parse stage and
throws an error for symbols that cannot be represented.
I don't see any good solution to this problem, short of making the whole
codebase use UTF8 for all internal operations, at a huge performance cost.
Maybe someone has a good idea how to deal with this issue.
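The failure mode can be shown in miniature (a Python sketch, not the server code): converting the whole query text into an encoding that cannot represent some of its characters fails before anything downstream can look at it.

```python
# Illustration only: converting a query containing Japanese text to KOI8-R
# fails, just as the server's client_encoding -> database encoding conversion
# fails before the parser ever sees the NATIONAL literal.
query = "INSERT INTO test VALUES (N'日本語')"

try:
    query.encode("koi8_r")  # analogous to the pre-parse conversion step
except UnicodeEncodeError:
    print("conversion failed before the parse stage")
```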

This is very much a WIP patch (with lots of things on the todo list and
polish still required).

Please tell me what you think about this idea/patch in general.

PS: This is my first patch to PostgreSQL, so there is certainly a lot of
room for improvement in style.

Kind Regards,
Maksym

Attachments:

pgHEAD_UTF8NCHAR.patch (application/octet-stream)
diff -U 5 -N -r -w ../orig.HEAD/doc/src/sgml/datatype.sgml ./doc/src/sgml/datatype.sgml
--- ../orig.HEAD/doc/src/sgml/datatype.sgml	2013-08-19 20:22:06.000000000 -0400
+++ ./doc/src/sgml/datatype.sgml	2013-08-19 20:54:00.000000000 -0400
@@ -159,10 +159,22 @@
        <entry></entry>
        <entry>currency amount</entry>
       </row>
 
       <row>
+       <entry><type>national character varying [ (<replaceable>n</replaceable>) ]</type></entry>
+       <entry><type>nvarchar [ (<replaceable>n</replaceable>) ]</type></entry>
+       <entry>variable-length national character string</entry>
+      </row>
+
+      <row>
+       <entry><type>national character [ (<replaceable>n</replaceable>) ]</type></entry>
+       <entry><type>nchar [ (<replaceable>n</replaceable>) ]</type></entry>
+       <entry>fixed-length national character string</entry>
+      </row>
+
+      <row>
        <entry><type>numeric [ (<replaceable>p</replaceable>,
          <replaceable>s</replaceable>) ]</type></entry>
        <entry><type>decimal [ (<replaceable>p</replaceable>,
          <replaceable>s</replaceable>) ]</type></entry>
        <entry>exact numeric of selectable precision</entry>
@@ -4661,6 +4673,195 @@
     <type>internal</> argument.
    </para>
 
   </sect1>
 
+  <sect1 id="datatype-national-character">
+   <title>National Character Types</title>
+
+   <indexterm zone="datatype-national-character">
+    <primary>national character string</primary>
+    <secondary>data types</secondary>
+   </indexterm>
+
+   <indexterm>
+    <primary>string</primary>
+    <see>national character string</see>
+   </indexterm>
+
+   <indexterm zone="datatype-national-character">
+    <primary>national character</primary>
+   </indexterm>
+
+   <indexterm zone="datatype-national-character">
+    <primary>national character varying</primary>
+   </indexterm>
+
+   <indexterm zone="datatype-national-character">
+    <primary>nchar</primary>
+   </indexterm>
+
+   <indexterm zone="datatype-national-character">
+    <primary>nvarchar</primary>
+   </indexterm>
+
+    <table id="datatype-national-character-table">
+     <title>National Character Types</title>
+     <tgroup cols="2">
+      <thead>
+       <row>
+        <entry>Name</entry>
+        <entry>Description</entry>
+       </row>
+      </thead>
+      <tbody>
+       <row>
+        <entry><type>national character varying(<replaceable>n</>)</type>, <type>nvarchar(<replaceable>n</>)</type></entry>
+        <entry>variable-length with limit</entry>
+       </row>
+       <row>
+        <entry><type>national character(<replaceable>n</>)</type>, <type>nchar(<replaceable>n</>)</type></entry>
+        <entry>fixed-length, blank padded</entry>
+       </row>
+     </tbody>
+     </tgroup>
+    </table>
+
+   <para>
+    <xref linkend="datatype-national-character-table"> shows the
+    general-purpose national character types available in
+    <productname>PostgreSQL</productname>.
+   </para>
+
+   <para>
+    <acronym>SQL</acronym> defines two primary national character types:
+    <type>national character varying(<replaceable>n</>)</type> and
+    <type>national character(<replaceable>n</>)</type>, where <replaceable>n</>
+    is a positive integer.  Both of these types can store strings up to
+    <replaceable>n</> national characters (not bytes) in length.  An attempt to store a
+    longer string into a column of these types will result in an
+    error, unless the excess characters are all spaces, in which case
+    the string will be truncated to the maximum length. 
+    If the string to be stored is shorter than the declared
+    length, values of type <type>national character</type> will be space-padded;
+    values of type <type>national character varying</type> will simply store the
+    shorter
+    string.
+   </para>
+
+   <para>
+    If one explicitly casts a value to <type>national character
+    varying(<replaceable>n</>)</type> or
+    <type>national character(<replaceable>n</>)</type>, then an over-length
+    value will be truncated to <replaceable>n</> characters without
+    raising an error. (This too is required by the
+    <acronym>SQL</acronym> standard.)
+   </para>
+
+   <para>
+    The notations <type>nvarchar(<replaceable>n</>)</type> and
+    <type>nchar(<replaceable>n</>)</type> are aliases for <type>national character
+    varying(<replaceable>n</>)</type> and
+    <type>national character(<replaceable>n</>)</type>, respectively.
+    <type>national character</type> without length specifier is equivalent to
+    <type>national character(1)</type>. If <type>national character varying</type> is used
+    without length specifier, the type accepts strings of any size. The
+    latter is a <productname>PostgreSQL</> extension.
+   </para>
+
+   <para>
+    Values of type <type>national character</type> are physically padded
+    with spaces to the specified width <replaceable>n</>, and are
+    stored and displayed that way.  However, the padding spaces are
+    treated as semantically insignificant.  Trailing spaces are
+    disregarded when comparing two values of type <type>national character</type>,
+    and they will be removed when converting a <type>national character</type> value
+    to one of the other string types.  Note that trailing spaces
+    <emphasis>are</> semantically significant in
+    <type>national character varying</type> values, and
+    when using pattern matching, e.g. <literal>LIKE</> and
+    regular expressions.
+   </para>
+
+   <para>
+    The storage requirement for a short string (up to 126 bytes) is 1 byte
+    plus the actual string, which includes the space padding in the case of
+    <type>national character</type>.  Longer strings have 4 bytes of overhead instead
+    of 1.  Long strings are compressed by the system automatically, so
+    the physical requirement on disk might be less. Very long values are also
+    stored in background tables so that they do not interfere with rapid
+    access to shorter column values. In any case, the longest
+    possible character string that can be stored is about 1 GB. (The
+    maximum value that will be allowed for <replaceable>n</> in the data
+    type declaration is less than that. It wouldn't be useful to
+    change this because with multibyte character encodings the number of
+    characters and bytes can be quite different. If you desire to
+    store long strings with no specific upper limit, use
+    <type>text</type> or <type>national character varying</type> without a length
+    specifier, rather than making up an arbitrary length limit.)
+   </para>
+
+   <tip>
+    <para>
+     There is no performance difference between these two types,
+     apart from increased storage space when using the blank-padded
+     type, and a few extra CPU cycles to check the length when storing into
+     a length-constrained column.  While
+     <type>national character(<replaceable>n</>)</type> has performance
+     advantages in some other database systems, there is no such advantage in
+     <productname>PostgreSQL</productname>; in fact
+     <type>national character(<replaceable>n</>)</type> is usually the slower of
+     the two because of its additional storage costs.  In most situations
+     <type>text</type> or <type>national character varying</type> should be used
+     instead.
+    </para>
+   </tip>
+
+   <para>
+    Refer to <xref linkend="sql-syntax-strings"> for information about
+    the syntax of string literals, and to <xref linkend="functions">
+    for information about available operators and functions. The
+    database character set determines the character set used to store
+    textual values. The <type>national character varying</type> and <type>national character</type> types are supported only when the database encoding is UTF-8; for more information on character set support,
+    refer to <xref linkend="multibyte">.
+   </para>
+
+   <example>
+    <title>Using the National Character Types</title>
+
+<programlisting>
+CREATE TABLE test1 (a national character(4));
+INSERT INTO test1 VALUES (N'ok');
+SELECT a, char_length(a) FROM test1; -- <co id="co.datatype-nchar">
+<computeroutput>
+  a   | char_length
+------+-------------
+ ok   |           2
+</computeroutput>
+
+CREATE TABLE test2 (b nvarchar(5));
+INSERT INTO test2 VALUES (N'ok');
+INSERT INTO test2 VALUES (N'good      ');
+INSERT INTO test2 VALUES (N'too long');
+<computeroutput>ERROR:  value too long for type national character varying(5)</computeroutput>
+INSERT INTO test2 VALUES ('too long'::nvarchar(5)); -- explicit truncation
+SELECT b, char_length(b) FROM test2;
+<computeroutput>
+   b   | char_length
+-------+-------------
+ ok    |           2
+ good  |           5
+ too l |           5
+</computeroutput>
+</programlisting>
+    <calloutlist>
+     <callout arearefs="co.datatype-nchar">
+      <para>
+       The <function>char_length</function> function is discussed in
+       <xref linkend="functions-string">.
+      </para>
+     </callout>
+    </calloutlist>
+   </example>
+
+  </sect1>
  </chapter>
diff -U 5 -N -r -w ../orig.HEAD/doc/src/sgml/ecpg.sgml ./doc/src/sgml/ecpg.sgml
--- ../orig.HEAD/doc/src/sgml/ecpg.sgml	2013-08-19 20:22:06.000000000 -0400
+++ ./doc/src/sgml/ecpg.sgml	2013-08-19 20:54:00.000000000 -0400
@@ -888,10 +888,20 @@
 
       <row>
        <entry><type>boolean</type></entry>
        <entry><type>bool</type><footnote><para>declared in <filename>ecpglib.h</filename> if not native</para></footnote></entry>
       </row>
+      
+      <row>
+       <entry><type>national character(<replaceable>n</>)</type>, <type>nvarchar(<replaceable>n</>)</type></entry>
+       <entry><type>NCHAR[<replaceable>n</>+1]</type>, <type>NVARCHAR[<replaceable>n</>+1]</type></entry>
+      </row>
+      
+      <row>
+       <entry><type>national character(<replaceable>n</>)</type>, <type>nvarchar(<replaceable>n</>)</type>, <type>character(<replaceable>n</>)</type>, <type>varchar(<replaceable>n</>)</type>, <type>text</type></entry>
+       <entry><type>UVARCHAR[<replaceable>2*n</>]</type></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
 
    <sect3 id="ecpg-char">
@@ -966,10 +976,197 @@
      also hold values of other SQL types, which will be stored in
      their string forms.
     </para>
    </sect3>
 
+   <sect3 id="ecpg-nchar">
+    <title>Handling National Character Strings</title>
+
+    <para>
+     To handle SQL national character string data types, such
+     as <type>nvarchar</type> and <type>nchar</type>, there are two
+     possible ways to declare the host variables.
+    </para>
+
+    <para>
+     One way is to use <type>NCHAR[]</type>, an array
+     of <type>NCHAR</type>, which is internally mapped to an array of
+     <type>char</type>, the most common way to handle character data in C.
+<programlisting>
+EXEC SQL BEGIN DECLARE SECTION;
+    NCHAR str[50];
+EXEC SQL END DECLARE SECTION;
+</programlisting>
+     Note that you have to take care of the length yourself.  If you
+     use this host variable as the target variable of a query which
+     returns a string with more than 49 characters, a buffer overflow
+     occurs.
+    </para>
+
+    <para>
+     The other way is using the <type>NVARCHAR</type> type, which is a
+     special type provided by ECPG.  The definition of an array of
+     type <type>NVARCHAR</type> is converted into a
+     named <type>struct</> for every variable. A declaration like:
+<programlisting>
+NVARCHAR var[180];
+</programlisting>
+     is converted into:
+<programlisting>
+struct varchar_var { int len; char arr[180]; } var;
+</programlisting>
+     The member <structfield>arr</structfield> hosts the string
+     including a terminating zero byte.  Thus, to store a string in
+     a <type>NVARCHAR</type> host variable, the host variable has to be
+     declared with the length including the zero byte terminator.  The
+     member <structfield>len</structfield> holds the length of the
+     string stored in the <structfield>arr</structfield> without the
+     terminating zero byte.  When a host variable is used as input for
+     a query, if <literal>strlen(arr)</literal>
+     and <structfield>len</structfield> are different, the shorter one
+     is used.
+    </para>
+
+    <para>
+     Two or more <type>NVARCHAR</type> host variables cannot be defined
+     in a single statement.  The following code will confuse
+     the <command>ecpg</command> preprocessor:
+<programlisting>
+NVARCHAR v1[128], v2[128];   /* WRONG */
+</programlisting>
+     The two variables should be defined in separate statements like this:
+<programlisting>
+NVARCHAR v1[128];
+NVARCHAR v2[128];
+</programlisting>
+    </para>
+
+    <para>
+     <type>NVARCHAR</type> and <type>NCHAR</type> can be written in upper or lower case, but
+     not in mixed case.
+    </para>
+
+    <para>
+     <type>NCHAR</type> and <type>NVARCHAR</type> host variables can
+     also hold values of other SQL types, which will be stored in
+     their string forms.  The <type>NVARCHAR</type> type is very similar to the
+     <type>VARCHAR</type> type and was added mainly for SQL standard compatibility.
+    </para>
+   </sect3>
+
+   <sect3 id="ecpg-uvarchar">
+    <title>Handling Strings with UVARCHAR type</title>
+
+    <para>
+     ECPG contains a special type <type>UVARCHAR</type> for defining host variables
+     that store character string and national character string values from the respective SQL types.
+
+     This type is preprocessed into a structure with length and buffer fields.
+
+     <programlisting>
+      struct uvarchar_var { int len; char arr[180]; } uvc;
+     </programlisting>
+
+    The data stored in a <type>UVARCHAR</type> structure is always in UTF-16 encoding; an implicit conversion from UTF-8 to UTF-16 takes place when the value is stored.
+    </para>
+    
+    <para>
+     A sample usage of <type>UVARCHAR</type> to handle string data:
+
+     <programlisting>
+      #include &lt;stdio.h>
+ 
+      EXEC SQL INCLUDE sqlca;
+      EXEC SQL WHENEVER SQLERROR   SQLPRINT;
+      
+      int main( int argc, char * argv[] )
+      {
+          EXEC SQL BEGIN DECLARE SECTION;
+          UVARCHAR uv[100];
+          EXEC SQL END DECLARE SECTION;
+      
+          EXEC SQL CONNECT TO postgres;
+          EXEC SQL CREATE TABLE test (a nvarchar);
+          EXEC SQL INSERT INTO test (a) values ('ok'),
+      ...
+          EXEC SQL SELECT a INTO :uv FROM test;
+          printf("value=%s length=%d\n", uv.arr, uv.len);
+      ...
+      ...
+      }
+     </programlisting>
+
+     The member <structfield>arr</structfield> hosts the string
+     without a terminating zero byte.  As the default encoding for 
+     <type>UVARCHAR</type> is UTF-16, to store a string in a 
+     <type>UVARCHAR</type> host variable, the host variable has to be
+     declared with a minimum length of 2*n. The member <structfield>len</structfield>
+     holds the length of the string stored in the <structfield>arr</structfield>.
+     When a host variable is used as input for a query, 
+     if <literal>strlen(arr)</literal> and <structfield>len</structfield> 
+     are different, the shorter one is used.
+
+    </para>
+
+   </sect3>
+
+   <sect3 id="ecpg-datatype-comparison">
+    <title>Comparison between VARCHAR, NVARCHAR and UVARCHAR Types</title>
+    <para>
+    The host data types used for handling string values differ in a few
+    aspects, such as memory usage, encoding, and performance.
+    <xref linkend="ecpg-datatype-comparison-table"> shows a comparison of
+    the host data types.
+    </para>
+    <table id="ecpg-datatype-comparison-table">
+     <title>Comparison between VARCHAR, NVARCHAR and UVARCHAR Types</title>
+
+     <tgroup cols="4">
+      <thead>
+       <row>
+        <entry></entry>
+        <entry>VARCHAR</entry>
+        <entry>NVARCHAR</entry>
+        <entry>UVARCHAR</entry>
+       </row>
+      </thead>
+
+      <tbody>
+       <row>
+        <entry>Maximum size for <replaceable>n</> characters</entry>
+        <entry>4*n + 1</entry>
+        <entry>4*n + 1</entry>
+        <entry>4*n</entry>
+       </row>
+       <row>
+        <entry>Minimum size for <replaceable>n</> characters</entry>
+        <entry>n + 1</entry>
+        <entry>n + 1</entry>
+        <entry>2*n</entry>
+       </row>
+       <row>
+        <entry>Encoding used</entry>
+        <entry>client encoding</entry>
+        <entry>client encoding</entry>
+        <entry>UTF-16</entry>
+       </row>
+       <row>
+        <entry>Zero terminated</entry>
+        <entry>Yes</entry>
+        <entry>Yes</entry>
+        <entry>No</entry>
+       </row>
+
+      </tbody>
+     </tgroup>
+    </table>
+
+    <para>
+     Note that the data conversion from UTF-8 to UTF-16 incurs a negligible performance overhead in the case of the <type>UVARCHAR</type> type.
+    </para>
+   </sect3>
+
    <sect3 id="ecpg-special-types">
     <title>Accessing Special Data Types</title>
 
     <para>
      ECPG contains some special types that help you to interact easily
diff -U 5 -N -r -w ../orig.HEAD/doc/src/sgml/syntax.sgml ./doc/src/sgml/syntax.sgml
--- ../orig.HEAD/doc/src/sgml/syntax.sgml	2013-08-19 20:22:06.000000000 -0400
+++ ./doc/src/sgml/syntax.sgml	2013-08-19 20:54:00.000000000 -0400
@@ -551,10 +551,37 @@
      To include the escape character in the string literally, write it
      twice.
     </para>
    </sect3>
 
+   <sect3 id="sql-syntax-strings-N">
+    <title>String Constants with prefix N</title>
+
+    <indexterm  zone="sql-syntax-strings-N">
+     <primary>Prefix N</primary>
+     <secondary>in string constants</secondary>
+    </indexterm>
+
+    <para>
+    The standard syntax for specifying national character type string constants
+    is to prefix the string constant with <literal>N</literal>.  This string
+    constant syntax is supported only when the database encoding is UTF-8.
+
+    The following trivial example shows how to pass a national character type
+    string constant with the <literal>N</literal> prefix:
+
+<programlisting>
+INSERT INTO test VALUES (N'ok');
+</programlisting>
+
+    </para>
+    <para>
+    Note that this prefix is optional; national character type string constants
+    can also be written as plain single-quoted strings, for example <literal>'ok'</literal>.
+    </para>
+   </sect3>
+
    <sect3 id="sql-syntax-dollar-quoting">
     <title>Dollar-quoted String Constants</title>
 
      <indexterm>
       <primary>dollar quoting</primary>
diff -U 5 -N -r -w ../orig.HEAD/src/backend/catalog/information_schema.sql ./src/backend/catalog/information_schema.sql
--- ../orig.HEAD/src/backend/catalog/information_schema.sql	2013-08-19 20:22:07.000000000 -0400
+++ ./src/backend/catalog/information_schema.sql	2013-08-19 20:54:00.000000000 -0400
@@ -77,11 +77,11 @@
     RETURNS NULL ON NULL INPUT
     AS
 $$SELECT
   CASE WHEN $2 = -1 /* default typmod */
        THEN null
-       WHEN $1 IN (1042, 1043) /* char, varchar */
+       WHEN $1 IN (1042, 1043, 5001, 6001) /* char, varchar, nchar, nvarchar */
        THEN $2 - 4
        WHEN $1 IN (1560, 1562) /* bit, varbit */
        THEN $2
        ELSE null
   END$$;
@@ -90,11 +90,11 @@
     LANGUAGE sql
     IMMUTABLE
     RETURNS NULL ON NULL INPUT
     AS
 $$SELECT
-  CASE WHEN $1 IN (25, 1042, 1043) /* text, char, varchar */
+  CASE WHEN $1 IN (25, 1042, 1043, 5001, 6001) /* text, char, varchar, nchar, nvarchar */
        THEN CASE WHEN $2 = -1 /* default typmod */
                  THEN CAST(2^30 AS integer)
                  ELSE information_schema._pg_char_max_length($1, $2) *
                       pg_catalog.pg_encoding_max_length((SELECT encoding FROM pg_catalog.pg_database WHERE datname = pg_catalog.current_database()))
             END
diff -U 5 -N -r -w ../orig.HEAD/src/backend/parser/gram.y ./src/backend/parser/gram.y
--- ../orig.HEAD/src/backend/parser/gram.y	2013-08-19 20:22:08.000000000 -0400
+++ ./src/backend/parser/gram.y	2013-08-19 21:42:23.000000000 -0400
@@ -439,13 +439,15 @@
 
 %type <typnam>	Typename SimpleTypename ConstTypename
 				GenericType Numeric opt_float
 				Character ConstCharacter
 				CharacterWithLength CharacterWithoutLength
+				NCharacterWithLength NCharacterWithoutLength
 				ConstDatetime ConstInterval
 				Bit ConstBit BitWithLength BitWithoutLength
 %type <str>		character
+%type <str>             ncharacter
 %type <str>		extract_arg
 %type <str>		opt_charset
 %type <boolean> opt_varying opt_timezone opt_no_inherit
 
 %type <ival>	Iconst SignedIconst
@@ -562,10 +564,11 @@
 	MAPPING MATCH MATERIALIZED MAXVALUE MINUTE_P MINVALUE MODE MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
 	NULLS_P NUMERIC
+	NVARCHAR
 
 	OBJECT_P OF OFF OFFSET OIDS ON ONLY OPERATOR OPTION OPTIONS OR
 	ORDER ORDINALITY OUT_P OUTER_P OVER OVERLAPS OVERLAY OWNED OWNER
 
 	PARSER PARTIAL PARTITION PASSING PASSWORD PLACING PLANS POSITION
@@ -10277,13 +10280,22 @@
 				}
 			| CharacterWithoutLength
 				{
 					$$ = $1;
 				}
+			| NCharacterWithLength
+				{
+					$$ = $1;
+				}
+			| NCharacterWithoutLength
+				{
+					$$ = $1;
+				}
 		;
 
-ConstCharacter:  CharacterWithLength
+ConstCharacter:
+			CharacterWithLength
 				{
 					$$ = $1;
 				}
 			| CharacterWithoutLength
 				{
@@ -10294,13 +10306,79 @@
 					 * was not specified.
 					 */
 					$$ = $1;
 					$$->typmods = NIL;
 				}
+			| NCharacterWithLength
+				{
+					$$ = $1;
+				}
+			| NCharacterWithoutLength
+				{
+					/* Length was not specified so allow to be unrestricted.
+					* This handles problems with fixed-length (bpchar) strings
+					* which in column definitions must default to a length
+					* of one, but should not be constrained if the length
+					* was not specified.
+					*/
+					$$ = $1;
+					$$->typmods = NIL;
+				}
 		;
 
-CharacterWithLength:  character '(' Iconst ')' opt_charset
+NCharacterWithLength:
+			NATIONAL ncharacter '(' Iconst ')' opt_charset
+				{
+					if (($6 != NULL) && (strcmp($6, "sql_text") != 0))
+					{
+						char *type;
+
+						type = palloc(strlen($2) + 1 + strlen($6) + 1);
+						strcpy(type, $2);
+						strcat(type, "_");
+						strcat(type, $6);
+						$2 = type;
+					}
+					$$ = SystemTypeName($2);
+					$$->typmods = list_make1(makeIntConst($4, @4));
+					$$->location = @1;
+				}
+		;
+
+NCharacterWithoutLength:
+			NATIONAL ncharacter opt_charset
+				{
+					if (($3 != NULL) && (strcmp($3, "sql_text") != 0))
+					{
+						char *type;
+
+						type = palloc(strlen($2) + 1 + strlen($3) + 1);
+						strcpy(type, $2);
+						strcat(type, "_");
+						strcat(type, $3);
+						$2 = type;
+					}
+					$$ = SystemTypeName($2);
+					/* nchar defaults to nchar(1), nvarchar to no limit */
+					if (strcmp($2, "nbpchar") == 0)
+						$$->typmods = list_make1(makeIntConst(1, -1));
+
+					$$->location = @1;
+				}
+		;
+
+ncharacter:
+			CHARACTER opt_varying
+				{ $$ = $2 ? "nvarchar": "nbpchar"; }
+			| CHAR_P opt_varying
+				{ $$ = $2 ? "nvarchar": "nbpchar"; }
+			| VARCHAR
+				{ $$ = "nvarchar"; }
+		;
+
+CharacterWithLength:
+			  character '(' Iconst ')' opt_charset
 				{
 					if (($5 != NULL) && (strcmp($5, "sql_text") != 0))
 					{
 						char *type;
 
@@ -10315,11 +10393,12 @@
 					$$->typmods = list_make1(makeIntConst($3, @3));
 					$$->location = @1;
 				}
 		;
 
-CharacterWithoutLength:	 character opt_charset
+CharacterWithoutLength:
+			 character opt_charset
 				{
 					if (($2 != NULL) && (strcmp($2, "sql_text") != 0))
 					{
 						char *type;
 
@@ -10330,30 +10409,29 @@
 						$1 = type;
 					}
 
 					$$ = SystemTypeName($1);
 
-					/* char defaults to char(1), varchar to no limit */
-					if (strcmp($1, "bpchar") == 0)
+					/* [n]char defaults to [national ]char(1), varchar to no limit */
+					if ((strcmp($1, "bpchar") == 0) || (strcmp($1, "nbpchar") == 0))
 						$$->typmods = list_make1(makeIntConst(1, -1));
 
 					$$->location = @1;
 				}
 		;
 
-character:	CHARACTER opt_varying
+character:
+			CHARACTER opt_varying
 										{ $$ = $2 ? "varchar": "bpchar"; }
 			| CHAR_P opt_varying
 										{ $$ = $2 ? "varchar": "bpchar"; }
 			| VARCHAR
 										{ $$ = "varchar"; }
-			| NATIONAL CHARACTER opt_varying
-										{ $$ = $3 ? "varchar": "bpchar"; }
-			| NATIONAL CHAR_P opt_varying
-										{ $$ = $3 ? "varchar": "bpchar"; }
 			| NCHAR opt_varying
-										{ $$ = $2 ? "varchar": "bpchar"; }
+				{ $$ = $2 ? "nvarchar": "nbpchar"; }
+			| NVARCHAR
+				{ $$ = "nvarchar"; }
 		;
 
 opt_varying:
 			VARYING									{ $$ = TRUE; }
 			| /*EMPTY*/								{ $$ = FALSE; }
@@ -12739,10 +12817,11 @@
 			| NATIONAL
 			| NCHAR
 			| NONE
 			| NULLIF
 			| NUMERIC
+			| NVARCHAR
 			| OUT_P
 			| OVERLAY
 			| POSITION
 			| PRECISION
 			| REAL
diff -U 5 -N -r -w ../orig.HEAD/src/backend/parser/scan.l ./src/backend/parser/scan.l
--- ../orig.HEAD/src/backend/parser/scan.l	2013-08-19 20:22:08.000000000 -0400
+++ ./src/backend/parser/scan.l	2013-08-19 22:27:02.000000000 -0400
@@ -480,11 +480,11 @@
 					const ScanKeyword *keyword;
 
 					SET_YYLLOC();
 					yyless(1);				/* eat only 'n' this time */
 
-					keyword = ScanKeywordLookup("nchar",
+					keyword = ScanKeywordLookup("nvarchar",
 												yyextra->keywords,
 												yyextra->num_keywords);
 					if (keyword != NULL)
 					{
 						yylval->keyword = keyword->name;
diff -U 5 -N -r -w ../orig.HEAD/src/backend/utils/adt/format_type.c ./src/backend/utils/adt/format_type.c
--- ../orig.HEAD/src/backend/utils/adt/format_type.c	2013-08-19 20:22:09.000000000 -0400
+++ ./src/backend/utils/adt/format_type.c	2013-08-19 20:54:00.000000000 -0400
@@ -216,10 +216,25 @@
 			}
 			else
 				buf = pstrdup("character");
 			break;
 
+		case NBPCHAROID:
+			if (with_typemod)
+				buf = printTypmod("national character", typemod, typeform->typmodout);
+			else if (typemod_given)
+			{
+				/*
+				 * bpchar with typmod -1 is not the same as CHARACTER, which
+				 * means CHARACTER(1) per SQL spec.  Report it as bpchar so
+				 * that parser will not assign a bogus typmod.
+				 */
+			}
+			else
+				buf = pstrdup("national character");
+			break;
+
 		case FLOAT4OID:
 			buf = pstrdup("real");
 			break;
 
 		case FLOAT8OID:
@@ -291,10 +306,17 @@
 			if (with_typemod)
 				buf = printTypmod("character varying", typemod, typeform->typmodout);
 			else
 				buf = pstrdup("character varying");
 			break;
+
+		case NVARCHAROID:
+			if (with_typemod)
+				buf = printTypmod("national character varying", typemod, typeform->typmodout);
+			else
+				buf = pstrdup("national character varying");
+			break;
 	}
 
 	if (buf == NULL)
 	{
 		/*
@@ -381,11 +403,13 @@
 		return -1;
 
 	switch (type_oid)
 	{
 		case BPCHAROID:
+		case NBPCHAROID:
 		case VARCHAROID:
+		case NVARCHAROID:
 			/* typemod includes varlena header */
 
 			/* typemod is in characters not bytes */
 			return (typemod - VARHDRSZ) *
 				pg_encoding_max_length(GetDatabaseEncoding())
diff -U 5 -N -r -w ../orig.HEAD/src/backend/utils/adt/Makefile ./src/backend/utils/adt/Makefile
--- ../orig.HEAD/src/backend/utils/adt/Makefile	2013-08-19 20:22:09.000000000 -0400
+++ ./src/backend/utils/adt/Makefile	2013-08-20 04:53:45.000000000 -0400
@@ -18,11 +18,11 @@
 OBJS = acl.o arrayfuncs.o array_selfuncs.o array_typanalyze.o \
 	array_userfuncs.o arrayutils.o bool.o \
 	cash.o char.o date.o datetime.o datum.o domains.o \
 	enum.o float.o format_type.o \
 	geo_ops.o geo_selfuncs.o int.o int8.o json.o jsonfuncs.o like.o \
-	lockfuncs.o misc.o nabstime.o name.o numeric.o numutils.o \
+	lockfuncs.o misc.o nabstime.o name.o numeric.o numutils.o nvarchar.o \
 	oid.o oracle_compat.o pseudotypes.o rangetypes.o rangetypes_gist.o \
 	rowtypes.o regexp.o regproc.o ruleutils.o selfuncs.o \
 	tid.o timestamp.o varbit.o varchar.o varlena.o version.o xid.o \
 	network.o mac.o inet_cidr_ntop.o inet_net_pton.o \
 	ri_triggers.o pg_lzcompress.o pg_locale.o formatting.o \
diff -U 5 -N -r -w ../orig.HEAD/src/backend/utils/adt/nvarchar.c ./src/backend/utils/adt/nvarchar.c
--- ../orig.HEAD/src/backend/utils/adt/nvarchar.c	1969-12-31 19:00:00.000000000 -0500
+++ ./src/backend/utils/adt/nvarchar.c	2013-08-20 04:54:06.000000000 -0400
@@ -0,0 +1,421 @@
+/*-------------------------------------------------------------------------
+ *
+ * nvarchar.c
+ *	  Functions for the built-in types nchar(n) and nvarchar(n).
+ *
+ * Portions Copyright (c) 1996-2012, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/utils/adt/nvarchar.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+
+#include "access/hash.h"
+#include "access/tuptoaster.h"
+#include "libpq/pqformat.h"
+#include "nodes/nodeFuncs.h"
+#include "utils/array.h"
+#include "utils/builtins.h"
+#include "mb/pg_wchar.h"
+#include "utils/formatting.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_collation.h"
+
+/*
+ *		====================
+ *		FORWARD DECLARATIONS
+ *		====================
+ */
+static int32 ntextcmp(text *left, text *right, Oid collid);
+
+/* Being really dumb about ntext collation initially: hardwire C.UTF-8 */
+static const char *ntext_collation = "C.UTF-8";
+
+/*
+ *      ===================
+ *      TYPE CAST FUNCTIONS
+ *      ===================
+ */
+
+/*
+ * convert text to ntext (the database encoding to UTF8)
+ */
+Datum
+text2nvarchar(PG_FUNCTION_ARGS)
+{
+    text       *in_val = PG_GETARG_TEXT_PP(0);
+    char       *in_string;
+    text       *out_val;
+
+    in_string = text_to_cstring(in_val);
+    out_val = cstring_to_text(pg_server_to_any(in_string, strlen(in_string), PG_UTF8));
+
+    PG_RETURN_TEXT_P(out_val);
+}
+
+/*
+ * convert ntext to text (UTF8 to the database encoding)
+ */
+Datum
+nvarchar2text(PG_FUNCTION_ARGS)
+{
+    text       *in_val = PG_GETARG_TEXT_PP(0);
+    char       *in_string;
+    text       *out_val;
+
+    in_string = text_to_cstring(in_val);
+    out_val = cstring_to_text(pg_any_to_server(in_string, strlen(in_string), PG_UTF8));
+
+    PG_RETURN_TEXT_P(out_val);
+}
+
+/*
+ * convert rtrimmed text to ntext (the database encoding to UTF8)
+ * used for conversion from char type to nvarchar type
+ */
+Datum
+rtrimtext2nvarchar(PG_FUNCTION_ARGS)
+{
+    text       *in_val = PG_GETARG_TEXT_PP(0);
+    char       *in_string;
+    text       *out_val;
+    text       *trimmed_val;
+
+	trimmed_val = dotrim(VARDATA_ANY(in_val), VARSIZE_ANY_EXHDR(in_val),
+				" ", 1,
+				false, true);
+    in_string = text_to_cstring(trimmed_val);
+    out_val = cstring_to_text(pg_server_to_any(in_string, strlen(in_string), PG_UTF8));
+
+    PG_RETURN_TEXT_P(out_val);
+}
+
+/*
+ * convert ntext to rtrimmed text (UTF8 to the database encoding)
+ * used for conversion from nchar types to varchar/text types
+ */
+Datum
+nvarchar2rtrimtext(PG_FUNCTION_ARGS)
+{
+    text       *in_val = PG_GETARG_TEXT_PP(0);
+    char       *in_string;
+    text       *out_val;
+	text       *trimmed_val;
+
+    in_string = text_to_cstring(in_val);
+    out_val = cstring_to_text(pg_any_to_server(in_string, strlen(in_string), PG_UTF8));
+    trimmed_val = dotrim(VARDATA_ANY(out_val), VARSIZE_ANY_EXHDR(out_val),
+                " ", 1,
+                false, true);
+    PG_RETURN_TEXT_P(trimmed_val);
+}
+
+
+
+/*
+ *		=================
+ *		UTILITY FUNCTIONS
+ *		=================
+ */
+
+/*
+ *      ntextin          - converts "..." to internal representation
+ *      (the input cstring is in the database encoding; encode it to UTF8)
+ */
+
+Datum
+ntextin(PG_FUNCTION_ARGS)
+{
+    char       *in_val = PG_GETARG_CSTRING(0);
+
+    PG_RETURN_TEXT_P(cstring_to_text(pg_server_to_any(in_val, strlen(in_val), PG_UTF8)));
+}
+
+/*
+ *      ntextout         - converts internal representation to "..."
+ *      (the stored value is UTF8; decode it to the database encoding)
+ */
+
+Datum
+ntextout(PG_FUNCTION_ARGS)
+{
+    text       *in_val = PG_GETARG_TEXT_PP(0);
+    char       *in_string;
+
+    in_string = text_to_cstring(in_val);
+    PG_RETURN_CSTRING(pg_any_to_server(in_string, strlen(in_string), PG_UTF8));
+}
+
+
+/*
+ * Convert a C string to NVARCHAR internal representation.  atttypmod
+ * is the declared length of the type plus VARHDRSZ.
+ */
+Datum
+nvarcharin(PG_FUNCTION_ARGS)
+{
+    char       *s = PG_GETARG_CSTRING(0);
+#ifdef NOT_USED
+    Oid         typelem = PG_GETARG_OID(1);
+#endif
+    int32       atttypmod = PG_GETARG_INT32(2);
+
+    /* XXX WIP: the typmod length limit is not enforced yet */
+    (void) atttypmod;
+
+    PG_RETURN_VARCHAR_P((VarChar *) cstring_to_text(pg_server_to_any(s, strlen(s), PG_UTF8)));
+}
+
+
+/*
+ * Convert an NVARCHAR value to a C string.
+ *
+ * Uses the text-to-C-string conversion, which is only appropriate if
+ * VarChar and text are equivalent types; the stored UTF8 value is then
+ * decoded to the database encoding.
+ */
+Datum
+nvarcharout(PG_FUNCTION_ARGS)
+{
+    Datum       txt = PG_GETARG_DATUM(0);
+    char       *in_string;
+
+    in_string = TextDatumGetCString(txt);
+    PG_RETURN_CSTRING(pg_any_to_server(in_string, strlen(in_string), PG_UTF8));
+}
+
+
+Datum
+ntextlower(PG_FUNCTION_ARGS)
+{
+    text       *in_string = PG_GETARG_TEXT_PP(0);
+    char       *out_string;
+    text       *result;
+
+    out_string = str_tolower(VARDATA_ANY(in_string),
+                             VARSIZE_ANY_EXHDR(in_string),
+                             CollationGetCollid(ntext_collation));
+    result = cstring_to_text(out_string);
+    pfree(out_string);
+
+    PG_RETURN_TEXT_P(result);
+}
+
+Datum
+ntextupper(PG_FUNCTION_ARGS)
+{
+    text       *in_string = PG_GETARG_TEXT_PP(0);
+    char       *out_string;
+    text       *result;
+
+    out_string = str_toupper(VARDATA_ANY(in_string),
+                             VARSIZE_ANY_EXHDR(in_string),
+                             CollationGetCollid(ntext_collation));
+    result = cstring_to_text(out_string);
+    pfree(out_string);
+
+    PG_RETURN_TEXT_P(result);
+}
+
+
+/*
+ * actual octet length of an ntext value (the stored representation is UTF8)
+ */
+Datum
+ntextoctetlen(PG_FUNCTION_ARGS)
+{
+	Datum		arg = PG_GETARG_DATUM(0);
+
+	/* We need not detoast the input at all */
+	PG_RETURN_INT32(toast_raw_datum_size(arg) - VARHDRSZ);
+}
+
+/*
+ * ntextcmp()
+ * Internal comparison function for ntext strings.
+ * Returns int32 negative, zero, or positive.
+ */
+static int32
+ntextcmp(text *left, text *right, Oid collid)
+{
+	return varstr_cmp(VARDATA_ANY(left), VARSIZE_ANY_EXHDR(left),
+						VARDATA_ANY(right), VARSIZE_ANY_EXHDR(right),
+						collid);
+}
+
+/*
+ *		==================
+ *		INDEXING FUNCTIONS
+ *		==================
+ */
+Datum
+ntext_cmp(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	int32		result;
+
+	result = ntextcmp(left, right, CollationGetCollid(ntext_collation));
+
+	PG_FREE_IF_COPY(left, 0);
+	PG_FREE_IF_COPY(right, 1);
+
+	PG_RETURN_INT32(result);
+}
+
+PG_FUNCTION_INFO_V1(ntext_hash);
+
+Datum
+ntext_hash(PG_FUNCTION_ARGS)
+{
+	text	    *txt = PG_GETARG_TEXT_PP(0);
+	Datum		result;
+
+	result = hash_any((unsigned char *) VARDATA_ANY(txt), VARSIZE_ANY_EXHDR(txt));
+
+	/* Avoid leaking memory for toasted inputs */
+	PG_FREE_IF_COPY(txt, 0);
+
+	PG_RETURN_DATUM(result);
+}
+
+/*
+ *		==================
+ *		OPERATOR FUNCTIONS
+ *		==================
+ */
+Datum
+ntext_eq(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	Size		len1 = VARSIZE_ANY_EXHDR(left);
+	Size		len2 = VARSIZE_ANY_EXHDR(right);
+	bool		result;
+
+	/*
+	 * Since we only care about equality or not-equality, we can avoid all the
+	 * expense of strcoll() here, and just do bitwise comparison.  This also
+	 * avoids the palloc'd copies that text_to_cstring() would make.
+	 */
+	result = (len1 == len2 &&
+			  memcmp(VARDATA_ANY(left), VARDATA_ANY(right), len1) == 0);
+
+	PG_FREE_IF_COPY(left, 0);
+	PG_FREE_IF_COPY(right, 1);
+
+	PG_RETURN_BOOL(result);
+}
+
+Datum
+ntext_ne(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	Size		len1 = VARSIZE_ANY_EXHDR(left);
+	Size		len2 = VARSIZE_ANY_EXHDR(right);
+	bool		result;
+
+	/*
+	 * Since we only care about equality or not-equality, we can avoid all the
+	 * expense of strcoll() here, and just do bitwise comparison.  This also
+	 * avoids the palloc'd copies that text_to_cstring() would make.
+	 */
+	result = (len1 != len2 ||
+			  memcmp(VARDATA_ANY(left), VARDATA_ANY(right), len1) != 0);
+
+	PG_FREE_IF_COPY(left, 0);
+	PG_FREE_IF_COPY(right, 1);
+
+	PG_RETURN_BOOL(result);
+}
+
+Datum
+ntext_lt(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	bool		result;
+
+	result = ntextcmp(left, right, CollationGetCollid(ntext_collation)) < 0;
+
+	PG_FREE_IF_COPY(left, 0);
+	PG_FREE_IF_COPY(right, 1);
+
+	PG_RETURN_BOOL(result);
+}
+
+Datum
+ntext_le(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	bool		result;
+
+	result = ntextcmp(left, right, CollationGetCollid(ntext_collation)) <= 0;
+
+	PG_FREE_IF_COPY(left, 0);
+	PG_FREE_IF_COPY(right, 1);
+
+	PG_RETURN_BOOL(result);
+}
+
+Datum
+ntext_gt(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	bool		result;
+
+	result = ntextcmp(left, right, CollationGetCollid(ntext_collation)) > 0;
+
+	PG_FREE_IF_COPY(left, 0);
+	PG_FREE_IF_COPY(right, 1);
+
+	PG_RETURN_BOOL(result);
+}
+
+Datum
+ntext_ge(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	bool		result;
+
+	result = ntextcmp(left, right, CollationGetCollid(ntext_collation)) >= 0;
+
+	PG_FREE_IF_COPY(left, 0);
+	PG_FREE_IF_COPY(right, 1);
+
+	PG_RETURN_BOOL(result);
+}
+
+/*
+ *		===================
+ *		AGGREGATE FUNCTIONS
+ *		===================
+ */
+Datum
+ntext_smaller(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	text	   *result;
+
+	result = ntextcmp(left, right, CollationGetCollid(ntext_collation)) < 0 ? left : right;
+	PG_RETURN_TEXT_P(result);
+}
+
+Datum
+ntext_larger(PG_FUNCTION_ARGS)
+{
+	text	   *left = PG_GETARG_TEXT_PP(0);
+	text	   *right = PG_GETARG_TEXT_PP(1);
+	text	   *result;
+
+	result = ntextcmp(left, right, CollationGetCollid(ntext_collation)) > 0 ? left : right;
+	PG_RETURN_TEXT_P(result);
+}
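+
+/*
+ * Illustrative usage sketch (hypothetical; assumes the DDL and catalog
+ * entries from this patch are installed in a single-byte-encoded database):
+ *
+ *   CREATE TABLE t (v varchar(10), nv national character varying(10));
+ *   INSERT INTO t VALUES ('abc', 'abc');
+ *   -- nv is stored as UTF8 regardless of the database encoding; comparing
+ *   -- it with v goes through an encoding-converting cast, after which the
+ *   -- ntext comparison functions above (e.g. ntext_eq) are invoked:
+ *   SELECT * FROM t WHERE nv = v::nvarchar;
+ */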
+
+
diff -U 5 -N -r -w ../orig.HEAD/src/backend/utils/adt/oracle_compat.c ./src/backend/utils/adt/oracle_compat.c
--- ../orig.HEAD/src/backend/utils/adt/oracle_compat.c	2013-08-19 20:22:09.000000000 -0400
+++ ./src/backend/utils/adt/oracle_compat.c	2013-08-19 20:54:00.000000000 -0400
@@ -18,15 +18,10 @@
 #include "utils/builtins.h"
 #include "utils/formatting.h"
 #include "mb/pg_wchar.h"
 
 
-static text *dotrim(const char *string, int stringlen,
-	   const char *set, int setlen,
-	   bool doltrim, bool dortrim);
-
-
 /********************************************************************
  *
  * lower
  *
  * Syntax:
@@ -365,11 +360,11 @@
 }
 
 /*
  * Common implementation for btrim, ltrim, rtrim
  */
-static text *
+text *
 dotrim(const char *string, int stringlen,
 	   const char *set, int setlen,
 	   bool doltrim, bool dortrim)
 {
 	int			i;
diff -U 5 -N -r -w ../orig.HEAD/src/backend/utils/adt/selfuncs.c ./src/backend/utils/adt/selfuncs.c
--- ../orig.HEAD/src/backend/utils/adt/selfuncs.c	2013-08-19 20:22:09.000000000 -0400
+++ ./src/backend/utils/adt/selfuncs.c	2013-08-19 20:54:00.000000000 -0400
@@ -3624,11 +3624,13 @@
 			/*
 			 * Built-in string types
 			 */
 		case CHAROID:
 		case BPCHAROID:
+		case NBPCHAROID:
 		case VARCHAROID:
+		case NVARCHAROID:
 		case TEXTOID:
 		case NAMEOID:
 			{
 				char	   *valstr = convert_string_datum(value, valuetypid);
 				char	   *lostr = convert_string_datum(lobound, boundstypid);
@@ -3886,11 +3888,13 @@
 			val = (char *) palloc(2);
 			val[0] = DatumGetChar(value);
 			val[1] = '\0';
 			break;
 		case BPCHAROID:
+		case NBPCHAROID:
 		case VARCHAROID:
+		case NVARCHAROID:
 		case TEXTOID:
 			val = TextDatumGetCString(value);
 			break;
 		case NAMEOID:
 			{
@@ -5873,11 +5877,13 @@
 	 */
 	switch (datatype)
 	{
 		case TEXTOID:
 		case VARCHAROID:
+		case NVARCHAROID:
 		case BPCHAROID:
+		case NBPCHAROID:
 			collation = DEFAULT_COLLATION_OID;
 			constlen = -1;
 			break;
 
 		case NAMEOID:
diff -U 5 -N -r -w ../orig.HEAD/src/include/catalog/pg_amop.h ./src/include/catalog/pg_amop.h
--- ../orig.HEAD/src/include/catalog/pg_amop.h	2013-08-19 20:22:12.000000000 -0400
+++ ./src/include/catalog/pg_amop.h	2013-08-19 20:54:00.000000000 -0400
@@ -251,10 +251,17 @@
 DATA(insert (	426   1042 1042 2 s 1059	403 0 ));
 DATA(insert (	426   1042 1042 3 s 1054	403 0 ));
 DATA(insert (	426   1042 1042 4 s 1061	403 0 ));
 DATA(insert (	426   1042 1042 5 s 1060	403 0 ));
 
+DATA(insert (   426   5001 5001 1 s 5058        403 0 ));
+DATA(insert (   426   5001 5001 2 s 5059        403 0 ));
+DATA(insert (   426   5001 5001 3 s 5054        403 0 ));
+DATA(insert (   426   5001 5001 4 s 5061        403 0 ));
+DATA(insert (   426   5001 5001 5 s 5060        403 0 ));
+
+
 /*
  *	btree bytea_ops
  */
 
 DATA(insert (	428   17 17 1 s 1957	403 0 ));
@@ -506,10 +513,14 @@
  *	hash index _ops
  */
 
 /* bpchar_ops */
 DATA(insert (	427   1042 1042 1 s 1054	405 0 ));
+
+/* nbpchar_ops */
+DATA(insert (   427   5001 5001 1 s 5054        405 0 ));
+
 /* char_ops */
 DATA(insert (	431   18 18 1 s 92	405 0 ));
 /* date_ops */
 DATA(insert (	435   1082 1082 1 s 1093	405 0 ));
 /* float_ops */
diff -U 5 -N -r -w ../orig.HEAD/src/include/catalog/pg_amproc.h ./src/include/catalog/pg_amproc.h
--- ../orig.HEAD/src/include/catalog/pg_amproc.h	2013-08-19 20:22:12.000000000 -0400
+++ ./src/include/catalog/pg_amproc.h	2013-08-19 20:54:00.000000000 -0400
@@ -78,10 +78,11 @@
 DATA(insert (	397   2277 2277 1 382 ));
 DATA(insert (	421   702 702 1 357 ));
 DATA(insert (	423   1560 1560 1 1596 ));
 DATA(insert (	424   16 16 1 1693 ));
 DATA(insert (	426   1042 1042 1 1078 ));
+DATA(insert (   426   5001 5001 1 1078 ));
 DATA(insert (	428   17 17 1 1954 ));
 DATA(insert (	429   18 18 1 358 ));
 DATA(insert (	434   1082 1082 1 1092 ));
 DATA(insert (	434   1082 1082 2 3136 ));
 DATA(insert (	434   1082 1114 1 2344 ));
@@ -126,20 +127,22 @@
 DATA(insert (	1996   1083 1083 1 1107 ));
 DATA(insert (	2000   1266 1266 1 1358 ));
 DATA(insert (	2002   1562 1562 1 1672 ));
 DATA(insert (	2095   25 25 1 2166 ));
 DATA(insert (	2097   1042 1042 1 2180 ));
+DATA(insert (   2097   5001 5001 1 2180 ));
 DATA(insert (	2099   790 790 1  377 ));
 DATA(insert (	2233   703 703 1  380 ));
 DATA(insert (	2234   704 704 1  381 ));
 DATA(insert (	2789   27 27 1 2794 ));
 DATA(insert (	2968   2950 2950 1 2960 ));
 DATA(insert (	3522   3500 3500 1 3514 ));
 
 
 /* hash */
 DATA(insert (	427   1042 1042 1 1080 ));
+DATA(insert (   427   5001 5001 1 1080 ));
 DATA(insert (	431   18 18 1 454 ));
 DATA(insert (	435   1082 1082 1 450 ));
 DATA(insert (	627   2277 2277 1 626 ));
 DATA(insert (	1971   700 700 1 451 ));
 DATA(insert (	1971   701 701 1 452 ));
@@ -165,10 +168,11 @@
 DATA(insert (	2226   29 29 1 450 ));
 DATA(insert (	2227   702 702 1 450 ));
 DATA(insert (	2228   703 703 1 450 ));
 DATA(insert (	2229   25 25 1 400 ));
 DATA(insert (	2231   1042 1042 1 1080 ));
+DATA(insert (   2231   5001 5001 1 1080 ));
 DATA(insert (	2235   1033 1033 1 329 ));
 DATA(insert (	2969   2950 2950 1 2963 ));
 DATA(insert (	3523   3500 3500 1 3515 ));
 
 
diff -U 5 -N -r -w ../orig.HEAD/src/include/catalog/pg_cast.h ./src/include/catalog/pg_cast.h
--- ../orig.HEAD/src/include/catalog/pg_cast.h	2013-08-19 20:22:12.000000000 -0400
+++ ./src/include/catalog/pg_cast.h	2013-08-19 20:54:00.000000000 -0400
@@ -212,16 +212,77 @@
 DATA(insert ( 1043 2205 1079 i f ));
 
 /*
  * String category
  */
-DATA(insert (	25 1042    0 i b ));
-DATA(insert (	25 1043    0 i b ));
+/*
+ * Cast function OIDs used below:
+ *   501  UTF8 to database encoding
+ *   502  database encoding to UTF8
+ *   401  rtrim
+ *   503  conversion from UTF8 + rtrim
+ *   504  rtrim + conversion to UTF8
+ * National -> common casts are always 'a' (assignment casts), since they can
+ * fail on an encoding-conversion error; common -> national casts are always
+ * 'i' (implicit), because they always succeed.
+ * For now, assume rtrim does not depend on the actual encoding (which is
+ * definitely a wrong assumption; to be fixed).
+*/
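+
+/*
+ * Example of the intended cast behavior (hypothetical; assumes a KOI8-R
+ * database):
+ *
+ *   SELECT 'abc'::varchar::nvarchar;  -- implicit cast (502): KOI8-R to
+ *                                     -- UTF8, always succeeds
+ *   SELECT nv::varchar FROM t;        -- assignment cast (501): UTF8 to
+ *                                     -- KOI8-R, fails if nv contains
+ *                                     -- characters outside KOI8-R
+ */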
+
+/*
+ * char -> text
+ * char -> varchar
+ * char -> nchar
+ * char -> nvarchar
+*/
 DATA(insert ( 1042	 25  401 i f ));
 DATA(insert ( 1042 1043  401 i f ));
+DATA(insert ( 1042 5001  502 i f ));
+DATA(insert ( 1042 6001  504 i f ));
+
+/*
+ * varchar -> text
+ * varchar -> char
+ * varchar -> nvarchar
+ * varchar -> nchar
+*/
 DATA(insert ( 1043	 25    0 i b ));
 DATA(insert ( 1043 1042    0 i b ));
+DATA(insert ( 1043 6001 502 i f ));
+DATA(insert ( 1043 5001 502 i f ));
+
+/*
+ * text -> varchar
+ * text -> char
+ * text -> nchar
+ * text -> nvarchar
+*/
+DATA(insert (   25 1043   0 i b ));
+DATA(insert (   25 1042   0 i b ));
+DATA(insert (   25 5001 502 i f ));
+DATA(insert (   25 6001 502 i f ));
+
+/*
+ * nvarchar -> text
+ * nvarchar -> varchar
+ * nvarchar -> char
+ * nvarchar -> nchar
+*/
+DATA(insert ( 6001   25  501 a f ));
+DATA(insert ( 6001 1043  501 a f ));
+DATA(insert ( 6001 1042  501 a f ));
+DATA(insert ( 6001 5001  0   i b ));
+
+/*
+ * nchar -> text
+ * nchar -> varchar
+ * nchar -> char
+ * nchar -> nvarchar
+*/
+DATA(insert ( 5001   25  503 a f ));
+DATA(insert ( 5001 1043  503 a f ));
+DATA(insert ( 5001 1042  501 a f ));
+DATA(insert ( 5001 6001  401 i f ));
+
+
 DATA(insert (	18	 25  946 i f ));
 DATA(insert (	18 1042  860 a f ));
 DATA(insert (	18 1043  946 a f ));
 DATA(insert (	19	 25  406 i f ));
 DATA(insert (	19 1042  408 a f ));
@@ -347,11 +408,13 @@
 
 /*
  * Length-coercion functions
  */
 DATA(insert ( 1042 1042  668 i f ));
+DATA(insert ( 5001 5001 5668 i f ));
 DATA(insert ( 1043 1043  669 i f ));
+DATA(insert ( 6001 6001 5669 i f ));
 DATA(insert ( 1083 1083 1968 i f ));
 DATA(insert ( 1114 1114 1961 i f ));
 DATA(insert ( 1184 1184 1967 i f ));
 DATA(insert ( 1186 1186 1200 i f ));
 DATA(insert ( 1266 1266 1969 i f ));
diff -U 5 -N -r -w ../orig.HEAD/src/include/catalog/pg_opclass.h ./src/include/catalog/pg_opclass.h
--- ../orig.HEAD/src/include/catalog/pg_opclass.h	2013-08-19 20:22:12.000000000 -0400
+++ ./src/include/catalog/pg_opclass.h	2013-08-19 20:54:00.000000000 -0400
@@ -95,10 +95,12 @@
 DATA(insert (	405		array_ops			PGNSP PGUID  627 2277 t 0 ));
 DATA(insert (	403		bit_ops				PGNSP PGUID  423 1560 t 0 ));
 DATA(insert (	403		bool_ops			PGNSP PGUID  424   16 t 0 ));
 DATA(insert (	403		bpchar_ops			PGNSP PGUID  426 1042 t 0 ));
 DATA(insert (	405		bpchar_ops			PGNSP PGUID  427 1042 t 0 ));
+DATA(insert (	403		nbpchar_ops			PGNSP PGUID  426 5001 t 0 ));
+DATA(insert (	405		nbpchar_ops			PGNSP PGUID  427 5001 t 0 ));
 DATA(insert (	403		bytea_ops			PGNSP PGUID  428   17 t 0 ));
 DATA(insert (	403		char_ops			PGNSP PGUID  429   18 t 0 ));
 DATA(insert (	405		char_ops			PGNSP PGUID  431   18 t 0 ));
 DATA(insert (	403		cidr_ops			PGNSP PGUID 1974  869 f 0 ));
 DATA(insert (	405		cidr_ops			PGNSP PGUID 1975  869 f 0 ));
@@ -154,10 +156,12 @@
 DATA(insert (	403		timetz_ops			PGNSP PGUID 2000 1266 t 0 ));
 DATA(insert (	405		timetz_ops			PGNSP PGUID 2001 1266 t 0 ));
 DATA(insert (	403		varbit_ops			PGNSP PGUID 2002 1562 t 0 ));
 DATA(insert (	403		varchar_ops			PGNSP PGUID 1994   25 f 0 ));
 DATA(insert (	405		varchar_ops			PGNSP PGUID 1995   25 f 0 ));
+DATA(insert (	403		nvarchar_ops		PGNSP PGUID 1994   25 f 0 ));
+DATA(insert (	405		nvarchar_ops		PGNSP PGUID 1995   25 f 0 ));
 DATA(insert OID = 3128 ( 403	timestamp_ops	PGNSP PGUID  434 1114 t 0 ));
 #define TIMESTAMP_BTREE_OPS_OID 3128
 DATA(insert (	405		timestamp_ops		PGNSP PGUID 2040 1114 t 0 ));
 DATA(insert (	403		text_pattern_ops	PGNSP PGUID 2095   25 f 0 ));
 DATA(insert (	403		varchar_pattern_ops PGNSP PGUID 2095   25 f 0 ));
@@ -185,10 +189,11 @@
 DATA(insert (	2742	_text_ops			PGNSP PGUID 2745  1009 t 25 ));
 DATA(insert (	2742	_abstime_ops		PGNSP PGUID 2745  1023 t 702 ));
 DATA(insert (	2742	_bit_ops			PGNSP PGUID 2745  1561 t 1560 ));
 DATA(insert (	2742	_bool_ops			PGNSP PGUID 2745  1000 t 16 ));
 DATA(insert (	2742	_bpchar_ops			PGNSP PGUID 2745  1014 t 1042 ));
+DATA(insert (	2742	_nbpchar_ops					PGNSP PGUID 2745  5014 t 5001 ));
 DATA(insert (	2742	_bytea_ops			PGNSP PGUID 2745  1001 t 17 ));
 DATA(insert (	2742	_char_ops			PGNSP PGUID 2745  1002 t 18 ));
 DATA(insert (	2742	_cidr_ops			PGNSP PGUID 2745  651 t 650 ));
 DATA(insert (	2742	_date_ops			PGNSP PGUID 2745  1182 t 1082 ));
 DATA(insert (	2742	_float4_ops			PGNSP PGUID 2745  1021 t 700 ));
@@ -205,10 +210,11 @@
 DATA(insert (	2742	_time_ops			PGNSP PGUID 2745  1183 t 1083 ));
 DATA(insert (	2742	_timestamptz_ops	PGNSP PGUID 2745  1185 t 1184 ));
 DATA(insert (	2742	_timetz_ops			PGNSP PGUID 2745  1270 t 1266 ));
 DATA(insert (	2742	_varbit_ops			PGNSP PGUID 2745  1563 t 1562 ));
 DATA(insert (	2742	_varchar_ops		PGNSP PGUID 2745  1015 t 1043 ));
+DATA(insert (	2742	_nvarchar_ops			PGNSP PGUID 2745  5015 t 6001 ));
 DATA(insert (	2742	_timestamp_ops		PGNSP PGUID 2745  1115 t 1114 ));
 DATA(insert (	2742	_money_ops			PGNSP PGUID 2745  791 t 790 ));
 DATA(insert (	2742	_reltime_ops		PGNSP PGUID 2745  1024 t 703 ));
 DATA(insert (	2742	_tinterval_ops		PGNSP PGUID 2745  1025 t 704 ));
 DATA(insert (	403		uuid_ops			PGNSP PGUID 2968  2950 t 0 ));
diff -U 5 -N -r -w ../orig.HEAD/src/include/catalog/pg_operator.h ./src/include/catalog/pg_operator.h
--- ../orig.HEAD/src/include/catalog/pg_operator.h	2013-08-19 20:22:12.000000000 -0400
+++ ./src/include/catalog/pg_operator.h	2013-08-19 20:54:00.000000000 -0400
@@ -741,26 +741,50 @@
 DATA(insert OID =  971 (  "@@"	   PGNSP PGUID l f f	0  604	600    0  0 poly_center - - ));
 DESCR("center of");
 
 DATA(insert OID = 1054 ( "="	   PGNSP PGUID b t t 1042 1042	 16 1054 1057 bpchareq eqsel eqjoinsel ));
 DESCR("equal");
+DATA(insert OID = 5054 ( "="       PGNSP PGUID b t t 5001 5001   16 5054 5057 nbpchareq eqsel eqjoinsel ));
+DESCR("equal");
 
 DATA(insert OID = 1055 ( "~"	   PGNSP PGUID b f f 1042 25	 16    0 1056 bpcharregexeq regexeqsel regexeqjoinsel ));
 DESCR("matches regular expression, case-sensitive");
+DATA(insert OID = 5055 ( "~"       PGNSP PGUID b f f 5001 25     16    0 5056 nbpcharregexeq regexeqsel regexeqjoinsel ));
+DESCR("matches regular expression, case-sensitive");
+
 #define OID_BPCHAR_REGEXEQ_OP		1055
 DATA(insert OID = 1056 ( "!~"	   PGNSP PGUID b f f 1042 25	 16    0 1055 bpcharregexne regexnesel regexnejoinsel ));
 DESCR("does not match regular expression, case-sensitive");
+DATA(insert OID = 5056 ( "!~"      PGNSP PGUID b f f 5001 25     16    0 5055 nbpcharregexne regexnesel regexnejoinsel ));
+DESCR("does not match regular expression, case-sensitive");
+
 DATA(insert OID = 1057 ( "<>"	   PGNSP PGUID b f f 1042 1042	 16 1057 1054 bpcharne neqsel neqjoinsel ));
 DESCR("not equal");
+DATA(insert OID = 5057 ( "<>"      PGNSP PGUID b f f 5001 5001   16 5057 5054 nbpcharne neqsel neqjoinsel ));
+DESCR("not equal");
+
 DATA(insert OID = 1058 ( "<"	   PGNSP PGUID b f f 1042 1042	 16 1060 1061 bpcharlt scalarltsel scalarltjoinsel ));
 DESCR("less than");
+DATA(insert OID = 5058 ( "<"       PGNSP PGUID b f f 5001 5001   16 5060 5061 nbpcharlt scalarltsel scalarltjoinsel ));
+DESCR("less than");
+
 DATA(insert OID = 1059 ( "<="	   PGNSP PGUID b f f 1042 1042	 16 1061 1060 bpcharle scalarltsel scalarltjoinsel ));
 DESCR("less than or equal");
+DATA(insert OID = 5059 ( "<="      PGNSP PGUID b f f 5001 5001   16 5061 5060 nbpcharle scalarltsel scalarltjoinsel ));
+DESCR("less than or equal");
+
 DATA(insert OID = 1060 ( ">"	   PGNSP PGUID b f f 1042 1042	 16 1058 1059 bpchargt scalargtsel scalargtjoinsel ));
 DESCR("greater than");
+DATA(insert OID = 5060 ( ">"       PGNSP PGUID b f f 5001 5001   16 5058 5059 nbpchargt scalargtsel scalargtjoinsel ));
+DESCR("greater than");
+
+
 DATA(insert OID = 1061 ( ">="	   PGNSP PGUID b f f 1042 1042	 16 1059 1058 bpcharge scalargtsel scalargtjoinsel ));
 DESCR("greater than or equal");
+DATA(insert OID = 5061 ( ">="      PGNSP PGUID b f f 5001 5001   16 5059 5058 nbpcharge scalargtsel scalargtjoinsel ));
+DESCR("greater than or equal");
+
 
 /* generic array comparison operators */
 DATA(insert OID = 1070 (  "="	   PGNSP PGUID b t t 2277 2277 16 1070 1071 array_eq eqsel eqjoinsel ));
 DESCR("equal");
 #define ARRAY_EQ_OP 1070
@@ -885,13 +909,18 @@
 #define OID_TEXT_LIKE_OP		1209
 DATA(insert OID = 1210 (  "!~~"   PGNSP PGUID b f f  25 25	16 0 1209 textnlike nlikesel nlikejoinsel ));
 DESCR("does not match LIKE expression");
 DATA(insert OID = 1211 (  "~~"	  PGNSP PGUID b f f  1042 25	16 0 1212 bpcharlike likesel likejoinsel ));
 DESCR("matches LIKE expression");
+DATA(insert OID = 5211 (  "~~"    PGNSP PGUID b f f  5001 25    16 0 5212 nbpcharlike likesel likejoinsel ));
+DESCR("matches LIKE expression");
+
 #define OID_BPCHAR_LIKE_OP		1211
 DATA(insert OID = 1212 (  "!~~"   PGNSP PGUID b f f  1042 25	16 0 1211 bpcharnlike nlikesel nlikejoinsel ));
 DESCR("does not match LIKE expression");
+DATA(insert OID = 5212 (  "!~~"   PGNSP PGUID b f f  5001 25    16 0 5211 nbpcharnlike nlikesel nlikejoinsel ));
+DESCR("does not match LIKE expression");
 
 /* case-insensitive regex hacks */
 DATA(insert OID = 1226 (  "~*"		 PGNSP PGUID b f f	19	25	16 0 1227 nameicregexeq icregexeqsel icregexeqjoinsel ));
 DESCR("matches regular expression, case-insensitive");
 #define OID_NAME_ICREGEXEQ_OP		1226
@@ -902,13 +931,18 @@
 #define OID_TEXT_ICREGEXEQ_OP		1228
 DATA(insert OID = 1229 (  "!~*"		 PGNSP PGUID b f f	25	25	16 0 1228 texticregexne icregexnesel icregexnejoinsel ));
 DESCR("does not match regular expression, case-insensitive");
 DATA(insert OID = 1234 (  "~*"		PGNSP PGUID b f f  1042  25  16 0 1235 bpcharicregexeq icregexeqsel icregexeqjoinsel ));
 DESCR("matches regular expression, case-insensitive");
+DATA(insert OID = 5234 (  "~*"          PGNSP PGUID b f f  5001  25  16 0 5235 nbpcharicregexeq icregexeqsel icregexeqjoinsel ));
+DESCR("matches regular expression, case-insensitive");
+
 #define OID_BPCHAR_ICREGEXEQ_OP		1234
 DATA(insert OID = 1235 ( "!~*"		PGNSP PGUID b f f  1042  25  16 0 1234 bpcharicregexne icregexnesel icregexnejoinsel ));
 DESCR("does not match regular expression, case-insensitive");
+DATA(insert OID = 5235 ( "!~*"          PGNSP PGUID b f f  5001  25  16 0 5234 nbpcharicregexne icregexnesel icregexnejoinsel ));
+DESCR("does not match regular expression, case-insensitive");
 
 /* timestamptz operators */
 DATA(insert OID = 1320 (  "="	   PGNSP PGUID b t t 1184 1184	 16 1320 1321 timestamptz_eq eqsel eqjoinsel ));
 DESCR("equal");
 DATA(insert OID = 1321 (  "<>"	   PGNSP PGUID b f f 1184 1184	 16 1321 1320 timestamptz_ne neqsel neqjoinsel ));
@@ -1176,13 +1210,18 @@
 #define OID_TEXT_ICLIKE_OP		1627
 DATA(insert OID = 1628 (  "!~~*"  PGNSP PGUID b f f  25 25	16 0 1627 texticnlike icnlikesel icnlikejoinsel ));
 DESCR("does not match LIKE expression, case-insensitive");
 DATA(insert OID = 1629 (  "~~*"   PGNSP PGUID b f f  1042 25	16 0 1630 bpchariclike iclikesel iclikejoinsel ));
 DESCR("matches LIKE expression, case-insensitive");
+DATA(insert OID = 5629 (  "~~*"   PGNSP PGUID b f f  5001 25    16 0 5630 nbpchariclike iclikesel iclikejoinsel ));
+DESCR("matches LIKE expression, case-insensitive");
 #define OID_BPCHAR_ICLIKE_OP	1629
 DATA(insert OID = 1630 (  "!~~*"  PGNSP PGUID b f f  1042 25	16 0 1629 bpcharicnlike icnlikesel icnlikejoinsel ));
 DESCR("does not match LIKE expression, case-insensitive");
+DATA(insert OID = 5630 (  "!~~*"  PGNSP PGUID b f f  5001 25    16 0 5629 nbpcharicnlike icnlikesel icnlikejoinsel ));
+DESCR("does not match LIKE expression, case-insensitive");
+
 
 /* NUMERIC type - OID's 1700-1799 */
 DATA(insert OID = 1751 (  "-"	   PGNSP PGUID l f f	0 1700 1700    0	0 numeric_uminus - - ));
 DESCR("negate");
 DATA(insert OID = 1752 (  "="	   PGNSP PGUID b t t 1700 1700	 16 1752 1753 numeric_eq eqsel eqjoinsel ));
@@ -1395,16 +1434,27 @@
 DATA(insert OID = 2318 ( "~>~"	PGNSP PGUID b f f 25 25 16 2314 2315 text_pattern_gt scalargtsel scalargtjoinsel ));
 DESCR("greater than");
 
 DATA(insert OID = 2326 ( "~<~"	PGNSP PGUID b f f 1042 1042 16 2330 2329 bpchar_pattern_lt scalarltsel scalarltjoinsel ));
 DESCR("less than");
+DATA(insert OID = 5326 ( "~<~"  PGNSP PGUID b f f 5001 5001 16 5330 5329 nbpchar_pattern_lt scalarltsel scalarltjoinsel ));
+DESCR("less than");
+
 DATA(insert OID = 2327 ( "~<=~" PGNSP PGUID b f f 1042 1042 16 2329 2330 bpchar_pattern_le scalarltsel scalarltjoinsel ));
 DESCR("less than or equal");
+DATA(insert OID = 5327 ( "~<=~" PGNSP PGUID b f f 5001 5001 16 5329 5330 nbpchar_pattern_le scalarltsel scalarltjoinsel ));
+DESCR("less than or equal");
+
 DATA(insert OID = 2329 ( "~>=~" PGNSP PGUID b f f 1042 1042 16 2327 2326 bpchar_pattern_ge scalargtsel scalargtjoinsel ));
 DESCR("greater than or equal");
+DATA(insert OID = 5329 ( "~>=~" PGNSP PGUID b f f 5001 5001 16 5327 5326 nbpchar_pattern_ge scalargtsel scalargtjoinsel ));
+DESCR("greater than or equal");
+
 DATA(insert OID = 2330 ( "~>~"	PGNSP PGUID b f f 1042 1042 16 2326 2327 bpchar_pattern_gt scalargtsel scalargtjoinsel ));
 DESCR("greater than");
+DATA(insert OID = 5330 ( "~>~"  PGNSP PGUID b f f 5001 5001 16 5326 5327 nbpchar_pattern_gt scalargtsel scalargtjoinsel ));
+DESCR("greater than");
 
 /* crosstype operations for date vs. timestamp and timestamptz */
 
 DATA(insert OID = 2345 ( "<"	   PGNSP PGUID b f f	1082	1114   16 2375 2348 date_lt_timestamp scalarltsel scalarltjoinsel ));
 DESCR("less than");
diff -U 5 -N -r -w ../orig.HEAD/src/include/catalog/pg_proc.h ./src/include/catalog/pg_proc.h
--- ../orig.HEAD/src/include/catalog/pg_proc.h	2013-08-19 20:22:12.000000000 -0400
+++ ./src/include/catalog/pg_proc.h	2013-08-19 20:54:00.000000000 -0400
@@ -743,10 +743,19 @@
 DESCR("convert int8 to float8");
 DATA(insert OID = 483 (  int8			   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 20 "701" _null_ _null_ _null_ _null_	dtoi8 _null_ _null_ _null_ ));
 DESCR("convert float8 to int8");
 
 /* OIDS 500 - 599 */
+/* national characters support functions */
+DATA(insert OID =  501 (  text             PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 25 "6001" _null_ _null_ _null_ _null_   nvarchar2text _null_ _null_ _null_ ));
+DESCR("convert nvarchar to text");
+DATA(insert OID =  502 (  nvarchar         PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 6001 "25" _null_ _null_ _null_ _null_   text2nvarchar _null_ _null_ _null_ ));
+DESCR("convert text to nvarchar");
+DATA(insert OID =  503 (  text             PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 25 "5001" _null_ _null_ _null_ _null_   nvarchar2rtrimtext _null_ _null_ _null_ ));
+DESCR("convert nchar to rtrimmed text");
+DATA(insert OID =  504 (  nvarchar         PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 6001 "1042" _null_ _null_ _null_ _null_   rtrimtext2nvarchar _null_ _null_ _null_ ));
+DESCR("convert rtrimmed char (bpchar) to nvarchar");
 
 /* OIDS 600 - 699 */
 
 DATA(insert OID = 626 (  hash_array		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "2277" _null_ _null_ _null_ _null_ hash_array _null_ _null_ _null_ ));
 DESCR("hash");
@@ -767,14 +776,23 @@
 DATA(insert OID = 658 (  namege			   PGNSP PGUID 12 1 0 0 0 f f f t t f i 2 0 16 "19 19" _null_ _null_ _null_ _null_ namege _null_ _null_ _null_ ));
 DATA(insert OID = 659 (  namene			   PGNSP PGUID 12 1 0 0 0 f f f t t f i 2 0 16 "19 19" _null_ _null_ _null_ _null_ namene _null_ _null_ _null_ ));
 
 DATA(insert OID = 668 (  bpchar			   PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 1042 "1042 23 16" _null_ _null_ _null_ _null_ bpchar _null_ _null_ _null_ ));
 DESCR("adjust char() to typmod length");
+DATA(insert OID = 5668 (  nbpchar                    PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 5001 "5001 23 16" _null_ _null_ _null_ _null_ bpchar _null_ _null_ _null_ ));
+DESCR("adjust nchar() to typmod length");
+
 DATA(insert OID = 3097 ( varchar_transform PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2281 "2281" _null_ _null_ _null_ _null_ varchar_transform _null_ _null_ _null_ ));
 DESCR("transform a varchar length coercion");
+DATA(insert OID = 5097 ( nvarchar_transform PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2281 "2281" _null_ _null_ _null_ _null_ varchar_transform _null_ _null_ _null_ ));
+DESCR("transform an nvarchar length coercion");
+
 DATA(insert OID = 669 (  varchar		   PGNSP PGUID 12 1 0 0 varchar_transform f f f f t f i 3 0 1043 "1043 23 16" _null_ _null_ _null_ _null_ varchar _null_ _null_ _null_ ));
 DESCR("adjust varchar() to typmod length");
+DATA(insert OID = 5669 (  nvarchar                   PGNSP PGUID 12 1 0 0 nvarchar_transform f f f f t f i 3 0 6001 "6001 23 16" _null_ _null_ _null_ _null_ varchar _null_ _null_ _null_ ));
+DESCR("adjust nvarchar() to typmod length");
+
 
 DATA(insert OID = 676 (  mktinterval	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 704 "702 702" _null_ _null_ _null_ _null_ mktinterval _null_ _null_ _null_ ));
 
 DATA(insert OID = 619 (  oidvectorne	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "30 30" _null_ _null_ _null_ _null_ oidvectorne _null_ _null_ _null_ ));
 DATA(insert OID = 677 (  oidvectorlt	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "30 30" _null_ _null_ _null_ _null_ oidvectorlt _null_ _null_ _null_ ));
@@ -1118,38 +1136,79 @@
 DESCR("make ACL item");
 DATA(insert OID = 3943 (  acldefault	PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 1034 "18 26" _null_ _null_ _null_ _null_  acldefault_sql _null_ _null_ _null_ ));
 DESCR("TODO");
 DATA(insert OID = 1689 (  aclexplode	PGNSP PGUID 12 1 10 0 0 f f f f t t s 1 0 2249 "1034" "{1034,26,26,25,16}" "{i,o,o,o,o}" "{acl,grantor,grantee,privilege_type,is_grantable}" _null_ aclexplode _null_ _null_ _null_ ));
 DESCR("convert ACL item array to table, for use by information schema");
-DATA(insert OID = 1044 (  bpcharin		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 1042 "2275 26 23" _null_ _null_ _null_ _null_ bpcharin _null_ _null_ _null_ ));
+
+DATA(insert OID = 1044 (  bpcharin		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 1042 "2275 26 23" _null_ _null_ _null_ _null_ bpcharin _null_ _null_ _null_ ));
 DESCR("I/O");
 DATA(insert OID = 1045 (  bpcharout		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "1042" _null_ _null_ _null_ _null_ bpcharout _null_ _null_ _null_ ));
 DESCR("I/O");
+
+DATA(insert OID = 5101 (  nbpcharin                 PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 5001 "2275 26 23" _null_ _null_ _null_ _null_ bpcharin _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5045 (  nbpcharout                PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "5001" _null_ _null_ _null_ _null_ bpcharout _null_ _null_ _null_ ));
+DESCR("I/O");
+
 DATA(insert OID = 2913 (  bpchartypmodin   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "1263" _null_ _null_ _null_ _null_	bpchartypmodin _null_ _null_ _null_ ));
 DESCR("I/O typmod");
 DATA(insert OID = 2914 (  bpchartypmodout  PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "23" _null_ _null_ _null_ _null_	bpchartypmodout _null_ _null_ _null_ ));
 DESCR("I/O typmod");
+
+DATA(insert OID = 5913 (  nbpchartypmodin   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "1263" _null_ _null_ _null_ _null_       bpchartypmodin _null_ _null_ _null_ ));
+DESCR("I/O typmod");
+DATA(insert OID = 5914 (  nbpchartypmodout  PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "23" _null_ _null_ _null_ _null_       bpchartypmodout _null_ _null_ _null_ ));
+DESCR("I/O typmod");
+
 DATA(insert OID = 1046 (  varcharin		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 1043 "2275 26 23" _null_ _null_ _null_ _null_ varcharin _null_ _null_ _null_ ));
 DESCR("I/O");
 DATA(insert OID = 1047 (  varcharout	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "1043" _null_ _null_ _null_ _null_ varcharout _null_ _null_ _null_ ));
 DESCR("I/O");
+
+DATA(insert OID = 5046 (  nvarcharin       PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 6001 "2275 26 23" _null_ _null_ _null_ _null_ nvarcharin _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5047 (  nvarcharout      PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "6001" _null_ _null_ _null_ _null_ nvarcharout _null_ _null_ _null_ ));
+DESCR("I/O");
+
 DATA(insert OID = 2915 (  varchartypmodin  PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "1263" _null_ _null_ _null_ _null_	varchartypmodin _null_ _null_ _null_ ));
 DESCR("I/O typmod");
 DATA(insert OID = 2916 (  varchartypmodout PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "23" _null_ _null_ _null_ _null_	varchartypmodout _null_ _null_ _null_ ));
 DESCR("I/O typmod");
+
+DATA(insert OID = 5915 (  nvarchartypmodin  PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "1263" _null_ _null_ _null_ _null_       varchartypmodin _null_ _null_ _null_ ));
+DESCR("I/O typmod");
+DATA(insert OID = 5916 (  nvarchartypmodout PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "23" _null_ _null_ _null_ _null_       varchartypmodout _null_ _null_ _null_ ));
+DESCR("I/O typmod");
+
 DATA(insert OID = 1048 (  bpchareq		   PGNSP PGUID 12 1 0 0 0 f f f t t f i 2 0 16 "1042 1042" _null_ _null_ _null_ _null_ bpchareq _null_ _null_ _null_ ));
 DATA(insert OID = 1049 (  bpcharlt		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 1042" _null_ _null_ _null_ _null_ bpcharlt _null_ _null_ _null_ ));
 DATA(insert OID = 1050 (  bpcharle		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 1042" _null_ _null_ _null_ _null_ bpcharle _null_ _null_ _null_ ));
 DATA(insert OID = 1051 (  bpchargt		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 1042" _null_ _null_ _null_ _null_ bpchargt _null_ _null_ _null_ ));
 DATA(insert OID = 1052 (  bpcharge		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 1042" _null_ _null_ _null_ _null_ bpcharge _null_ _null_ _null_ ));
 DATA(insert OID = 1053 (  bpcharne		   PGNSP PGUID 12 1 0 0 0 f f f t t f i 2 0 16 "1042 1042" _null_ _null_ _null_ _null_ bpcharne _null_ _null_ _null_ ));
+
+DATA(insert OID = 5048 (  nbpchareq                 PGNSP PGUID 12 1 0 0 0 f f f t t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchareq _null_ _null_ _null_ ));
+DATA(insert OID = 5049 (  nbpcharlt                 PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpcharlt _null_ _null_ _null_ ));
+DATA(insert OID = 5050 (  nbpcharle                 PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpcharle _null_ _null_ _null_ ));
+DATA(insert OID = 5051 (  nbpchargt                 PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchargt _null_ _null_ _null_ ));
+DATA(insert OID = 5052 (  nbpcharge                 PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpcharge _null_ _null_ _null_ ));
+DATA(insert OID = 5053 (  nbpcharne                 PGNSP PGUID 12 1 0 0 0 f f f t t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpcharne _null_ _null_ _null_ ));
+
 DATA(insert OID = 1063 (  bpchar_larger    PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 1042 "1042 1042" _null_ _null_ _null_ _null_ bpchar_larger _null_ _null_ _null_ ));
 DESCR("larger of two");
 DATA(insert OID = 1064 (  bpchar_smaller   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 1042 "1042 1042" _null_ _null_ _null_ _null_ bpchar_smaller _null_ _null_ _null_ ));
 DESCR("smaller of two");
 DATA(insert OID = 1078 (  bpcharcmp		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "1042 1042" _null_ _null_ _null_ _null_ bpcharcmp _null_ _null_ _null_ ));
 DESCR("less-equal-greater");
+
+DATA(insert OID = 5063 (  nbpchar_larger    PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 5001 "5001 5001" _null_ _null_ _null_ _null_ bpchar_larger _null_ _null_ _null_ ));
+DESCR("larger of two");
+DATA(insert OID = 5064 (  nbpchar_smaller   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 5001 "5001 5001" _null_ _null_ _null_ _null_ bpchar_smaller _null_ _null_ _null_ ));
+DESCR("smaller of two");
+DATA(insert OID = 5078 (  nbpcharcmp                PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "5001 5001" _null_ _null_ _null_ _null_ bpcharcmp _null_ _null_ _null_ ));
+DESCR("less-equal-greater");
+
 DATA(insert OID = 1080 (  hashbpchar	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "1042" _null_ _null_ _null_ _null_	hashbpchar _null_ _null_ _null_ ));
 DESCR("hash");
 DATA(insert OID = 1081 (  format_type	   PGNSP PGUID 12 1 0 0 0 f f f f f f s 2 0 25 "26 23" _null_ _null_ _null_ _null_ format_type _null_ _null_ _null_ ));
 DESCR("format a type oid and atttypmod to canonical SQL");
 DATA(insert OID = 1084 (  date_in		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 1 0 1082 "2275" _null_ _null_ _null_ _null_ date_in _null_ _null_ _null_ ));
@@ -1492,10 +1551,14 @@
 DESCR("character length");
 DATA(insert OID = 1374 (  octet_length			 PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "25" _null_ _null_ _null_ _null_	textoctetlen _null_ _null_ _null_ ));
 DESCR("octet length");
 DATA(insert OID = 1375 (  octet_length			 PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "1042" _null_ _null_ _null_ _null_ bpcharoctetlen _null_ _null_ _null_ ));
 DESCR("octet length");
+DATA(insert OID = 5374 (  octet_length                   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "5001" _null_ _null_ _null_ _null_ bpcharoctetlen _null_ _null_ _null_ ));
+DESCR("octet length");
+DATA(insert OID = 5375 (  octet_length                   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "6001" _null_ _null_ _null_ _null_ textoctetlen _null_ _null_ _null_ ));
+DESCR("octet length");
 
 DATA(insert OID = 1377 (  time_larger	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 1083 "1083 1083" _null_ _null_ _null_ _null_ time_larger _null_ _null_ _null_ ));
 DESCR("larger of two");
 DATA(insert OID = 1378 (  time_smaller	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 1083 "1083 1083" _null_ _null_ _null_ _null_ time_smaller _null_ _null_ _null_ ));
 DESCR("smaller of two");
@@ -1547,10 +1610,15 @@
 DATA(insert OID = 1400 (  name		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 19 "1043" _null_ _null_ _null_ _null_	text_name _null_ _null_ _null_ ));
 DESCR("convert varchar to name");
 DATA(insert OID = 1401 (  varchar	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 1043 "19" _null_ _null_ _null_ _null_	name_text _null_ _null_ _null_ ));
 DESCR("convert name to varchar");
 
+DATA(insert OID = 5400 (  name             PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 19 "6001" _null_ _null_ _null_ _null_       text_name _null_ _null_ _null_ ));
+DESCR("convert nvarchar to name");
+DATA(insert OID = 5401 (  nvarchar         PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 6001 "19" _null_ _null_ _null_ _null_       name_text _null_ _null_ _null_ ));
+DESCR("convert name to nvarchar");
+
 DATA(insert OID = 1402 (  current_schema	PGNSP PGUID 12 1 0 0 0 f f f f t f s 0 0 19 "" _null_ _null_ _null_ _null_ current_schema _null_ _null_ _null_ ));
 DESCR("current schema name");
 DATA(insert OID = 1403 (  current_schemas	PGNSP PGUID 12 1 0 0 0 f f f f t f s 1 0 1003 "16" _null_ _null_ _null_ _null_	current_schemas _null_ _null_ _null_ ));
 DESCR("current schema search list");
 
@@ -1821,10 +1889,13 @@
 
 DATA(insert OID = 1624 (  mul_d_interval	PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 1186 "701 1186" _null_ _null_ _null_ _null_ mul_d_interval _null_ _null_ _null_ ));
 
 DATA(insert OID = 1631 (  bpcharlike	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 25" _null_ _null_ _null_ _null_ textlike _null_ _null_ _null_ ));
 DATA(insert OID = 1632 (  bpcharnlike	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 25" _null_ _null_ _null_ _null_ textnlike _null_ _null_ _null_ ));
+DATA(insert OID = 5631 (  nbpcharlike       PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ textlike _null_ _null_ _null_ ));
+DATA(insert OID = 5632 (  nbpcharnlike      PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ textnlike _null_ _null_ _null_ ));
+
 
 DATA(insert OID = 1633 (  texticlike		PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "25 25" _null_ _null_ _null_ _null_ texticlike _null_ _null_ _null_ ));
 DATA(insert OID = 1634 (  texticnlike		PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "25 25" _null_ _null_ _null_ _null_ texticnlike _null_ _null_ _null_ ));
 DATA(insert OID = 1635 (  nameiclike		PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "19 25" _null_ _null_ _null_ _null_ nameiclike _null_ _null_ _null_ ));
 DATA(insert OID = 1636 (  nameicnlike		PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "19 25" _null_ _null_ _null_ _null_ nameicnlike _null_ _null_ _null_ ));
@@ -1836,10 +1907,18 @@
 DATA(insert OID = 1658 (  bpcharregexeq    PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 25" _null_ _null_ _null_ _null_ textregexeq _null_ _null_ _null_ ));
 DATA(insert OID = 1659 (  bpcharregexne    PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 25" _null_ _null_ _null_ _null_ textregexne _null_ _null_ _null_ ));
 DATA(insert OID = 1660 (  bpchariclike		PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 25" _null_ _null_ _null_ _null_ texticlike _null_ _null_ _null_ ));
 DATA(insert OID = 1661 (  bpcharicnlike		PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 25" _null_ _null_ _null_ _null_ texticnlike _null_ _null_ _null_ ));
 
+DATA(insert OID = 5656 (  nbpcharicregexeq        PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ texticregexeq _null_ _null_ _null_ ));
+DATA(insert OID = 5657 (  nbpcharicregexne        PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ texticregexne _null_ _null_ _null_ ));
+DATA(insert OID = 5658 (  nbpcharregexeq    PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ textregexeq _null_ _null_ _null_ ));
+DATA(insert OID = 5659 (  nbpcharregexne    PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ textregexne _null_ _null_ _null_ ));
+DATA(insert OID = 5660 (  nbpchariclike          PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ texticlike _null_ _null_ _null_ ));
+DATA(insert OID = 5661 (  nbpcharicnlike         PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ texticnlike _null_ _null_ _null_ ));
+
+
 /* Oracle Compatibility Related Functions - By Edmund Mergl <E.Mergl@bawue.de> */
 DATA(insert OID =  868 (  strpos	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "25 25" _null_ _null_ _null_ _null_ textpos _null_ _null_ _null_ ));
 DESCR("position of substring");
 DATA(insert OID =  870 (  lower		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 25 "25" _null_ _null_ _null_ _null_ lower _null_ _null_ _null_ ));
 DESCR("lowercase");
@@ -3256,10 +3335,18 @@
 DATA(insert OID = 2177 ( bpchar_pattern_ge	  PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 1042" _null_ _null_ _null_ _null_ bpchar_pattern_ge _null_ _null_ _null_ ));
 DATA(insert OID = 2178 ( bpchar_pattern_gt	  PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "1042 1042" _null_ _null_ _null_ _null_ bpchar_pattern_gt _null_ _null_ _null_ ));
 DATA(insert OID = 2180 ( btbpchar_pattern_cmp PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "1042 1042" _null_ _null_ _null_ _null_ btbpchar_pattern_cmp _null_ _null_ _null_ ));
 DESCR("less-equal-greater");
 
+DATA(insert OID = 5174 ( nbpchar_pattern_lt        PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchar_pattern_lt _null_ _null_ _null_ ));
+DATA(insert OID = 5175 ( nbpchar_pattern_le        PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchar_pattern_le _null_ _null_ _null_ ));
+DATA(insert OID = 5177 ( nbpchar_pattern_ge        PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchar_pattern_ge _null_ _null_ _null_ ));
+DATA(insert OID = 5178 ( nbpchar_pattern_gt        PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchar_pattern_gt _null_ _null_ _null_ ));
+DATA(insert OID = 5180 ( nbtbpchar_pattern_cmp PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "5001 5001" _null_ _null_ _null_ _null_ btbpchar_pattern_cmp _null_ _null_ _null_ ));
+DESCR("less-equal-greater");
+
+
 DATA(insert OID = 2188 ( btint48cmp			PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "23 20" _null_ _null_ _null_ _null_ btint48cmp _null_ _null_ _null_ ));
 DESCR("less-equal-greater");
 DATA(insert OID = 2189 ( btint84cmp			PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "20 23" _null_ _null_ _null_ _null_ btint84cmp _null_ _null_ _null_ ));
 DESCR("less-equal-greater");
 DATA(insert OID = 2190 ( btint24cmp			PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "21 23" _null_ _null_ _null_ _null_ btint24cmp _null_ _null_ _null_ ));
@@ -3643,14 +3730,22 @@
 DESCR("I/O");
 DATA(insert OID = 2430 (  bpcharrecv		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 3 0 1042 "2281 26 23" _null_ _null_ _null_ _null_  bpcharrecv _null_ _null_ _null_ ));
 DESCR("I/O");
 DATA(insert OID = 2431 (  bpcharsend		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 1 0 17 "1042" _null_ _null_ _null_ _null_	bpcharsend _null_ _null_ _null_ ));
 DESCR("I/O");
+DATA(insert OID = 5430 (  nbpcharrecv              PGNSP PGUID 12 1 0 0 0 f f f f t f s 3 0 5001 "2281 26 23" _null_ _null_ _null_ _null_  bpcharrecv _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5431 (  nbpcharsend              PGNSP PGUID 12 1 0 0 0 f f f f t f s 1 0 17 "5001" _null_ _null_ _null_ _null_       bpcharsend _null_ _null_ _null_ ));
+DESCR("I/O");
 DATA(insert OID = 2432 (  varcharrecv		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 3 0 1043 "2281 26 23" _null_ _null_ _null_ _null_  varcharrecv _null_ _null_ _null_ ));
 DESCR("I/O");
 DATA(insert OID = 2433 (  varcharsend		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 1 0 17 "1043" _null_ _null_ _null_ _null_	varcharsend _null_ _null_ _null_ ));
 DESCR("I/O");
+DATA(insert OID = 5432 (  nvarcharrecv             PGNSP PGUID 12 1 0 0 0 f f f f t f s 3 0 6001 "2281 26 23" _null_ _null_ _null_ _null_  varcharrecv _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5433 (  nvarcharsend             PGNSP PGUID 12 1 0 0 0 f f f f t f s 1 0 17 "6001" _null_ _null_ _null_ _null_       varcharsend _null_ _null_ _null_ ));
+DESCR("I/O");
 DATA(insert OID = 2434 (  charrecv			   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 18 "2281" _null_ _null_ _null_ _null_	charrecv _null_ _null_ _null_ ));
 DESCR("I/O");
 DATA(insert OID = 2435 (  charsend			   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 17 "18" _null_ _null_ _null_ _null_ charsend _null_ _null_ _null_ ));
 DESCR("I/O");
 DATA(insert OID = 2436 (  boolrecv			   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 16 "2281" _null_ _null_ _null_ _null_	boolrecv _null_ _null_ _null_ ));
diff -U 5 -N -r -w ../orig.HEAD/src/include/catalog/pg_type.h ./src/include/catalog/pg_type.h
--- ../orig.HEAD/src/include/catalog/pg_type.h	2013-08-19 20:22:12.000000000 -0400
+++ ./src/include/catalog/pg_type.h	2013-08-19 20:54:00.000000000 -0400
@@ -457,11 +457,13 @@
 DATA(insert OID = 1010 (  _tid		 PGNSP PGUID -1 f b A f t \054 0	27 0 array_in array_out array_recv array_send - - array_typanalyze i x f 0 -1 0 0 _null_ _null_ _null_ ));
 DATA(insert OID = 1011 (  _xid		 PGNSP PGUID -1 f b A f t \054 0	28 0 array_in array_out array_recv array_send - - array_typanalyze i x f 0 -1 0 0 _null_ _null_ _null_ ));
 DATA(insert OID = 1012 (  _cid		 PGNSP PGUID -1 f b A f t \054 0	29 0 array_in array_out array_recv array_send - - array_typanalyze i x f 0 -1 0 0 _null_ _null_ _null_ ));
 DATA(insert OID = 1013 (  _oidvector PGNSP PGUID -1 f b A f t \054 0	30 0 array_in array_out array_recv array_send - - array_typanalyze i x f 0 -1 0 0 _null_ _null_ _null_ ));
 DATA(insert OID = 1014 (  _bpchar	 PGNSP PGUID -1 f b A f t \054 0 1042 0 array_in array_out array_recv array_send bpchartypmodin bpchartypmodout array_typanalyze i x f 0 -1 0 100 _null_ _null_ _null_ ));
+DATA(insert OID = 5014 (  _nbpchar       PGNSP PGUID -1 f b A f t \054 0 5001 0 array_in array_out array_recv array_send nbpchartypmodin nbpchartypmodout array_typanalyze i x f 0 -1 0 100 _null_ _null_ _null_ ));
 DATA(insert OID = 1015 (  _varchar	 PGNSP PGUID -1 f b A f t \054 0 1043 0 array_in array_out array_recv array_send varchartypmodin varchartypmodout array_typanalyze i x f 0 -1 0 100 _null_ _null_ _null_ ));
+DATA(insert OID = 5015 (  _nvarchar       PGNSP PGUID -1 f b A f t \054 0 6001 0 array_in array_out array_recv array_send nvarchartypmodin nvarchartypmodout array_typanalyze i x f 0 -1 0 100 _null_ _null_ _null_ ));
 DATA(insert OID = 1016 (  _int8		 PGNSP PGUID -1 f b A f t \054 0	20 0 array_in array_out array_recv array_send - - array_typanalyze d x f 0 -1 0 0 _null_ _null_ _null_ ));
 DATA(insert OID = 1017 (  _point	 PGNSP PGUID -1 f b A f t \054 0 600 0 array_in array_out array_recv array_send - - array_typanalyze d x f 0 -1 0 0 _null_ _null_ _null_ ));
 DATA(insert OID = 1018 (  _lseg		 PGNSP PGUID -1 f b A f t \054 0 601 0 array_in array_out array_recv array_send - - array_typanalyze d x f 0 -1 0 0 _null_ _null_ _null_ ));
 DATA(insert OID = 1019 (  _path		 PGNSP PGUID -1 f b A f t \054 0 602 0 array_in array_out array_recv array_send - - array_typanalyze d x f 0 -1 0 0 _null_ _null_ _null_ ));
 DATA(insert OID = 1020 (  _box		 PGNSP PGUID -1 f b A f t \073 0 603 0 array_in array_out array_recv array_send - - array_typanalyze d x f 0 -1 0 0 _null_ _null_ _null_ ));
@@ -483,14 +485,23 @@
 #define CSTRINGARRAYOID		1263
 
 DATA(insert OID = 1042 ( bpchar		 PGNSP PGUID -1 f b S f t \054 0	0 1014 bpcharin bpcharout bpcharrecv bpcharsend bpchartypmodin bpchartypmodout - i x f 0 -1 0 100 _null_ _null_ _null_ ));
 DESCR("char(length), blank-padded string, fixed storage length");
 #define BPCHAROID		1042
+
+DATA(insert OID = 5001 ( nbpchar         PGNSP PGUID -1 f b S f t \054 0        0 5014 nbpcharin nbpcharout nbpcharrecv nbpcharsend nbpchartypmodin nbpchartypmodout - i x f 0 -1 0 100 _null_ _null_ _null_ ));
+DESCR("nchar(length), blank-padded national string, fixed storage length");
+#define NBPCHAROID		5001
+
 DATA(insert OID = 1043 ( varchar	 PGNSP PGUID -1 f b S f t \054 0	0 1015 varcharin varcharout varcharrecv varcharsend varchartypmodin varchartypmodout - i x f 0 -1 0 100 _null_ _null_ _null_ ));
 DESCR("varchar(length), non-blank-padded string, variable storage length");
 #define VARCHAROID		1043
 
+DATA(insert OID = 6001 ( nvarchar         PGNSP PGUID -1 f b S f t \054 0        0 5015 nvarcharin nvarcharout nvarcharrecv nvarcharsend nvarchartypmodin nvarchartypmodout - i x f 0 -1 0 100 _null_ _null_ _null_ ));
+DESCR("nvarchar(length), non-blank-padded national string, variable storage length");
+#define NVARCHAROID		6001
+
 DATA(insert OID = 1082 ( date		 PGNSP PGUID	4 t b D f t \054 0	0 1182 date_in date_out date_recv date_send - - - i p f 0 -1 0 0 _null_ _null_ _null_ ));
 DESCR("date");
 #define DATEOID			1082
 DATA(insert OID = 1083 ( time		 PGNSP PGUID	8 FLOAT8PASSBYVAL b D f t \054 0	0 1183 time_in time_out time_recv time_send timetypmodin timetypmodout - d p f 0 -1 0 0 _null_ _null_ _null_ ));
 DESCR("time of day");
diff -U 5 -N -r -w ../orig.HEAD/src/include/parser/kwlist.h ./src/include/parser/kwlist.h
--- ../orig.HEAD/src/include/parser/kwlist.h	2013-08-19 20:22:12.000000000 -0400
+++ ./src/include/parser/kwlist.h	2013-08-19 20:54:00.000000000 -0400
@@ -255,10 +255,11 @@
 PG_KEYWORD("nowait", NOWAIT, UNRESERVED_KEYWORD)
 PG_KEYWORD("null", NULL_P, RESERVED_KEYWORD)
 PG_KEYWORD("nullif", NULLIF, COL_NAME_KEYWORD)
 PG_KEYWORD("nulls", NULLS_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("numeric", NUMERIC, COL_NAME_KEYWORD)
+PG_KEYWORD("nvarchar", NVARCHAR, COL_NAME_KEYWORD)
 PG_KEYWORD("object", OBJECT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("of", OF, UNRESERVED_KEYWORD)
 PG_KEYWORD("off", OFF, UNRESERVED_KEYWORD)
 PG_KEYWORD("offset", OFFSET, RESERVED_KEYWORD)
 PG_KEYWORD("oids", OIDS, UNRESERVED_KEYWORD)
diff -U 5 -N -r -w ../orig.HEAD/src/include/utils/builtins.h ./src/include/utils/builtins.h
--- ../orig.HEAD/src/include/utils/builtins.h	2013-08-19 20:22:13.000000000 -0400
+++ ./src/include/utils/builtins.h	2013-08-19 20:54:00.000000000 -0400
@@ -324,10 +324,33 @@
 extern Datum btfloat4sortsupport(PG_FUNCTION_ARGS);
 extern Datum btfloat8sortsupport(PG_FUNCTION_ARGS);
 extern Datum btoidsortsupport(PG_FUNCTION_ARGS);
 extern Datum btnamesortsupport(PG_FUNCTION_ARGS);
 
+/* nvarchar.c */
+extern Datum ntextin(PG_FUNCTION_ARGS);
+extern Datum ntextout(PG_FUNCTION_ARGS);
+extern Datum nvarcharin(PG_FUNCTION_ARGS);
+extern Datum nvarcharout(PG_FUNCTION_ARGS);
+extern Datum ntextlower(PG_FUNCTION_ARGS);
+extern Datum ntextupper(PG_FUNCTION_ARGS);
+extern Datum ntextoctetlen(PG_FUNCTION_ARGS);
+extern Datum nvarchar2text(PG_FUNCTION_ARGS);
+extern Datum text2nvarchar(PG_FUNCTION_ARGS);
+extern Datum nvarchar2rtrimtext(PG_FUNCTION_ARGS);
+extern Datum rtrimtext2nvarchar(PG_FUNCTION_ARGS);
+extern Datum ntext_cmp(PG_FUNCTION_ARGS);
+extern Datum ntext_hash(PG_FUNCTION_ARGS);
+extern Datum ntext_eq(PG_FUNCTION_ARGS);
+extern Datum ntext_ne(PG_FUNCTION_ARGS);
+extern Datum ntext_gt(PG_FUNCTION_ARGS);
+extern Datum ntext_ge(PG_FUNCTION_ARGS);
+extern Datum ntext_lt(PG_FUNCTION_ARGS);
+extern Datum ntext_le(PG_FUNCTION_ARGS);
+extern Datum ntext_smaller(PG_FUNCTION_ARGS);
+extern Datum ntext_larger(PG_FUNCTION_ARGS);
+
 /* float.c */
 extern PGDLLIMPORT int extra_float_digits;
 
 extern double get_float8_infinity(void);
 extern float get_float4_infinity(void);
@@ -831,10 +854,13 @@
 extern Datum byteanlike(PG_FUNCTION_ARGS);
 extern Datum like_escape(PG_FUNCTION_ARGS);
 extern Datum like_escape_bytea(PG_FUNCTION_ARGS);
 
 /* oracle_compat.c */
+extern text *dotrim(const char *string, int stringlen,
+       const char *set, int setlen,
+       bool doltrim, bool dortrim);
 extern Datum lower(PG_FUNCTION_ARGS);
 extern Datum upper(PG_FUNCTION_ARGS);
 extern Datum initcap(PG_FUNCTION_ARGS);
 extern Datum lpad(PG_FUNCTION_ARGS);
 extern Datum rpad(PG_FUNCTION_ARGS);
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/data.c ./src/interfaces/ecpg/ecpglib/data.c
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/data.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/data.c	2013-08-20 05:02:12.000000000 -0400
@@ -225,12 +225,14 @@
 		}
 
 		switch (type)
 		{
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_varchar:
+			case ECPGt_nvarchar:
 			case ECPGt_string:
 				break;
 
 			default:
 				pval++;
@@ -448,10 +450,11 @@
 							   ECPG_SQLSTATE_DATATYPE_MISMATCH, pval);
 					return (false);
 					break;
 
 				case ECPGt_char:
+				case ECPGt_nchar:
 				case ECPGt_unsigned_char:
 				case ECPGt_string:
 					{
 						char	   *str = (char *) (var + offset * act_tuple);
 
@@ -506,10 +509,11 @@
 						pval += size;
 					}
 					break;
 
 				case ECPGt_varchar:
+				case ECPGt_nvarchar:
 					{
 						struct ECPGgeneric_varchar *variable =
 						(struct ECPGgeneric_varchar *) (var + offset * act_tuple);
 
 						variable->len = size;
@@ -545,17 +549,20 @@
 									default:
 										break;
 								}
 								sqlca->sqlwarn[0] = sqlca->sqlwarn[1] = 'W';
 
+								/*
+								 * XXX this may truncate in the middle of a
+								 * multibyte UTF-8 sequence
+								 */
 								variable->len = varcharsize;
 							}
 						}
 						pval += size;
 					}
 					break;
-
 				case ECPGt_decimal:
 				case ECPGt_numeric:
 					if (isarray && *pval == '"')
 						nres = PGTYPESnumeric_from_asc(pval + 1, &scan_length);
 					else
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/descriptor.c ./src/interfaces/ecpg/ecpglib/descriptor.c
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/descriptor.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/descriptor.c	2013-08-20 05:00:38.000000000 -0400
@@ -198,15 +198,17 @@
 get_char_item(int lineno, void *var, enum ECPGttype vartype, char *value, int varcharsize)
 {
 	switch (vartype)
 	{
 		case ECPGt_char:
+		case ECPGt_nchar:
 		case ECPGt_unsigned_char:
 		case ECPGt_string:
 			strncpy((char *) var, value, varcharsize);
 			break;
 		case ECPGt_varchar:
+		case ECPGt_nvarchar:
 			{
 				struct ECPGgeneric_varchar *variable =
 				(struct ECPGgeneric_varchar *) var;
 
 				if (varcharsize == 0)
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/error.c ./src/interfaces/ecpg/ecpglib/error.c
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/error.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/error.c	2013-08-19 20:54:00.000000000 -0400
@@ -37,10 +37,20 @@
 			 * expanded.
 			 */
 					 ecpg_gettext("out of memory on line %d"), line);
 			break;
 
+		case ECPG_ENCODING_ERROR:
+			snprintf(sqlca->sqlerrm.sqlerrmc, sizeof(sqlca->sqlerrm.sqlerrmc),
+
+			/*
+			 * translator: this string will be truncated at 149 characters
+			 * expanded.
+			 */
+					 ecpg_gettext("encoding conversion error on line %d"), line);
+			break;
+
 		case ECPG_UNSUPPORTED:
 			snprintf(sqlca->sqlerrm.sqlerrmc, sizeof(sqlca->sqlerrm.sqlerrmc),
 
 			/*
 			 * translator: this string will be truncated at 149 characters
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/execute.c ./src/interfaces/ecpg/ecpglib/execute.c
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/execute.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/execute.c	2013-08-20 05:05:56.000000000 -0400
@@ -244,12 +244,16 @@
 			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), CIDROID, ECPG_ARRAY_NONE, stmt->lineno))
 			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), BPCHAROID, ECPG_ARRAY_NONE, stmt->lineno))
 			return (ECPG_ARRAY_ERROR);
+		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), NBPCHAROID, ECPG_ARRAY_NONE, stmt->lineno))
+			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), VARCHAROID, ECPG_ARRAY_NONE, stmt->lineno))
 			return (ECPG_ARRAY_ERROR);
+		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), NVARCHAROID, ECPG_ARRAY_NONE, stmt->lineno))
+			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), DATEOID, ECPG_ARRAY_NONE, stmt->lineno))
 			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TIMEOID, ECPG_ARRAY_NONE, stmt->lineno))
 			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TIMESTAMPOID, ECPG_ARRAY_NONE, stmt->lineno))
@@ -360,10 +364,11 @@
 		if (!PQfformat(results, act_field))
 		{
 			switch (var->type)
 			{
 				case ECPGt_char:
+				case ECPGt_nchar:
 				case ECPGt_unsigned_char:
 				case ECPGt_string:
 					if (!var->varcharsize && !var->arrsize)
 					{
 						/* special mode for handling char**foo=0 */
@@ -385,10 +390,11 @@
 						}
 						var->offset *= var->varcharsize;
 						len = var->offset * ntuples;
 					}
 					break;
+				case ECPGt_nvarchar:
 				case ECPGt_varchar:
 					len = ntuples * (var->varcharsize + sizeof(int));
 					break;
 				default:
 					len = var->offset * ntuples;
@@ -421,11 +427,11 @@
 		ecpg_add_mem(var->ind_value, stmt->lineno);
 	}
 
 	/* fill the variable with the tuple(s) */
 	if (!var->varcharsize && !var->arrsize &&
-		(var->type == ECPGt_char || var->type == ECPGt_unsigned_char || var->type == ECPGt_string))
+		(var->type == ECPGt_char || var->type == ECPGt_unsigned_char || var->type == ECPGt_string || var->type == ECPGt_nchar))
 	{
 		/* special mode for handling char**foo=0 */
 
 		/* filling the array of (char*)s */
 		char	  **current_string = (char **) var->value;
@@ -791,10 +797,11 @@
 
 				*tobeinserted_p = mallocedval;
 				break;
 
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_string:
 				{
 					/* set slen to string length if type is char * */
 					int			slen = (var->varcharsize == 0) ? strlen((char *) var->value) : (unsigned int) var->varcharsize;
@@ -825,10 +832,11 @@
 
 					*tobeinserted_p = mallocedval;
 				}
 				break;
 			case ECPGt_varchar:
+			case ECPGt_nvarchar:
 				{
 					struct ECPGgeneric_varchar *variable =
 					(struct ECPGgeneric_varchar *) (var->value);
 
 					if (!(newcopy = (char *) ecpg_alloc(variable->len + 1, lineno)))
@@ -842,11 +850,10 @@
 						return false;
 
 					*tobeinserted_p = mallocedval;
 				}
 				break;
-
 			case ECPGt_decimal:
 			case ECPGt_numeric:
 				{
 					char	   *str = NULL;
 					int			slen;
@@ -1229,10 +1236,12 @@
 						desc_inlist.value = sqlda->sqlvar[i].sqldata;
 						desc_inlist.pointer = &(sqlda->sqlvar[i].sqldata);
 						switch (desc_inlist.type)
 						{
 							case ECPGt_char:
+							case ECPGt_nchar:
+							case ECPGt_nvarchar:
 							case ECPGt_varchar:
 								desc_inlist.varcharsize = strlen(sqlda->sqlvar[i].sqldata);
 								break;
 							default:
 								desc_inlist.varcharsize = 0;
@@ -1284,10 +1293,12 @@
 						desc_inlist.value = sqlda->sqlvar[i].sqldata;
 						desc_inlist.pointer = &(sqlda->sqlvar[i].sqldata);
 						switch (desc_inlist.type)
 						{
 							case ECPGt_char:
+							case ECPGt_nchar:
+							case ECPGt_nvarchar:
 							case ECPGt_varchar:
 								desc_inlist.varcharsize = strlen(sqlda->sqlvar[i].sqldata);
 								break;
 							default:
 								desc_inlist.varcharsize = 0;
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/extern.h ./src/interfaces/ecpg/ecpglib/extern.h
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/extern.h	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/extern.h	2013-08-19 20:54:00.000000000 -0400
@@ -202,7 +202,8 @@
 #define ECPG_SQLSTATE_DUPLICATE_CURSOR		"42P03"
 
 /* implementation-defined internal errors of ecpg */
 #define ECPG_SQLSTATE_ECPG_INTERNAL_ERROR	"YE000"
 #define ECPG_SQLSTATE_ECPG_OUT_OF_MEMORY	"YE001"
+#define ECPG_SQLSTATE_ECPG_ENCODING_ERROR	"YE002"
 
 #endif   /* _ECPG_LIB_EXTERN_H */
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/misc.c ./src/interfaces/ecpg/ecpglib/misc.c
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/misc.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/misc.c	2013-08-20 05:04:28.000000000 -0400
@@ -303,10 +303,11 @@
 ECPGset_noind_null(enum ECPGttype type, void *ptr)
 {
 	switch (type)
 	{
 		case ECPGt_char:
+		case ECPGt_nchar:
 		case ECPGt_unsigned_char:
 		case ECPGt_string:
 			*((char *) ptr) = '\0';
 			break;
 		case ECPGt_short:
@@ -332,10 +333,11 @@
 			memset((char *) ptr, 0xff, sizeof(float));
 			break;
 		case ECPGt_double:
 			memset((char *) ptr, 0xff, sizeof(double));
 			break;
+		case ECPGt_nvarchar:
 		case ECPGt_varchar:
 			*(((struct ECPGgeneric_varchar *) ptr)->arr) = 0x00;
 			((struct ECPGgeneric_varchar *) ptr)->len = 0;
 			break;
 		case ECPGt_decimal:
@@ -405,10 +407,11 @@
 			return (_check(ptr, sizeof(float)));
 			break;
 		case ECPGt_double:
 			return (_check(ptr, sizeof(double)));
 			break;
+		case ECPGt_nvarchar:
 		case ECPGt_varchar:
 			if (*(((struct ECPGgeneric_varchar *) ptr)->arr) == 0x00)
 				return true;
 			break;
 		case ECPGt_decimal:
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/pg_type.h ./src/interfaces/ecpg/ecpglib/pg_type.h
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/pg_type.h	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/pg_type.h	2013-08-19 20:54:00.000000000 -0400
@@ -46,10 +46,12 @@
 #define CASHOID 790
 #define INETOID 869
 #define CIDROID 650
 #define BPCHAROID		1042
 #define VARCHAROID		1043
+#define NBPCHAROID		5001
+#define NVARCHAROID		6001
 #define DATEOID			1082
 #define TIMEOID			1083
 #define TIMESTAMPOID	1114
 #define TIMESTAMPTZOID	1184
 #define INTERVALOID		1186
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/sqlda.c ./src/interfaces/ecpg/ecpglib/sqlda.c
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/sqlda.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/sqlda.c	2013-08-19 20:54:00.000000000 -0400
@@ -132,10 +132,11 @@
 				break;
 			case ECPGt_interval:
 				ecpg_sqlda_align_add_size(offset, sizeof(int64), sizeof(interval), &offset, &next_offset);
 				break;
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_string:
 			default:
 				{
 					long		datalen = strlen(PQgetvalue(res, row, i)) + 1;
@@ -371,10 +372,11 @@
 				ecpg_sqlda_align_add_size(offset, sizeof(int64), sizeof(interval), &offset, &next_offset);
 				sqlda->sqlvar[i].sqldata = (char *) sqlda + offset;
 				sqlda->sqlvar[i].sqllen = sizeof(interval);
 				break;
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_string:
 			default:
 				datalen = strlen(PQgetvalue(res, row, i)) + 1;
 				ecpg_sqlda_align_add_size(offset, sizeof(int), datalen, &offset, &next_offset);
@@ -560,10 +562,11 @@
 				ecpg_sqlda_align_add_size(offset, sizeof(int64), sizeof(interval), &offset, &next_offset);
 				sqlda->sqlvar[i].sqldata = (char *) sqlda + offset;
 				sqlda->sqlvar[i].sqllen = sizeof(interval);
 				break;
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_string:
 			default:
 				datalen = strlen(PQgetvalue(res, row, i)) + 1;
 				ecpg_sqlda_align_add_size(offset, sizeof(int), datalen, &offset, &next_offset);
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/ecpglib/typename.c ./src/interfaces/ecpg/ecpglib/typename.c
--- ../orig.HEAD/src/interfaces/ecpg/ecpglib/typename.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/ecpglib/typename.c	2013-08-20 05:02:32.000000000 -0400
@@ -20,10 +20,12 @@
 	switch (typ)
 	{
 		case ECPGt_char:
 		case ECPGt_string:
 			return "char";
+		case ECPGt_nchar:
+			return "nchar";
 		case ECPGt_unsigned_char:
 			return "unsigned char";
 		case ECPGt_short:
 			return "short";
 		case ECPGt_unsigned_short:
@@ -46,10 +48,12 @@
 			return "double";
 		case ECPGt_bool:
 			return "bool";
 		case ECPGt_varchar:
 			return "varchar";
+		case ECPGt_nvarchar:
+			return "nvarchar";
 		case ECPGt_char_variable:
 			return "char";
 		case ECPGt_decimal:
 			return "decimal";
 		case ECPGt_numeric:
@@ -85,12 +89,16 @@
 			return SQL3_REAL;	/* float4 */
 		case FLOAT8OID:
 			return SQL3_DOUBLE_PRECISION;		/* float8 */
 		case BPCHAROID:
 			return SQL3_CHARACTER;		/* bpchar */
+		case NBPCHAROID:
+			return SQL3_CHARACTER;		/* nbpchar */
 		case VARCHAROID:
 			return SQL3_CHARACTER_VARYING;		/* varchar */
+		case NVARCHAROID:
+			return SQL3_CHARACTER_VARYING;		/* nvarchar */
 		case DATEOID:
 			return SQL3_DATE_TIME_TIMESTAMP;	/* date */
 		case TIMEOID:
 			return SQL3_DATE_TIME_TIMESTAMP;	/* time */
 		case TIMESTAMPOID:
@@ -107,11 +115,13 @@
 {
 	switch (type)
 	{
 		case CHAROID:
 		case VARCHAROID:
+		case NVARCHAROID:
 		case BPCHAROID:
+		case NBPCHAROID:
 		case TEXTOID:
 			return ECPGt_char;
 		case INT2OID:
 			return ECPGt_short;
 		case INT4OID:
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/include/ecpgerrno.h ./src/interfaces/ecpg/include/ecpgerrno.h
--- ../orig.HEAD/src/interfaces/ecpg/include/ecpgerrno.h	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/include/ecpgerrno.h	2013-08-19 20:54:00.000000000 -0400
@@ -43,10 +43,13 @@
 #define ECPG_INVALID_DESCRIPTOR_INDEX	-241
 #define ECPG_UNKNOWN_DESCRIPTOR_ITEM	-242
 #define ECPG_VAR_NOT_NUMERIC		-243
 #define ECPG_VAR_NOT_CHAR		-244
 
+/* encoding conversion errors */
+#define ECPG_ENCODING_ERROR		-250
+
 /* finally the backend error messages, they start at 400 */
 #define ECPG_PGSQL			-400
 #define ECPG_TRANS			-401
 #define ECPG_CONNECT			-402
 #define ECPG_DUPLICATE_KEY		-403
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/include/ecpgtype.h ./src/interfaces/ecpg/include/ecpgtype.h
--- ../orig.HEAD/src/interfaces/ecpg/include/ecpgtype.h	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/include/ecpgtype.h	2013-08-20 05:02:01.000000000 -0400
@@ -44,10 +44,12 @@
 	ECPGt_int, ECPGt_unsigned_int, ECPGt_long, ECPGt_unsigned_long,
 	ECPGt_long_long, ECPGt_unsigned_long_long,
 	ECPGt_bool,
 	ECPGt_float, ECPGt_double,
 	ECPGt_varchar, ECPGt_varchar2,
+	ECPGt_nvarchar,				/* effectively the same as ECPGt_varchar */
+	ECPGt_nchar,				/* converted to char *arr[N] */
 	ECPGt_numeric,				/* this is a decimal that stores its digits in
 								 * a malloced array */
 	ECPGt_decimal,				/* this is a decimal that stores its digits in
 								 * a fixed array */
 	ECPGt_date,
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/include/sqltypes.h ./src/interfaces/ecpg/include/sqltypes.h
--- ../orig.HEAD/src/interfaces/ecpg/include/sqltypes.h	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/include/sqltypes.h	2013-08-19 20:54:00.000000000 -0400
@@ -44,10 +44,11 @@
 #define	SQLTEXT		ECPGt_char
 #define	SQLVCHAR	ECPGt_char
 #define SQLINTERVAL     ECPGt_interval
 #define	SQLNCHAR	ECPGt_char
 #define	SQLNVCHAR	ECPGt_char
+#define SQLNVARCHAR     ECPGt_char
 #ifdef HAVE_LONG_LONG_INT_64
 #define	SQLINT8		ECPGt_long_long
 #define	SQLSERIAL8	ECPGt_long_long
 #else
 #define	SQLINT8		ECPGt_long
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/preproc/c_keywords.c ./src/interfaces/ecpg/preproc/c_keywords.c
--- ../orig.HEAD/src/interfaces/ecpg/preproc/c_keywords.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/preproc/c_keywords.c	2013-08-20 05:09:08.000000000 -0400
@@ -25,10 +25,12 @@
 
 	/*
 	 * category is not needed in ecpg, it is only here so we can share the
 	 * data structure with the backend
 	 */
+	{"NCHAR", NCHAR, 0},
+	{"NVARCHAR", NVARCHAR, 0},
 	{"VARCHAR", VARCHAR, 0},
 	{"auto", S_AUTO, 0},
 	{"bool", SQL_BOOL, 0},
 	{"char", CHAR_P, 0},
 	{"const", S_CONST, 0},
@@ -38,10 +40,12 @@
 	{"hour", HOUR_P, 0},
 	{"int", INT_P, 0},
 	{"long", SQL_LONG, 0},
 	{"minute", MINUTE_P, 0},
 	{"month", MONTH_P, 0},
+	{"nchar", NCHAR, 0},
+	{"nvarchar", NVARCHAR, 0},
 	{"register", S_REGISTER, 0},
 	{"second", SECOND_P, 0},
 	{"short", SQL_SHORT, 0},
 	{"signed", SQL_SIGNED, 0},
 	{"static", S_STATIC, 0},
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/preproc/ecpg.header ./src/interfaces/ecpg/preproc/ecpg.header
--- ../orig.HEAD/src/interfaces/ecpg/preproc/ecpg.header	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/preproc/ecpg.header	2013-08-20 05:08:24.000000000 -0400
@@ -267,11 +267,13 @@
 			{
 				newvar = ptr->variable;
 				skip_set_var = true;
 			}
 			else if ((ptr->variable->type->type != ECPGt_varchar
+					  && ptr->variable->type->type != ECPGt_nvarchar
 					  && ptr->variable->type->type != ECPGt_char
+					  && ptr->variable->type->type != ECPGt_nchar
 					  && ptr->variable->type->type != ECPGt_unsigned_char
 					  && ptr->variable->type->type != ECPGt_string)
 					 && atoi(ptr->variable->type->size) > 1)
 			{
 				newvar = new_variable(cat_str(4, mm_strdup("("),
@@ -283,11 +285,13 @@
 																			   ptr->variable->type->u.element->counter),
 														  ptr->variable->type->size),
 									  0);
 			}
 			else if ((ptr->variable->type->type == ECPGt_varchar
+					  || ptr->variable->type->type == ECPGt_nvarchar
 					  || ptr->variable->type->type == ECPGt_char
+					  || ptr->variable->type->type == ECPGt_nchar
 					  || ptr->variable->type->type == ECPGt_unsigned_char
 					  || ptr->variable->type->type == ECPGt_string)
 					 && atoi(ptr->variable->type->size) > 1)
 			{
 				newvar = new_variable(cat_str(4, mm_strdup("("),
@@ -296,11 +300,11 @@
 											  mm_strdup(var_text)),
 									  ECPGmake_simple_type(ptr->variable->type->type,
 														   ptr->variable->type->size,
 														   ptr->variable->type->counter),
 									  0);
-				if (ptr->variable->type->type == ECPGt_varchar)
+				if (ptr->variable->type->type == ECPGt_varchar || ptr->variable->type->type == ECPGt_nvarchar)
 					var_ptr = true;
 			}
 			else if (ptr->variable->type->type == ECPGt_struct
 					 || ptr->variable->type->type == ECPGt_union)
 			{
@@ -543,10 +547,11 @@
 		this->type->type_sizeof = ECPGstruct_sizeof;
 		this->struct_member_list = (type_enum == ECPGt_struct || type_enum == ECPGt_union) ?
 		ECPGstruct_member_dup(struct_member_list[struct_level]) : NULL;
 
 		if (type_enum != ECPGt_varchar &&
+			type_enum != ECPGt_nvarchar &&
 			type_enum != ECPGt_char &&
 			type_enum != ECPGt_unsigned_char &&
 			type_enum != ECPGt_string &&
 			atoi(this->type->type_index) >= 0)
 			mmerror(PARSE_ERROR, ET_ERROR, "multidimensional arrays for simple data types are not supported");
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/preproc/ecpg.trailer ./src/interfaces/ecpg/preproc/ecpg.trailer
--- ../orig.HEAD/src/interfaces/ecpg/preproc/ecpg.trailer	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/preproc/ecpg.trailer	2013-08-20 05:07:32.000000000 -0400
@@ -184,11 +184,11 @@
 			/* if array see what's inside */
 			if (type == ECPGt_array)
 				type = argsinsert->variable->type->u.element->type;
 
 			/* handle varchars */
-			if (type == ECPGt_varchar)
+			if (type == ECPGt_varchar || type == ECPGt_nvarchar)
 				$$ = make2_str(mm_strdup(argsinsert->variable->name), mm_strdup(".arr"));
 			else
 				$$ = mm_strdup(argsinsert->variable->name);
 		}
 		;
@@ -209,15 +209,17 @@
 					type = p->type->u.element->type;
 
 				switch (type)
 				{
 					case ECPGt_char:
+					case ECPGt_nchar:
 					case ECPGt_unsigned_char:
 					case ECPGt_string:
 						$$ = $1;
 						break;
 					case ECPGt_varchar:
+					case ECPGt_nvarchar:
 						$$ = make2_str($1, mm_strdup(".arr"));
 						break;
 					default:
 						mmerror(PARSE_ERROR, ET_ERROR, "invalid data type");
 						$$ = $1;
@@ -548,10 +550,26 @@
 				$$.type_str = EMPTY; /*mm_strdup("varchar");*/
 				$$.type_dimension = mm_strdup("-1");
 				$$.type_index = mm_strdup("-1");
 				$$.type_sizeof = NULL;
 			}
+			else if (strcmp($1, "nvarchar") == 0)
+			{
+				$$.type_enum = ECPGt_nvarchar;
+				$$.type_str = EMPTY; /*mm_strdup("nvarchar");*/
+				$$.type_dimension = mm_strdup("-1");
+				$$.type_index = mm_strdup("-1");
+				$$.type_sizeof = NULL;
+			}
+			else if (strcmp($1, "nchar") == 0)
+			{
+				$$.type_enum = ECPGt_nchar;
+				$$.type_str = EMPTY; /*mm_strdup("nchar");*/
+				$$.type_dimension = mm_strdup("-1");
+				$$.type_index = mm_strdup("-1");
+				$$.type_sizeof = NULL;
+			}
 			else if (strcmp($1, "float") == 0)
 			{
 				$$.type_enum = ECPGt_float;
 				$$.type_str = mm_strdup("float");
 				$$.type_dimension = mm_strdup("-1");
@@ -625,11 +643,11 @@
 			else
 			{
 				/* this is for typedef'ed types */
 				struct typedefs *this = get_typedef($1);
 
-				$$.type_str = (this->type->type_enum == ECPGt_varchar) ? EMPTY : mm_strdup(this->name);
+				$$.type_str = (this->type->type_enum == ECPGt_varchar || this->type->type_enum == ECPGt_nvarchar) ? EMPTY : mm_strdup(this->name);
 				$$.type_enum = this->type->type_enum;
 				$$.type_dimension = this->type->type_dimension;
 				$$.type_index = this->type->type_index;
 				if (this->type->type_sizeof && strlen(this->type->type_sizeof) != 0)
 					$$.type_sizeof = this->type->type_sizeof;
@@ -860,10 +878,11 @@
 						type = ECPGmake_array_type(ECPGmake_struct_type(struct_member_list[struct_level], actual_type[struct_level].type_enum, actual_type[struct_level].type_str, actual_type[struct_level].type_sizeof), dimension);
 
 					$$ = cat_str(5, $1, mm_strdup($2), $3.str, $4, $5);
 					break;
 
+				case ECPGt_nvarchar:
 				case ECPGt_varchar:
 					if (atoi(dimension) < 0)
 						type = ECPGmake_simple_type(actual_type[struct_level].type_enum, length, varchar_counter);
 					else
 						type = ECPGmake_array_type(ECPGmake_simple_type(actual_type[struct_level].type_enum, length, varchar_counter), dimension);
@@ -884,10 +903,29 @@
 					else
 						$$ = cat_str(8, make2_str(mm_strdup(" struct varchar_"), vcn), mm_strdup(" { int len; char arr["), mm_strdup(length), mm_strdup("]; } "), mm_strdup($2), dim_str, $4, $5);
 					varchar_counter++;
 					break;
 
+				case ECPGt_nchar:
+					if (atoi(dimension) == -1)
+					{
+						int i = strlen($5);
+
+						if (atoi(length) == -1 && i > 0) /* char <var>[] = "string" */
+						{
+							/* if we have an initializer but no string size set, let's use the initializer's length */
+							free(length);
+							length = mm_alloc(i + sizeof("sizeof()"));
+							sprintf(length, "sizeof(%s)", $5 + 2);
+						}
+						type = ECPGmake_simple_type(actual_type[struct_level].type_enum, length, 0);
+					}
+					else
+						type = ECPGmake_array_type(ECPGmake_simple_type(actual_type[struct_level].type_enum, length, 0), dimension);
+
+					$$ = cat_str(6, mm_strdup("char"), $1, mm_strdup($2), $3.str, $4, $5);
+					break;
 				case ECPGt_char:
 				case ECPGt_unsigned_char:
 				case ECPGt_string:
 					if (atoi(dimension) == -1)
 					{
@@ -1346,18 +1384,20 @@
 							type = ECPGmake_struct_type(struct_member_list[struct_level], $5.type_enum, $5.type_str, $5.type_sizeof);
 						else
 							type = ECPGmake_array_type(ECPGmake_struct_type(struct_member_list[struct_level], $5.type_enum, $5.type_str, $5.type_sizeof), dimension);
 						break;
 
+					case ECPGt_nvarchar:
 					case ECPGt_varchar:
 						if (atoi(dimension) == -1)
 							type = ECPGmake_simple_type($5.type_enum, length, 0);
 						else
 							type = ECPGmake_array_type(ECPGmake_simple_type($5.type_enum, length, 0), dimension);
 						break;
 
 					case ECPGt_char:
+					case ECPGt_nchar:
 					case ECPGt_unsigned_char:
 					case ECPGt_string:
 						if (atoi(dimension) == -1)
 							type = ECPGmake_simple_type($5.type_enum, length, 0);
 						else
@@ -1842,10 +1882,12 @@
 		| CHAR_P			{ $$ = mm_strdup("char"); }
 		| FLOAT_P			{ $$ = mm_strdup("float"); }
 		| TO				{ $$ = mm_strdup("to"); }
 		| UNION				{ $$ = mm_strdup("union"); }
 		| VARCHAR			{ $$ = mm_strdup("varchar"); }
+		| NVARCHAR			{ $$ = mm_strdup("nvarchar"); }
+		| NCHAR				{ $$ = mm_strdup("nchar"); }
 		| '['				{ $$ = mm_strdup("["); }
 		| ']'				{ $$ = mm_strdup("]"); }
 		| '='				{ $$ = mm_strdup("="); }
 		| ':'				{ $$ = mm_strdup(":"); }
 		;
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/preproc/type.c ./src/interfaces/ecpg/preproc/type.c
--- ../orig.HEAD/src/interfaces/ecpg/preproc/type.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/preproc/type.c	2013-08-20 05:08:51.000000000 -0400
@@ -135,10 +135,13 @@
 	switch (type)
 	{
 		case ECPGt_char:
 			return ("ECPGt_char");
 			break;
+		case ECPGt_nchar:
+			return ("ECPGt_nchar");
+			break;
 		case ECPGt_unsigned_char:
 			return ("ECPGt_unsigned_char");
 			break;
 		case ECPGt_short:
 			return ("ECPGt_short");
@@ -173,10 +176,14 @@
 		case ECPGt_bool:
 			return ("ECPGt_bool");
 			break;
 		case ECPGt_varchar:
 			return ("ECPGt_varchar");
+			break;
+		case ECPGt_nvarchar:
+			return ("ECPGt_nvarchar");
+			break;
 		case ECPGt_NO_INDICATOR:		/* no indicator */
 			return ("ECPGt_NO_INDICATOR");
 			break;
 		case ECPGt_char_variable:		/* string that should not be quoted */
 			return ("ECPGt_char_variable");
@@ -381,12 +388,12 @@
 				/*
 				 * we have to use the & operator except for arrays and
 				 * pointers
 				 */
 
+			case ECPGt_nvarchar:
 			case ECPGt_varchar:
-
 				/*
 				 * we have to use the pointer except for arrays with given
 				 * bounds
 				 */
 				if (((atoi(arrsize) > 0) ||
@@ -404,10 +411,11 @@
 					sprintf(offset, "sizeof(struct varchar_%d)", counter);
 				else
 					sprintf(offset, "sizeof(struct varchar)");
 				break;
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_char_variable:
 			case ECPGt_string:
 
 				/*
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/preproc/variable.c ./src/interfaces/ecpg/preproc/variable.c
--- ../orig.HEAD/src/interfaces/ecpg/preproc/variable.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/preproc/variable.c	2013-08-20 05:04:08.000000000 -0400
@@ -527,11 +527,11 @@
 	if (pointer_len > 2)
 		mmerror(PARSE_ERROR, ET_FATAL, ngettext("multilevel pointers (more than 2 levels) are not supported; found %d level",
 												"multilevel pointers (more than 2 levels) are not supported; found %d levels", pointer_len),
 				pointer_len);
 
-	if (pointer_len > 1 && type_enum != ECPGt_char && type_enum != ECPGt_unsigned_char && type_enum != ECPGt_string)
+	if (pointer_len > 1 && type_enum != ECPGt_char && type_enum != ECPGt_nchar && type_enum != ECPGt_unsigned_char && type_enum != ECPGt_string)
 		mmerror(PARSE_ERROR, ET_FATAL, "pointer to pointer is not supported for this data type");
 
 	if (pointer_len > 1 && (atoi(*length) >= 0 || atoi(*dimension) >= 0))
 		mmerror(PARSE_ERROR, ET_FATAL, "multidimensional arrays are not supported");
 
@@ -552,10 +552,11 @@
 			if (atoi(*length) >= 0)
 				mmerror(PARSE_ERROR, ET_FATAL, "multidimensional arrays for structures are not supported");
 
 			break;
 		case ECPGt_varchar:
+		case ECPGt_nvarchar:
 			/* pointer has to get dimension 0 */
 			if (pointer_len)
 				*dimension = mm_strdup("0");
 
 			/* one index is the string length */
@@ -565,10 +566,11 @@
 				*dimension = mm_strdup("-1");
 			}
 
 			break;
 		case ECPGt_char:
+		case ECPGt_nchar:
 		case ECPGt_unsigned_char:
 		case ECPGt_string:
 			/* char ** */
 			if (pointer_len == 2)
 			{
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c ./src/interfaces/ecpg/test/expected/compat_informix-sqlda.c
--- ../orig.HEAD/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/expected/compat_informix-sqlda.c	2013-08-19 20:54:00.000000000 -0400
@@ -95,10 +95,11 @@
 #define	SQLTEXT		ECPGt_char
 #define	SQLVCHAR	ECPGt_char
 #define SQLINTERVAL     ECPGt_interval
 #define	SQLNCHAR	ECPGt_char
 #define	SQLNVCHAR	ECPGt_char
+#define SQLNVARCHAR     ECPGt_char
 #ifdef HAVE_LONG_LONG_INT_64
 #define	SQLINT8		ECPGt_long_long
 #define	SQLSERIAL8	ECPGt_long_long
 #else
 #define	SQLINT8		ECPGt_long
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/expected/preproc-type.c ./src/interfaces/ecpg/test/expected/preproc-type.c
--- ../orig.HEAD/src/interfaces/ecpg/test/expected/preproc-type.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/expected/preproc-type.c	2013-08-19 20:54:00.000000000 -0400
@@ -81,10 +81,21 @@
   
   	 
 	 
    
   
+  
+         
+         
+   
+   
+  
+         
+         
+   
+
+  
 #line 29 "type.pgc"
  struct TBempl empl ;
  
 #line 30 "type.pgc"
  string str ;
@@ -98,48 +109,72 @@
  int len ;
  
 #line 35 "type.pgc"
  char text [ 10 ] ;
  } vc ;
+ 
+#line 41 "type.pgc"
+ struct nvarchar { 
+#line 39 "type.pgc"
+ int len ;
+ 
+#line 40 "type.pgc"
+ char text [ 10 ] ;
+ } nvc ;
+ 
+#line 46 "type.pgc"
+ struct uvarchar { 
+#line 44 "type.pgc"
+ int len ;
+ 
+#line 45 "type.pgc"
+ char text [ 20 ] ;
+ } uvc ;
 /* exec sql end declare section */
-#line 37 "type.pgc"
+#line 48 "type.pgc"
 
 
   /* exec sql var vc is [ 10 ] */
-#line 39 "type.pgc"
+#line 50 "type.pgc"
+
+  /* exec sql var nvc is [ 10 ] */
+#line 51 "type.pgc"
+
+  /* exec sql var uvc is [ 20 ] */
+#line 52 "type.pgc"
 
   ECPGdebug (1, stderr);
 
   empl.idnum = 1;
   { ECPGconnect(__LINE__, 0, "regress1" , NULL, NULL , NULL, 0); }
-#line 43 "type.pgc"
+#line 56 "type.pgc"
 
   if (sqlca.sqlcode)
     {
       printf ("connect error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table empl ( idnum integer , name char ( 20 ) , accs smallint , string1 char ( 10 ) , string2 char ( 10 ) , string3 char ( 10 ) )", ECPGt_EOIT, ECPGt_EORT);}
-#line 51 "type.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table empl ( idnum integer , name char ( 20 ) , accs smallint , string1 char ( 10 ) , string2 char ( 10 ) , string3 char ( 10 ) , string4 char ( 10 ) , string5 char ( 10 ) )", ECPGt_EOIT, ECPGt_EORT);}
+#line 64 "type.pgc"
 
   if (sqlca.sqlcode)
     {
       printf ("create error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into empl values ( 1 , 'user name' , 320 , 'first str' , 'second str' , 'third str' )", ECPGt_EOIT, ECPGt_EORT);}
-#line 58 "type.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into empl values ( 1 , 'user name' , 320 , 'first str' , 'second str' , 'third str' , 'fourth str' , 'fifth str' )", ECPGt_EOIT, ECPGt_EORT);}
+#line 71 "type.pgc"
 
   if (sqlca.sqlcode)
     {
       printf ("insert error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select idnum , name , accs , string1 , string2 , string3 from empl where idnum = $1 ", 
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select idnum , name , accs , string1 , string2 , string3 , string4 , string5 from empl where idnum = $1 ", 
 	ECPGt_long,&(empl.idnum),(long)1,(long)1,sizeof(long), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, 
 	ECPGt_long,&(empl.idnum),(long)1,(long)1,sizeof(long), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
 	ECPGt_char,&(empl.name),(long)21,(long)1,(21)*sizeof(char), 
@@ -149,22 +184,27 @@
 	ECPGt_char,(str),(long)11,(long)1,(11)*sizeof(char), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
 	ECPGt_char,&(ptr),(long)0,(long)1,(1)*sizeof(char), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
 	ECPGt_varchar,&(vc),(long)10,(long)1,sizeof(struct varchar), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_nvarchar,&(nvc),(long)10,(long)1,sizeof(struct varchar), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_uvarchar,&(uvc),(long)20,(long)1,sizeof(struct varchar), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);}
-#line 68 "type.pgc"
+#line 81 "type.pgc"
 
   if (sqlca.sqlcode)
     {
       printf ("select error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
-  printf ("id=%ld name='%s' accs=%d str='%s' ptr='%s' vc='%10.10s'\n", empl.idnum, empl.name, empl.accs, str, ptr, vc.text);
+  printf ("id=%ld name='%s' accs=%d str='%s' ptr='%s' vc='%10.10s' vc.len='%d' nvc='%10.10s' nvc.len='%d' uvc='%20.20s' uvc.len='%d'\n", empl.idnum, empl.name, empl.accs, str, ptr, vc.text, vc.len, nvc.text, nvc.len, uvc.text, uvc.len);
 
   { ECPGdisconnect(__LINE__, "CURRENT");}
-#line 76 "type.pgc"
+#line 89 "type.pgc"
 
 
   free(ptr);
   exit (0);
 }
+
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/expected/preproc-type.stderr ./src/interfaces/ecpg/test/expected/preproc-type.stderr
--- ../orig.HEAD/src/interfaces/ecpg/test/expected/preproc-type.stderr	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/expected/preproc-type.stderr	2013-08-19 20:54:00.000000000 -0400
@@ -1,40 +1,44 @@
 [NO_PID]: ECPGdebug: set to 1
 [NO_PID]: sqlca: code: 0, state: 00000
 [NO_PID]: ECPGconnect: opening database regress1 on <DEFAULT> port <DEFAULT>  
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 50: query: create table empl ( idnum integer , name char ( 20 ) , accs smallint , string1 char ( 10 ) , string2 char ( 10 ) , string3 char ( 10 ) ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 63: query: create table empl ( idnum integer , name char ( 20 ) , accs smallint , string1 char ( 10 ) , string2 char ( 10 ) , string3 char ( 10 ) , string4 char ( 10 ) , string5 char ( 10 ) ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 50: using PQexec
+[NO_PID]: ecpg_execute on line 63: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 50: OK: CREATE TABLE
+[NO_PID]: ecpg_execute on line 63: OK: CREATE TABLE
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 58: query: insert into empl values ( 1 , 'user name' , 320 , 'first str' , 'second str' , 'third str' ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 71: query: insert into empl values ( 1 , 'user name' , 320 , 'first str' , 'second str' , 'third str' , 'fourth str' , 'fifth str' ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 58: using PQexec
+[NO_PID]: ecpg_execute on line 71: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 58: OK: INSERT 0 1
+[NO_PID]: ecpg_execute on line 71: OK: INSERT 0 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 65: query: select idnum , name , accs , string1 , string2 , string3 from empl where idnum = $1 ; with 1 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 78: query: select idnum , name , accs , string1 , string2 , string3 , string4 , string5 from empl where idnum = $1 ; with 1 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 65: using PQexecParams
+[NO_PID]: ecpg_execute on line 78: using PQexecParams
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: free_params on line 65: parameter 1 = 1
+[NO_PID]: free_params on line 78: parameter 1 = 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 65: correctly got 1 tuples with 6 fields
+[NO_PID]: ecpg_execute on line 78: correctly got 1 tuples with 8 fields
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: 1 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 78: RESULT: 1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: user name            offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 78: RESULT: user name            offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: 320 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 78: RESULT: 320 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: first str  offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 78: RESULT: first str  offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_store_result on line 65: allocating memory for 1 tuples
+[NO_PID]: ecpg_store_result on line 78: allocating memory for 1 tuples
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: second str offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 78: RESULT: second str offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: third str  offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 78: RESULT: third str  offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 78: RESULT: fourth str offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 78: RESULT: fifth str  offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
 [NO_PID]: ecpg_finish: connection regress1 closed
 [NO_PID]: sqlca: code: 0, state: 00000
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/expected/preproc-type.stdout ./src/interfaces/ecpg/test/expected/preproc-type.stdout
--- ../orig.HEAD/src/interfaces/ecpg/test/expected/preproc-type.stdout	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/expected/preproc-type.stdout	2013-08-19 20:54:00.000000000 -0400
@@ -1 +1 @@
-id=1 name='user name           ' accs=320 str='first str ' ptr='second str' vc='third str '
+id=1 name='user name           ' accs=320 str='first str ' ptr='second str' vc='third str ' vc.len='10' nvc='fourth str' nvc.len='10' uvc='                   f' uvc.len='20'
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/expected/sql-fetch.c ./src/interfaces/ecpg/test/expected/sql-fetch.c
--- ../orig.HEAD/src/interfaces/ecpg/test/expected/sql-fetch.c	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/expected/sql-fetch.c	2013-08-19 20:54:00.000000000 -0400
@@ -25,210 +25,250 @@
 int main() {
   /* exec sql begin declare section */
      
       
   
+      
+     
+     
+  
 #line 9 "fetch.pgc"
  char str [ 25 ] ;
  
 #line 10 "fetch.pgc"
  int i , count = 1 ;
-/* exec sql end declare section */
+ 
 #line 11 "fetch.pgc"
+ char nstr [ 25 ] ;
+ 
+#line 12 "fetch.pgc"
+  struct varchar_1  { int len; char arr[ 25 ]; }  _varchar ;
+ 
+#line 13 "fetch.pgc"
+  struct varchar_2  { int len; char arr[ 25 ]; }  _nvarchar ;
+ 
+#line 14 "fetch.pgc"
+  struct varchar_3  { int len; char arr[ 25 ]; }  _uvarchar ;
+/* exec sql end declare section */
+#line 15 "fetch.pgc"
 
 
   ECPGdebug(1, stderr);
   { ECPGconnect(__LINE__, 0, "regress1" , NULL, NULL , NULL, 0); }
-#line 14 "fetch.pgc"
+#line 18 "fetch.pgc"
 
 
   /* exec sql whenever sql_warning  sqlprint ; */
-#line 16 "fetch.pgc"
+#line 20 "fetch.pgc"
 
   /* exec sql whenever sqlerror  sqlprint ; */
-#line 17 "fetch.pgc"
+#line 21 "fetch.pgc"
 
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table My_Table ( Item1 int , Item2 text )", ECPGt_EOIT, ECPGt_EORT);
-#line 19 "fetch.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table My_Table ( Item1 int , Item2 text , Item3 text , Item4 varchar , Item5 varchar , Item6 varchar )", ECPGt_EOIT, ECPGt_EORT);
+#line 23 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 19 "fetch.pgc"
+#line 23 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 19 "fetch.pgc"
+#line 23 "fetch.pgc"
 
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 1 , 'text1' )", ECPGt_EOIT, ECPGt_EORT);
-#line 21 "fetch.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 1 , 'text1' , 'text1' , 'text1' , 'text1' , 'text1' )", ECPGt_EOIT, ECPGt_EORT);
+#line 25 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 21 "fetch.pgc"
+#line 25 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 21 "fetch.pgc"
+#line 25 "fetch.pgc"
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 2 , 'text2' )", ECPGt_EOIT, ECPGt_EORT);
-#line 22 "fetch.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 2 , 'text2' , 'text2' , 'text2' , 'text2' , 'text2' )", ECPGt_EOIT, ECPGt_EORT);
+#line 26 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 22 "fetch.pgc"
+#line 26 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 22 "fetch.pgc"
+#line 26 "fetch.pgc"
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 3 , 'text3' )", ECPGt_EOIT, ECPGt_EORT);
-#line 23 "fetch.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 3 , 'text3' , 'text3' , 'text3' , 'text3' , 'text3' )", ECPGt_EOIT, ECPGt_EORT);
+#line 27 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 23 "fetch.pgc"
+#line 27 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 23 "fetch.pgc"
+#line 27 "fetch.pgc"
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 4 , 'text4' )", ECPGt_EOIT, ECPGt_EORT);
-#line 24 "fetch.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 4 , 'text4' , 'text4' , 'text4' , 'text4' , 'text4' )", ECPGt_EOIT, ECPGt_EORT);
+#line 28 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 24 "fetch.pgc"
+#line 28 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 24 "fetch.pgc"
+#line 28 "fetch.pgc"
 
 
   /* declare C cursor for select * from My_Table */
-#line 26 "fetch.pgc"
+#line 30 "fetch.pgc"
 
 
   { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "declare C cursor for select * from My_Table", ECPGt_EOIT, ECPGt_EORT);
-#line 28 "fetch.pgc"
+#line 32 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 28 "fetch.pgc"
+#line 32 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 28 "fetch.pgc"
+#line 32 "fetch.pgc"
 
 
   /* exec sql whenever not found  break ; */
-#line 30 "fetch.pgc"
+#line 34 "fetch.pgc"
 
   while (1) {
   	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "fetch 1 in C", ECPGt_EOIT, 
 	ECPGt_int,&(i),(long)1,(long)1,sizeof(int), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
 	ECPGt_char,(str),(long)25,(long)1,(25)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_nchar,(nstr),(long)25,(long)1,(25)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_varchar,&(_varchar),(long)25,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_nvarchar,&(_nvarchar),(long)25,(long)1,sizeof(struct varchar_2), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_uvarchar,&(_uvarchar),(long)25,(long)1,sizeof(struct varchar_3), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
-#line 32 "fetch.pgc"
+#line 36 "fetch.pgc"
 
 if (sqlca.sqlcode == ECPG_NOT_FOUND) break;
-#line 32 "fetch.pgc"
+#line 36 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 32 "fetch.pgc"
+#line 36 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 32 "fetch.pgc"
+#line 36 "fetch.pgc"
 
-	printf("%d: %s\n", i, str);
+	printf("%d: %s, %s, %s, %s\n", i, str, nstr, _varchar.arr, _nvarchar.arr);
   }
 
   /* exec sql whenever not found  continue ; */
-#line 36 "fetch.pgc"
+#line 40 "fetch.pgc"
 
   { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "move backward 2 in C", ECPGt_EOIT, ECPGt_EORT);
-#line 37 "fetch.pgc"
+#line 41 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 37 "fetch.pgc"
+#line 41 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 37 "fetch.pgc"
+#line 41 "fetch.pgc"
 
 
   { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "fetch $0 in C", 
 	ECPGt_int,&(count),(long)1,(long)1,sizeof(int), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, 
 	ECPGt_int,&(i),(long)1,(long)1,sizeof(int), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
 	ECPGt_char,(str),(long)25,(long)1,(25)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_nchar,(nstr),(long)25,(long)1,(25)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_varchar,&(_varchar),(long)25,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_nvarchar,&(_nvarchar),(long)25,(long)1,sizeof(struct varchar_2), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_uvarchar,&(_uvarchar),(long)25,(long)1,sizeof(struct varchar_3), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
-#line 39 "fetch.pgc"
+#line 43 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 39 "fetch.pgc"
+#line 43 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 39 "fetch.pgc"
+#line 43 "fetch.pgc"
 
-  printf("%d: %s\n", i, str);
+  printf("%d: %s, %s, %s, %s\n", i, str, nstr, _varchar.arr, _nvarchar.arr);
 
   /* declare D cursor for select * from My_Table where Item1 = $1 */
-#line 42 "fetch.pgc"
+#line 46 "fetch.pgc"
 
 
   { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "declare D cursor for select * from My_Table where Item1 = $1", 
 	ECPGt_const,"1",(long)1,(long)1,strlen("1"), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
-#line 44 "fetch.pgc"
+#line 48 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 44 "fetch.pgc"
+#line 48 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 44 "fetch.pgc"
+#line 48 "fetch.pgc"
 
 
   /* exec sql whenever not found  break ; */
-#line 46 "fetch.pgc"
+#line 50 "fetch.pgc"
 
   while (1) {
   	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "fetch 1 in D", ECPGt_EOIT, 
 	ECPGt_int,&(i),(long)1,(long)1,sizeof(int), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
 	ECPGt_char,(str),(long)25,(long)1,(25)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_nchar,(nstr),(long)25,(long)1,(25)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_varchar,&(_varchar),(long)25,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_nvarchar,&(_nvarchar),(long)25,(long)1,sizeof(struct varchar_2), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_uvarchar,&(_uvarchar),(long)25,(long)1,sizeof(struct varchar_3), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
-#line 48 "fetch.pgc"
+#line 52 "fetch.pgc"
 
 if (sqlca.sqlcode == ECPG_NOT_FOUND) break;
-#line 48 "fetch.pgc"
+#line 52 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 48 "fetch.pgc"
+#line 52 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 48 "fetch.pgc"
+#line 52 "fetch.pgc"
 
-	printf("%d: %s\n", i, str);
+	printf("%d: %s, %s, %s, %s\n", i, str, nstr, _varchar.arr, _nvarchar.arr);
   }
   { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "close D", ECPGt_EOIT, ECPGt_EORT);
-#line 51 "fetch.pgc"
+#line 55 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 51 "fetch.pgc"
+#line 55 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 51 "fetch.pgc"
+#line 55 "fetch.pgc"
 
 
   { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table My_Table", ECPGt_EOIT, ECPGt_EORT);
-#line 53 "fetch.pgc"
+#line 57 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 53 "fetch.pgc"
+#line 57 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 53 "fetch.pgc"
+#line 57 "fetch.pgc"
 
 
   { ECPGdisconnect(__LINE__, "ALL");
-#line 55 "fetch.pgc"
+#line 59 "fetch.pgc"
 
 if (sqlca.sqlwarn[0] == 'W') sqlprint();
-#line 55 "fetch.pgc"
+#line 59 "fetch.pgc"
 
 if (sqlca.sqlcode < 0) sqlprint();}
-#line 55 "fetch.pgc"
+#line 59 "fetch.pgc"
 
 
   return 0;
 }
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/expected/sql-fetch.stderr ./src/interfaces/ecpg/test/expected/sql-fetch.stderr
--- ../orig.HEAD/src/interfaces/ecpg/test/expected/sql-fetch.stderr	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/expected/sql-fetch.stderr	2013-08-19 20:54:00.000000000 -0400
@@ -1,147 +1,195 @@
 [NO_PID]: ECPGdebug: set to 1
 [NO_PID]: sqlca: code: 0, state: 00000
 [NO_PID]: ECPGconnect: opening database regress1 on <DEFAULT> port <DEFAULT>  
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 19: query: create table My_Table ( Item1 int , Item2 text ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 23: query: create table My_Table ( Item1 int , Item2 text , Item3 text , Item4 varchar , Item5 varchar , Item6 varchar ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 19: using PQexec
+[NO_PID]: ecpg_execute on line 23: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 19: OK: CREATE TABLE
+[NO_PID]: ecpg_execute on line 23: OK: CREATE TABLE
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 21: query: insert into My_Table values ( 1 , 'text1' ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 25: query: insert into My_Table values ( 1 , 'text1' , 'text1' , 'text1' , 'text1' , 'text1' ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 21: using PQexec
+[NO_PID]: ecpg_execute on line 25: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 21: OK: INSERT 0 1
+[NO_PID]: ecpg_execute on line 25: OK: INSERT 0 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 22: query: insert into My_Table values ( 2 , 'text2' ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 26: query: insert into My_Table values ( 2 , 'text2' , 'text2' , 'text2' , 'text2' , 'text2' ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 22: using PQexec
+[NO_PID]: ecpg_execute on line 26: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 22: OK: INSERT 0 1
+[NO_PID]: ecpg_execute on line 26: OK: INSERT 0 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 23: query: insert into My_Table values ( 3 , 'text3' ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 27: query: insert into My_Table values ( 3 , 'text3' , 'text3' , 'text3' , 'text3' , 'text3' ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 23: using PQexec
+[NO_PID]: ecpg_execute on line 27: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 23: OK: INSERT 0 1
+[NO_PID]: ecpg_execute on line 27: OK: INSERT 0 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 24: query: insert into My_Table values ( 4 , 'text4' ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 28: query: insert into My_Table values ( 4 , 'text4' , 'text4' , 'text4' , 'text4' , 'text4' ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 24: using PQexec
+[NO_PID]: ecpg_execute on line 28: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 24: OK: INSERT 0 1
+[NO_PID]: ecpg_execute on line 28: OK: INSERT 0 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 28: query: declare C cursor for select * from My_Table; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 32: query: declare C cursor for select * from My_Table; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 28: using PQexec
+[NO_PID]: ecpg_execute on line 32: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 28: OK: DECLARE CURSOR
+[NO_PID]: ecpg_execute on line 32: OK: DECLARE CURSOR
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 36: query: fetch 1 in C; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: using PQexec
+[NO_PID]: ecpg_execute on line 36: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: correctly got 1 tuples with 2 fields
+[NO_PID]: ecpg_execute on line 36: correctly got 1 tuples with 6 fields
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 32: RESULT: 1 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 36: RESULT: 1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 32: RESULT: text1 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 36: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_get_data on line 36: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: using PQexec
+[NO_PID]: ecpg_get_data on line 36: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: correctly got 1 tuples with 2 fields
+[NO_PID]: ecpg_get_data on line 36: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 32: RESULT: 2 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 36: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 32: RESULT: text2 offset: -1; array: no
+[NO_PID]: ecpg_execute on line 36: query: fetch 1 in C; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 36: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: using PQexec
+[NO_PID]: ecpg_execute on line 36: correctly got 1 tuples with 6 fields
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: correctly got 1 tuples with 2 fields
+[NO_PID]: ecpg_get_data on line 36: RESULT: 2 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 32: RESULT: 3 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 36: RESULT: text2 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 32: RESULT: text3 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 36: RESULT: text2 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_get_data on line 36: RESULT: text2 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: using PQexec
+[NO_PID]: ecpg_get_data on line 36: RESULT: text2 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: correctly got 1 tuples with 2 fields
+[NO_PID]: ecpg_get_data on line 36: RESULT: text2 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 32: RESULT: 4 offset: -1; array: no
+[NO_PID]: ecpg_execute on line 36: query: fetch 1 in C; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 32: RESULT: text4 offset: -1; array: no
+[NO_PID]: ecpg_execute on line 36: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 36: correctly got 1 tuples with 6 fields
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: using PQexec
+[NO_PID]: ecpg_get_data on line 36: RESULT: 3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 36: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 36: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 32: correctly got 0 tuples with 2 fields
+[NO_PID]: ecpg_execute on line 36: correctly got 1 tuples with 6 fields
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: raising sqlcode 100 on line 32: no data found on line 32
+[NO_PID]: ecpg_get_data on line 36: RESULT: 4 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text4 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text4 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text4 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text4 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 36: RESULT: text4 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 36: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 36: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 36: correctly got 0 tuples with 6 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: raising sqlcode 100 on line 36: no data found on line 36
 [NO_PID]: sqlca: code: 100, state: 02000
-[NO_PID]: ecpg_execute on line 37: query: move backward 2 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 41: query: move backward 2 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 41: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 41: OK: MOVE 2
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 43: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 43: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 43: correctly got 1 tuples with 6 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 43: RESULT: 4 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 43: RESULT: text4 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 43: RESULT: text4 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 37: using PQexec
+[NO_PID]: ecpg_get_data on line 43: RESULT: text4 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 37: OK: MOVE 2
+[NO_PID]: ecpg_get_data on line 43: RESULT: text4 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 39: query: fetch 1 in C; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_get_data on line 43: RESULT: text4 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 39: using PQexec
+[NO_PID]: ecpg_execute on line 48: query: declare D cursor for select * from My_Table where Item1 = $1; with 1 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 39: correctly got 1 tuples with 2 fields
+[NO_PID]: ecpg_execute on line 48: using PQexecParams
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 39: RESULT: 4 offset: -1; array: no
+[NO_PID]: free_params on line 48: parameter 1 = 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 39: RESULT: text4 offset: -1; array: no
+[NO_PID]: ecpg_execute on line 48: OK: DECLARE CURSOR
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 44: query: declare D cursor for select * from My_Table where Item1 = $1; with 1 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 52: query: fetch 1 in D; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 44: using PQexecParams
+[NO_PID]: ecpg_execute on line 52: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: free_params on line 44: parameter 1 = 1
+[NO_PID]: ecpg_execute on line 52: correctly got 1 tuples with 6 fields
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 44: OK: DECLARE CURSOR
+[NO_PID]: ecpg_get_data on line 52: RESULT: 1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 48: query: fetch 1 in D; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_get_data on line 52: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 48: using PQexec
+[NO_PID]: ecpg_get_data on line 52: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 48: correctly got 1 tuples with 2 fields
+[NO_PID]: ecpg_get_data on line 52: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 48: RESULT: 1 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 52: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 48: RESULT: text1 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 52: RESULT: text1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 48: query: fetch 1 in D; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 52: query: fetch 1 in D; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 48: using PQexec
+[NO_PID]: ecpg_execute on line 52: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 48: correctly got 0 tuples with 2 fields
+[NO_PID]: ecpg_execute on line 52: correctly got 0 tuples with 6 fields
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: raising sqlcode 100 on line 48: no data found on line 48
+[NO_PID]: raising sqlcode 100 on line 52: no data found on line 52
 [NO_PID]: sqlca: code: 100, state: 02000
-[NO_PID]: ecpg_execute on line 51: query: close D; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 55: query: close D; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 51: using PQexec
+[NO_PID]: ecpg_execute on line 55: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 51: OK: CLOSE CURSOR
+[NO_PID]: ecpg_execute on line 55: OK: CLOSE CURSOR
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 53: query: drop table My_Table; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 57: query: drop table My_Table; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 53: using PQexec
+[NO_PID]: ecpg_execute on line 57: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_check_PQresult on line 53: bad response - ERROR:  cannot DROP TABLE "my_table" because it is being used by active queries in this session
+[NO_PID]: ecpg_check_PQresult on line 57: bad response - ERROR:  cannot DROP TABLE "my_table" because it is being used by active queries in this session
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: raising sqlstate 55006 (sqlcode -400): cannot DROP TABLE "my_table" because it is being used by active queries in this session on line 53
+[NO_PID]: raising sqlstate 55006 (sqlcode -400): cannot DROP TABLE "my_table" because it is being used by active queries in this session on line 57
 [NO_PID]: sqlca: code: -400, state: 55006
-SQL error: cannot DROP TABLE "my_table" because it is being used by active queries in this session on line 53
+SQL error: cannot DROP TABLE "my_table" because it is being used by active queries in this session on line 57
 [NO_PID]: ecpg_finish: connection regress1 closed
 [NO_PID]: sqlca: code: 0, state: 00000
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/expected/sql-fetch.stdout ./src/interfaces/ecpg/test/expected/sql-fetch.stdout
--- ../orig.HEAD/src/interfaces/ecpg/test/expected/sql-fetch.stdout	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/expected/sql-fetch.stdout	2013-08-19 20:54:00.000000000 -0400
@@ -1,6 +1,6 @@
-1: text1
-2: text2
-3: text3
-4: text4
-4: text4
-1: text1
+1: text1, text1, text1, text1
+2: text2, text2, text2, text2
+3: text3, text3, text3, text3
+4: text4, text4, text4, text4
+4: text4, text4, text4, text4
+1: text1, text1, text1, text1
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/preproc/type.pgc ./src/interfaces/ecpg/test/preproc/type.pgc
--- ../orig.HEAD/src/interfaces/ecpg/test/preproc/type.pgc	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/preproc/type.pgc	2013-08-19 20:54:00.000000000 -0400
@@ -32,13 +32,26 @@
   struct varchar
   {
   	int len;
 	char text[10];
   } vc;
+  struct nvarchar
+  {
+        int len;
+        char text[10];
+  } nvc;
+  struct uvarchar
+  {
+        int len;
+        char text[20];
+  } uvc;
+
   EXEC SQL END DECLARE SECTION;
 
   EXEC SQL var vc is varchar[10];
+  EXEC SQL var nvc is nvarchar[10];
+  EXEC SQL var uvc is uvarchar[20];
   ECPGdebug (1, stderr);
 
   empl.idnum = 1;
   EXEC SQL connect to REGRESSDB1;
   if (sqlca.sqlcode)
@@ -46,35 +59,36 @@
       printf ("connect error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
 
   EXEC SQL create table empl
-    (idnum integer, name char(20), accs smallint, string1 char(10), string2 char(10), string3 char(10));
+    (idnum integer, name char(20), accs smallint, string1 char(10), string2 char(10), string3 char(10), string4 char(10), string5 char(10));
   if (sqlca.sqlcode)
     {
       printf ("create error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
 
-  EXEC SQL insert into empl values (1, 'user name', 320, 'first str', 'second str', 'third str');
+  EXEC SQL insert into empl values (1, 'user name', 320, 'first str', 'second str', 'third str', 'fourth str', 'fifth str');
   if (sqlca.sqlcode)
     {
       printf ("insert error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
 
-  EXEC SQL select idnum, name, accs, string1, string2, string3
-	into :empl, :str, :ptr, :vc
+  EXEC SQL select idnum, name, accs, string1, string2, string3, string4, string5 
+	into :empl, :str, :ptr, :vc, :nvc, :uvc
 	from empl
 	where idnum =:empl.idnum;
   if (sqlca.sqlcode)
     {
       printf ("select error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
-  printf ("id=%ld name='%s' accs=%d str='%s' ptr='%s' vc='%10.10s'\n", empl.idnum, empl.name, empl.accs, str, ptr, vc.text);
+  printf ("id=%ld name='%s' accs=%d str='%s' ptr='%s' vc='%10.10s' vc.len='%d' nvc='%10.10s' nvc.len='%d' uvc='%20.20s' uvc.len='%d'\n", empl.idnum, empl.name, empl.accs, str, ptr, vc.text, vc.len, nvc.text, nvc.len, uvc.text, uvc.len);
 
   EXEC SQL disconnect;
 
   free(ptr);
   exit (0);
 }
+
diff -U 5 -N -r -w ../orig.HEAD/src/interfaces/ecpg/test/sql/fetch.pgc ./src/interfaces/ecpg/test/sql/fetch.pgc
--- ../orig.HEAD/src/interfaces/ecpg/test/sql/fetch.pgc	2013-08-19 20:22:13.000000000 -0400
+++ ./src/interfaces/ecpg/test/sql/fetch.pgc	2013-08-19 20:54:00.000000000 -0400
@@ -6,49 +6,53 @@
 
 int main() {
   EXEC SQL BEGIN DECLARE SECTION;
     char str[25];
     int i, count=1;
+    NCHAR nstr[25];
+    VARCHAR  _varchar[25];
+    NVARCHAR _nvarchar[25];
+    UVARCHAR _uvarchar[25];
   EXEC SQL END DECLARE SECTION;
 
   ECPGdebug(1, stderr);
   EXEC SQL CONNECT TO REGRESSDB1;
 
   EXEC SQL WHENEVER SQLWARNING SQLPRINT;
   EXEC SQL WHENEVER SQLERROR SQLPRINT;
 
-  EXEC SQL CREATE TABLE My_Table ( Item1 int, Item2 text );
+  EXEC SQL CREATE TABLE My_Table ( Item1 int, Item2 text, Item3 text, Item4 varchar, Item5 varchar, Item6 varchar );
 
-  EXEC SQL INSERT INTO My_Table VALUES ( 1, 'text1');
-  EXEC SQL INSERT INTO My_Table VALUES ( 2, 'text2');
-  EXEC SQL INSERT INTO My_Table VALUES ( 3, 'text3');
-  EXEC SQL INSERT INTO My_Table VALUES ( 4, 'text4');
+  EXEC SQL INSERT INTO My_Table VALUES ( 1, 'text1', 'text1', 'text1', 'text1', 'text1');
+  EXEC SQL INSERT INTO My_Table VALUES ( 2, 'text2', 'text2', 'text2', 'text2', 'text2');
+  EXEC SQL INSERT INTO My_Table VALUES ( 3, 'text3', 'text3', 'text3', 'text3', 'text3');
+  EXEC SQL INSERT INTO My_Table VALUES ( 4, 'text4', 'text4', 'text4', 'text4', 'text4');
 
   EXEC SQL DECLARE C CURSOR FOR SELECT * FROM My_Table;
 
   EXEC SQL OPEN C;
 
   EXEC SQL WHENEVER NOT FOUND DO BREAK;
   while (1) {
-  	EXEC SQL FETCH 1 IN C INTO :i, :str;
-	printf("%d: %s\n", i, str);
+  	EXEC SQL FETCH 1 IN C INTO :i, :str, :nstr, :_varchar, :_nvarchar, :_uvarchar;
+	printf("%d: %s, %s, %s, %s\n", i, str, nstr, _varchar.arr, _nvarchar.arr);
   }
 
   EXEC SQL WHENEVER NOT FOUND CONTINUE;
   EXEC SQL MOVE BACKWARD 2 IN C;
 
-  EXEC SQL FETCH :count IN C INTO :i, :str;
-  printf("%d: %s\n", i, str);
+  EXEC SQL FETCH :count IN C INTO :i, :str, :nstr, :_varchar, :_nvarchar, :_uvarchar;
+  printf("%d: %s, %s, %s, %s\n", i, str, nstr, _varchar.arr, _nvarchar.arr);
 
   EXEC SQL DECLARE D CURSOR FOR SELECT * FROM My_Table WHERE Item1 = $1;
 
   EXEC SQL OPEN D using 1;
 
   EXEC SQL WHENEVER NOT FOUND DO BREAK;
   while (1) {
-  	EXEC SQL FETCH 1 IN D INTO :i, :str;
-	printf("%d: %s\n", i, str);
+  	EXEC SQL FETCH 1 IN D INTO :i, :str, :nstr, :_varchar, :_nvarchar, :_uvarchar;
+	printf("%d: %s, %s, %s, %s\n", i, str, nstr, _varchar.arr, _nvarchar.arr);
   }
   EXEC SQL CLOSE D;
 
   EXEC SQL DROP TABLE My_Table;
 
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/create_function_3.out ./src/test/regress/expected/create_function_3.out
--- ../orig.HEAD/src/test/regress/expected/create_function_3.out	2013-08-19 20:22:14.000000000 -0400
+++ ./src/test/regress/expected/create_function_3.out	2013-08-19 20:54:00.000000000 -0400
@@ -306,10 +306,12 @@
  namege         | boolean    | [0:1]={name,name}
  namegt         | boolean    | [0:1]={name,name}
  namele         | boolean    | [0:1]={name,name}
  namelt         | boolean    | [0:1]={name,name}
  namene         | boolean    | [0:1]={name,name}
+ nbpchareq      | boolean    | [0:1]={"national character","national character"}
+ nbpcharne      | boolean    | [0:1]={"national character","national character"}
  network_eq     | boolean    | [0:1]={inet,inet}
  network_ge     | boolean    | [0:1]={inet,inet}
  network_gt     | boolean    | [0:1]={inet,inet}
  network_le     | boolean    | [0:1]={inet,inet}
  network_lt     | boolean    | [0:1]={inet,inet}
@@ -381,11 +383,11 @@
  varbitgt       | boolean    | [0:1]={"bit varying","bit varying"}
  varbitle       | boolean    | [0:1]={"bit varying","bit varying"}
  varbitlt       | boolean    | [0:1]={"bit varying","bit varying"}
  varbitne       | boolean    | [0:1]={"bit varying","bit varying"}
  xideq          | boolean    | [0:1]={xid,xid}
-(228 rows)
+(230 rows)
 
 --
 -- CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
 --
 CREATE FUNCTION functext_F_1(int) RETURNS bool LANGUAGE 'sql'
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/nchar.out ./src/test/regress/expected/nchar.out
--- ../orig.HEAD/src/test/regress/expected/nchar.out	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/expected/nchar.out	2013-08-19 23:31:45.000000000 -0400
@@ -0,0 +1,122 @@
+--
+-- NCHAR
+--
+-- fixed-length by value
+-- internally passed by value if <= 4 bytes in storage
+SELECT nchar 'c' = nchar 'c' AS true;
+ true 
+------
+ t
+(1 row)
+
+--
+-- Build a table for testing
+--
+CREATE TABLE NCHAR_TBL(f1 nchar);
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NCHAR_TBL (f1) VALUES (2);
+INSERT INTO NCHAR_TBL (f1) VALUES ('3');
+-- zero-length nchar
+INSERT INTO NCHAR_TBL (f1) VALUES ('');
+-- try nchar's of greater than 1 length
+INSERT INTO NCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character(1)
+INSERT INTO NCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       |  
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     |  
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | A
+      | 1
+      | 2
+      | 3
+      |  
+(5 rows)
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | A
+     | 1
+     | 2
+     | 3
+     |  
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | c
+(1 row)
+
+SELECT '' AS two, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | c
+(2 rows)
+
+DROP TABLE NCHAR_TBL;
+--
+-- Now test longer arrays of nchar
+--
+CREATE TABLE NCHAR_TBL(f1 nchar(4));
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character(4)
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NCHAR_TBL;
+ four |  f1  
+------+------
+      | a   
+      | ab  
+      | abcd
+      | abcd
+(4 rows)
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/nvarchar_1.out ./src/test/regress/expected/nvarchar_1.out
--- ../orig.HEAD/src/test/regress/expected/nvarchar_1.out	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/expected/nvarchar_1.out	2013-08-19 23:31:50.000000000 -0400
@@ -0,0 +1,111 @@
+--
+-- NVARCHAR
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(1));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NVARCHAR_TBL (f1) VALUES (2);
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('3');
+-- zero-length nchar
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('');
+-- try nvarchar's of greater than 1 length
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character varying(1)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NVARCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       | 
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | 
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | 1
+      | 2
+      | 3
+      | 
+(4 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | 1
+     | 2
+     | 3
+     | 
+(5 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | A
+     | c
+(2 rows)
+
+SELECT '' AS two, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | A
+     | c
+(3 rows)
+
+DROP TABLE NVARCHAR_TBL;
+--
+-- Now test longer arrays of nchar
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(4));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character varying(4)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NVARCHAR_TBL;
+ four |  f1  
+------+------
+      | a
+      | ab
+      | abcd
+      | abcd
+(4 rows)
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/nvarchar_2.out ./src/test/regress/expected/nvarchar_2.out
--- ../orig.HEAD/src/test/regress/expected/nvarchar_2.out	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/expected/nvarchar_2.out	2013-08-19 23:31:50.000000000 -0400
@@ -0,0 +1,111 @@
+--
+-- NVARCHAR
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(1));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NVARCHAR_TBL (f1) VALUES (2);
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('3');
+-- zero-length nchar
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('');
+-- try nvarchar's of greater than 1 length
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character varying(1)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NVARCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       | 
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | 
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | 
+(1 row)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | 
+(2 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | c
+(5 rows)
+
+SELECT '' AS two, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | A
+     | 1
+     | 2
+     | 3
+     | c
+(6 rows)
+
+DROP TABLE NVARCHAR_TBL;
+--
+-- Now test longer arrays of nchar
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(4));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character varying(4)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NVARCHAR_TBL;
+ four |  f1  
+------+------
+      | a
+      | ab
+      | abcd
+      | abcd
+(4 rows)
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/nvarchar_alter.out ./src/test/regress/expected/nvarchar_alter.out
--- ../orig.HEAD/src/test/regress/expected/nvarchar_alter.out	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/expected/nvarchar_alter.out	2013-08-19 23:31:50.000000000 -0400
@@ -0,0 +1,138 @@
+--create table
+create table nchar_test (f1 nchar(10) default N'a', f2 nvarchar(10) default N'b');
+--values
+values (N'abc各国語');
+  column1  
+-----------
+ abc各国語
+(1 row)
+
+--analyze
+analyze nchar_test(f1);
+analyze nchar_test(f2);
+--vacuum analyze
+vacuum analyze nchar_test(f1);
+vacuum analyze nchar_test(f2);
+--broken
+--set application_name to N'abc各国語';
+--select
+select f1,f2,N'abc各国語' from nchar_test;
+ f1 | f2 | nvarchar 
+----+----+----------
+(0 rows)
+
+--select into
+select f1,f2,N'abc各国語' into nchar_test1 from nchar_test;
+--revoke per column not supported
+--grant per column not supported
+--insert into/returning
+insert into nchar_test1 select f1,f2,N'abc各国語' from nchar_test returning f1,f2,nvarchar,N'abc各国語';
+ f1 | f2 | nvarchar | nvarchar 
+----+----+----------+----------
+(0 rows)
+
+--explain
+explain select f1,f2,N'abc各国語' from nchar_test;
+                          QUERY PLAN                          
+--------------------------------------------------------------
+ Seq Scan on nchar_test  (cost=0.00..17.70 rows=770 width=76)
+(1 row)
+
+--do
+do $$begin perform f1,f2,N'abc各国語' from nchar_test; end $$;
+--delete
+delete from nchar_test where f1=N'abc各国語' and f2=N'abc各国語';
+--create view
+create view v as select f1,f2,N'abc各国語' from nchar_test;
+--create index
+create index i2 on nchar_test(f2);
+create index i3 on nchar_test((f1||'abc各国語'));
+create index i1 on nchar_test(f1);
+--comment
+comment on COLUMN nchar_test.f1 is 'f1';
+comment on COLUMN nchar_test.f2 is 'f2';
+comment on type nvarchar is 'nvarchar';
+--analyze
+analyze nchar_test(f1);
+--copy
+copy nchar_test(f1, f2) to stdout;
+--alter view
+alter view v alter column f1 set default N'abc各国語';
+alter view v alter column f2 set default N'abc各国語';
+--alter table
+alter table nchar_test1 rename f1 to f3;
+alter table nchar_test1 rename f2 to f4;
+alter table nchar_test1 alter f3 type nchar(10);
+alter table nchar_test1 alter f4 type nvarchar;
+alter table nchar_test1 alter f3 set default N'abc各国語';
+alter table nchar_test1 alter f4 set default N'abc各国語';
+--declare cursor
+begin;declare qqq cursor for select f1,f2,N'abc各国語' from nchar_test;commit;
+--create trigger
+create trigger tr before update on nchar_test for each row when (OLD.f1=N'abc各国語') EXECUTE PROCEDURE suppress_redundant_updates_trigger();
+--foreign key
+alter table nchar_test1 add CONSTRAINT nchar_test1_pk primary key(f3);
+NOTICE:  ALTER TABLE / ADD PRIMARY KEY will create implicit index "nchar_test1_pk" for table "nchar_test1"
+alter table nchar_test add CONSTRAINT qqqq FOREIGN KEY (f1) references nchar_test1(f3);
+--update
+update nchar_test set f1=N'abc各国語', f2='abc各国語';
+select * from nchar_test1;
+ f3 | f4 | nvarchar 
+----+----+----------
+(0 rows)
+
+create domain test_nchar_domain as nchar(10) default (N'a') CHECK (value <> N'b');
+create domain test_nvarchar_domain as nvarchar(10) default (N'a') CHECK (value <> N'b');
+alter domain test_nchar_domain set default (N'b');
+alter domain test_nvarchar_domain set default (N'b');
+DROP DOMAIN test_nchar_domain;
+DROP DOMAIN test_nvarchar_domain;
+CREATE AGGREGATE test_nvarchar_agg (nvarchar) ( sfunc = array_append, stype = nvarchar[], initcond = '{}');
+CREATE AGGREGATE test_nchar_agg (nchar(10)) ( sfunc = array_append, stype = nchar(10)[], initcond = '{}');
+alter aggregate test_nvarchar_agg(nvarchar) rename to test_nvarchar_aggregate;
+alter aggregate test_nchar_agg (nchar(10)) rename to test_nchar_aggregate;
+drop aggregate test_nvarchar_aggregate(nvarchar);
+drop aggregate test_nchar_aggregate(nchar(10));
+CREATE TYPE test_nchar_type AS (f1 nchar(10));
+CREATE TYPE test_nvarchar_type AS (f1 nvarchar);
+drop type test_nchar_type;
+drop type test_nvarchar_type;
+drop view v;
+drop table nchar_test;
+drop table nchar_test1;
+create table nchar_test (f1 nchar(10), f2 nvarchar(10));
+select f1, f2, N'a'  from nchar_test where f1=N'a' and f2=N'b';
+ f1 | f2 | nvarchar 
+----+----+----------
+(0 rows)
+
+insert into nchar_test values (N'a', N'b') returning (f1, f2, N'c');
+        row         
+--------------------
+ ("a         ",b,c)
+(1 row)
+
+select f1, f2, N'a' into nchar_test1 from nchar_test;
+select * from nchar_test1;
+     f1     | f2 | nvarchar 
+------------+----+----------
+ a          | b  | a
+(1 row)
+
+prepare qqq(nchar(10), nvarchar(10)) AS select f1, f2, N'a'  from nchar_test where f1=N'a' and f2=N'b';
+execute qqq(N'a', N'b');
+     f1     | f2 | nvarchar 
+------------+----+----------
+ a          | b  | a
+(1 row)
+
+explain select f1, f2, N'a'  from nchar_test where f1=N'a' and f2=N'b';
+                          QUERY PLAN                          
+--------------------------------------------------------------
+ Seq Scan on nchar_test  (cost=0.00..21.55 rows=1 width=76)
+   Filter: ((f1 = 'a'::nbpchar) AND ((f2)::text = 'b'::text))
+(2 rows)
+
+delete from nchar_test1 where f1=N'a' and f2=N'b';
+drop table nchar_test;
+drop table nchar_test1;
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/nvarchar_func.out ./src/test/regress/expected/nvarchar_func.out
--- ../orig.HEAD/src/test/regress/expected/nvarchar_func.out	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/expected/nvarchar_func.out	2013-08-19 23:31:50.000000000 -0400
@@ -0,0 +1,258 @@
+SELECT N'各国' || N'文字';
+ ?column? 
+----------
+ 各国文字
+(1 row)
+
+SELECT N'各国' || 42;
+ ?column? 
+----------
+ 各国42
+(1 row)
+
+SELECT bit_length(N'各国');
+ bit_length 
+------------
+         48
+(1 row)
+
+SELECT char_length(N'各国');
+ char_length 
+-------------
+           2
+(1 row)
+
+SELECT lower(N'各国TOM');
+  lower  
+---------
+ 各国tom
+(1 row)
+
+SELECT octet_length(N'各国');
+ octet_length 
+--------------
+            6
+(1 row)
+
+SELECT overlay(N'各XX' placing N'文字' from 2 for 3);
+ overlay 
+---------
+ 各文字
+(1 row)
+
+SELECT position(N'文' in N'各国文字');
+ position 
+----------
+        3
+(1 row)
+
+SELECT substring(N'各国文字' from 3 for 3);
+ substring 
+-----------
+ 文字
+(1 row)
+
+SELECT substring(N'各国文字列' from N'...$');
+ substring 
+-----------
+ 文字列
+(1 row)
+
+SELECT substring(N'各国文字列国' from '%#"文_列#"_' for '#');
+ substring 
+-----------
+ 文字列
+(1 row)
+
+SELECT trim(both N'X' from N'X各国文字XX');
+  btrim   
+----------
+ 各国文字
+(1 row)
+
+SELECT upper(N'各国tom');
+  upper  
+---------
+ 各国TOM
+(1 row)
+
+SELECT ascii(N'数');
+ ascii 
+-------
+ 25968
+(1 row)
+
+SELECT btrim(N'XY各国X', N'XY');
+ btrim 
+-------
+ 各国
+(1 row)
+
+SELECT chr(25968);
+ chr 
+-----
+ 数
+(1 row)
+
+SELECT concat(N'各', N'国', NULL, N'語');
+ concat 
+--------
+ 各国語
+(1 row)
+
+SELECT concat_ws(N'_', N'各国', NULL, N'文字');
+ concat_ws  
+------------
+ 各国_文字
+(1 row)
+
+-- convert_to(N'各国', 'SJIS') = '\x8a658d91'
+-- convert_to(N'各国', 'UTF8') = '\xe59084e59bbd'
+SELECT convert('\x8a658d91'::bytea, 'SJIS', 'UTF8');
+    convert     
+----------------
+ \xe59084e59bbd
+(1 row)
+
+-- N'各国'::bytea = '\xe59084e59bbd'
+SELECT convert_from('\xe59084e59bbd'::bytea, 'UTF8');
+ convert_from 
+--------------
+ 各国
+(1 row)
+
+-- '\xe59084e59bbd'::bytea = N'各国'
+SELECT convert_to(N'各国', 'UTF8');
+   convert_to   
+----------------
+ \xe59084e59bbd
+(1 row)
+
+SELECT format('各国 %s, %1$s', N'文字');
+     format      
+-----------------
+ 各国 文字, 文字
+(1 row)
+
+SELECT initcap(N'hi 各国');
+ initcap 
+---------
+ Hi 各国
+(1 row)
+
+SELECT left(N'各国文字', 2);
+ left 
+------
+ 各国
+(1 row)
+
+SELECT length(N'各国文字');
+ length 
+--------
+      4
+(1 row)
+
+-- N'各国文字'::bytea = '\xe59084e59bbde69687e5ad97'
+SELECT length('\xe59084e59bbde69687e5ad97'::bytea , 'UTF8');
+ length 
+--------
+      4
+(1 row)
+
+SELECT lpad(N'文字', 3, N'各国');
+  lpad  
+--------
+ 各文字
+(1 row)
+
+SELECT ltrim(N'◯×各国', N'◯×');
+ ltrim 
+-------
+ 各国
+(1 row)
+
+SELECT regexp_matches(N'各国文字データ', N'(文字)(データ)');
+ regexp_matches 
+----------------
+ {文字,データ}
+(1 row)
+
+SELECT regexp_replace(N'各国x文字', '.[a-z]文.', N'語');
+ regexp_replace 
+----------------
+ 各語
+(1 row)
+
+SELECT regexp_split_to_array(N'各国 文字列', E'\\s+');
+ regexp_split_to_array 
+-----------------------
+ {各国,文字列}
+(1 row)
+
+SELECT regexp_split_to_table(N'各国 文字列', E'\\s+');
+ regexp_split_to_table 
+-----------------------
+ 各国
+ 文字列
+(2 rows)
+
+SELECT repeat(N'文字', 4);
+      repeat      
+------------------
+ 文字文字文字文字
+(1 row)
+
+SELECT replace(N'各国文字国', N'国', N'語');
+  replace   
+------------
+ 各語文字語
+(1 row)
+
+SELECT reverse(N'各国語');
+ reverse 
+---------
+ 語国各
+(1 row)
+
+SELECT right(N'各国語', 2);
+ right 
+-------
+ 国語
+(1 row)
+
+SELECT rpad(N'各国', 3, N'語');
+  rpad  
+--------
+ 各国語
+(1 row)
+
+SELECT rtrim(N'◯各国◯◯', N'◯');
+ rtrim 
+-------
+ ◯各国
+(1 row)
+
+SELECT split_part(N'各国◯語◯文字', N'◯', 3);
+ split_part 
+------------
+ 文字
+(1 row)
+
+SELECT strpos(N'各国語', N'語');
+ strpos 
+--------
+      3
+(1 row)
+
+SELECT substr(N'各国語', 3, 3);
+ substr 
+--------
+ 語
+(1 row)
+
+SELECT translate(N'各国文字', N'各字', N'◯×');
+ translate 
+-----------
+ ◯国文×
+(1 row)
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/nvarchar_misc.out ./src/test/regress/expected/nvarchar_misc.out
--- ../orig.HEAD/src/test/regress/expected/nvarchar_misc.out	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/expected/nvarchar_misc.out	2013-08-19 23:31:50.000000000 -0400
@@ -0,0 +1,126 @@
+select N'a '=N'a';
+ ?column? 
+----------
+ f
+(1 row)
+
+select N'a '='a';
+ ?column? 
+----------
+ f
+(1 row)
+
+select N'a'='a';
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a '='a'::char(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a '='a'::nchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a '='a'::varchar(1);
+ ?column? 
+----------
+ f
+(1 row)
+
+select N'a '='a'::nvarchar(1);
+ ?column? 
+----------
+ f
+(1 row)
+
+select N'a'='a'::nvarchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a'='a'::varchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a'='a'::char(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a'='a'::nchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nchar(10)='a'::char(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nchar(10)='a'::nchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nchar(10)='a'::varchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nchar(10)='a'::nvarchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nvarchar(10)='a'::varchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nvarchar(10)='a'::varchar(10);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nvarchar(10)='a '::varchar(10);
+ ?column? 
+----------
+ f
+(1 row)
+
+select 'a'::nvarchar(10)='a '::char(10);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a '::nchar(10)='a  '::nchar(5);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a '::nchar(10)='a  '::char(5);
+ ?column? 
+----------
+ t
+(1 row)
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/nvarchar.out ./src/test/regress/expected/nvarchar.out
--- ../orig.HEAD/src/test/regress/expected/nvarchar.out	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/expected/nvarchar.out	2013-08-19 23:31:50.000000000 -0400
@@ -0,0 +1,111 @@
+--
+-- NVARCHAR
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(1));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NVARCHAR_TBL (f1) VALUES (2);
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('3');
+-- zero-length char
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('');
+-- try nvarchar's of greater than 1 length
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character varying(1)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NVARCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       | 
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | 
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | A
+      | 1
+      | 2
+      | 3
+      | 
+(5 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | A
+     | 1
+     | 2
+     | 3
+     | 
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | c
+(1 row)
+
+SELECT '' AS two, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | c
+(2 rows)
+
+DROP TABLE NVARCHAR_TBL;
+--
+-- Now test longer arrays of char
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(4));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character varying(4)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NVARCHAR_TBL;
+ four |  f1  
+------+------
+      | a
+      | ab
+      | abcd
+      | abcd
+(4 rows)
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/opr_sanity.out ./src/test/regress/expected/opr_sanity.out
--- ../orig.HEAD/src/test/regress/expected/opr_sanity.out	2013-08-19 20:22:14.000000000 -0400
+++ ./src/test/regress/expected/opr_sanity.out	2013-08-19 20:54:00.000000000 -0400
@@ -152,12 +152,15 @@
     (p1.prorettype < p2.prorettype)
 ORDER BY 1, 2;
  prorettype | prorettype 
 ------------+------------
          25 |       1043
+         25 |       6001
+       1042 |       5001
+       1043 |       6001
        1114 |       1184
-(2 rows)
+(5 rows)
 
 SELECT DISTINCT p1.proargtypes[0], p2.proargtypes[0]
 FROM pg_proc AS p1, pg_proc AS p2
 WHERE p1.oid != p2.oid AND
     p1.prosrc = p2.prosrc AND
@@ -169,14 +172,18 @@
 ORDER BY 1, 2;
  proargtypes | proargtypes 
 -------------+-------------
           25 |        1042
           25 |        1043
+          25 |        5001
+          25 |        6001
+        1042 |        5001
+        1043 |        6001
         1114 |        1184
         1560 |        1562
         2277 |        2283
-(5 rows)
+(9 rows)
 
 SELECT DISTINCT p1.proargtypes[1], p2.proargtypes[1]
 FROM pg_proc AS p1, pg_proc AS p2
 WHERE p1.oid != p2.oid AND
     p1.prosrc = p2.prosrc AND
@@ -187,14 +194,15 @@
     (p1.proargtypes[1] < p2.proargtypes[1])
 ORDER BY 1, 2;
  proargtypes | proargtypes 
 -------------+-------------
           23 |          28
+        1042 |        5001
         1114 |        1184
         1560 |        1562
         2277 |        2283
-(4 rows)
+(5 rows)
 
 SELECT DISTINCT p1.proargtypes[2], p2.proargtypes[2]
 FROM pg_proc AS p1, pg_proc AS p2
 WHERE p1.oid != p2.oid AND
     p1.prosrc = p2.prosrc AND
@@ -433,19 +441,23 @@
     NOT EXISTS (SELECT 1 FROM pg_cast k
                 WHERE k.castmethod = 'b' AND
                     k.castsource = c.casttarget AND
                     k.casttarget = c.castsource);
     castsource     |    casttarget     | castfunc | castcontext 
--------------------+-------------------+----------+-------------
- text              | character         |        0 | i
+----------------------------+--------------------+----------+-------------
+ national character varying | character          |        0 | i
+ national character varying | national character |        0 | i
  character varying | character         |        0 | i
+ character varying          | national character |        0 | i
+ text                       | character          |        0 | i
+ text                       | national character |        0 | i
  pg_node_tree      | text              |        0 | i
  cidr              | inet              |        0 | i
  xml               | text              |        0 | a
  xml               | character varying |        0 | a
  xml               | character         |        0 | a
-(7 rows)
+(11 rows)
 
 -- **************** pg_operator ****************
 -- Look for illegal values in pg_operator fields.
 SELECT p1.oid, p1.oprname
 FROM pg_operator as p1
@@ -1211,11 +1223,13 @@
                  p3.amoplefttype = p1.amoplefttype AND
                  p3.amoprighttype = p2.amoplefttype AND
                  p3.amopstrategy = 1);
  amoplefttype | amoplefttype 
 --------------+--------------
-(0 rows)
+         1042 |         5001
+         5001 |         1042
+(2 rows)
 
 -- **************** pg_amproc ****************
 -- Look for illegal values in pg_amproc fields
 SELECT p1.amprocfamily, p1.amprocnum
 FROM pg_amproc as p1
@@ -1273,12 +1287,14 @@
 WHERE am.amname = 'btree' OR am.amname = 'gist' OR am.amname = 'gin'
 GROUP BY amname, amsupport, opcname, amprocfamily
 HAVING (count(*) != amsupport AND count(*) != amsupport - 1)
     OR amprocfamily IS NULL;
  amname | opcname | count 
---------+---------+-------
-(0 rows)
+--------+---------------+-------
+ gin    | _nbpchar_ops  |     1
+ gin    | _nvarchar_ops |     1
+(2 rows)
 
 -- Unfortunately, we can't check the amproc link very well because the
 -- signature of the function may be different for different support routines
 -- or different base data types.
 -- We can check that all the referenced instances of the same support
@@ -1316,12 +1332,14 @@
           WHEN amprocnum = 2
           THEN prorettype != 'void'::regtype OR proretset OR pronargs != 1
                OR proargtypes[0] != 'internal'::regtype
           ELSE true END);
  amprocfamily | amprocnum | oid | proname | opfname 
---------------+-----------+-----+---------+---------
-(0 rows)
+--------------+-----------+------+----------------------+--------------------
+          426 |         1 | 1078 | bpcharcmp            | bpchar_ops
+         2097 |         1 | 2180 | btbpchar_pattern_cmp | bpchar_pattern_ops
+(2 rows)
 
 -- For hash we can also do a little better: the support routines must be
 -- of the form hash(lefttype) returns int4.  There are several cases where
 -- we cheat and use a hash function that is physically compatible with the
 -- datatype even though there's no cast, so this check does find a small
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/sanity_check.out ./src/test/regress/expected/sanity_check.out
--- ../orig.HEAD/src/test/regress/expected/sanity_check.out	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/expected/sanity_check.out	2013-08-19 20:54:00.000000000 -0400
@@ -66,10 +66,11 @@
  kd_point_tbl            | t
  log_table               | f
  lseg_tbl                | f
  main_table              | f
  money_data              | f
+ nchar_tbl               | f
  num_data                | f
  num_exp_add             | t
  num_exp_div             | t
  num_exp_ln              | t
  num_exp_log10           | t
@@ -77,10 +78,11 @@
  num_exp_power_10_ln     | t
  num_exp_sqrt            | t
  num_exp_sub             | t
  num_input_test          | f
  num_result              | f
+ nvarchar_tbl            | f
  onek                    | t
  onek2                   | t
  path_tbl                | f
  person                  | f
  pg_aggregate            | t
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/strings.out ./src/test/regress/expected/strings.out
--- ../orig.HEAD/src/test/regress/expected/strings.out	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/expected/strings.out	2013-08-19 20:54:00.000000000 -0400
@@ -201,19 +201,37 @@
  ab
  abcd
  abcd
 (4 rows)
 
+SELECT CAST(f1 AS text) AS "text(nchar)" FROM NCHAR_TBL;
+ text(nchar) 
+-------------
+ a
+ ab
+ abcd
+ abcd
+(4 rows)
+
 SELECT CAST(f1 AS text) AS "text(varchar)" FROM VARCHAR_TBL;
  text(varchar) 
 ---------------
  a
  ab
  abcd
  abcd
 (4 rows)
 
+SELECT CAST(f1 AS text) AS "text(nvarchar)" FROM NVARCHAR_TBL;
+ text(nvarchar) 
+----------------
+ a
+ ab
+ abcd
+ abcd
+(4 rows)
+
 SELECT CAST(name 'namefield' AS text) AS "text(name)";
  text(name) 
 ------------
  namefield
 (1 row)
@@ -225,54 +243,106 @@
  doh!      
  hi de ho n
 (2 rows)
 
 -- note: implicit-cast case is tested in char.sql
+SELECT CAST(f1 AS nchar(10)) AS "nchar(text)" FROM TEXT_TBL;
+ nchar(text) 
+-------------
+ doh!      
+ hi de ho n
+(2 rows)
+
+-- note: implicit-cast case is tested in nchar.sql
 SELECT CAST(f1 AS char(20)) AS "char(text)" FROM TEXT_TBL;
       char(text)      
 ----------------------
  doh!                
  hi de ho neighbor   
 (2 rows)
 
+SELECT CAST(f1 AS nchar(20)) AS "nchar(text)" FROM TEXT_TBL;
+     nchar(text)      
+----------------------
+ doh!                
+ hi de ho neighbor   
+(2 rows)
+
 SELECT CAST(f1 AS char(10)) AS "char(varchar)" FROM VARCHAR_TBL;
  char(varchar) 
 ---------------
  a         
  ab        
  abcd      
  abcd      
 (4 rows)
 
+SELECT CAST(f1 AS nchar(10)) AS "nchar(nvarchar)" FROM NVARCHAR_TBL;
+ nchar(nvarchar) 
+-----------------
+ a         
+ ab        
+ abcd      
+ abcd      
+(4 rows)
+
 SELECT CAST(name 'namefield' AS char(10)) AS "char(name)";
  char(name) 
 ------------
  namefield 
 (1 row)
 
+SELECT CAST(name 'namefield' AS nchar(10)) AS "nchar(name)";
+ nchar(name) 
+-------------
+ namefield 
+(1 row)
+
 SELECT CAST(f1 AS varchar) AS "varchar(text)" FROM TEXT_TBL;
    varchar(text)   
 -------------------
  doh!
  hi de ho neighbor
 (2 rows)
 
+SELECT CAST(f1 AS nvarchar) AS "nvarchar(text)" FROM TEXT_TBL;
+  nvarchar(text)   
+-------------------
+ doh!
+ hi de ho neighbor
+(2 rows)
+
 SELECT CAST(f1 AS varchar) AS "varchar(char)" FROM CHAR_TBL;
  varchar(char) 
 ---------------
  a
  ab
  abcd
  abcd
 (4 rows)
 
+SELECT CAST(f1 AS nvarchar) AS "nvarchar(nchar)" FROM NCHAR_TBL;
+ nvarchar(nchar) 
+-----------------
+ a
+ ab
+ abcd
+ abcd
+(4 rows)
+
 SELECT CAST(name 'namefield' AS varchar) AS "varchar(name)";
  varchar(name) 
 ---------------
  namefield
 (1 row)
 
+SELECT CAST(name 'namefield' AS nvarchar) AS "nvarchar(name)";
+ nvarchar(name) 
+----------------
+ namefield
+(1 row)
+
 --
 -- test SQL string functions
 -- E### and T### are feature reference numbers from SQL99
 --
 -- E021-09 trim function
@@ -1103,22 +1173,40 @@
  Concat char to unknown type 
 -----------------------------
  characters and text
 (1 row)
 
+SELECT nchar(20) 'ncharacters' || ' and text' AS "Concat nchar to unknown type";
+ Concat nchar to unknown type 
+------------------------------
+ ncharacters and text
+(1 row)
+
 SELECT text 'text' || char(20) ' and characters' AS "Concat text to char";
  Concat text to char 
 ---------------------
  text and characters
 (1 row)
 
+SELECT text 'text' || nchar(20) ' and ncharacters' AS "Concat text to nchar";
+ Concat text to nchar 
+----------------------
+ text and ncharacters
+(1 row)
+
 SELECT text 'text' || varchar ' and varchar' AS "Concat text to varchar";
  Concat text to varchar 
 ------------------------
  text and varchar
 (1 row)
 
+SELECT text 'text' || nvarchar ' and nvarchar' AS "Concat text to nvarchar";
+ Concat text to nvarchar 
+-------------------------
+ text and nvarchar
+(1 row)
+
 --
 -- test substr with toasted text values
 --
 CREATE TABLE toasttest(f1 text);
 insert into toasttest values(repeat('1234567890',10000));
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/union.out ./src/test/regress/expected/union.out
--- ../orig.HEAD/src/test/regress/expected/union.out	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/expected/union.out	2013-08-19 20:54:00.000000000 -0400
@@ -255,10 +255,52 @@
  abcd
  doh!
  hi de ho neighbor
 (5 rows)
 
+--NVARCHAR specific
+SELECT f1 AS three FROM NVARCHAR_TBL
+UNION
+SELECT CAST(f1 AS nvarchar) FROM NCHAR_TBL
+ORDER BY 1;
+ three 
+-------
+ a
+ ab
+ abcd
+(3 rows)
+
+SELECT f1 AS eight FROM NVARCHAR_TBL
+UNION ALL
+SELECT f1 FROM NCHAR_TBL;
+ eight 
+-------
+ a
+ ab
+ abcd
+ abcd
+ a
+ ab
+ abcd
+ abcd
+(8 rows)
+
+SELECT f1 AS five FROM TEXT_TBL
+UNION
+SELECT f1 FROM NVARCHAR_TBL
+UNION
+SELECT TRIM(TRAILING FROM f1) FROM NCHAR_TBL
+ORDER BY 1;
+       five        
+-------------------
+ a
+ ab
+ abcd
+ doh!
+ hi de ho neighbor
+(5 rows)
+
 --
 -- INTERSECT and EXCEPT
 --
 SELECT q2 FROM int8_tbl INTERSECT SELECT q1 FROM int8_tbl;
         q2        
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/expected/update.out ./src/test/regress/expected/update.out
--- ../orig.HEAD/src/test/regress/expected/update.out	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/expected/update.out	2013-08-19 20:54:00.000000000 -0400
@@ -2,75 +2,101 @@
 -- UPDATE syntax tests
 --
 CREATE TABLE update_test (
     a   INT DEFAULT 10,
     b   INT,
-    c   TEXT
+    c   TEXT,
+    d   nchar(10),
+    e   nvarchar
 );
-INSERT INTO update_test VALUES (5, 10, 'foo');
+INSERT INTO update_test VALUES (5, 10, 'foo', 'a', 'a');
 INSERT INTO update_test(b, a) VALUES (15, 10);
 SELECT * FROM update_test;
- a  | b  |  c  
-----+----+-----
-  5 | 10 | foo
- 10 | 15 | 
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+  5 | 10 | foo | a          | a
+ 10 | 15 |     |            | 
+(2 rows)
+
+UPDATE update_test SET d='b', e='b';
+SELECT * FROM update_test;
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+  5 | 10 | foo | b          | b
+ 10 | 15 |     | b          | b
+(2 rows)
+
+UPDATE update_test SET d='c' where e='b';
+SELECT * FROM update_test;
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+  5 | 10 | foo | c          | b
+ 10 | 15 |     | c          | b
+(2 rows)
+
+UPDATE update_test SET d=N'e' where e=N'b';
+SELECT * FROM update_test;
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+  5 | 10 | foo | e          | b
+ 10 | 15 |     | e          | b
 (2 rows)
 
 UPDATE update_test SET a = DEFAULT, b = DEFAULT;
 SELECT * FROM update_test;
- a  | b |  c  
-----+---+-----
- 10 |   | foo
- 10 |   | 
+ a  | b |  c  |     d      | e 
+----+---+-----+------------+---
+ 10 |   | foo | e          | b
+ 10 |   |     | e          | b
 (2 rows)
 
 -- aliases for the UPDATE target table
 UPDATE update_test AS t SET b = 10 WHERE t.a = 10;
 SELECT * FROM update_test;
- a  | b  |  c  
-----+----+-----
- 10 | 10 | foo
- 10 | 10 | 
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+ 10 | 10 | foo | e          | b
+ 10 | 10 |     | e          | b
 (2 rows)
 
 UPDATE update_test t SET b = t.b + 10 WHERE t.a = 10;
 SELECT * FROM update_test;
- a  | b  |  c  
-----+----+-----
- 10 | 20 | foo
- 10 | 20 | 
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+ 10 | 20 | foo | e          | b
+ 10 | 20 |     | e          | b
 (2 rows)
 
 --
 -- Test VALUES in FROM
 --
 UPDATE update_test SET a=v.i FROM (VALUES(100, 20)) AS v(i, j)
   WHERE update_test.b = v.j;
 SELECT * FROM update_test;
-  a  | b  |  c  
------+----+-----
- 100 | 20 | foo
- 100 | 20 | 
+  a  | b  |  c  |     d      | e 
+-----+----+-----+------------+---
+ 100 | 20 | foo | e          | b
+ 100 | 20 |     | e          | b
 (2 rows)
 
 --
 -- Test multiple-set-clause syntax
 --
 UPDATE update_test SET (c,b,a) = ('bugle', b+11, DEFAULT) WHERE c = 'foo';
 SELECT * FROM update_test;
-  a  | b  |   c   
------+----+-------
- 100 | 20 | 
-  10 | 31 | bugle
+  a  | b  |   c   |     d      | e 
+-----+----+-------+------------+---
+ 100 | 20 |       | e          | b
+  10 | 31 | bugle | e          | b
 (2 rows)
 
 UPDATE update_test SET (c,b) = ('car', a+b), a = a + 1 WHERE a = 10;
 SELECT * FROM update_test;
-  a  | b  |  c  
------+----+-----
- 100 | 20 | 
-  11 | 41 | car
+  a  | b  |  c  |     d      | e 
+-----+----+-----+------------+---
+ 100 | 20 |     | e          | b
+  11 | 41 | car | e          | b
 (2 rows)
 
 -- fail, multi assignment to same column:
 UPDATE update_test SET (c,b) = ('car', a+b), b = a + 1 WHERE a = 10;
 ERROR:  multiple assignments to same column "b"
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/output/misc.source ./src/test/regress/output/misc.source
--- ../orig.HEAD/src/test/regress/output/misc.source	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/output/misc.source	2013-08-19 20:54:00.000000000 -0400
@@ -39,32 +39,32 @@
 --   SET age = age + 3
 --   WHERE name = 'linda';
 --
 -- copy
 --
-COPY onek TO '@abs_builddir@/results/onek.data';
+COPY onek TO '/home/mboguk/postgres/build_test/src/test/regress/results/onek.data';
 DELETE FROM onek;
-COPY onek FROM '@abs_builddir@/results/onek.data';
+COPY onek FROM '/home/mboguk/postgres/build_test/src/test/regress/results/onek.data';
 SELECT unique1 FROM onek WHERE unique1 < 2 ORDER BY unique1;
  unique1 
 ---------
        0
        1
 (2 rows)
 
 DELETE FROM onek2;
-COPY onek2 FROM '@abs_builddir@/results/onek.data';
+COPY onek2 FROM '/home/mboguk/postgres/build_test/src/test/regress/results/onek.data';
 SELECT unique1 FROM onek2 WHERE unique1 < 2 ORDER BY unique1;
  unique1 
 ---------
        0
        1
 (2 rows)
 
-COPY BINARY stud_emp TO '@abs_builddir@/results/stud_emp.data';
+COPY BINARY stud_emp TO '/home/mboguk/postgres/build_test/src/test/regress/results/stud_emp.data';
 DELETE FROM stud_emp;
-COPY BINARY stud_emp FROM '@abs_builddir@/results/stud_emp.data';
+COPY BINARY stud_emp FROM '/home/mboguk/postgres/build_test/src/test/regress/results/stud_emp.data';
 SELECT * FROM stud_emp;
  name  | age |  location  | salary | manager | gpa | percent 
 -------+-----+------------+--------+---------+-----+---------
  jeff  |  23 | (8,7.7)    |    600 | sharon  | 3.5 |        
  cim   |  30 | (10.5,4.7) |    400 |         | 3.4 |        
@@ -640,10 +640,11 @@
  kd_point_tbl
  log_table
  lseg_tbl
  main_table
  money_data
+ nchar_tbl
  num_data
  num_exp_add
  num_exp_div
  num_exp_ln
  num_exp_log10
@@ -651,10 +652,11 @@
  num_exp_power_10_ln
  num_exp_sqrt
  num_exp_sub
  num_input_test
  num_result
+ nvarchar_tbl
  onek
  onek2
  path_tbl
  person
  point_tbl
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/parallel_schedule ./src/test/regress/parallel_schedule
--- ../orig.HEAD/src/test/regress/parallel_schedule	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/parallel_schedule	2013-08-19 20:54:00.000000000 -0400
@@ -11,11 +11,11 @@
 test: tablespace
 
 # ----------
 # The first group of parallel tests
 # ----------
-test: boolean char name varchar text int2 int4 int8 oid float4 float8 bit numeric txid uuid enum money rangetypes
+test: boolean char name varchar nchar nvarchar text int2 int4 int8 oid float4 float8 bit numeric txid uuid enum money rangetypes nvarchar_func nvarchar_alter nvarchar_misc
 
 # Depends on things setup during char, varchar and text
 test: strings
 # Depends on int2, int4, int8, float4, float8
 test: numerology
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/serial_schedule ./src/test/regress/serial_schedule
--- ../orig.HEAD/src/test/regress/serial_schedule	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/serial_schedule	2013-08-19 20:54:00.000000000 -0400
@@ -3,10 +3,15 @@
 test: tablespace
 test: boolean
 test: char
 test: name
 test: varchar
+test: nvarchar
+test: nvarchar_alter
+test: nvarchar_func
+test: nvarchar_misc
+test: nchar
 test: text
 test: int2
 test: int4
 test: int8
 test: oid
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/sql/nchar.sql ./src/test/regress/sql/nchar.sql
--- ../orig.HEAD/src/test/regress/sql/nchar.sql	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/sql/nchar.sql	2013-08-19 23:31:21.000000000 -0400
@@ -0,0 +1,75 @@
+--
+-- NCHAR
+--
+
+-- fixed-length by value
+-- internally passed by value if <= 4 bytes in storage
+
+SELECT nchar 'c' = nchar 'c' AS true;
+
+--
+-- Build a table for testing
+--
+
+CREATE TABLE NCHAR_TBL(f1 nchar);
+
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+
+INSERT INTO NCHAR_TBL (f1) VALUES ('A');
+
+-- any of the following three input formats are acceptable
+INSERT INTO NCHAR_TBL (f1) VALUES ('1');
+
+INSERT INTO NCHAR_TBL (f1) VALUES (2);
+
+INSERT INTO NCHAR_TBL (f1) VALUES ('3');
+
+-- zero-length nchar
+INSERT INTO NCHAR_TBL (f1) VALUES ('');
+
+-- try nchar's of greater than 1 length
+INSERT INTO NCHAR_TBL (f1) VALUES ('cd');
+INSERT INTO NCHAR_TBL (f1) VALUES ('c     ');
+
+
+SELECT '' AS seven, * FROM NCHAR_TBL;
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <> 'a';
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 = 'a';
+
+SELECT '' AS five, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 < 'a';
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <= 'a';
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 > 'a';
+
+SELECT '' AS two, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 >= 'a';
+
+DROP TABLE NCHAR_TBL;
+
+--
+-- Now test longer arrays of nchar
+--
+
+CREATE TABLE NCHAR_TBL(f1 nchar(4));
+
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcde');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd    ');
+
+SELECT '' AS four, * FROM NCHAR_TBL;
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/sql/nvarchar_alter.sql ./src/test/regress/sql/nvarchar_alter.sql
--- ../orig.HEAD/src/test/regress/sql/nvarchar_alter.sql	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/sql/nvarchar_alter.sql	2013-08-19 23:31:15.000000000 -0400
@@ -0,0 +1,126 @@
+--create table
+create table nchar_test (f1 nchar(10) default N'a', f2 nvarchar(10) default N'b');
+
+--values
+values (N'abc各国語');
+
+--analyze
+analyze nchar_test(f1);
+analyze nchar_test(f2);
+
+--vacuum analyze
+vacuum analyze nchar_test(f1);
+vacuum analyze nchar_test(f2);
+
+--broken
+--set application_name to N'abc各国語';
+
+--select
+select f1,f2,N'abc各国語' from nchar_test;
+
+--select into
+select f1,f2,N'abc各国語' into nchar_test1 from nchar_test;
+
+--revoke per column not supported
+
+--grant per column not supported
+
+--insert into/returning
+insert into nchar_test1 select f1,f2,N'abc各国語' from nchar_test returning f1,f2,nvarchar,N'abc各国語';
+
+--explain
+explain select f1,f2,N'abc各国語' from nchar_test;
+
+--do
+do $$begin perform f1,f2,N'abc各国語' from nchar_test; end $$;
+
+--delete
+delete from nchar_test where f1=N'abc各国語' and f2=N'abc各国語';
+
+--create view
+create view v as select f1,f2,N'abc各国語' from nchar_test;
+
+--create index
+create index i2 on nchar_test(f2);
+create index i3 on nchar_test((f1||'abc各国語'));
+create index i1 on nchar_test(f1);
+
+--comment
+comment on COLUMN nchar_test.f1 is 'f1';
+comment on COLUMN nchar_test.f2 is 'f2';
+comment on type nvarchar is 'nvarchar';
+
+
+--analyze
+analyze nchar_test(f1);
+
+--copy
+copy nchar_test(f1, f2) to stdout;
+
+--alter view
+alter view v alter column f1 set default N'abc各国語';
+alter view v alter column f2 set default N'abc各国語';
+
+--alter table
+alter table nchar_test1 rename f1 to f3;
+alter table nchar_test1 rename f2 to f4;
+alter table nchar_test1 alter f3 type nchar(10);
+alter table nchar_test1 alter f4 type nvarchar;
+alter table nchar_test1 alter f3 set default N'abc各国語';
+alter table nchar_test1 alter f4 set default N'abc各国語';
+
+--declare cursor
+begin;declare qqq cursor for select f1,f2,N'abc各国語' from nchar_test;commit;
+
+--create trigger
+create trigger tr before update on nchar_test for each row when (OLD.f1=N'abc各国語') EXECUTE PROCEDURE suppress_redundant_updates_trigger();
+
+--foreign key
+alter table nchar_test1 add CONSTRAINT nchar_test1_pk primary key(f3);
+alter table nchar_test add CONSTRAINT qqqq FOREIGN KEY (f1) references nchar_test1(f3);
+
+--update
+update nchar_test set f1=N'abc各国語', f2='abc各国語';
+
+select * from nchar_test1;
+
+create domain test_nchar_domain as nchar(10) default (N'a') CHECK (value <> N'b');
+create domain test_nvarchar_domain as nvarchar(10) default (N'a') CHECK (value <> N'b');
+alter domain test_nchar_domain set default (N'b');
+alter domain test_nvarchar_domain set default (N'b');
+DROP DOMAIN test_nchar_domain;
+DROP DOMAIN test_nvarchar_domain;
+
+CREATE AGGREGATE test_nvarchar_agg (nvarchar) ( sfunc = array_append, stype = nvarchar[], initcond = '{}');
+CREATE AGGREGATE test_nchar_agg (nchar(10)) ( sfunc = array_append, stype = nchar(10)[], initcond = '{}');
+alter aggregate test_nvarchar_agg(nvarchar) rename to test_nvarchar_aggregate;
+alter aggregate test_nchar_agg (nchar(10)) rename to test_nchar_aggregate;
+drop aggregate test_nvarchar_aggregate(nvarchar);
+drop aggregate test_nchar_aggregate(nchar(10));
+
+CREATE TYPE test_nchar_type AS (f1 nchar(10));
+CREATE TYPE test_nvarchar_type AS (f1 nvarchar);
+drop type test_nchar_type;
+drop type test_nvarchar_type;
+
+
+drop view v;
+drop table nchar_test;
+drop table nchar_test1;
+
+create table nchar_test (f1 nchar(10), f2 nvarchar(10));
+select f1, f2, N'a'  from nchar_test where f1=N'a' and f2=N'b';
+insert into nchar_test values (N'a', N'b') returning (f1, f2, N'c');
+select f1, f2, N'a' into nchar_test1 from nchar_test;
+select * from nchar_test1;
+
+prepare qqq(nchar(10), nvarchar(10)) AS select f1, f2, N'a'  from nchar_test where f1=N'a' and f2=N'b';
+execute qqq(N'a', N'b');
+
+explain select f1, f2, N'a'  from nchar_test where f1=N'a' and f2=N'b';
+
+delete from nchar_test1 where f1=N'a' and f2=N'b';
+
+drop table nchar_test;
+drop table nchar_test1;
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/sql/nvarchar_func.sql ./src/test/regress/sql/nvarchar_func.sql
--- ../orig.HEAD/src/test/regress/sql/nvarchar_func.sql	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/sql/nvarchar_func.sql	2013-08-19 23:31:15.000000000 -0400
@@ -0,0 +1,90 @@
+
+SELECT N'各国' || N'文字';
+
+SELECT N'各国' || 42;
+
+SELECT bit_length(N'各国');
+
+SELECT char_length(N'各国');
+
+SELECT lower(N'各国TOM');
+
+SELECT octet_length(N'各国');
+
+SELECT overlay(N'各XX' placing N'文字' from 2 for 3);
+
+SELECT position(N'文' in N'各国文字');
+
+SELECT substring(N'各国文字' from 3 for 3);
+
+SELECT substring(N'各国文字列' from N'...$');
+
+SELECT substring(N'各国文字列国' from '%#"文_列#"_' for '#');
+
+SELECT trim(both N'X' from N'X各国文字XX');
+
+SELECT upper(N'各国tom');
+
+SELECT ascii(N'数');
+
+SELECT btrim(N'XY各国X', N'XY');
+
+SELECT chr(25968);
+
+SELECT concat(N'各', N'国', NULL, N'語');
+
+SELECT concat_ws(N'_', N'各国', NULL, N'文字');
+
+-- convert_to(N'各国', 'SJIS') = '\x8a658d91'
+-- convert_to(N'各国', 'UTF8') = '\xe59084e59bbd'
+SELECT convert('\x8a658d91'::bytea, 'SJIS', 'UTF8');
+
+-- N'各国'::bytea = '\xe59084e59bbd'
+SELECT convert_from('\xe59084e59bbd'::bytea, 'UTF8');
+
+-- '\xe59084e59bbd'::bytea = N'各国'
+SELECT convert_to(N'各国', 'UTF8');
+
+SELECT format('各国 %s, %1$s', N'文字');
+
+SELECT initcap(N'hi 各国');
+
+SELECT left(N'各国文字', 2);
+
+SELECT length(N'各国文字');
+
+-- N'各国文字'::bytea = '\xe59084e59bbde69687e5ad97'
+SELECT length('\xe59084e59bbde69687e5ad97'::bytea , 'UTF8');
+
+SELECT lpad(N'文字', 3, N'各国');
+
+SELECT ltrim(N'◯×各国', N'◯×');
+
+SELECT regexp_matches(N'各国文字データ', N'(文字)(データ)');
+
+SELECT regexp_replace(N'各国x文字', '.[a-z]文.', N'語');
+
+SELECT regexp_split_to_array(N'各国 文字列', E'\\s+');
+
+SELECT regexp_split_to_table(N'各国 文字列', E'\\s+');
+
+SELECT repeat(N'文字', 4);
+
+SELECT replace(N'各国文字国', N'国', N'語');
+
+SELECT reverse(N'各国語');
+
+SELECT right(N'各国語', 2);
+
+SELECT rpad(N'各国', 3, N'語');
+
+SELECT rtrim(N'◯各国◯◯', N'◯');
+
+SELECT split_part(N'各国◯語◯文字', N'◯', 3);
+
+SELECT strpos(N'各国語', N'語');
+
+SELECT substr(N'各国語', 3, 3);
+
+SELECT translate(N'各国文字', N'各字', N'◯×');
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/sql/nvarchar_misc.sql ./src/test/regress/sql/nvarchar_misc.sql
--- ../orig.HEAD/src/test/regress/sql/nvarchar_misc.sql	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/sql/nvarchar_misc.sql	2013-08-19 23:31:15.000000000 -0400
@@ -0,0 +1,22 @@
+select N'a '=N'a';
+select N'a '='a';
+select N'a'='a';
+select N'a '='a'::char(1);
+select N'a '='a'::nchar(1);
+select N'a '='a'::varchar(1);
+select N'a '='a'::nvarchar(1);
+select N'a'='a'::nvarchar(1);
+select N'a'='a'::varchar(1);
+select N'a'='a'::char(1);
+select N'a'='a'::nchar(1);
+select 'a'::nchar(10)='a'::char(1);
+select 'a'::nchar(10)='a'::nchar(1);
+select 'a'::nchar(10)='a'::varchar(1);
+select 'a'::nchar(10)='a'::nvarchar(1);
+select 'a'::nvarchar(10)='a'::varchar(1);
+select 'a'::nvarchar(10)='a'::varchar(10);
+select 'a'::nvarchar(10)='a '::varchar(10);
+select 'a'::nvarchar(10)='a '::char(10);
+select 'a '::nchar(10)='a  '::nchar(5);
+select 'a '::nchar(10)='a  '::char(5);
+
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/sql/nvarchar.sql ./src/test/regress/sql/nvarchar.sql
--- ../orig.HEAD/src/test/regress/sql/nvarchar.sql	1969-12-31 19:00:00.000000000 -0500
+++ ./src/test/regress/sql/nvarchar.sql	2013-08-19 23:31:15.000000000 -0400
@@ -0,0 +1,66 @@
+--
+-- NVARCHAR
+--
+
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(1));
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('A');
+
+-- any of the following three input formats are acceptable
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('1');
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES (2);
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('3');
+
+-- zero-length nchar
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('');
+
+-- try nvarchar's of greater than 1 length
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('cd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('c     ');
+
+
+SELECT '' AS seven, * FROM NVARCHAR_TBL;
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <> 'a';
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 = 'a';
+
+SELECT '' AS five, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 < 'a';
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <= 'a';
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 > 'a';
+
+SELECT '' AS two, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 >= 'a';
+
+DROP TABLE NVARCHAR_TBL;
+
+--
+-- Now test longer arrays of nchar
+--
+
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(4));
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcde');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd    ');
+
+SELECT '' AS four, * FROM NVARCHAR_TBL;
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/sql/strings.sql ./src/test/regress/sql/strings.sql
--- ../orig.HEAD/src/test/regress/sql/strings.sql	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/sql/strings.sql	2013-08-19 20:54:00.000000000 -0400
@@ -68,30 +68,41 @@
 -- test conversions between various string types
 -- E021-10 implicit casting among the character data types
 --
 
 SELECT CAST(f1 AS text) AS "text(char)" FROM CHAR_TBL;
+SELECT CAST(f1 AS text) AS "text(nchar)" FROM NCHAR_TBL;
 
 SELECT CAST(f1 AS text) AS "text(varchar)" FROM VARCHAR_TBL;
+SELECT CAST(f1 AS text) AS "text(nvarchar)" FROM NVARCHAR_TBL;
 
 SELECT CAST(name 'namefield' AS text) AS "text(name)";
 
 -- since this is an explicit cast, it should truncate w/o error:
 SELECT CAST(f1 AS char(10)) AS "char(text)" FROM TEXT_TBL;
 -- note: implicit-cast case is tested in char.sql
+SELECT CAST(f1 AS nchar(10)) AS "nchar(text)" FROM TEXT_TBL;
+-- note: implicit-cast case is tested in nchar.sql
 
 SELECT CAST(f1 AS char(20)) AS "char(text)" FROM TEXT_TBL;
+SELECT CAST(f1 AS nchar(20)) AS "nchar(text)" FROM TEXT_TBL;
 
 SELECT CAST(f1 AS char(10)) AS "char(varchar)" FROM VARCHAR_TBL;
+SELECT CAST(f1 AS nchar(10)) AS "nchar(nvarchar)" FROM NVARCHAR_TBL;
+
 
 SELECT CAST(name 'namefield' AS char(10)) AS "char(name)";
+SELECT CAST(name 'namefield' AS nchar(10)) AS "nchar(name)";
 
 SELECT CAST(f1 AS varchar) AS "varchar(text)" FROM TEXT_TBL;
+SELECT CAST(f1 AS nvarchar) AS "nvarchar(text)" FROM TEXT_TBL;
 
 SELECT CAST(f1 AS varchar) AS "varchar(char)" FROM CHAR_TBL;
+SELECT CAST(f1 AS nvarchar) AS "nvarchar(nchar)" FROM NCHAR_TBL;
 
 SELECT CAST(name 'namefield' AS varchar) AS "varchar(name)";
+SELECT CAST(name 'namefield' AS nvarchar) AS "nvarchar(name)";
 
 --
 -- test SQL string functions
 -- E### and T### are feature reference numbers from SQL99
 --
@@ -328,14 +339,17 @@
 SELECT 'unknown' || ' and unknown' AS "Concat unknown types";
 
 SELECT text 'text' || ' and unknown' AS "Concat text to unknown type";
 
 SELECT char(20) 'characters' || ' and text' AS "Concat char to unknown type";
+SELECT nchar(20) 'ncharacters' || ' and text' AS "Concat nchar to unknown type";
 
 SELECT text 'text' || char(20) ' and characters' AS "Concat text to char";
+SELECT text 'text' || nchar(20) ' and ncharacters' AS "Concat text to nchar";
 
 SELECT text 'text' || varchar ' and varchar' AS "Concat text to varchar";
+SELECT text 'text' || nvarchar ' and nvarchar' AS "Concat text to nvarchar";
 
 --
 -- test substr with toasted text values
 --
 CREATE TABLE toasttest(f1 text);
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/sql/union.sql ./src/test/regress/sql/union.sql
--- ../orig.HEAD/src/test/regress/sql/union.sql	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/sql/union.sql	2013-08-19 20:54:00.000000000 -0400
@@ -87,10 +87,27 @@
 SELECT f1 FROM VARCHAR_TBL
 UNION
 SELECT TRIM(TRAILING FROM f1) FROM CHAR_TBL
 ORDER BY 1;
 
+--NVARCHAR specific
+SELECT f1 AS three FROM NVARCHAR_TBL
+UNION
+SELECT CAST(f1 AS nvarchar) FROM NCHAR_TBL
+ORDER BY 1;
+
+SELECT f1 AS eight FROM NVARCHAR_TBL
+UNION ALL
+SELECT f1 FROM NCHAR_TBL;
+
+SELECT f1 AS five FROM TEXT_TBL
+UNION
+SELECT f1 FROM NVARCHAR_TBL
+UNION
+SELECT TRIM(TRAILING FROM f1) FROM NCHAR_TBL
+ORDER BY 1;
+
 --
 -- INTERSECT and EXCEPT
 --
 
 SELECT q2 FROM int8_tbl INTERSECT SELECT q1 FROM int8_tbl;
diff -U 5 -N -r -w ../orig.HEAD/src/test/regress/sql/update.sql ./src/test/regress/sql/update.sql
--- ../orig.HEAD/src/test/regress/sql/update.sql	2013-08-19 20:22:15.000000000 -0400
+++ ./src/test/regress/sql/update.sql	2013-08-19 20:54:00.000000000 -0400
@@ -3,18 +3,32 @@
 --
 
 CREATE TABLE update_test (
     a   INT DEFAULT 10,
     b   INT,
-    c   TEXT
+    c   TEXT,
+    d   nchar(10),
+    e   nvarchar
 );
 
-INSERT INTO update_test VALUES (5, 10, 'foo');
+INSERT INTO update_test VALUES (5, 10, 'foo', 'a', 'a');
 INSERT INTO update_test(b, a) VALUES (15, 10);
 
 SELECT * FROM update_test;
 
+UPDATE update_test SET d='b', e='b';
+
+SELECT * FROM update_test;
+
+UPDATE update_test SET d='c' where e='b';
+
+SELECT * FROM update_test;
+
+UPDATE update_test SET d=N'e' where e=N'b';
+
+SELECT * FROM update_test;
+
 UPDATE update_test SET a = DEFAULT, b = DEFAULT;
 
 SELECT * FROM update_test;
 
 -- aliases for the UPDATE target table
#2Heikki Linnakangas
hlinnakangas@vmware.com
In reply to: Boguk, Maksym (#1)
Re: UTF8 national character data type support WIP patch and list of open issues.

On 03.09.2013 05:28, Boguk, Maksym wrote:

Target usage: ability to store UTF8 national characters in some
selected fields inside a single-byte encoded database.
For sample if I have a ru-RU.koi8r encoded database with mostly Russian
text inside, it would be nice to be able store an Japanese text in one
field without converting the whole database to UTF8 (convert such
database to UTF8 easily could almost double the database size even if
only one field in whole database will use any symbols outside of
ru-RU.koi8r encoding).

Ok.

What has been done:

1)Addition of new string data types NATIONAL CHARACTER and NATIONAL
CHARACTER VARIABLE.
These types differ from the char/varchar data types in one important
respect: NATIONAL string types are always have UTF8 encoding even
(independent from used database encoding).

I don't like the approach of adding a new data type for this. The
encoding used for a text field should be an implementation detail, not
something that's exposed to users at the schema-level. A separate data
type makes an nvarchar field behave slightly differently from text, for
example when it's passed to and from functions. It will also require
drivers and client applications to know about it.

What need to be done:

1)Full set of string functions and operators for NATIONAL types (we
could not use generic text functions because they assume that the stings
will have database encoding).
Now only basic set implemented.
2)Need implement some way to define default collation for a NATIONAL
types.
3)Need implement some way to input UTF8 characters into NATIONAL types
via SQL (there are serious open problem... it will be defined later in
the text).

Yeah, all of these issues stem from the fact that the NATIONAL types are
separate from text.

I think we should take a completely different approach to this. Two
alternatives spring to mind:

1. Implement a new encoding. The new encoding would be some variant of
UTF-8 that encodes languages like Russian more efficiently. Then just
use that in the whole database. Something like SCSU
(http://www.unicode.org/reports/tr6/) should do the trick, although I'm
not sure if SCSU can be used as a server-encoding. A lot of code relies
on the fact that a server encoding must have the high bit set in all
bytes that are part of a multi-byte character. That's why SJIS for
example can only be used as a client-encoding. But surely you could come
up with some subset or variant of SCSU which satisfies that requirement.

2. Compress the column. Simply do "ALTER TABLE foo ALTER COLUMN bar SET
STORAGE MAIN". That will make Postgres compress that field. That might
not be very efficient for compressing short cyrillic text encoded in
UTF-8 today, but that could be improved. There has been discussion on
supporting more compression algorithms in the past, and one such
algorithm could be again something like SCSU.
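The window idea behind SCSU that both options lean on can be sketched in a few lines. This is an illustrative toy, not the actual SCSU algorithm (real SCSU adds dynamic window selection and escape tags); all names here are made up. ASCII bytes pass through unchanged, while characters inside a single 128-code-point window — here the basic Cyrillic block starting at U+0400 — are packed into one byte with the high bit set, so short Russian strings stay at one byte per character:

```python
# Toy single-window codec in the spirit of SCSU (illustrative only;
# real SCSU switches windows dynamically instead of raising an error).
WINDOW_BASE = 0x0400  # basic Cyrillic block, U+0400..U+047F

def toy_window_encode(text: str) -> bytes:
    out = bytearray()
    for ch in text:
        cp = ord(ch)
        if cp < 0x80:
            out.append(cp)                         # ASCII: pass through
        elif WINDOW_BASE <= cp < WINDOW_BASE + 0x80:
            out.append(0x80 + (cp - WINDOW_BASE))  # one byte, high bit set
        else:
            raise ValueError("outside window; full SCSU would switch windows")
    return bytes(out)

def toy_window_decode(data: bytes) -> str:
    return ''.join(chr(b) if b < 0x80 else chr(WINDOW_BASE + (b - 0x80))
                   for b in data)
```

With this scheme 'привет' takes 6 bytes instead of the 12 that UTF-8 needs, and every non-ASCII byte has its high bit set.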

- Heikki

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3Tom Lane
tgl@sss.pgh.pa.us
In reply to: Heikki Linnakangas (#2)
Re: UTF8 national character data type support WIP patch and list of open issues.

Heikki Linnakangas <hlinnakangas@vmware.com> writes:

On 03.09.2013 05:28, Boguk, Maksym wrote:

Target usage: ability to store UTF8 national characters in some
selected fields inside a single-byte encoded database.

I think we should take a completely different approach to this. Two
alternatives spring to mind:

1. Implement a new encoding. The new encoding would be some variant of
UTF-8 that encodes languages like Russian more efficiently.

+1. I'm not sure that SCSU satisfies the requirement (which I read as
that Russian text should be pretty much 1 byte/character). But surely
we could devise a variant that does. For instance, it could look like
koi8r (or any other single-byte encoding of your choice) with one byte
value, say 255, reserved as a prefix. 255 means that a UTF8 character
follows. The main complication here is that you don't want to allow more
than one way to represent a character --- else you break text hashing,
for instance. So you'd have to take care that you never emit the 255+UTF8
representation for a character that can be represented in the single-byte
encoding. In particular, you'd never encode ASCII that way, and thus this
would satisfy the all-multibyte-chars-must-have-all-high-bits-set rule.

Ideally we could make a variant like this for each supported single-byte
encoding, and thus you could optimize a database for "mostly but not
entirely LATIN1 text", etc.
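The prefix-byte scheme described above can be modeled in a short sketch. This is an illustration of the idea, not PostgreSQL code; the function names are made up, and it assumes KOI8-R's own 0xFF slot is sacrificed to serve as the escape byte, so the one character living there is forced into escaped form:

```python
ESCAPE = 0xFF  # reserved prefix byte; a UTF-8 sequence follows it

def hybrid_encode(text: str) -> bytes:
    """KOI8-R where possible, ESCAPE + UTF-8 otherwise.

    A character representable in KOI8-R (other than the one occupying
    the escape byte itself) is never emitted in escaped form, so every
    character has exactly one encoding, keeping hashing sane."""
    out = bytearray()
    for ch in text:
        try:
            b = ch.encode('koi8_r')
        except UnicodeEncodeError:
            b = None
        if b is not None and b[0] != ESCAPE:
            out += b                       # single-byte form
        else:
            out.append(ESCAPE)
            out += ch.encode('utf-8')      # escaped multi-byte form
    return bytes(out)

def hybrid_decode(data: bytes) -> str:
    out, i = [], 0
    while i < len(data):
        if data[i] != ESCAPE:
            out.append(bytes([data[i]]).decode('koi8_r'))
            i += 1
        else:
            lead = data[i + 1]             # UTF-8 lead byte gives the length;
            n = 2 if lead < 0xE0 else 3 if lead < 0xF0 else 4
            out.append(data[i + 1:i + 1 + n].decode('utf-8'))
            i += 1 + n
    return ''.join(out)
```

Since ASCII is a subset of KOI8-R, only non-ASCII characters are ever escaped, so every byte of an escaped sequence (the 0xFF prefix, the UTF-8 lead byte, and the continuation bytes) has its high bit set — satisfying the all-multibyte-chars-must-have-all-high-bits-set rule.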

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#4Boguk, Maksym
maksymb@fast.au.fujitsu.com
In reply to: Heikki Linnakangas (#2)
Re: UTF8 national character data type support WIP patch and list of open issues.

1)Addition of new string data types NATIONAL CHARACTER and NATIONAL
CHARACTER VARIABLE.
These types differ from the char/varchar data types in one important
respect: NATIONAL string types are always have UTF8 encoding even
(independent from used database encoding).

I don't like the approach of adding a new data type for this. The
encoding used for a text field should be an implementation detail, not
something that's exposed to users at the schema-level. A separate data
type makes an nvarchar field behave slightly differently from text, for
example when it's passed to and from functions. It will also require
drivers and client applications to know about it.

Hi, my task is implementing the ANSI NATIONAL character string types as
part of PostgreSQL core.
And the requirement to "require drivers and client applications to know
about it" is exactly why this could not be done as an add-on: in my
experience, most drivers need these new types to have fixed OIDs.
Implementing them as a UTF8 data type is a first step that allows
NATIONAL characters to use an encoding different from the database
encoding (and might even enable supporting multiple encodings for the
common string types in the future).

1)Full set of string functions and operators for NATIONAL types (we
could not use generic text functions because they assume that the
stings will have database encoding).
Now only basic set implemented.
2)Need implement some way to define default collation for a NATIONAL
types.
3)Need implement some way to input UTF8 characters into NATIONAL types
via SQL (there are serious open problem... it will be defined later
in the text).

Yeah, all of these issues stem from the fact that the NATIONAL types
are separate from text.

I think we should take a completely different approach to this. Two
alternatives spring to mind:

1. Implement a new encoding. The new encoding would be some variant of
UTF-8 that encodes languages like Russian more efficiently. Then just
use that in the whole database. Something like SCSU
(http://www.unicode.org/reports/tr6/) should do the trick, although I'm
not sure if SCSU can be used as a server encoding. A lot of code relies
on the fact that a server encoding must have the high bit set in all
bytes that are part of a multi-byte character. That's why SJIS, for
example, can only be used as a client encoding. But surely you could
come up with some subset or variant of SCSU which satisfies that
requirement.
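
The high-bit property Heikki mentions is easy to demonstrate (an
illustrative Python sketch, not PostgreSQL code). In UTF-8 every byte
of a multi-byte character has the high bit set, while a Shift-JIS
trailing byte can fall in the ASCII range; the classic example is
U+8868 '表', whose second SJIS byte is 0x5C, the backslash:

```python
# UTF-8: every byte of a multi-byte character has the high bit (0x80)
# set, so any byte below 0x80 is always a genuine ASCII character.
utf8 = "表".encode("utf-8")          # b'\xe8\xa1\xa8'
assert all(b & 0x80 for b in utf8)

# Shift-JIS: the trailing byte of U+8868 is 0x5C, the ASCII backslash.
# A naive byte scanner would misread it as '\', which is why SJIS can
# only be a client encoding in PostgreSQL.
sjis = "表".encode("shift_jis")      # b'\x95\x5c'
assert sjis[1] == 0x5C
```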

2. Compress the column. Simply do "ALTER TABLE foo ALTER COLUMN bar SET
STORAGE MAIN". That will make Postgres compress that field. That might
not be very efficient for compressing short Cyrillic text encoded in
UTF-8 today, but that could be improved. There has been discussion on
supporting more compression algorithms in the past, and one such
algorithm could again be something like SCSU.

Both of these approaches require a dump/restore of the whole database,
which is not always an option.
Implementing a UTF8 NATIONAL character type as a new datatype would
provide the option of using pg_upgrade to the latest version and
getting the required functionality without prolonged downtime.

PS: is it possible to reserve some narrow type OID range in the
PostgreSQL core for future use?

Kind Regards,
Maksym


#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: Boguk, Maksym (#4)
Re: UTF8 national character data type support WIP patch and list of open issues.

"Boguk, Maksym" <maksymb@fast.au.fujitsu.com> writes:

Hi, my task is implementing ANSI NATIONAL character string types as
part of PostgreSQL core.

No, that's not a given. You have a problem to solve, ie store some UTF8
strings in a database that's mostly just 1-byte data. It is not clear
that NATIONAL CHARACTER is the best solution to that problem. And I don't
think that you're going to convince anybody that this is an improvement in
spec compliance, because there's too much gap between what you're doing
here and what it says in the spec.

Both of these approaches require a dump/restore of the whole database,
which is not always an option.

That's a disadvantage, agreed, but it's not a large enough one to reject
the approach, because what you want to do also has very significant
disadvantages.

I think it is extremely likely that we will end up rejecting a patch based
on NATIONAL CHARACTER altogether. It will require too much duplicative
code, it requires too many application-side changes to make use of the
functionality, and it will break any applications that are relying on the
current behavior of that syntax. But the real problem is that you're
commandeering syntax defined in the SQL spec for what is in the end quite
a narrow usage. I agree that the use-case will be very handy for some
applications ... but if we were ever to try to achieve real spec
compliance for the SQL features around character sets, this doesn't look
like a step on the way to that.

I think you'd be well advised to take a hard look at the
specialized-database-encoding approach. From here it looks like a 99%
solution for about 1% of the effort; and since it would be quite
uninvasive to the system as a whole, it's unlikely that such a patch
would get rejected.

regards, tom lane


#6MauMau
maumau307@gmail.com
In reply to: Tom Lane (#5)
Re: UTF8 national character data type support WIP patch and list of open issues.

Hello,

I think it would be nice for PostgreSQL to support national character types
largely because it should ease migration from other DBMSs.

[Reasons why we need NCHAR]
--------------------------------------------------
1. Invite users of other DBMSs to PostgreSQL. Oracle, SQL Server, MySQL,
etc. all have NCHAR support. PostgreSQL is probably the only major
database that does not support NCHAR.
Sadly, I've read a report from some Japanese government agency that the
number of MySQL users exceeded that of PostgreSQL here in Japan in 2010 or
2011. I wouldn't say that is due to NCHAR support, but it might be one
reason. I want PostgreSQL to be more popular and regain those users.

2. Enhance the "open" image of PostgreSQL by implementing more features of
SQL standard. NCHAR may be a wrong and unnecessary feature of SQL standard
now that we have Unicode support, but it is defined in the standard and
widely implemented.

3. I have heard that some potential customers didn't adopt PostgreSQL due to
lack of NCHAR support. However, I don't know the exact reason why they need
NCHAR.

4. I guess some users really want to continue to use ShiftJIS or EUC_JP for
database encoding, and use NCHAR for a limited set of columns to store
international text in Unicode:
- to avoid code conversion between the server and the client, for
performance
- because ShiftJIS and EUC_JP require less storage (2 bytes for most
Kanji) than UTF-8 (3 bytes)
This use case is described in chapter 6 of "Oracle Database Globalization
Support Guide".
--------------------------------------------------
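The storage argument in reason 4 is easy to verify (a quick Python
illustration of the byte counts, using a common Kanji):

```python
ch = "漢"  # U+6F22, a typical Kanji character

# EUC_JP and ShiftJIS encode most Kanji in 2 bytes...
assert len(ch.encode("euc_jp")) == 2
assert len(ch.encode("shift_jis")) == 2

# ...while UTF-8 needs 3 bytes for the same character, i.e. 50% more
# storage for Kanji-heavy text.
assert len(ch.encode("utf-8")) == 3
```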

I think we need to do the following:

[Minimum requirements]
--------------------------------------------------
1. Accept NCHAR/NVARCHAR as data type name and N'...' syntactically.
This is already implemented. PostgreSQL treats NCHAR/NVARCHAR as synonyms
for CHAR/VARCHAR, and ignores N prefix. But this is not documented.

2. Declare support for national character support in the manual.
1 is not sufficient because users don't want to depend on undocumented
behavior. This is exactly what the TODO item "national character support"
in PostgreSQL TODO wiki is about.

3. Implement NCHAR/NVARCHAR as distinct data types, not as synonyms so that:
- psql \d can display the user-specified data types.
- pg_dump/pg_dumpall can output NCHAR/NVARCHAR columns as-is, not as
CHAR/VARCHAR.
- To implement additional features for NCHAR/NVARCHAR in the future, as
described below.
--------------------------------------------------

[Optional requirements]
--------------------------------------------------
1. Implement client driver support, such as:
- NCHAR host variable type (e.g. "NCHAR var_name[12];") in ECPG, as
specified in the SQL standard.
- national character methods (e.g. setNString, getNString,
setNCharacterStream) as specified in JDBC 4.0.
I think at first we can treat these national-character-specific features as
the same as CHAR/VARCHAR.

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.
I think it is sufficient at first that NCHAR/NVARCHAR columns can only be
used in UTF-8 databases and they store UTF-8 strings. This allows us to
reuse the input/output/send/recv functions and other infrastructure of
CHAR/VARCHAR. This is a reasonable compromise to avoid duplication and
minimize the first implementation of NCHAR support.

3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation as described in
Oracle's manual. But I'm not sure about this, because UTF-16 is not a real
fixed-width encoding due to supplementary characters.
--------------------------------------------------
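The caveat in point 3 can be demonstrated directly: UTF-16 is only
fixed-width within the Basic Multilingual Plane (an illustrative Python
sketch):

```python
# BMP characters take exactly 2 bytes in UTF-16...
assert len("A".encode("utf-16-be")) == 2
assert len("あ".encode("utf-16-be")) == 2

# ...but a supplementary character (code point above U+FFFF) needs a
# 4-byte surrogate pair, so UTF-16 is not truly fixed-width.
assert len("\U00020BB7".encode("utf-16-be")) == 4  # U+20BB7, a rare Kanji
```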

I don't think it is good to implement NCHAR/NVARCHAR types as extensions
like contrib/citext, because NCHAR/NVARCHAR are basic types and need
client-side support. That is, client drivers need to be aware of the fixed
NCHAR/NVARCHAR OID values.

How do you think we should implement NCHAR support?

Regards
MauMau


#7Arulappan, Arul Shaji
arul@fast.au.fujitsu.com
In reply to: MauMau (#6)
Re: UTF8 national character data type support WIP patch and list of open issues.

-----Original Message-----
From: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-
owner@postgresql.org] On Behalf Of MauMau

Hello,

I think it would be nice for PostgreSQL to support national character
types largely because it should ease migration from other DBMSs.

[Reasons why we need NCHAR]
--------------------------------------------------
1. Invite users of other DBMSs to PostgreSQL. Oracle, SQL Server,
MySQL, etc. all have NCHAR support. PostgreSQL is probably the only
major database that does not support NCHAR.
Sadly, I've read a report from some Japanese government agency that the
number of MySQL users exceeded that of PostgreSQL here in Japan in 2010
or 2011. I wouldn't say that is due to NCHAR support, but it might be
one reason. I want PostgreSQL to be more popular and regain those
users.

2. Enhance the "open" image of PostgreSQL by implementing more features
of the SQL standard. NCHAR may be a wrong and unnecessary feature of
the SQL standard now that we have Unicode support, but it is defined in
the standard and widely implemented.

3. I have heard that some potential customers didn't adopt PostgreSQL
due to lack of NCHAR support. However, I don't know the exact reason
why they need NCHAR.

The use case we have is for customer(s) who are modernizing their
databases on mainframes. These applications are typically written in
COBOL, which does have extensive support for National Characters.
Supporting National Characters as in-built data types in PostgreSQL is,
not to exaggerate, an important criterion in their decision whether to
use PostgreSQL or not. (So is Embedded COBOL. But that is a separate
issue.)

4. I guess some users really want to continue to use ShiftJIS or EUC_JP
for the database encoding, and use NCHAR for a limited set of columns
to store international text in Unicode:
- to avoid code conversion between the server and the client, for
performance
- because ShiftJIS and EUC_JP require less storage (2 bytes for most
Kanji) than UTF-8 (3 bytes)
This use case is described in chapter 6 of "Oracle Database
Globalization Support Guide".
--------------------------------------------------

I think we need to do the following:

[Minimum requirements]
--------------------------------------------------
1. Accept NCHAR/NVARCHAR as data type names and N'...' syntactically.
This is already implemented. PostgreSQL treats NCHAR/NVARCHAR as
synonyms for CHAR/VARCHAR, and ignores the N prefix. But this is not
documented.

2. Declare support for national character support in the manual.
1 is not sufficient because users don't want to depend on undocumented
behavior. This is exactly what the TODO item "national character
support" in the PostgreSQL TODO wiki is about.

3. Implement NCHAR/NVARCHAR as distinct data types, not as synonyms, so
that:
- psql \d can display the user-specified data types.
- pg_dump/pg_dumpall can output NCHAR/NVARCHAR columns as-is, not as
CHAR/VARCHAR.
- additional features for NCHAR/NVARCHAR can be implemented in the
future, as described below.
--------------------------------------------------

Agreed. This is our minimum requirement too.

Rgds,
Arul Shaji

[Optional requirements]
--------------------------------------------------
1. Implement client driver support, such as:
- NCHAR host variable type (e.g. "NCHAR var_name[12];") in ECPG, as
specified in the SQL standard.
- national character methods (e.g. setNString, getNString,
setNCharacterStream) as specified in JDBC 4.0.
I think at first we can treat these national-character-specific
features the same as CHAR/VARCHAR.

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.
I think it is sufficient at first that NCHAR/NVARCHAR columns can only
be used in UTF-8 databases and that they store UTF-8 strings. This
allows us to reuse the input/output/send/recv functions and other
infrastructure of CHAR/VARCHAR. This is a reasonable compromise to
avoid duplication and minimize the first implementation of NCHAR
support.

3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation, as described
in Oracle's manual. But I'm not sure about this, because UTF-16 is not
a real fixed-width encoding due to supplementary characters.

This would definitely be a welcome addition.

--------------------------------------------------

I don't think it is good to implement the NCHAR/NVARCHAR types as
extensions like contrib/citext, because NCHAR/NVARCHAR are basic types
and need client-side support. That is, client drivers need to be aware
of the fixed NCHAR/NVARCHAR OID values.

How do you think we should implement NCHAR support?

Regards
MauMau


#8Robert Haas
robertmhaas@gmail.com
In reply to: MauMau (#6)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Mon, Sep 16, 2013 at 8:49 AM, MauMau <maumau307@gmail.com> wrote:

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.

...

3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation as described in
Oracle's manual. But I'm not sure about this, because UTF-16 is not a real
fixed-width encoding due to supplementary characters.

It seems to me that these two points here are the real core of your
proposal. The rest is just syntactic sugar.

Let me start with the second one: I don't think there's likely to be
any benefit in using UTF-16 as the internal encoding. In fact, I
think it's likely to make things quite a bit more complicated, because
we have a lot of code that assumes that server encodings have certain
properties that UTF-16 doesn't - specifically, that any byte with the
high-bit clear represents the corresponding ASCII character.

As to the first one, if we're going to go to the (substantial) trouble
of building infrastructure to allow a database to store data in
multiple encodings, why limit it to storing UTF-8 in non-UTF-8
databases? What about storing SHIFT-JIS in UTF-8 databases, or
Windows-yourfavoriteM$codepagehere in UTF-8 databases, or any other
combination you might care to name?

Whether we go that way or not, I think storing data in one encoding in
a database with a different encoding is going to be pretty tricky and
require far-reaching changes. You haven't mentioned any of those
issues or discussed how you would solve them.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#9Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#8)
Re: UTF8 national character data type support WIP patch and list of open issues.

Robert Haas <robertmhaas@gmail.com> writes:

On Mon, Sep 16, 2013 at 8:49 AM, MauMau <maumau307@gmail.com> wrote:

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.
...
3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation as described in
Oracle's manual. But I'm not sure about this, because UTF-16 is not a real
fixed-width encoding due to supplementary characters.

It seems to me that these two points here are the real core of your
proposal. The rest is just syntactic sugar.

Let me start with the second one: I don't think there's likely to be
any benefit in using UTF-16 as the internal encoding. In fact, I
think it's likely to make things quite a bit more complicated, because
we have a lot of code that assumes that server encodings have certain
properties that UTF-16 doesn't - specifically, that any byte with the
high-bit clear represents the corresponding ASCII character.

Another point to keep in mind is that UTF16 is not really any easier
to deal with than UTF8, unless you write code that fails to support
characters outside the basic multilingual plane. Which is a restriction
I don't believe we'd accept. But without that restriction, you're still
forced to deal with variable-width characters; and there's nothing very
nice about the way that's done in UTF16. So on the whole I think it
makes more sense to use UTF8 for this.
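
For readers unfamiliar with it, the variable-width mechanism Tom
alludes to is the surrogate pair; the arithmetic looks like this (an
illustrative Python sketch, not PostgreSQL code):

```python
def to_surrogate_pair(cp: int) -> tuple[int, int]:
    """Split a supplementary code point (above U+FFFF) into a UTF-16
    high/low surrogate pair."""
    assert cp > 0xFFFF
    cp -= 0x10000                     # 20 bits remain
    return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

# U+20BB7 becomes the pair D842 DFB7...
assert to_surrogate_pair(0x20BB7) == (0xD842, 0xDFB7)

# ...which is exactly what the UTF-16 codec emits.
assert "\U00020BB7".encode("utf-16-be") == b"\xd8\x42\xdf\xb7"
```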

I share Robert's misgivings about difficulties in dealing with characters
that are not representable in the database's principal encoding. Still,
you probably won't find out about many of those until you try it.

regards, tom lane


#10Heikki Linnakangas
hlinnakangas@vmware.com
In reply to: Robert Haas (#8)
Re: UTF8 national character data type support WIP patch and list of open issues.

On 18.09.2013 16:16, Robert Haas wrote:

On Mon, Sep 16, 2013 at 8:49 AM, MauMau<maumau307@gmail.com> wrote:

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.

...

3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation as described in
Oracle's manual. But I'm not sure about this, because UTF-16 is not a real
fixed-width encoding due to supplementary characters.

It seems to me that these two points here are the real core of your
proposal. The rest is just syntactic sugar.

Let me start with the second one: I don't think there's likely to be
any benefit in using UTF-16 as the internal encoding. In fact, I
think it's likely to make things quite a bit more complicated, because
we have a lot of code that assumes that server encodings have certain
properties that UTF-16 doesn't - specifically, that any byte with the
high-bit clear represents the corresponding ASCII character.

As to the first one, if we're going to go to the (substantial) trouble
of building infrastructure to allow a database to store data in
multiple encodings, why limit it to storing UTF-8 in non-UTF-8
databases? What about storing SHIFT-JIS in UTF-8 databases, or
Windows-yourfavoriteM$codepagehere in UTF-8 databases, or any other
combination you might care to name?

Whether we go that way or not, I think storing data in one encoding in
a database with a different encoding is going to be pretty tricky and
require far-reaching changes. You haven't mentioned any of those
issues or discussed how you would solve them.

I'm not too thrilled about complicating the system for that, either. If
you really need to deal with many different languages, you can do that
today by using UTF-8 everywhere. Sure, it might not be the most
efficient encoding for some characters, but it works.

There is one reason, however, that makes it a lot more compelling: we
already support having databases with different encodings in the same
cluster, but the encoding used in the shared catalogs, for usernames and
database names for example, is not well-defined. If we dealt with
different encodings in the same database, that inconsistency would go away.

- Heikki


#11MauMau
maumau307@gmail.com
In reply to: Robert Haas (#8)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Robert Haas" <robertmhaas@gmail.com>

On Mon, Sep 16, 2013 at 8:49 AM, MauMau <maumau307@gmail.com> wrote:

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.

...

3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation as described in
Oracle's manual. But I'm not sure about this, because UTF-16 is not a
real
fixed-width encoding due to supplementary characters.

It seems to me that these two points here are the real core of your
proposal. The rest is just syntactic sugar.

No, those are "desirable if possible" features. What's important is to
declare in the manual that PostgreSQL officially supports national character
types, as I stated below.

1. Accept NCHAR/NVARCHAR as data type name and N'...' syntactically.
This is already implemented. PostgreSQL treats NCHAR/NVARCHAR as synonyms
for CHAR/VARCHAR, and ignores N prefix. But this is not documented.

2. Declare support for national character support in the manual.
1 is not sufficient because users don't want to depend on undocumented
behavior. This is exactly what the TODO item "national character support"
in PostgreSQL TODO wiki is about.

3. Implement NCHAR/NVARCHAR as distinct data types, not as synonyms so
that:
- psql \d can display the user-specified data types.
- pg_dump/pg_dumpall can output NCHAR/NVARCHAR columns as-is, not as
CHAR/VARCHAR.
- To implement additional features for NCHAR/NVARCHAR in the future, as
described below.

And when declaring that, we had better implement NCHAR types as distinct
types with their own OIDs so that we can extend NCHAR behavior in the
future.
As the first stage, I think it's okay to treat NCHAR types exactly the same
as CHAR/VARCHAR types. For example, in ECPG:

switch (type)
{
    case OID_FOR_CHAR:
    case OID_FOR_VARCHAR:
    case OID_FOR_TEXT:
    case OID_FOR_NCHAR:      /* new code */
    case OID_FOR_NVARCHAR:   /* new code */
        /* some processing */
        break;
}
And in JDBC, just call methods for non-national character types.
Currently, those national character methods throw SQLException.

public void setNString(int parameterIndex, String value) throws SQLException
{
setString(parameterIndex, value);
}

Let me start with the second one: I don't think there's likely to be
any benefit in using UTF-16 as the internal encoding. In fact, I
think it's likely to make things quite a bit more complicated, because
we have a lot of code that assumes that server encodings have certain
properties that UTF-16 doesn't - specifically, that any byte with the
high-bit clear represents the corresponding ASCII character.

As to the first one, if we're going to go to the (substantial) trouble
of building infrastructure to allow a database to store data in
multiple encodings, why limit it to storing UTF-8 in non-UTF-8
databases? What about storing SHIFT-JIS in UTF-8 databases, or
Windows-yourfavoriteM$codepagehere in UTF-8 databases, or any other
combination you might care to name?

Whether we go that way or not, I think storing data in one encoding in
a database with a different encoding is going to be pretty tricky and
require far-reaching changes. You haven't mentioned any of those
issues or discussed how you would solve them.

Yes, you are probably right -- I'm not sure UTF-16 really has benefits
that UTF-8 doesn't have. But why did Windows and Java choose UTF-16 for
internal strings rather than UTF-8? Why did Oracle recommend UTF-16 for
NCHAR? I have no clear idea. Anyway, I won't push strongly for UTF-16
and complicate the encoding handling.

Regards
MauMau


#12MauMau
maumau307@gmail.com
In reply to: Tom Lane (#9)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Tom Lane" <tgl@sss.pgh.pa.us>

Another point to keep in mind is that UTF16 is not really any easier
to deal with than UTF8, unless you write code that fails to support
characters outside the basic multilingual plane. Which is a restriction
I don't believe we'd accept. But without that restriction, you're still
forced to deal with variable-width characters; and there's nothing very
nice about the way that's done in UTF16. So on the whole I think it
makes more sense to use UTF8 for this.

I feel so. My guess as to why Windows, Java, and Oracle chose UTF-16 is
that it was still UCS-2, covering only the BMP, when they chose it, so
character handling was easier and faster thanks to the fixed-width
encoding.

Regards
MauMau


#13Robert Haas
robertmhaas@gmail.com
In reply to: MauMau (#11)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Wed, Sep 18, 2013 at 6:42 PM, MauMau <maumau307@gmail.com> wrote:

It seems to me that these two points here are the real core of your
proposal. The rest is just syntactic sugar.

No, those are "desirable if possible" features. What's important is to
declare in the manual that PostgreSQL officially supports national character
types, as I stated below.

That may be what's important to you, but it's not what's important to
me. I am not keen to introduce support for nchar and nvarchar as
differently-named types with identical semantics. And I think it's an
even worse idea to introduce them now, making them work one way, and
then later change the behavior in a backward-incompatible fashion.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#14MauMau
maumau307@gmail.com
In reply to: Robert Haas (#13)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Robert Haas" <robertmhaas@gmail.com>

That may be what's important to you, but it's not what's important to
me.

Support for national character types may be important to some potential
users of PostgreSQL and to the popularity of PostgreSQL, not to me
personally. That's why national character support is listed in the
PostgreSQL TODO wiki. We might be losing potential users just because
their selection criteria include national character support.

I am not keen to introduce support for nchar and nvarchar as
differently-named types with identical semantics.

Similar examples already exist:

- varchar and text: the only difference is the existence of an explicit
length limit
- numeric and decimal
- int and int4, smallint and int2, bigint and int8
- real/double precision and float

In addition, the SQL standard itself admits:

"The <key word>s NATIONAL CHARACTER are used to specify the character
type with an implementation-defined character set. Special syntax
(N'string') is provided for representing literals in that character
set.
...
"NATIONAL CHARACTER" is equivalent to the corresponding <character
string type> with a specification of "CHARACTER SET CSN", where "CSN"
is an implementation-defined <character set name>."

"A <national character string literal> is equivalent to a <character
string literal> with the "N" replaced by "<introducer><character set
specification>", where "<character set specification>" is an
implementation-defined <character set name>."

And I think it's an
even worse idea to introduce them now, making them work one way, and
then later change the behavior in a backward-incompatible fashion.

I understand your feeling. The concern about incompatibility can be
eliminated by thinking of it the following way. How about this?

- NCHAR can be used with any database encoding.

- At first, NCHAR is exactly the same as CHAR. That is,
"implementation-defined character set" described in the SQL standard is the
database character set.

- In the future, the character set for NCHAR can be selected at
database creation, like Oracle's CREATE DATABASE ... NATIONAL CHARACTER
SET AL16UTF16. The default is the database character set.

Could you tell me what kind of specification we should implement if we
officially support national character types?

Regards
MauMau


#15Tatsuo Ishii
ishii@postgresql.org
In reply to: Robert Haas (#8)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Mon, Sep 16, 2013 at 8:49 AM, MauMau <maumau307@gmail.com> wrote:

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.

...

3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation as described in
Oracle's manual. But I'm not sure about this, because UTF-16 is not a real
fixed-width encoding due to supplementary characters.

It seems to me that these two points here are the real core of your
proposal. The rest is just syntactic sugar.

Let me start with the second one: I don't think there's likely to be
any benefit in using UTF-16 as the internal encoding. In fact, I
think it's likely to make things quite a bit more complicated, because
we have a lot of code that assumes that server encodings have certain
properties that UTF-16 doesn't - specifically, that any byte with the
high-bit clear represents the corresponding ASCII character.

Agreed.

As to the first one, if we're going to go to the (substantial) trouble
of building infrastructure to allow a database to store data in
multiple encodings, why limit it to storing UTF-8 in non-UTF-8
databases? What about storing SHIFT-JIS in UTF-8 databases, or
Windows-yourfavoriteM$codepagehere in UTF-8 databases, or any other
combination you might care to name?

Whether we go that way or not, I think storing data in one encoding in
a database with a different encoding is going to be pretty tricky and
require far-reaching changes. You haven't mentioned any of those
issues or discussed how you would solve them.

What about limiting NCHAR to databases whose encoding is the same or
"compatible" (i.e. an encoding conversion is defined)? This way, NCHAR
text can be automatically converted from NCHAR to the database encoding
on the server side, and we can treat NCHAR exactly the same as CHAR
afterward. I suppose the encoding used for NCHAR should be defined at
initdb time or at database creation (if we allow the latter, we need to
add a new column to record what encoding is used for NCHAR).

For example, "CREATE TABLE t1(t NCHAR(10))" will succeed if NCHAR is
UTF-8 and the database encoding is UTF-8. It will even succeed if NCHAR
is SHIFT-JIS and the database encoding is UTF-8, because there is a
conversion between UTF-8 and SHIFT-JIS. However, it will not succeed if
NCHAR is SHIFT-JIS and the database encoding is ISO-8859-1, because
there is no conversion between them.
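
The rule above could be prototyped along these lines (a hypothetical
Python sketch; the conversion table here is an illustrative made-up
subset, not PostgreSQL's actual pg_conversion catalog):

```python
# Illustrative subset of defined encoding conversions (unordered
# pairs); the real list lives in PostgreSQL's pg_conversion catalog.
CONVERSIONS = {
    frozenset({"UTF8", "SJIS"}),
    frozenset({"UTF8", "EUC_JP"}),
    frozenset({"UTF8", "LATIN1"}),
}

def nchar_allowed(nchar_enc: str, db_enc: str) -> bool:
    """The proposed rule: NCHAR is usable when its encoding equals the
    database encoding, or a conversion between the two is defined."""
    return nchar_enc == db_enc or frozenset({nchar_enc, db_enc}) in CONVERSIONS

assert nchar_allowed("UTF8", "UTF8")        # same encoding
assert nchar_allowed("SJIS", "UTF8")        # conversion exists
assert not nchar_allowed("SJIS", "LATIN1")  # no conversion defined
```
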
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


#16Valentine Gogichashvili
In reply to: MauMau (#14)
Re: UTF8 national character data type support WIP patch and list of open issues.

Hi,

That may be what's important to you, but it's not what's important to

me.

National character types support may be important to some potential users
of PostgreSQL and the popularity of PostgreSQL, not me. That's why
national character support is listed in the PostgreSQL TODO wiki. We might
be losing potential users just because their selection criteria includes
national character support.

The whole NCHAR type appeared as a hack for systems that did not have
Unicode support from the beginning. It would not be needed if all text
had been stored in Unicode from the start and the idea of a character
were the same as the idea of a rune rather than a byte.

PostgreSQL has very powerful facilities for storing any kind of
encoding. So maybe it makes sense to add ENCODING as another column
property, the same way COLLATION was added?

It would make it possible to have a database that talks to the clients
in UTF8 and stores text and varchar data in the encoding that is the
most appropriate for the situation.

It will make it impossible (or complicated) to have a database with a
non-UTF8 default encoding (I wonder who would need that in this case),
as conversions will not be possible from the broader charsets into the
default database encoding.

One could define an additional DATABASE property like LC_ENCODING that
would work for the ENCODING property of a column the way LC_COLLATE
works for the COLLATE property of a column.

Text operations should work automatically, as in memory all strings will be
converted to the database encoding.

This approach will also open a possibility to implement custom
ENCODINGs for the column data storage, like snappy compression or even
BSON, gobs, or protobufs for much more compact type storage.

Regards,

-- Valentine Gogichashvili

#17Martijn van Oosterhout
kleptog@svana.org
In reply to: Tatsuo Ishii (#15)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Fri, Sep 20, 2013 at 08:58:53AM +0900, Tatsuo Ishii wrote:

For example, "CREATE TABLE t1(t NCHAR(10))" will succeed if NCHAR is
UTF-8 and the database encoding is UTF-8. It will even succeed if
NCHAR is SHIFT-JIS and the database encoding is UTF-8, because there
is a conversion between UTF-8 and SHIFT-JIS. However, it will not
succeed if NCHAR is SHIFT-JIS and the database encoding is ISO-8859-1,
because there is no conversion between them.

As far as I can tell, the whole reason for introducing NCHAR is to
support SHIFT-JIS; there hasn't been a call for any other encodings
that I can remember, anyway.

So rather than this whole NCHAR thing, why not just add a type
"sjistext", and a few type casts and call it a day...

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/

He who writes carelessly confesses thereby at the very outset that he does
not attach much importance to his own thoughts.

-- Arthur Schopenhauer

#18Robert Haas
robertmhaas@gmail.com
In reply to: MauMau (#14)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Thu, Sep 19, 2013 at 6:42 PM, MauMau <maumau307@gmail.com> wrote:

National character type support may be important to some potential
users of PostgreSQL and to the popularity of PostgreSQL, not to me.
That's why national character support is listed in the PostgreSQL TODO
wiki. We might be losing potential users just because their selection
criteria include national character support.

We'd have to go back and search the archives to figure out why that
item was added to the TODO, but I'd be surprised if anyone ever had it
in mind to create additional types that behave just like existing
types but with different names. I don't think that you'll be able to
get consensus around that path on this mailing list.

I am not keen to introduce support for nchar and nvarchar as
differently-named types with identical semantics.

Similar examples already exist:

- varchar and text: the only difference is the existence of explicit length
limit
- numeric and decimal
- int and int4, smallint and int2, bigint and int8
- real/double precision and float

I agree that the fact we have both varchar and text feels like a wart.
The other examples mostly involve different names for the same
underlying type, and so are different from what you are asking for
here.

I understand your feeling. The concern about incompatibility can be
eliminated by thinking of it the following way. How about this?

- NCHAR can be used with any database encoding.

- At first, NCHAR is exactly the same as CHAR. That is, the
"implementation-defined character set" described in the SQL standard
is the database character set.

- In the future, the character set for NCHAR can be selected at
database creation, like Oracle's CREATE DATABASE ... NATIONAL
CHARACTER SET AL16UTF16. The default is the database character set.

Hmm. So under that design, a database could support up to a total of
two character sets, the one that you get when you say 'foo' and the
other one that you get when you say n'foo'.

I guess we could do that, but it seems a bit limited. If we're going
to go to the trouble of supporting multiple character sets, why not
support an arbitrary number instead of just two?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#19Robert Haas
robertmhaas@gmail.com
In reply to: Tatsuo Ishii (#15)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Thu, Sep 19, 2013 at 7:58 PM, Tatsuo Ishii <ishii@postgresql.org> wrote:

What about limiting NCHAR to databases which have the same encoding or
a "compatible" encoding (one for which an encoding conversion is
defined)? This way, NCHAR text can be automatically converted from
NCHAR to the database encoding on the server side, and thus we can
treat NCHAR exactly the same as CHAR afterward. I suppose the encoding
used for NCHAR should be defined at initdb time or at creation of the
database (if we allow this, we need to add a new column to record what
encoding is used for NCHAR).

For example, "CREATE TABLE t1(t NCHAR(10))" will succeed if NCHAR is
UTF-8 and the database encoding is UTF-8. It will even succeed if
NCHAR is SHIFT-JIS and the database encoding is UTF-8, because there
is a conversion between UTF-8 and SHIFT-JIS. However, it will not
succeed if NCHAR is SHIFT-JIS and the database encoding is ISO-8859-1,
because there is no conversion between them.

I think the point here is that, at least as I understand it, encoding
conversion and sanitization happens at a very early stage right now,
when we first receive the input from the client. If the user sends a
string of bytes as part of a query or bind placeholder that's not
valid in the database encoding, it's going to error out before any
type-specific code has an opportunity to get control. Look at
textin(), for example. There's no encoding check there. That means
it's already been done at that point. To make this work, someone's
going to have to figure out what to do about *that*. Until we have a
sketch of what the design for that looks like, I don't see how we can
credibly entertain more specific proposals.
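The validation-order point can be illustrated outside the server. In this Python sketch (illustration only; the decode step stands in for pg_client_to_server's validation, and nothing here is PostgreSQL code), SHIFT-JIS bytes sent to a server whose database encoding is EUC-JP are rejected before any type input function could run:

```python
# b'\x93\xfa' is the character "日" in SHIFT-JIS; the same byte pair is
# not valid EUC-JP, so a server validating input against its database
# encoding errors out before textin() or a hypothetical nchar_in()
# would ever see the value.
payload = "日".encode("shift_jis")       # b'\x93\xfa'

def server_receive(raw, db_encoding):
    # stands in for the early encoding-validation step
    return raw.decode(db_encoding)

try:
    server_receive(payload, "euc_jp")
    reached_type_input = True
except UnicodeDecodeError:
    reached_type_input = False

print(reached_type_input)  # False: rejected before type-specific code
```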

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#20Peter Eisentraut
peter_e@gmx.net
In reply to: Robert Haas (#18)
Re: UTF8 national character data type support WIP patch and list of open issues.

On 9/20/13 2:22 PM, Robert Haas wrote:

I am not keen to introduce support for nchar and nvarchar as
differently-named types with identical semantics.

Similar examples already exist:

- varchar and text: the only difference is the existence of an
explicit length limit
- numeric and decimal
- int and int4, smallint and int2, bigint and int8
- real/double precision and float

I agree that the fact we have both varchar and text feels like a wart.
The other examples mostly involve different names for the same
underlying type, and so are different from what you are asking for
here.

Also note that we already have NCHAR [VARYING]. It's mapped to char or
varchar, respectively, in the parser, just like int, real, etc. are handled.


#21MauMau
maumau307@gmail.com
In reply to: Tatsuo Ishii (#15)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Tatsuo Ishii" <ishii@postgresql.org>

What about limiting NCHAR to databases which have the same encoding or
a "compatible" encoding (one for which an encoding conversion is
defined)? This way, NCHAR text can be automatically converted from
NCHAR to the database encoding on the server side, and thus we can
treat NCHAR exactly the same as CHAR afterward. I suppose the encoding
used for NCHAR should be defined at initdb time or at creation of the
database (if we allow this, we need to add a new column to record what
encoding is used for NCHAR).

For example, "CREATE TABLE t1(t NCHAR(10))" will succeed if NCHAR is
UTF-8 and the database encoding is UTF-8. It will even succeed if
NCHAR is SHIFT-JIS and the database encoding is UTF-8, because there
is a conversion between UTF-8 and SHIFT-JIS. However, it will not
succeed if NCHAR is SHIFT-JIS and the database encoding is ISO-8859-1,
because there is no conversion between them.

Thanks for the idea; it sounds flexible enough for wider use. Your
cooperation in devising an implementation with as little code as
possible would be much appreciated.

Regards
MauMau


#22MauMau
maumau307@gmail.com
In reply to: Martijn van Oosterhout (#17)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Martijn van Oosterhout" <kleptog@svana.org>

As far as I can tell the whole reason for introducing NCHAR is to
support SHIFT-JIS, there hasn't been call for any other encodings, that
I can remember anyway.

Could you elaborate on this, giving some info sources?

So rather than this whole NCHAR thing, why not just add a type
"sjistext", and a few type casts and call it a day...

The main reason for supporting NCHAR types is to ease migration from other
DBMSs, not requiring DDL changes. So sjistext does not match that purpose.

Regards
MauMau


#23MauMau
maumau307@gmail.com
In reply to: Valentine Gogichashvili (#16)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Valentine Gogichashvili" <valgog@gmail.com>

The whole NCHAR concept appeared as a hack for systems that did not
have Unicode support from the beginning. It would not be needed if all
text had been stored in Unicode from the start and the idea of a
character were the same as that of a rune rather than a byte.

I guess so, too.

PostgreSQL has very powerful facilities for storing any kind of
encoding. So maybe it makes sense to add ENCODING as another column
property, the same way COLLATION was added?

Some other people in this community suggested that. And the SQL
standard suggests the same -- specifying a character encoding for each
column: CHAR(n) CHARACTER SET cs.

Text operations should work automatically, as in memory all strings
will be converted to the database encoding.

This approach will also open a possibility to implement custom
ENCODINGs for the column data storage, like snappy compression or even
BSON, gobs, or protobufs for much more compact type storage.

Thanks for your idea; it sounds interesting, although I don't
understand it well yet.

Regards
MauMau


#24MauMau
maumau307@gmail.com
In reply to: Robert Haas (#18)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Robert Haas" <robertmhaas@gmail.com>

I don't think that you'll be able to
get consensus around that path on this mailing list.

I agree that the fact we have both varchar and text feels like a wart.

Is that right? I don't feel the varchar/text case is a wart. I think
text was introduced for a positive reason: to ease migration from
other DBMSs. The manual says:

http://www.postgresql.org/docs/current/static/datatype-character.html

"Although the type text is not in the SQL standard, several other SQL
database management systems have it as well."

And isn't EnterpriseDB doing similar things for Oracle compatibility,
although I'm not sure about the details? Could you share your idea why we
won't get consensus?

I understand your feeling. The concern about incompatibility can be
eliminated by thinking of it the following way. How about this?

- NCHAR can be used with any database encoding.

- At first, NCHAR is exactly the same as CHAR. That is, the
"implementation-defined character set" described in the SQL standard
is the database character set.

- In the future, the character set for NCHAR can be selected at
database creation, like Oracle's CREATE DATABASE ... NATIONAL
CHARACTER SET AL16UTF16. The default is the database character set.

Hmm. So under that design, a database could support up to a total of
two character sets, the one that you get when you say 'foo' and the
other one that you get when you say n'foo'.

I guess we could do that, but it seems a bit limited. If we're going
to go to the trouble of supporting multiple character sets, why not
support an arbitrary number instead of just two?

I agree with you about the arbitrary number. Tatsuo san gave us a good
suggestion. Let's consider how to implement that.

Regards
MauMau


#25MauMau
maumau307@gmail.com
In reply to: Robert Haas (#19)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Robert Haas" <robertmhaas@gmail.com>

On Thu, Sep 19, 2013 at 7:58 PM, Tatsuo Ishii <ishii@postgresql.org>
wrote:

What about limiting NCHAR to databases which have the same encoding or
a "compatible" encoding (one for which an encoding conversion is
defined)? This way, NCHAR text can be automatically converted from
NCHAR to the database encoding on the server side, and thus we can
treat NCHAR exactly the same as CHAR afterward. I suppose the encoding
used for NCHAR should be defined at initdb time or at creation of the
database (if we allow this, we need to add a new column to record what
encoding is used for NCHAR).

For example, "CREATE TABLE t1(t NCHAR(10))" will succeed if NCHAR is
UTF-8 and the database encoding is UTF-8. It will even succeed if
NCHAR is SHIFT-JIS and the database encoding is UTF-8, because there
is a conversion between UTF-8 and SHIFT-JIS. However, it will not
succeed if NCHAR is SHIFT-JIS and the database encoding is ISO-8859-1,
because there is no conversion between them.

I think the point here is that, at least as I understand it, encoding
conversion and sanitization happens at a very early stage right now,
when we first receive the input from the client. If the user sends a
string of bytes as part of a query or bind placeholder that's not
valid in the database encoding, it's going to error out before any
type-specific code has an opportunity to get control. Look at
textin(), for example. There's no encoding check there. That means
it's already been done at that point. To make this work, someone's
going to have to figure out what to do about *that*. Until we have a
sketch of what the design for that looks like, I don't see how we can
credibly entertain more specific proposals.

OK, I see your point. Let's consider that design. I'll learn the code
regarding this. Does anybody, especially Tatsuo san, Tom san, Peter san,
have any good idea?

Regards
MauMau


#26Tatsuo Ishii
ishii@postgresql.org
In reply to: Robert Haas (#19)
Re: UTF8 national character data type support WIP patch and list of open issues.

I think the point here is that, at least as I understand it, encoding
conversion and sanitization happens at a very early stage right now,
when we first receive the input from the client. If the user sends a
string of bytes as part of a query or bind placeholder that's not
valid in the database encoding, it's going to error out before any
type-specific code has an opportunity to get control. Look at
textin(), for example. There's no encoding check there. That means
it's already been done at that point. To make this work, someone's
going to have to figure out what to do about *that*. Until we have a
sketch of what the design for that looks like, I don't see how we can
credibly entertain more specific proposals.

I don't think the bind placeholder case is a problem. It is processed
by exec_bind_message() in postgres.c, which has enough info about the
type of the placeholder, and I think we can easily deal with NCHAR
there. The same can be said of the COPY case.

The problem is an ordinary query (simple protocol "Q" message), as you
pointed out. Encoding conversion happens at a very early stage (note
that the fast-path case has the same issue). If a query message
contains, say, both SHIFT-JIS and EUC-JP, then we are in trouble,
because the encoding conversion routine (pg_client_to_server) assumes
that the message from the client contains only one encoding. However,
my question is: does that really happen? Is there any text editor that
can create mixed SHIFT-JIS and EUC-JP text? My guess is that when a
user wants to use NCHAR for SHIFT-JIS text, the rest of the query
consists of either SHIFT-JIS or plain ASCII. If so, all the user needs
to do is set the client encoding to SHIFT-JIS, and everything should
be fine.

Maumau, is my guess correct?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


#27Valentine Gogichashvili
valgog@gmail.com
In reply to: MauMau (#23)
Re: UTF8 national character data type support WIP patch and list of open issues.

PostgreSQL has very powerful facilities for storing any kind of
encoding. So maybe it makes sense to add ENCODING as another column
property, the same way COLLATION was added?

Some other people in this community suggested that. And the SQL
standard suggests the same -- specifying a character encoding for each
column: CHAR(n) CHARACTER SET cs.

Text operations should work automatically, as in memory all strings
will be converted to the database encoding.

This approach will also open a possibility to implement custom
ENCODINGs for the column data storage, like snappy compression or even
BSON, gobs, or protobufs for much more compact type storage.

Thanks for your idea; it sounds interesting, although I don't
understand it well yet.

The idea is very simple:

CREATE DATABASE utf8_database ENCODING 'utf8';

\c utf8_database

CREATE TABLE a(
    id serial,
    ascii_data text ENCODING 'ascii',  -- uses ascii_to_utf8 to read, utf8_to_ascii to write
    koi8_data text ENCODING 'koi8_r',  -- uses koi8_r_to_utf8 to read, utf8_to_koi8_r to write
    json_data json ENCODING 'bson'     -- uses bson_to_json to read, json_to_bson to write
);

The problem with bson_to_json here is that it will probably not be
possible to write JSON in koi8_r, for example. But that case is not
even being considered in these discussions yet.

If the ENCODING machinery received not only the encoding name but also
the type OID, it would be possible to write encoders for TYPEs and
arrays of TYPEs (I had to do this using casts to bytea and protobuf to
minimize the storage size for an array of types when writing a lot of
data, which could be unpacked afterwards directly in the DB as normal
database types).
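The per-column ENCODING idea can be sketched in miniature. The class below is purely illustrative (Python codecs stand in for PostgreSQL conversions, and `Column` is an invented name, not a proposed API):

```python
# A column carries its own storage encoding, the way COLLATE attaches
# a collation per column; values are transcoded on write and on read.
class Column:
    def __init__(self, name, encoding):
        self.name = name
        self.encoding = encoding
        self.rows = []                    # stored as raw bytes

    def write(self, text):                # server text -> column storage
        self.rows.append(text.encode(self.encoding))

    def read(self, i):                    # column storage -> server text
        return self.rows[i].decode(self.encoding)

koi8_col = Column("koi8_data", "koi8_r")
koi8_col.write("привет")                  # Russian sample text
print(koi8_col.read(0))                   # round-trips losslessly
print(len(koi8_col.rows[0]))              # 6: one byte per character in KOI8-R
```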

I hope I made my point a little bit clearer.

Regards,

Valentine Gogichashvili

#28MauMau
maumau307@gmail.com
In reply to: Tatsuo Ishii (#26)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Tatsuo Ishii" <ishii@postgresql.org>

I don't think the bind placeholder case is a problem. It is processed
by exec_bind_message() in postgres.c, which has enough info about the
type of the placeholder, and I think we can easily deal with NCHAR
there. The same can be said of the COPY case.

Yes, I've learned it. Agreed. If we allow an encoding for NCHAR different
from the database encoding, we can convert text from the client encoding to
the NCHAR encoding in nchar_in() for example. We can retrieve the NCHAR
encoding from pg_database and store it in a global variable at session
start.

The problem is an ordinary query (simple protocol "Q" message), as you
pointed out. Encoding conversion happens at a very early stage (note
that the fast-path case has the same issue). If a query message
contains, say, both SHIFT-JIS and EUC-JP, then we are in trouble,
because the encoding conversion routine (pg_client_to_server) assumes
that the message from the client contains only one encoding. However,
my question is: does that really happen? Is there any text editor that
can create mixed SHIFT-JIS and EUC-JP text? My guess is that when a
user wants to use NCHAR for SHIFT-JIS text, the rest of the query
consists of either SHIFT-JIS or plain ASCII. If so, all the user needs
to do is set the client encoding to SHIFT-JIS, and everything should
be fine.

Maumau, is my guess correct?

Yes, I believe you are right. Regardless of whether we support multiple
encodings in one database or not, a single client encoding will be
sufficient for one session. When receiving the "Q" message, the whole SQL
text is converted from the client encoding to the database encoding. This
part needs no modification. During execution of the "Q" message, NCHAR
values are converted from the database encoding to the NCHAR encoding.

Thank you very much, Tatsuo san. Everybody, is there any other
challenge we should consider in supporting NCHAR/NVARCHAR as distinct
types?

Regards
MauMau


#29Robert Haas
robertmhaas@gmail.com
In reply to: MauMau (#24)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Fri, Sep 20, 2013 at 8:32 PM, MauMau <maumau307@gmail.com> wrote:

I don't think that you'll be able to
get consensus around that path on this mailing list.
I agree that the fact we have both varchar and text feels like a wart.

Is that right? I don't feel the varchar/text case is a wart. I think
text was introduced for a positive reason: to ease migration from
other DBMSs. The manual says:

http://www.postgresql.org/docs/current/static/datatype-character.html

"Although the type text is not in the SQL standard, several other SQL
database management systems have it as well."

And isn't EnterpriseDB doing similar things for Oracle compatibility,
although I'm not sure about the details? Could you share your idea why we
won't get consensus?

Sure, it's EnterpriseDB's policy to add features that facilitate
migrations from other databases - particularly Oracle - to our
product, Advanced Server, even if those features don't otherwise add
any value. However, the community is usually reluctant to add such
features to PostgreSQL. Also, at least up until now, the existing
aliasing of nchar and nchar varying to other data types has been
adequate for the needs of our customers, and we've handled a bunch of
other type-name incompatibilities with similar tricks. What you are
proposing goes off in a different direction from both PostgreSQL and
Advanced Server, and that's why I'm skeptical. If you were proposing
something that we were doing in Advanced Server with great success, it
would be a bit disingenuous of me to argue against doing the same
thing in PostgreSQL, but that's not the case.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#30Peter Eisentraut
peter_e@gmx.net
In reply to: MauMau (#28)
Re: UTF8 national character data type support WIP patch and list of open issues.

On 9/23/13 2:53 AM, MauMau wrote:

Yes, I believe you are right. Regardless of whether we support multiple
encodings in one database or not, a single client encoding will be
sufficient for one session. When receiving the "Q" message, the whole
SQL text is converted from the client encoding to the database
encoding. This part needs no modification. During execution of the "Q"
message, NCHAR values are converted from the database encoding to the
NCHAR encoding.

That assumes that the conversion client encoding -> server encoding ->
NCHAR encoding is not lossy. I thought one main point of this exercise
was to avoid these conversions and be able to go straight from the
client encoding into NCHAR.


#31MauMau
maumau307@gmail.com
In reply to: Robert Haas (#29)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Robert Haas" <robertmhaas@gmail.com>

Sure, it's EnterpriseDB's policy to add features that facilitate
migrations from other databases - particularly Oracle - to our
product, Advanced Server, even if those features don't otherwise add
any value. However, the community is usually reluctant to add such
features to PostgreSQL. Also, at least up until now, the existing
aliasing of nchar and nchar varying to other data types has been
adequate for the needs of our customers, and we've handled a bunch of
other type-name incompatibilities with similar tricks. What you are
proposing goes off in a different direction from both PostgreSQL and
Advanced Server, and that's why I'm skeptical. If you were proposing
something that we were doing in Advanced Server with great success, it
would be a bit disingenuous of me to argue against doing the same
thing in PostgreSQL, but that's not the case.

Sorry, I didn't mean to imitate EnterpriseDB. My intent is just to
increase the popularity of PostgreSQL (or prevent a drop in
popularity?). NCHAR is so basic that we can/should accept proper
support.

Aliasing would be nice to some extent, if its official support were
documented in the PG manual. However, mere aliasing loses NCHAR type
information through pg_dump. This is contrary to one benefit of
pg_dump -- allowing migration from PG to other DBMSs, possibly for
performance comparison:

http://www.postgresql.org/docs/current/static/app-pgdump.html

"Script files can be used to reconstruct the database even on other machines
and other architectures; with some modifications, even on other SQL database
products."

In addition, distinct types for NCHAR/NVARCHAR allow future extensions
such as a separate encoding for NCHAR (e.g., UTF-16).

Regards
MauMau


#32MauMau
maumau307@gmail.com
In reply to: Peter Eisentraut (#30)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Peter Eisentraut" <peter_e@gmx.net>

That assumes that the conversion client encoding -> server encoding ->
NCHAR encoding is not lossy.

Yes, so Tatsuo san suggested restricting the server encoding <-> NCHAR
encoding combinations to those with lossless conversion.

I thought one main point of this exercise was to avoid these
conversions and be able to go straight from the client encoding into
NCHAR.

It's slightly different. Please see the following excerpt:

/messages/by-id/B1A7485194DE4FDAB8FA781AFB570079@maumau

"4. I guess some users really want to continue to use ShiftJIS or
EUC_JP as the database encoding, and use NCHAR for a limited set of
columns to store international text in Unicode:
- to avoid code conversion between the server and the client, for
performance
- because ShiftJIS and EUC_JP require less storage (2 bytes for most
Kanji) than UTF-8 (3 bytes)
This use case is described in chapter 6 of the "Oracle Database
Globalization Support Guide"."

Regards
MauMau


#33Peter Eisentraut
peter_e@gmx.net
In reply to: MauMau (#32)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Tue, 2013-09-24 at 21:04 +0900, MauMau wrote:

"4. I guess some users really want to continue to use ShiftJIS or
EUC_JP as the database encoding, and use NCHAR for a limited set of
columns to store international text in Unicode:
- to avoid code conversion between the server and the client, for
performance
- because ShiftJIS and EUC_JP require less storage (2 bytes for most
Kanji) than UTF-8 (3 bytes)
This use case is described in chapter 6 of the "Oracle Database
Globalization Support Guide"."

But your proposal wouldn't address the first point, because data would
have to go client -> server -> NCHAR.

The second point is valid, but it's going to be an awful amount of work
for that limited result.


#34MauMau
maumau307@gmail.com
In reply to: Peter Eisentraut (#33)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Peter Eisentraut" <peter_e@gmx.net>

On Tue, 2013-09-24 at 21:04 +0900, MauMau wrote:

"4. I guess some users really want to continue to use ShiftJIS or
EUC_JP as the database encoding, and use NCHAR for a limited set of
columns to store international text in Unicode:
- to avoid code conversion between the server and the client, for
performance
- because ShiftJIS and EUC_JP require less storage (2 bytes for most
Kanji) than UTF-8 (3 bytes)
This use case is described in chapter 6 of the "Oracle Database
Globalization Support Guide"."

But your proposal wouldn't address the first point, because data would
have to go client -> server -> NCHAR.

The second point is valid, but it's going to be an awful amount of work
for that limited result.

I (or rather, Oracle's use case) meant the following, for example:

initdb -E EUC_JP
CREATE DATABASE mydb ENCODING EUC_JP NATIONAL ENCODING UTF-8;
CREATE TABLE mytable (
    col1 char(10),  -- EUC_JP text
    col2 nchar(10)  -- UTF-8 text
);
client encoding = EUC_JP

That is,

1. Currently, the user handles only Japanese text. To avoid
unnecessary conversion, he uses EUC_JP for both client and server.
2. He needs to store a limited amount of international (non-Japanese)
text in a few columns for a new feature of the system. Because that
international text is limited, he is willing to accept the costs of
code conversion and of more bytes per character for those columns
only.
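The byte-count claim behind this use case is easy to verify with Python's codecs (a stand-in for the server encodings, not PostgreSQL code):

```python
text = "日本語データ"                  # typical Kanji/Katakana content
print(len(text.encode("euc_jp")))       # 12: 2 bytes per character
print(len(text.encode("utf-8")))        # 18: 3 bytes per character
```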

Regards
MauMau


#35Greg Stark
stark@mit.edu
In reply to: MauMau (#32)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Tue, Sep 24, 2013 at 1:04 PM, MauMau <maumau307@gmail.com> wrote:

Yes, so Tatsuo san suggested to restrict server encoding <-> NCHAR
encoding combination to those with lossless conversion.

If it's not lossy then what's the point? From the client's point of view
it'll be functionally equivalent to text then.

--
greg

#36MauMau
maumau307@gmail.com
In reply to: Greg Stark (#35)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Greg Stark" <stark@mit.edu>

If it's not lossy then what's the point? From the client's point of view
it'll be functionally equivalent to text then.

Sorry, what Tatsuo san suggested was "same or compatible", not
lossless. I quote the relevant part below. This is enough for the use
case I mentioned in my previous mail several hours ago (actually, it
is what the Oracle manual describes...).

/messages/by-id/20130920.085853.1628917054830864151.t-ishii@sraoss.co.jp

[Excerpt]
----------------------------------------
What about limiting to use NCHAR with a database which has same
encoding or "compatible" encoding (on which the encoding conversion is
defined)? This way, NCHAR text can be automatically converted from
NCHAR to the database encoding in the server side thus we can treat
NCHAR exactly same as CHAR afterward. I suppose what encoding is used
for NCHAR should be defined in initdb time or creation of the database
(if we allow this, we need to add a new column to know what encoding
is used for NCHAR).

For example, "CREATE TABLE t1(t NCHAR(10))" will succeed if NCHAR is
UTF-8 and database encoding is UTF-8. Even succeed if NCHAR is
SHIFT-JIS and database encoding is UTF-8 because there is a conversion
between UTF-8 and SHIFT-JIS. However will not succeed if NCHAR is
SHIFT-JIS and database encoding is ISO-8859-1 because there's no
conversion between them.
----------------------------------------
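The rule in the excerpt could be sketched as follows. This is purely
illustrative: the NATIONAL ENCODING clause is the hypothetical syntax
discussed earlier in this thread, not something PostgreSQL accepts, and
the database names are made up.

```sql
-- Hypothetical: fix the NCHAR encoding at database creation time.
CREATE DATABASE db_utf8  ENCODING 'UTF8'   NATIONAL ENCODING 'UTF8';
CREATE DATABASE db_mixed ENCODING 'UTF8'   NATIONAL ENCODING 'SJIS';
CREATE DATABASE db_bad   ENCODING 'LATIN1' NATIONAL ENCODING 'SJIS';

-- In db_utf8 and db_mixed this succeeds: a conversion exists from the
-- NCHAR encoding to the database encoding, so NCHAR values can be
-- converted on input and treated exactly like CHAR afterward.
CREATE TABLE t1 (t NCHAR(10));

-- In db_bad the same statement would fail, because no SJIS <-> LATIN1
-- conversion is defined.
```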

Regards
MauMau


#37Arulappan, Arul Shaji
arul@fast.au.fujitsu.com
In reply to: MauMau (#36)
1 attachment(s)
Re: UTF8 national character data type support WIP patch and list of open issues.

Attached is a patch that implements the first set of changes discussed
in this thread originally. They are:

(i) Implements NCHAR/NVARCHAR as distinct data types, not as synonyms so
that:
- psql \d can display the user-specified data types.
- pg_dump/pg_dumpall can output NCHAR/NVARCHAR columns as-is,
not as CHAR/VARCHAR.
- Groundwork to implement additional features for NCHAR/NVARCHAR
in the future (For eg: separate encoding for nchar columns).
(ii) Support for NCHAR/NVARCHAR in ECPG
(iii) Documentation changes to reflect the new data type
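
Assuming the patch applies cleanly, the user-visible effect of point (i)
might look like the following session (a sketch only; the table and
column names are made up):

```sql
CREATE TABLE t_nchar (a national character(4), b nvarchar(5));
-- \d t_nchar would now report the declared national character types,
-- and pg_dump would emit them as-is instead of folding to char/varchar:
--   a | national character(4)
--   b | national character varying(5)
INSERT INTO t_nchar VALUES (N'ok', N'good');
```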

Rgds,
Arul Shaji

-----Original Message-----
From: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-
owner@postgresql.org] On Behalf Of MauMau

From: "Greg Stark" <stark@mit.edu>

If it's not lossy then what's the point? From the client's point of
view it'll be functionally equivalent to text then.

Sorry, what Tatsuo san suggested meant was "same or compatible", not
lossy. I quote the relevant part below. This is enough for the use case
I mentioned in my previous mail several hours ago (actually, that is
what Oracle manual describes...).

/messages/by-id/20130920.085853.1628917054830864151.t-ishii@sraoss.co.jp

[Excerpt]
----------------------------------------
What about limiting to use NCHAR with a database which has same
encoding or "compatible" encoding (on which the encoding conversion is
defined)? This way, NCHAR text can be automatically converted from
NCHAR to the database encoding in the server side thus we can treat
NCHAR exactly same as CHAR afterward. I suppose what encoding is used
for NCHAR should be defined in initdb time or creation of the database
(if we allow this, we need to add a new column to know what encoding
is used for NCHAR).

For example, "CREATE TABLE t1(t NCHAR(10))" will succeed if NCHAR is
UTF-8 and database encoding is UTF-8. Even succeed if NCHAR is
SHIFT-JIS and database encoding is UTF-8 because there is a conversion
between UTF-8 and SHIFT-JIS. However will not succeed if NCHAR is
SHIFT-JIS and database encoding is ISO-8859-1 because there's no
conversion between them.
----------------------------------------

Regards
MauMau


Attachments:

PGHEAD_nchar_v1.patch (application/octet-stream)
diff -uNr postgresql-head-20131017/doc/src/sgml/datatype.sgml postgresql-head-20131017-nchar/doc/src/sgml/datatype.sgml
--- postgresql-head-20131017/doc/src/sgml/datatype.sgml	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/doc/src/sgml/datatype.sgml	2013-10-22 16:12:19.000000000 +1100
@@ -161,6 +161,18 @@
       </row>
 
       <row>
+       <entry><type>national character varying [ (<replaceable>n</replaceable>) ]</type></entry>
+       <entry><type>nvarchar [ (<replaceable>n</replaceable>) ]</type></entry>
+       <entry>variable-length national character string</entry>
+      </row>
+
+      <row>
+       <entry><type>national character [ (<replaceable>n</replaceable>) ]</type></entry>
+       <entry><type>nchar [ (<replaceable>n</replaceable>) ]</type></entry>
+       <entry>fixed-length national character string</entry>
+      </row>
+
+      <row>
        <entry><type>numeric [ (<replaceable>p</replaceable>,
          <replaceable>s</replaceable>) ]</type></entry>
        <entry><type>decimal [ (<replaceable>p</replaceable>,
@@ -1014,6 +1026,14 @@
         <entry><type>text</type></entry>
         <entry>variable unlimited length</entry>
        </row>
+       <row>
+        <entry><type>national character varying(<replaceable>n</>)</type>, <type>nvarchar(<replaceable>n</>)</type></entry>
+        <entry>variable-length national character string</entry>
+       </row>
+       <row>
+        <entry><type>national character(<replaceable>n</>)</type>, <type>nchar(<replaceable>n</>)</type></entry>
+        <entry>fixed-length national character string</entry>
+       </row>
      </tbody>
      </tgroup>
     </table>
@@ -1040,7 +1060,15 @@
     shorter
     string.
    </para>
-
+   <para>
+    The data types <type>national character(<replaceable>n</>)</type> and 
+    <type>national character varying(<replaceable>n</>)</type> can be 
+    used to store data in a specific encoding. Currently, the encoding for
+    <type>national character</type> and <type>national character varying</type>
+    data types is limited to the database encoding. As a result, declaring a
+    column as <type>national character</type> is equivalent to declaring 
+    the column as <type>character</type>.
+   </para>
    <para>
     If one explicitly casts a value to <type>character
     varying(<replaceable>n</>)</type> or
@@ -1060,6 +1088,15 @@
     without length specifier, the type accepts strings of any size. The
     latter is a <productname>PostgreSQL</> extension.
    </para>
+   <para>
+    The notations <type>nvarchar(<replaceable>n</>)</type> and
+    <type>nchar(<replaceable>n</>)</type> are aliases 
+    for <type>national character varying(<replaceable>n</>)</type> and
+    <type>national character(<replaceable>n</>)</type>, respectively.
+    <type>national character</type> without length specifier is equivalent to
+    <type>national character(1)</type>. If <type>national character varying</type> 
+    is used without length specifier, the type accepts strings of any size.
+   </para>
 
    <para>
     In addition, <productname>PostgreSQL</productname> provides the
@@ -1070,50 +1107,53 @@
    </para>
 
    <para>
-    Values of type <type>character</type> are physically padded
-    with spaces to the specified width <replaceable>n</>, and are
-    stored and displayed that way.  However, the padding spaces are
+    Values of types <type>character</type> and <type>national character</type>
+    are physically padded with spaces to the specified width <replaceable>n</>, 
+    and are stored and displayed that way.  However, the padding spaces are
     treated as semantically insignificant.  Trailing spaces are
-    disregarded when comparing two values of type <type>character</type>,
-    and they will be removed when converting a <type>character</type> value
+    disregarded when comparing two values of type <type>character</type> or
+    <type>national character</type>, and they will be removed when converting 
+    a <type>character</type> or <type>national character</type> value
     to one of the other string types.  Note that trailing spaces
     <emphasis>are</> semantically significant in
-    <type>character varying</type> and <type>text</type> values, and
-    when using pattern matching, e.g. <literal>LIKE</>,
+    <type>character varying</type>, <type>national character varying</type>,
+    and <type>text</type> values, and when using pattern matching, e.g. <literal>LIKE</>,
     regular expressions.
    </para>
 
    <para>
     The storage requirement for a short string (up to 126 bytes) is 1 byte
     plus the actual string, which includes the space padding in the case of
-    <type>character</type>.  Longer strings have 4 bytes of overhead instead
-    of 1.  Long strings are compressed by the system automatically, so
-    the physical requirement on disk might be less. Very long values are also
-    stored in background tables so that they do not interfere with rapid
-    access to shorter column values. In any case, the longest
+    <type>character</type> or <type>national character</type>.  Longer strings have 
+    4 bytes of overhead instead of 1.  Long strings are compressed by the 
+    system automatically, so the physical requirement on disk might be less. 
+    Very long values are also stored in background tables so that they do not 
+    interfere with rapid access to shorter column values. In any case, the longest
     possible character string that can be stored is about 1 GB. (The
     maximum value that will be allowed for <replaceable>n</> in the data
     type declaration is less than that. It wouldn't be useful to
     change this because with multibyte character encodings the number of
     characters and bytes can be quite different. If you desire to
     store long strings with no specific upper limit, use
-    <type>text</type> or <type>character varying</type> without a length
+    <type>text</type> or <type>character varying</type> or <type>
+    national character varying</type> without a length
     specifier, rather than making up an arbitrary length limit.)
    </para>
 
    <tip>
     <para>
-     There is no performance difference among these three types,
+     There is no performance difference among these five types,
      apart from increased storage space when using the blank-padded
      type, and a few extra CPU cycles to check the length when storing into
      a length-constrained column.  While
-     <type>character(<replaceable>n</>)</type> has performance
-     advantages in some other database systems, there is no such advantage in
-     <productname>PostgreSQL</productname>; in fact
-     <type>character(<replaceable>n</>)</type> is usually the slowest of
-     the three because of its additional storage costs.  In most situations
-     <type>text</type> or <type>character varying</type> should be used
-     instead.
+     <type>character(<replaceable>n</>)</type> and <type>national character
+     (<replaceable>n</>)</type> have performance advantages in some other database
+     systems, there is no such advantage in <productname>PostgreSQL</productname>;
+     in fact <type>character(<replaceable>n</>)</type> and <type>national character
+     (<replaceable>n</>)</type> are usually the slowest of
+     the five because of their additional storage costs.  In most situations
+     <type>text</type>, <type>character varying</type>, or
+     <type>national character varying</type> should be used instead.
     </para>
    </tip>
 
@@ -1153,6 +1193,31 @@
  good  |           5
  too l |           5
 </computeroutput>
+
+CREATE TABLE test3 (a national character(4));
+INSERT INTO test3 VALUES (N'ok');
+SELECT a, char_length(a) FROM test3;
+<computeroutput>
+  a   | char_length
+------+-------------
+ ok   |           2
+</computeroutput>
+
+CREATE TABLE test4 (b nvarchar(5));
+INSERT INTO test4 VALUES (N'ok');
+INSERT INTO test4 VALUES (N'good      ');
+INSERT INTO test4 VALUES (N'too long');
+<computeroutput>ERROR:  value too long for type character varying(5)</computeroutput>
+INSERT INTO test4 VALUES ('too long'::nvarchar(5)); -- explicit truncation
+SELECT b, char_length(b) FROM test4;
+<computeroutput>
+   b   | char_length
+-------+-------------
+ ok    |           2
+ good  |           5
+ too l |           5
+</computeroutput>
+
 </programlisting>
     <calloutlist>
      <callout arearefs="co.datatype-char">
@@ -4692,5 +4757,4 @@
    </para>
 
   </sect1>
-
  </chapter>
diff -uNr postgresql-head-20131017/doc/src/sgml/ecpg.sgml postgresql-head-20131017-nchar/doc/src/sgml/ecpg.sgml
--- postgresql-head-20131017/doc/src/sgml/ecpg.sgml	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/doc/src/sgml/ecpg.sgml	2013-10-22 16:03:40.000000000 +1100
@@ -890,6 +890,12 @@
        <entry><type>boolean</type></entry>
        <entry><type>bool</type><footnote><para>declared in <filename>ecpglib.h</filename> if not native</para></footnote></entry>
       </row>
+      
+      <row>
+       <entry><type>nchar(<replaceable>n</>)</type>, <type>nvarchar(<replaceable>n</>)</type></entry>
+       <entry><type>NCHAR[<replaceable>n</>+1]</type>, <type>NVARCHAR[<replaceable>n</>+1]</type></entry>
+      </row>
+      
      </tbody>
     </tgroup>
    </table>
@@ -968,6 +974,81 @@
     </para>
    </sect3>
 
+   <sect3 id="ecpg-nchar">
+    <title>Handling National Character Strings</title>
+
+    <para>
+     To handle SQL national character string data types, such
+     as <type>nvarchar</type> and <type>nchar</type>, there are two
+     possible ways to declare the host variables.
+    </para>
+
+    <para>
+     One way is using <type>NCHAR[]</type>, an array of
+     <type>NCHAR</type>, which is internally mapped to an array of <type>char</type>, the most common way to handle
+     character data in C.
+<programlisting>
+EXEC SQL BEGIN DECLARE SECTION;
+    NCHAR str[50];
+EXEC SQL END DECLARE SECTION;
+</programlisting>
+     Note that you have to take care of the length yourself.  If you
+     use this host variable as the target variable of a query which
+     returns a string with more than 49 characters, a buffer overflow
+     occurs.
+    </para>
+
+    <para>
+     The other way is using the <type>NVARCHAR</type> type, which is a
+     special type provided by ECPG.  The definition on an array of
+     type <type>NVARCHAR</type> is converted into a
+     named <type>struct</> for every variable. A declaration like:
+<programlisting>
+NVARCHAR var[180];
+</programlisting>
+     is converted into:
+<programlisting>
+struct varchar_var { int len; char arr[180]; } var;
+</programlisting>
+     The member <structfield>arr</structfield> hosts the string
+     including a terminating zero byte.  Thus, to store a string in
+     a <type>NVARCHAR</type> host variable, the host variable has to be
+     declared with the length including the zero byte terminator.  The
+     member <structfield>len</structfield> holds the length of the
+     string stored in the <structfield>arr</structfield> without the
+     terminating zero byte.  When a host variable is used as input for
+     a query, if <literal>strlen(arr)</literal>
+     and <structfield>len</structfield> are different, the shorter one
+     is used.
+    </para>
+
+    <para>
+     Two or more <type>NVARCHAR</type> host variables cannot be defined
+     in a single statement.  The following code will confuse
+     the <command>ecpg</command> preprocessor:
+<programlisting>
+NVARCHAR v1[128], v2[128];   /* WRONG */
+</programlisting>
+     Two variables should be defined in separate statements like this:
+<programlisting>
+NVARCHAR v1[128];
+NVARCHAR v2[128];
+</programlisting>
+    </para>
+
+    <para>
+     <type>NVARCHAR</type> and <type>NCHAR</type> can be written in upper or lower case, but
+     not in mixed case.
+    </para>
+
+    <para>
+     <type>NCHAR</type> and <type>NVARCHAR</type> host variables can
+     also hold values of other SQL types, which will be stored in
+     their string forms. The <type>NVARCHAR</type> type is similar to
+     <type>VARCHAR</type> and is provided mainly for SQL standard compatibility.
+    </para>
+   </sect3>
+
    <sect3 id="ecpg-special-types">
     <title>Accessing Special Data Types</title>
 
diff -uNr postgresql-head-20131017/doc/src/sgml/syntax.sgml postgresql-head-20131017-nchar/doc/src/sgml/syntax.sgml
--- postgresql-head-20131017/doc/src/sgml/syntax.sgml	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/doc/src/sgml/syntax.sgml	2013-10-22 15:53:09.000000000 +1100
@@ -553,6 +553,32 @@
     </para>
    </sect3>
 
+   <sect3 id="sql-syntax-strings-N">
+    <title>String Constants with the N Prefix</title>
+
+    <indexterm  zone="sql-syntax-strings-N">
+     <primary>Prefix N</primary>
+     <secondary>in string constants</secondary>
+    </indexterm>
+
+    <para>
+    The standard syntax for specifying national character string constants
+    is to add the prefix <literal>N</literal> to the string constant.
+
+    The following trivial example shows how to pass a national character
+    string constant with the <literal>N</literal> prefix:
+    
+     <programlisting>
+            INSERT INTO test VALUES (N'ok');
+     </programlisting>
+    
+    </para>
+    <para>
+    Note that this prefix is optional; national character string constants can also be specified simply bounded by single quotes (<literal>'</literal>), for example
+     <literal>'ok'</literal>.
+    </para>
+   </sect3>
+
    <sect3 id="sql-syntax-dollar-quoting">
     <title>Dollar-quoted String Constants</title>
 
diff -uNr postgresql-head-20131017/src/backend/catalog/information_schema.sql postgresql-head-20131017-nchar/src/backend/catalog/information_schema.sql
--- postgresql-head-20131017/src/backend/catalog/information_schema.sql	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/backend/catalog/information_schema.sql	2013-10-17 11:56:22.000000000 +1100
@@ -79,7 +79,7 @@
 $$SELECT
   CASE WHEN $2 = -1 /* default typmod */
        THEN null
-       WHEN $1 IN (1042, 1043) /* char, varchar */
+       WHEN $1 IN (1042, 1043, 5001, 6001) /* char, varchar, nchar, nvarchar */
        THEN $2 - 4
        WHEN $1 IN (1560, 1562) /* bit, varbit */
        THEN $2
@@ -92,7 +92,7 @@
     RETURNS NULL ON NULL INPUT
     AS
 $$SELECT
-  CASE WHEN $1 IN (25, 1042, 1043) /* text, char, varchar */
+  CASE WHEN $1 IN (25, 1042, 1043, 5001, 6001) /* text, char, varchar, nchar, nvarchar */
        THEN CASE WHEN $2 = -1 /* default typmod */
                  THEN CAST(2^30 AS integer)
                  ELSE information_schema._pg_char_max_length($1, $2) *
diff -uNr postgresql-head-20131017/src/backend/optimizer/path/indxpath.c postgresql-head-20131017-nchar/src/backend/optimizer/path/indxpath.c
--- postgresql-head-20131017/src/backend/optimizer/path/indxpath.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/backend/optimizer/path/indxpath.c	2013-10-31 10:58:24.000000000 +1100
@@ -4003,6 +4003,8 @@
 		case TEXTOID:
 		case VARCHAROID:
 		case BPCHAROID:
+		case NVARCHAROID:
+		case NBPCHAROID:
 			collation = DEFAULT_COLLATION_OID;
 			constlen = -1;
 			break;
diff -uNr postgresql-head-20131017/src/backend/parser/gram.y postgresql-head-20131017-nchar/src/backend/parser/gram.y
--- postgresql-head-20131017/src/backend/parser/gram.y	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/backend/parser/gram.y	2013-11-04 10:57:47.000000000 +1100
@@ -63,6 +63,7 @@
 #include "utils/datetime.h"
 #include "utils/numeric.h"
 #include "utils/xml.h"
+#include "mb/pg_wchar.h"
 
 
 /*
@@ -442,9 +443,11 @@
 				GenericType Numeric opt_float
 				Character ConstCharacter
 				CharacterWithLength CharacterWithoutLength
+				NCharacterWithLength NCharacterWithoutLength
 				ConstDatetime ConstInterval
 				Bit ConstBit BitWithLength BitWithoutLength
 %type <str>		character
+%type <str>		ncharacter
 %type <str>		extract_arg
 %type <str>		opt_charset
 %type <boolean> opt_varying opt_timezone opt_no_inherit
@@ -564,7 +567,7 @@
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
-	NULLS_P NUMERIC
+	NULLS_P NUMERIC NVARCHAR
 
 	OBJECT_P OF OFF OFFSET OIDS ON ONLY OPERATOR OPTION OPTIONS OR
 	ORDER ORDINALITY OUT_P OUTER_P OVER OVERLAPS OVERLAY OWNED OWNER
@@ -10302,6 +10305,14 @@
 				{
 					$$ = $1;
 				}
+			| NCharacterWithLength
+				{
+					$$ = $1;
+				}
+			| NCharacterWithoutLength
+				{
+					$$ = $1;
+				}
 		;
 
 ConstCharacter:  CharacterWithLength
@@ -10319,6 +10330,68 @@
 					$$ = $1;
 					$$->typmods = NIL;
 				}
+			| NCharacterWithLength
+				{
+					$$ = $1;
+				}
+			| NCharacterWithoutLength
+				{
+					/* Length was not specified so allow to be unrestricted.
+					 * This handles problems with fixed-length (bpchar) strings
+					 * which in column definitions must default to a length
+					 * of one, but should not be constrained if the length
+					 * was not specified.
+					 */
+					$$ = $1;
+					$$->typmods = NIL;
+				}
+		;
+
+NCharacterWithLength:  NATIONAL ncharacter '(' Iconst ')' opt_charset
+				{
+					if (($6 != NULL) && (strcmp($6, "sql_text") != 0))
+					{
+						char *type;
+
+						type = palloc(strlen($2) + 1 + strlen($6) + 1);
+						strcpy(type, $2);
+						strcat(type, "_");
+						strcat(type, $6);
+						$2 = type;
+					}
+					$$ = SystemTypeName($2);
+					$$->typmods = list_make1(makeIntConst($4, @4));
+					$$->location = @1;
+				}
+		;
+
+NCharacterWithoutLength: NATIONAL ncharacter opt_charset
+				{
+					if (($3 != NULL) && (strcmp($3, "sql_text") != 0))
+					{
+						char *type;
+
+						type = palloc(strlen($2) + 1 + strlen($3) + 1);
+						strcpy(type, $2);
+						strcat(type, "_");
+						strcat(type, $3);
+						$2 = type;
+					}
+					$$ = SystemTypeName($2);
+					/* nchar defaults to nchar(1), nvarchar to no limit */
+					if (strcmp($2, "nbpchar") == 0)
+						$$->typmods = list_make1(makeIntConst(1, -1));
+
+					$$->location = @1;
+				}
+		;
+
+ncharacter:  CHARACTER opt_varying
+				{ $$ = $2 ? "nvarchar": "nbpchar"; }
+			| CHAR_P opt_varying
+				{ $$ = $2 ? "nvarchar": "nbpchar"; }
+			| VARCHAR
+				{ $$ = "nvarchar"; }
 		;
 
 CharacterWithLength:  character '(' Iconst ')' opt_charset
@@ -10339,8 +10412,8 @@
 
 					$$ = SystemTypeName($1);
 
-					/* char defaults to char(1), varchar to no limit */
-					if (strcmp($1, "bpchar") == 0)
+					/* [n]char defaults to [national ]char(1), varchar to no limit */
+					if ((strcmp($1, "bpchar") == 0) || (strcmp($1, "nbpchar") == 0))
 						$$->typmods = list_make1(makeIntConst(1, -1));
 
 					$$->location = @1;
@@ -10353,12 +10426,10 @@
 										{ $$ = $2 ? "varchar": "bpchar"; }
 			| VARCHAR
 										{ $$ = "varchar"; }
-			| NATIONAL CHARACTER opt_varying
-										{ $$ = $3 ? "varchar": "bpchar"; }
-			| NATIONAL CHAR_P opt_varying
-										{ $$ = $3 ? "varchar": "bpchar"; }
 			| NCHAR opt_varying
-										{ $$ = $2 ? "varchar": "bpchar"; }
+										{ $$ = $2 ? "nvarchar": "nbpchar"; }
+			| NVARCHAR
+										{ $$ = "nvarchar"; }
 		;
 
 opt_varying:
@@ -12748,6 +12819,7 @@
 			| NONE
 			| NULLIF
 			| NUMERIC
+			| NVARCHAR
 			| OUT_P
 			| OVERLAY
 			| POSITION
diff -uNr postgresql-head-20131017/src/backend/parser/scan.l postgresql-head-20131017-nchar/src/backend/parser/scan.l
--- postgresql-head-20131017/src/backend/parser/scan.l	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/backend/parser/scan.l	2013-10-17 11:56:22.000000000 +1100
@@ -482,7 +482,7 @@
 					SET_YYLLOC();
 					yyless(1);				/* eat only 'n' this time */
 
-					keyword = ScanKeywordLookup("nchar",
+					keyword = ScanKeywordLookup("nvarchar",
 												yyextra->keywords,
 												yyextra->num_keywords);
 					if (keyword != NULL)
diff -uNr postgresql-head-20131017/src/backend/utils/adt/Makefile postgresql-head-20131017-nchar/src/backend/utils/adt/Makefile
--- postgresql-head-20131017/src/backend/utils/adt/Makefile	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/backend/utils/adt/Makefile	2013-10-25 14:53:38.000000000 +1100
@@ -20,7 +20,7 @@
 	cash.o char.o date.o datetime.o datum.o domains.o \
 	enum.o float.o format_type.o \
 	geo_ops.o geo_selfuncs.o int.o int8.o json.o jsonfuncs.o like.o \
-	lockfuncs.o misc.o nabstime.o name.o numeric.o numutils.o \
+	lockfuncs.o misc.o nabstime.o name.o numeric.o numutils.o nvarchar.o \
 	oid.o oracle_compat.o pseudotypes.o rangetypes.o rangetypes_gist.o \
 	rowtypes.o regexp.o regproc.o ruleutils.o selfuncs.o \
 	tid.o timestamp.o varbit.o varchar.o varlena.o version.o xid.o \
diff -uNr postgresql-head-20131017/src/backend/utils/adt/format_type.c postgresql-head-20131017-nchar/src/backend/utils/adt/format_type.c
--- postgresql-head-20131017/src/backend/utils/adt/format_type.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/backend/utils/adt/format_type.c	2013-10-31 11:01:17.000000000 +1100
@@ -218,6 +218,17 @@
 				buf = pstrdup("character");
 			break;
 
+		case NBPCHAROID:
+			if (with_typemod)
+				buf = printTypmod("national character", typemod, typeform->typmodout);
+			else if (typemod_given)
+			{
+				/* See the comment for BPCHAR above */
+			}
+			else
+				buf = pstrdup("national character");
+			break;
+
 		case FLOAT4OID:
 			buf = pstrdup("real");
 			break;
@@ -293,6 +304,13 @@
 			else
 				buf = pstrdup("character varying");
 			break;
+
+		case NVARCHAROID:
+			if (with_typemod)
+				buf = printTypmod("national character varying", typemod, typeform->typmodout);
+			else
+				buf = pstrdup("national character varying");
+			break;
 	}
 
 	if (buf == NULL)
@@ -384,6 +402,8 @@
 	{
 		case BPCHAROID:
 		case VARCHAROID:
+		case NBPCHAROID:
+		case NVARCHAROID:
 			/* typemod includes varlena header */
 
 			/* typemod is in characters not bytes */
diff -uNr postgresql-head-20131017/src/backend/utils/adt/nvarchar.c postgresql-head-20131017-nchar/src/backend/utils/adt/nvarchar.c
--- postgresql-head-20131017/src/backend/utils/adt/nvarchar.c	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/backend/utils/adt/nvarchar.c	2013-10-30 14:32:02.000000000 +1100
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * nvarchar.c
+ *	  Functions for the built-in types nchar(n) and nvarchar(n).
+ *
+ * Portions Copyright (c) 1996-2013, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/utils/adt/nvarchar.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+
+#include "access/hash.h"
+#include "access/tuptoaster.h"
+#include "libpq/pqformat.h"
+#include "nodes/nodeFuncs.h"
+#include "utils/array.h"
+#include "utils/builtins.h"
+#include "mb/pg_wchar.h"
+
+
+/*
+ * actually just UTF8 stubs copied from char/varchar
+ */
+
+Datum
+nbpcharoctetlen(PG_FUNCTION_ARGS)
+{
+	Datum		arg = PG_GETARG_DATUM(0);
+
+	/* We need not detoast the input at all */
+	PG_RETURN_INT32(toast_raw_datum_size(arg) - VARHDRSZ);
+}
+
+Datum
+nvarcharoctetlen(PG_FUNCTION_ARGS)
+{
+	Datum		arg = PG_GETARG_DATUM(0);
+
+	/* We need not detoast the input at all */
+	PG_RETURN_INT32(toast_raw_datum_size(arg) - VARHDRSZ);
+}
+
diff -uNr postgresql-head-20131017/src/backend/utils/adt/selfuncs.c postgresql-head-20131017-nchar/src/backend/utils/adt/selfuncs.c
--- postgresql-head-20131017/src/backend/utils/adt/selfuncs.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/backend/utils/adt/selfuncs.c	2013-10-17 11:56:22.000000000 +1100
@@ -3626,7 +3626,9 @@
 			 */
 		case CHAROID:
 		case BPCHAROID:
+		case NBPCHAROID:
 		case VARCHAROID:
+		case NVARCHAROID:
 		case TEXTOID:
 		case NAMEOID:
 			{
@@ -3888,7 +3890,9 @@
 			val[1] = '\0';
 			break;
 		case BPCHAROID:
+		case NBPCHAROID:
 		case VARCHAROID:
+		case NVARCHAROID:
 		case TEXTOID:
 			val = TextDatumGetCString(value);
 			break;
@@ -5875,7 +5879,9 @@
 	{
 		case TEXTOID:
 		case VARCHAROID:
+		case NVARCHAROID:
 		case BPCHAROID:
+		case NBPCHAROID:
 			collation = DEFAULT_COLLATION_OID;
 			constlen = -1;
 			break;
diff -uNr postgresql-head-20131017/src/include/catalog/pg_amop.h postgresql-head-20131017-nchar/src/include/catalog/pg_amop.h
--- postgresql-head-20131017/src/include/catalog/pg_amop.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/catalog/pg_amop.h	2013-10-30 14:44:44.000000000 +1100
@@ -253,6 +253,12 @@
 DATA(insert (	426   1042 1042 4 s 1061	403 0 ));
 DATA(insert (	426   1042 1042 5 s 1060	403 0 ));
 
+DATA(insert (	426   5001 5001 1 s 5058	403 0 ));
+DATA(insert (	426   5001 5001 2 s 5059	403 0 ));
+DATA(insert (	426   5001 5001 3 s 5054	403 0 ));
+DATA(insert (	426   5001 5001 4 s 5061	403 0 ));
+DATA(insert (	426   5001 5001 5 s 5060	403 0 ));
+
 /*
  *	btree bytea_ops
  */
@@ -518,6 +524,10 @@
 
 /* bpchar_ops */
 DATA(insert (	427   1042 1042 1 s 1054	405 0 ));
+
+/* nbpchar_ops */
+DATA(insert (	427   5001 5001 1 s 5054	405 0 ));
+
 /* char_ops */
 DATA(insert (	431   18 18 1 s 92	405 0 ));
 /* date_ops */
diff -uNr postgresql-head-20131017/src/include/catalog/pg_amproc.h postgresql-head-20131017-nchar/src/include/catalog/pg_amproc.h
--- postgresql-head-20131017/src/include/catalog/pg_amproc.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/catalog/pg_amproc.h	2013-10-30 14:43:25.000000000 +1100
@@ -80,6 +80,7 @@
 DATA(insert (	423   1560 1560 1 1596 ));
 DATA(insert (	424   16 16 1 1693 ));
 DATA(insert (	426   1042 1042 1 1078 ));
+DATA(insert (	426   5001 5001 1 1078 ));
 DATA(insert (	428   17 17 1 1954 ));
 DATA(insert (	429   18 18 1 358 ));
 DATA(insert (	434   1082 1082 1 1092 ));
@@ -129,6 +130,7 @@
 DATA(insert (	2002   1562 1562 1 1672 ));
 DATA(insert (	2095   25 25 1 2166 ));
 DATA(insert (	2097   1042 1042 1 2180 ));
+DATA(insert (	2097   5001 5001 1 2180 ));
 DATA(insert (	2099   790 790 1  377 ));
 DATA(insert (	2233   703 703 1  380 ));
 DATA(insert (	2234   704 704 1  381 ));
@@ -139,6 +141,7 @@
 
 /* hash */
 DATA(insert (	427   1042 1042 1 1080 ));
+DATA(insert (	427   5001 5001 1 1080 ));
 DATA(insert (	431   18 18 1 454 ));
 DATA(insert (	435   1082 1082 1 450 ));
 DATA(insert (	627   2277 2277 1 626 ));
@@ -168,6 +171,7 @@
 DATA(insert (	2228   703 703 1 450 ));
 DATA(insert (	2229   25 25 1 400 ));
 DATA(insert (	2231   1042 1042 1 1080 ));
+DATA(insert (	2231   5001 5001 1 1080 ));
 DATA(insert (	2235   1033 1033 1 329 ));
 DATA(insert (	2969   2950 2950 1 2963 ));
 DATA(insert (	3523   3500 3500 1 3515 ));
diff -uNr postgresql-head-20131017/src/include/catalog/pg_cast.h postgresql-head-20131017-nchar/src/include/catalog/pg_cast.h
--- postgresql-head-20131017/src/include/catalog/pg_cast.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/catalog/pg_cast.h	2013-10-30 15:14:09.000000000 +1100
@@ -220,6 +220,20 @@
 DATA(insert ( 1042 1043  401 i f ));
 DATA(insert ( 1043	 25    0 i b ));
 DATA(insert ( 1043 1042    0 i b ));
+DATA(insert (	25 5001    0 i b ));
+DATA(insert (	25 6001    0 i b ));
+DATA(insert ( 5001	 25  401 i f ));
+DATA(insert ( 6001	 25    0 i b ));
+DATA(insert ( 1042 5001    0 a b ));
+DATA(insert ( 1042 6001  401 i f ));
+DATA(insert ( 5001 1042    0 i b ));
+DATA(insert ( 6001 1042    0 i b ));
+DATA(insert ( 1043 5001    0 i b ));
+DATA(insert ( 1043 6001    0 i b ));
+DATA(insert ( 5001 1043  401 i f ));
+DATA(insert ( 6001 1043    0 i b ));
+DATA(insert ( 5001 6001  401 i f ));
+DATA(insert ( 6001 5001    0 i b ));
 DATA(insert (	18	 25  946 i f ));
 DATA(insert (	18 1042  860 a f ));
 DATA(insert (	18 1043  946 a f ));
@@ -358,5 +372,7 @@
 DATA(insert ( 1560 1560 1685 i f ));
 DATA(insert ( 1562 1562 1687 i f ));
 DATA(insert ( 1700 1700 1703 i f ));
+DATA(insert ( 5001 5001 5668 i f ));
+DATA(insert ( 6001 6001 5669 i f ));
 
 #endif   /* PG_CAST_H */
diff -uNr postgresql-head-20131017/src/include/catalog/pg_opclass.h postgresql-head-20131017-nchar/src/include/catalog/pg_opclass.h
--- postgresql-head-20131017/src/include/catalog/pg_opclass.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/catalog/pg_opclass.h	2013-10-17 11:56:22.000000000 +1100
@@ -97,6 +97,8 @@
 DATA(insert (	403		bool_ops			PGNSP PGUID  424   16 t 0 ));
 DATA(insert (	403		bpchar_ops			PGNSP PGUID  426 1042 t 0 ));
 DATA(insert (	405		bpchar_ops			PGNSP PGUID  427 1042 t 0 ));
+DATA(insert (	403		nbpchar_ops			PGNSP PGUID  426 5001 t 0 ));
+DATA(insert (	405		nbpchar_ops			PGNSP PGUID  427 5001 t 0 ));
 DATA(insert (	403		bytea_ops			PGNSP PGUID  428   17 t 0 ));
 DATA(insert (	403		char_ops			PGNSP PGUID  429   18 t 0 ));
 DATA(insert (	405		char_ops			PGNSP PGUID  431   18 t 0 ));
@@ -157,6 +159,8 @@
 DATA(insert (	403		varbit_ops			PGNSP PGUID 2002 1562 t 0 ));
 DATA(insert (	403		varchar_ops			PGNSP PGUID 1994   25 f 0 ));
 DATA(insert (	405		varchar_ops			PGNSP PGUID 1995   25 f 0 ));
+DATA(insert (	403		nvarchar_ops		PGNSP PGUID 1994   25 f 0 ));
+DATA(insert (	405		nvarchar_ops		PGNSP PGUID 1995   25 f 0 ));
 DATA(insert OID = 3128 ( 403	timestamp_ops	PGNSP PGUID  434 1114 t 0 ));
 #define TIMESTAMP_BTREE_OPS_OID 3128
 DATA(insert (	405		timestamp_ops		PGNSP PGUID 2040 1114 t 0 ));
@@ -188,6 +192,7 @@
 DATA(insert (	2742	_bit_ops			PGNSP PGUID 2745  1561 t 1560 ));
 DATA(insert (	2742	_bool_ops			PGNSP PGUID 2745  1000 t 16 ));
 DATA(insert (	2742	_bpchar_ops			PGNSP PGUID 2745  1014 t 1042 ));
+DATA(insert (	2742	_nbpchar_ops		PGNSP PGUID 2745  5014 t 5001 ));
 DATA(insert (	2742	_bytea_ops			PGNSP PGUID 2745  1001 t 17 ));
 DATA(insert (	2742	_char_ops			PGNSP PGUID 2745  1002 t 18 ));
 DATA(insert (	2742	_cidr_ops			PGNSP PGUID 2745  651 t 650 ));
@@ -208,6 +213,7 @@
 DATA(insert (	2742	_timetz_ops			PGNSP PGUID 2745  1270 t 1266 ));
 DATA(insert (	2742	_varbit_ops			PGNSP PGUID 2745  1563 t 1562 ));
 DATA(insert (	2742	_varchar_ops		PGNSP PGUID 2745  1015 t 1043 ));
+DATA(insert (	2742	_nvarchar_ops		PGNSP PGUID 2745  5015 t 6001 ));
 DATA(insert (	2742	_timestamp_ops		PGNSP PGUID 2745  1115 t 1114 ));
 DATA(insert (	2742	_money_ops			PGNSP PGUID 2745  791 t 790 ));
 DATA(insert (	2742	_reltime_ops		PGNSP PGUID 2745  1024 t 703 ));
diff -uNr postgresql-head-20131017/src/include/catalog/pg_operator.h postgresql-head-20131017-nchar/src/include/catalog/pg_operator.h
--- postgresql-head-20131017/src/include/catalog/pg_operator.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/catalog/pg_operator.h	2013-10-31 11:11:29.000000000 +1100
@@ -1751,7 +1751,46 @@
 DATA(insert OID = 3967 (  "#>>"    PGNSP PGUID b f f 114 1009 25 0 0 json_extract_path_text_op - - ));
 DESCR("get value from json as text with path elements");
 
+DATA(insert OID = 5054 ( "="	   PGNSP PGUID b t t 5001 5001 16 5054 5057 nbpchareq eqsel eqjoinsel ));
+DESCR("equal");
+DATA(insert OID = 5055 ( "~"	   PGNSP PGUID b f f 5001 25 16    0 5056 nbpcharregexeq regexeqsel regexeqjoinsel ));
+DESCR("matches regular expression, case-sensitive");
+DATA(insert OID = 5056 ( "!~"	   PGNSP PGUID b f f 5001 25 16    0 5055 nbpcharregexne regexnesel regexnejoinsel ));
+DESCR("does not match regular expression, case-sensitive");
+DATA(insert OID = 5057 ( "<>"	   PGNSP PGUID b f f 5001 5001 16 5057 5054 nbpcharne neqsel neqjoinsel ));
+DESCR("not equal");
+DATA(insert OID = 5058 ( "<"	   PGNSP PGUID b f f 5001 5001 16 5060 5061 nbpcharlt scalarltsel scalarltjoinsel ));
+DESCR("less than");
+DATA(insert OID = 5059 ( "<="	   PGNSP PGUID b f f 5001 5001 16 5061 5060 nbpcharle scalarltsel scalarltjoinsel ));
+DESCR("less than or equal");
+DATA(insert OID = 5060 ( ">"	   PGNSP PGUID b f f 5001 5001 16 5058 5059 nbpchargt scalargtsel scalargtjoinsel ));
+DESCR("greater than");
+DATA(insert OID = 5061 ( ">="	   PGNSP PGUID b f f 5001 5001 16 5059 5058 nbpcharge scalargtsel scalargtjoinsel ));
+DESCR("greater than or equal");
 
+DATA(insert OID = 5211 (  "~~"	   PGNSP PGUID b f f  5001 25 16 0 5212 nbpcharlike likesel likejoinsel ));
+DESCR("matches LIKE expression");
+DATA(insert OID = 5212 (  "!~~"	   PGNSP PGUID b f f  5001 25 16 0 5211 nbpcharnlike nlikesel nlikejoinsel ));
+DESCR("does not match LIKE expression");
+
+DATA(insert OID = 5234 (  "~*"	   PGNSP PGUID b f f  5001 25 16 0 5235 nbpcharicregexeq icregexeqsel icregexeqjoinsel ));
+DESCR("matches regular expression, case-insensitive");
+DATA(insert OID = 5235 ( "!~*"	   PGNSP PGUID b f f  5001 25 16 0 5234 nbpcharicregexne icregexnesel icregexnejoinsel ));
+DESCR("does not match regular expression, case-insensitive");
+
+DATA(insert OID = 5326 ( "~<~"  PGNSP PGUID b f f 5001 5001 16 5330 5329 nbpchar_pattern_lt scalarltsel scalarltjoinsel ));
+DESCR("less than");
+DATA(insert OID = 5327 ( "~<=~" PGNSP PGUID b f f 5001 5001 16 5329 5330 nbpchar_pattern_le scalarltsel scalarltjoinsel ));
+DESCR("less than or equal");
+DATA(insert OID = 5329 ( "~>=~" PGNSP PGUID b f f 5001 5001 16 5327 5326 nbpchar_pattern_ge scalargtsel scalargtjoinsel ));
+DESCR("greater than or equal");
+DATA(insert OID = 5330 ( "~>~"  PGNSP PGUID b f f 5001 5001 16 5326 5327 nbpchar_pattern_gt scalargtsel scalargtjoinsel ));
+DESCR("greater than");
+
+DATA(insert OID = 5629 (  "~~*"	   PGNSP PGUID b f f  5001 25 16 0 5630 nbpchariclike iclikesel iclikejoinsel ));
+DESCR("matches LIKE expression, case-insensitive");
+DATA(insert OID = 5630 (  "!~~*"   PGNSP PGUID b f f  5001 25 16 0 5629 nbpcharicnlike icnlikesel icnlikejoinsel ));
+DESCR("does not match LIKE expression, case-insensitive");
 
 /*
  * function prototypes
diff -uNr postgresql-head-20131017/src/include/catalog/pg_proc.h postgresql-head-20131017-nchar/src/include/catalog/pg_proc.h
--- postgresql-head-20131017/src/include/catalog/pg_proc.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/catalog/pg_proc.h	2013-10-31 11:57:16.000000000 +1100
@@ -4741,6 +4741,82 @@
 /* event triggers */
 DATA(insert OID = 3566 (  pg_event_trigger_dropped_objects		PGNSP PGUID 12 10 100 0 0 f f f f t t s 0 0 2249 "" "{26,26,23,25,25,25,25}" "{o,o,o,o,o,o,o}" "{classid, objid, objsubid, object_type, schema_name, object_name, object_identity}" _null_ pg_event_trigger_dropped_objects _null_ _null_ _null_ ));
 DESCR("list objects dropped by the current command");
+
+DATA(insert OID = 5044 (  nbpcharin		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 5001 "2275 26 23" _null_ _null_ _null_ _null_ bpcharin _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5045 (  nbpcharout	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "5001" _null_ _null_ _null_ _null_ bpcharout _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5046 (  nvarcharin	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 6001 "2275 26 23" _null_ _null_ _null_ _null_ varcharin _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5047 (  nvarcharout	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "6001" _null_ _null_ _null_ _null_ varcharout _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5048 (  nbpchareq		   PGNSP PGUID 12 1 0 0 0 f f f t t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchareq _null_ _null_ _null_ ));
+DATA(insert OID = 5049 (  nbpcharlt		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpcharlt _null_ _null_ _null_ ));
+DATA(insert OID = 5050 (  nbpcharle		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpcharle _null_ _null_ _null_ ));
+DATA(insert OID = 5051 (  nbpchargt		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchargt _null_ _null_ _null_ ));
+DATA(insert OID = 5052 (  nbpcharge		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpcharge _null_ _null_ _null_ ));
+DATA(insert OID = 5053 (  nbpcharne		   PGNSP PGUID 12 1 0 0 0 f f f t t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpcharne _null_ _null_ _null_ ));
+
+DATA(insert OID = 5063 (  nbpchar_larger   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 5001 "5001 5001" _null_ _null_ _null_ _null_ bpchar_larger _null_ _null_ _null_ ));
+DESCR("larger of two");
+DATA(insert OID = 5064 (  nbpchar_smaller  PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 5001 "5001 5001" _null_ _null_ _null_ _null_ bpchar_smaller _null_ _null_ _null_ ));
+DESCR("smaller of two");
+DATA(insert OID = 5078 (  nbpcharcmp	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "5001 5001" _null_ _null_ _null_ _null_ bpcharcmp _null_ _null_ _null_ ));
+DESCR("less-equal-greater");
+
+DATA(insert OID = 5097 ( nvarchar_transform PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2281 "2281" _null_ _null_ _null_ _null_ varchar_transform _null_ _null_ _null_ ));
+DESCR("transform an nvarchar length coercion");
+
+DATA(insert OID = 5174 ( nbpchar_pattern_lt	  PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchar_pattern_lt _null_ _null_ _null_ ));
+DATA(insert OID = 5175 ( nbpchar_pattern_le	  PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchar_pattern_le _null_ _null_ _null_ ));
+DATA(insert OID = 5177 ( nbpchar_pattern_ge	  PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchar_pattern_ge _null_ _null_ _null_ ));
+DATA(insert OID = 5178 ( nbpchar_pattern_gt	  PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 5001" _null_ _null_ _null_ _null_ bpchar_pattern_gt _null_ _null_ _null_ ));
+DATA(insert OID = 5180 ( nbtbpchar_pattern_cmp PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 23 "5001 5001" _null_ _null_ _null_ _null_ btbpchar_pattern_cmp _null_ _null_ _null_ ));
+DESCR("less-equal-greater");
+
+DATA(insert OID = 5374 (  octet_length		  PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "5001" _null_ _null_ _null_ _null_ nbpcharoctetlen _null_ _null_ _null_ ));
+DESCR("octet length");
+DATA(insert OID = 5375 (  octet_length		  PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "6001" _null_ _null_ _null_ _null_ nvarcharoctetlen _null_ _null_ _null_ ));
+DESCR("octet length");
+
+DATA(insert OID = 5400 (  name		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 19 "6001" _null_ _null_ _null_ _null_       text_name _null_ _null_ _null_ ));
+DESCR("convert nvarchar to name");
+DATA(insert OID = 5401 (  nvarchar	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 6001 "19" _null_ _null_ _null_ _null_       name_text _null_ _null_ _null_ ));
+DESCR("convert name to nvarchar");
+
+DATA(insert OID = 5430 (  nbpcharrecv		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 3 0 5001 "2281 26 23" _null_ _null_ _null_ _null_  bpcharrecv _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5431 (  nbpcharsend		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 1 0 17 "5001" _null_ _null_ _null_ _null_       bpcharsend _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5432 (  nvarcharrecv		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 3 0 6001 "2281 26 23" _null_ _null_ _null_ _null_  varcharrecv _null_ _null_ _null_ ));
+DESCR("I/O");
+DATA(insert OID = 5433 (  nvarcharsend		   PGNSP PGUID 12 1 0 0 0 f f f f t f s 1 0 17 "6001" _null_ _null_ _null_ _null_       varcharsend _null_ _null_ _null_ ));
+DESCR("I/O");
+
+DATA(insert OID = 5631 (  nbpcharlike	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ textlike _null_ _null_ _null_ ));
+DATA(insert OID = 5632 (  nbpcharnlike	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ textnlike _null_ _null_ _null_ ));
+
+DATA(insert OID = 5656 (  nbpcharicregexeq	PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ texticregexeq _null_ _null_ _null_ ));
+DATA(insert OID = 5657 (  nbpcharicregexne	PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ texticregexne _null_ _null_ _null_ ));
+DATA(insert OID = 5658 (  nbpcharregexeq	PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ textregexeq _null_ _null_ _null_ ));
+DATA(insert OID = 5659 (  nbpcharregexne	PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ textregexne _null_ _null_ _null_ ));
+DATA(insert OID = 5660 (  nbpchariclike		PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ texticlike _null_ _null_ _null_ ));
+DATA(insert OID = 5661 (  nbpcharicnlike	PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 16 "5001 25" _null_ _null_ _null_ _null_ texticnlike _null_ _null_ _null_ ));
+
+DATA(insert OID = 5668 (  nbpchar			PGNSP PGUID 12 1 0 0 0 f f f f t f i 3 0 5001 "5001 23 16" _null_ _null_ _null_ _null_ bpchar _null_ _null_ _null_ ));
+DESCR("adjust nchar() to typmod length");
+DATA(insert OID = 5669 (  nvarchar			PGNSP PGUID 12 1 0 0 nvarchar_transform f f f f t f i 3 0 6001 "6001 23 16" _null_ _null_ _null_ _null_ varchar _null_ _null_ _null_ ));
+DESCR("adjust nvarchar() to typmod length");
+
+DATA(insert OID = 5913 (  nbpchartypmodin  PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "1263" _null_ _null_ _null_ _null_       bpchartypmodin _null_ _null_ _null_ ));
+DESCR("I/O typmod");
+DATA(insert OID = 5914 (  nbpchartypmodout PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "23" _null_ _null_ _null_ _null_       bpchartypmodout _null_ _null_ _null_ ));
+DESCR("I/O typmod");
+DATA(insert OID = 5915 (  nvarchartypmodin  PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 23 "1263" _null_ _null_ _null_ _null_       varchartypmodin _null_ _null_ _null_ ));
+DESCR("I/O typmod");
+DATA(insert OID = 5916 (  nvarchartypmodout PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2275 "23" _null_ _null_ _null_ _null_       varchartypmodout _null_ _null_ _null_ ));
+DESCR("I/O typmod");
+
 /*
  * Symbolic values for provolatile column: these indicate whether the result
  * of a function is dependent *only* on the values of its explicit arguments,
diff -uNr postgresql-head-20131017/src/include/catalog/pg_type.h postgresql-head-20131017-nchar/src/include/catalog/pg_type.h
--- postgresql-head-20131017/src/include/catalog/pg_type.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/catalog/pg_type.h	2013-10-30 16:07:44.000000000 +1100
@@ -669,6 +669,16 @@
 DATA(insert OID = 3831 ( anyrange		PGNSP PGUID  -1 f p P f t \054 0 0 0 anyrange_in anyrange_out - - - - - d x f 0 -1 0 0 _null_ _null_ _null_ ));
 #define ANYRANGEOID		3831
 
+DATA(insert OID = 5014 (  _nbpchar	 PGNSP PGUID -1 f b A f t \054 0 5001 0 array_in array_out array_recv array_send nbpchartypmodin nbpchartypmodout array_typanalyze i x f 0 -1 0 100 _null_ _null_ _null_ ));
+DATA(insert OID = 5015 (  _nvarchar	 PGNSP PGUID -1 f b A f t \054 0 6001 0 array_in array_out array_recv array_send nvarchartypmodin nvarchartypmodout array_typanalyze i x f 0 -1 0 100 _null_ _null_ _null_ ));
+
+/* national character types */
+DATA(insert OID = 5001 ( nbpchar	 PGNSP PGUID -1 f b S f t \054 0 0 5014 nbpcharin nbpcharout nbpcharrecv nbpcharsend nbpchartypmodin nbpchartypmodout - i x f 0 -1 0 100 _null_ _null_ _null_ ));
+DESCR("nchar(length), blank-padded national string, fixed storage length");
+#define NBPCHAROID		5001
+DATA(insert OID = 6001 ( nvarchar	 PGNSP PGUID -1 f b S f t \054 0 0 5015 nvarcharin nvarcharout nvarcharrecv nvarcharsend nvarchartypmodin nvarchartypmodout - i x f 0 -1 0 100 _null_ _null_ _null_ ));
+DESCR("nvarchar(length), non-blank-padded national string, variable storage length");
+#define NVARCHAROID		6001
 
 /*
  * macros
diff -uNr postgresql-head-20131017/src/include/parser/kwlist.h postgresql-head-20131017-nchar/src/include/parser/kwlist.h
--- postgresql-head-20131017/src/include/parser/kwlist.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/parser/kwlist.h	2013-10-17 11:56:22.000000000 +1100
@@ -257,6 +257,7 @@
 PG_KEYWORD("nullif", NULLIF, COL_NAME_KEYWORD)
 PG_KEYWORD("nulls", NULLS_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("numeric", NUMERIC, COL_NAME_KEYWORD)
+PG_KEYWORD("nvarchar", NVARCHAR, COL_NAME_KEYWORD)
 PG_KEYWORD("object", OBJECT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("of", OF, UNRESERVED_KEYWORD)
 PG_KEYWORD("off", OFF, UNRESERVED_KEYWORD)
diff -uNr postgresql-head-20131017/src/include/utils/builtins.h postgresql-head-20131017-nchar/src/include/utils/builtins.h
--- postgresql-head-20131017/src/include/utils/builtins.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/include/utils/builtins.h	2013-10-17 11:56:22.000000000 +1100
@@ -326,6 +326,10 @@
 extern Datum btoidsortsupport(PG_FUNCTION_ARGS);
 extern Datum btnamesortsupport(PG_FUNCTION_ARGS);
 
+/* nvarchar.c */
+extern Datum nbpcharoctetlen(PG_FUNCTION_ARGS);
+extern Datum nvarcharoctetlen(PG_FUNCTION_ARGS);
+
 /* float.c */
 extern PGDLLIMPORT int extra_float_digits;
 
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/ecpglib/data.c postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/data.c
--- postgresql-head-20131017/src/interfaces/ecpg/ecpglib/data.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/data.c	2013-10-17 11:56:22.000000000 +1100
@@ -227,8 +227,10 @@
 		switch (type)
 		{
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_varchar:
+			case ECPGt_nvarchar:
 			case ECPGt_string:
 				break;
 
@@ -450,6 +452,7 @@
 					break;
 
 				case ECPGt_char:
+				case ECPGt_nchar:
 				case ECPGt_unsigned_char:
 				case ECPGt_string:
 					{
@@ -508,6 +511,7 @@
 					break;
 
 				case ECPGt_varchar:
+				case ECPGt_nvarchar:
 					{
 						struct ECPGgeneric_varchar *variable =
 						(struct ECPGgeneric_varchar *) (var + offset * act_tuple);
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/ecpglib/descriptor.c postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/descriptor.c
--- postgresql-head-20131017/src/interfaces/ecpg/ecpglib/descriptor.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/descriptor.c	2013-10-17 11:56:22.000000000 +1100
@@ -200,11 +200,13 @@
 	switch (vartype)
 	{
 		case ECPGt_char:
+		case ECPGt_nchar:
 		case ECPGt_unsigned_char:
 		case ECPGt_string:
 			strncpy((char *) var, value, varcharsize);
 			break;
 		case ECPGt_varchar:
+		case ECPGt_nvarchar:
 			{
 				struct ECPGgeneric_varchar *variable =
 				(struct ECPGgeneric_varchar *) var;
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/ecpglib/execute.c postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/execute.c
--- postgresql-head-20131017/src/interfaces/ecpg/ecpglib/execute.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/execute.c	2013-10-31 10:35:42.000000000 +1100
@@ -248,6 +248,10 @@
 			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), VARCHAROID, ECPG_ARRAY_NONE, stmt->lineno))
 			return (ECPG_ARRAY_ERROR);
+		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), NBPCHAROID, ECPG_ARRAY_NONE, stmt->lineno))
+			return (ECPG_ARRAY_ERROR);
+		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), NVARCHAROID, ECPG_ARRAY_NONE, stmt->lineno))
+			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), DATEOID, ECPG_ARRAY_NONE, stmt->lineno))
 			return (ECPG_ARRAY_ERROR);
 		if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TIMEOID, ECPG_ARRAY_NONE, stmt->lineno))
@@ -362,6 +366,7 @@
 			switch (var->type)
 			{
 				case ECPGt_char:
+				case ECPGt_nchar:
 				case ECPGt_unsigned_char:
 				case ECPGt_string:
 					if (!var->varcharsize && !var->arrsize)
@@ -388,6 +393,7 @@
 					}
 					break;
 				case ECPGt_varchar:
+				case ECPGt_nvarchar:
 					len = ntuples * (var->varcharsize + sizeof(int));
 					break;
 				default:
@@ -423,7 +429,7 @@
 
 	/* fill the variable with the tuple(s) */
 	if (!var->varcharsize && !var->arrsize &&
-		(var->type == ECPGt_char || var->type == ECPGt_unsigned_char || var->type == ECPGt_string))
+		(var->type == ECPGt_char || var->type == ECPGt_unsigned_char || var->type == ECPGt_string || var->type == ECPGt_nchar))
 	{
 		/* special mode for handling char**foo=0 */
 
@@ -793,6 +799,7 @@
 				break;
 
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_string:
 				{
@@ -827,6 +834,7 @@
 				}
 				break;
 			case ECPGt_varchar:
+			case ECPGt_nvarchar:
 				{
 					struct ECPGgeneric_varchar *variable =
 					(struct ECPGgeneric_varchar *) (var->value);
@@ -1232,6 +1240,8 @@
 						{
 							case ECPGt_char:
 							case ECPGt_varchar:
+							case ECPGt_nchar:
+							case ECPGt_nvarchar:
 								desc_inlist.varcharsize = strlen(sqlda->sqlvar[i].sqldata);
 								break;
 							default:
@@ -1287,6 +1297,8 @@
 						{
 							case ECPGt_char:
 							case ECPGt_varchar:
+							case ECPGt_nchar:
+							case ECPGt_nvarchar:
 								desc_inlist.varcharsize = strlen(sqlda->sqlvar[i].sqldata);
 								break;
 							default:
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/ecpglib/misc.c postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/misc.c
--- postgresql-head-20131017/src/interfaces/ecpg/ecpglib/misc.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/misc.c	2013-10-31 10:33:33.000000000 +1100
@@ -305,6 +305,7 @@
 	switch (type)
 	{
 		case ECPGt_char:
+		case ECPGt_nchar:
 		case ECPGt_unsigned_char:
 		case ECPGt_string:
 			*((char *) ptr) = '\0';
@@ -335,6 +336,7 @@
 			memset((char *) ptr, 0xff, sizeof(double));
 			break;
 		case ECPGt_varchar:
+		case ECPGt_nvarchar:
 			*(((struct ECPGgeneric_varchar *) ptr)->arr) = 0x00;
 			((struct ECPGgeneric_varchar *) ptr)->len = 0;
 			break;
@@ -408,6 +410,7 @@
 			return (_check(ptr, sizeof(double)));
 			break;
 		case ECPGt_varchar:
+		case ECPGt_nvarchar:
 			if (*(((struct ECPGgeneric_varchar *) ptr)->arr) == 0x00)
 				return true;
 			break;
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/ecpglib/pg_type.h postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/pg_type.h
--- postgresql-head-20131017/src/interfaces/ecpg/ecpglib/pg_type.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/pg_type.h	2013-10-25 15:06:05.000000000 +1100
@@ -57,5 +57,7 @@
 #define ZPBITOID	 1560
 #define VARBITOID	  1562
 #define NUMERICOID		1700
+#define NBPCHAROID		5001
+#define NVARCHAROID		6001
 
 #endif   /* PG_TYPE_H */
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/ecpglib/sqlda.c postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/sqlda.c
--- postgresql-head-20131017/src/interfaces/ecpg/ecpglib/sqlda.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/sqlda.c	2013-10-17 11:56:22.000000000 +1100
@@ -134,6 +134,7 @@
 				ecpg_sqlda_align_add_size(offset, sizeof(int64), sizeof(interval), &offset, &next_offset);
 				break;
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_string:
 			default:
@@ -373,6 +374,7 @@
 				sqlda->sqlvar[i].sqllen = sizeof(interval);
 				break;
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_string:
 			default:
@@ -562,6 +564,7 @@
 				sqlda->sqlvar[i].sqllen = sizeof(interval);
 				break;
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_string:
 			default:
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/ecpglib/typename.c postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/typename.c
--- postgresql-head-20131017/src/interfaces/ecpg/ecpglib/typename.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/ecpglib/typename.c	2013-10-31 10:39:30.000000000 +1100
@@ -62,6 +62,10 @@
 			return "interval";
 		case ECPGt_const:
 			return "Const";
+		case ECPGt_nchar:
+			return "nchar";
+		case ECPGt_nvarchar:
+			return "nvarchar";
 		default:
 			abort();
 	}
@@ -89,6 +93,10 @@
 			return SQL3_CHARACTER;		/* bpchar */
 		case VARCHAROID:
 			return SQL3_CHARACTER_VARYING;		/* varchar */
+		case NBPCHAROID:
+			return SQL3_CHARACTER;		/* nbpchar */
+		case NVARCHAROID:
+			return SQL3_CHARACTER_VARYING;		/* nvarchar */
 		case DATEOID:
 			return SQL3_DATE_TIME_TIMESTAMP;	/* date */
 		case TIMEOID:
@@ -110,6 +118,8 @@
 		case CHAROID:
 		case VARCHAROID:
 		case BPCHAROID:
+		case NVARCHAROID:
+		case NBPCHAROID:
 		case TEXTOID:
 			return ECPGt_char;
 		case INT2OID:
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/include/ecpgtype.h postgresql-head-20131017-nchar/src/interfaces/ecpg/include/ecpgtype.h
--- postgresql-head-20131017/src/interfaces/ecpg/include/ecpgtype.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/include/ecpgtype.h	2013-10-31 10:40:06.000000000 +1100
@@ -63,7 +63,9 @@
 	ECPGt_EORT,					/* End of result types. */
 	ECPGt_NO_INDICATOR,			/* no indicator */
 	ECPGt_string,				/* trimmed (char *) type */
-	ECPGt_sqlda					/* C struct descriptor */
+	ECPGt_sqlda,					/* C struct descriptor */
+	ECPGt_nvarchar,				/* same layout as ECPGt_varchar */
+	ECPGt_nchar					/* converted to char *arr[N] */
 };
 
  /* descriptor items */
@@ -88,7 +90,7 @@
 	ECPGd_cardinality
 };
 
-#define IS_SIMPLE_TYPE(type) (((type) >= ECPGt_char && (type) <= ECPGt_interval) || ((type) == ECPGt_string))
+#define IS_SIMPLE_TYPE(type) (((type) >= ECPGt_char && (type) <= ECPGt_interval) || ((type) == ECPGt_string) || ((type) == ECPGt_nvarchar) || ((type) == ECPGt_nchar))
 
 /* we also have to handle different statement types */
 enum ECPG_statement_type
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/include/sqltypes.h postgresql-head-20131017-nchar/src/interfaces/ecpg/include/sqltypes.h
--- postgresql-head-20131017/src/interfaces/ecpg/include/sqltypes.h	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/include/sqltypes.h	2013-10-17 11:56:22.000000000 +1100
@@ -46,6 +46,7 @@
 #define SQLINTERVAL     ECPGt_interval
 #define	SQLNCHAR	ECPGt_char
 #define	SQLNVCHAR	ECPGt_char
+#define SQLNVARCHAR	ECPGt_char
 #ifdef HAVE_LONG_LONG_INT_64
 #define	SQLINT8		ECPGt_long_long
 #define	SQLSERIAL8	ECPGt_long_long
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/preproc/c_keywords.c postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/c_keywords.c
--- postgresql-head-20131017/src/interfaces/ecpg/preproc/c_keywords.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/c_keywords.c	2013-10-17 11:56:22.000000000 +1100
@@ -27,6 +27,8 @@
 	 * category is not needed in ecpg, it is only here so we can share the
 	 * data structure with the backend
 	 */
+	{"NCHAR", NCHAR, 0},
+	{"NVARCHAR", NVARCHAR, 0},
 	{"VARCHAR", VARCHAR, 0},
 	{"auto", S_AUTO, 0},
 	{"bool", SQL_BOOL, 0},
@@ -40,6 +42,8 @@
 	{"long", SQL_LONG, 0},
 	{"minute", MINUTE_P, 0},
 	{"month", MONTH_P, 0},
+	{"nchar", NCHAR, 0},
+	{"nvarchar", NVARCHAR, 0},
 	{"register", S_REGISTER, 0},
 	{"second", SECOND_P, 0},
 	{"short", SQL_SHORT, 0},
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/preproc/ecpg.header postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/ecpg.header
--- postgresql-head-20131017/src/interfaces/ecpg/preproc/ecpg.header	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/ecpg.header	2013-10-31 10:42:27.000000000 +1100
@@ -270,9 +270,11 @@
 			}
 			else if ((ptr->variable->type->type != ECPGt_varchar
 					  && ptr->variable->type->type != ECPGt_char
+					  && ptr->variable->type->type != ECPGt_nvarchar
+					  && ptr->variable->type->type != ECPGt_nchar
 					  && ptr->variable->type->type != ECPGt_unsigned_char
 					  && ptr->variable->type->type != ECPGt_string)
-					 && atoi(ptr->variable->type->size) > 1)
+					  && atoi(ptr->variable->type->size) > 1)
 			{
 				newvar = new_variable(cat_str(4, mm_strdup("("),
 											  mm_strdup(ecpg_type_name(ptr->variable->type->u.element->type)),
@@ -286,6 +288,8 @@
 			}
 			else if ((ptr->variable->type->type == ECPGt_varchar
 					  || ptr->variable->type->type == ECPGt_char
+					  || ptr->variable->type->type == ECPGt_nvarchar
+					  || ptr->variable->type->type == ECPGt_nchar
 					  || ptr->variable->type->type == ECPGt_unsigned_char
 					  || ptr->variable->type->type == ECPGt_string)
 					 && atoi(ptr->variable->type->size) > 1)
@@ -298,7 +302,7 @@
 														   ptr->variable->type->size,
 														   ptr->variable->type->counter),
 									  0);
-				if (ptr->variable->type->type == ECPGt_varchar)
+				if (ptr->variable->type->type == ECPGt_varchar || ptr->variable->type->type == ECPGt_nvarchar)
 					var_ptr = true;
 			}
 			else if (ptr->variable->type->type == ECPGt_struct
@@ -545,6 +549,7 @@
 		ECPGstruct_member_dup(struct_member_list[struct_level]) : NULL;
 
 		if (type_enum != ECPGt_varchar &&
+			type_enum != ECPGt_nvarchar &&
 			type_enum != ECPGt_char &&
 			type_enum != ECPGt_unsigned_char &&
 			type_enum != ECPGt_string &&
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/preproc/ecpg.trailer postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/ecpg.trailer
--- postgresql-head-20131017/src/interfaces/ecpg/preproc/ecpg.trailer	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/ecpg.trailer	2013-10-31 10:47:10.000000000 +1100
@@ -186,7 +186,7 @@
 				type = argsinsert->variable->type->u.element->type;
 
 			/* handle varchars */
-			if (type == ECPGt_varchar)
+			if (type == ECPGt_varchar || type == ECPGt_nvarchar)
 				$$ = make2_str(mm_strdup(argsinsert->variable->name), mm_strdup(".arr"));
 			else
 				$$ = mm_strdup(argsinsert->variable->name);
@@ -211,11 +211,13 @@
 				switch (type)
 				{
 					case ECPGt_char:
+					case ECPGt_nchar:
 					case ECPGt_unsigned_char:
 					case ECPGt_string:
 						$$ = $1;
 						break;
 					case ECPGt_varchar:
+					case ECPGt_nvarchar:
 						$$ = make2_str($1, mm_strdup(".arr"));
 						break;
 					default:
@@ -550,6 +552,22 @@
 				$$.type_index = mm_strdup("-1");
 				$$.type_sizeof = NULL;
 			}
+			else if (strcmp($1, "nvarchar") == 0)
+			{
+				$$.type_enum = ECPGt_nvarchar;
+				$$.type_str = EMPTY; /*mm_strdup("nvarchar");*/
+				$$.type_dimension = mm_strdup("-1");
+				$$.type_index = mm_strdup("-1");
+				$$.type_sizeof = NULL;
+			}
+			else if (strcmp($1, "nchar") == 0)
+			{
+				$$.type_enum = ECPGt_nchar;
+				$$.type_str = EMPTY; /*mm_strdup("nchar");*/
+				$$.type_dimension = mm_strdup("-1");
+				$$.type_index = mm_strdup("-1");
+				$$.type_sizeof = NULL;
+			}
 			else if (strcmp($1, "float") == 0)
 			{
 				$$.type_enum = ECPGt_float;
@@ -627,7 +645,7 @@
 				/* this is for typedef'ed types */
 				struct typedefs *this = get_typedef($1);
 
-				$$.type_str = (this->type->type_enum == ECPGt_varchar) ? EMPTY : mm_strdup(this->name);
+				$$.type_str = (this->type->type_enum == ECPGt_varchar || this->type->type_enum == ECPGt_nvarchar) ? EMPTY : mm_strdup(this->name);
 				$$.type_enum = this->type->type_enum;
 				$$.type_dimension = this->type->type_dimension;
 				$$.type_index = this->type->type_index;
@@ -862,6 +880,7 @@
 					$$ = cat_str(5, $1, mm_strdup($2), $3.str, $4, $5);
 					break;
 
+				case ECPGt_nvarchar:
 				case ECPGt_varchar:
 					if (atoi(dimension) < 0)
 						type = ECPGmake_simple_type(actual_type[struct_level].type_enum, length, varchar_counter);
@@ -886,6 +905,25 @@
 					varchar_counter++;
 					break;
 
+				case ECPGt_nchar:
+					if (atoi(dimension) == -1)
+					{
+						int i = strlen($5);
+
+						if (atoi(length) == -1 && i > 0) /* char <var>[] = "string" */
+						{
+							/* if we have an initializer but no string size set, let's use the initializer's length */
+							free(length);
+							length = mm_alloc(i + sizeof("sizeof()"));
+							sprintf(length, "sizeof(%s)", $5 + 2);
+						}
+						type = ECPGmake_simple_type(actual_type[struct_level].type_enum, length, 0);
+					}
+					else
+						type = ECPGmake_array_type(ECPGmake_simple_type(actual_type[struct_level].type_enum, length, 0), dimension);
+
+					$$ = cat_str(6, mm_strdup("char"), $1, mm_strdup($2), $3.str, $4, $5);
+					break;
 				case ECPGt_char:
 				case ECPGt_unsigned_char:
 				case ECPGt_string:
@@ -1348,6 +1386,7 @@
 							type = ECPGmake_array_type(ECPGmake_struct_type(struct_member_list[struct_level], $5.type_enum, $5.type_str, $5.type_sizeof), dimension);
 						break;
 
+					case ECPGt_nvarchar:
 					case ECPGt_varchar:
 						if (atoi(dimension) == -1)
 							type = ECPGmake_simple_type($5.type_enum, length, 0);
@@ -1356,6 +1395,7 @@
 						break;
 
 					case ECPGt_char:
+					case ECPGt_nchar:
 					case ECPGt_unsigned_char:
 					case ECPGt_string:
 						if (atoi(dimension) == -1)
@@ -1844,6 +1884,8 @@
 		| TO				{ $$ = mm_strdup("to"); }
 		| UNION				{ $$ = mm_strdup("union"); }
 		| VARCHAR			{ $$ = mm_strdup("varchar"); }
+		| NVARCHAR			{ $$ = mm_strdup("nvarchar"); }
+		| NCHAR				{ $$ = mm_strdup("nchar"); }
 		| '['				{ $$ = mm_strdup("["); }
 		| ']'				{ $$ = mm_strdup("]"); }
 		| '='				{ $$ = mm_strdup("="); }
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/preproc/type.c postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/type.c
--- postgresql-head-20131017/src/interfaces/ecpg/preproc/type.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/type.c	2013-10-17 11:56:22.000000000 +1100
@@ -137,6 +137,9 @@
 		case ECPGt_char:
 			return ("ECPGt_char");
 			break;
+		case ECPGt_nchar:
+			return ("ECPGt_nchar");
+			break;
 		case ECPGt_unsigned_char:
 			return ("ECPGt_unsigned_char");
 			break;
@@ -175,6 +178,8 @@
 			break;
 		case ECPGt_varchar:
 			return ("ECPGt_varchar");
+		case ECPGt_nvarchar:
+			return ("ECPGt_nvarchar");
 		case ECPGt_NO_INDICATOR:		/* no indicator */
 			return ("ECPGt_NO_INDICATOR");
 			break;
@@ -383,6 +388,7 @@
 				 * pointers
 				 */
 
+			case ECPGt_nvarchar:
 			case ECPGt_varchar:
 
 				/*
@@ -406,6 +412,7 @@
 					sprintf(offset, "sizeof(struct varchar)");
 				break;
 			case ECPGt_char:
+			case ECPGt_nchar:
 			case ECPGt_unsigned_char:
 			case ECPGt_char_variable:
 			case ECPGt_string:
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/preproc/variable.c postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/variable.c
--- postgresql-head-20131017/src/interfaces/ecpg/preproc/variable.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/preproc/variable.c	2013-10-17 11:56:22.000000000 +1100
@@ -529,7 +529,7 @@
 												"multilevel pointers (more than 2 levels) are not supported; found %d levels", pointer_len),
 				pointer_len);
 
-	if (pointer_len > 1 && type_enum != ECPGt_char && type_enum != ECPGt_unsigned_char && type_enum != ECPGt_string)
+	if (pointer_len > 1 && type_enum != ECPGt_char && type_enum != ECPGt_nchar && type_enum != ECPGt_unsigned_char && type_enum != ECPGt_string)
 		mmerror(PARSE_ERROR, ET_FATAL, "pointer to pointer is not supported for this data type");
 
 	if (pointer_len > 1 && (atoi(*length) >= 0 || atoi(*dimension) >= 0))
@@ -554,6 +554,7 @@
 
 			break;
 		case ECPGt_varchar:
+		case ECPGt_nvarchar:
 			/* pointer has to get dimension 0 */
 			if (pointer_len)
 				*dimension = mm_strdup("0");
@@ -567,6 +568,7 @@
 
 			break;
 		case ECPGt_char:
+		case ECPGt_nchar:
 		case ECPGt_unsigned_char:
 		case ECPGt_string:
 			/* char ** */
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/ecpg_schedule postgresql-head-20131017-nchar/src/interfaces/ecpg/test/ecpg_schedule
--- postgresql-head-20131017/src/interfaces/ecpg/test/ecpg_schedule	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/ecpg_schedule	2013-10-17 11:56:22.000000000 +1100
@@ -47,8 +47,10 @@
 test: sql/show
 test: sql/insupd
 test: sql/parser
+test: sql/nchar
+test: sql/nvarchar
 test: thread/thread
 test: thread/thread_implicit
 test: thread/prep
 test: thread/alloc
 test: thread/descriptor
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/ecpg_schedule_tcp postgresql-head-20131017-nchar/src/interfaces/ecpg/test/ecpg_schedule_tcp
--- postgresql-head-20131017/src/interfaces/ecpg/test/ecpg_schedule_tcp	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/ecpg_schedule_tcp	2013-10-17 11:56:22.000000000 +1100
@@ -47,6 +47,8 @@
 test: sql/show
 test: sql/insupd
 test: sql/parser
+test: sql/nchar
+test: sql/nvarchar
 test: thread/thread
 test: thread/thread_implicit
 test: thread/prep
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c	2013-10-17 11:56:22.000000000 +1100
@@ -97,6 +97,7 @@
 #define SQLINTERVAL     ECPGt_interval
 #define	SQLNCHAR	ECPGt_char
 #define	SQLNVCHAR	ECPGt_char
+#define SQLNVARCHAR	ECPGt_char
 #ifdef HAVE_LONG_LONG_INT_64
 #define	SQLINT8		ECPGt_long_long
 #define	SQLSERIAL8	ECPGt_long_long
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/preproc-type.c postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/preproc-type.c
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/preproc-type.c	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/preproc-type.c	2013-10-31 14:18:52.000000000 +1100
@@ -82,6 +82,12 @@
   	 
 	 
    
+   
+  
+	 
+	 
+   
+
   
 #line 29 "type.pgc"
  struct TBempl empl ;
@@ -100,18 +106,30 @@
 #line 35 "type.pgc"
  char text [ 10 ] ;
  } vc ;
+ 
+#line 41 "type.pgc"
+ struct nvarchar { 
+#line 39 "type.pgc"
+ int len ;
+ 
+#line 40 "type.pgc"
+ char text [ 10 ] ;
+ } nvc ;
 /* exec sql end declare section */
-#line 37 "type.pgc"
+#line 43 "type.pgc"
 
 
   /* exec sql var vc is [ 10 ] */
-#line 39 "type.pgc"
+#line 45 "type.pgc"
+
+  /* exec sql var nvc is [ 10 ] */
+#line 46 "type.pgc"
 
   ECPGdebug (1, stderr);
 
   empl.idnum = 1;
   { ECPGconnect(__LINE__, 0, "regress1" , NULL, NULL , NULL, 0); }
-#line 43 "type.pgc"
+#line 50 "type.pgc"
 
   if (sqlca.sqlcode)
     {
@@ -119,8 +137,8 @@
       exit (sqlca.sqlcode);
     }
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table empl ( idnum integer , name char ( 20 ) , accs smallint , string1 char ( 10 ) , string2 char ( 10 ) , string3 char ( 10 ) )", ECPGt_EOIT, ECPGt_EORT);}
-#line 51 "type.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table empl ( idnum integer , name char ( 20 ) , accs smallint , string1 char ( 10 ) , string2 char ( 10 ) , string3 char ( 10 ) , string4 char ( 10 ) )", ECPGt_EOIT, ECPGt_EORT);}
+#line 58 "type.pgc"
 
   if (sqlca.sqlcode)
     {
@@ -128,8 +146,8 @@
       exit (sqlca.sqlcode);
     }
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into empl values ( 1 , 'user name' , 320 , 'first str' , 'second str' , 'third str' )", ECPGt_EOIT, ECPGt_EORT);}
-#line 58 "type.pgc"
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into empl values ( 1 , 'user name' , 320 , 'first str' , 'second str' , 'third str' , 'fourth str' )", ECPGt_EOIT, ECPGt_EORT);}
+#line 65 "type.pgc"
 
   if (sqlca.sqlcode)
     {
@@ -137,7 +155,7 @@
       exit (sqlca.sqlcode);
     }
 
-  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select idnum , name , accs , string1 , string2 , string3 from empl where idnum = $1 ", 
+  { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select idnum , name , accs , string1 , string2 , string3 , string4 from empl where idnum = $1 ", 
 	ECPGt_long,&(empl.idnum),(long)1,(long)1,sizeof(long), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, 
 	ECPGt_long,&(empl.idnum),(long)1,(long)1,sizeof(long), 
@@ -151,18 +169,20 @@
 	ECPGt_char,&(ptr),(long)0,(long)1,(1)*sizeof(char), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
 	ECPGt_varchar,&(vc),(long)10,(long)1,sizeof(struct varchar), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, 
+	ECPGt_nvarchar,&(nvc),(long)10,(long)1,sizeof(struct varchar), 
 	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);}
-#line 68 "type.pgc"
+#line 75 "type.pgc"
 
   if (sqlca.sqlcode)
     {
       printf ("select error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
-  printf ("id=%ld name='%s' accs=%d str='%s' ptr='%s' vc='%10.10s'\n", empl.idnum, empl.name, empl.accs, str, ptr, vc.text);
+  printf ("id=%ld name='%s' accs=%d str='%s' ptr='%s' vc='%10.10s' vc.len='%d' nvc='%10.10s' nvc.len='%d'\n", empl.idnum, empl.name, empl.accs, str, ptr, vc.text, vc.len, nvc.text, nvc.len);
 
   { ECPGdisconnect(__LINE__, "CURRENT");}
-#line 76 "type.pgc"
+#line 83 "type.pgc"
 
 
   free(ptr);
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/preproc-type.stderr postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/preproc-type.stderr
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/preproc-type.stderr	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/preproc-type.stderr	2013-10-17 11:56:22.000000000 +1100
@@ -2,39 +2,41 @@
 [NO_PID]: sqlca: code: 0, state: 00000
 [NO_PID]: ECPGconnect: opening database regress1 on <DEFAULT> port <DEFAULT>  
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 50: query: create table empl ( idnum integer , name char ( 20 ) , accs smallint , string1 char ( 10 ) , string2 char ( 10 ) , string3 char ( 10 ) ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 57: query: create table empl ( idnum integer , name char ( 20 ) , accs smallint , string1 char ( 10 ) , string2 char ( 10 ) , string3 char ( 10 ) , string4 char ( 10 ) ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 50: using PQexec
+[NO_PID]: ecpg_execute on line 57: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 50: OK: CREATE TABLE
+[NO_PID]: ecpg_execute on line 57: OK: CREATE TABLE
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 58: query: insert into empl values ( 1 , 'user name' , 320 , 'first str' , 'second str' , 'third str' ); with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 65: query: insert into empl values ( 1 , 'user name' , 320 , 'first str' , 'second str' , 'third str' , 'fourth str' ); with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 58: using PQexec
+[NO_PID]: ecpg_execute on line 65: using PQexec
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 58: OK: INSERT 0 1
+[NO_PID]: ecpg_execute on line 65: OK: INSERT 0 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 65: query: select idnum , name , accs , string1 , string2 , string3 from empl where idnum = $1 ; with 1 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 72: query: select idnum , name , accs , string1 , string2 , string3 , string4 from empl where idnum = $1 ; with 1 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 65: using PQexecParams
+[NO_PID]: ecpg_execute on line 72: using PQexecParams
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: free_params on line 65: parameter 1 = 1
+[NO_PID]: free_params on line 72: parameter 1 = 1
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_execute on line 65: correctly got 1 tuples with 6 fields
+[NO_PID]: ecpg_execute on line 72: correctly got 1 tuples with 7 fields
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: 1 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 72: RESULT: 1 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: user name            offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 72: RESULT: user name            offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: 320 offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 72: RESULT: 320 offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: first str  offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 72: RESULT: first str  offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_store_result on line 65: allocating memory for 1 tuples
+[NO_PID]: ecpg_store_result on line 72: allocating memory for 1 tuples
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: second str offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 72: RESULT: second str offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
-[NO_PID]: ecpg_get_data on line 65: RESULT: third str  offset: -1; array: no
+[NO_PID]: ecpg_get_data on line 72: RESULT: third str  offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 72: RESULT: fourth str offset: -1; array: no
 [NO_PID]: sqlca: code: 0, state: 00000
 [NO_PID]: ecpg_finish: connection regress1 closed
 [NO_PID]: sqlca: code: 0, state: 00000
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/preproc-type.stdout postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/preproc-type.stdout
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/preproc-type.stdout	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/preproc-type.stdout	2013-10-17 11:56:22.000000000 +1100
@@ -1 +1 @@
-id=1 name='user name           ' accs=320 str='first str ' ptr='second str' vc='third str '
+id=1 name='user name           ' accs=320 str='first str ' ptr='second str' vc='third str ' vc.len='10' nvc='fourth str' nvc.len='10'
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nchar.c postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nchar.c
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nchar.c	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nchar.c	2013-10-31 15:08:18.000000000 +1100
@@ -0,0 +1,417 @@
+/* Processed by ecpg (regression mode) */
+/* These include files are added by the preprocessor */
+#include <ecpglib.h>
+#include <ecpgerrno.h>
+#include <sqlca.h>
+/* End of automatic include section */
+#define ECPGdebug(X,Y) ECPGdebug((X)+100,(Y))
+
+#line 1 "nchar.pgc"
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+
+#line 1 "regression.h"
+
+
+
+
+
+
+#line 5 "nchar.pgc"
+
+
+int main() {
+	/* exec sql begin declare section */
+			
+			
+	
+#line 9 "nchar.pgc"
+ char nchar_var [ 40 ] ;
+ 
+#line 10 "nchar.pgc"
+ char count [ 10 ] ;
+/* exec sql end declare section */
+#line 11 "nchar.pgc"
+
+
+	ECPGdebug(1, stderr);
+	{ ECPGconnect(__LINE__, 0, "regress1" , NULL, NULL , NULL, 0); }
+#line 14 "nchar.pgc"
+
+
+	{ ECPGsetcommit(__LINE__, "on", NULL);}
+#line 16 "nchar.pgc"
+
+	/* exec sql whenever sql_warning  sqlprint ; */
+#line 17 "nchar.pgc"
+
+	/* exec sql whenever sqlerror  sqlprint ; */
+#line 18 "nchar.pgc"
+
+
+	/*initialization of test table*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table My_Table ( Item varchar , count integer )", ECPGt_EOIT, ECPGt_EORT);
+#line 21 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 21 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 21 "nchar.pgc"
+
+
+	/*reinitialization*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "truncate My_Table", ECPGt_EOIT, ECPGt_EORT);
+#line 24 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 24 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 24 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 'foo_item' , 1 )", ECPGt_EOIT, ECPGt_EORT);
+#line 25 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 25 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 25 "nchar.pgc"
+
+	/*simple select into NCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select Item from My_Table limit 1", ECPGt_EOIT, 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 27 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 27 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 27 "nchar.pgc"
+ 
+	printf ("Item='%s'\n", nchar_var);
+	/*simple select using NCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = $1 ", 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 30 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 30 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 30 "nchar.pgc"
+
+	printf ("count=%s for Item='%s'\n", count, nchar_var);
+	/*simple update using NCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "update My_Table set count = count + 1 where Item = $1 ", 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 33 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 33 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 33 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 34 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 34 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 34 "nchar.pgc"
+
+	printf ("count=%s for Item='foo_item'\n", count);
+	/*simple delete using NCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "delete from My_Table where Item = $1 ", 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 37 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 37 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 37 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count ( * ) from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 38 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 38 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 38 "nchar.pgc"
+
+	printf ("found %s rows for Item='foo_item'\n", count);
+	/*simple insert using NCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( $1  , 3 )", 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 41 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 41 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 41 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 42 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 42 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 42 "nchar.pgc"
+
+	printf ("count='%s' for Item='foo_item'\n", count);
+
+	/*prepared tests*/
+	/*reinitialization*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "truncate My_Table", ECPGt_EOIT, ECPGt_EORT);
+#line 47 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 47 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 47 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 'foo_item' , 1 )", ECPGt_EOIT, ECPGt_EORT);
+#line 48 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 48 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 48 "nchar.pgc"
+
+	/*prepared select into NCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "SELECT Item FROM My_Table LIMIT 1");
+#line 50 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 50 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 50 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", ECPGt_EOIT, 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 51 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 51 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 51 "nchar.pgc"
+
+	printf ("Item='%s'\n", nchar_var);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 53 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 53 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 53 "nchar.pgc"
+ 
+	/*prepared select using NCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "SELECT Count FROM My_Table WHERE Item=?");
+#line 55 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 55 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 55 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 56 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 56 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 56 "nchar.pgc"
+
+	printf ("count=%s for Item='foo_item'\n", count);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 58 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 58 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 58 "nchar.pgc"
+
+	/*prepared update using NCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "UPDATE My_Table SET Count=Count+1 WHERE Item=?");
+#line 60 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 60 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 60 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 61 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 61 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 61 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 62 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 62 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 62 "nchar.pgc"
+
+	printf ("count=%s for Item='foo_item'\n", count);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 64 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 64 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 64 "nchar.pgc"
+
+	/*prepared delete using NCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "DELETE FROM My_Table WHERE Item=?");
+#line 66 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 66 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 66 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 67 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 67 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 67 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count ( * ) from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 68 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 68 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 68 "nchar.pgc"
+
+	printf ("found %s rows for Item='foo_item'\n", count);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 70 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 70 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 70 "nchar.pgc"
+
+	/*prepared insert using NCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "INSERT INTO My_Table values (?, 3)");
+#line 72 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 72 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 72 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", 
+	ECPGt_nchar,(nchar_var),(long)40,(long)1,(40)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 73 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 73 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 73 "nchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 74 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 74 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 74 "nchar.pgc"
+
+	printf ("count='%s' for Item='foo_item'\n", count);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 76 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 76 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 76 "nchar.pgc"
+
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table My_Table", ECPGt_EOIT, ECPGt_EORT);
+#line 78 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 78 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 78 "nchar.pgc"
+
+	{ ECPGdisconnect(__LINE__, "ALL");
+#line 79 "nchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 79 "nchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 79 "nchar.pgc"
+
+
+	return 0;
+}
+
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nchar.stderr postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nchar.stderr
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nchar.stderr	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nchar.stderr	2013-10-31 15:08:18.000000000 +1100
@@ -0,0 +1,196 @@
+[NO_PID]: ECPGdebug: set to 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ECPGconnect: opening database regress1 on <DEFAULT> port <DEFAULT>  
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ECPGsetcommit on line 16: action "on"; connection "regress1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 21: query: create table My_Table ( Item varchar , count integer ); with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 21: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 21: OK: CREATE TABLE
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 24: query: truncate My_Table; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 24: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 24: OK: TRUNCATE TABLE
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 25: query: insert into My_Table values ( 'foo_item' , 1 ); with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 25: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 25: OK: INSERT 0 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 27: query: select Item from My_Table limit 1; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 27: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 27: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 27: RESULT: foo_item offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 30: query: select count from My_Table where Item = $1 ; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 30: using PQexecParams
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 30: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 30: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 30: RESULT: 1 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 33: query: update My_Table set count = count + 1 where Item = $1 ; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 33: using PQexecParams
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 33: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 33: OK: UPDATE 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 34: query: select count from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 34: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 34: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 34: RESULT: 2 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 37: query: delete from My_Table where Item = $1 ; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 37: using PQexecParams
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 37: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 37: OK: DELETE 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 38: query: select count ( * ) from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 38: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 38: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 38: RESULT: 0 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 41: query: insert into My_Table values ( $1  , 3 ); with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 41: using PQexecParams
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 41: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 41: OK: INSERT 0 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 42: query: select count from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 42: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 42: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 42: RESULT: 3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 47: query: truncate My_Table; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 47: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 47: OK: TRUNCATE TABLE
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 48: query: insert into My_Table values ( 'foo_item' , 1 ); with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 48: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 48: OK: INSERT 0 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 50: name stmt; query: "SELECT Item FROM My_Table LIMIT 1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 51: query: SELECT Item FROM My_Table LIMIT 1; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 51: using PQexecPrepared for "SELECT Item FROM My_Table LIMIT 1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 51: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 51: RESULT: foo_item offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 53: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 55: name stmt; query: "SELECT Count FROM My_Table WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 56: query: SELECT Count FROM My_Table WHERE Item=$1; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 56: using PQexecPrepared for "SELECT Count FROM My_Table WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 56: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 56: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 56: RESULT: 1 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 58: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 60: name stmt; query: "UPDATE My_Table SET Count=Count+1 WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 61: query: UPDATE My_Table SET Count=Count+1 WHERE Item=$1; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 61: using PQexecPrepared for "UPDATE My_Table SET Count=Count+1 WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 61: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 61: OK: UPDATE 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 62: query: select count from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 62: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 62: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 62: RESULT: 2 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 64: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 66: name stmt; query: "DELETE FROM My_Table WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 67: query: DELETE FROM My_Table WHERE Item=$1; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 67: using PQexecPrepared for "DELETE FROM My_Table WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 67: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 67: OK: DELETE 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 68: query: select count ( * ) from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 68: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 68: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 68: RESULT: 0 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 70: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 72: name stmt; query: "INSERT INTO My_Table values ($1, 3)"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 73: query: INSERT INTO My_Table values ($1, 3); with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 73: using PQexecPrepared for "INSERT INTO My_Table values ($1, 3)"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 73: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 73: OK: INSERT 0 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 74: query: select count from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 74: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 74: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 74: RESULT: 3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 76: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 78: query: drop table My_Table; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 78: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 78: OK: DROP TABLE
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_finish: connection regress1 closed
+[NO_PID]: sqlca: code: 0, state: 00000
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nchar.stdout postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nchar.stdout
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nchar.stdout	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nchar.stdout	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,10 @@
+Item='foo_item'
+count=1 for Item='foo_item'
+count=2 for Item='foo_item'
+found 0 rows for Item='foo_item'
+count='3' for Item='foo_item'
+Item='foo_item'
+count=1 for Item='foo_item'
+count=2 for Item='foo_item'
+found 0 rows for Item='foo_item'
+count='3' for Item='foo_item'
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nvarchar.c postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nvarchar.c
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nvarchar.c	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nvarchar.c	2013-10-31 15:08:18.000000000 +1100
@@ -0,0 +1,418 @@
+/* Processed by ecpg (regression mode) */
+/* These include files are added by the preprocessor */
+#include <ecpglib.h>
+#include <ecpgerrno.h>
+#include <sqlca.h>
+/* End of automatic include section */
+#define ECPGdebug(X,Y) ECPGdebug((X)+100,(Y))
+
+#line 1 "nvarchar.pgc"
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+
+#line 1 "regression.h"
+
+
+
+
+
+
+#line 5 "nvarchar.pgc"
+
+
+int main() {
+	/* exec sql begin declare section */
+		 
+			
+	
+#line 9 "nvarchar.pgc"
+  struct varchar_1  { int len; char arr[ 40 ]; }  nvarchar_var ;
+ 
+#line 10 "nvarchar.pgc"
+ char count [ 10 ] ;
+/* exec sql end declare section */
+#line 11 "nvarchar.pgc"
+
+
+	ECPGdebug(1, stderr);
+	{ ECPGconnect(__LINE__, 0, "regress1" , NULL, NULL , NULL, 0); }
+#line 14 "nvarchar.pgc"
+
+
+	{ ECPGsetcommit(__LINE__, "on", NULL);}
+#line 16 "nvarchar.pgc"
+
+	/* exec sql whenever sql_warning  sqlprint ; */
+#line 17 "nvarchar.pgc"
+
+	/* exec sql whenever sqlerror  sqlprint ; */
+#line 18 "nvarchar.pgc"
+
+
+	/*initialization of test table*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table My_Table ( Item varchar , count integer )", ECPGt_EOIT, ECPGt_EORT);
+#line 21 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 21 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 21 "nvarchar.pgc"
+
+
+	/*simple tests*/
+	/*reinitalization*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "truncate My_Table", ECPGt_EOIT, ECPGt_EORT);
+#line 25 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 25 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 25 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 'foo_item' , 1 )", ECPGt_EOIT, ECPGt_EORT);
+#line 26 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 26 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 26 "nvarchar.pgc"
+
+	/*simple select into NVARCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select Item from My_Table limit 1", ECPGt_EOIT, 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 28 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 28 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 28 "nvarchar.pgc"
+
+	printf ("Item='%s'\n", nvarchar_var.arr);
+	/*simple select using NVARCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = $1 ", 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 31 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 31 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 31 "nvarchar.pgc"
+
+	printf ("count=%s for Item='%s'\n", count, nvarchar_var.arr);
+	/*simple update using NVARCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "update My_Table set count = count + 1 where Item = $1 ", 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 34 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 34 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 34 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 35 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 35 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 35 "nvarchar.pgc"
+
+	printf ("count=%s for Item='foo_item'\n", count);
+	/*simple delete using NVARCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "delete from My_Table where Item = $1 ", 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 38 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 38 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 38 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count ( * ) from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 39 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 39 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 39 "nvarchar.pgc"
+
+	printf ("found %s rows for Item='foo_item'\n", count);
+	/*simple insert using NVARCHAR*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( $1  , 3 )", 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 42 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 42 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 42 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 43 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 43 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 43 "nvarchar.pgc"
+
+	printf ("count='%s' for Item='foo_item'\n", count);
+
+	/*prepared tests*/
+	/*reinitalization*/
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "truncate My_Table", ECPGt_EOIT, ECPGt_EORT);
+#line 48 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 48 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 48 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into My_Table values ( 'foo_item' , 1 )", ECPGt_EOIT, ECPGt_EORT);
+#line 49 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 49 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 49 "nvarchar.pgc"
+
+	/*prepared select into NVARCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "SELECT Item FROM My_Table LIMIT 1");
+#line 51 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 51 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 51 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", ECPGt_EOIT, 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 52 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 52 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 52 "nvarchar.pgc"
+
+	printf ("Item='%s'\n", nvarchar_var.arr);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 54 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 54 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 54 "nvarchar.pgc"
+
+	/*prepared select using NVARCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "SELECT Count FROM My_Table WHERE Item=?");
+#line 56 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 56 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 56 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 57 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 57 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 57 "nvarchar.pgc"
+
+	printf ("count=%s for Item='foo_item'\n", count);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 59 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 59 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 59 "nvarchar.pgc"
+
+	/*prepared update using NVARCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "UPDATE My_Table SET Count=Count+1 WHERE Item=?");
+#line 61 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 61 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 61 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 62 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 62 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 62 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 63 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 63 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 63 "nvarchar.pgc"
+
+	printf ("count=%s for Item='foo_item'\n", count);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 65 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 65 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 65 "nvarchar.pgc"
+
+	/*prepared delete using NVARCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "DELETE FROM My_Table WHERE Item=?");
+#line 67 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 67 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 67 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 68 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 68 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 68 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count ( * ) from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 69 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 69 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 69 "nvarchar.pgc"
+
+	printf ("found %s rows for Item='foo_item'\n", count);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 71 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 71 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 71 "nvarchar.pgc"
+
+	/*prepared insert using NVARCHAR*/
+	{ ECPGprepare(__LINE__, NULL, 0, "stmt", "INSERT INTO My_Table values (?, 3)");
+#line 73 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 73 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 73 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "stmt", 
+	ECPGt_nvarchar,&(nvarchar_var),(long)40,(long)1,sizeof(struct varchar_1), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);
+#line 74 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 74 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 74 "nvarchar.pgc"
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count from My_Table where Item = 'foo_item'", ECPGt_EOIT, 
+	ECPGt_char,(count),(long)10,(long)1,(10)*sizeof(char), 
+	ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
+#line 75 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 75 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 75 "nvarchar.pgc"
+
+	printf ("count='%s' for Item='foo_item'\n", count);
+	{ ECPGdeallocate(__LINE__, 0, NULL, "stmt");
+#line 77 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 77 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 77 "nvarchar.pgc"
+
+
+	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table My_Table", ECPGt_EOIT, ECPGt_EORT);
+#line 79 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 79 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 79 "nvarchar.pgc"
+
+	{ ECPGdisconnect(__LINE__, "ALL");
+#line 80 "nvarchar.pgc"
+
+if (sqlca.sqlwarn[0] == 'W') sqlprint();
+#line 80 "nvarchar.pgc"
+
+if (sqlca.sqlcode < 0) sqlprint();}
+#line 80 "nvarchar.pgc"
+
+
+	return 0;
+}
+
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nvarchar.stderr postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nvarchar.stderr
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nvarchar.stderr	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nvarchar.stderr	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,196 @@
+[NO_PID]: ECPGdebug: set to 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ECPGconnect: opening database regress1 on <DEFAULT> port <DEFAULT>  
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ECPGsetcommit on line 16: action "on"; connection "regress1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 21: query: create table My_Table ( Item varchar , count integer ); with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 21: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 21: OK: CREATE TABLE
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 25: query: truncate My_Table; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 25: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 25: OK: TRUNCATE TABLE
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 26: query: insert into My_Table values ( 'foo_item' , 1 ); with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 26: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 26: OK: INSERT 0 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 28: query: select Item from My_Table limit 1; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 28: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 28: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 28: RESULT: foo_item offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 31: query: select count from My_Table where Item = $1 ; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 31: using PQexecParams
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 31: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 31: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 31: RESULT: 1 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 34: query: update My_Table set count = count + 1 where Item = $1 ; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 34: using PQexecParams
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 34: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 34: OK: UPDATE 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 35: query: select count from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 35: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 35: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 35: RESULT: 2 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 38: query: delete from My_Table where Item = $1 ; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 38: using PQexecParams
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 38: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 38: OK: DELETE 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 39: query: select count ( * ) from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 39: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 39: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 39: RESULT: 0 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 42: query: insert into My_Table values ( $1  , 3 ); with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 42: using PQexecParams
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 42: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 42: OK: INSERT 0 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 43: query: select count from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 43: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 43: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 43: RESULT: 3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 48: query: truncate My_Table; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 48: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 48: OK: TRUNCATE TABLE
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 49: query: insert into My_Table values ( 'foo_item' , 1 ); with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 49: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 49: OK: INSERT 0 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 51: name stmt; query: "SELECT Item FROM My_Table LIMIT 1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 52: query: SELECT Item FROM My_Table LIMIT 1; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 52: using PQexecPrepared for "SELECT Item FROM My_Table LIMIT 1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 52: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 52: RESULT: foo_item offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 54: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 56: name stmt; query: "SELECT Count FROM My_Table WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 57: query: SELECT Count FROM My_Table WHERE Item=$1; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 57: using PQexecPrepared for "SELECT Count FROM My_Table WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 57: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 57: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 57: RESULT: 1 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 59: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 61: name stmt; query: "UPDATE My_Table SET Count=Count+1 WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 62: query: UPDATE My_Table SET Count=Count+1 WHERE Item=$1; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 62: using PQexecPrepared for "UPDATE My_Table SET Count=Count+1 WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 62: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 62: OK: UPDATE 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 63: query: select count from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 63: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 63: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 63: RESULT: 2 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 65: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 67: name stmt; query: "DELETE FROM My_Table WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 68: query: DELETE FROM My_Table WHERE Item=$1; with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 68: using PQexecPrepared for "DELETE FROM My_Table WHERE Item=$1"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 68: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 68: OK: DELETE 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 69: query: select count ( * ) from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 69: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 69: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 69: RESULT: 0 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 71: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: prepare_common on line 73: name stmt; query: "INSERT INTO My_Table values ($1, 3)"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 74: query: INSERT INTO My_Table values ($1, 3); with 1 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 74: using PQexecPrepared for "INSERT INTO My_Table values ($1, 3)"
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: free_params on line 74: parameter 1 = foo_item
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 74: OK: INSERT 0 1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 75: query: select count from My_Table where Item = 'foo_item'; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 75: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 75: correctly got 1 tuples with 1 fields
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_get_data on line 75: RESULT: 3 offset: -1; array: no
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: deallocate_one on line 77: name stmt
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 79: query: drop table My_Table; with 0 parameter(s) on connection regress1
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 79: using PQexec
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_execute on line 79: OK: DROP TABLE
+[NO_PID]: sqlca: code: 0, state: 00000
+[NO_PID]: ecpg_finish: connection regress1 closed
+[NO_PID]: sqlca: code: 0, state: 00000
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nvarchar.stdout postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nvarchar.stdout
--- postgresql-head-20131017/src/interfaces/ecpg/test/expected/sql-nvarchar.stdout	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/expected/sql-nvarchar.stdout	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,10 @@
+Item='foo_item'
+count=1 for Item='foo_item'
+count=2 for Item='foo_item'
+found 0 rows for Item='foo_item'
+count='3' for Item='foo_item'
+Item='foo_item'
+count=1 for Item='foo_item'
+count=2 for Item='foo_item'
+found 0 rows for Item='foo_item'
+count='3' for Item='foo_item'
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/preproc/type.pgc postgresql-head-20131017-nchar/src/interfaces/ecpg/test/preproc/type.pgc
--- postgresql-head-20131017/src/interfaces/ecpg/test/preproc/type.pgc	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/preproc/type.pgc	2013-10-31 11:24:42.000000000 +1100
@@ -34,9 +34,16 @@
   	int len;
 	char text[10];
   } vc;
+  struct nvarchar
+  {
+	int len;
+	char text[10];
+  } nvc;
+
   EXEC SQL END DECLARE SECTION;
 
   EXEC SQL var vc is varchar[10];
+  EXEC SQL var nvc is nvarchar[10];
   ECPGdebug (1, stderr);
 
   empl.idnum = 1;
@@ -48,22 +55,22 @@
     }
 
   EXEC SQL create table empl
-    (idnum integer, name char(20), accs smallint, string1 char(10), string2 char(10), string3 char(10));
+    (idnum integer, name char(20), accs smallint, string1 char(10), string2 char(10), string3 char(10), string4 char(10));
   if (sqlca.sqlcode)
     {
       printf ("create error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
 
-  EXEC SQL insert into empl values (1, 'user name', 320, 'first str', 'second str', 'third str');
+  EXEC SQL insert into empl values (1, 'user name', 320, 'first str', 'second str', 'third str', 'fourth str');
   if (sqlca.sqlcode)
     {
       printf ("insert error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
 
-  EXEC SQL select idnum, name, accs, string1, string2, string3
-	into :empl, :str, :ptr, :vc
+  EXEC SQL select idnum, name, accs, string1, string2, string3, string4 
+	into :empl, :str, :ptr, :vc, :nvc
 	from empl
 	where idnum =:empl.idnum;
   if (sqlca.sqlcode)
@@ -71,7 +78,7 @@
       printf ("select error = %ld\n", sqlca.sqlcode);
       exit (sqlca.sqlcode);
     }
-  printf ("id=%ld name='%s' accs=%d str='%s' ptr='%s' vc='%10.10s'\n", empl.idnum, empl.name, empl.accs, str, ptr, vc.text);
+  printf ("id=%ld name='%s' accs=%d str='%s' ptr='%s' vc='%10.10s' vc.len='%d' nvc='%10.10s' nvc.len='%d'\n", empl.idnum, empl.name, empl.accs, str, ptr, vc.text, vc.len, nvc.text, nvc.len);
 
   EXEC SQL disconnect;
 
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/sql/Makefile postgresql-head-20131017-nchar/src/interfaces/ecpg/test/sql/Makefile
--- postgresql-head-20131017/src/interfaces/ecpg/test/sql/Makefile	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/sql/Makefile	2013-10-17 11:56:22.000000000 +1100
@@ -22,7 +22,9 @@
         parser parser.c \
         quote quote.c \
         show show.c \
-        insupd insupd.c
+        insupd insupd.c \
+	nchar nchar.c \
+	nvarchar nvarchar.c
 
 all: $(TESTS)
 
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/sql/nchar.pgc postgresql-head-20131017-nchar/src/interfaces/ecpg/test/sql/nchar.pgc
--- postgresql-head-20131017/src/interfaces/ecpg/test/sql/nchar.pgc	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/sql/nchar.pgc	2013-10-31 14:49:22.000000000 +1100
@@ -0,0 +1,83 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+EXEC SQL INCLUDE ../regression;
+
+int main() {
+	EXEC SQL BEGIN DECLARE SECTION;
+		NCHAR	nchar_var[40];
+		char	count[10];
+	EXEC SQL END DECLARE SECTION;
+
+	ECPGdebug(1, stderr);
+	EXEC SQL CONNECT TO REGRESSDB1;
+
+	EXEC SQL SET AUTOCOMMIT TO ON;
+	EXEC SQL WHENEVER SQLWARNING SQLPRINT;
+	EXEC SQL WHENEVER SQLERROR SQLPRINT;
+
+	/*initialization of test table*/
+	EXEC SQL CREATE TABLE My_Table (Item varchar, Count integer);
+
+	/*reinitialization*/
+	EXEC SQL TRUNCATE My_Table;
+	EXEC SQL INSERT INTO My_Table VALUES ('foo_item', 1);
+	/*simple select into NCHAR*/
+	EXEC SQL SELECT Item INTO :nchar_var FROM My_Table LIMIT 1; 
+	printf ("Item='%s'\n", nchar_var);
+	/*simple select using NCHAR*/
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item=:nchar_var;
+	printf ("count=%s for Item='%s'\n", count, nchar_var);
+	/*simple update using NCHAR*/
+	EXEC SQL UPDATE My_Table SET Count=Count+1 WHERE Item=:nchar_var;
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("count=%s for Item='foo_item'\n", count);
+	/*simple delete using NCHAR*/
+	EXEC SQL DELETE FROM My_Table WHERE Item=:nchar_var;
+	EXEC SQL SELECT count(*) INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("found %s rows for Item='foo_item'\n", count);
+	/*simple insert using NCHAR*/
+	EXEC SQL INSERT INTO My_Table values (:nchar_var, 3);
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("count='%s' for Item='foo_item'\n", count);
+
+	/*prepared tests*/
+	/*reinitialization*/
+	EXEC SQL TRUNCATE My_Table;
+	EXEC SQL INSERT INTO My_Table VALUES ('foo_item', 1);
+	/*prepared select into NCHAR*/
+	EXEC SQL PREPARE stmt FROM "SELECT Item FROM My_Table LIMIT 1";
+	EXEC SQL EXECUTE stmt INTO :nchar_var;
+	printf ("Item='%s'\n", nchar_var);
+	EXEC SQL DEALLOCATE PREPARE stmt; 
+	/*prepared select using NCHAR*/
+	EXEC SQL PREPARE stmt FROM "SELECT Count FROM My_Table WHERE Item=?";
+	EXEC SQL EXECUTE stmt INTO :count USING :nchar_var;
+	printf ("count=%s for Item='foo_item'\n", count);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+	/*prepared update using NCHAR*/
+	EXEC SQL PREPARE stmt FROM "UPDATE My_Table SET Count=Count+1 WHERE Item=?";
+	EXEC SQL EXECUTE stmt USING :nchar_var;
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("count=%s for Item='foo_item'\n", count);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+	/*prepared delete using NCHAR*/
+	EXEC SQL PREPARE stmt FROM "DELETE FROM My_Table WHERE Item=?";
+	EXEC SQL EXECUTE stmt USING :nchar_var;
+	EXEC SQL SELECT count(*) INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("found %s rows for Item='foo_item'\n", count);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+	/*prepared insert using NCHAR*/
+	EXEC SQL PREPARE stmt FROM "INSERT INTO My_Table values (?, 3)";
+	EXEC SQL EXECUTE stmt USING :nchar_var;
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("count='%s' for Item='foo_item'\n", count);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+
+	EXEC SQL DROP TABLE My_Table;
+	EXEC SQL DISCONNECT ALL;
+
+	return 0;
+}
+
diff -uNr postgresql-head-20131017/src/interfaces/ecpg/test/sql/nvarchar.pgc postgresql-head-20131017-nchar/src/interfaces/ecpg/test/sql/nvarchar.pgc
--- postgresql-head-20131017/src/interfaces/ecpg/test/sql/nvarchar.pgc	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/interfaces/ecpg/test/sql/nvarchar.pgc	2013-10-31 14:50:53.000000000 +1100
@@ -0,0 +1,84 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+EXEC SQL INCLUDE ../regression;
+
+int main() {
+	EXEC SQL BEGIN DECLARE SECTION;
+		NVARCHAR nvarchar_var[40];
+		char	count[10];
+	EXEC SQL END DECLARE SECTION;
+
+	ECPGdebug(1, stderr);
+	EXEC SQL CONNECT TO REGRESSDB1;
+
+	EXEC SQL SET AUTOCOMMIT TO ON;
+	EXEC SQL WHENEVER SQLWARNING SQLPRINT;
+	EXEC SQL WHENEVER SQLERROR SQLPRINT;
+
+	/*initialization of test table*/
+	EXEC SQL CREATE TABLE My_Table (Item varchar, Count integer);
+
+	/*simple tests*/
+	/*reinitialization*/
+	EXEC SQL TRUNCATE My_Table;
+	EXEC SQL INSERT INTO My_Table VALUES ('foo_item', 1);
+	/*simple select into NVARCHAR*/
+	EXEC SQL SELECT Item INTO :nvarchar_var FROM My_Table LIMIT 1;
+	printf ("Item='%s'\n", nvarchar_var.arr);
+	/*simple select using NVARCHAR*/
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item=:nvarchar_var;
+	printf ("count=%s for Item='%s'\n", count, nvarchar_var.arr);
+	/*simple update using NVARCHAR*/
+	EXEC SQL UPDATE My_Table SET Count=Count+1 WHERE Item=:nvarchar_var;
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("count=%s for Item='foo_item'\n", count);
+	/*simple delete using NVARCHAR*/
+	EXEC SQL DELETE FROM My_Table WHERE Item=:nvarchar_var;
+	EXEC SQL SELECT count(*) INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("found %s rows for Item='foo_item'\n", count);
+	/*simple insert using NVARCHAR*/
+	EXEC SQL INSERT INTO My_Table values (:nvarchar_var, 3);
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("count='%s' for Item='foo_item'\n", count);
+
+	/*prepared tests*/
+	/*reinitialization*/
+	EXEC SQL TRUNCATE My_Table;
+	EXEC SQL INSERT INTO My_Table VALUES ('foo_item', 1);
+	/*prepared select into NVARCHAR*/
+	EXEC SQL PREPARE stmt FROM "SELECT Item FROM My_Table LIMIT 1";
+	EXEC SQL EXECUTE stmt INTO :nvarchar_var;
+	printf ("Item='%s'\n", nvarchar_var.arr);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+	/*prepared select using NVARCHAR*/
+	EXEC SQL PREPARE stmt FROM "SELECT Count FROM My_Table WHERE Item=?";
+	EXEC SQL EXECUTE stmt INTO :count USING :nvarchar_var;
+	printf ("count=%s for Item='foo_item'\n", count);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+	/*prepared update using NVARCHAR*/
+	EXEC SQL PREPARE stmt FROM "UPDATE My_Table SET Count=Count+1 WHERE Item=?";
+	EXEC SQL EXECUTE stmt USING :nvarchar_var;
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("count=%s for Item='foo_item'\n", count);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+	/*prepared delete using NVARCHAR*/
+	EXEC SQL PREPARE stmt FROM "DELETE FROM My_Table WHERE Item=?";
+	EXEC SQL EXECUTE stmt USING :nvarchar_var;
+	EXEC SQL SELECT count(*) INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("found %s rows for Item='foo_item'\n", count);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+	/*prepared insert using NVARCHAR*/
+	EXEC SQL PREPARE stmt FROM "INSERT INTO My_Table values (?, 3)";
+	EXEC SQL EXECUTE stmt USING :nvarchar_var;
+	EXEC SQL SELECT Count INTO :count FROM My_Table WHERE Item='foo_item';
+	printf ("count='%s' for Item='foo_item'\n", count);
+	EXEC SQL DEALLOCATE PREPARE stmt;
+
+	EXEC SQL DROP TABLE My_Table;
+	EXEC SQL DISCONNECT ALL;
+
+	return 0;
+}
+
diff -uNr postgresql-head-20131017/src/test/regress/expected/create_function_3.out postgresql-head-20131017-nchar/src/test/regress/expected/create_function_3.out
--- postgresql-head-20131017/src/test/regress/expected/create_function_3.out	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/expected/create_function_3.out	2013-10-17 11:56:22.000000000 +1100
@@ -308,6 +308,8 @@
  namele         | boolean    | [0:1]={name,name}
  namelt         | boolean    | [0:1]={name,name}
  namene         | boolean    | [0:1]={name,name}
+ nbpchareq      | boolean    | [0:1]={"national character","national character"}
+ nbpcharne      | boolean    | [0:1]={"national character","national character"}
  network_eq     | boolean    | [0:1]={inet,inet}
  network_ge     | boolean    | [0:1]={inet,inet}
  network_gt     | boolean    | [0:1]={inet,inet}
@@ -383,7 +385,7 @@
  varbitlt       | boolean    | [0:1]={"bit varying","bit varying"}
  varbitne       | boolean    | [0:1]={"bit varying","bit varying"}
  xideq          | boolean    | [0:1]={xid,xid}
-(228 rows)
+(230 rows)
 
 --
 -- CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
diff -uNr postgresql-head-20131017/src/test/regress/expected/n_test.out postgresql-head-20131017-nchar/src/test/regress/expected/n_test.out
--- postgresql-head-20131017/src/test/regress/expected/n_test.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/n_test.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,81 @@
+create domain test1_n_domain as varchar default (N'a');
+create domain test2_n_domain as varchar CHECK (value <> N'b');
+alter domain test1_n_domain set default (N'b');
+values (N'a');
+ column1 
+---------
+ a
+(1 row)
+
+create table n_test (f varchar default N'a', val varchar);
+insert into n_test values (N'a', 'test_val') returning f;
+ f 
+---
+ a
+(1 row)
+
+SELECT f from n_test;
+ f 
+---
+ a
+(1 row)
+
+select N'a' as f into n_test1 from n_test;
+select * from n_test1;
+ f 
+---
+ a
+(1 row)
+
+select N'a' as f from n_test;
+ f 
+---
+ a
+(1 row)
+
+select val from n_test where f=N'a';
+   val    
+----------
+ test_val
+(1 row)
+
+copy (select N'a' from n_test) to stdout;
+a
+alter table n_test alter f set default N'a';
+prepare stmt AS select N'a' as f from n_test;
+deallocate stmt;
+create index i on n_test((f||N'a'));
+create trigger tr before update on n_test for each row when (OLD.f=N'a') EXECUTE PROCEDURE suppress_redundant_updates_trigger();
+do $$begin perform N'a' from n_test; end $$;
+create view v as select N'a' as f from n_test;
+select * from v;
+ f 
+---
+ a
+(1 row)
+
+alter view v alter column f set default N'a';
+begin;declare c cursor for select N'a' as f from n_test;commit;
+prepare stmt(varchar) AS select val from n_test where f=$1;
+execute stmt(N'a');
+   val    
+----------
+ test_val
+(1 row)
+
+deallocate stmt;
+UPDATE n_test SET f=N'b';
+SELECT * from n_test;
+ f |   val    
+---+----------
+ b | test_val
+(1 row)
+
+delete from n_test where f=N'b';
+CREATE FUNCTION foo(f varchar default N'a') returns setof varchar as $$SELECT N'a' from n_test;$$ LANGUAGE SQL;
+drop view v;
+drop table n_test;
+drop table n_test1;
+drop function foo(varchar);
+drop domain test1_n_domain;
+drop domain test2_n_domain;
diff -uNr postgresql-head-20131017/src/test/regress/expected/nchar.out postgresql-head-20131017-nchar/src/test/regress/expected/nchar.out
--- postgresql-head-20131017/src/test/regress/expected/nchar.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nchar.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,122 @@
+--
+-- NCHAR
+--
+-- fixed-length by value
+-- internally passed by value if <= 4 bytes in storage
+SELECT nchar 'c' = nchar 'c' AS true;
+ true 
+------
+ t
+(1 row)
+
+--
+-- Build a table for testing
+--
+CREATE TABLE NCHAR_TBL(f1 nchar);
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NCHAR_TBL (f1) VALUES (2);
+INSERT INTO NCHAR_TBL (f1) VALUES ('3');
+-- zero-length nchar
+INSERT INTO NCHAR_TBL (f1) VALUES ('');
+-- try nchar's of greater than 1 length
+INSERT INTO NCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character(1)
+INSERT INTO NCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       |  
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     |  
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | A
+      | 1
+      | 2
+      | 3
+      |  
+(5 rows)
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | A
+     | 1
+     | 2
+     | 3
+     |  
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | c
+(1 row)
+
+SELECT '' AS two, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | c
+(2 rows)
+
+DROP TABLE NCHAR_TBL;
+--
+-- Now test longer arrays of nchar
+--
+CREATE TABLE NCHAR_TBL(f1 nchar(4));
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character(4)
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NCHAR_TBL;
+ four |  f1  
+------+------
+      | a   
+      | ab  
+      | abcd
+      | abcd
+(4 rows)
+
diff -uNr postgresql-head-20131017/src/test/regress/expected/nchar_1.out postgresql-head-20131017-nchar/src/test/regress/expected/nchar_1.out
--- postgresql-head-20131017/src/test/regress/expected/nchar_1.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nchar_1.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,122 @@
+--
+-- NCHAR
+--
+-- fixed-length by value
+-- internally passed by value if <= 4 bytes in storage
+SELECT nchar 'c' = nchar 'c' AS true;
+ true 
+------
+ t
+(1 row)
+
+--
+-- Build a table for testing
+--
+CREATE TABLE NCHAR_TBL(f1 nchar);
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NCHAR_TBL (f1) VALUES (2);
+INSERT INTO NCHAR_TBL (f1) VALUES ('3');
+-- zero-length nchar
+INSERT INTO NCHAR_TBL (f1) VALUES ('');
+-- try nchar's of greater than 1 length
+INSERT INTO NCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character(1)
+INSERT INTO NCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       |  
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     |  
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | 1
+      | 2
+      | 3
+      |  
+(4 rows)
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | 1
+     | 2
+     | 3
+     |  
+(5 rows)
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | A
+     | c
+(2 rows)
+
+SELECT '' AS two, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | A
+     | c
+(3 rows)
+
+DROP TABLE NCHAR_TBL;
+--
+-- Now test longer arrays of nchar
+--
+CREATE TABLE NCHAR_TBL(f1 nchar(4));
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character(4)
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NCHAR_TBL;
+ four |  f1  
+------+------
+      | a   
+      | ab  
+      | abcd
+      | abcd
+(4 rows)
+
diff -uNr postgresql-head-20131017/src/test/regress/expected/nchar_2.out postgresql-head-20131017-nchar/src/test/regress/expected/nchar_2.out
--- postgresql-head-20131017/src/test/regress/expected/nchar_2.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nchar_2.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,122 @@
+--
+-- NCHAR
+--
+-- fixed-length by value
+-- internally passed by value if <= 4 bytes in storage
+SELECT nchar 'c' = nchar 'c' AS true;
+ true 
+------
+ t
+(1 row)
+
+--
+-- Build a table for testing
+--
+CREATE TABLE NCHAR_TBL(f1 nchar);
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NCHAR_TBL (f1) VALUES (2);
+INSERT INTO NCHAR_TBL (f1) VALUES ('3');
+-- zero-length nchar
+INSERT INTO NCHAR_TBL (f1) VALUES ('');
+-- try nchar's of greater than 1 length
+INSERT INTO NCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character(1)
+INSERT INTO NCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       |  
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     |  
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      |  
+(1 row)
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     |  
+(2 rows)
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | c
+(5 rows)
+
+SELECT '' AS two, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | A
+     | 1
+     | 2
+     | 3
+     | c
+(6 rows)
+
+DROP TABLE NCHAR_TBL;
+--
+-- Now test longer arrays of nchar
+--
+CREATE TABLE NCHAR_TBL(f1 nchar(4));
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character(4)
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NCHAR_TBL;
+ four |  f1  
+------+------
+      | a   
+      | ab  
+      | abcd
+      | abcd
+(4 rows)
+
diff -uNr postgresql-head-20131017/src/test/regress/expected/nchar_test.out postgresql-head-20131017-nchar/src/test/regress/expected/nchar_test.out
--- postgresql-head-20131017/src/test/regress/expected/nchar_test.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nchar_test.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,101 @@
+CREATE AGGREGATE test_nchar_agg (nchar(10)) ( sfunc = array_append, stype = nchar(10)[], initcond = '{}');
+alter aggregate test_nchar_agg (nchar(10)) rename to test_nchar_aggregate;
+drop aggregate test_nchar_aggregate(nchar(10));
+create domain test_nchar_domain as nchar(10);
+drop domain test_nchar_domain;
+create table nchar_test (f nchar(10), val varchar);
+create index i on nchar_test(f);
+vacuum analyze nchar_test(f);
+insert into nchar_test values ('a', 'test_val') returning (f);
+     f      
+------------
+ a         
+(1 row)
+
+SELECT f from nchar_test;
+     f      
+------------
+ a         
+(1 row)
+
+select f into nchar_test1 from nchar_test;
+select * from nchar_test1;
+     f      
+------------
+ a         
+(1 row)
+
+drop table nchar_test1;
+select f from nchar_test;
+     f      
+------------
+ a         
+(1 row)
+
+select val from nchar_test where f='a';
+   val    
+----------
+ test_val
+(1 row)
+
+prepare stmt AS select f from nchar_test;
+prepare stmt1(nchar(10)) AS select val from nchar_test where f=$1;
+deallocate stmt;
+deallocate stmt1;
+prepare stmt(varchar) AS select val from nchar_test where f=$1;
+execute stmt('a');
+   val    
+----------
+ test_val
+(1 row)
+
+deallocate stmt;
+begin;declare c cursor for select f from nchar_test;commit;
+create view v as select f from nchar_test;
+select * from v;
+     f      
+------------
+ a         
+(1 row)
+
+alter view v alter column f set default 'a';
+drop view v;
+do $$begin perform f from nchar_test; end $$;
+CREATE TYPE test_nchar_type AS (f nchar(10));
+comment on type test_nchar_type is 'comment';
+drop type test_nchar_type;
+create trigger tr before update on nchar_test for each row when (OLD.f='a') EXECUTE PROCEDURE suppress_redundant_updates_trigger();
+copy nchar_test(f) to stdout;
+a         
+comment on COLUMN nchar_test.f is 'comment';
+analyze nchar_test(f);
+alter table nchar_test rename f to f_renamed;
+alter table nchar_test rename f_renamed to f;
+alter table nchar_test alter val type nchar(10);
+UPDATE nchar_test SET f='b';
+SELECT * from nchar_test;
+     f      |    val     
+------------+------------
+ b          | test_val  
+(1 row)
+
+delete from nchar_test where f='b';
+CREATE FUNCTION foo(f nchar(10)) returns setof nchar(10) as $$SELECT f from nchar_test;$$ LANGUAGE SQL;
+alter function foo(nchar(10)) reset all;
+drop function foo(nchar(10));
+drop table nchar_test;
+create function dummy_eq (nchar, nchar) returns boolean as 'SELECT $1=$2;' LANGUAGE SQL;
+CREATE OPERATOR === (LEFTARG = nchar, RIGHTARG = nchar, PROCEDURE = dummy_eq);
+alter operator === (nchar,nchar) set schema pg_catalog;
+DROP OPERATOR  === (nchar,nchar);
+drop function dummy_eq (nchar, nchar);
+create cast (nchar as bytea) with FUNCTION nbpcharsend(nchar);
+drop cast (nchar as bytea);
+create foreign data wrapper dummy;
+CREATE SERVER foo FOREIGN DATA WRAPPER "dummy";
+create foreign table ft_nchar (f nchar(10), var varchar) server foo;
+alter foreign table ft_nchar alter var type nchar(10);
+alter foreign table ft_nchar rename column f to f_new;
+drop foreign table ft_nchar;
+drop SERVER foo;
+drop foreign data wrapper dummy;
diff -uNr postgresql-head-20131017/src/test/regress/expected/nvarchar.out postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar.out
--- postgresql-head-20131017/src/test/regress/expected/nvarchar.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,111 @@
+--
+-- NVARCHAR
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(1));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NVARCHAR_TBL (f1) VALUES (2);
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('3');
+-- zero-length nvarchar
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('');
+-- try nvarchar's of greater than 1 length
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character varying(1)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NVARCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       | 
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | 
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | A
+      | 1
+      | 2
+      | 3
+      | 
+(5 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | A
+     | 1
+     | 2
+     | 3
+     | 
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | c
+(1 row)
+
+SELECT '' AS two, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | c
+(2 rows)
+
+DROP TABLE NVARCHAR_TBL;
+--
+-- Now test longer arrays of nvarchar
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(4));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character varying(4)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NVARCHAR_TBL;
+ four |  f1  
+------+------
+      | a
+      | ab
+      | abcd
+      | abcd
+(4 rows)
+
diff -uNr postgresql-head-20131017/src/test/regress/expected/nvarchar_1.out postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar_1.out
--- postgresql-head-20131017/src/test/regress/expected/nvarchar_1.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar_1.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,111 @@
+--
+-- NVARCHAR
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(1));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NVARCHAR_TBL (f1) VALUES (2);
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('3');
+-- zero-length nvarchar
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('');
+-- try nvarchar's of greater than 1 length
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character varying(1)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NVARCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       | 
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | 
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | 1
+      | 2
+      | 3
+      | 
+(4 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | 1
+     | 2
+     | 3
+     | 
+(5 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | A
+     | c
+(2 rows)
+
+SELECT '' AS two, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | A
+     | c
+(3 rows)
+
+DROP TABLE NVARCHAR_TBL;
+--
+-- Now test longer arrays of nvarchar
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(4));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character varying(4)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NVARCHAR_TBL;
+ four |  f1  
+------+------
+      | a
+      | ab
+      | abcd
+      | abcd
+(4 rows)
+
diff -uNr postgresql-head-20131017/src/test/regress/expected/nvarchar_2.out postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar_2.out
--- postgresql-head-20131017/src/test/regress/expected/nvarchar_2.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar_2.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,111 @@
+--
+-- NVARCHAR
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(1));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('A');
+-- any of the following three input formats are acceptable
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('1');
+INSERT INTO NVARCHAR_TBL (f1) VALUES (2);
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('3');
+-- zero-length nvarchar
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('');
+-- try nvarchar's of greater than 1 length
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('cd');
+ERROR:  value too long for type character varying(1)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('c     ');
+SELECT '' AS seven, * FROM NVARCHAR_TBL;
+ seven | f1 
+-------+----
+       | a
+       | A
+       | 1
+       | 2
+       | 3
+       | 
+       | c
+(7 rows)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <> 'a';
+ six | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | 
+     | c
+(6 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 = 'a';
+ one | f1 
+-----+----
+     | a
+(1 row)
+
+SELECT '' AS five, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 < 'a';
+ five | f1 
+------+----
+      | 
+(1 row)
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <= 'a';
+ six | f1 
+-----+----
+     | a
+     | 
+(2 rows)
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 > 'a';
+ one | f1 
+-----+----
+     | A
+     | 1
+     | 2
+     | 3
+     | c
+(5 rows)
+
+SELECT '' AS two, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 >= 'a';
+ two | f1 
+-----+----
+     | a
+     | A
+     | 1
+     | 2
+     | 3
+     | c
+(6 rows)
+
+DROP TABLE NVARCHAR_TBL;
+--
+-- Now test longer arrays of nvarchar
+--
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(4));
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcde');
+ERROR:  value too long for type character varying(4)
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd    ');
+SELECT '' AS four, * FROM NVARCHAR_TBL;
+ four |  f1  
+------+------
+      | a
+      | ab
+      | abcd
+      | abcd
+(4 rows)
+
diff -uNr postgresql-head-20131017/src/test/regress/expected/nvarchar_misc.out postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar_misc.out
--- postgresql-head-20131017/src/test/regress/expected/nvarchar_misc.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar_misc.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,126 @@
+select N'a '=N'a';
+ ?column? 
+----------
+ f
+(1 row)
+
+select N'a '='a';
+ ?column? 
+----------
+ f
+(1 row)
+
+select N'a'='a';
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a '='a'::char(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a '='a'::nchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a '='a'::varchar(1);
+ ?column? 
+----------
+ f
+(1 row)
+
+select N'a '='a'::nvarchar(1);
+ ?column? 
+----------
+ f
+(1 row)
+
+select N'a'='a'::nvarchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a'='a'::varchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a'='a'::char(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select N'a'='a'::nchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nchar(10)='a'::char(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nchar(10)='a'::nchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nchar(10)='a'::varchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nchar(10)='a'::nvarchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nvarchar(10)='a'::varchar(1);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nvarchar(10)='a'::varchar(10);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a'::nvarchar(10)='a '::varchar(10);
+ ?column? 
+----------
+ f
+(1 row)
+
+select 'a'::nvarchar(10)='a '::char(10);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a '::nchar(10)='a  '::nchar(5);
+ ?column? 
+----------
+ t
+(1 row)
+
+select 'a '::nchar(10)='a  '::char(5);
+ ?column? 
+----------
+ t
+(1 row)
+
diff -uNr postgresql-head-20131017/src/test/regress/expected/nvarchar_test.out postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar_test.out
--- postgresql-head-20131017/src/test/regress/expected/nvarchar_test.out	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/expected/nvarchar_test.out	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,101 @@
+CREATE AGGREGATE test_nvarchar_agg (nvarchar) ( sfunc = array_append, stype = nvarchar[], initcond = '{}');
+alter aggregate test_nvarchar_agg (nvarchar) rename to test_nvarchar_aggregate;
+drop aggregate test_nvarchar_aggregate(nvarchar);
+create domain test_nvarchar_domain as nvarchar;
+drop domain test_nvarchar_domain;
+create table nvarchar_test (f nvarchar, val varchar);
+create index i on nvarchar_test(f);
+vacuum analyze nvarchar_test(f);
+insert into nvarchar_test values ('a', 'test_val') returning (f);
+ f 
+---
+ a
+(1 row)
+
+SELECT f from nvarchar_test;
+ f 
+---
+ a
+(1 row)
+
+select f into nvarchar_test1 from nvarchar_test;
+select * from nvarchar_test1;
+ f 
+---
+ a
+(1 row)
+
+drop table nvarchar_test1;
+select f from nvarchar_test;
+ f 
+---
+ a
+(1 row)
+
+select val from nvarchar_test where f='a';
+   val    
+----------
+ test_val
+(1 row)
+
+prepare stmt AS select f from nvarchar_test;
+prepare stmt1(nvarchar) AS select val from nvarchar_test where f=$1;
+deallocate stmt;
+deallocate stmt1;
+prepare stmt(varchar) AS select val from nvarchar_test where f=$1;
+execute stmt('a');
+   val    
+----------
+ test_val
+(1 row)
+
+deallocate stmt;
+begin;declare c cursor for select f from nvarchar_test;commit;
+create view v as select f from nvarchar_test;
+select * from v;
+ f 
+---
+ a
+(1 row)
+
+alter view v alter column f set default 'a';
+drop view v;
+do $$begin perform f from nvarchar_test; end $$;
+CREATE TYPE test_nvarchar_type AS (f nvarchar);
+comment on type test_nvarchar_type is 'comment';
+drop type test_nvarchar_type;
+create trigger tr before update on nvarchar_test for each row when (OLD.f='a') EXECUTE PROCEDURE suppress_redundant_updates_trigger();
+copy nvarchar_test(f) to stdout;
+a
+comment on COLUMN nvarchar_test.f is 'comment';
+analyze nvarchar_test(f);
+alter table nvarchar_test rename f to f_renamed;
+alter table nvarchar_test rename f_renamed to f;
+alter table nvarchar_test alter val type nvarchar;
+UPDATE nvarchar_test SET f='b';
+SELECT * from nvarchar_test;
+ f |   val    
+---+----------
+ b | test_val
+(1 row)
+
+delete from nvarchar_test where f='b';
+CREATE FUNCTION foo(f nvarchar) returns setof nvarchar as $$SELECT f from nvarchar_test;$$ LANGUAGE SQL;
+alter function foo(nvarchar) reset all;
+drop function foo(nvarchar);
+drop table nvarchar_test;
+create function dummy_eq (nvarchar, nvarchar) returns boolean as 'SELECT $1=$2;' LANGUAGE SQL;
+CREATE OPERATOR === (LEFTARG = nvarchar, RIGHTARG = nvarchar, PROCEDURE = dummy_eq);
+alter operator === (nvarchar,nvarchar) set schema pg_catalog;
+DROP OPERATOR  === (nvarchar,nvarchar);
+drop function dummy_eq (nvarchar, nvarchar);
+create cast (name as nvarchar) with FUNCTION  "nvarchar"(name);
+drop cast (name as nvarchar);
+create foreign data wrapper dummy;
+CREATE SERVER foo FOREIGN DATA WRAPPER "dummy";
+create foreign table ft_nvarchar (f nvarchar, var varchar) server foo;
+alter foreign table ft_nvarchar alter var type nvarchar;
+alter foreign table ft_nvarchar rename column f to f_new;
+drop foreign table ft_nvarchar;
+drop SERVER foo;
+drop foreign data wrapper dummy;
diff -uNr postgresql-head-20131017/src/test/regress/expected/opr_sanity.out postgresql-head-20131017-nchar/src/test/regress/expected/opr_sanity.out
--- postgresql-head-20131017/src/test/regress/expected/opr_sanity.out	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/expected/opr_sanity.out	2013-10-31 13:47:56.000000000 +1100
@@ -154,8 +154,11 @@
  prorettype | prorettype 
 ------------+------------
          25 |       1043
+         25 |       6001
+       1042 |       5001
+       1043 |       6001
        1114 |       1184
-(2 rows)
+(5 rows)
 
 SELECT DISTINCT p1.proargtypes[0], p2.proargtypes[0]
 FROM pg_proc AS p1, pg_proc AS p2
@@ -171,10 +174,14 @@
 -------------+-------------
           25 |        1042
           25 |        1043
+          25 |        5001
+          25 |        6001
+        1042 |        5001
+        1043 |        6001
         1114 |        1184
         1560 |        1562
         2277 |        2283
-(5 rows)
+(9 rows)
 
 SELECT DISTINCT p1.proargtypes[1], p2.proargtypes[1]
 FROM pg_proc AS p1, pg_proc AS p2
@@ -189,10 +196,11 @@
  proargtypes | proargtypes 
 -------------+-------------
           23 |          28
+        1042 |        5001
         1114 |        1184
         1560 |        1562
         2277 |        2283
-(4 rows)
+(5 rows)
 
 SELECT DISTINCT p1.proargtypes[2], p2.proargtypes[2]
 FROM pg_proc AS p1, pg_proc AS p2
@@ -434,16 +442,20 @@
                 WHERE k.castmethod = 'b' AND
                     k.castsource = c.casttarget AND
                     k.casttarget = c.castsource);
-    castsource     |    casttarget     | castfunc | castcontext 
--------------------+-------------------+----------+-------------
- text              | character         |        0 | i
- character varying | character         |        0 | i
- pg_node_tree      | text              |        0 | i
- cidr              | inet              |        0 | i
- xml               | text              |        0 | a
- xml               | character varying |        0 | a
- xml               | character         |        0 | a
-(7 rows)
+         castsource         |     casttarget     | castfunc | castcontext 
+----------------------------+--------------------+----------+-------------
+ text                       | character          |        0 | i
+ character varying          | character          |        0 | i
+ text                       | national character |        0 | i
+ national character varying | character          |        0 | i
+ character varying          | national character |        0 | i
+ national character varying | national character |        0 | i
+ pg_node_tree               | text               |        0 | i
+ cidr                       | inet               |        0 | i
+ xml                        | text               |        0 | a
+ xml                        | character varying  |        0 | a
+ xml                        | character          |        0 | a
+(11 rows)
 
 -- **************** pg_operator ****************
 -- Look for illegal values in pg_operator fields.
@@ -1228,7 +1240,9 @@
                  p3.amopstrategy = 1);
  amoplefttype | amoplefttype 
 --------------+--------------
-(0 rows)
+         1042 |         5001
+         5001 |         1042
+(2 rows)
 
 -- **************** pg_amproc ****************
 -- Look for illegal values in pg_amproc fields
@@ -1289,9 +1303,11 @@
 GROUP BY amname, amsupport, opcname, amprocfamily
 HAVING (count(*) != amsupport AND count(*) != amsupport - 1)
     OR amprocfamily IS NULL;
- amname | opcname | count 
---------+---------+-------
-(0 rows)
+ amname |    opcname    | count 
+--------+---------------+-------
+ gin    | _nbpchar_ops  |     1
+ gin    | _nvarchar_ops |     1
+(2 rows)
 
 -- Unfortunately, we can't check the amproc link very well because the
 -- signature of the function may be different for different support routines
@@ -1332,9 +1348,11 @@
           THEN prorettype != 'void'::regtype OR proretset OR pronargs != 1
                OR proargtypes[0] != 'internal'::regtype
           ELSE true END);
- amprocfamily | amprocnum | oid | proname | opfname 
---------------+-----------+-----+---------+---------
-(0 rows)
+ amprocfamily | amprocnum | oid  |       proname        |      opfname       
+--------------+-----------+------+----------------------+--------------------
+          426 |         1 | 1078 | bpcharcmp            | bpchar_ops
+         2097 |         1 | 2180 | btbpchar_pattern_cmp | bpchar_pattern_ops
+(2 rows)
 
 -- For hash we can also do a little better: the support routines must be
 -- of the form hash(lefttype) returns int4.  There are several cases where
diff -uNr postgresql-head-20131017/src/test/regress/expected/sanity_check.out postgresql-head-20131017-nchar/src/test/regress/expected/sanity_check.out
--- postgresql-head-20131017/src/test/regress/expected/sanity_check.out	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/expected/sanity_check.out	2013-10-17 14:05:30.000000000 +1100
@@ -69,6 +69,7 @@
  lseg_tbl                | f
  main_table              | f
  money_data              | f
+ nchar_tbl               | f
  num_data                | f
  num_exp_add             | t
  num_exp_div             | t
@@ -80,6 +81,7 @@
  num_exp_sub             | t
  num_input_test          | f
  num_result              | f
+ nvarchar_tbl            | f
  onek                    | t
  onek2                   | t
  path_tbl                | f
@@ -167,7 +169,7 @@
  timetz_tbl              | f
  tinterval_tbl           | f
  varchar_tbl             | f
-(156 rows)
+(158 rows)
 
 --
 -- another sanity check: every system catalog that has OIDs should have
diff -uNr postgresql-head-20131017/src/test/regress/expected/strings.out postgresql-head-20131017-nchar/src/test/regress/expected/strings.out
--- postgresql-head-20131017/src/test/regress/expected/strings.out	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/expected/strings.out	2013-10-17 11:56:22.000000000 +1100
@@ -203,6 +203,15 @@
  abcd
 (4 rows)
 
+SELECT CAST(f1 AS text) AS "text(nchar)" FROM NCHAR_TBL;
+ text(nchar) 
+-------------
+ a
+ ab
+ abcd
+ abcd
+(4 rows)
+
 SELECT CAST(f1 AS text) AS "text(varchar)" FROM VARCHAR_TBL;
  text(varchar) 
 ---------------
@@ -212,6 +221,15 @@
  abcd
 (4 rows)
 
+SELECT CAST(f1 AS text) AS "text(nvarchar)" FROM NVARCHAR_TBL;
+ text(nvarchar) 
+----------------
+ a
+ ab
+ abcd
+ abcd
+(4 rows)
+
 SELECT CAST(name 'namefield' AS text) AS "text(name)";
  text(name) 
 ------------
@@ -227,6 +245,14 @@
 (2 rows)
 
 -- note: implicit-cast case is tested in char.sql
+SELECT CAST(f1 AS nchar(10)) AS "nchar(text)" FROM TEXT_TBL;
+ nchar(text) 
+-------------
+ doh!      
+ hi de ho n
+(2 rows)
+
+-- note: implicit-cast case is tested in nchar.sql
 SELECT CAST(f1 AS char(20)) AS "char(text)" FROM TEXT_TBL;
       char(text)      
 ----------------------
@@ -234,6 +260,13 @@
  hi de ho neighbor   
 (2 rows)
 
+SELECT CAST(f1 AS nchar(20)) AS "nchar(text)" FROM TEXT_TBL;
+     nchar(text)      
+----------------------
+ doh!                
+ hi de ho neighbor   
+(2 rows)
+
 SELECT CAST(f1 AS char(10)) AS "char(varchar)" FROM VARCHAR_TBL;
  char(varchar) 
 ---------------
@@ -243,12 +276,27 @@
  abcd      
 (4 rows)
 
+SELECT CAST(f1 AS nchar(10)) AS "nchar(nvarchar)" FROM NVARCHAR_TBL;
+ nchar(nvarchar) 
+-----------------
+ a         
+ ab        
+ abcd      
+ abcd      
+(4 rows)
+
 SELECT CAST(name 'namefield' AS char(10)) AS "char(name)";
  char(name) 
 ------------
  namefield 
 (1 row)
 
+SELECT CAST(name 'namefield' AS nchar(10)) AS "nchar(name)";
+ nchar(name) 
+-------------
+ namefield 
+(1 row)
+
 SELECT CAST(f1 AS varchar) AS "varchar(text)" FROM TEXT_TBL;
    varchar(text)   
 -------------------
@@ -256,6 +304,13 @@
  hi de ho neighbor
 (2 rows)
 
+SELECT CAST(f1 AS nvarchar) AS "nvarchar(text)" FROM TEXT_TBL;
+  nvarchar(text)   
+-------------------
+ doh!
+ hi de ho neighbor
+(2 rows)
+
 SELECT CAST(f1 AS varchar) AS "varchar(char)" FROM CHAR_TBL;
  varchar(char) 
 ---------------
@@ -265,12 +320,27 @@
  abcd
 (4 rows)
 
+SELECT CAST(f1 AS nvarchar) AS "nvarchar(nchar)" FROM NCHAR_TBL;
+ nvarchar(nchar) 
+-----------------
+ a
+ ab
+ abcd
+ abcd
+(4 rows)
+
 SELECT CAST(name 'namefield' AS varchar) AS "varchar(name)";
  varchar(name) 
 ---------------
  namefield
 (1 row)
 
+SELECT CAST(name 'namefield' AS nvarchar) AS "nvarchar(name)";
+ nvarchar(name) 
+----------------
+ namefield
+(1 row)
+
 --
 -- test SQL string functions
 -- E### and T### are feature reference numbers from SQL99
@@ -1105,18 +1175,36 @@
  characters and text
 (1 row)
 
+SELECT nchar(20) 'ncharacters' || ' and text' AS "Concat nchar to unknown type";
+ Concat nchar to unknown type 
+------------------------------
+ ncharacters and text
+(1 row)
+
 SELECT text 'text' || char(20) ' and characters' AS "Concat text to char";
  Concat text to char 
 ---------------------
  text and characters
 (1 row)
 
+SELECT text 'text' || nchar(20) ' and ncharacters' AS "Concat text to nchar";
+ Concat text to nchar 
+----------------------
+ text and ncharacters
+(1 row)
+
 SELECT text 'text' || varchar ' and varchar' AS "Concat text to varchar";
  Concat text to varchar 
 ------------------------
  text and varchar
 (1 row)
 
+SELECT text 'text' || nvarchar ' and nvarchar' AS "Concat text to nvarchar";
+ Concat text to nvarchar 
+-------------------------
+ text and nvarchar
+(1 row)
+
 --
 -- test substr with toasted text values
 --
diff -uNr postgresql-head-20131017/src/test/regress/expected/union.out postgresql-head-20131017-nchar/src/test/regress/expected/union.out
--- postgresql-head-20131017/src/test/regress/expected/union.out	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/expected/union.out	2013-10-17 11:56:22.000000000 +1100
@@ -257,6 +257,48 @@
  hi de ho neighbor
 (5 rows)
 
+--NVARCHAR specific
+SELECT f1 AS three FROM NVARCHAR_TBL
+UNION
+SELECT CAST(f1 AS nvarchar) FROM NCHAR_TBL
+ORDER BY 1;
+ three 
+-------
+ a
+ ab
+ abcd
+(3 rows)
+
+SELECT f1 AS eight FROM NVARCHAR_TBL
+UNION ALL
+SELECT f1 FROM NCHAR_TBL;
+ eight 
+-------
+ a
+ ab
+ abcd
+ abcd
+ a
+ ab
+ abcd
+ abcd
+(8 rows)
+
+SELECT f1 AS five FROM TEXT_TBL
+UNION
+SELECT f1 FROM NVARCHAR_TBL
+UNION
+SELECT TRIM(TRAILING FROM f1) FROM NCHAR_TBL
+ORDER BY 1;
+       five        
+-------------------
+ a
+ ab
+ abcd
+ doh!
+ hi de ho neighbor
+(5 rows)
+
 --
 -- INTERSECT and EXCEPT
 --
diff -uNr postgresql-head-20131017/src/test/regress/expected/update.out postgresql-head-20131017-nchar/src/test/regress/expected/update.out
--- postgresql-head-20131017/src/test/regress/expected/update.out	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/expected/update.out	2013-10-17 11:56:22.000000000 +1100
@@ -4,40 +4,66 @@
 CREATE TABLE update_test (
     a   INT DEFAULT 10,
     b   INT,
-    c   TEXT
+    c   TEXT,
+    d   nchar(10),
+    e   nvarchar
 );
-INSERT INTO update_test VALUES (5, 10, 'foo');
+INSERT INTO update_test VALUES (5, 10, 'foo', 'a', 'a');
 INSERT INTO update_test(b, a) VALUES (15, 10);
 SELECT * FROM update_test;
- a  | b  |  c  
-----+----+-----
-  5 | 10 | foo
- 10 | 15 | 
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+  5 | 10 | foo | a          | a
+ 10 | 15 |     |            | 
+(2 rows)
+
+UPDATE update_test SET d='b', e='b';
+SELECT * FROM update_test;
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+  5 | 10 | foo | b          | b
+ 10 | 15 |     | b          | b
+(2 rows)
+
+UPDATE update_test SET d='c' where e='b';
+SELECT * FROM update_test;
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+  5 | 10 | foo | c          | b
+ 10 | 15 |     | c          | b
+(2 rows)
+
+UPDATE update_test SET d=N'e' where e=N'b';
+SELECT * FROM update_test;
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+  5 | 10 | foo | e          | b
+ 10 | 15 |     | e          | b
 (2 rows)
 
 UPDATE update_test SET a = DEFAULT, b = DEFAULT;
 SELECT * FROM update_test;
- a  | b |  c  
-----+---+-----
- 10 |   | foo
- 10 |   | 
+ a  | b |  c  |     d      | e 
+----+---+-----+------------+---
+ 10 |   | foo | e          | b
+ 10 |   |     | e          | b
 (2 rows)
 
 -- aliases for the UPDATE target table
 UPDATE update_test AS t SET b = 10 WHERE t.a = 10;
 SELECT * FROM update_test;
- a  | b  |  c  
-----+----+-----
- 10 | 10 | foo
- 10 | 10 | 
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+ 10 | 10 | foo | e          | b
+ 10 | 10 |     | e          | b
 (2 rows)
 
 UPDATE update_test t SET b = t.b + 10 WHERE t.a = 10;
 SELECT * FROM update_test;
- a  | b  |  c  
-----+----+-----
- 10 | 20 | foo
- 10 | 20 | 
+ a  | b  |  c  |     d      | e 
+----+----+-----+------------+---
+ 10 | 20 | foo | e          | b
+ 10 | 20 |     | e          | b
 (2 rows)
 
 --
@@ -46,10 +72,10 @@
 UPDATE update_test SET a=v.i FROM (VALUES(100, 20)) AS v(i, j)
   WHERE update_test.b = v.j;
 SELECT * FROM update_test;
-  a  | b  |  c  
------+----+-----
- 100 | 20 | foo
- 100 | 20 | 
+  a  | b  |  c  |     d      | e 
+-----+----+-----+------------+---
+ 100 | 20 | foo | e          | b
+ 100 | 20 |     | e          | b
 (2 rows)
 
 --
@@ -57,18 +83,18 @@
 --
 UPDATE update_test SET (c,b,a) = ('bugle', b+11, DEFAULT) WHERE c = 'foo';
 SELECT * FROM update_test;
-  a  | b  |   c   
------+----+-------
- 100 | 20 | 
-  10 | 31 | bugle
+  a  | b  |   c   |     d      | e 
+-----+----+-------+------------+---
+ 100 | 20 |       | e          | b
+  10 | 31 | bugle | e          | b
 (2 rows)
 
 UPDATE update_test SET (c,b) = ('car', a+b), a = a + 1 WHERE a = 10;
 SELECT * FROM update_test;
-  a  | b  |  c  
------+----+-----
- 100 | 20 | 
-  11 | 41 | car
+  a  | b  |  c  |     d      | e 
+-----+----+-----+------------+---
+ 100 | 20 |     | e          | b
+  11 | 41 | car | e          | b
 (2 rows)
 
 -- fail, multi assignment to same column:
diff -uNr postgresql-head-20131017/src/test/regress/output/misc.source postgresql-head-20131017-nchar/src/test/regress/output/misc.source
--- postgresql-head-20131017/src/test/regress/output/misc.source	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/output/misc.source	2013-10-17 14:06:41.000000000 +1100
@@ -643,6 +643,7 @@
  lseg_tbl
  main_table
  money_data
+ nchar_tbl
  num_data
  num_exp_add
  num_exp_div
@@ -654,6 +655,7 @@
  num_exp_sub
  num_input_test
  num_result
+ nvarchar_tbl
  onek
  onek2
  path_tbl
@@ -697,7 +699,7 @@
  tvvmv
  varchar_tbl
  xacttest
-(119 rows)
+(121 rows)
 
 SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer')));
  name 
diff -uNr postgresql-head-20131017/src/test/regress/parallel_schedule postgresql-head-20131017-nchar/src/test/regress/parallel_schedule
--- postgresql-head-20131017/src/test/regress/parallel_schedule	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/parallel_schedule	2013-10-22 15:16:06.000000000 +1100
@@ -13,7 +13,7 @@
 # ----------
 # The first group of parallel tests
 # ----------
-test: boolean char name varchar text int2 int4 int8 oid float4 float8 bit numeric txid uuid enum money rangetypes
+test: boolean char name varchar nchar nvarchar text int2 int4 int8 oid float4 float8 bit numeric txid uuid enum money rangetypes nvarchar_misc
 
 # Depends on things setup during char, varchar and text
 test: strings
@@ -109,3 +109,8 @@
 
 # run stats by itself because its delay may be insufficient under heavy load
 test: stats
+
+test: n_test
+test: nchar_test
+test: nvarchar_test
+
diff -uNr postgresql-head-20131017/src/test/regress/serial_schedule postgresql-head-20131017-nchar/src/test/regress/serial_schedule
--- postgresql-head-20131017/src/test/regress/serial_schedule	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/serial_schedule	2013-10-22 14:38:27.000000000 +1100
@@ -5,6 +5,9 @@
 test: char
 test: name
 test: varchar
+test: nvarchar
+test: nvarchar_misc
+test: nchar
 test: text
 test: int2
 test: int4
@@ -141,3 +144,7 @@
 test: with
 test: xml
 test: stats
+test: n_test
+test: nchar_test
+test: nvarchar_test
+
diff -uNr postgresql-head-20131017/src/test/regress/sql/n_test.sql postgresql-head-20131017-nchar/src/test/regress/sql/n_test.sql
--- postgresql-head-20131017/src/test/regress/sql/n_test.sql	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/sql/n_test.sql	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,57 @@
+
+create domain test1_n_domain as varchar default (N'a');
+create domain test2_n_domain as varchar CHECK (value <> N'b');
+
+alter domain test1_n_domain set default (N'b');
+
+values (N'a');
+
+create table n_test (f varchar default N'a', val varchar);
+
+insert into n_test values (N'a', 'test_val') returning f;
+SELECT f from n_test;
+
+select N'a' as f into n_test1 from n_test;
+select * from n_test1;
+
+select N'a' as f from n_test;
+select val from n_test where f=N'a';
+
+copy (select N'a' from n_test) to stdout;
+
+alter table n_test alter f set default N'a';
+
+prepare stmt AS select N'a' as f from n_test;
+deallocate stmt;
+
+create index i on n_test((f||N'a'));
+
+create trigger tr before update on n_test for each row when (OLD.f=N'a') EXECUTE PROCEDURE suppress_redundant_updates_trigger();
+
+do $$begin perform N'a' from n_test; end $$;
+
+create view v as select N'a' as f from n_test;
+select * from v;
+
+alter view v alter column f set default N'a';
+
+begin;declare c cursor for select N'a' as f from n_test;commit;
+
+prepare stmt(varchar) AS select val from n_test where f=$1;
+execute stmt(N'a');
+deallocate stmt;
+
+UPDATE n_test SET f=N'b';
+SELECT * from n_test;
+
+delete from n_test where f=N'b';
+
+CREATE FUNCTION foo(f varchar default N'a') returns setof varchar as $$SELECT N'a' from n_test;$$ LANGUAGE SQL;
+
+drop view v;
+drop table n_test;
+drop table n_test1;
+drop function foo(varchar);
+drop domain test1_n_domain;
+drop domain test2_n_domain;
+
diff -uNr postgresql-head-20131017/src/test/regress/sql/nchar.sql postgresql-head-20131017-nchar/src/test/regress/sql/nchar.sql
--- postgresql-head-20131017/src/test/regress/sql/nchar.sql	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/sql/nchar.sql	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,75 @@
+--
+-- NCHAR
+--
+
+-- fixed-length by value
+-- internally passed by value if <= 4 bytes in storage
+
+SELECT nchar 'c' = nchar 'c' AS true;
+
+--
+-- Build a table for testing
+--
+
+CREATE TABLE NCHAR_TBL(f1 nchar);
+
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+
+INSERT INTO NCHAR_TBL (f1) VALUES ('A');
+
+-- any of the following three input formats are acceptable
+INSERT INTO NCHAR_TBL (f1) VALUES ('1');
+
+INSERT INTO NCHAR_TBL (f1) VALUES (2);
+
+INSERT INTO NCHAR_TBL (f1) VALUES ('3');
+
+-- zero-length nchar
+INSERT INTO NCHAR_TBL (f1) VALUES ('');
+
+-- try nchar's of greater than 1 length
+INSERT INTO NCHAR_TBL (f1) VALUES ('cd');
+INSERT INTO NCHAR_TBL (f1) VALUES ('c     ');
+
+
+SELECT '' AS seven, * FROM NCHAR_TBL;
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <> 'a';
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 = 'a';
+
+SELECT '' AS five, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 < 'a';
+
+SELECT '' AS six, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 <= 'a';
+
+SELECT '' AS one, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 > 'a';
+
+SELECT '' AS two, c.*
+   FROM NCHAR_TBL c
+   WHERE c.f1 >= 'a';
+
+DROP TABLE NCHAR_TBL;
+
+--
+-- Now test longer arrays of nchar
+--
+
+CREATE TABLE NCHAR_TBL(f1 nchar(4));
+
+INSERT INTO NCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcde');
+INSERT INTO NCHAR_TBL (f1) VALUES ('abcd    ');
+
+SELECT '' AS four, * FROM NCHAR_TBL;
diff -uNr postgresql-head-20131017/src/test/regress/sql/nchar_test.sql postgresql-head-20131017-nchar/src/test/regress/sql/nchar_test.sql
--- postgresql-head-20131017/src/test/regress/sql/nchar_test.sql	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/sql/nchar_test.sql	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,101 @@
+
+CREATE AGGREGATE test_nchar_agg (nchar(10)) ( sfunc = array_append, stype = nchar(10)[], initcond = '{}');
+
+alter aggregate test_nchar_agg (nchar(10)) rename to test_nchar_aggregate;
+
+drop aggregate test_nchar_aggregate(nchar(10));
+
+create domain test_nchar_domain as nchar(10);
+
+drop domain test_nchar_domain;
+
+create table nchar_test (f nchar(10), val varchar);
+
+create index i on nchar_test(f);
+
+vacuum analyze nchar_test(f);
+
+insert into nchar_test values ('a', 'test_val') returning (f);
+SELECT f from nchar_test;
+
+select f into nchar_test1 from nchar_test;
+select * from nchar_test1;
+
+drop table nchar_test1;
+
+select f from nchar_test;
+select val from nchar_test where f='a';
+
+prepare stmt AS select f from nchar_test;
+prepare stmt1(nchar(10)) AS select val from nchar_test where f=$1;
+
+deallocate stmt;
+deallocate stmt1;
+
+prepare stmt(varchar) AS select val from nchar_test where f=$1;
+execute stmt('a');
+
+deallocate stmt;
+
+begin;declare c cursor for select f from nchar_test;commit;
+
+create view v as select f from nchar_test;
+select * from v;
+
+alter view v alter column f set default 'a';
+
+drop view v;
+
+do $$begin perform f from nchar_test; end $$;
+
+CREATE TYPE test_nchar_type AS (f nchar(10));
+
+comment on type test_nchar_type is 'comment';
+
+drop type test_nchar_type;
+
+create trigger tr before update on nchar_test for each row when (OLD.f='a') EXECUTE PROCEDURE suppress_redundant_updates_trigger();
+
+copy nchar_test(f) to stdout;
+
+comment on COLUMN nchar_test.f is 'comment';
+
+analyze nchar_test(f);
+
+alter table nchar_test rename f to f_renamed;
+alter table nchar_test rename f_renamed to f;
+
+alter table nchar_test alter val type nchar(10);
+
+UPDATE nchar_test SET f='b';
+SELECT * from nchar_test;
+
+delete from nchar_test where f='b';
+
+CREATE FUNCTION foo(f nchar(10)) returns setof nchar(10) as $$SELECT f from nchar_test;$$ LANGUAGE SQL;
+
+alter function foo(nchar(10)) reset all;
+
+drop function foo(nchar(10));
+
+drop table nchar_test;
+
+create function dummy_eq (nchar, nchar) returns boolean as 'SELECT $1=$2;' LANGUAGE SQL;
+CREATE OPERATOR === (LEFTARG = nchar, RIGHTARG = nchar, PROCEDURE = dummy_eq);
+alter operator === (nchar,nchar) set schema pg_catalog;
+DROP OPERATOR  === (nchar,nchar);
+drop function dummy_eq (nchar, nchar);
+
+create cast (nchar as bytea) with FUNCTION nbpcharsend(nchar);
+drop cast (nchar as bytea);
+
+create foreign data wrapper dummy;
+CREATE SERVER foo FOREIGN DATA WRAPPER "dummy";
+create foreign table ft_nchar (f nchar(10), var varchar) server foo;
+alter foreign table ft_nchar alter var type nchar(10);
+alter foreign table ft_nchar rename column f to f_new;
+drop foreign table ft_nchar;
+drop SERVER foo;
+drop foreign data wrapper dummy;
+
+
diff -uNr postgresql-head-20131017/src/test/regress/sql/nvarchar.sql postgresql-head-20131017-nchar/src/test/regress/sql/nvarchar.sql
--- postgresql-head-20131017/src/test/regress/sql/nvarchar.sql	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/sql/nvarchar.sql	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,66 @@
+--
+-- NVARCHAR
+--
+
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(1));
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('A');
+
+-- any of the following three input formats are acceptable
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('1');
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES (2);
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('3');
+
+-- zero-length nvarchar
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('');
+
+-- try nvarchar's of greater than 1 length
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('cd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('c     ');
+
+
+SELECT '' AS seven, * FROM NVARCHAR_TBL;
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <> 'a';
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 = 'a';
+
+SELECT '' AS five, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 < 'a';
+
+SELECT '' AS six, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 <= 'a';
+
+SELECT '' AS one, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 > 'a';
+
+SELECT '' AS two, c.*
+   FROM NVARCHAR_TBL c
+   WHERE c.f1 >= 'a';
+
+DROP TABLE NVARCHAR_TBL;
+
+--
+-- Now test longer arrays of nvarchar
+--
+
+CREATE TABLE NVARCHAR_TBL(f1 nvarchar(4));
+
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('a');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('ab');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcde');
+INSERT INTO NVARCHAR_TBL (f1) VALUES ('abcd    ');
+
+SELECT '' AS four, * FROM NVARCHAR_TBL;
diff -uNr postgresql-head-20131017/src/test/regress/sql/nvarchar_misc.sql postgresql-head-20131017-nchar/src/test/regress/sql/nvarchar_misc.sql
--- postgresql-head-20131017/src/test/regress/sql/nvarchar_misc.sql	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/sql/nvarchar_misc.sql	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,22 @@
+select N'a '=N'a';
+select N'a '='a';
+select N'a'='a';
+select N'a '='a'::char(1);
+select N'a '='a'::nchar(1);
+select N'a '='a'::varchar(1);
+select N'a '='a'::nvarchar(1);
+select N'a'='a'::nvarchar(1);
+select N'a'='a'::varchar(1);
+select N'a'='a'::char(1);
+select N'a'='a'::nchar(1);
+select 'a'::nchar(10)='a'::char(1);
+select 'a'::nchar(10)='a'::nchar(1);
+select 'a'::nchar(10)='a'::varchar(1);
+select 'a'::nchar(10)='a'::nvarchar(1);
+select 'a'::nvarchar(10)='a'::varchar(1);
+select 'a'::nvarchar(10)='a'::varchar(10);
+select 'a'::nvarchar(10)='a '::varchar(10);
+select 'a'::nvarchar(10)='a '::char(10);
+select 'a '::nchar(10)='a  '::nchar(5);
+select 'a '::nchar(10)='a  '::char(5);
+
diff -uNr postgresql-head-20131017/src/test/regress/sql/nvarchar_test.sql postgresql-head-20131017-nchar/src/test/regress/sql/nvarchar_test.sql
--- postgresql-head-20131017/src/test/regress/sql/nvarchar_test.sql	1970-01-01 10:00:00.000000000 +1000
+++ postgresql-head-20131017-nchar/src/test/regress/sql/nvarchar_test.sql	2013-10-17 11:56:22.000000000 +1100
@@ -0,0 +1,101 @@
+
+CREATE AGGREGATE test_nvarchar_agg (nvarchar) ( sfunc = array_append, stype = nvarchar[], initcond = '{}');
+
+alter aggregate test_nvarchar_agg (nvarchar) rename to test_nvarchar_aggregate;
+
+drop aggregate test_nvarchar_aggregate(nvarchar);
+
+create domain test_nvarchar_domain as nvarchar;
+
+drop domain test_nvarchar_domain;
+
+create table nvarchar_test (f nvarchar, val varchar);
+
+create index i on nvarchar_test(f);
+
+vacuum analyze nvarchar_test(f);
+
+insert into nvarchar_test values ('a', 'test_val') returning (f);
+SELECT f from nvarchar_test;
+
+select f into nvarchar_test1 from nvarchar_test;
+select * from nvarchar_test1;
+
+drop table nvarchar_test1;
+
+select f from nvarchar_test;
+select val from nvarchar_test where f='a';
+
+prepare stmt AS select f from nvarchar_test;
+prepare stmt1(nvarchar) AS select val from nvarchar_test where f=$1;
+
+deallocate stmt;
+deallocate stmt1;
+
+prepare stmt(varchar) AS select val from nvarchar_test where f=$1;
+execute stmt('a');
+
+deallocate stmt;
+
+begin;declare c cursor for select f from nvarchar_test;commit;
+
+create view v as select f from nvarchar_test;
+select * from v;
+
+alter view v alter column f set default 'a';
+
+drop view v;
+
+do $$begin perform f from nvarchar_test; end $$;
+
+CREATE TYPE test_nvarchar_type AS (f nvarchar);
+
+comment on type test_nvarchar_type is 'comment';
+
+drop type test_nvarchar_type;
+
+create trigger tr before update on nvarchar_test for each row when (OLD.f='a') EXECUTE PROCEDURE suppress_redundant_updates_trigger();
+
+copy nvarchar_test(f) to stdout;
+
+comment on COLUMN nvarchar_test.f is 'comment';
+
+analyze nvarchar_test(f);
+
+alter table nvarchar_test rename f to f_renamed;
+alter table nvarchar_test rename f_renamed to f;
+
+alter table nvarchar_test alter val type nvarchar;
+
+UPDATE nvarchar_test SET f='b';
+SELECT * from nvarchar_test;
+
+delete from nvarchar_test where f='b';
+
+CREATE FUNCTION foo(f nvarchar) returns setof nvarchar as $$SELECT f from nvarchar_test;$$ LANGUAGE SQL;
+
+alter function foo(nvarchar) reset all;
+
+drop function foo(nvarchar);
+
+drop table nvarchar_test;
+
+create function dummy_eq (nvarchar, nvarchar) returns boolean as 'SELECT $1=$2;' LANGUAGE SQL;
+CREATE OPERATOR === (LEFTARG = nvarchar, RIGHTARG = nvarchar, PROCEDURE = dummy_eq);
+alter operator === (nvarchar,nvarchar) set schema pg_catalog;
+DROP OPERATOR  === (nvarchar,nvarchar);
+drop function dummy_eq (nvarchar, nvarchar);
+
+create cast (name as nvarchar) with FUNCTION  "nvarchar"(name);
+drop cast (name as nvarchar);
+
+create foreign data wrapper dummy;
+CREATE SERVER foo FOREIGN DATA WRAPPER "dummy";
+create foreign table ft_nvarchar (f nvarchar, var varchar) server foo;
+alter foreign table ft_nvarchar alter var type nvarchar;
+alter foreign table ft_nvarchar rename column f to f_new;
+drop foreign table ft_nvarchar;
+drop SERVER foo;
+drop foreign data wrapper dummy;
+
+
diff -uNr postgresql-head-20131017/src/test/regress/sql/strings.sql postgresql-head-20131017-nchar/src/test/regress/sql/strings.sql
--- postgresql-head-20131017/src/test/regress/sql/strings.sql	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/sql/strings.sql	2013-10-31 11:26:49.000000000 +1100
@@ -70,26 +70,36 @@
 --
 
 SELECT CAST(f1 AS text) AS "text(char)" FROM CHAR_TBL;
+SELECT CAST(f1 AS text) AS "text(nchar)" FROM NCHAR_TBL;
 
 SELECT CAST(f1 AS text) AS "text(varchar)" FROM VARCHAR_TBL;
+SELECT CAST(f1 AS text) AS "text(nvarchar)" FROM NVARCHAR_TBL;
 
 SELECT CAST(name 'namefield' AS text) AS "text(name)";
 
 -- since this is an explicit cast, it should truncate w/o error:
 SELECT CAST(f1 AS char(10)) AS "char(text)" FROM TEXT_TBL;
 -- note: implicit-cast case is tested in char.sql
+SELECT CAST(f1 AS nchar(10)) AS "nchar(text)" FROM TEXT_TBL;
+-- note: implicit-cast case is tested in nchar.sql
 
 SELECT CAST(f1 AS char(20)) AS "char(text)" FROM TEXT_TBL;
+SELECT CAST(f1 AS nchar(20)) AS "nchar(text)" FROM TEXT_TBL;
 
 SELECT CAST(f1 AS char(10)) AS "char(varchar)" FROM VARCHAR_TBL;
+SELECT CAST(f1 AS nchar(10)) AS "nchar(nvarchar)" FROM NVARCHAR_TBL;
 
 SELECT CAST(name 'namefield' AS char(10)) AS "char(name)";
+SELECT CAST(name 'namefield' AS nchar(10)) AS "nchar(name)";
 
 SELECT CAST(f1 AS varchar) AS "varchar(text)" FROM TEXT_TBL;
+SELECT CAST(f1 AS nvarchar) AS "nvarchar(text)" FROM TEXT_TBL;
 
 SELECT CAST(f1 AS varchar) AS "varchar(char)" FROM CHAR_TBL;
+SELECT CAST(f1 AS nvarchar) AS "nvarchar(nchar)" FROM NCHAR_TBL;
 
 SELECT CAST(name 'namefield' AS varchar) AS "varchar(name)";
+SELECT CAST(name 'namefield' AS nvarchar) AS "nvarchar(name)";
 
 --
 -- test SQL string functions
@@ -330,10 +340,13 @@
 SELECT text 'text' || ' and unknown' AS "Concat text to unknown type";
 
 SELECT char(20) 'characters' || ' and text' AS "Concat char to unknown type";
+SELECT nchar(20) 'ncharacters' || ' and text' AS "Concat nchar to unknown type";
 
 SELECT text 'text' || char(20) ' and characters' AS "Concat text to char";
+SELECT text 'text' || nchar(20) ' and ncharacters' AS "Concat text to nchar";
 
 SELECT text 'text' || varchar ' and varchar' AS "Concat text to varchar";
+SELECT text 'text' || nvarchar ' and nvarchar' AS "Concat text to nvarchar";
 
 --
 -- test substr with toasted text values
diff -uNr postgresql-head-20131017/src/test/regress/sql/union.sql postgresql-head-20131017-nchar/src/test/regress/sql/union.sql
--- postgresql-head-20131017/src/test/regress/sql/union.sql	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/sql/union.sql	2013-10-17 11:56:22.000000000 +1100
@@ -89,6 +89,23 @@
 SELECT TRIM(TRAILING FROM f1) FROM CHAR_TBL
 ORDER BY 1;
 
+--NVARCHAR specific
+SELECT f1 AS three FROM NVARCHAR_TBL
+UNION
+SELECT CAST(f1 AS nvarchar) FROM NCHAR_TBL
+ORDER BY 1;
+
+SELECT f1 AS eight FROM NVARCHAR_TBL
+UNION ALL
+SELECT f1 FROM NCHAR_TBL;
+
+SELECT f1 AS five FROM TEXT_TBL
+UNION
+SELECT f1 FROM NVARCHAR_TBL
+UNION
+SELECT TRIM(TRAILING FROM f1) FROM NCHAR_TBL
+ORDER BY 1;
+
 --
 -- INTERSECT and EXCEPT
 --
diff -uNr postgresql-head-20131017/src/test/regress/sql/update.sql postgresql-head-20131017-nchar/src/test/regress/sql/update.sql
--- postgresql-head-20131017/src/test/regress/sql/update.sql	2013-10-17 04:22:55.000000000 +1100
+++ postgresql-head-20131017-nchar/src/test/regress/sql/update.sql	2013-10-17 11:56:22.000000000 +1100
@@ -5,14 +5,28 @@
 CREATE TABLE update_test (
     a   INT DEFAULT 10,
     b   INT,
-    c   TEXT
+    c   TEXT,
+    d   nchar(10),
+    e   nvarchar
 );
 
-INSERT INTO update_test VALUES (5, 10, 'foo');
+INSERT INTO update_test VALUES (5, 10, 'foo', 'a', 'a');
 INSERT INTO update_test(b, a) VALUES (15, 10);
 
 SELECT * FROM update_test;
 
+UPDATE update_test SET d='b', e='b';
+
+SELECT * FROM update_test;
+
+UPDATE update_test SET d='c' where e='b';
+
+SELECT * FROM update_test;
+
+UPDATE update_test SET d=N'e' where e=N'b';
+
+SELECT * FROM update_test;
+
 UPDATE update_test SET a = DEFAULT, b = DEFAULT;
 
 SELECT * FROM update_test;
#38Albe Laurenz
laurenz.albe@wien.gv.at
In reply to: Arulappan, Arul Shaji (#37)
Re: UTF8 national character data type support WIP patch and list of open issues.

Arul Shaji Arulappan wrote:

Attached is a patch that implements the first set of changes discussed
in this thread originally. They are:

(i) Implements NCHAR/NVARCHAR as distinct data types, not as synonyms so
that:
- psql \d can display the user-specified data types.
- pg_dump/pg_dumpall can output NCHAR/NVARCHAR columns as-is,
not as CHAR/VARCHAR.
- Groundwork to implement additional features for NCHAR/NVARCHAR
in the future (For eg: separate encoding for nchar columns).
(ii) Support for NCHAR/NVARCHAR in ECPG
(iii) Documentation changes to reflect the new data type

If I understood the discussion correctly, the use case is that
there are advantages to having a database encoding different
from UTF-8, but you'd still want some UTF-8 columns.

Wouldn't it be a better design to allow specifying the encoding
per column? That would give you more flexibility.
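A per-column design might follow the SQL standard's column-level character set clause. A hypothetical sketch (this syntax is illustrative only and is not accepted by stock PostgreSQL):

```sql
CREATE TABLE mixed_encodings (
    body  text,                         -- stored in the database encoding
    notes text CHARACTER SET "UTF8"     -- hypothetical per-column override
);
```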

I know that NCHAR/NVARCHAR is SQL standard, but I still think
that it is a wart.

Yours,
Laurenz Albe

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#39MauMau
maumau307@gmail.com
In reply to: Albe Laurenz (#38)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Albe Laurenz" <laurenz.albe@wien.gv.at>

If I understood the discussion correctly, the use case is that
there are advantages to having a database encoding different
from UTF-8, but you'd still want some UTF-8 columns.

Wouldn't it be a better design to allow specifying the encoding
per column? That would give you more flexibility.

Yes, you are right. In the previous discussion:

- That would be nice if available, but it is hard to implement multiple
encodings in one database.
- Some people (I'm not sure whether many or few) are using NCHAR/NVARCHAR in
other DBMSs. To invite them to PostgreSQL, it's important to support the
national character feature syntactically and document it in the manual. This
is the first step.
- As the second step, we can implement multiple encodings in one database.
According to the SQL standard, "NCHAR(n)" is equivalent to "CHAR(n)
CHARACTER SET cs", where cs is an implementation-defined character set.

Regards
MauMau


#40Albe Laurenz
laurenz.albe@wien.gv.at
In reply to: MauMau (#39)
Re: UTF8 national character data type support WIP patch and list of open issues.

MauMau wrote:

From: "Albe Laurenz" <laurenz.albe@wien.gv.at>

If I understood the discussion correctly, the use case is that
there are advantages to having a database encoding different
from UTF-8, but you'd still want some UTF-8 columns.

Wouldn't it be a better design to allow specifying the encoding
per column? That would give you more flexibility.

Yes, you are right. In the previous discussion:

- That would be nice if available, but it is hard to implement multiple
encodings in one database.

Granted.

- Some people (I'm not sure whether many or few) are using NCHAR/NVARCHAR in
other DBMSs. To invite them to PostgreSQL, it's important to support the
national character feature syntactically and document it in the manual. This
is the first step.

I looked into the Standard, and it does not have NVARCHAR.
The type is called NATIONAL CHARACTER VARYING, NATIONAL CHAR VARYING
or NCHAR VARYING.

I guess that the goal of this patch is to support Oracle syntax.
But anybody trying to port CREATE TABLE statements from Oracle
is already exposed to enough incompatibilities that the difference between
NVARCHAR and NCHAR VARYING will not be the reason to reject PostgreSQL.

In other words, I doubt that introducing the nonstandard NVARCHAR
will have more benefits than drawbacks (new reserved word).

Regarding the Standard compliant names of these data types, PostgreSQL
already supports those. Maybe some documentation would help.

- As the second step, we can implement multiple encodings in one database.
According to the SQL standard, "NCHAR(n)" is equivalent to "CHAR(n)
CHARACTER SET cs", where cs is an implementation-defined character set.

That second step would definitely have benefits.

But I don't think that this requires the first step that your patch
implements, it is in fact orthogonal.

I don't think that there is any need to change NCHAR even if we
get per-column encoding, it is just syntactic sugar to support
SQL Feature F421.

Why not tackle the second step first?

Yours,
Laurenz Albe


#41MauMau
maumau307@gmail.com
In reply to: Albe Laurenz (#40)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Albe Laurenz" <laurenz.albe@wien.gv.at>

I looked into the Standard, and it does not have NVARCHAR.
The type is called NATIONAL CHARACTER VARYING, NATIONAL CHAR VARYING
or NCHAR VARYING.

Ouch, that's just a mistake in my mail. You are correct.

I guess that the goal of this patch is to support Oracle syntax.

But anybody trying to port CREATE TABLE statements from Oracle
is already exposed to enough incompatibilities that the difference between
NVARCHAR and NCHAR VARYING will not be the reason to reject PostgreSQL.
In other words, I doubt that introducing the nonstandard NVARCHAR
will have more benefits than drawbacks (new reserved word).

Agreed. But I'm in favor of supporting other DBMSs' syntax if it doesn't
complicate the spec or implementation too much, because it can help users
migrate to PostgreSQL. I understand PostgreSQL has made such efforts before,
like PL/pgSQL (which is similar to PL/SQL), the text data type, AS in SELECT
statements, etc.

But I don't think that this requires the first step that your patch
implements, it is in fact orthogonal.

(It's not "my" patch.)

Regarding the Standard compliant names of these data types, PostgreSQL
already supports those. Maybe some documentation would help.

I don't think that there is any need to change NCHAR even if we
get per-column encoding, it is just syntactic sugar to support
SQL Feature F421.

Maybe so. I guess the distinct type for NCHAR is for future extension and
user friendliness. As one user, I expect to get "national character"
instead of "char character set xxx" as output of psql \d and pg_dump when I
specified "national character" in DDL. In addition, that makes it easy to
use the pg_dump output for importing data to other DBMSs for some reason.

Regards
MauMau


#42Peter Eisentraut
peter_e@gmx.net
In reply to: Arulappan, Arul Shaji (#37)
Re: UTF8 national character data type support WIP patch and list of open issues.

On 11/5/13, 1:04 AM, Arulappan, Arul Shaji wrote:

Implements NCHAR/NVARCHAR as distinct data types, not as synonyms

If, per SQL standard, NCHAR(x) is equivalent to CHAR(x) CHARACTER SET
"cs", then for some "cs", NCHAR(x) must be the same as CHAR(x).
Therefore, an implementation as separate data types is wrong.


#43Robert Haas
robertmhaas@gmail.com
In reply to: Peter Eisentraut (#42)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Tue, Nov 5, 2013 at 5:15 PM, Peter Eisentraut <peter_e@gmx.net> wrote:

On 11/5/13, 1:04 AM, Arulappan, Arul Shaji wrote:

Implements NCHAR/NVARCHAR as distinct data types, not as synonyms

If, per SQL standard, NCHAR(x) is equivalent to CHAR(x) CHARACTER SET
"cs", then for some "cs", NCHAR(x) must be the same as CHAR(x).
Therefore, an implementation as separate data types is wrong.

Interesting.

Since the point doesn't seem to be getting through, let me try to be
more clear: we're not going to accept any form of this patch. A patch
that makes some progress toward actually coping with multiple
encodings in the same database would be very much worth considering,
but adding compatible syntax with incompatible semantics is not of
interest to the PostgreSQL project. We have had this debate on many
other topics in the past and will no doubt have it again in the
future, but the outcome is always the same.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#44MauMau
maumau307@gmail.com
In reply to: Robert Haas (#43)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Robert Haas" <robertmhaas@gmail.com>

On Tue, Nov 5, 2013 at 5:15 PM, Peter Eisentraut <peter_e@gmx.net> wrote:

On 11/5/13, 1:04 AM, Arulappan, Arul Shaji wrote:

Implements NCHAR/NVARCHAR as distinct data types, not as synonyms

If, per SQL standard, NCHAR(x) is equivalent to CHAR(x) CHARACTER SET
"cs", then for some "cs", NCHAR(x) must be the same as CHAR(x).
Therefore, an implementation as separate data types is wrong.

Since the point doesn't seem to be getting through, let me try to be
more clear: we're not going to accept any form of this patch. A patch
that makes some progress toward actually coping with multiple
encodings in the same database would be very much worth considering,
but adding compatible syntax with incompatible semantics is not of
interest to the PostgreSQL project. We have had this debate on many
other topics in the past and will no doubt have it again in the
future, but the outcome is always the same.

It doesn't seem that there are any semantics incompatible with the SQL
standard, as follows:
- In the first step, "cs" is the database encoding, which is used for
char/varchar/text.
- In the second (or final) step, where multiple encodings per database are
supported, "cs" is the national character encoding which is specified with
CREATE DATABASE ... NATIONAL CHARACTER ENCODING cs. If the NATIONAL CHARACTER
ENCODING clause is omitted, "cs" is the database encoding as in step 1.

Let me repeat myself: I think the biggest and immediate issue is that
PostgreSQL does not support national character types at least officially.
"Officially" means the description in the manual. So I don't have strong
objection against the current (hidden) implementation of nchar types in
PostgreSQL which are just synonyms, as long as the official support is
documented. Serious users don't want to depend on hidden features.

However, doesn't the current synonym approach have any problems? Wouldn't
it produce any trouble in the future? If we treat nchar as char, we lose
the fact that the user requested nchar. Can we lose the fact so easily and
produce an irreversible result as below?

--------------------------------------------------
Maybe so. I guess the distinct type for NCHAR is for future extension and
user friendliness. As one user, I expect to get "national character"
instead of "char character set xxx" as output of psql \d and pg_dump when I
specified "national character" in DDL. In addition, that makes it easy to
use the pg_dump output for importing data to other DBMSs for some reason.
--------------------------------------------------

Regards
MauMau


#45Albe Laurenz
laurenz.albe@wien.gv.at
In reply to: MauMau (#44)
Re: UTF8 national character data type support WIP patch and list of open issues.

MauMau wrote:

Let me repeat myself: I think the biggest and immediate issue is that
PostgreSQL does not support national character types at least officially.
"Officially" means the description in the manual. So I don't have strong
objection against the current (hidden) implementation of nchar types in
PostgreSQL which are just synonyms, as long as the official support is
documented. Serious users don't want to depend on hidden features.

I agree with you there.
Actually it is somewhat documented in
http://www.postgresql.org/docs/9.3/static/features-sql-standard.html
as "F421", but that requires that you read the SQL standard.

However, doesn't the current synonym approach have any problems? Wouldn't
it produce any trouble in the future? If we treat nchar as char, we lose
the fact that the user requested nchar. Can we lose the fact so easily and
produce an irreversible result as below?

I don't think that it is a problem.
According to the SQL standard, the user requested a CHAR or VARCHAR with
an encoding of the choice of the DBMS.
PostgreSQL chooses the database encoding.

In a way, it is similar to using the "data type" serial. The column will be
displayed as "integer", and the information that it was a serial can
only be inferred from the DEFAULT value.
It seems that this is working fine and does not cause many problems,
so I don't see why things should be different here.
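Laurenz's serial analogy is easy to check in psql; a minimal session (output paraphrased from a stock installation):

```sql
CREATE TABLE t (id serial);
-- \d t shows the column as plain "integer"; the only trace of "serial"
-- is the default expression:
--   id | integer | not null default nextval('t_id_seq'::regclass)
```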

Again, for serial the behaviour is well documented, so that seconds
your request for more documentation.
Would you like to write a patch for that?

Yours,
Laurenz Albe


#46MauMau
maumau307@gmail.com
In reply to: Albe Laurenz (#45)
Re: UTF8 national character data type support WIP patch and list of open issues.

From: "Albe Laurenz" <laurenz.albe@wien.gv.at>
In a way, it is similar to using the "data type" serial. The column will be
displayed as "integer", and the information that it was a serial can
only be inferred from the DEFAULT value.
It seems that this is working fine and does not cause many problems,
so I don't see why things should be different here.

Yes, I agree with you in that serial being a synonym is almost no problem.
But that's because serial is not an SQL-standard data type but a type unique
to PostgreSQL.

On the other hand, nchar is an established data type in the SQL standard. I
think most people will expect to get "nchar" as output from psql \d and
pg_dump as they specified in DDL. If they get "char" as output for "nchar"
columns from pg_dump, wouldn't they get in trouble if they want to import
schema/data from PostgreSQL to other database products? The documentation
for pg_dump says that pg_dump pays attention to easing migrating to other
DBMSs. I like this idea and want to respect this.

http://www.postgresql.org/docs/current/static/app-pgdump.html
--------------------------------------------------
Script files can be used to reconstruct the database even on other machines
and other architectures; with some modifications, even on other SQL database
products.
...
--use-set-session-authorization
Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER
OWNER commands to determine object ownership. This makes the dump more
standards-compatible, ...
--------------------------------------------------

Regards
MauMau


#48Tom Lane
tgl@sss.pgh.pa.us
In reply to: MauMau (#47)
Re: UTF8 national character data type support WIP patch and list of open issues.

"MauMau" <maumau307@gmail.com> writes:

On the other hand, nchar is an established data type in the SQL standard. I
think most people will expect to get "nchar" as output from psql \d and
pg_dump as they specified in DDL.

This argument seems awfully weak. You've been able to say
create table nt (nf national character varying(22));
in Postgres since around 1997, but I don't recall one single bug report
about how that is displayed as just "character varying(22)".
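Tom's observation is reproducible on any stock installation:

```sql
CREATE TABLE nt (nf national character varying(22));  -- parses fine
-- \d nt reports the column simply as "character varying(22)";
-- the NATIONAL keyword leaves no trace in the catalogs.
```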

The other big problem with this line of argument is that you're trying
to claim better spec compliance for what is at best a rather narrow
interpretation with really minimal added functionality. (In fact,
until you have a solution for the problem that incoming and outgoing
data must be in the database's primary encoding, you don't actually have
*any* added functionality, just syntactic sugar that does nothing useful.)
Unless you can demonstrate by lawyerly reading of the spec that the spec
requires exactly the behavior this patch implements, you do not have a leg
to stand on here. But you can't demonstrate that, because it doesn't.

I'd be much more impressed by seeing a road map for how we get to a
useful amount of added functionality --- which, to my mind, would be
the ability to support N different encodings in one database, for N>2.
But even if you think N=2 is sufficient, we haven't got a road map, and
commandeering spec-mandated syntax for an inadequate feature doesn't seem
like a good first step. It'll just make our backwards-compatibility
problems even worse when somebody does come up with a real solution.

regards, tom lane


#49Tatsuo Ishii
ishii@postgresql.org
In reply to: Tom Lane (#48)
Re: UTF8 national character data type support WIP patch and list of open issues.

I'd be much more impressed by seeing a road map for how we get to a
useful amount of added functionality --- which, to my mind, would be
the ability to support N different encodings in one database, for N>2.
But even if you think N=2 is sufficient, we haven't got a road map, and
commandeering spec-mandated syntax for an inadequate feature doesn't seem
like a good first step. It'll just make our backwards-compatibility
problems even worse when somebody does come up with a real solution.

I have been thinking about this for years and I think the key idea for
this is, implementing "universal encoding". The universal encoding
should have following characteristics to implement N>2 encoding in a
database.

1) no loss of round trip encoding conversion

2) no mapping table is necessary to convert from/to existing encodings

Once we implement the universal encoding, other problem such as
"pg_database with multiple encoding problem" can be solved easily.

Currently there's no such universal encoding in the universe; I
think the only way is to invent it ourselves.

At this point the design of the encoding I have in mind is,

1) 1-byte encoding identifier + 7-byte body (totally 8 bytes). The
   encoding identifier's value is between 0x80 and 0xff and is
   assigned to existing encodings such as UTF-8, ASCII, EUC-JP and so
   on. The encodings should be limited to "database safe"
   encodings. The encoding body is the raw character represented in the
   existing encoding. This form is called a "word".

2) We also have a "multibyte" representation of the universal
   encoding. The first byte represents the length of the multibyte
   character (similar to the first byte of UTF-8). The second byte is
   the encoding identifier explained above. The rest of the
   character is the same as above.

#1 and #2 are logically the same and can be converted to each other, and we
can use either of them whenever we like.

Form #1 is easy to handle because each word has a fixed length (8 bytes),
so it would probably be used for temporary data in memory. The second form
saves space and would be used in the stored data itself.
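The two forms can be sketched in a few lines of Python. This is a minimal illustration under the assumptions above; the concrete identifier assignments in `ENC_IDS` and the zero-padding of the fixed-width word are my own guesses, not part of the proposal:

```python
# Sketch of the proposed "universal encoding". Assumptions beyond the mail:
# concrete identifier values and zero-padding of the 8-byte word.

ENC_IDS = {"UTF-8": 0x80, "ASCII": 0x81, "EUC-JP": 0x82}  # hypothetical ids

def to_word(enc_name, raw):
    """Form #1: fixed 8-byte word = identifier byte + raw bytes, padded."""
    if len(raw) > 7:
        raise ValueError("character body longer than 7 bytes")
    return bytes([ENC_IDS[enc_name]]) + raw.ljust(7, b"\x00")

def to_multibyte(enc_name, raw):
    """Form #2: total length byte, identifier byte, then the raw bytes."""
    return bytes([2 + len(raw), ENC_IDS[enc_name]]) + raw

def word_to_multibyte(word):
    """The two forms are logically the same; convert #1 into #2."""
    raw = word[1:].rstrip(b"\x00")
    return bytes([2 + len(raw), word[0]]) + raw

ch = "あ".encode("euc_jp")        # one EUC-JP character kept as raw bytes
word = to_word("EUC-JP", ch)     # fixed width, handy for in-memory data
mb = to_multibyte("EUC-JP", ch)  # variable width, saves space on disk
assert len(word) == 8
assert word_to_multibyte(word) == mb
```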

If we want to have a table encoded in an encoding different from the
database encoding, the table is encoded in the universal
encoding. pg_class should remember the fact to avoid the confusion
about what encoding a table is using. I think majority of tables in a
database uses the same encoding as the database encoding. Only a few
tables want to have different encoding. The design pushes the penalty
to such minorities.

If we need to join two tables which have different encoding, we need
to convert them into the same encoding (this should succeed if the
encodings are "compatible"). If fails, the join will fail too.

We could extend the technique above to a design which allows each
column to have a different encoding.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


#50Peter Eisentraut
peter_e@gmx.net
In reply to: Tatsuo Ishii (#49)
Re: UTF8 national character data type support WIP patch and list of open issues.

On 11/12/13, 1:57 AM, Tatsuo Ishii wrote:

Currently there's no such universal encoding in the universe; I
think the only way is to invent it ourselves.

I think ISO 2022 is something in that direction, but it's not
ASCII-safe, AFAICT.


#51Martijn van Oosterhout
kleptog@svana.org
In reply to: Tatsuo Ishii (#49)
Re: UTF8 national character data type support WIP patch and list of open issues.

On Tue, Nov 12, 2013 at 03:57:52PM +0900, Tatsuo Ishii wrote:

I have been thinking about this for years and I think the key idea for
this is, implementing "universal encoding". The universal encoding
should have following characteristics to implement N>2 encoding in a
database.

1) no loss of round trip encoding conversion

2) no mapping table is necessary to convert from/to existing encodings

Once we implement the universal encoding, other problem such as
"pg_database with multiple encoding problem" can be solved easily.

Isn't this essentially what the MULE internal encoding is?

Currently there's no such universal encoding in the universe; I
think the only way is to invent it ourselves.

This sounds like a terrible idea. In the future people are only going
to want more advanced text functions, regular expressions, and
indexing, and inventing encodings that don't exist anywhere else seems
like a way to create a lot of work for little benefit.

A better idea, it seems to me, is to embed the non-round-trippable
characters in the private use area of the Unicode character set (if
postgres is configured appropriately). In other words, adjust the
mapping tables on demand and voila.
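
The "custom character part" suggested above is presumably Unicode's
Private Use Area; here is a toy sketch of the embedding (my own, with
an invented base code point and mapping table, not anything PostgreSQL
does): a byte with no mapping entry is parked at a PUA code point, so
the round trip to Unicode and back loses nothing.

```python
PUA_BASE = 0xE000  # illustrative choice inside the BMP Private Use Area

def to_unicode(data: bytes, table: dict[int, str]) -> str:
    # Mapped bytes go through the table; unmapped bytes land in the PUA.
    return "".join(table.get(b, chr(PUA_BASE + b)) for b in data)

def from_unicode(text: str, table: dict[int, str]) -> bytes:
    rev = {v: k for k, v in table.items()}
    out = bytearray()
    for ch in text:
        if ch in rev:
            out.append(rev[ch])
        else:
            # Recover the original byte from its PUA slot.
            out.append(ord(ch) - PUA_BASE)
    return bytes(out)

table = {0x41: "A"}        # toy mapping table: only 0x41 has a mapping
src = bytes([0x41, 0x80])  # 0x80 has no mapping entry
assert from_unicode(to_unicode(src, table), table) == src  # round trip
```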

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/

He who writes carelessly confesses thereby at the very outset that he does
not attach much importance to his own thoughts.

-- Arthur Schopenhauer

#52Tom Lane
tgl@sss.pgh.pa.us
In reply to: Martijn van Oosterhout (#51)
Re: UTF8 national character data type support WIP patch and list of open issues.

Martijn van Oosterhout <kleptog@svana.org> writes:

On Tue, Nov 12, 2013 at 03:57:52PM +0900, Tatsuo Ishii wrote:

Once we implement the universal encoding, other problem such as
"pg_database with multiple encoding problem" can be solved easily.

Isn't this essentially what the MULE internal encoding is?

MULE is completely evil. It has N different encodings for the same
character, not to mention no support code available.

Currently no such universal encoding exists, so I think the only way
is to invent one ourselves.

This sounds like a terrible idea. In the future people are only going
to want more advanced text functions, regular expressions, and
indexing, and inventing encodings that don't exist anywhere else seems
like a way to create a lot of work for little benefit.

Agreed.

A better idea, it seems to me, is to embed the non-round-trippable
characters in the private use area of the Unicode character set (if
postgres is configured appropriately). In other words, adjust the
mapping tables on demand and voila.

From the standpoint of what will happen with existing library code
(like strcoll), I'm not sure it's all that easy.

regards, tom lane


#53Tatsuo Ishii
ishii@postgresql.org
In reply to: Martijn van Oosterhout (#51)
Re: UTF8 national character data type support WIP patch and list of open issues.

Isn't this essentially what the MULE internal encoding is?

No. MULE is not powerful enough, and is overly complicated, for
dealing with different encodings (character sets).

Currently no such universal encoding exists, so I think the only way
is to invent one ourselves.

This sounds like a terrible idea. In the future people are only going
to want more advanced text functions, regular expressions, and
indexing, and inventing encodings that don't exist anywhere else seems
like a way to create a lot of work for little benefit.

That is probably a misunderstanding. We don't need to modify existing
text handling modules such as text functions, regular expressions,
indexing, etc. We just convert from the "universal" encoding X to the
original encoding before calling them. The process is pretty easy and
fast because it just requires skipping the "encoding identifier" and
"encoding length" parts.

Basically, encoding X should be used in the lower layer modules of
PostgreSQL, and the higher layer modules, such as those living in
src/backend/utils/adt, should not be aware of it.
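
As a sketch of the frame layout described above (my own guess at the
details; the identifier table and the one-byte header fields are
invented for illustration, not part of any proposal in this thread):
each value carries an encoding identifier and a length, and "unwrap"
is just slicing off that header before handing the payload, still in
its original encoding, to higher layers.

```python
# Hypothetical frame: [1-byte encoding id][1-byte payload length][payload]
ENC_IDS = {1: "euc_jp", 2: "koi8_r"}  # invented identifier table

def wrap(enc_id: int, payload: bytes) -> bytes:
    """Prefix a raw string (in its original encoding) with the header."""
    return bytes([enc_id, len(payload)]) + payload

def unwrap(frame: bytes) -> tuple[str, bytes]:
    """Skip the header; no mapping-table lookup, just two reads and a slice."""
    enc_id, length = frame[0], frame[1]
    return ENC_IDS[enc_id], frame[2:2 + length]

frame = wrap(2, "текст".encode("koi8_r"))
enc, payload = unwrap(frame)
assert enc == "koi8_r"
assert payload.decode(enc) == "текст"  # original bytes, untouched
```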

A better idea, it seems to me, is to embed the non-round-trippable
characters in the private use area of the Unicode character set (if
postgres is configured appropriately). In other words, adjust the
mapping tables on demand and voila.

Using Unicode incurs encoding conversion overhead because it needs to
look up mapping tables. That would be a huge handicap for large data
sets, and is what I want to avoid in the first place.


#54Tatsuo Ishii
ishii@postgresql.org
In reply to: Tom Lane (#52)
Re: UTF8 national character data type support WIP patch and list of open issues.

MULE is completely evil.

It has N different encodings for the same
character,

What's wrong with that? That is exactly what it aims at in the first place.

not to mention no support code available.



#55Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#54)
Re: UTF8 national character data type support WIP patch and list of open issues.

Tatsuo Ishii <ishii@postgresql.org> writes:

MULE is completely evil.
It has N different encodings for the same character,

What's wrong with that? That is exactly what it aims at in the first place.

It greatly complicates comparisons --- at least, if you'd like to preserve
the principle that strings that appear the same are equal.

regards, tom lane


#56Tatsuo Ishii
ishii@postgresql.org
In reply to: Tom Lane (#55)
Re: UTF8 national character data type support WIP patch and list of open issues.

Tatsuo Ishii <ishii@postgresql.org> writes:

MULE is completely evil.
It has N different encodings for the same character,

What's wrong with that? That is exactly what it aims at in the first place.

It greatly complicates comparisons --- at least, if you'd like to preserve
the principle that strings that appear the same are equal.

You don't need to worry about that, because as far as I know there is
no place in PostgreSQL where a MULE-encoded text mixes multiple
encodings.

BTW, assigning the same character to multiple code points is pretty
common in many character sets (Unicode, for example).


#57Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#56)
Re: UTF8 national character data type support WIP patch and list of open issues.

Tatsuo Ishii <ishii@postgresql.org> writes:

BTW, assigning the same character to multiple code points is pretty
common in many character sets (Unicode, for example).

This is widely considered a security bug; read section 10 in RFC 3629 (the
definition of UTF8), and search the CVE database a bit if you still doubt
it's a threat. I'm going to push back very hard on any suggestion that
Postgres should build itself around a text representation with that kind
of weakness designed in.
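
The canonical instance of the problem RFC 3629 section 10 describes is
the overlong encoding. A quick sketch of my own (not from the thread):
'/' is 0x2F, but a lax decoder would also accept the two-byte sequence
0xC0 0xAF for the same character, which is exactly the "two spellings,
one character" hole that path-filter bypasses have exploited. A strict
decoder must reject the overlong form.

```python
def decodes_strictly(data: bytes) -> bool:
    """True iff data is well-formed UTF-8 under Python's strict codec."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

assert decodes_strictly(b"\x2f")          # canonical one-byte '/'
assert not decodes_strictly(b"\xc0\xaf")  # overlong '/': rejected
```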

regards, tom lane

[1]: http://tools.ietf.org/html/rfc3629#section-10


#58Chapman Flack
chap@anastigmatix.net
In reply to: Robert Haas (#18)
Re: UTF8 national character data type support WIP patch and list of open issues.

Hi,

Although this is a ten-year-old message, it was the one I found quickly
when looking to see what the current state of play on this might be.

On 2013-09-20 14:22, Robert Haas wrote:

Hmm. So under that design, a database could support up to a total of
two character sets, the one that you get when you say 'foo' and the
other one that you get when you say n'foo'.

I guess we could do that, but it seems a bit limited. If we're going
to go to the trouble of supporting multiple character sets, why not
support an arbitrary number instead of just two?

Because that old thread came to an end without mentioning how the
standard approaches this, it seemed worth adding a note, just to
complete the record.

In the draft of the standard I'm looking at (which is also around a
decade old), n'foo' is nothing but a handy shorthand for _csname'foo'
(which is a syntax we do not accept) for some particular csname that
was chosen when setting up the db.

So really, the standard contemplates letting you have columns of
arbitrary different charsets (CHAR(x) CHARACTER SET csname), and
literals of arbitrary charsets _csname'foo'. Then, as a bit of
sugar, you get to pick which two of those charsets you'd like
to have easy shorter ways of writing, 'foo' or n'foo',
CHAR or NCHAR.

The grammar for csname is kind of funky. It can be nothing but
<SQL language identifier>, which has the nice restricted form
/[A-Za-z][A-Za-z0-9_]*/. But it can also be schema-qualified,
with the schema of course being a full-fledged <identifier>.

So yeah, to fully meet this part of the standard, the parser'd
have to know that
_U&"I am a schema nameZ0021" UESCAPE 'Z'/*hi!*/.LATIN1'foo'
is a string literal, expressing foo, in a character set named
LATIN1, in some cutely-named schema.

Never a dull moment.
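
As a side note (my sketch, not Chapman's), the simple unqualified form
of that literal is at least easy to recognize; the regex below handles
only csname as a bare <SQL language identifier> and a body with ''
escapes, deliberately ignoring the schema-qualified and UESCAPE
variants described above.

```python
import re

# _csname'body', where csname matches /[A-Za-z][A-Za-z0-9_]*/ and the
# body may contain doubled quotes ('') as escaped single quotes.
LITERAL = re.compile(r"_(?P<csname>[A-Za-z][A-Za-z0-9_]*)'(?P<body>(?:[^']|'')*)'")

m = LITERAL.fullmatch("_LATIN1'foo'")
assert m and m.group("csname") == "LATIN1" and m.group("body") == "foo"

# Without the leading underscore it is not this kind of literal.
assert LITERAL.fullmatch("LATIN1'foo'") is None
```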

Regards,
-Chap