JSON output functions.

Started by Andrew Dunstan almost 14 years ago · 7 messages
#1Andrew Dunstan
andrew.dunstan@pgexperts.com

I've just been running some timings of my JSON-producing functions, in
particular array_to_json, and comparing them with the current
XML-producing functions. Here's a typical result:

andrew=# explain analyse select array_to_json(array_agg(q),true)
from (select * from pg_attribute) q;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
Aggregate (cost=70.77..70.78 rows=1 width=203) (actual
time=38.919..38.920 rows=1 loops=1)
-> Seq Scan on pg_attribute (cost=0.00..65.01 rows=2301
width=203) (actual time=0.007..1.454 rows=2253 loops=1)
Total runtime: 39.300 ms
(3 rows)

Time: 62.753 ms
andrew=# explain analyse select table_to_xml('pg_attribute',
true,false,'');
QUERY PLAN
----------------------------------------------------------------------------------------
Result (cost=0.00..0.26 rows=1 width=0) (actual
time=519.170..526.737 rows=1 loops=1)
Total runtime: 526.780 ms
(2 rows)

As you can see, producing the JSON is a heck of a lot faster than
producing the equivalent XML. I had thought it might be necessary for
good performance to cache the type output info in the FunctionCallInfo
structure, rather than fetch it for each Datum we output, but that
doesn't seem to be so. For now I'm inclined not to proceed with that,
and leave it as an optimization to be considered later if necessary.
Thoughts?

cheers

andrew

#2Abhijit Menon-Sen
ams@toroid.org
In reply to: Andrew Dunstan (#1)
Re: JSON output functions.

At 2012-02-01 18:48:28 -0500, andrew.dunstan@pgexperts.com wrote:

For now I'm inclined not to proceed with that, and leave it as an
optimization to be considered later if necessary. Thoughts?

I agree, there doesn't seem to be a pressing need to do it now.

-- ams

#3Andrew Dunstan
andrew@dunslane.net
In reply to: Abhijit Menon-Sen (#2)
1 attachment(s)
Re: JSON output functions.

On 02/02/2012 04:35 AM, Abhijit Menon-Sen wrote:

At 2012-02-01 18:48:28 -0500, andrew.dunstan@pgexperts.com wrote:

For now I'm inclined not to proceed with that, and leave it as an
optimization to be considered later if necessary. Thoughts?

I agree, there doesn't seem to be a pressing need to do it now.

OK, here's my final version of the patch for constructor functions. If
there's no further comment I'll go with this.

cheers

andrew

Attachments:

json-constructors.patch (text/x-patch)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index ec14004..22adcb8 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -9617,6 +9617,65 @@ table2-mapping
   </sect2>
  </sect1>
 
+ <sect1 id="functions-json">
+  <title>JSON functions</title>
+
+  <indexterm zone="datatype-json">
+	<primary>JSON</primary>
+	<secondary>Functions and operators</secondary>
+  </indexterm>
+
+  <para>
+    This section describes the functions that are available for creating
+    JSON (see <xref linkend="datatype-json">) data.
+  </para>
+
+  <table id="functions-json-table">
+    <title>JSON Support Functions</title>
+    <tgroup cols="4">
+     <thead>
+      <row>
+       <entry>Function</entry>
+       <entry>Description</entry>
+       <entry>Example</entry>
+       <entry>Example Result</entry>
+      </row>
+     </thead>
+     <tbody>
+      <row>
+       <entry>
+         <indexterm>
+          <primary>array_to_json</primary>
+         </indexterm>
+         <literal>array_to_json(anyarray [, pretty_bool])</literal>
+       </entry>
+       <entry>
+         Returns the array as JSON. A PostgreSQL multi-dimensional array 
+         becomes a JSON array of arrays. Line feeds will be added between 
+         dimension 1 elements if pretty_bool is true.
+       </entry>
+       <entry><literal>array_to_json('{{1,5},{99,100}}'::int[])</literal></entry>
+       <entry><literal>[[1,5],[99,100]]</literal></entry>
+      </row>
+      <row>
+       <entry>
+         <indexterm>
+          <primary>row_to_json</primary>
+         </indexterm>
+         <literal>row_to_json(record [, pretty_bool])</literal>
+       </entry>
+       <entry>
+         Returns the row as JSON. Line feeds will be added between level 
+         1 elements if pretty_bool is true.
+       </entry>
+       <entry><literal>row_to_json(row(1,'foo'))</literal></entry>
+       <entry><literal>{"f1":1,"f2":"foo"}</literal></entry>
+      </row>
+     </tbody>
+    </tgroup>
+   </table>
+
+ </sect1>
 
  <sect1 id="functions-sequence">
   <title>Sequence Manipulation Functions</title>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index e35ac59..e57580e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -24,6 +24,7 @@
 #include "rewrite/rewriteHandler.h"
 #include "tcop/tcopprot.h"
 #include "utils/builtins.h"
+#include "utils/json.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/snapmgr.h"
@@ -99,7 +100,6 @@ static void ExplainDummyGroup(const char *objtype, const char *labelname,
 static void ExplainXMLTag(const char *tagname, int flags, ExplainState *es);
 static void ExplainJSONLineEnding(ExplainState *es);
 static void ExplainYAMLLineStarting(ExplainState *es);
-static void escape_json(StringInfo buf, const char *str);
 static void escape_yaml(StringInfo buf, const char *str);
 
 
@@ -2319,51 +2319,6 @@ ExplainYAMLLineStarting(ExplainState *es)
 }
 
 /*
- * Produce a JSON string literal, properly escaping characters in the text.
- */
-static void
-escape_json(StringInfo buf, const char *str)
-{
-	const char *p;
-
-	appendStringInfoCharMacro(buf, '\"');
-	for (p = str; *p; p++)
-	{
-		switch (*p)
-		{
-			case '\b':
-				appendStringInfoString(buf, "\\b");
-				break;
-			case '\f':
-				appendStringInfoString(buf, "\\f");
-				break;
-			case '\n':
-				appendStringInfoString(buf, "\\n");
-				break;
-			case '\r':
-				appendStringInfoString(buf, "\\r");
-				break;
-			case '\t':
-				appendStringInfoString(buf, "\\t");
-				break;
-			case '"':
-				appendStringInfoString(buf, "\\\"");
-				break;
-			case '\\':
-				appendStringInfoString(buf, "\\\\");
-				break;
-			default:
-				if ((unsigned char) *p < ' ')
-					appendStringInfo(buf, "\\u%04x", (int) *p);
-				else
-					appendStringInfoCharMacro(buf, *p);
-				break;
-		}
-	}
-	appendStringInfoCharMacro(buf, '\"');
-}
-
-/*
  * YAML is a superset of JSON; unfortuantely, the YAML quoting rules are
  * ridiculously complicated -- as documented in sections 5.3 and 7.3.3 of
  * http://yaml.org/spec/1.2/spec.html -- so we chose to just quote everything.
diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
index cbb81d1..60addf2 100644
--- a/src/backend/utils/adt/json.c
+++ b/src/backend/utils/adt/json.c
@@ -13,11 +13,17 @@
  */
 #include "postgres.h"
 
+#include "catalog/pg_type.h"
+#include "executor/spi.h"
 #include "lib/stringinfo.h"
 #include "libpq/pqformat.h"
 #include "mb/pg_wchar.h"
+#include "parser/parse_coerce.h"
+#include "utils/array.h"
 #include "utils/builtins.h"
+#include "utils/lsyscache.h"
 #include "utils/json.h"
+#include "utils/typcache.h"
 
 typedef enum
 {
@@ -72,8 +78,11 @@ static void json_lex_number(JsonLexContext *lex, char *s);
 static void report_parse_error(JsonParseStack *stack, JsonLexContext *lex);
 static void report_invalid_token(JsonLexContext *lex);
 static char *extract_mb_char(char *s);
-
-extern Datum json_in(PG_FUNCTION_ARGS);
+static void composite_to_json(Datum composite, StringInfo result, bool use_line_feeds);
+static void array_dim_to_json(StringInfo result, int dim, int ndims,int * dims,
+							  Datum *vals, int * valcount, TYPCATEGORY tcategory,
+							  Oid typoutputfunc, bool use_line_feeds);
+static void array_to_json_internal(Datum array, StringInfo result, bool use_line_feeds);
 
 /*
  * Input.
@@ -663,3 +672,344 @@ extract_mb_char(char *s)
 
 	return res;
 }
+
+/*
+ * Turn a scalar Datum into JSON. Hand off a non-scalar datum to
+ * composite_to_json or array_to_json_internal as appropriate.
+ */
+static inline void
+datum_to_json(Datum val, StringInfo result, TYPCATEGORY tcategory,
+			  Oid typoutputfunc)
+{
+
+	char *outputstr;
+
+	if (val == (Datum) NULL)
+	{
+		appendStringInfoString(result,"null");
+		return;
+	}
+
+	switch (tcategory)
+	{
+		case TYPCATEGORY_ARRAY:
+			array_to_json_internal(val, result, false);
+			break;
+		case TYPCATEGORY_COMPOSITE:
+			composite_to_json(val, result, false);
+			break;
+		case TYPCATEGORY_BOOLEAN:
+			if (DatumGetBool(val))
+				appendStringInfoString(result,"true");
+			else
+				appendStringInfoString(result,"false");
+			break;
+		case TYPCATEGORY_NUMERIC:
+			outputstr = OidOutputFunctionCall(typoutputfunc, val);
+			/*
+			 * Don't call escape_json here. Numeric output should
+			 * be a valid JSON number and JSON numbers shouldn't
+			 * be quoted.
+			 */
+			appendStringInfoString(result, outputstr);
+			pfree(outputstr);
+			break;
+		default:
+			outputstr = OidOutputFunctionCall(typoutputfunc, val);
+			escape_json(result, outputstr);
+			pfree(outputstr);
+	}
+}
+
+/*
+ * Process a single dimension of an array.
+ * If it's the innermost dimension, output the values, otherwise call
+ * ourselves recursively to process the next dimension.
+ */
+static void
+array_dim_to_json(StringInfo result, int dim, int ndims,int * dims, Datum *vals,
+				  int * valcount, TYPCATEGORY tcategory, Oid typoutputfunc,
+				  bool use_line_feeds)
+{
+
+	int i;
+	char *sep;
+
+	Assert(dim < ndims);
+
+	sep = use_line_feeds ? ",\n " : ",";
+
+	appendStringInfoChar(result, '[');
+
+	for (i = 1; i <= dims[dim]; i++)
+	{
+		if (i > 1)
+			appendStringInfoString(result,sep);
+
+		if (dim + 1 == ndims)
+		{
+			datum_to_json(vals[*valcount],result,tcategory,typoutputfunc);
+			(*valcount)++;
+		}
+		else
+		{
+			/*
+			 * Do we want line feeds on inner dimensions of arrays?
+			 * For now we'll say no.
+			 */
+			array_dim_to_json(result, dim+1, ndims, dims, vals, valcount,
+							  tcategory,typoutputfunc,false);
+		}
+	}
+
+	appendStringInfoChar(result, ']');
+}
+
+/*
+ * Turn an array into JSON.
+ */
+static void
+array_to_json_internal(Datum array, StringInfo result, bool use_line_feeds)
+{
+	ArrayType  *v = DatumGetArrayTypeP(array);
+	Oid			element_type = ARR_ELEMTYPE(v);
+	int		   *dim;
+	int			ndim;
+	int			nitems;
+	int         count = 0;
+	Datum	   *elements;
+	bool       *nulls;
+
+	int16		typlen;
+	bool		typbyval;
+	char		typalign,
+				typdelim;
+	Oid			typioparam;
+	Oid			typoutputfunc;
+	TYPCATEGORY tcategory;
+
+	ndim = ARR_NDIM(v);
+	dim = ARR_DIMS(v);
+	nitems = ArrayGetNItems(ndim, dim);
+
+	if (nitems <= 0)
+	{
+		appendStringInfoString(result,"[]");
+		return;
+	}
+
+	get_type_io_data(element_type, IOFunc_output,
+					 &typlen, &typbyval, &typalign,
+					 &typdelim, &typioparam, &typoutputfunc);
+
+	deconstruct_array(v, element_type, typlen, typbyval,
+					  typalign, &elements, &nulls,
+					  &nitems);
+
+	/* can't have an array of arrays, so this is the only special case here */
+	if (element_type == RECORDOID)
+		tcategory = TYPCATEGORY_COMPOSITE;
+	else
+		tcategory = TypeCategory(element_type);
+
+	array_dim_to_json(result, 0, ndim, dim, elements, &count, tcategory,
+					  typoutputfunc, use_line_feeds);
+
+	pfree(elements);
+	pfree(nulls);
+}
+
+/*
+ * Turn a composite / record into JSON.
+ */
+static void
+composite_to_json(Datum composite, StringInfo result, bool use_line_feeds)
+{
+    HeapTupleHeader td;
+    Oid         tupType;
+    int32       tupTypmod;
+    TupleDesc   tupdesc;
+    HeapTupleData tmptup, *tuple;
+	int         i;
+	bool        needsep = false;
+	char       *sep;
+
+	sep = use_line_feeds ? ",\n " : ",";
+
+    td = DatumGetHeapTupleHeader(composite);
+
+    /* Extract rowtype info and find a tupdesc */
+    tupType = HeapTupleHeaderGetTypeId(td);
+    tupTypmod = HeapTupleHeaderGetTypMod(td);
+    tupdesc = lookup_rowtype_tupdesc(tupType, tupTypmod);
+
+    /* Build a temporary HeapTuple control structure */
+    tmptup.t_len = HeapTupleHeaderGetDatumLength(td);
+    tmptup.t_data = td;
+	tuple = &tmptup;
+
+	appendStringInfoChar(result,'{');
+
+    for (i = 0; i < tupdesc->natts; i++)
+    {
+        Datum       val, origval;
+        bool        isnull;
+        char       *attname;
+		TYPCATEGORY tcategory;
+		Oid			typoutput;
+		bool		typisvarlena;
+
+		if (tupdesc->attrs[i]->attisdropped)
+            continue;
+
+		if (needsep)
+			appendStringInfoString(result,sep);
+		needsep = true;
+
+        attname = NameStr(tupdesc->attrs[i]->attname);
+		escape_json(result,attname);
+		appendStringInfoChar(result,':');
+
+        origval = heap_getattr(tuple, i + 1, tupdesc, &isnull);
+
+		if (tupdesc->attrs[i]->atttypid == RECORDARRAYOID)
+			tcategory = TYPCATEGORY_ARRAY;
+		else if (tupdesc->attrs[i]->atttypid == RECORDOID)
+			tcategory = TYPCATEGORY_COMPOSITE;
+		else
+			tcategory = TypeCategory(tupdesc->attrs[i]->atttypid);
+
+		getTypeOutputInfo(tupdesc->attrs[i]->atttypid,
+						  &typoutput, &typisvarlena);
+
+		/*
+		 * If we have a toasted datum, forcibly detoast it here to avoid memory
+		 * leakage inside the type's output routine.
+		 */
+		if (typisvarlena && ! isnull)
+			val = PointerGetDatum(PG_DETOAST_DATUM(origval));
+		else
+			val = origval;
+
+		datum_to_json(val, result, tcategory, typoutput);
+
+		/* Clean up detoasted copy, if any */
+		if (val != origval)
+			pfree(DatumGetPointer(val));
+	}
+
+	appendStringInfoChar(result,'}');
+    ReleaseTupleDesc(tupdesc);
+}
+
+/*
+ * SQL function array_to_json(array)
+ */
+extern Datum
+array_to_json(PG_FUNCTION_ARGS)
+{
+	Datum    array = PG_GETARG_DATUM(0);
+	StringInfo	result;
+
+	result = makeStringInfo();
+
+	array_to_json_internal(array, result, false);
+
+	PG_RETURN_TEXT_P(cstring_to_text(result->data));
+}
+
+/*
+ * SQL function array_to_json(array, prettybool)
+ */
+extern Datum
+array_to_json_pretty(PG_FUNCTION_ARGS)
+{
+	Datum    array = PG_GETARG_DATUM(0);
+	bool     use_line_feeds = PG_GETARG_BOOL(1);
+	StringInfo	result;
+
+	result = makeStringInfo();
+
+	array_to_json_internal(array, result, use_line_feeds);
+
+	PG_RETURN_TEXT_P(cstring_to_text(result->data));
+}
+
+/*
+ * SQL function row_to_json(row)
+ */
+extern Datum
+row_to_json(PG_FUNCTION_ARGS)
+{
+	Datum    rowval = PG_GETARG_DATUM(0);
+	StringInfo	result;
+
+	result = makeStringInfo();
+
+	composite_to_json(rowval, result, false);
+
+	PG_RETURN_TEXT_P(cstring_to_text(result->data));
+}
+row_to_json(PG_FUNCTION_ARGS)
+{
+	Datum    array = PG_GETARG_DATUM(0);
+	StringInfo	result;
+
+	result = makeStringInfo();
+
+	composite_to_json(array, result, false);
+
+	PG_RETURN_TEXT_P(cstring_to_text(result->data));
+};
+
+/*
+ * SQL function row_to_json(row, prettybool)
+ */
+extern Datum
+row_to_json_pretty(PG_FUNCTION_ARGS)
+{
+	Datum    rowval = PG_GETARG_DATUM(0);
+	bool     use_line_feeds = PG_GETARG_BOOL(1);
+	StringInfo	result;
+
+	result = makeStringInfo();
+
+	composite_to_json(rowval, result, use_line_feeds);
+
+	PG_RETURN_TEXT_P(cstring_to_text(result->data));
+}
+row_to_json_pretty(PG_FUNCTION_ARGS)
+{
+	Datum    array = PG_GETARG_DATUM(0);
+	bool     use_line_feeds = PG_GETARG_BOOL(1);
+	StringInfo	result;
+
+	result = makeStringInfo();
+
+	composite_to_json(array, result, use_line_feeds);
+
+	PG_RETURN_TEXT_P(cstring_to_text(result->data));
+};
+
+/*
+ * Produce a JSON string literal, properly escaping characters in the text.
+ */
+void
+escape_json(StringInfo buf, const char *str)
+{
+	const char *p;
+
+	appendStringInfoCharMacro(buf, '\"');
+	for (p = str; *p; p++)
+	{
+		switch (*p)
+		{
+			case '\b':
+				appendStringInfoString(buf, "\\b");
+				break;
+			case '\f':
+				appendStringInfoString(buf, "\\f");
+				break;
+			case '\n':
+				appendStringInfoString(buf, "\\n");
+				break;
+			case '\r':
+				appendStringInfoString(buf, "\\r");
+				break;
+			case '\t':
+				appendStringInfoString(buf, "\\t");
+				break;
+			case '"':
+				appendStringInfoString(buf, "\\\"");
+				break;
+			case '\\':
+				appendStringInfoString(buf, "\\\\");
+				break;
+			default:
+				if ((unsigned char) *p < ' ')
+					appendStringInfo(buf, "\\u%04x", (int) *p);
+				else
+					appendStringInfoCharMacro(buf, *p);
+				break;
+		}
+	}
+	appendStringInfoCharMacro(buf, '\"');
+}
+
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 5dc6d05..8fc4ddb 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4031,6 +4031,14 @@ DATA(insert OID = 323 (  json_recv		   PGNSP PGUID 12 1 0 0 0 f f f t f s 1 0 11
 DESCR("I/O");
 DATA(insert OID = 324 (  json_send		   PGNSP PGUID 12 1 0 0 0 f f f t f s 1 0 17 "114" _null_ _null_ _null_ _null_ json_send _null_ _null_ _null_ ));
 DESCR("I/O");
+DATA(insert OID = 3153 (  array_to_json	   PGNSP PGUID 12 1 0 0 0 f f f t f s 1 0 114 "2277" _null_ _null_ _null_ _null_ array_to_json _null_ _null_ _null_ ));
+DESCR("map array to json");
+DATA(insert OID = 3154 (  array_to_json	   PGNSP PGUID 12 1 0 0 0 f f f t f s 2 0 114 "2277 16" _null_ _null_ _null_ _null_ array_to_json_pretty _null_ _null_ _null_ ));
+DESCR("map array to json with optional pretty printing");
+DATA(insert OID = 3155 (  row_to_json	   PGNSP PGUID 12 1 0 0 0 f f f t f s 1 0 114 "2249" _null_ _null_ _null_ _null_ row_to_json _null_ _null_ _null_ ));
+DESCR("map row to json");
+DATA(insert OID = 3156 (  row_to_json	   PGNSP PGUID 12 1 0 0 0 f f f t f s 2 0 114 "2249 16" _null_ _null_ _null_ _null_ row_to_json_pretty _null_ _null_ _null_ ));
+DESCR("map row to json with optional pretty printing");
 
 /* uuid */
 DATA(insert OID = 2952 (  uuid_in		   PGNSP PGUID 12 1 0 0 0 f f f t f i 1 0 2950 "2275" _null_ _null_ _null_ _null_ uuid_in _null_ _null_ _null_ ));
diff --git a/src/include/utils/json.h b/src/include/utils/json.h
index ee87fd6..415787b 100644
--- a/src/include/utils/json.h
+++ b/src/include/utils/json.h
@@ -20,5 +20,10 @@ extern Datum json_in(PG_FUNCTION_ARGS);
 extern Datum json_out(PG_FUNCTION_ARGS);
 extern Datum json_recv(PG_FUNCTION_ARGS);
 extern Datum json_send(PG_FUNCTION_ARGS);
+extern Datum array_to_json(PG_FUNCTION_ARGS);
+extern Datum array_to_json_pretty(PG_FUNCTION_ARGS);
+extern Datum row_to_json(PG_FUNCTION_ARGS);
+extern Datum row_to_json_pretty(PG_FUNCTION_ARGS);
+extern void  escape_json(StringInfo buf, const char *str);
 
 #endif   /* XML_H */
diff --git a/src/test/regress/expected/json.out b/src/test/regress/expected/json.out
index 5ef65f7..f2148bf 100644
--- a/src/test/regress/expected/json.out
+++ b/src/test/regress/expected/json.out
@@ -256,3 +256,114 @@ ERROR:  invalid input syntax for type json: "    "
 LINE 1: SELECT '    '::json;
                ^
 DETAIL:  The input string ended unexpectedly.
+--constructors
+-- array_to_json
+SELECT array_to_json(array(select 1 as a));
+ array_to_json 
+---------------
+ [1]
+(1 row)
+
+SELECT array_to_json(array_agg(q),false) from (select x as b, x * 2 as c from generate_series(1,3) x) q;
+                   array_to_json                   
+---------------------------------------------------
+ [{"f1":1,"f2":2},{"f1":2,"f2":4},{"f1":3,"f2":6}]
+(1 row)
+
+SELECT array_to_json(array_agg(q),true) from (select x as b, x * 2 as c from generate_series(1,3) x) q;
+   array_to_json   
+-------------------
+ [{"f1":1,"f2":2},+
+  {"f1":2,"f2":4},+
+  {"f1":3,"f2":6}]
+(1 row)
+
+SELECT array_to_json(array_agg(q),false)
+  FROM ( SELECT $$a$$ || x AS b, y AS c, 
+               ARRAY[ROW(x.*,ARRAY[1,2,3]),
+               ROW(y.*,ARRAY[4,5,6])] AS z 
+         FROM generate_series(1,2) x, 
+              generate_series(4,5) y) q;
+                                                                                                                                 array_to_json                                                                                                                                 
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ [{"b":"a1","c":4,"z":[{"f1":1,"f2":[1,2,3]},{"f1":4,"f2":[4,5,6]}]},{"b":"a1","c":5,"z":[{"f1":1,"f2":[1,2,3]},{"f1":5,"f2":[4,5,6]}]},{"b":"a2","c":4,"z":[{"f1":2,"f2":[1,2,3]},{"f1":4,"f2":[4,5,6]}]},{"b":"a2","c":5,"z":[{"f1":2,"f2":[1,2,3]},{"f1":5,"f2":[4,5,6]}]}]
+(1 row)
+
+SELECT array_to_json(array_agg(x),false) from generate_series(5,10) x;
+ array_to_json  
+----------------
+ [5,6,7,8,9,10]
+(1 row)
+
+SELECT array_to_json('{{1,5},{99,100}}'::int[]);
+  array_to_json   
+------------------
+ [[1,5],[99,100]]
+(1 row)
+
+-- row_to_json
+SELECT row_to_json(row(1,'foo'));
+     row_to_json     
+---------------------
+ {"f1":1,"f2":"foo"}
+(1 row)
+
+SELECT row_to_json(q) 
+FROM (SELECT $$a$$ || x AS b, 
+         y AS c, 
+         ARRAY[ROW(x.*,ARRAY[1,2,3]),
+               ROW(y.*,ARRAY[4,5,6])] AS z 
+      FROM generate_series(1,2) x, 
+           generate_series(4,5) y) q;
+                            row_to_json                             
+--------------------------------------------------------------------
+ {"b":"a1","c":4,"z":[{"f1":1,"f2":[1,2,3]},{"f1":4,"f2":[4,5,6]}]}
+ {"b":"a1","c":5,"z":[{"f1":1,"f2":[1,2,3]},{"f1":5,"f2":[4,5,6]}]}
+ {"b":"a2","c":4,"z":[{"f1":2,"f2":[1,2,3]},{"f1":4,"f2":[4,5,6]}]}
+ {"b":"a2","c":5,"z":[{"f1":2,"f2":[1,2,3]},{"f1":5,"f2":[4,5,6]}]}
+(4 rows)
+
+SELECT row_to_json(q,true) 
+FROM (SELECT $$a$$ || x AS b, 
+         y AS c, 
+         ARRAY[ROW(x.*,ARRAY[1,2,3]),
+               ROW(y.*,ARRAY[4,5,6])] AS z 
+      FROM generate_series(1,2) x, 
+           generate_series(4,5) y) q;
+                     row_to_json                     
+-----------------------------------------------------
+ {"b":"a1",                                         +
+  "c":4,                                            +
+  "z":[{"f1":1,"f2":[1,2,3]},{"f1":4,"f2":[4,5,6]}]}
+ {"b":"a1",                                         +
+  "c":5,                                            +
+  "z":[{"f1":1,"f2":[1,2,3]},{"f1":5,"f2":[4,5,6]}]}
+ {"b":"a2",                                         +
+  "c":4,                                            +
+  "z":[{"f1":2,"f2":[1,2,3]},{"f1":4,"f2":[4,5,6]}]}
+ {"b":"a2",                                         +
+  "c":5,                                            +
+  "z":[{"f1":2,"f2":[1,2,3]},{"f1":5,"f2":[4,5,6]}]}
+(4 rows)
+
+CREATE TEMP TABLE rows AS
+SELECT x, 'txt' || x as y
+FROM generate_series(1,3) AS x;
+SELECT row_to_json(q,true) 
+FROM rows q;
+ row_to_json  
+--------------
+ {"x":1,     +
+  "y":"txt1"}
+ {"x":2,     +
+  "y":"txt2"}
+ {"x":3,     +
+  "y":"txt3"}
+(3 rows)
+
+SELECT row_to_json(row((select array_agg(x) as d from generate_series(5,10) x)),false);
+      row_to_json      
+-----------------------
+ {"f1":[5,6,7,8,9,10]}
+(1 row)
+
diff --git a/src/test/regress/sql/json.sql b/src/test/regress/sql/json.sql
index 440398b..6127355 100644
--- a/src/test/regress/sql/json.sql
+++ b/src/test/regress/sql/json.sql
@@ -54,3 +54,46 @@ SELECT 'truf'::json;			-- ERROR, not a keyword
 SELECT 'trues'::json;			-- ERROR, not a keyword
 SELECT ''::json;				-- ERROR, no value
 SELECT '    '::json;			-- ERROR, no value
+
+--constructors
+-- array_to_json
+
+SELECT array_to_json(array(select 1 as a));
+SELECT array_to_json(array_agg(q),false) from (select x as b, x * 2 as c from generate_series(1,3) x) q;
+SELECT array_to_json(array_agg(q),true) from (select x as b, x * 2 as c from generate_series(1,3) x) q;
+SELECT array_to_json(array_agg(q),false)
+  FROM ( SELECT $$a$$ || x AS b, y AS c, 
+               ARRAY[ROW(x.*,ARRAY[1,2,3]),
+               ROW(y.*,ARRAY[4,5,6])] AS z 
+         FROM generate_series(1,2) x, 
+              generate_series(4,5) y) q;
+SELECT array_to_json(array_agg(x),false) from generate_series(5,10) x;
+SELECT array_to_json('{{1,5},{99,100}}'::int[]);
+
+-- row_to_json
+SELECT row_to_json(row(1,'foo'));
+
+SELECT row_to_json(q) 
+FROM (SELECT $$a$$ || x AS b, 
+         y AS c, 
+         ARRAY[ROW(x.*,ARRAY[1,2,3]),
+               ROW(y.*,ARRAY[4,5,6])] AS z 
+      FROM generate_series(1,2) x, 
+           generate_series(4,5) y) q;
+
+SELECT row_to_json(q,true) 
+FROM (SELECT $$a$$ || x AS b, 
+         y AS c, 
+         ARRAY[ROW(x.*,ARRAY[1,2,3]),
+               ROW(y.*,ARRAY[4,5,6])] AS z 
+      FROM generate_series(1,2) x, 
+           generate_series(4,5) y) q;
+
+CREATE TEMP TABLE rows AS
+SELECT x, 'txt' || x as y
+FROM generate_series(1,3) AS x;
+
+SELECT row_to_json(q,true) 
+FROM rows q;
+
+SELECT row_to_json(row((select array_agg(x) as d from generate_series(5,10) x)),false);
#4Pavel Stehule
pavel.stehule@gmail.com
In reply to: Andrew Dunstan (#3)
Re: JSON output functions.

2012/2/2 Andrew Dunstan <andrew@dunslane.net>:

On 02/02/2012 04:35 AM, Abhijit Menon-Sen wrote:

At 2012-02-01 18:48:28 -0500, andrew.dunstan@pgexperts.com wrote:

For now I'm inclined not to proceed with that, and leave it as an
optimization to be considered later if necessary. Thoughts?

I agree, there doesn't seem to be a pressing need to do it now.

OK, here's my final version of the patch for constructor functions. If
there's no further comment I'll go with this.

These functions are super, thank you.

Do you plan to fix the issue with row attribute names in 9.2?

Regards

Pavel

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#5Andrew Dunstan
andrew@dunslane.net
In reply to: Pavel Stehule (#4)
Re: JSON output functions.

On 02/02/2012 12:20 PM, Pavel Stehule wrote:

2012/2/2 Andrew Dunstan<andrew@dunslane.net>:

On 02/02/2012 04:35 AM, Abhijit Menon-Sen wrote:

At 2012-02-01 18:48:28 -0500, andrew.dunstan@pgexperts.com wrote:

For now I'm inclined not to proceed with that, and leave it as an
optimization to be considered later if necessary. Thoughts?

I agree, there doesn't seem to be a pressing need to do it now.

OK, here's my final version of the patch for constructor functions. If
there's no further comment I'll go with this.

These functions are super, thank you.

Do you plan to fix the issue with row attribute names in 9.2?

Yeah. Tom did some initial work which he published here:
<http://archives.postgresql.org/message-id/28413.1321500388%40sss.pgh.pa.us>,
noting:

It's not really ideal with respect to
the ValuesScan case, because what you get seems to always be the
hard-wired "columnN" names for VALUES columns, even if you try to
override that with an alias
...
Curiously, it works just fine if the VALUES can be folded

and later he said:

Upon further review, this patch would need some more work even for the
RowExpr case, because there are several places that build RowExprs
without bothering to build a valid colnames list. It's clearly soluble
if anyone cares to put in the work, but I'm not personally excited
enough to pursue it ..

I'm going to look at that issue first, since the unfolded VALUES clause seems like something of an obscure corner case. Feel free to chime in if you can.

cheers

andrew

#6Stefan Keller
sfkeller@gmail.com
In reply to: Andrew Dunstan (#5)
Re: JSON output functions.

Hi Andrew

Nice work!

Just for completeness: Did you also think of including geometry types
in JSON output functions in later releases? There's a nice extension
of JSON called GeoJSON for a starting point.

Yours, Stefan

#7Andrew Dunstan
andrew@dunslane.net
In reply to: Stefan Keller (#6)
Re: JSON output functions.

On 02/05/2012 02:31 PM, Stefan Keller wrote:

Hi Andrew

Nice work!

Just for completeness: Did you also think of including geometry types
in JSON output functions in later releases? There's a nice extension
of JSON called GeoJSON for a starting point.

[side note: please don't top-reply on -hackers. See
<http://idallen.com/topposting.html>]

Currently, in array_to_json and row_to_json the only special cases are:

* record types are output as JSON records
* array types are output as JSON arrays
* numeric types are output without quoting
* boolean types are output as unquoted true or false
* NULLs are output as the unquoted JSON null

Everything else is output as its text representation, suitably quoted
and escaped.

If you want to change how those operate, now rather than later would be
the best time. But later we could certainly add various other
foo_to_json functions for things like geometry types and hstores.

cheers

andrew