proposal: row_to_array function

Started by Pavel Stehule, almost 11 years ago · 36 messages
#1 Pavel Stehule
pavel.stehule@gmail.com

Hi

I am returning to processing records in plpgsql.

I am thinking it could be processed simply with a transformation to an array.

We already have similar functions - hstore(row), row_to_json, ... - but going
through these functions can be an unnecessary step. Any row variable can be
transformed to a 2D text array.

There are two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

Both transformations can be simply implemented.
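
For illustration, assuming a table foo(a int, b text) containing the row (2, 'AHOJ'), the two variants might look roughly like this (neither function exists yet; the shapes below simply follow the proposal):

select row_to_array(f) from foo f;
    row_to_array
-------------------
 {{a,2},{b,AHOJ}}
(1 row)

select row_to_row_array(f) from foo f;
   row_to_row_array
----------------------
 {"(a,2)","(b,AHOJ)"}
(1 row)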

Comments, notices?

Regards

Pavel

#2 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Pavel Stehule (#1)
Re: proposal: row_to_array function

On 1/16/15 3:45 AM, Pavel Stehule wrote:

I am returning back to processing records in plpgsql.

I am thinking so it can be simply processed with transformations to array.

Now we have similar functions - hstore(row), row_to_json, ... but using of these functions can be a useless step. Any row variable can be transformed to 2D text array.

How is it useless? Why wouldn't you just use JSON and be done with it?

Do you have some use cases you can share?

There two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

If we're going to go that route, I think it makes more sense to create an actual key/value type (ie: http://pgxn.org/dist/pair/doc/pair.html) and return an array of that.
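
A rough sketch of that shape, using an ordinary composite type in place of the pair extension (the type name kv is made up for illustration):

create type kv as (key text, value text);

select array[row('a','1')::kv, row('b','2')::kv];
       array
-------------------
 {"(a,1)","(b,2)"}
(1 row)

That is essentially the row_to_row_array variant from the proposal, just with a named element type instead of an anonymous record.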
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


#3 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jim Nasby (#2)
Re: proposal: row_to_array function

2015-01-16 18:03 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/16/15 3:45 AM, Pavel Stehule wrote:

I am returning back to processing records in plpgsql.

I am thinking so it can be simply processed with transformations to array.

Now we have similar functions - hstore(row), row_to_json, ... but using
of these functions can be a useless step. Any row variable can be
transformed to 2D text array.

How is it useless? Why wouldn't you just use JSON and be done with it?

We can use FOREACH IN ARRAY iteration in plpgsql (a second variant would be
implementing FOREACH for jsonb),

so ROW->ARRAY is shorter than ROW->JSON->ARRAY or ROW->HSTORE->ARRAY.

Do you have some use cases you can share?

processing of NEW, OLD variables in triggers
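
For context, this is the kind of generic trigger where the extra step shows up today; a sketch assuming the hstore extension is installed (the function name audit_columns is made up):

create function audit_columns() returns trigger as $$
declare
  key text;
  value text;
begin
  -- ROW -> HSTORE -> set of (key, value); the proposal would let us go
  -- from the row to an iterable array in one step
  for key, value in select * from each(hstore(NEW)) loop
    raise notice 'column % = %', key, value;
  end loop;
  return NEW;
end;
$$ language plpgsql;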

There two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

If we're going to go that route, I think it makes more sense to create an
actual key/value type (ie: http://pgxn.org/dist/pair/doc/pair.html) and
return an array of that.

ok


--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

#4 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Pavel Stehule (#3)
Re: proposal: row_to_array function

On 1/16/15 11:22 AM, Pavel Stehule wrote:

2015-01-16 18:03 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/16/15 3:45 AM, Pavel Stehule wrote:

I am returning back to processing records in plpgsql.

I am thinking so it can be simply processed with transformations to array.

Now we have similar functions - hstore(row), row_to_json, ... but using of these functions can be a useless step. Any row variable can be transformed to 2D text array.

How is it useless? Why wouldn't you just use JSON and be done with it?

We can use a FOREACH IN ARRAY iteration in plpgsql (second variant is a implementation FOREACH for jsonb)

so ROW->ARRAY is shorter than ROW->JSON->ARRAY or ROW->HSTORE->ARRAY

I think the real problem here is that we're inventing a bunch of different ways to do the same thing: iterate over a set. Instead of doing that, should we add the idea of an iterator to the type system? That would make sense for arrays, hstore, json and XML.

Do you have some use cases you can share?

processing of NEW, OLD variables in triggers

Note that last time I checked you couldn't do something like NEW.variable, and I don't think you could use EXEC to do it either. So there's more needed here than just converting a record to an array.
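
To illustrate, inside a trigger function you cannot write NEW.colname where colname is a variable; the usual workarounds convert the whole row first (a sketch; colname is a hypothetical variable holding a column name):

  value := hstore(NEW) -> colname;         -- needs the hstore extension
  value := row_to_json(NEW) ->> colname;   -- core, works since 9.3

And writing a value back into NEW dynamically is covered by neither, so converting a record to an array only solves the reading side.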

There two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

If we're going to go that route, I think it makes more sense to create an actual key/value type (ie: http://pgxn.org/dist/pair/doc/pair.html) and return an array of that.

ok


--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


#5 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jim Nasby (#4)
Re: proposal: row_to_array function

2015-01-16 18:42 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/16/15 11:22 AM, Pavel Stehule wrote:

2015-01-16 18:03 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/16/15 3:45 AM, Pavel Stehule wrote:

I am returning back to processing records in plpgsql.

I am thinking so it can be simply processed with transformations
to array.

Now we have similar functions - hstore(row), row_to_json, ... but
using of these functions can be a useless step. Any row variable can be
transformed to 2D text array.

How is it useless? Why wouldn't you just use JSON and be done with it?

We can use a FOREACH IN ARRAY iteration in plpgsql (second variant is a
implementation FOREACH for jsonb)

so ROW->ARRAY is shorter than ROW->JSON->ARRAY or ROW->HSTORE->ARRAY

I think the real problem here is that we're inventing a bunch of different
ways to do the same thing: iterate over a set. Instead of doing that,
should we add the idea of an iterator to the type system? That would make
sense for arrays, hstore, json and XML.

What do you think? How could this be implemented?


Do you have some use cases you can share?

processing of NEW, OLD variables in triggers

Note that last time I checked you couldn't do something like NEW.variable,
and I don't think you could use EXEC to do it either. So there's more
needed here than just converting a record to an array.

There two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

If we're going to go that route, I think it makes more sense to
create an actual key/value type (ie: http://pgxn.org/dist/pair/doc/pair.html) and return an
array of that.

ok


--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

#6 Andrew Dunstan
andrew@dunslane.net
In reply to: Pavel Stehule (#3)
Re: proposal: row_to_array function

On 01/16/2015 12:22 PM, Pavel Stehule wrote:

There two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

If we're going to go that route, I think it makes more sense to
create an actual key/value type (ie:
http://pgxn.org/dist/pair/doc/pair.html) and return an array of that.

ok


I think we'd possibly be better off with simply returning a flat array,
[key1, value1, ...]

That's what the hstore(text[]) and json_object(text[]) functions accept,
along with the 2D variant, if we want a precedent.
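
For reference, the existing calls look like this (key/value pairs given as a flat text[]; both also take the 2D form):

select hstore('{aa,1,b,g}'::text[]);       -- "b"=>"g", "aa"=>"1"
select json_object('{aa,1,b,g}'::text[]);  -- {"aa" : "1", "b" : "g"}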

cheers

andrew


#7 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Andrew Dunstan (#6)
Re: proposal: row_to_array function

2015-01-16 22:35 GMT+01:00 Andrew Dunstan <andrew@dunslane.net>:

On 01/16/2015 12:22 PM, Pavel Stehule wrote:

There two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

If we're going to go that route, I think it makes more sense to
create an actual key/value type (ie:
http://pgxn.org/dist/pair/doc/pair.html) and return an array of that.

ok


I think we'd possibly be better off with simply returning a flat array,
[key1, value1, ...]

Thats's what the hstore(text[]) and json_object(text[]) functions accept,
along with the 2D variant, if we want a precedent.

It can be one of the supported variants. It should not be the only one,
because we cannot simply iterate over it.

Another possibility is to teach FOREACH to take a key and a value in one step.

Regards

Pavel


cheers

andrew

#8 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Pavel Stehule (#7)
Re: proposal: row_to_array function

2015-01-17 7:26 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:

2015-01-16 22:35 GMT+01:00 Andrew Dunstan <andrew@dunslane.net>:

On 01/16/2015 12:22 PM, Pavel Stehule wrote:

There two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

If we're going to go that route, I think it makes more sense to
create an actual key/value type (ie:
http://pgxn.org/dist/pair/doc/pair.html) and return an array of
that.

ok


I think we'd possibly be better off with simply returning a flat array,
[key1, value1, ...]

Thats's what the hstore(text[]) and json_object(text[]) functions accept,
along with the 2D variant, if we want a precedent.

It can be one of supported variant. I should not be one, because we cannot
to simply iterate over it

Next possibility is teach FOREACH to take key and value in one step.

I looked at the code, and iterating over (key, value) pairs is quite simple.

FOREACH already supports a target list, but the source has to be a composite array.

postgres=# do $$
declare a int;
b int;
begin
foreach a,b in array ARRAY[(1,2),(3,4)]
loop
raise notice 'a = %, b = %', a,b;
end loop;
end;
$$ language plpgsql;
NOTICE: a = 1, b = 2
NOTICE: a = 3, b = 4
DO

Handling the flat format ARRAY[k1,v1,k2,v2, ...] is not as consistent with the
current design.


Regards

Pavel

cheers

andrew

#9 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Andrew Dunstan (#6)
1 attachment(s)
Re: proposal: row_to_array function

Hi

I tested the concept of iterating over an array in the format [key1, value1,
key2, value2, ...] - what is nice is that it works for [[key1,value1],
[key2,value2], ...] too.

It adds only a few lines to the current code, and the change doesn't break
compatibility.

Do you think this patch is acceptable?

Ideas, comments?

Regards

Pavel

2015-01-16 22:35 GMT+01:00 Andrew Dunstan <andrew@dunslane.net>:


On 01/16/2015 12:22 PM, Pavel Stehule wrote:

There two possible transformations:

row_to_array --> [[key1, value1],[key2, value2], ...]
row_to_row_array --> [(key1, value1), (key2, value2), ... ]

If we're going to go that route, I think it makes more sense to
create an actual key/value type (ie:
http://pgxn.org/dist/pair/doc/pair.html) and return an array of that.

ok


I think we'd possibly be better off with simply returning a flat array,
[key1, value1, ...]

Thats's what the hstore(text[]) and json_object(text[]) functions accept,
along with the 2D variant, if we want a precedent.

cheers

andrew

Attachments:

foreach-key-value-PoC.patch (text/x-patch; charset=US-ASCII)
diff --git a/contrib/hstore/expected/hstore.out b/contrib/hstore/expected/hstore.out
new file mode 100644
index 9749e45..e44532e
*** a/contrib/hstore/expected/hstore.out
--- b/contrib/hstore/expected/hstore.out
*************** select %% 'aa=>1, cq=>l, b=>g, fg=>NULL'
*** 1148,1153 ****
--- 1148,1169 ----
   {b,g,aa,1,cq,l,fg,NULL}
  (1 row)
  
+ -- fast iteration over keys
+ do $$
+ declare
+   key text;
+   value text;
+ begin
+   foreach key, value in array hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore)
+   loop
+     raise notice 'key: %, value: %', key, value;
+   end loop;
+ end;
+ $$;
+ NOTICE:  key: b, value: g
+ NOTICE:  key: aa, value: 1
+ NOTICE:  key: cq, value: l
+ NOTICE:  key: fg, value: <NULL>
  select hstore_to_matrix('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
          hstore_to_matrix         
  ---------------------------------
diff --git a/contrib/hstore/sql/hstore.sql b/contrib/hstore/sql/hstore.sql
new file mode 100644
index 5a9e9ee..7b9eb09
*** a/contrib/hstore/sql/hstore.sql
--- b/contrib/hstore/sql/hstore.sql
*************** select avals('');
*** 257,262 ****
--- 257,275 ----
  select hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
  select %% 'aa=>1, cq=>l, b=>g, fg=>NULL';
  
+ -- fast iteration over keys
+ do $$
+ declare
+   key text;
+   value text;
+ begin
+   foreach key, value in array hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore)
+   loop
+     raise notice 'key: %, value: %', key, value;
+   end loop;
+ end;
+ $$;
+ 
  select hstore_to_matrix('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
  select %# 'aa=>1, cq=>l, b=>g, fg=>NULL';
  
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
new file mode 100644
index ae5421f..e250b32
*** a/src/pl/plpgsql/src/pl_exec.c
--- b/src/pl/plpgsql/src/pl_exec.c
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2242,2247 ****
--- 2242,2250 ----
  	Datum		value;
  	bool		isnull;
  
+ 
+ 	bool		multiassign = false;
+ 
  	/* get the value of the array expression */
  	value = exec_eval_expr(estate, stmt->expr, &isnull, &arrtype);
  	if (isnull)
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2303,2308 ****
--- 2306,2328 ----
  				(errcode(ERRCODE_DATATYPE_MISMATCH),
  			  errmsg("FOREACH loop variable must not be of an array type")));
  
+ 	/*
+ 	 * Proof concept -- multiassign in FOREACH cycle
+ 	 *
+ 	 * Motivation: FOREACH key, value IN ARRAY hstore_to_array(hstore(NEW)) ...
+ 	 */
+ 	if (loop_var->dtype == PLPGSQL_DTYPE_ROW
+ 		 && !type_is_rowtype(ARR_ELEMTYPE(arr)))
+ 	{
+ 		if (stmt->slice != 0)
+ 			ereport(ERROR,
+ 					(errcode(ERRCODE_DATATYPE_MISMATCH),
+ 				  errmsg("cannot to assign non composite value to composite variable")));
+ 
+ 		/* only when target var is composite, SLICE=0 and source is scalar */
+ 		multiassign = true;
+ 	}
+ 
  	/* Create an iterator to step through the array */
  	array_iterator = array_create_iterator(arr, stmt->slice);
  
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2323,2335 ****
  	{
  		found = true;			/* looped at least once */
  
! 		/* Assign current element/slice to the loop variable */
! 		exec_assign_value(estate, loop_var, value, iterator_result_type,
! 						  &isnull);
  
! 		/* In slice case, value is temporary; must free it to avoid leakage */
! 		if (stmt->slice > 0)
! 			pfree(DatumGetPointer(value));
  
  		/*
  		 * Execute the statements
--- 2343,2382 ----
  	{
  		found = true;			/* looped at least once */
  
! 		if (!multiassign)
! 		{
! 			/* Assign current element/slice to the loop variable */
! 			exec_assign_value(estate, loop_var, value, iterator_result_type,
! 							  &isnull);
  
! 			/* In slice case, value is temporary; must free it to avoid leakage */
! 			if (stmt->slice > 0)
! 				pfree(DatumGetPointer(value));
! 		}
! 		else
! 		{
! 			int	i;
! 			bool	first = true;
! 			PLpgSQL_row *row = (PLpgSQL_row *) loop_var;
! 
! 			for (i = 0; i < row->nfields; i++)
! 			{
! 				int		varno = row->varnos[i];
! 
! 				if (varno != -1)
! 				{
! 					PLpgSQL_datum *var = (PLpgSQL_datum *) (estate->datums[varno]);
! 
! 					if (!first)
! 						array_iterate(array_iterator, &value, &isnull);
! 					else
! 						first = false;
! 
! 					exec_assign_value(estate, var, value, iterator_result_type,
! 							  &isnull);
! 				}
! 			}
! 		}
  
  		/*
  		 * Execute the statements
#10 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Pavel Stehule (#9)
Re: proposal: row_to_array function

On 1/25/15 4:23 AM, Pavel Stehule wrote:

I tested a concept iteration over array in format [key1, value1, key2, value2, .. ] - what is nice, it works for [[key1,value1],[key2, value2], ...] too

It is only a few lines more to current code, and this change doesn't break a compatibility.

Do you think, so this patch is acceptable?

Ideas, comments?

Aside from fixing the comments... I think this needs more tests on corner cases. For example, what happens when you do

foreach a, b, c in array(array(1,2),array(3,4)) ?

Or the opposite case of

foreach a,b in array(array(1,2,3))

Also, what about:

foreach a,b in '{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[] ?
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


#11 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jim Nasby (#10)
Re: proposal: row_to_array function

2015-01-26 21:44 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/25/15 4:23 AM, Pavel Stehule wrote:

I tested a concept iteration over array in format [key1, value1, key2,
value2, .. ] - what is nice, it works for [[key1,value1],[key2, value2],
...] too

It is only a few lines more to current code, and this change doesn't
break a compatibility.

Do you think, so this patch is acceptable?

Ideas, comments?

Aside from fixing the comments... I think this needs more tests on corner
cases. For example, what happens when you do

foreach a, b, c in array(array(1,2),array(3,4)) ?

The behaviour is relatively simple -- missing values are NULL.

array(1,2),array(3,4) -- I assume you mean ARRAY[[1,2],[3,4]]; without SLICE
that is effectively iterated as ARRAY[1,2,3,4].

Or the opposite case of

foreach a,b in array(array(1,2,3))

Also, what about:

foreach a,b in '{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[] ?

postgres=# select array(select
unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));
array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generates the pairs {1,2},{3,4},{5,6},{7,8}

Regards

Pavel Stehule


--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

#12 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Pavel Stehule (#11)
1 attachment(s)
Re: proposal: row_to_array function

Hi

2015-01-27 11:41 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:

2015-01-26 21:44 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/25/15 4:23 AM, Pavel Stehule wrote:

I tested a concept iteration over array in format [key1, value1, key2,
value2, .. ] - what is nice, it works for [[key1,value1],[key2, value2],
...] too

It is only a few lines more to current code, and this change doesn't
break a compatibility.

Do you think, so this patch is acceptable?

Ideas, comments?

Aside from fixing the comments... I think this needs more tests on corner
cases. For example, what happens when you do

foreach a, b, c in array(array(1,2),array(3,4)) ?

it is relative simple behave -- empty values are NULL

array(1,2),array(3,4) -- do you think ARRAY[[1,2],[3,4]] is effectively
ARRAY[1,2,3,4]

Or the opposite case of

foreach a,b in array(array(1,2,3))

Also, what about:

foreach a,b in '{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[] ?

postgres=# select array(select
unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));
array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generate pairs {1,2}{3,4},{5,6},{7,8}

I fixed the situation when the array does not have enough elements.

Added more tests and simple documentation.

Regards

Pavel


Regards

Pavel Stehule

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

Attachments:

foreach-key-value-01.patch (text/x-patch; charset=US-ASCII)
diff --git a/contrib/hstore/expected/hstore.out b/contrib/hstore/expected/hstore.out
new file mode 100644
index 9749e45..e44532e
*** a/contrib/hstore/expected/hstore.out
--- b/contrib/hstore/expected/hstore.out
*************** select %% 'aa=>1, cq=>l, b=>g, fg=>NULL'
*** 1148,1153 ****
--- 1148,1169 ----
   {b,g,aa,1,cq,l,fg,NULL}
  (1 row)
  
+ -- fast iteration over keys
+ do $$
+ declare
+   key text;
+   value text;
+ begin
+   foreach key, value in array hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore)
+   loop
+     raise notice 'key: %, value: %', key, value;
+   end loop;
+ end;
+ $$;
+ NOTICE:  key: b, value: g
+ NOTICE:  key: aa, value: 1
+ NOTICE:  key: cq, value: l
+ NOTICE:  key: fg, value: <NULL>
  select hstore_to_matrix('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
          hstore_to_matrix         
  ---------------------------------
diff --git a/contrib/hstore/sql/hstore.sql b/contrib/hstore/sql/hstore.sql
new file mode 100644
index 5a9e9ee..7b9eb09
*** a/contrib/hstore/sql/hstore.sql
--- b/contrib/hstore/sql/hstore.sql
*************** select avals('');
*** 257,262 ****
--- 257,275 ----
  select hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
  select %% 'aa=>1, cq=>l, b=>g, fg=>NULL';
  
+ -- fast iteration over keys
+ do $$
+ declare
+   key text;
+   value text;
+ begin
+   foreach key, value in array hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore)
+   loop
+     raise notice 'key: %, value: %', key, value;
+   end loop;
+ end;
+ $$;
+ 
  select hstore_to_matrix('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
  select %# 'aa=>1, cq=>l, b=>g, fg=>NULL';
  
diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
new file mode 100644
index 69a0885..4ef0299
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
*************** NOTICE:  row = {7,8,9}
*** 2490,2495 ****
--- 2490,2518 ----
  NOTICE:  row = {10,11,12}
  </programlisting>
      </para>
+ 
+     <para>
+      <literal>FOREACH</> cycle can be used for iteration over record. You
+      need a <xref linkend="hstore"> extension. For this case a clause
+      <literal>SLICE</literal> should not be used. <literal>FOREACH</literal>
+      statements supports list of target variables. When source array is
+      a array of composites, then composite array element is saved to target
+      variables. When the array is a array of scalar values, then target 
+      variables are filled item by item.
+ <programlisting>
+ CREATE FUNCTION trig_function() RETURNS TRIGGER AS $$
+ DECLARE
+   key text; value text;
+ BEGIN
+   FOREACH key, value IN ARRAY hstore_to_array(hstore(NEW))
+   LOOP
+     RAISE NOTICE 'key = %, value = %', key, value;
+   END LOOP;
+   RETURN NEW;
+ END;
+ $$ LANGUAGE plpgsql;
+ </programlisting>
+     </para>
     </sect2>
  
     <sect2 id="plpgsql-error-trapping">
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
new file mode 100644
index ae5421f..4ab3d90
*** a/src/pl/plpgsql/src/pl_exec.c
--- b/src/pl/plpgsql/src/pl_exec.c
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2242,2247 ****
--- 2242,2250 ----
  	Datum		value;
  	bool		isnull;
  
+ 
+ 	bool		multiassign = false;
+ 
  	/* get the value of the array expression */
  	value = exec_eval_expr(estate, stmt->expr, &isnull, &arrtype);
  	if (isnull)
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2303,2308 ****
--- 2306,2328 ----
  				(errcode(ERRCODE_DATATYPE_MISMATCH),
  			  errmsg("FOREACH loop variable must not be of an array type")));
  
+ 	/*
+ 	 * Proof concept -- multiassign in FOREACH cycle
+ 	 *
+ 	 * Motivation: FOREACH key, value IN ARRAY hstore_to_array(hstore(NEW)) ...
+ 	 */
+ 	if (loop_var->dtype == PLPGSQL_DTYPE_ROW
+ 		 && !type_is_rowtype(ARR_ELEMTYPE(arr)))
+ 	{
+ 		if (stmt->slice != 0)
+ 			ereport(ERROR,
+ 					(errcode(ERRCODE_DATATYPE_MISMATCH),
+ 				  errmsg("cannot to assign non composite value to composite variable")));
+ 
+ 		/* only when target var is composite, SLICE=0 and source is scalar */
+ 		multiassign = true;
+ 	}
+ 
  	/* Create an iterator to step through the array */
  	array_iterator = array_create_iterator(arr, stmt->slice);
  
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2323,2335 ****
  	{
  		found = true;			/* looped at least once */
  
! 		/* Assign current element/slice to the loop variable */
! 		exec_assign_value(estate, loop_var, value, iterator_result_type,
! 						  &isnull);
  
! 		/* In slice case, value is temporary; must free it to avoid leakage */
! 		if (stmt->slice > 0)
! 			pfree(DatumGetPointer(value));
  
  		/*
  		 * Execute the statements
--- 2343,2387 ----
  	{
  		found = true;			/* looped at least once */
  
! 		if (!multiassign)
! 		{
! 			/* Assign current element/slice to the loop variable */
! 			exec_assign_value(estate, loop_var, value, iterator_result_type,
! 							  &isnull);
  
! 			/* In slice case, value is temporary; must free it to avoid leakage */
! 			if (stmt->slice > 0)
! 				pfree(DatumGetPointer(value));
! 		}
! 		else
! 		{
! 			int	i;
! 			bool	first = true;
! 			PLpgSQL_row *row = (PLpgSQL_row *) loop_var;
! 
! 			for (i = 0; i < row->nfields; i++)
! 			{
! 				int		varno = row->varnos[i];
! 
! 				if (varno != -1)
! 				{
! 					PLpgSQL_datum *var = (PLpgSQL_datum *) (estate->datums[varno]);
! 
! 					if (!first)
! 					{
! 						if (!array_iterate(array_iterator, &value, &isnull))
! 							ereport(ERROR,
! 									(errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
! 								   errmsg("array is not well sized, missing data")));
! 					}
! 					else
! 						first = false;
! 
! 					exec_assign_value(estate, var, value, iterator_result_type,
! 							  &isnull);
! 				}
! 			}
! 		}
  
  		/*
  		 * Execute the statements
diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out
new file mode 100644
index daf3447..3b4e00e
*** a/src/test/regress/expected/plpgsql.out
--- b/src/test/regress/expected/plpgsql.out
*************** NOTICE:  {"(35,78)","(88,76)"}
*** 5115,5120 ****
--- 5115,5173 ----
  
  drop function foreach_test(anyarray);
  drop type xy_tuple;
+ -- multiassign (key,value) tests
+ create or replace function foreach_test_ab(anyarray)
+ returns void as $$
+ declare
+   a text; b text;
+ begin
+   foreach a,b in array $1
+   loop
+     raise notice 'a: %, b: %', a, b;
+   end loop;
+ end
+ $$ language plpgsql;
+ select foreach_test_ab(array[1,2,3,4]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4
+  foreach_test_ab 
+ -----------------
+  
+ (1 row)
+ 
+ select foreach_test_ab(array[[1,2],[3,4]]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4
+  foreach_test_ab 
+ -----------------
+  
+ (1 row)
+ 
+ select foreach_test_ab(array[[1,2,3]]);
+ NOTICE:  a: 1, b: 2
+ ERROR:  array is not well sized, missing data
+ CONTEXT:  PL/pgSQL function foreach_test_ab(anyarray) line 5 at FOREACH over array
+ select foreach_test_ab('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4
+ NOTICE:  a: 5, b: 6
+ NOTICE:  a: 7, b: 8
+  foreach_test_ab 
+ -----------------
+  
+ (1 row)
+ 
+ select foreach_test_ab(array[null,null, 1,null, 1,1, null,1]);
+ NOTICE:  a: <NULL>, b: <NULL>
+ NOTICE:  a: 1, b: <NULL>
+ NOTICE:  a: 1, b: 1
+ NOTICE:  a: <NULL>, b: 1
+  foreach_test_ab 
+ -----------------
+  
+ (1 row)
+ 
+ drop function foreach_test_ab(anyarray);
  --
  -- Assorted tests for array subscript assignment
  --
diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql
new file mode 100644
index a0840c9..38fc07e
*** a/src/test/regress/sql/plpgsql.sql
--- b/src/test/regress/sql/plpgsql.sql
*************** select foreach_test(ARRAY[[(10,20),(40,6
*** 4067,4072 ****
--- 4067,4093 ----
  drop function foreach_test(anyarray);
  drop type xy_tuple;
  
+ -- multiassign (key,value) tests
+ create or replace function foreach_test_ab(anyarray)
+ returns void as $$
+ declare
+   a text; b text;
+ begin
+   foreach a,b in array $1
+   loop
+     raise notice 'a: %, b: %', a, b;
+   end loop;
+ end
+ $$ language plpgsql;
+ 
+ select foreach_test_ab(array[1,2,3,4]);
+ select foreach_test_ab(array[[1,2],[3,4]]);
+ select foreach_test_ab(array[[1,2,3]]);
+ select foreach_test_ab('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]);
+ select foreach_test_ab(array[null,null, 1,null, 1,1, null,1]);
+ 
+ drop function foreach_test_ab(anyarray);
+ 
  --
  -- Assorted tests for array subscript assignment
  --
#13 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Pavel Stehule (#12)
1 attachment(s)
Re: proposal: row_to_array function

Hello

here is an initial version of the row_to_array function - it transforms any
row to an array in the format proposed by Andrew.

Regards

Pavel

2015-01-27 19:58 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:


Hi

2015-01-27 11:41 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:

2015-01-26 21:44 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/25/15 4:23 AM, Pavel Stehule wrote:

I tested a concept iteration over array in format [key1, value1, key2,
value2, .. ] - what is nice, it works for [[key1,value1],[key2, value2],
...] too

It is only a few lines more to current code, and this change doesn't
break a compatibility.

Do you think, so this patch is acceptable?

Ideas, comments?

Aside from fixing the comments... I think this needs more tests on
corner cases. For example, what happens when you do

foreach a, b, c in array(array(1,2),array(3,4)) ?

it is relative simple behave -- empty values are NULL

array(1,2),array(3,4) -- do you think ARRAY[[1,2],[3,4]] is effectively
ARRAY[1,2,3,4]

Or the opposite case of

foreach a,b in array(array(1,2,3))

Also, what about:

foreach a,b in '{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[] ?

postgres=# select array(select
unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));
array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generate pairs {1,2}{3,4},{5,6},{7,8}

I fixed situation when array has not enough elements.

More tests, simple doc

Regards

Pavel

Regards

Pavel Stehule

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

Attachments:

row_to_array-PoC.patch (text/x-patch; charset=US-ASCII)
diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c
new file mode 100644
index 3dc9a84..d758d2d
*** a/src/backend/utils/adt/rowtypes.c
--- b/src/backend/utils/adt/rowtypes.c
***************
*** 21,26 ****
--- 21,27 ----
  #include "catalog/pg_type.h"
  #include "funcapi.h"
  #include "libpq/pqformat.h"
+ #include "utils/array.h"
  #include "utils/builtins.h"
  #include "utils/lsyscache.h"
  #include "utils/typcache.h"
*************** btrecordimagecmp(PG_FUNCTION_ARGS)
*** 1810,1812 ****
--- 1811,1898 ----
  {
  	PG_RETURN_INT32(record_image_cmp(fcinfo));
  }
+ 
+ /*
+  * transform any record to array in format [key1, value1, key2, value2 [, ...]]
+  */
+ Datum
+ row_to_array(PG_FUNCTION_ARGS)
+ {
+ 	HeapTupleHeader		rec = PG_GETARG_HEAPTUPLEHEADER(0);
+ 	TupleDesc		rectupdesc;
+ 	Oid			rectuptyp;
+ 	int32			rectuptypmod;
+ 	HeapTupleData		rectuple;
+ 	int	ncolumns;
+ 	Datum 		*recvalues;
+ 	bool  		*recnulls;
+ 	ArrayBuildState		*builder;
+ 	int	i;
+ 
+ 	/* Extract type info from the tuple itself */
+ 	rectuptyp = HeapTupleHeaderGetTypeId(rec);
+ 	rectuptypmod = HeapTupleHeaderGetTypMod(rec);
+ 	rectupdesc = lookup_rowtype_tupdesc(rectuptyp, rectuptypmod);
+ 	ncolumns = rectupdesc->natts;
+ 
+ 	/* Build a temporary HeapTuple control structure */
+ 	rectuple.t_len = HeapTupleHeaderGetDatumLength(rec);
+ 	ItemPointerSetInvalid(&(rectuple.t_self));
+ 	rectuple.t_tableOid = InvalidOid;
+ 	rectuple.t_data = rec;
+ 
+ 	recvalues = (Datum *) palloc(ncolumns * sizeof(Datum));
+ 	recnulls = (bool *) palloc(ncolumns * sizeof(bool));
+ 
+ 	/* Break down the tuple into fields */
+ 	heap_deform_tuple(&rectuple, rectupdesc, recvalues, recnulls);
+ 
+ 	/* Prepare target array */
+ 	builder = initArrayResult(TEXTOID, CurrentMemoryContext);
+ 
+ 	for (i = 0; i < ncolumns; i++)
+ 	{
+ 		Oid	columntyp = rectupdesc->attrs[i]->atttypid;
+ 		Datum		value;
+ 		bool		isnull;
+ 
+ 		/* Ignore dropped columns */
+ 		if (rectupdesc->attrs[i]->attisdropped)
+ 			continue;
+ 
+ 		builder = accumArrayResult(builder,
+ 							CStringGetTextDatum(NameStr(rectupdesc->attrs[i]->attname)),
+ 							false,
+ 							TEXTOID,
+ 							CurrentMemoryContext);
+ 
+ 		if (!recnulls[i])
+ 		{
+ 			char *outstr;
+ 			bool		typIsVarlena;
+ 			Oid		typoutput;
+ 			FmgrInfo		proc;
+ 
+ 			getTypeOutputInfo(columntyp, &typoutput, &typIsVarlena);
+ 			fmgr_info_cxt(typoutput, &proc, CurrentMemoryContext);
+ 			outstr = OutputFunctionCall(&proc, recvalues[i]);
+ 
+ 			value = CStringGetTextDatum(outstr);
+ 			isnull = false;
+ 		}
+ 		else
+ 		{
+ 			value = (Datum) 0;
+ 			isnull = true;
+ 		}
+ 
+ 		builder = accumArrayResult(builder,
+ 						    value, isnull,
+ 						    TEXTOID,
+ 						    CurrentMemoryContext);
+ 	}
+ 
+ 	ReleaseTupleDesc(rectupdesc);
+ 
+ 	PG_RETURN_DATUM(makeArrayResult(builder, CurrentMemoryContext));
+ }
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
new file mode 100644
index 9edfdb8..a27cf4a
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
*************** DATA(insert OID = 376 (  string_to_array
*** 891,896 ****
--- 891,898 ----
  DESCR("split delimited text into text[], with null string");
  DATA(insert OID = 384 (  array_to_string   PGNSP PGUID 12 1 0 0 0 f f f f f f s 3 0 25 "2277 25 25" _null_ _null_ _null_ _null_ array_to_text_null _null_ _null_ _null_ ));
  DESCR("concatenate array elements, using delimiter and null string, into text");
+ DATA(insert OID = 4057 (  row_to_array   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 1009 "2249" _null_ _null_ _null_ _null_ row_to_array _null_ _null_ _null_ ));
+ DESCR("transform any record to text[]");
  DATA(insert OID = 515 (  array_larger	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 2277 "2277 2277" _null_ _null_ _null_ _null_ array_larger _null_ _null_ _null_ ));
  DESCR("larger of two");
  DATA(insert OID = 516 (  array_smaller	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 2277 "2277 2277" _null_ _null_ _null_ _null_ array_smaller _null_ _null_ _null_ ));
diff --git a/src/include/utils/builtins.h b/src/include/utils/builtins.h
new file mode 100644
index bc4517d..315a1fc
*** a/src/include/utils/builtins.h
--- b/src/include/utils/builtins.h
*************** extern Datum record_image_gt(PG_FUNCTION
*** 668,673 ****
--- 668,674 ----
  extern Datum record_image_le(PG_FUNCTION_ARGS);
  extern Datum record_image_ge(PG_FUNCTION_ARGS);
  extern Datum btrecordimagecmp(PG_FUNCTION_ARGS);
+ extern Datum row_to_array(PG_FUNCTION_ARGS);
  
  /* ruleutils.c */
  extern bool quote_all_identifiers;
diff --git a/src/test/regress/expected/rowtypes.out b/src/test/regress/expected/rowtypes.out
new file mode 100644
index 54525de..0e249be
*** a/src/test/regress/expected/rowtypes.out
--- b/src/test/regress/expected/rowtypes.out
*************** select row_to_json(r) from (select q2,q1
*** 634,636 ****
--- 634,644 ----
   {"q2":0,"q1":0}
  (3 rows)
  
+ select row_to_array(r) from (select q2,q1 from tt1 offset 0) r;
+          row_to_array         
+ ------------------------------
+  {q2,456,q1,123}
+  {q2,4567890123456789,q1,123}
+  {q2,0,q1,0}
+ (3 rows)
+ 
diff --git a/src/test/regress/sql/rowtypes.sql b/src/test/regress/sql/rowtypes.sql
new file mode 100644
index bc3f021..3450417
*** a/src/test/regress/sql/rowtypes.sql
--- b/src/test/regress/sql/rowtypes.sql
*************** create temp table tt1 as select * from i
*** 271,273 ****
--- 271,274 ----
  create temp table tt2 () inherits(tt1);
  insert into tt2 values(0,0);
  select row_to_json(r) from (select q2,q1 from tt1 offset 0) r;
+ select row_to_array(r) from (select q2,q1 from tt1 offset 0) r;
#14 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Pavel Stehule (#13)
Re: proposal: row_to_array function

Example:

postgres=# do $$
declare r record;
declare k text; v text;
begin
for r in select * from foo loop
foreach k,v in array row_to_array(r) loop
raise notice 'k: %, v: %', k, v;
end loop;
end loop;
end;
$$;
NOTICE: k: a, v: 2
NOTICE: k: b, v: NAZDAR
NOTICE: k: c, v: 2015-01-27
NOTICE: k: a, v: 2
NOTICE: k: b, v: AHOJ
NOTICE: k: c, v: 2015-01-27
DO

Regards

Pavel

2015-01-27 21:26 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:


Hello

here is a initial version of row_to_array function - transform any row to
array in format proposed by Andrew.

Regards

Pavel

2015-01-27 19:58 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:

Hi

2015-01-27 11:41 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:

2015-01-26 21:44 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/25/15 4:23 AM, Pavel Stehule wrote:

I tested a concept iteration over array in format [key1, value1, key2,
value2, .. ] - what is nice, it works for [[key1,value1],[key2, value2],
...] too

It is only a few lines more to current code, and this change doesn't
break a compatibility.

Do you think, so this patch is acceptable?

Ideas, comments?

Aside from fixing the comments... I think this needs more tests on
corner cases. For example, what happens when you do

foreach a, b, c in array(array(1,2),array(3,4)) ?

it is relative simple behave -- empty values are NULL

array(1,2),array(3,4) -- do you think ARRAY[[1,2],[3,4]] is effectively
ARRAY[1,2,3,4]

Or the opposite case of

foreach a,b in array(array(1,2,3))

Also, what about:

foreach a,b in '{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[] ?

postgres=# select array(select
unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));
array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generate pairs {1,2}{3,4},{5,6},{7,8}

I fixed situation when array has not enough elements.

More tests, simple doc

Regards

Pavel

Regards

Pavel Stehule

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

#15 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Pavel Stehule (#13)
Re: proposal: row_to_array function

On 1/27/15 2:26 PM, Pavel Stehule wrote:

here is a initial version of row_to_array function - transform any row to array in format proposed by Andrew.

Please start a new thread for this... does it depend on the key-value patch?
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


#16 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Pavel Stehule (#12)
Re: proposal: row_to_array function

On 1/27/15 12:58 PM, Pavel Stehule wrote:

postgres=# select array(select unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));
array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generate pairs {1,2}{3,4},{5,6},{7,8}

I fixed situation when array has not enough elements.

More tests, simple doc

Hrm, this wasn't what I was expecting:

+ select foreach_test_ab(array[1,2,3,4]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4

I was expecting that foreach a,b over an array would require the array to have a dimension of 2. :(

I think this is bad, because this:

foreach_test_ab('{{1,2,3},{4,5,6}}'::int[]);

will give you 1,2; 3,4; 5,6. I don't see any way that that makes sense. Even if it did make sense, I'm more concerned that adding this will seriously paint us into a corner when it comes to the (to me) more rational case of returning {1,2,3},{4,5,6}.

I think we need to think some more about this, at least to make sure we're not painting ourselves into a corner for more appropriate array iteration.
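
For comparison, the dimension-aware case (getting {1,2,3} and then {4,5,6} back) is roughly what the existing SLICE form already produces when the target is itself an array; a quick sketch:

do $$
declare r int[];
begin
  foreach r slice 1 in array '{{1,2,3},{4,5,6}}'::int[]
  loop
    raise notice 'row = %', r;   -- {1,2,3}, then {4,5,6}
  end loop;
end;
$$;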
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


#17 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jim Nasby (#16)
Re: proposal: row_to_array function

On 28.1.2015 0:25, "Jim Nasby" <Jim.Nasby@bluetreble.com> wrote:

On 1/27/15 12:58 PM, Pavel Stehule wrote:

postgres=# select array(select

unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));

array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generate pairs {1,2}{3,4},{5,6},{7,8}

I fixed situation when array has not enough elements.

More tests, simple doc

Hrm, this wasn't what I was expecting:

+ select foreach_test_ab(array[1,2,3,4]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4

I was expecting that foreach a,b array would be expecting something in

the array to have a dimension of 2. :(

Your expectation is inconsistent with the current implementation of FOREACH.
Without SLICE it doesn't produce an array, and it doesn't take dimensions
into account.

I would not change this rule. It is not ambiguous, and it allows working with
1D, 2D, and 3D arrays. You can process Andrew's format well, and my proposed
format (a 2D array) well too.

The behaviour can be different when SLICE is used; there we can iterate
exactly according to the dimensions. Should we design that behaviour for this case?

I think this is bad, because this:

foreach_test_ab('{{1,2,3},{4,5,6}}'::int[]);

will give you 1,2; 3,4; 5,6. I don't see any way that that makes sense.

Even if it did make sense, I'm more concerned that adding this will
seriously paint us into a corner when it comes to the (to me) more rational
case of returning {1,2,3},{4,5,6}.

I think we need to think some more about this, at least to make sure

we're not painting ourselves into a corner for more appropriate array
iteration.


--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

#18 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jim Nasby (#15)
Re: proposal: row_to_array function

2015-01-28 0:16 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/27/15 2:26 PM, Pavel Stehule wrote:

here is a initial version of row_to_array function - transform any row to
array in format proposed by Andrew.

Please start a new thread for this... does it depend on the key-value
patch?

Partially - the selected format should be well supported by the FOREACH statement.

Regards

Pavel


--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

#19 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Pavel Stehule (#17)
Re: proposal: row_to_array function

2015-01-28 6:49 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:

Dne 28.1.2015 0:25 "Jim Nasby" <Jim.Nasby@bluetreble.com> napsal(a):

On 1/27/15 12:58 PM, Pavel Stehule wrote:

postgres=# select array(select

unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));

array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generate pairs {1,2}{3,4},{5,6},{7,8}

I fixed situation when array has not enough elements.

More tests, simple doc

Hrm, this wasn't what I was expecting:

+ select foreach_test_ab(array[1,2,3,4]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4

I was expecting that foreach a,b array would be expecting something in

the array to have a dimension of 2. :(

It is inconsist (your expectation) with current implementation of FOREACH.
It doesnt produce a array when SLICING is missing. And it doesnt calculate
with dimensions.

I would not to change this rule. It is not ambigonuous and it allows to
work with
1d, 2d, 3d dimensions array. You can process Andrew format well and my
proposed format (2d array) well too.

One small example:

CREATE OR REPLACE FUNCTION iterate_over_pairs(text[])
RETURNS void AS $$
DECLARE v1 text; v2 text; e text; i int := 0;
BEGIN
FOREACH e IN ARRAY $1 LOOP
IF i % 2 = 0 THEN v1 := e;
ELSE v2 := e; RAISE NOTICE 'v1: %, v2: %', v1, v2; END IF;
i := i + 1;
END LOOP;
END;
$$ LANGUAGE plpgsql;

postgres=# SELECT iterate_over_pairs(ARRAY[1,2,3,4]::text[]);
NOTICE: v1: 1, v2: 2
NOTICE: v1: 3, v2: 4
iterate_over_pairs
--------------------

(1 row)

postgres=# SELECT iterate_over_pairs(ARRAY[[1,2],[3,4]]::text[]);
NOTICE: v1: 1, v2: 2
NOTICE: v1: 3, v2: 4
iterate_over_pairs
--------------------

(1 row)

I can use iterate_over_pairs for 1D or 2D arrays equally well -- FOREACH was
designed in this direction: without SLICE, the dimensions are unimportant.

The discussed enhancement of FOREACH is just a faster and shorter (more
readable) form of the iterate_over_pairs use case:

FOREACH v1, v2 IN ARRAY $1 LOOP
..
END LOOP;

It is consistent with the current design.

You can look at the patch - at the moment, SLICE > 0 is disallowed when the
target variable is a ROW and the source is not a ROW.

Regards

Pavel

There can be differen behave when SLICING is used. There we can iterate
exactly with dimensions. We can design a behave in this case?

I think this is bad, because this:

foreach_test_ab('{{1,2,3},{4,5,6}}'::int[]);

will give you 1,2; 3,4; 5,6. I don't see any way that that makes sense.

Even if it did make sense, I'm more concerned that adding this will
seriously paint us into a corner when it comes to the (to me) more rational
case of returning {1,2,3},{4,5,6}.

I think we need to think some more about this, at least to make sure

we're not painting ourselves into a corner for more appropriate array
iteration.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

#20 Jeff Janes
jeff.janes@gmail.com
In reply to: Pavel Stehule (#12)
Re: proposal: row_to_array function

On Tue, Jan 27, 2015 at 10:58 AM, Pavel Stehule <pavel.stehule@gmail.com>
wrote:

Hi

2015-01-27 11:41 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:

2015-01-26 21:44 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/25/15 4:23 AM, Pavel Stehule wrote:

I tested a concept iteration over array in format [key1, value1, key2,
value2, .. ] - what is nice, it works for [[key1,value1],[key2, value2],
...] too

It is only a few lines more to current code, and this change doesn't
break a compatibility.

Do you think, so this patch is acceptable?

Ideas, comments?

Aside from fixing the comments... I think this needs more tests on
corner cases. For example, what happens when you do

foreach a, b, c in array(array(1,2),array(3,4)) ?

it is relative simple behave -- empty values are NULL

array(1,2),array(3,4) -- do you think ARRAY[[1,2],[3,4]] is effectively
ARRAY[1,2,3,4]

Or the opposite case of

foreach a,b in array(array(1,2,3))

Also, what about:

foreach a,b in '{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[] ?

postgres=# select array(select
unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));
array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generate pairs {1,2}{3,4},{5,6},{7,8}

I fixed situation when array has not enough elements.

This no longer applies due to conflicts in src/pl/plpgsql/src/pl_exec.c
caused by e524cbdc45ec6d677b1dd49

Also, what is the relationship of this patch to the row_to_array patch?
Are they independent, or does one depend on the other? row_to_array by
itself applies but doesn't compile.

Cheers,

Jeff

#21 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jeff Janes (#20)
1 attachment(s)
Re: proposal: row_to_array function

Hi

here is the rebased patch.

It contains both patches - the row_to_array function and the FOREACH array
support.

This design is consistent with the hstore functions, so there can be good
synergy.

Regards

Pavel

2015-03-28 23:53 GMT+01:00 Jeff Janes <jeff.janes@gmail.com>:


On Tue, Jan 27, 2015 at 10:58 AM, Pavel Stehule <pavel.stehule@gmail.com>
wrote:

Hi

2015-01-27 11:41 GMT+01:00 Pavel Stehule <pavel.stehule@gmail.com>:

2015-01-26 21:44 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 1/25/15 4:23 AM, Pavel Stehule wrote:

I tested a concept iteration over array in format [key1, value1, key2,
value2, .. ] - what is nice, it works for [[key1,value1],[key2, value2],
...] too

It is only a few lines more to current code, and this change doesn't
break a compatibility.

Do you think, so this patch is acceptable?

Ideas, comments?

Aside from fixing the comments... I think this needs more tests on
corner cases. For example, what happens when you do

foreach a, b, c in array(array(1,2),array(3,4)) ?

it is relative simple behave -- empty values are NULL

array(1,2),array(3,4) -- do you think ARRAY[[1,2],[3,4]] is effectively
ARRAY[1,2,3,4]

Or the opposite case of

foreach a,b in array(array(1,2,3))

Also, what about:

foreach a,b in '{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[] ?

postgres=# select array(select
unnest('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]));
array
-------------------
{1,2,3,4,5,6,7,8}
(1 row)

so it generate pairs {1,2}{3,4},{5,6},{7,8}

I fixed situation when array has not enough elements.

This no longer applies due to conflicts in src/pl/plpgsql/src/pl_exec.c
caused by e524cbdc45ec6d677b1dd49

Also, what is the relationship of this patch to the row_to_array patch?
Are they independent, or does one depend on the other? row_to_array by
itself applies but doesn't compile.

Cheers,

Jeff

Attachments:

row_to_array-20150329-01.patch (text/x-patch; charset=US-ASCII)
diff --git a/contrib/hstore/expected/hstore.out b/contrib/hstore/expected/hstore.out
new file mode 100644
index 9749e45..e44532e
*** a/contrib/hstore/expected/hstore.out
--- b/contrib/hstore/expected/hstore.out
*************** select %% 'aa=>1, cq=>l, b=>g, fg=>NULL'
*** 1148,1153 ****
--- 1148,1169 ----
   {b,g,aa,1,cq,l,fg,NULL}
  (1 row)
  
+ -- fast iteration over keys
+ do $$
+ declare
+   key text;
+   value text;
+ begin
+   foreach key, value in array hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore)
+   loop
+     raise notice 'key: %, value: %', key, value;
+   end loop;
+ end;
+ $$;
+ NOTICE:  key: b, value: g
+ NOTICE:  key: aa, value: 1
+ NOTICE:  key: cq, value: l
+ NOTICE:  key: fg, value: <NULL>
  select hstore_to_matrix('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
          hstore_to_matrix         
  ---------------------------------
diff --git a/contrib/hstore/sql/hstore.sql b/contrib/hstore/sql/hstore.sql
new file mode 100644
index 5a9e9ee..7b9eb09
*** a/contrib/hstore/sql/hstore.sql
--- b/contrib/hstore/sql/hstore.sql
*************** select avals('');
*** 257,262 ****
--- 257,275 ----
  select hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
  select %% 'aa=>1, cq=>l, b=>g, fg=>NULL';
  
+ -- fast iteration over keys
+ do $$
+ declare
+   key text;
+   value text;
+ begin
+   foreach key, value in array hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore)
+   loop
+     raise notice 'key: %, value: %', key, value;
+   end loop;
+ end;
+ $$;
+ 
  select hstore_to_matrix('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
  select %# 'aa=>1, cq=>l, b=>g, fg=>NULL';
  
diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
new file mode 100644
index d36acf6..e4abb97
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
*************** NOTICE:  row = {7,8,9}
*** 2505,2510 ****
--- 2505,2533 ----
  NOTICE:  row = {10,11,12}
  </programlisting>
      </para>
+ 
+     <para>
+      <literal>FOREACH</> cycle can be used for iteration over record. You
+      need a <xref linkend="hstore"> extension. For this case a clause
+      <literal>SLICE</literal> should not be used. <literal>FOREACH</literal>
+      statements supports list of target variables. When source array is
+      a array of composites, then composite array element is saved to target
+      variables. When the array is a array of scalar values, then target 
+      variables are filled item by item.
+ <programlisting>
+ CREATE FUNCTION trig_function() RETURNS TRIGGER AS $$
+ DECLARE
+   key text; value text;
+ BEGIN
+   FOREACH key, value IN ARRAY hstore_to_array(hstore(NEW))
+   LOOP
+     RAISE NOTICE 'key = %, value = %', key, value;
+   END LOOP;
+   RETURN NEW;
+ END;
+ $$ LANGUAGE plpgsql;
+ </programlisting>
+     </para>
     </sect2>
  
     <sect2 id="plpgsql-error-trapping">
diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c
new file mode 100644
index a65e18d..1a64d8e
*** a/src/backend/utils/adt/rowtypes.c
--- b/src/backend/utils/adt/rowtypes.c
***************
*** 21,26 ****
--- 21,27 ----
  #include "catalog/pg_type.h"
  #include "funcapi.h"
  #include "libpq/pqformat.h"
+ #include "utils/array.h"
  #include "utils/builtins.h"
  #include "utils/lsyscache.h"
  #include "utils/typcache.h"
*************** btrecordimagecmp(PG_FUNCTION_ARGS)
*** 1810,1812 ****
--- 1811,1900 ----
  {
  	PG_RETURN_INT32(record_image_cmp(fcinfo));
  }
+ 
+ /*
+  * transform any record to array in format [key1, value1, key2, value2 [, ...]]
+  *
+  * This format is compatible with hstore_to_array function
+  */
+ Datum
+ row_to_array(PG_FUNCTION_ARGS)
+ {
+ 	HeapTupleHeader		rec = PG_GETARG_HEAPTUPLEHEADER(0);
+ 	TupleDesc		rectupdesc;
+ 	Oid			rectuptyp;
+ 	int32			rectuptypmod;
+ 	HeapTupleData		rectuple;
+ 	int	ncolumns;
+ 	Datum 		*recvalues;
+ 	bool  		*recnulls;
+ 	ArrayBuildState		*builder;
+ 	int	i;
+ 
+ 	/* Extract type info from the tuple itself */
+ 	rectuptyp = HeapTupleHeaderGetTypeId(rec);
+ 	rectuptypmod = HeapTupleHeaderGetTypMod(rec);
+ 	rectupdesc = lookup_rowtype_tupdesc(rectuptyp, rectuptypmod);
+ 	ncolumns = rectupdesc->natts;
+ 
+ 	/* Build a temporary HeapTuple control structure */
+ 	rectuple.t_len = HeapTupleHeaderGetDatumLength(rec);
+ 	ItemPointerSetInvalid(&(rectuple.t_self));
+ 	rectuple.t_tableOid = InvalidOid;
+ 	rectuple.t_data = rec;
+ 
+ 	recvalues = (Datum *) palloc(ncolumns * sizeof(Datum));
+ 	recnulls = (bool *) palloc(ncolumns * sizeof(bool));
+ 
+ 	/* Break down the tuple into fields */
+ 	heap_deform_tuple(&rectuple, rectupdesc, recvalues, recnulls);
+ 
+ 	/* Prepare target array */
+ 	builder = initArrayResult(TEXTOID, CurrentMemoryContext, true);
+ 
+ 	for (i = 0; i < ncolumns; i++)
+ 	{
+ 		Oid	columntyp = rectupdesc->attrs[i]->atttypid;
+ 		Datum		value;
+ 		bool		isnull;
+ 
+ 		/* Ignore dropped columns */
+ 		if (rectupdesc->attrs[i]->attisdropped)
+ 			continue;
+ 
+ 		builder = accumArrayResult(builder,
+ 							CStringGetTextDatum(NameStr(rectupdesc->attrs[i]->attname)),
+ 							false,
+ 							TEXTOID,
+ 							CurrentMemoryContext);
+ 
+ 		if (!recnulls[i])
+ 		{
+ 			char *outstr;
+ 			bool		typIsVarlena;
+ 			Oid		typoutput;
+ 			FmgrInfo		proc;
+ 
+ 			getTypeOutputInfo(columntyp, &typoutput, &typIsVarlena);
+ 			fmgr_info_cxt(typoutput, &proc, CurrentMemoryContext);
+ 			outstr = OutputFunctionCall(&proc, recvalues[i]);
+ 
+ 			value = CStringGetTextDatum(outstr);
+ 			isnull = false;
+ 		}
+ 		else
+ 		{
+ 			value = (Datum) 0;
+ 			isnull = true;
+ 		}
+ 
+ 		builder = accumArrayResult(builder,
+ 						    value, isnull,
+ 						    TEXTOID,
+ 						    CurrentMemoryContext);
+ 	}
+ 
+ 	ReleaseTupleDesc(rectupdesc);
+ 
+ 	PG_RETURN_DATUM(makeArrayResult(builder, CurrentMemoryContext));
+ }
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
new file mode 100644
index a96d369..1b4c578
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
*************** DATA(insert OID = 376 (  string_to_array
*** 891,896 ****
--- 891,898 ----
  DESCR("split delimited text into text[], with null string");
  DATA(insert OID = 384 (  array_to_string   PGNSP PGUID 12 1 0 0 0 f f f f f f s 3 0 25 "2277 25 25" _null_ _null_ _null_ _null_ array_to_text_null _null_ _null_ _null_ ));
  DESCR("concatenate array elements, using delimiter and null string, into text");
+ DATA(insert OID = 4057 (  row_to_array   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 1009 "2249" _null_ _null_ _null_ _null_ row_to_array _null_ _null_ _null_ ));
+ DESCR("transform any record to text[]");
  DATA(insert OID = 515 (  array_larger	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 2277 "2277 2277" _null_ _null_ _null_ _null_ array_larger _null_ _null_ _null_ ));
  DESCR("larger of two");
  DATA(insert OID = 516 (  array_smaller	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 2277 "2277 2277" _null_ _null_ _null_ _null_ array_smaller _null_ _null_ _null_ ));
diff --git a/src/include/utils/builtins.h b/src/include/utils/builtins.h
new file mode 100644
index 6310641..7aabfe1
*** a/src/include/utils/builtins.h
--- b/src/include/utils/builtins.h
*************** extern Datum record_image_gt(PG_FUNCTION
*** 668,673 ****
--- 668,674 ----
  extern Datum record_image_le(PG_FUNCTION_ARGS);
  extern Datum record_image_ge(PG_FUNCTION_ARGS);
  extern Datum btrecordimagecmp(PG_FUNCTION_ARGS);
+ extern Datum row_to_array(PG_FUNCTION_ARGS);
  
  /* ruleutils.c */
  extern bool quote_all_identifiers;
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
new file mode 100644
index deefb1f..5c34a03
*** a/src/pl/plpgsql/src/pl_exec.c
--- b/src/pl/plpgsql/src/pl_exec.c
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2261,2266 ****
--- 2261,2269 ----
  	Datum		value;
  	bool		isnull;
  
+ 
+ 	bool		multiassign = false;
+ 
  	/* get the value of the array expression */
  	value = exec_eval_expr(estate, stmt->expr, &isnull, &arrtype, &arrtypmod);
  	if (isnull)
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2322,2327 ****
--- 2325,2345 ----
  				(errcode(ERRCODE_DATATYPE_MISMATCH),
  			  errmsg("FOREACH loop variable must not be of an array type")));
  
+ 	/*
+ 	 * it is multiassign? Don't support slicing yet.
+ 	 */
+ 	if (loop_var->dtype == PLPGSQL_DTYPE_ROW
+ 		 && !type_is_rowtype(ARR_ELEMTYPE(arr)))
+ 	{
+ 		if (stmt->slice != 0)
+ 			ereport(ERROR,
+ 					(errcode(ERRCODE_DATATYPE_MISMATCH),
+ 				  errmsg("cannot to assign non composite value to composite variable")));
+ 
+ 		/* only when target var is composite, SLICE=0 and source is scalar */
+ 		multiassign = true;
+ 	}
+ 
  	/* Create an iterator to step through the array */
  	array_iterator = array_create_iterator(arr, stmt->slice, NULL);
  
*************** exec_stmt_foreach_a(PLpgSQL_execstate *e
*** 2344,2356 ****
  	{
  		found = true;			/* looped at least once */
  
! 		/* Assign current element/slice to the loop variable */
! 		exec_assign_value(estate, loop_var, value, isnull,
! 						  iterator_result_type, iterator_result_typmod);
  
! 		/* In slice case, value is temporary; must free it to avoid leakage */
! 		if (stmt->slice > 0)
! 			pfree(DatumGetPointer(value));
  
  		/*
  		 * Execute the statements
--- 2362,2406 ----
  	{
  		found = true;			/* looped at least once */
  
! 		if (!multiassign)
! 		{
! 			/* Assign current element/slice to the loop variable */
! 			exec_assign_value(estate, loop_var, value, isnull,
! 							  iterator_result_type, iterator_result_typmod);
  
! 			/* In slice case, value is temporary; must free it to avoid leakage */
! 			if (stmt->slice > 0)
! 				pfree(DatumGetPointer(value));
! 		}
! 		else
! 		{
! 			int	i;
! 			bool	first = true;
! 			PLpgSQL_row *row = (PLpgSQL_row *) loop_var;
! 
! 			for (i = 0; i < row->nfields; i++)
! 			{
! 				int		varno = row->varnos[i];
! 
! 				if (varno != -1)
! 				{
! 					PLpgSQL_datum *var = (PLpgSQL_datum *) (estate->datums[varno]);
! 
! 					if (!first)
! 					{
! 						if (!array_iterate(array_iterator, &value, &isnull))
! 							ereport(ERROR,
! 									(errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
! 								   errmsg("array is not well sized, missing data")));
! 					}
! 					else
! 						first = false;
! 
! 					exec_assign_value(estate, var, value, isnull,
! 								  iterator_result_type, iterator_result_typmod);
! 				}
! 			}
! 		}
  
  		/*
  		 * Execute the statements
diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out
new file mode 100644
index 78e5a85..92d448d
*** a/src/test/regress/expected/plpgsql.out
--- b/src/test/regress/expected/plpgsql.out
*************** NOTICE:  {"(35,78)","(88,76)"}
*** 5127,5132 ****
--- 5127,5185 ----
  
  drop function foreach_test(anyarray);
  drop type xy_tuple;
+ -- multiassign (key,value) tests
+ create or replace function foreach_test_ab(anyarray)
+ returns void as $$
+ declare
+   a text; b text;
+ begin
+   foreach a,b in array $1
+   loop
+     raise notice 'a: %, b: %', a, b;
+   end loop;
+ end
+ $$ language plpgsql;
+ select foreach_test_ab(array[1,2,3,4]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4
+  foreach_test_ab 
+ -----------------
+  
+ (1 row)
+ 
+ select foreach_test_ab(array[[1,2],[3,4]]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4
+  foreach_test_ab 
+ -----------------
+  
+ (1 row)
+ 
+ select foreach_test_ab(array[[1,2,3]]);
+ NOTICE:  a: 1, b: 2
+ ERROR:  array is not well sized, missing data
+ CONTEXT:  PL/pgSQL function foreach_test_ab(anyarray) line 5 at FOREACH over array
+ select foreach_test_ab('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]);
+ NOTICE:  a: 1, b: 2
+ NOTICE:  a: 3, b: 4
+ NOTICE:  a: 5, b: 6
+ NOTICE:  a: 7, b: 8
+  foreach_test_ab 
+ -----------------
+  
+ (1 row)
+ 
+ select foreach_test_ab(array[null,null, 1,null, 1,1, null,1]);
+ NOTICE:  a: <NULL>, b: <NULL>
+ NOTICE:  a: 1, b: <NULL>
+ NOTICE:  a: 1, b: 1
+ NOTICE:  a: <NULL>, b: 1
+  foreach_test_ab 
+ -----------------
+  
+ (1 row)
+ 
+ drop function foreach_test_ab(anyarray);
  --
  -- Assorted tests for array subscript assignment
  --
diff --git a/src/test/regress/expected/rowtypes.out b/src/test/regress/expected/rowtypes.out
new file mode 100644
index 54525de..0e249be
*** a/src/test/regress/expected/rowtypes.out
--- b/src/test/regress/expected/rowtypes.out
*************** select row_to_json(r) from (select q2,q1
*** 634,636 ****
--- 634,644 ----
   {"q2":0,"q1":0}
  (3 rows)
  
+ select row_to_array(r) from (select q2,q1 from tt1 offset 0) r;
+          row_to_array         
+ ------------------------------
+  {q2,456,q1,123}
+  {q2,4567890123456789,q1,123}
+  {q2,0,q1,0}
+ (3 rows)
+ 
diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql
new file mode 100644
index e19e415..7640f5d
*** a/src/test/regress/sql/plpgsql.sql
--- b/src/test/regress/sql/plpgsql.sql
*************** select foreach_test(ARRAY[[(10,20),(40,6
*** 4075,4080 ****
--- 4075,4101 ----
  drop function foreach_test(anyarray);
  drop type xy_tuple;
  
+ -- multiassign (key,value) tests
+ create or replace function foreach_test_ab(anyarray)
+ returns void as $$
+ declare
+   a text; b text;
+ begin
+   foreach a,b in array $1
+   loop
+     raise notice 'a: %, b: %', a, b;
+   end loop;
+ end
+ $$ language plpgsql;
+ 
+ select foreach_test_ab(array[1,2,3,4]);
+ select foreach_test_ab(array[[1,2],[3,4]]);
+ select foreach_test_ab(array[[1,2,3]]);
+ select foreach_test_ab('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]);
+ select foreach_test_ab(array[null,null, 1,null, 1,1, null,1]);
+ 
+ drop function foreach_test_ab(anyarray);
+ 
  --
  -- Assorted tests for array subscript assignment
  --
diff --git a/src/test/regress/sql/rowtypes.sql b/src/test/regress/sql/rowtypes.sql
new file mode 100644
index bc3f021..3450417
*** a/src/test/regress/sql/rowtypes.sql
--- b/src/test/regress/sql/rowtypes.sql
*************** create temp table tt1 as select * from i
*** 271,273 ****
--- 271,274 ----
  create temp table tt2 () inherits(tt1);
  insert into tt2 values(0,0);
  select row_to_json(r) from (select q2,q1 from tt1 offset 0) r;
+ select row_to_array(r) from (select q2,q1 from tt1 offset 0) r;
#22Tom Lane
tgl@sss.pgh.pa.us
In reply to: Pavel Stehule (#21)
Re: proposal: row_to_array function

Pavel Stehule <pavel.stehule@gmail.com> writes:

here is rebased patch.
It contains both patches - row_to_array function and foreach array support.

While I don't have a problem with hstore_to_array, I don't think that
row_to_array is a very good idea; it's basically encouraging people to
throw away SQL datatypes altogether and imagine that everything is text.
They've already bought into that concept if they are using hstore or
json, so smashing elements of those containers to text is not a problem.
But that doesn't make this version a good thing.

(In any case, those who insist can get there through row_to_json, no?)

Also, could we please *not* mix up these two very independent features?
"foreach array" as implemented here may or may not be a good thing, but
it should get its own discussion.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#23Pavel Stehule
pavel.stehule@gmail.com
In reply to: Tom Lane (#22)
Re: proposal: row_to_array function

2015-03-29 20:27 GMT+02:00 Tom Lane <tgl@sss.pgh.pa.us>:

Pavel Stehule <pavel.stehule@gmail.com> writes:

here is rebased patch.
It contains both patches - row_to_array function and foreach array support.

While I don't have a problem with hstore_to_array, I don't think that
row_to_array is a very good idea; it's basically encouraging people to
throw away SQL datatypes altogether and imagine that everything is text.

This complements the ARRAY API - we have row_to_json, and will probably
have row_to_jsonb and row_to_hstore, so "row_to_array" is relatively logical.
Casting to text is not fast, but on the other hand, working with text arrays
is fast.

I know that casting to text is a problem, but if you iterate over a record's
fields, then you have to find a common shared type because of plan sharing -
and text arrays can be a simple solution.

Now, with current possibilities, I have to use a full SQL expression, SELECT key,
value FROM each(hstore(ROW)), or FOREACH ... ARRAY hstore_to_matrix(hstore(ROW)).

row_to_array(ROW) can reduce the hstore overhead.

Any other solution based on PL/Perl or PL/Python is slower due to PL engine
startup and the same transformation to some form of structured text.
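
For illustration, the workaround with today's tools ends up looking roughly
like this (a minimal sketch only, not part of the patch; demo_t is just a
throwaway table for the example and it assumes the hstore extension is
installed):

-- sketch: iterate over a row's fields by going through hstore (works today)
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE demo_t (id int, label text, note text);
INSERT INTO demo_t VALUES (1, 'abc', NULL);

DO $$
DECLARE
  r  demo_t;
  kv record;
BEGIN
  SELECT * INTO r FROM demo_t LIMIT 1;

  -- the extra hop through hstore is the overhead row_to_array would avoid
  FOR kv IN SELECT key, value FROM each(hstore(r))
  LOOP
    RAISE NOTICE 'key = %, value = %', kv.key, kv.value;
  END LOOP;
END;
$$;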

They've already bought into that concept if they are using hstore or
json, so smashing elements of those containers to text is not a problem.
But that doesn't make this version a good thing.

(In any case, those who insist can get there through row_to_json, no?)

Also, could we please *not* mix up these two very independent features?
"foreach array" as implemented here may or may not be a good thing, but
it should get its own discussion.

ok, I'll send two patches.


regards, tom lane

#24Pavel Stehule
pavel.stehule@gmail.com
In reply to: Pavel Stehule (#23)
2 attachment(s)
Re: proposal: row_to_array function

2015-03-29 21:20 GMT+02:00 Pavel Stehule <pavel.stehule@gmail.com>:

2015-03-29 20:27 GMT+02:00 Tom Lane <tgl@sss.pgh.pa.us>:

Pavel Stehule <pavel.stehule@gmail.com> writes:

here is rebased patch.
It contains both patches - row_to_array function and foreach array support.

While I don't have a problem with hstore_to_array, I don't think that
row_to_array is a very good idea; it's basically encouraging people to
throw away SQL datatypes altogether and imagine that everything is text.

This is complementation of ARRAY API - we have row_to_json, probably will
have row_to_jsonb, row_to_hstore and "row_to_array" is relative logical.
Casting to text is not fast, but on second hand - working with text arrays
is fast.

I know so casting to text is a problem, but if you iterate over record's
fields, then you have to find common shared type due sharing plans - and
text arrays can be simple solution.

Now, with current possibilities I'll do full sql expression SELECT key,
value FROM each(hstore(ROW)) or FOREACH ARRAY hstore_to_matrix(hstore(ROW))

row_to_array(ROW) can reduce a hstore overhead

any other solution based on PL/Perl or PL/Python are slower due PL engine
start and due same transformation to some form of structured text.

They've already bought into that concept if they are using hstore or
json, so smashing elements of those containers to text is not a problem.
But that doesn't make this version a good thing.

(In any case, those who insist can get there through row_to_json, no?)

Also, could we please *not* mix up these two very independent features?
"foreach array" as implemented here may or may not be a good thing, but
it should get its own discussion.

ok, I'll send two patches.

The attachments contain the previous patch separated into two independent patches.

Regards

Pavel


regards, tom lane

Attachments:

row_to_array-20150331-01.patch (text/x-patch; charset=US-ASCII)
commit 0b432fd3a42132ddddd287c4395b13f8a25ab294
Author: Pavel Stehule <pavel.stehule@gooddata.com>
Date:   Tue Mar 31 14:43:27 2015 +0200

    row_to_array

diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c
index a65e18d..1a64d8e 100644
--- a/src/backend/utils/adt/rowtypes.c
+++ b/src/backend/utils/adt/rowtypes.c
@@ -21,6 +21,7 @@
 #include "catalog/pg_type.h"
 #include "funcapi.h"
 #include "libpq/pqformat.h"
+#include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/typcache.h"
@@ -1810,3 +1811,90 @@ btrecordimagecmp(PG_FUNCTION_ARGS)
 {
 	PG_RETURN_INT32(record_image_cmp(fcinfo));
 }
+
+/*
+ * transform any record to array in format [key1, value1, key2, value2 [, ...]]
+ *
+ * This format is compatible with hstore_to_array function
+ */
+Datum
+row_to_array(PG_FUNCTION_ARGS)
+{
+	HeapTupleHeader		rec = PG_GETARG_HEAPTUPLEHEADER(0);
+	TupleDesc		rectupdesc;
+	Oid			rectuptyp;
+	int32			rectuptypmod;
+	HeapTupleData		rectuple;
+	int	ncolumns;
+	Datum 		*recvalues;
+	bool  		*recnulls;
+	ArrayBuildState		*builder;
+	int	i;
+
+	/* Extract type info from the tuple itself */
+	rectuptyp = HeapTupleHeaderGetTypeId(rec);
+	rectuptypmod = HeapTupleHeaderGetTypMod(rec);
+	rectupdesc = lookup_rowtype_tupdesc(rectuptyp, rectuptypmod);
+	ncolumns = rectupdesc->natts;
+
+	/* Build a temporary HeapTuple control structure */
+	rectuple.t_len = HeapTupleHeaderGetDatumLength(rec);
+	ItemPointerSetInvalid(&(rectuple.t_self));
+	rectuple.t_tableOid = InvalidOid;
+	rectuple.t_data = rec;
+
+	recvalues = (Datum *) palloc(ncolumns * sizeof(Datum));
+	recnulls = (bool *) palloc(ncolumns * sizeof(bool));
+
+	/* Break down the tuple into fields */
+	heap_deform_tuple(&rectuple, rectupdesc, recvalues, recnulls);
+
+	/* Prepare target array */
+	builder = initArrayResult(TEXTOID, CurrentMemoryContext, true);
+
+	for (i = 0; i < ncolumns; i++)
+	{
+		Oid	columntyp = rectupdesc->attrs[i]->atttypid;
+		Datum		value;
+		bool		isnull;
+
+		/* Ignore dropped columns */
+		if (rectupdesc->attrs[i]->attisdropped)
+			continue;
+
+		builder = accumArrayResult(builder,
+							CStringGetTextDatum(NameStr(rectupdesc->attrs[i]->attname)),
+							false,
+							TEXTOID,
+							CurrentMemoryContext);
+
+		if (!recnulls[i])
+		{
+			char *outstr;
+			bool		typIsVarlena;
+			Oid		typoutput;
+			FmgrInfo		proc;
+
+			getTypeOutputInfo(columntyp, &typoutput, &typIsVarlena);
+			fmgr_info_cxt(typoutput, &proc, CurrentMemoryContext);
+			outstr = OutputFunctionCall(&proc, recvalues[i]);
+
+			value = CStringGetTextDatum(outstr);
+			isnull = false;
+		}
+		else
+		{
+			value = (Datum) 0;
+			isnull = true;
+		}
+
+		builder = accumArrayResult(builder,
+						    value, isnull,
+						    TEXTOID,
+						    CurrentMemoryContext);
+	}
+
+	ReleaseTupleDesc(rectupdesc);
+
+	PG_RETURN_DATUM(makeArrayResult(builder, CurrentMemoryContext));
+}
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index a96d369..1b4c578 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -891,6 +891,8 @@ DATA(insert OID = 376 (  string_to_array   PGNSP PGUID 12 1 0 0 0 f f f f f f i
 DESCR("split delimited text into text[], with null string");
 DATA(insert OID = 384 (  array_to_string   PGNSP PGUID 12 1 0 0 0 f f f f f f s 3 0 25 "2277 25 25" _null_ _null_ _null_ _null_ array_to_text_null _null_ _null_ _null_ ));
 DESCR("concatenate array elements, using delimiter and null string, into text");
+DATA(insert OID = 4057 (  row_to_array   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 1009 "2249" _null_ _null_ _null_ _null_ row_to_array _null_ _null_ _null_ ));
+DESCR("transform any record to text[]");
 DATA(insert OID = 515 (  array_larger	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 2277 "2277 2277" _null_ _null_ _null_ _null_ array_larger _null_ _null_ _null_ ));
 DESCR("larger of two");
 DATA(insert OID = 516 (  array_smaller	   PGNSP PGUID 12 1 0 0 0 f f f f t f i 2 0 2277 "2277 2277" _null_ _null_ _null_ _null_ array_smaller _null_ _null_ _null_ ));
diff --git a/src/include/utils/builtins.h b/src/include/utils/builtins.h
index 6310641..7aabfe1 100644
--- a/src/include/utils/builtins.h
+++ b/src/include/utils/builtins.h
@@ -668,6 +668,7 @@ extern Datum record_image_gt(PG_FUNCTION_ARGS);
 extern Datum record_image_le(PG_FUNCTION_ARGS);
 extern Datum record_image_ge(PG_FUNCTION_ARGS);
 extern Datum btrecordimagecmp(PG_FUNCTION_ARGS);
+extern Datum row_to_array(PG_FUNCTION_ARGS);
 
 /* ruleutils.c */
 extern bool quote_all_identifiers;
diff --git a/src/test/regress/expected/rowtypes.out b/src/test/regress/expected/rowtypes.out
index 54525de..0e249be 100644
--- a/src/test/regress/expected/rowtypes.out
+++ b/src/test/regress/expected/rowtypes.out
@@ -634,3 +634,11 @@ select row_to_json(r) from (select q2,q1 from tt1 offset 0) r;
  {"q2":0,"q1":0}
 (3 rows)
 
+select row_to_array(r) from (select q2,q1 from tt1 offset 0) r;
+         row_to_array         
+------------------------------
+ {q2,456,q1,123}
+ {q2,4567890123456789,q1,123}
+ {q2,0,q1,0}
+(3 rows)
+
diff --git a/src/test/regress/sql/rowtypes.sql b/src/test/regress/sql/rowtypes.sql
index bc3f021..3450417 100644
--- a/src/test/regress/sql/rowtypes.sql
+++ b/src/test/regress/sql/rowtypes.sql
@@ -271,3 +271,4 @@ create temp table tt1 as select * from int8_tbl limit 2;
 create temp table tt2 () inherits(tt1);
 insert into tt2 values(0,0);
 select row_to_json(r) from (select q2,q1 from tt1 offset 0) r;
+select row_to_array(r) from (select q2,q1 from tt1 offset 0) r;
plpgsql-multiassign-foreach-20150331-01.patch (text/x-patch; charset=US-ASCII)
commit bd8ef3d652f3d4bc44012c6db3a018e2d14cd85f
Author: Pavel Stehule <pavel.stehule@gooddata.com>
Date:   Tue Mar 31 14:46:54 2015 +0200

    plpgsql - multiassing foreach

diff --git a/contrib/hstore/expected/hstore.out b/contrib/hstore/expected/hstore.out
index 9749e45..e44532e 100644
--- a/contrib/hstore/expected/hstore.out
+++ b/contrib/hstore/expected/hstore.out
@@ -1148,6 +1148,22 @@ select %% 'aa=>1, cq=>l, b=>g, fg=>NULL';
  {b,g,aa,1,cq,l,fg,NULL}
 (1 row)
 
+-- fast iteration over keys
+do $$
+declare
+  key text;
+  value text;
+begin
+  foreach key, value in array hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore)
+  loop
+    raise notice 'key: %, value: %', key, value;
+  end loop;
+end;
+$$;
+NOTICE:  key: b, value: g
+NOTICE:  key: aa, value: 1
+NOTICE:  key: cq, value: l
+NOTICE:  key: fg, value: <NULL>
 select hstore_to_matrix('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
         hstore_to_matrix         
 ---------------------------------
diff --git a/contrib/hstore/sql/hstore.sql b/contrib/hstore/sql/hstore.sql
index 5a9e9ee..7b9eb09 100644
--- a/contrib/hstore/sql/hstore.sql
+++ b/contrib/hstore/sql/hstore.sql
@@ -257,6 +257,19 @@ select avals('');
 select hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
 select %% 'aa=>1, cq=>l, b=>g, fg=>NULL';
 
+-- fast iteration over keys
+do $$
+declare
+  key text;
+  value text;
+begin
+  foreach key, value in array hstore_to_array('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore)
+  loop
+    raise notice 'key: %, value: %', key, value;
+  end loop;
+end;
+$$;
+
 select hstore_to_matrix('aa=>1, cq=>l, b=>g, fg=>NULL'::hstore);
 select %# 'aa=>1, cq=>l, b=>g, fg=>NULL';
 
diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
index d36acf6..e4abb97 100644
--- a/doc/src/sgml/plpgsql.sgml
+++ b/doc/src/sgml/plpgsql.sgml
@@ -2505,6 +2505,29 @@ NOTICE:  row = {7,8,9}
 NOTICE:  row = {10,11,12}
 </programlisting>
     </para>
+
+    <para>
+     <literal>FOREACH</> cycle can be used for iteration over record. You
+     need a <xref linkend="hstore"> extension. For this case a clause
+     <literal>SLICE</literal> should not be used. <literal>FOREACH</literal>
+     statements supports list of target variables. When source array is
+     a array of composites, then composite array element is saved to target
+     variables. When the array is a array of scalar values, then target 
+     variables are filled item by item.
+<programlisting>
+CREATE FUNCTION trig_function() RETURNS TRIGGER AS $$
+DECLARE
+  key text; value text;
+BEGIN
+  FOREACH key, value IN ARRAY hstore_to_array(hstore(NEW))
+  LOOP
+    RAISE NOTICE 'key = %, value = %', key, value;
+  END LOOP;
+  RETURN NEW;
+END;
+$$ LANGUAGE plpgsql;
+</programlisting>
+    </para>
    </sect2>
 
    <sect2 id="plpgsql-error-trapping">
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index deefb1f..5c34a03 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -2261,6 +2261,9 @@ exec_stmt_foreach_a(PLpgSQL_execstate *estate, PLpgSQL_stmt_foreach_a *stmt)
 	Datum		value;
 	bool		isnull;
 
+
+	bool		multiassign = false;
+
 	/* get the value of the array expression */
 	value = exec_eval_expr(estate, stmt->expr, &isnull, &arrtype, &arrtypmod);
 	if (isnull)
@@ -2322,6 +2325,21 @@ exec_stmt_foreach_a(PLpgSQL_execstate *estate, PLpgSQL_stmt_foreach_a *stmt)
 				(errcode(ERRCODE_DATATYPE_MISMATCH),
 			  errmsg("FOREACH loop variable must not be of an array type")));
 
+	/*
+	 * it is multiassign? Don't support slicing yet.
+	 */
+	if (loop_var->dtype == PLPGSQL_DTYPE_ROW
+		 && !type_is_rowtype(ARR_ELEMTYPE(arr)))
+	{
+		if (stmt->slice != 0)
+			ereport(ERROR,
+					(errcode(ERRCODE_DATATYPE_MISMATCH),
+				  errmsg("cannot to assign non composite value to composite variable")));
+
+		/* only when target var is composite, SLICE=0 and source is scalar */
+		multiassign = true;
+	}
+
 	/* Create an iterator to step through the array */
 	array_iterator = array_create_iterator(arr, stmt->slice, NULL);
 
@@ -2344,13 +2362,45 @@ exec_stmt_foreach_a(PLpgSQL_execstate *estate, PLpgSQL_stmt_foreach_a *stmt)
 	{
 		found = true;			/* looped at least once */
 
-		/* Assign current element/slice to the loop variable */
-		exec_assign_value(estate, loop_var, value, isnull,
-						  iterator_result_type, iterator_result_typmod);
+		if (!multiassign)
+		{
+			/* Assign current element/slice to the loop variable */
+			exec_assign_value(estate, loop_var, value, isnull,
+							  iterator_result_type, iterator_result_typmod);
+
+			/* In slice case, value is temporary; must free it to avoid leakage */
+			if (stmt->slice > 0)
+				pfree(DatumGetPointer(value));
+		}
+		else
+		{
+			int	i;
+			bool	first = true;
+			PLpgSQL_row *row = (PLpgSQL_row *) loop_var;
+
+			for (i = 0; i < row->nfields; i++)
+			{
+				int		varno = row->varnos[i];
 
-		/* In slice case, value is temporary; must free it to avoid leakage */
-		if (stmt->slice > 0)
-			pfree(DatumGetPointer(value));
+				if (varno != -1)
+				{
+					PLpgSQL_datum *var = (PLpgSQL_datum *) (estate->datums[varno]);
+
+					if (!first)
+					{
+						if (!array_iterate(array_iterator, &value, &isnull))
+							ereport(ERROR,
+									(errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
+								   errmsg("array is not well sized, missing data")));
+					}
+					else
+						first = false;
+
+					exec_assign_value(estate, var, value, isnull,
+								  iterator_result_type, iterator_result_typmod);
+				}
+			}
+		}
 
 		/*
 		 * Execute the statements
diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out
index 78e5a85..92d448d 100644
--- a/src/test/regress/expected/plpgsql.out
+++ b/src/test/regress/expected/plpgsql.out
@@ -5127,6 +5127,59 @@ NOTICE:  {"(35,78)","(88,76)"}
 
 drop function foreach_test(anyarray);
 drop type xy_tuple;
+-- multiassign (key,value) tests
+create or replace function foreach_test_ab(anyarray)
+returns void as $$
+declare
+  a text; b text;
+begin
+  foreach a,b in array $1
+  loop
+    raise notice 'a: %, b: %', a, b;
+  end loop;
+end
+$$ language plpgsql;
+select foreach_test_ab(array[1,2,3,4]);
+NOTICE:  a: 1, b: 2
+NOTICE:  a: 3, b: 4
+ foreach_test_ab 
+-----------------
+ 
+(1 row)
+
+select foreach_test_ab(array[[1,2],[3,4]]);
+NOTICE:  a: 1, b: 2
+NOTICE:  a: 3, b: 4
+ foreach_test_ab 
+-----------------
+ 
+(1 row)
+
+select foreach_test_ab(array[[1,2,3]]);
+NOTICE:  a: 1, b: 2
+ERROR:  array is not well sized, missing data
+CONTEXT:  PL/pgSQL function foreach_test_ab(anyarray) line 5 at FOREACH over array
+select foreach_test_ab('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]);
+NOTICE:  a: 1, b: 2
+NOTICE:  a: 3, b: 4
+NOTICE:  a: 5, b: 6
+NOTICE:  a: 7, b: 8
+ foreach_test_ab 
+-----------------
+ 
+(1 row)
+
+select foreach_test_ab(array[null,null, 1,null, 1,1, null,1]);
+NOTICE:  a: <NULL>, b: <NULL>
+NOTICE:  a: 1, b: <NULL>
+NOTICE:  a: 1, b: 1
+NOTICE:  a: <NULL>, b: 1
+ foreach_test_ab 
+-----------------
+ 
+(1 row)
+
+drop function foreach_test_ab(anyarray);
 --
 -- Assorted tests for array subscript assignment
 --
diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql
index e19e415..7640f5d 100644
--- a/src/test/regress/sql/plpgsql.sql
+++ b/src/test/regress/sql/plpgsql.sql
@@ -4075,6 +4075,27 @@ select foreach_test(ARRAY[[(10,20),(40,69)],[(35,78),(88,76)]]::xy_tuple[]);
 drop function foreach_test(anyarray);
 drop type xy_tuple;
 
+-- multiassign (key,value) tests
+create or replace function foreach_test_ab(anyarray)
+returns void as $$
+declare
+  a text; b text;
+begin
+  foreach a,b in array $1
+  loop
+    raise notice 'a: %, b: %', a, b;
+  end loop;
+end
+$$ language plpgsql;
+
+select foreach_test_ab(array[1,2,3,4]);
+select foreach_test_ab(array[[1,2],[3,4]]);
+select foreach_test_ab(array[[1,2,3]]);
+select foreach_test_ab('{{{1,2},{3,4}},{{5,6},{7,8}}}'::int[]);
+select foreach_test_ab(array[null,null, 1,null, 1,1, null,1]);
+
+drop function foreach_test_ab(anyarray);
+
 --
 -- Assorted tests for array subscript assignment
 --
#25Merlin Moncure
mmoncure@gmail.com
In reply to: Tom Lane (#22)
Re: proposal: row_to_array function

On Sun, Mar 29, 2015 at 1:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Pavel Stehule <pavel.stehule@gmail.com> writes:

here is rebased patch.
It contains both patches - row_to_array function and foreach array support.

While I don't have a problem with hstore_to_array, I don't think that
row_to_array is a very good idea; it's basically encouraging people to
throw away SQL datatypes altogether and imagine that everything is text.
They've already bought into that concept if they are using hstore or
json, so smashing elements of those containers to text is not a problem.
But that doesn't make this version a good thing.

(In any case, those who insist can get there through row_to_json, no?)

You have a point. What does the attached do that to_json does not,
besides completely discarding type information? Our JSON API is pretty
rich and getting richer. For better or ill, we dumped all JSON
support into the already stupendously bloated public namespace and so
it's always available.
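
For instance, something along these lines already works on any row today
(a minimal sketch only; json_each_text flattens every value to text, which
is exactly the information loss being discussed):

-- sketch: per-field key/value pairs from an arbitrary row via JSON
SELECT j.key, j.value
FROM (SELECT 123 AS q1, 456 AS q2) AS r,
     json_each_text(row_to_json(r)) AS j;

--  key | value
-- -----+-------
--  q1  | 123
--  q2  | 456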

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#26Brendan Jurd
direvus@gmail.com
In reply to: Merlin Moncure (#25)
Re: proposal: row_to_array function

On Thu, 2 Apr 2015 at 05:00 Merlin Moncure <mmoncure@gmail.com> wrote:

On Sun, Mar 29, 2015 at 1:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

While I don't have a problem with hstore_to_array, I don't think that
row_to_array is a very good idea; it's basically encouraging people to
throw away SQL datatypes altogether and imagine that everything is text.

...

(In any case, those who insist can get there through row_to_json, no?)

You have a point. What does attached do that to_json does not do
besides completely discard type information?

FWIW, I think row_to_array is nice, and I would make use of it. If you
have a record, and you want to iterate over its fields in a generic way, at
least IMO converting to a text array is an obvious thing to reach for, and
it makes for very clearly intentioned code. While it's true that you could
go through JSON or hstore to achieve much the same thing, it is a bit of a
circumlocution.
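
For what it's worth, with the proposed function plus the multi-target FOREACH
from this thread, the generic case would read roughly like this (a sketch
only - it needs both patches applied, and log_fields is just an invented name
for the example):

-- sketch: generic iteration over a trigger row's columns as text
CREATE OR REPLACE FUNCTION log_fields() RETURNS trigger AS $$
DECLARE
  key   text;
  value text;
BEGIN
  FOREACH key, value IN ARRAY row_to_array(NEW)
  LOOP
    RAISE NOTICE 'column % = %', key, value;
  END LOOP;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;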

I get Tom's point that smashing to text should not be done frivolously, but
there are circumstances when it's a reasonable move. Is it possible that
it might be used unwisely? Yes, but then you could say that about pretty
much everything.

Would it alleviate your concerns at all if the function was named
row_to_text_array, to stress the fact that you are throwing away data types?

If the patch was invasive, I would probably not support it, but from what I
can see it's a pretty cheap add.

Cheers,
BJ

#27Craig Ringer
craig@2ndquadrant.com
In reply to: Merlin Moncure (#25)
Re: proposal: row_to_array function

On 2 April 2015 at 01:59, Merlin Moncure <mmoncure@gmail.com> wrote:

On Sun, Mar 29, 2015 at 1:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Pavel Stehule <pavel.stehule@gmail.com> writes:

here is rebased patch.
It contains both patches - row_to_array function and foreach array support.

While I don't have a problem with hstore_to_array, I don't think that
row_to_array is a very good idea; it's basically encouraging people to
throw away SQL datatypes altogether and imagine that everything is text.
They've already bought into that concept if they are using hstore or
json, so smashing elements of those containers to text is not a problem.
But that doesn't make this version a good thing.

(In any case, those who insist can get there through row_to_json, no?)

You have a point. What does attached do that to_json does not do
besides completely discard type information? Our json api is pretty
rich and getting richer. For better or ill, we dumped all json
support into the already stupendously bloated public namespace and so
it's always available.

I can see plenty of utility for a function like Pavel speaks of, but
I'd personally rather see it as a function that returns table (colname
name, coltype regtype, coltypmod integer, coltextvalue text,
colordinal integer) so it can carry more complete information and
there's no need to worry about foreach(array). The main use of a
function that includes text representations of the values would IMO be
using it from plain SQL, rather than PL/PgSQL, when faced with
anonymous records.

I'd find it more useful to have lvalue-expressions for dynamic access
to record fields and a function to get record metadata - field names,
types and typmods. Some kind of "pg_get_record_info(record) returns
table(fieldname text, fieldtype regtype, fieldtypmod integer)" and a
PL/PgSQL lvalue-expression for record field access like
"RECORD_FIELD(therecord, fieldname)". I would _certainly_ want to be
able to get the type metadata without the values.

That way you could interact natively with the fields in their true
types, without forcing conversion into and out of 'text', which is a
known performance pain-point with PL/PgSQL. (PL/PgSQL doesn't have a
VARIANT type or support for using 'anyelement', which would be the
other way to solve the type flattening problem IMO).

Think:

DECLARE
  myrow record;
  fi record;
BEGIN
  EXECUTE user_supplied_dynamic_query INTO myrow;
  FOR fi IN
    SELECT fieldname, fieldtype, fieldtypmod
    FROM pg_get_record_info(myrow)
  LOOP
    IF fi.fieldtype = 'int4'::regtype THEN
      RECORD_FIELD(myrow, fi.fieldname) := RECORD_FIELD(myrow, fi.fieldname) + 1;
    END IF;
  END LOOP;
END;

OK, so it's a stupid example - increment all int4 fields by one. It
conveys the rough idea though - native use of the field types.

Note that RECORD_FIELD is distinct from the existing support for

EXECUTE format('SELECT $1.%I', fieldname) USING therecord;

in that that approach doesn't work for all ways that a record can be
produced, it's slow, it doesn't have a good way to enumerate field
names, and there's no equivalent to write to the field. Current
approaches for that are ghastly:
http://stackoverflow.com/q/7711432/398670 .

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#28Pavel Stehule
pavel.stehule@gmail.com
In reply to: Craig Ringer (#27)
Re: proposal: row_to_array function

Hi

2015-06-22 5:18 GMT+02:00 Craig Ringer <craig@2ndquadrant.com>:

On 2 April 2015 at 01:59, Merlin Moncure <mmoncure@gmail.com> wrote:

On Sun, Mar 29, 2015 at 1:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Pavel Stehule <pavel.stehule@gmail.com> writes:

here is rebased patch.
It contains both patches - row_to_array function and foreach array support.

While I don't have a problem with hstore_to_array, I don't think that
row_to_array is a very good idea; it's basically encouraging people to
throw away SQL datatypes altogether and imagine that everything is text.
They've already bought into that concept if they are using hstore or
json, so smashing elements of those containers to text is not a problem.
But that doesn't make this version a good thing.

(In any case, those who insist can get there through row_to_json, no?)

You have a point. What does attached do that to_json does not do
besides completely discard type information? Our json api is pretty
rich and getting richer. For better or ill, we dumped all json
support into the already stupendously bloated public namespace and so
it's always available.

I can see plenty of utility for a function like Pavel speaks of, but
I'd personally rather see it as a function that returns table (colname
name, coltype regtype, coltypmod integer, coltextvalue text,
colordinal integer) so it can carry more complete information and
there's no need to worry about foreach(array). The main use of a
function that includes text representations of the values would IMO be
using it from plain SQL, rather than PL/PgSQL, when faced with
anonymous records.

I'd find it more useful to have lvalue-expressions for dynamic access
to record fields and a function to get record metadata - field names,
types and typmods. Some kind of "pg_get_record_info(record) returns
table(fieldname text, fieldtype regtype, fieldtypmod integer)" and a
PL/PgSQL lvalue-expression for record field access like
"RECORD_FIELD(therecord, fieldname)". I would _certainly_ want to be
able to get the type metadata without the values.

That way you could interact natively with the fields in their true
types, without forcing conversion into and out of 'text', which is a
known performance pain-point with PL/PgSQL. (PL/PgSQL doesn't have a
VARIANT type or support for using 'anyelement', which would be the
other way to solve the type flattening problem IMO).

Think:

DECLARE
myrow record;
fi record;
BEGIN
EXECUTE user_supplied_dynamic_query INTO myrow;
FOR fi IN
SELECT fieldname, fieldtype, fieldtypmod
FROM pg_get_record_info(myrow)
LOOP
IF fi.fieldtype == 'int4'::regtype THEN
RECORD_FIELD(myrow, fi.fieldname) := RECORD_FIELD(myrow,
fi.fieldname) + 1;
END IF;
END LOOP;
END;

I think this is a separate task, and one that cannot be solved simply
either. I wrote a set of functions for working with records (
https://github.com/okbob/pltoolbox/blob/master/record.c), but it doesn't
solve the basic issues:

1. speed - FOR IN SELECT FROM is more expensive than just unpacking a row or
record
2. the unclean game of creating more code paths for every special type.

I have a slightly different idea. FOREACH ... IN RECORD could change the type of
an automatic variable in every iteration. Internally we can do more code paths -
so your code can be rewritten to

FOREACH key, val IN RECORD myrow
LOOP
  IF pg_typeof(val) IN ('int4', 'double precision', 'numeric') THEN
    val := val + 1; -- these variables can be mutable
    -- or maybe in future
    myrow[key] := val + 1;
  END IF;
END LOOP;

What is important - "val" is an automatic variable, and it can have a different
type in each step.

It is a little bit strange, but otherwise impossible to solve, so we cannot
support row[var] as a right-hand value (without immutable casting). But we can
do it as a left-hand value.


OK, so it's a stupid example - increment all int4 fields by one. It
conveys the rough idea though - native use of the field types.

Note that RECORD_FIELD is distinct from the existing support for

EXECUTE format('SELECT $1.%I', fieldname) USING therecord;

in that that approach doesn't work for all ways that a record can be
produced, it's slow, it doesn't have a good way to enumerate field
names, and there's no equivalent to write to the field. Current
approaches for that are ghastly:
http://stackoverflow.com/q/7711432/398670 .

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#29Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Pavel Stehule (#28)
Re: proposal: row_to_array function

On 6/22/15 2:46 AM, Pavel Stehule wrote:

FOREACH key, val IN RECORD myrow
LOOP
IF pg_typeof(val) IN ('int4', 'double precision', 'numeric') THEN
val := val + 1; -- these variables can be mutable
-- or maybe in futore
myrow[key] := val + 1;
END IF;
END LOOP;

What is important - "val" is automatic variable, and it can has
different type in any step.

It is little bit strange, but impossible to solve, so we cannot to
support row[var] as right value (without immutable casting). But we can
do it with left value.

Actually, you can (theoretically) solve it for the right value as well,
if val is an actual type and you have operators on that type that
know to search for a specific operator given the actual types that are
involved. So if val is int4, val + 1 becomes int4 + int4.

The problem I've run into with this is by the time you've added enough
casts to make this workable you've probably created a situation where
val + something is going to recurse back to itself. I've partially
solved this in [1], and intend to finish it by calling back in via SPI
to do the final resolution, the same way the RI triggers do.

What would be a lot better is if we had better control over function and
operator resolution.

[1]: https://github.com/decibel/variant/commit/2b99067744a405da8a325de1ebabd213106f794f#diff-8aa2db4a577ee4201d6eb9041c2a457eR846
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#30Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jim Nasby (#29)
Re: proposal: row_to_array function

2015-06-23 1:56 GMT+02:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 6/22/15 2:46 AM, Pavel Stehule wrote:

FOREACH key, val IN RECORD myrow
LOOP
IF pg_typeof(val) IN ('int4', 'double precision', 'numeric') THEN
val := val + 1; -- these variables can be mutable
-- or maybe in futore
myrow[key] := val + 1;
END IF;
END LOOP;

What is important - "val" is automatic variable, and it can has
different type in any step.

It is little bit strange, but impossible to solve, so we cannot to
support row[var] as right value (without immutable casting). But we can
do it with left value.

Actually, you can (theoretically) solve it for the right value as well
with if val is an actual type and you have operators on that type that know
to search for a specific operator given the actual types that are involved.
So if val is int4, val + 1 becomes int4 + int4.

The problem I've run into with this is by the time you've added enough
casts to make this workable you've probably created a situation where val +
something is going to recurse back to itself. I've partially solved this in
[1], and intend to finish it by calling back in via SPI to do the final
resolution, the same way the RI triggers do.

What would be a lot better is if we had better control over function and
operator resolution.

[1]
https://github.com/decibel/variant/commit/2b99067744a405da8a325de1ebabd213106f794f#diff-8aa2db4a577ee4201d6eb9041c2a457eR846

The solution with dynamic operators changes the philosophy by about 180° - and I
am afraid about performance.

Now, when I think about the possibilities - it is probably solvable on the right
side too. It needs two steps:

1. a parametrized record reference syntax - something like SELECT $1[$]
2. the possibility to throw away the plan cache if the result has a different
type than the cache expects.

Pavel


--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX

Data in Trouble? Get it in Treble! http://BlueTreble.com

#31Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Pavel Stehule (#30)
Re: proposal: row_to_array function

On 6/23/15 9:45 AM, Pavel Stehule wrote:

2015-06-23 1:56 GMT+02:00 Jim Nasby <Jim.Nasby@bluetreble.com
<mailto:Jim.Nasby@bluetreble.com>>:

On 6/22/15 2:46 AM, Pavel Stehule wrote:

FOREACH key, val IN RECORD myrow
LOOP
IF pg_typeof(val) IN ('int4', 'double precision', 'numeric')
THEN
val := val + 1; -- these variables can be mutable
-- or maybe in futore
myrow[key] := val + 1;
END IF;
END LOOP;

What is important - "val" is automatic variable, and it can has
different type in any step.

It is little bit strange, but impossible to solve, so we cannot to
support row[var] as right value (without immutable casting). But
we can
do it with left value.

Actually, you can (theoretically) solve it for the right value as
well with if val is an actual type and you have operators on that
type that know to search for a specific operator given the actual
types that are involved. So if val is int4, val + 1 becomes int4 + int4.

The problem I've run into with this is by the time you've added
enough casts to make this workable you've probably created a
situation where val + something is going to recurse back to itself.
I've partially solved this in [1], and intend to finish it by
calling back in via SPI to do the final resolution, the same way the
RI triggers do.

What would be a lot better is if we had better control over function
and operator resolution.

[1]
https://github.com/decibel/variant/commit/2b99067744a405da8a325de1ebabd213106f794f#diff-8aa2db4a577ee4201d6eb9041c2a457eR846

The solution of dynamic operators changes philosophy about 180° - and I
afraid about a performance.

Now if I am thinking about possibilities - probably it is solvable on
right side too. It needs to solve two steps:

1. parametrized record reference syntax - some like SELECT $1[$]
2. possibility to throw plan cache, if result has different type than is
expected in cache.

Well, the other option is we allow for cases where we don't know in
advance what the type will be. That would handle this, JSON, variant,
and possibly some other scenarios.

BTW, I think this relates to the desire to be able to do more OO-ish
things in the database. Like "do X to all elements in this array". And
to have actual classes, private members, real arrays of arrays. It seems
like there's a bigger need here that's only being addressed piecemeal. :/
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#32Merlin Moncure
mmoncure@gmail.com
In reply to: Jim Nasby (#31)
Re: proposal: row_to_array function

On Tue, Jun 23, 2015 at 2:57 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

On 6/23/15 9:45 AM, Pavel Stehule wrote:

1. parametrized record reference syntax - some like SELECT $1[$]
2. possibility to throw plan cache, if result has different type than is
expected in cache.

Well, the other option is we allow for cases where we don't know in advance
what the type will be. That would handle this, JSON, variant, and possibly
some other scenarios.

BTW, I think this relates to the desire to be able to do more OO-ish things
in the database. Like "do X to all elements in this array". And to have
actual classes, private members, real arrays of arrays. It seems like
there's a bigger need here that's only being addressed piecemeal. :/

I would rephrase that to: "do X to all fields of an object".
Array handling is pretty good now (minus arrays of arrays, but arrays
of objects containing arrays are 'good enough' for most real world
cases). We've suffered for a while now with hstore/json as a
temporary container to handle operations that are not well supported
by postgres's particularly strongly typed flavor of SQL. The "OO" of
postgres has been gradually diluting away; it's not an 'object
relational' database anymore and the OO features, very much a product
of the silly 90's OO hysteria, have been recast into more useful
features like inheritance and/or pruned back.

I don't mind having to push everything to jsonb and back for tuple
manipulation and I expect that's how these types of things are going
to be done moving forward. jsonb has clearly caught a bid judging by
what I'm reading in the blogosphere and will continue to accrete
features like this.

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#33Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jim Nasby (#31)
Re: proposal: row_to_array function

2015-06-23 21:57 GMT+02:00 Jim Nasby <Jim.Nasby@bluetreble.com>:

On 6/23/15 9:45 AM, Pavel Stehule wrote:

2015-06-23 1:56 GMT+02:00 Jim Nasby <Jim.Nasby@bluetreble.com
<mailto:Jim.Nasby@bluetreble.com>>:

On 6/22/15 2:46 AM, Pavel Stehule wrote:

FOREACH key, val IN RECORD myrow
LOOP
IF pg_typeof(val) IN ('int4', 'double precision', 'numeric')
THEN
val := val + 1; -- these variables can be mutable
-- or maybe in futore
myrow[key] := val + 1;
END IF;
END LOOP;

What is important - "val" is automatic variable, and it can has
different type in any step.

It is little bit strange, but impossible to solve, so we cannot to
support row[var] as right value (without immutable casting). But
we can
do it with left value.

Actually, you can (theoretically) solve it for the right value as
well with if val is an actual type and you have operators on that
type that know to search for a specific operator given the actual
types that are involved. So if val is int4, val + 1 becomes int4 +
int4.

The problem I've run into with this is by the time you've added
enough casts to make this workable you've probably created a
situation where val + something is going to recurse back to itself.
I've partially solved this in [1], and intend to finish it by
calling back in via SPI to do the final resolution, the same way the
RI triggers do.

What would be a lot better is if we had better control over function
and operator resolution.

[1]

https://github.com/decibel/variant/commit/2b99067744a405da8a325de1ebabd213106f794f#diff-8aa2db4a577ee4201d6eb9041c2a457eR846

The solution of dynamic operators changes philosophy about 180° - and I
afraid about a performance.

Now if I am thinking about possibilities - probably it is solvable on
right side too. It needs to solve two steps:

1. parametrized record reference syntax - some like SELECT $1[$]
2. possibility to throw plan cache, if result has different type than is
expected in cache.

Well, the other option is we allow for cases where we don't know in
advance what the type will be. That would handle this, JSON, variant, and
possibly some other scenarios.

BTW, I think this relates to the desire to be able to do more OO-ish
things in the database. Like "do X to all elements in this array". And to
have actual classes, private members, real arrays of arrays. It seems like
there's a bigger need here that's only being addressed piecemeal. :/

I would not open this box - and I would not throw away or redesign almost
all of PostgreSQL's type handling system. I am sure it is not necessary. A PL
can be relatively static if the dynamic part is covered by the query language.
The few features can be implemented without the necessity of redesigning
everything. Still, there are other PLs - and we are not forced to design a new
Perl, JavaScript, ...


--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com

#34Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Merlin Moncure (#32)
Re: proposal: row_to_array function

On 6/23/15 3:22 PM, Merlin Moncure wrote:

I would rephrase that to: "do X to all fields of an object".
Array handling is pretty good now (minus arrays of arrays, but arrays

Except that still won't make it easy to do something to each element of
an array in SQL, which I think would be nice to have.

of objects containing arrays is 'good enough' for most real world
cases). We've suffered for a while now with hstore/json as a
temporary container to handle operations that are not well supported
by postgres's particularly strongly typed flavor SQL. The "OO" of
postgres has been gradually diluting away; it's not a 'object
relational' database anymore and the OO features, very much a product
of the silly 90's OO hysteria, have been recast into more useful
features like inheritance and/or pruned back.

Admittedly I've never played with an OO database, but I think our data
features are pretty good [1]. Where I do think we can improve though is
developing/coding things in the database. For example, I'd love to have
the equivalent to a class. Perhaps that could be accomplished by
allowing multiple instances of an extension. I'd also like stronger
support for private objects (permissions don't really fit that bill).

I don't mind having to push everything to jsonb and back for tuple
manipulation and I expect that's how these types of things are going
to be done moving forwards. jsonb has clearly caught a bid judging by
what I'm reading in the blogosphere and will continue to accrete
features things like this.

I think it's unfortunate to lose the strong typing that we have. That
can be especially important for something like numbers (was it
originally a float or a numeric?). But maybe JSON is good enough.

[1]: The one OO-ish data feature I'd like is the ability to de-reference a foreign key "pointer". So if
a foreign key "pointer". So if

CREATE TABLE b( a_id int REFERENCES a);

then have

SELECT a_id.some_field FROM b;

transform to

SELECT a.some_field FROM b JOIN a ...;
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#35Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Pavel Stehule (#33)
Re: proposal: row_to_array function

On 6/23/15 3:40 PM, Pavel Stehule wrote:

BTW, I think this relates to the desire to be able to do more OO-ish
things in the database. Like "do X to all elements in this array".
And to have actual classes, private members, real arrays of arrays.
It seems like there's a bigger need here that's only being addressed
piecemeal. :/

I would not to open this box - and I would not to throw or redesign
almost all PostgreSQL type handling system. I am sure, so it is not
necessary. PL can be relative static if the dynamic is covered by query
language. The few features can implemented without to necessity to
redesign all. Still there are other PL - and we have not force to design
new Perl, JavaScript, ...

By that argument why are we putting it into plpgsql either? You can
easily do the stuff we've been talking about in plperl (and presumably
most other pl's). So why mess around with adding it to plpgsql?

More importantly, these are things that would be extremely useful at the
SQL level. When it comes to records for example, we frequently know
exactly what's in them, so why do we force users to statically specify
that at the SQL level? This is why we don't support pivot tables (which
in the BI world is a Big Deal).

I think it's a mistake to try and solve this strictly through plpgsql
without recognizing the larger desire and trying to move the ball that
direction. I'm not saying a first effort should boil the ocean, but if
we keep piecemealing this without more thought we're going to keep
getting more warts (like a lot of the gotchas we have with arrays).
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#36Merlin Moncure
mmoncure@gmail.com
In reply to: Jim Nasby (#34)
Re: proposal: row_to_array function

On Tue, Jun 23, 2015 at 3:45 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

On 6/23/15 3:22 PM, Merlin Moncure wrote:

I would rephrase that to: "do X to all fields of an object".
Array handling is pretty good now (minus arrays of arrays, but arrays

Except that still won't make it easy to do something to each element of an
array in SQL, which I think would be nice to have.

Maybe, or maybe we're framing the problem incorrectly. To me, it's
not really all that difficult to do:
select foo(x) from unnest(bar) x;

Unless you have to maintain state inside of foo(), in which case I'd
probably use the highly underutilized 'window over custom aggregate'
technique.
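
For example (a minimal sketch; last_non_null is an invented aggregate, used
here as a window function so that state can be carried across the unnested
elements):

-- sketch: a custom aggregate used as a window function over unnest()
CREATE FUNCTION last_non_null_sfunc(state text, value text)
RETURNS text
LANGUAGE sql IMMUTABLE
AS $$ SELECT coalesce(value, state) $$;

CREATE AGGREGATE last_non_null(text) (
  SFUNC = last_non_null_sfunc,
  STYPE = text
);

-- carry the last non-null element forward while walking the array
SELECT x.ord, x.elem,
       last_non_null(x.elem) OVER (ORDER BY x.ord) AS filled
FROM unnest(ARRAY['a', NULL, 'b', NULL]) WITH ORDINALITY AS x(elem, ord);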

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers