use of dblink_build_sql_insert() induces a server crash

Started by Rushabh Lathia · almost 16 years ago · 5 messages
#1Rushabh Lathia
rushabh.lathia@gmail.com
1 attachment(s)

Hi All,

Testcase:

create table foo (a int );
postgres=# SELECT dblink_build_sql_insert('foo','1 2',2,'{\"0\",
\"a\"}','{\"99\", \"xyz\"}');
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

Version: Latest

Description: The dblink_build_sql_insert()/get_tuple_of_interest() functions
do not check the requested key count against the number of attributes in the
target relation, so an oversized count crashes the server.

PFA patch to fix the same.
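For contrast, a call whose key count matches the one-column table definition returns normally (a sketch; the exact output string is assumed from the function's documented behavior of building an INSERT from the located local tuple):

```sql
CREATE TABLE foo (a int);
-- one column, one key attribute: no crash
SELECT dblink_build_sql_insert('foo', '1', 1, '{"0"}', '{"99"}');
-- expected to build something like: INSERT INTO foo(a) VALUES('99')
```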

Thanks,
Rushabh Lathia
(www.EnterpriseDB.com)

Attachments:

dblink.patch (text/x-diff; charset=US-ASCII)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 276c7e1..a067309 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -2083,6 +2083,11 @@ get_tuple_of_interest(Oid relid, int2vector *pkattnums, int16 pknumatts, char **
 		/* internal error */
 		elog(ERROR, "SPI connect failure - returned %d", ret);
 
+	if (pknumatts > tupdesc->natts)
+		ereport(ERROR,
+				(errcode(ERRCODE_SYNTAX_ERROR),
+				 errmsg("statement has more expressions than specified relation attributes")));
+
 	/*
 	 * Build sql statement to look up tuple of interest Use src_pkattvals as
 	 * the criteria.
#2Joe Conway
mail@joeconway.com
In reply to: Rushabh Lathia (#1)
Re: use of dblink_build_sql_insert() induces a server crash

On 02/03/2010 04:49 AM, Rushabh Lathia wrote:

Testcase:

create table foo (a int );
postgres=# SELECT dblink_build_sql_insert('foo','1 2',2,'{\"0\",
\"a\"}','{\"99\", \"xyz\"}');
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
server closed the connection unexpectedly

Thanks for the report -- will have a look later today.

Joe

#3Joe Conway
mail@joeconway.com
In reply to: Rushabh Lathia (#1)
1 attachment(s)
Re: use of dblink_build_sql_insert() induces a server crash

On 02/03/2010 04:49 AM, Rushabh Lathia wrote:

create table foo (a int );
postgres=# SELECT dblink_build_sql_insert('foo','1 2',2,'{\"0\",
\"a\"}','{\"99\", \"xyz\"}');
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
server closed the connection unexpectedly

The problem exists with all three dblink_build_sql_* functions. Here is
a more complete patch. If there are no objections I'll apply this to
HEAD and look at back-patching -- these functions have hardly been
touched since inception.

Joe

Attachments:

fix-dblink_build_sql.r0.patch (text/x-patch)
Index: dblink.c
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/dblink.c,v
retrieving revision 1.87
diff -c -r1.87 dblink.c
*** dblink.c	24 Jan 2010 22:19:38 -0000	1.87
--- dblink.c	3 Feb 2010 17:45:26 -0000
***************
*** 1255,1260 ****
--- 1255,1262 ----
  	int32		pknumatts_tmp = PG_GETARG_INT32(2);
  	ArrayType  *src_pkattvals_arry = PG_GETARG_ARRAYTYPE_P(3);
  	ArrayType  *tgt_pkattvals_arry = PG_GETARG_ARRAYTYPE_P(4);
+ 	TupleDesc	tupdesc;
+ 	Relation	rel;
  	Oid			relid;
  	int16		pknumatts = 0;
  	char	  **src_pkattvals;
***************
*** 1290,1295 ****
--- 1292,1308 ----
  						"attributes too large")));
  
  	/*
+ 	 * Open relation using relid, ensure we don't ask for
+ 	 * more pk attributes than we have columns
+ 	 */
+ 	rel = relation_open(relid, AccessShareLock);
+ 	tupdesc = CreateTupleDescCopy(rel->rd_att);
+ 	relation_close(rel, AccessShareLock);
+ 	if (pknumatts > tupdesc->natts)
+ 		ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR),
+ 				errmsg("number of primary key fields exceeds number of specified relation attributes")));
+ 
+ 	/*
  	 * Source array is made up of key values that will be used to locate the
  	 * tuple of interest from the local system.
  	 */
***************
*** 1354,1359 ****
--- 1367,1374 ----
  	int2vector *pkattnums = (int2vector *) PG_GETARG_POINTER(1);
  	int32		pknumatts_tmp = PG_GETARG_INT32(2);
  	ArrayType  *tgt_pkattvals_arry = PG_GETARG_ARRAYTYPE_P(3);
+ 	TupleDesc	tupdesc;
+ 	Relation	rel;
  	Oid			relid;
  	int16		pknumatts = 0;
  	char	  **tgt_pkattvals;
***************
*** 1387,1392 ****
--- 1402,1418 ----
  						"attributes too large")));
  
  	/*
+ 	 * Open relation using relid, ensure we don't ask for
+ 	 * more pk attributes than we have columns
+ 	 */
+ 	rel = relation_open(relid, AccessShareLock);
+ 	tupdesc = CreateTupleDescCopy(rel->rd_att);
+ 	relation_close(rel, AccessShareLock);
+ 	if (pknumatts > tupdesc->natts)
+ 		ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR),
+ 				errmsg("number of primary key fields exceeds number of specified relation attributes")));
+ 
+ 	/*
  	 * Target array is made up of key values that will be used to build the
  	 * SQL string for use on the remote system.
  	 */
***************
*** 1441,1446 ****
--- 1467,1474 ----
  	int32		pknumatts_tmp = PG_GETARG_INT32(2);
  	ArrayType  *src_pkattvals_arry = PG_GETARG_ARRAYTYPE_P(3);
  	ArrayType  *tgt_pkattvals_arry = PG_GETARG_ARRAYTYPE_P(4);
+ 	TupleDesc	tupdesc;
+ 	Relation	rel;
  	Oid			relid;
  	int16		pknumatts = 0;
  	char	  **src_pkattvals;
***************
*** 1476,1481 ****
--- 1504,1520 ----
  						"attributes too large")));
  
  	/*
+ 	 * Open relation using relid, ensure we don't ask for
+ 	 * more pk attributes than we have columns
+ 	 */
+ 	rel = relation_open(relid, AccessShareLock);
+ 	tupdesc = CreateTupleDescCopy(rel->rd_att);
+ 	relation_close(rel, AccessShareLock);
+ 	if (pknumatts > tupdesc->natts)
+ 		ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR),
+ 				errmsg("number of primary key fields exceeds number of specified relation attributes")));
+ 
+ 	/*
  	 * Source array is made up of key values that will be used to locate the
  	 * tuple of interest from the local system.
  	 */
Index: expected/dblink.out
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/expected/dblink.out,v
retrieving revision 1.27
diff -c -r1.27 dblink.out
*** expected/dblink.out	22 Nov 2009 05:20:38 -0000	1.27
--- expected/dblink.out	3 Feb 2010 18:01:25 -0000
***************
*** 39,44 ****
--- 39,47 ----
   INSERT INTO foo(f1,f2,f3) VALUES('99','xyz','{a0,b0,c0}')
  (1 row)
  
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_insert('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}','{"99", "xyz", "{za0,zb0,zc0}"}');
+ ERROR:  number of primary key fields exceeds number of specified relation attributes
  -- build an update statement based on a local tuple,
  -- replacing the primary key values with new ones
  SELECT dblink_build_sql_update('foo','1 2',2,'{"0", "a"}','{"99", "xyz"}');
***************
*** 47,52 ****
--- 50,58 ----
   UPDATE foo SET f1 = '99', f2 = 'xyz', f3 = '{a0,b0,c0}' WHERE f1 = '99' AND f2 = 'xyz'
  (1 row)
  
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_update('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}','{"99", "xyz", "{za0,zb0,zc0}"}');
+ ERROR:  number of primary key fields exceeds number of specified relation attributes
  -- build a delete statement based on a local tuple,
  SELECT dblink_build_sql_delete('foo','1 2',2,'{"0", "a"}');
             dblink_build_sql_delete           
***************
*** 54,59 ****
--- 60,68 ----
   DELETE FROM foo WHERE f1 = '0' AND f2 = 'a'
  (1 row)
  
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_delete('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}');
+ ERROR:  number of primary key fields exceeds number of specified relation attributes
  -- retest using a quoted and schema qualified table
  CREATE SCHEMA "MySchema";
  CREATE TABLE "MySchema"."Foo"(f1 int, f2 text, f3 text[], primary key (f1,f2));
Index: sql/dblink.sql
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/sql/dblink.sql,v
retrieving revision 1.22
diff -c -r1.22 dblink.sql
*** sql/dblink.sql	5 Aug 2009 16:11:07 -0000	1.22
--- sql/dblink.sql	3 Feb 2010 17:59:11 -0000
***************
*** 34,46 ****
--- 34,52 ----
  -- build an insert statement based on a local tuple,
  -- replacing the primary key values with new ones
  SELECT dblink_build_sql_insert('foo','1 2',2,'{"0", "a"}','{"99", "xyz"}');
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_insert('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}','{"99", "xyz", "{za0,zb0,zc0}"}');
  
  -- build an update statement based on a local tuple,
  -- replacing the primary key values with new ones
  SELECT dblink_build_sql_update('foo','1 2',2,'{"0", "a"}','{"99", "xyz"}');
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_update('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}','{"99", "xyz", "{za0,zb0,zc0}"}');
  
  -- build a delete statement based on a local tuple,
  SELECT dblink_build_sql_delete('foo','1 2',2,'{"0", "a"}');
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_delete('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}');
  
  -- retest using a quoted and schema qualified table
  CREATE SCHEMA "MySchema";
#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Joe Conway (#3)
Re: use of dblink_build_sql_insert() induces a server crash

Joe Conway <mail@joeconway.com> writes:

The problem exists with all three dblink_build_sql_* functions. Here is
a more complete patch. If there are no objections I'll apply this to
HEAD and look at back-patching -- these functions have hardly been
touched since inception.

Do you really need to copy the relation tupdesc when you only are going
to make a one-time check of the number of attributes? This coding would
make some sense if you intended to use the tupdesc again later in the
functions, but it appears you don't.

Also, what about cases where the relation contains dropped columns ---
it's not obvious whether this test is correct in that case.

regards, tom lane
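The dropped-column concern is visible in the catalogs: a dropped column still occupies a slot in `pg_attribute` (and in `tupdesc->natts`), marked with `attisdropped`, so a raw `natts` comparison over-counts usable columns. A sketch (table and column names are illustrative):

```sql
CREATE TABLE t (a int, b int, c int);
ALTER TABLE t DROP COLUMN b;

-- relnatts still counts the dropped slot:
SELECT relnatts FROM pg_class WHERE relname = 't';   -- 3

-- usable columns exclude it:
SELECT count(*)
FROM pg_attribute
WHERE attrelid = 't'::regclass
  AND attnum > 0
  AND NOT attisdropped;                              -- 2
```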

#5Joe Conway
mail@joeconway.com
In reply to: Tom Lane (#4)
1 attachment(s)
Re: use of dblink_build_sql_insert() induces a server crash

On 02/03/2010 10:18 AM, Tom Lane wrote:

Joe Conway <mail@joeconway.com> writes:

The problem exists with all three dblink_build_sql_* functions. Here is
a more complete patch. If there are no objections I'll apply this to
HEAD and look at back-patching -- these functions have hardly been
touched since inception.

Do you really need to copy the relation tupdesc when you only are going
to make a one-time check of the number of attributes? This coding would
make some sense if you intended to use the tupdesc again later in the
functions, but it appears you don't.

Also, what about cases where the relation contains dropped columns ---
it's not obvious whether this test is correct in that case.

Good input, as always. Here's another whack at it.

Thanks,

Joe

Attachments:

fix-dblink_build_sql.r1.patch (text/x-patch)
Index: dblink.c
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/dblink.c,v
retrieving revision 1.87
diff -c -r1.87 dblink.c
*** dblink.c	24 Jan 2010 22:19:38 -0000	1.87
--- dblink.c	3 Feb 2010 19:23:51 -0000
***************
*** 101,106 ****
--- 101,107 ----
  static void dblink_res_error(const char *conname, PGresult *res, const char *dblink_context_msg, bool fail);
  static char *get_connect_string(const char *servername);
  static char *escape_param_str(const char *from);
+ static int get_nondropped_natts(Oid relid);
  
  /* Global */
  static remoteConn *pconn = NULL;
***************
*** 1262,1267 ****
--- 1263,1269 ----
  	int			src_nitems;
  	int			tgt_nitems;
  	char	   *sql;
+ 	int			nondropped_natts;
  
  	/*
  	 * Convert relname to rel OID.
***************
*** 1290,1295 ****
--- 1292,1306 ----
  						"attributes too large")));
  
  	/*
+ 	 * ensure we don't ask for more pk attributes than we have
+ 	 * non-dropped columns
+ 	 */
+ 	nondropped_natts = get_nondropped_natts(relid);
+ 	if (pknumatts > nondropped_natts)
+ 		ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR),
+ 				errmsg("number of primary key fields exceeds number of specified relation attributes")));
+ 
+ 	/*
  	 * Source array is made up of key values that will be used to locate the
  	 * tuple of interest from the local system.
  	 */
***************
*** 1354,1359 ****
--- 1365,1371 ----
  	int2vector *pkattnums = (int2vector *) PG_GETARG_POINTER(1);
  	int32		pknumatts_tmp = PG_GETARG_INT32(2);
  	ArrayType  *tgt_pkattvals_arry = PG_GETARG_ARRAYTYPE_P(3);
+ 	int			nondropped_natts;
  	Oid			relid;
  	int16		pknumatts = 0;
  	char	  **tgt_pkattvals;
***************
*** 1387,1392 ****
--- 1399,1413 ----
  						"attributes too large")));
  
  	/*
+ 	 * ensure we don't ask for more pk attributes than we have
+ 	 * non-dropped columns
+ 	 */
+ 	nondropped_natts = get_nondropped_natts(relid);
+ 	if (pknumatts > nondropped_natts)
+ 		ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR),
+ 				errmsg("number of primary key fields exceeds number of specified relation attributes")));
+ 
+ 	/*
  	 * Target array is made up of key values that will be used to build the
  	 * SQL string for use on the remote system.
  	 */
***************
*** 1441,1446 ****
--- 1462,1468 ----
  	int32		pknumatts_tmp = PG_GETARG_INT32(2);
  	ArrayType  *src_pkattvals_arry = PG_GETARG_ARRAYTYPE_P(3);
  	ArrayType  *tgt_pkattvals_arry = PG_GETARG_ARRAYTYPE_P(4);
+ 	int			nondropped_natts;
  	Oid			relid;
  	int16		pknumatts = 0;
  	char	  **src_pkattvals;
***************
*** 1476,1481 ****
--- 1498,1512 ----
  						"attributes too large")));
  
  	/*
+ 	 * ensure we don't ask for more pk attributes than we have
+ 	 * non-dropped columns
+ 	 */
+ 	nondropped_natts = get_nondropped_natts(relid);
+ 	if (pknumatts > nondropped_natts)
+ 		ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR),
+ 				errmsg("number of primary key fields exceeds number of specified relation attributes")));
+ 
+ 	/*
  	 * Source array is made up of key values that will be used to locate the
  	 * tuple of interest from the local system.
  	 */
***************
*** 2442,2444 ****
--- 2473,2500 ----
  
  	return buf->data;
  }
+ 
+ static int
+ get_nondropped_natts(Oid relid)
+ {
+ 	int			nondropped_natts = 0;
+ 	TupleDesc	tupdesc;
+ 	Relation	rel;
+ 	int			natts;
+ 	int			i;
+ 
+ 	rel = relation_open(relid, AccessShareLock);
+ 	tupdesc = rel->rd_att;
+ 	natts = tupdesc->natts;
+ 
+ 	for (i = 0; i < natts; i++)
+ 	{
+ 		if (tupdesc->attrs[i]->attisdropped)
+ 			continue;
+ 		nondropped_natts++;
+ 	}
+ 
+ 	relation_close(rel, AccessShareLock);
+ 	return nondropped_natts;
+ }
+ 
Index: expected/dblink.out
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/expected/dblink.out,v
retrieving revision 1.27
diff -c -r1.27 dblink.out
*** expected/dblink.out	22 Nov 2009 05:20:38 -0000	1.27
--- expected/dblink.out	3 Feb 2010 18:01:25 -0000
***************
*** 39,44 ****
--- 39,47 ----
   INSERT INTO foo(f1,f2,f3) VALUES('99','xyz','{a0,b0,c0}')
  (1 row)
  
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_insert('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}','{"99", "xyz", "{za0,zb0,zc0}"}');
+ ERROR:  number of primary key fields exceeds number of specified relation attributes
  -- build an update statement based on a local tuple,
  -- replacing the primary key values with new ones
  SELECT dblink_build_sql_update('foo','1 2',2,'{"0", "a"}','{"99", "xyz"}');
***************
*** 47,52 ****
--- 50,58 ----
   UPDATE foo SET f1 = '99', f2 = 'xyz', f3 = '{a0,b0,c0}' WHERE f1 = '99' AND f2 = 'xyz'
  (1 row)
  
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_update('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}','{"99", "xyz", "{za0,zb0,zc0}"}');
+ ERROR:  number of primary key fields exceeds number of specified relation attributes
  -- build a delete statement based on a local tuple,
  SELECT dblink_build_sql_delete('foo','1 2',2,'{"0", "a"}');
             dblink_build_sql_delete           
***************
*** 54,59 ****
--- 60,68 ----
   DELETE FROM foo WHERE f1 = '0' AND f2 = 'a'
  (1 row)
  
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_delete('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}');
+ ERROR:  number of primary key fields exceeds number of specified relation attributes
  -- retest using a quoted and schema qualified table
  CREATE SCHEMA "MySchema";
  CREATE TABLE "MySchema"."Foo"(f1 int, f2 text, f3 text[], primary key (f1,f2));
Index: sql/dblink.sql
===================================================================
RCS file: /opt/src/cvs/pgsql/contrib/dblink/sql/dblink.sql,v
retrieving revision 1.22
diff -c -r1.22 dblink.sql
*** sql/dblink.sql	5 Aug 2009 16:11:07 -0000	1.22
--- sql/dblink.sql	3 Feb 2010 17:59:11 -0000
***************
*** 34,46 ****
--- 34,52 ----
  -- build an insert statement based on a local tuple,
  -- replacing the primary key values with new ones
  SELECT dblink_build_sql_insert('foo','1 2',2,'{"0", "a"}','{"99", "xyz"}');
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_insert('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}','{"99", "xyz", "{za0,zb0,zc0}"}');
  
  -- build an update statement based on a local tuple,
  -- replacing the primary key values with new ones
  SELECT dblink_build_sql_update('foo','1 2',2,'{"0", "a"}','{"99", "xyz"}');
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_update('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}','{"99", "xyz", "{za0,zb0,zc0}"}');
  
  -- build a delete statement based on a local tuple,
  SELECT dblink_build_sql_delete('foo','1 2',2,'{"0", "a"}');
+ -- too many pk fields, should fail
+ SELECT dblink_build_sql_delete('foo','1 2 3 4',4,'{"0", "a", "{a0,b0,c0}"}');
  
  -- retest using a quoted and schema qualified table
  CREATE SCHEMA "MySchema";