Re: ToDo: allow to get a number of processed rows by COPY statement [Review of Patch]
Hello
Basic stuff:
------------
- Patch applies OK, but with offset differences in line numbers.
- Compiles with errors in the contrib modules [pg_stat_statements, sepgsql].
- Regression failed; one test case in COPY fails due to an incomplete test case in the attached patch – same as reported by Heikki.
A fixed patch is attached.
Details of compilation errors and regression failure:
------------------------------------------------------
PATH : postgres/src/contrib/pg_stat_statements
gcc -g -ggdb3 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv -fpic -I. -I.
-I../../src/include -D_GNU_SOURCE -c -o pg_stat_statements.o
pg_stat_statements.c
pg_stat_statements.c: In function '_PG_init':
pg_stat_statements.c:363: warning: assignment from incompatible pointer type
pg_stat_statements.c: In function 'pgss_ProcessUtility':
pg_stat_statements.c:823: error: too few arguments to function 'prev_ProcessUtility'
pg_stat_statements.c:826: error: too few arguments to function 'standard_ProcessUtility'
pg_stat_statements.c:884: error: too few arguments to function 'prev_ProcessUtility'
pg_stat_statements.c:887: error: too few arguments to function 'standard_ProcessUtility'
make: *** [pg_stat_statements.o] Error 1

PATH : postgres/src/contrib/sepgsql
gcc -g -ggdb3 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv -fpic -I. -I.
-I../../src/include -D_GNU_SOURCE -c -o hooks.o hooks.c
In file included from hooks.c:26:
sepgsql.h:17:29: error: selinux/selinux.h: No such file or directory
sepgsql.h:18:25: error: selinux/avc.h: No such file or directory
In file included from hooks.c:26:
sepgsql.h:239: warning: 'struct av_decision' declared inside parameter list
sepgsql.h:239: warning: its scope is only this definition or declaration, which is probably not what you want
hooks.c: In function 'sepgsql_utility_command':
hooks.c:331: error: too few arguments to function 'next_ProcessUtility_hook'
hooks.c:334: error: too few arguments to function 'standard_ProcessUtility'
hooks.c: In function '_PG_init':
hooks.c:365: warning: implicit declaration of function 'is_selinux_enabled'
hooks.c:426: warning: assignment from incompatible pointer type
make: *** [hooks.o] Error 1

REGRESSION TEST:
------------------
make installcheck
test copy                     ... FAILED
========================
 1 of 132 tests failed.
========================

~/postgres/src/test/regress> cat regression.diffs
*** /home/postgres/code/src/test/regress/expected/copy.out  2012-09-18 19:57:02.000000000 +0530
--- /home/postgres/code/src/test/regress/results/copy.out   2012-09-18 19:57:18.000000000 +0530
***************
*** 71,73 ****
--- 71,80 ----
  c1,"col with , comma","col with "" quote"
  1,a,1
  2,b,2
+ -- copy should to return processed rows
+ do $$
+
+ ERROR:  unterminated dollar-quoted string at or near "$$
+ "
+ LINE 1: do $$
+            ^
fixed
What it does:
--------------
Modification to get the number of processed rows for statements evaluated via SPI. The change adds an extra parameter to ProcessUtility to return the number of rows processed by the COPY command.

Code Review Comments:
---------------------
1. A new parameter is added to the ProcessUtility_hook_type function type, but the functions assigned to this hook, such as sepgsql_utility_command and pgss_ProcessUtility, are not modified (neither prototype nor definition).
2. Why add the new parameter at all, if completionTag already holds the number of processed tuples? It can be extracted from it as follows:
_SPI_current->processed = strtoul(completionTag + 7,
NULL, 10);
This is the basic question. I prefer a natural type for the counter - uint64 instead of text. And there is no simple way to get the offset (7 in this case).
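To illustrate the difference (a standalone sketch for discussion only, not code from the patch): extracting the count from the tag text forces every caller to know the tag-specific prefix length, whereas a uint64 out-parameter needs no parsing at all.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Text approach: every caller must hard-code the right prefix ("COPY " is
 * 5 chars, "SELECT " is 7), which is exactly the offset problem above. */
static unsigned long
rows_from_tag(const char *tag, const char *prefix)
{
    size_t len = strlen(prefix);

    if (tag == NULL || strncmp(tag, prefix, len) != 0)
        return 0;
    return strtoul(tag + len, NULL, 10);
}

int
main(void)
{
    printf("%lu\n", rows_from_tag("COPY 1000", "COPY "));   /* 1000 */
    printf("%lu\n", rows_from_tag("SELECT 42", "SELECT ")); /* 42 */
    return 0;
}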
3. Why is a new variable added to the portal [portal->processed]? It is not used in the code.
This value can be used anywhere instead of parsing completionTag to get the number of returned rows.
Testing:
------------
The following tests were carried out on the patch.
1. The SQL command COPY FROM / TO is tested directly.
2. The SQL command COPY FROM / TO is tested from plpgsql.
Test cases are attached in Testcases_Copy_Processed.txt.
Regards
Pavel
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Attachments:
copy-processed-rows.diff (application/octet-stream)
*** a/contrib/pg_stat_statements/pg_stat_statements.c
--- b/contrib/pg_stat_statements/pg_stat_statements.c
***************
*** 242,248 **** static void pgss_ExecutorEnd(QueryDesc *queryDesc);
static void pgss_ProcessUtility(Node *parsetree,
const char *queryString, ParamListInfo params,
DestReceiver *dest, char *completionTag,
! ProcessUtilityContext context);
static uint32 pgss_hash_fn(const void *key, Size keysize);
static int pgss_match_fn(const void *key1, const void *key2, Size keysize);
static uint32 pgss_hash_string(const char *str);
--- 242,248 ----
static void pgss_ProcessUtility(Node *parsetree,
const char *queryString, ParamListInfo params,
DestReceiver *dest, char *completionTag,
! ProcessUtilityContext context, uint64 *processed);
static uint32 pgss_hash_fn(const void *key, Size keysize);
static int pgss_match_fn(const void *key1, const void *key2, Size keysize);
static uint32 pgss_hash_string(const char *str);
***************
*** 787,793 **** pgss_ExecutorEnd(QueryDesc *queryDesc)
static void
pgss_ProcessUtility(Node *parsetree, const char *queryString,
ParamListInfo params, DestReceiver *dest,
! char *completionTag, ProcessUtilityContext context)
{
/*
* If it's an EXECUTE statement, we don't track it and don't increment the
--- 787,794 ----
static void
pgss_ProcessUtility(Node *parsetree, const char *queryString,
ParamListInfo params, DestReceiver *dest,
! char *completionTag, ProcessUtilityContext context,
! uint64 *processed)
{
/*
* If it's an EXECUTE statement, we don't track it and don't increment the
***************
*** 820,829 **** pgss_ProcessUtility(Node *parsetree, const char *queryString,
{
if (prev_ProcessUtility)
prev_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context);
else
standard_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context);
nested_level--;
}
PG_CATCH();
--- 821,830 ----
{
if (prev_ProcessUtility)
prev_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context, NULL);
else
standard_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context, NULL);
nested_level--;
}
PG_CATCH();
***************
*** 881,890 **** pgss_ProcessUtility(Node *parsetree, const char *queryString,
{
if (prev_ProcessUtility)
prev_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context);
else
standard_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context);
}
}
--- 882,891 ----
{
if (prev_ProcessUtility)
prev_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context, NULL);
else
standard_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context, NULL);
}
}
*** a/contrib/sepgsql/hooks.c
--- b/contrib/sepgsql/hooks.c
***************
*** 267,273 **** sepgsql_utility_command(Node *parsetree,
ParamListInfo params,
DestReceiver *dest,
char *completionTag,
! ProcessUtilityContext context)
{
sepgsql_context_info_t saved_context_info = sepgsql_context_info;
ListCell *cell;
--- 267,274 ----
ParamListInfo params,
DestReceiver *dest,
char *completionTag,
! ProcessUtilityContext context,
! uint64 *processed)
{
sepgsql_context_info_t saved_context_info = sepgsql_context_info;
ListCell *cell;
***************
*** 328,337 **** sepgsql_utility_command(Node *parsetree,
if (next_ProcessUtility_hook)
(*next_ProcessUtility_hook) (parsetree, queryString, params,
! dest, completionTag, context);
else
standard_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context);
}
PG_CATCH();
{
--- 329,338 ----
if (next_ProcessUtility_hook)
(*next_ProcessUtility_hook) (parsetree, queryString, params,
! dest, completionTag, context, NULL);
else
standard_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context, NULL);
}
PG_CATCH();
{
*** a/src/backend/commands/extension.c
--- b/src/backend/commands/extension.c
***************
*** 753,759 **** execute_sql_string(const char *sql, const char *filename)
NULL,
dest,
NULL,
! PROCESS_UTILITY_QUERY);
}
PopActiveSnapshot();
--- 753,760 ----
NULL,
dest,
NULL,
! PROCESS_UTILITY_QUERY,
! NULL);
}
PopActiveSnapshot();
*** a/src/backend/commands/schemacmds.c
--- b/src/backend/commands/schemacmds.c
***************
*** 135,141 **** CreateSchemaCommand(CreateSchemaStmt *stmt, const char *queryString)
NULL,
None_Receiver,
NULL,
! PROCESS_UTILITY_SUBCOMMAND);
/* make sure later steps can see the object created here */
CommandCounterIncrement();
}
--- 135,142 ----
NULL,
None_Receiver,
NULL,
! PROCESS_UTILITY_SUBCOMMAND,
! NULL);
/* make sure later steps can see the object created here */
CommandCounterIncrement();
}
*** a/src/backend/commands/trigger.c
--- b/src/backend/commands/trigger.c
***************
*** 1012,1018 **** ConvertTriggerToFK(CreateTrigStmt *stmt, Oid funcoid)
/* ... and execute it */
ProcessUtility((Node *) atstmt,
"(generated ALTER TABLE ADD FOREIGN KEY command)",
! NULL, None_Receiver, NULL, PROCESS_UTILITY_GENERATED);
/* Remove the matched item from the list */
info_list = list_delete_ptr(info_list, info);
--- 1012,1019 ----
/* ... and execute it */
ProcessUtility((Node *) atstmt,
"(generated ALTER TABLE ADD FOREIGN KEY command)",
! NULL, None_Receiver, NULL, PROCESS_UTILITY_GENERATED,
! NULL);
/* Remove the matched item from the list */
info_list = list_delete_ptr(info_list, info);
*** a/src/backend/executor/functions.c
--- b/src/backend/executor/functions.c
***************
*** 786,792 **** postquel_getnext(execution_state *es, SQLFunctionCachePtr fcache)
es->qd->params,
es->qd->dest,
NULL,
! PROCESS_UTILITY_QUERY);
result = true; /* never stops early */
}
else
--- 786,793 ----
es->qd->params,
es->qd->dest,
NULL,
! PROCESS_UTILITY_QUERY,
! NULL);
result = true; /* never stops early */
}
else
*** a/src/backend/executor/spi.c
--- b/src/backend/executor/spi.c
***************
*** 1909,1921 **** _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
else
{
char completionTag[COMPLETION_TAG_BUFSIZE];
ProcessUtility(stmt,
plansource->query_string,
paramLI,
dest,
completionTag,
! PROCESS_UTILITY_QUERY);
/* Update "processed" if stmt returned tuples */
if (_SPI_current->tuptable)
--- 1909,1925 ----
else
{
char completionTag[COMPLETION_TAG_BUFSIZE];
+ uint64 processed;
ProcessUtility(stmt,
plansource->query_string,
paramLI,
dest,
completionTag,
! PROCESS_UTILITY_QUERY,
! &processed);
!
! _SPI_current->processed = (uint32) processed;
/* Update "processed" if stmt returned tuples */
if (_SPI_current->tuptable)
*** a/src/backend/tcop/pquery.c
--- b/src/backend/tcop/pquery.c
***************
*** 44,53 **** static uint32 RunFromStore(Portal portal, ScanDirection direction, long count,
static long PortalRunSelect(Portal portal, bool forward, long count,
DestReceiver *dest);
static void PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
! DestReceiver *dest, char *completionTag);
static void PortalRunMulti(Portal portal, bool isTopLevel,
DestReceiver *dest, DestReceiver *altdest,
! char *completionTag);
static long DoPortalRunFetch(Portal portal,
FetchDirection fdirection,
long count,
--- 44,54 ----
static long PortalRunSelect(Portal portal, bool forward, long count,
DestReceiver *dest);
static void PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
! DestReceiver *dest, char *completionTag,
! uint64 *processed);
static void PortalRunMulti(Portal portal, bool isTopLevel,
DestReceiver *dest, DestReceiver *altdest,
! char *completionTag, uint64 *processed);
static long DoPortalRunFetch(Portal portal,
FetchDirection fdirection,
long count,
***************
*** 813,819 **** PortalRun(Portal portal, long count, bool isTopLevel,
case PORTAL_MULTI_QUERY:
PortalRunMulti(portal, isTopLevel,
! dest, altdest, completionTag);
/* Prevent portal's commands from being re-executed */
MarkPortalDone(portal);
--- 814,821 ----
case PORTAL_MULTI_QUERY:
PortalRunMulti(portal, isTopLevel,
! dest, altdest, completionTag,
! &portal->processed);
/* Prevent portal's commands from being re-executed */
MarkPortalDone(portal);
***************
*** 1053,1064 **** FillPortalStore(Portal portal, bool isTopLevel)
* tuplestore. Auxiliary query outputs are discarded.
*/
PortalRunMulti(portal, isTopLevel,
! treceiver, None_Receiver, completionTag);
break;
case PORTAL_UTIL_SELECT:
PortalRunUtility(portal, (Node *) linitial(portal->stmts),
! isTopLevel, treceiver, completionTag);
break;
default:
--- 1055,1068 ----
* tuplestore. Auxiliary query outputs are discarded.
*/
PortalRunMulti(portal, isTopLevel,
! treceiver, None_Receiver, completionTag,
! &portal->processed);
break;
case PORTAL_UTIL_SELECT:
PortalRunUtility(portal, (Node *) linitial(portal->stmts),
! isTopLevel, treceiver, completionTag,
! &portal->processed);
break;
default:
***************
*** 1148,1154 **** RunFromStore(Portal portal, ScanDirection direction, long count,
*/
static void
PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
! DestReceiver *dest, char *completionTag)
{
bool active_snapshot_set;
--- 1152,1159 ----
*/
static void
PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
! DestReceiver *dest, char *completionTag,
! uint64 *processed)
{
bool active_snapshot_set;
***************
*** 1189,1195 **** PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
dest,
completionTag,
isTopLevel ?
! PROCESS_UTILITY_TOPLEVEL : PROCESS_UTILITY_QUERY);
/* Some utility statements may change context on us */
MemoryContextSwitchTo(PortalGetHeapMemory(portal));
--- 1194,1201 ----
dest,
completionTag,
isTopLevel ?
! PROCESS_UTILITY_TOPLEVEL : PROCESS_UTILITY_QUERY,
! processed);
/* Some utility statements may change context on us */
MemoryContextSwitchTo(PortalGetHeapMemory(portal));
***************
*** 1213,1219 **** PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
static void
PortalRunMulti(Portal portal, bool isTopLevel,
DestReceiver *dest, DestReceiver *altdest,
! char *completionTag)
{
bool active_snapshot_set = false;
ListCell *stmtlist_item;
--- 1219,1226 ----
static void
PortalRunMulti(Portal portal, bool isTopLevel,
DestReceiver *dest, DestReceiver *altdest,
! char *completionTag,
! uint64 *processed)
{
bool active_snapshot_set = false;
ListCell *stmtlist_item;
***************
*** 1316,1329 **** PortalRunMulti(Portal portal, bool isTopLevel,
Assert(!active_snapshot_set);
/* statement can set tag string */
PortalRunUtility(portal, stmt, isTopLevel,
! dest, completionTag);
}
else
{
Assert(IsA(stmt, NotifyStmt));
/* stmt added by rewrite cannot set tag */
PortalRunUtility(portal, stmt, isTopLevel,
! altdest, NULL);
}
}
--- 1323,1338 ----
Assert(!active_snapshot_set);
/* statement can set tag string */
PortalRunUtility(portal, stmt, isTopLevel,
! dest, completionTag,
! processed);
}
else
{
Assert(IsA(stmt, NotifyStmt));
/* stmt added by rewrite cannot set tag */
PortalRunUtility(portal, stmt, isTopLevel,
! altdest, NULL,
! processed);
}
}
*** a/src/backend/tcop/utility.c
--- b/src/backend/tcop/utility.c
***************
*** 323,329 **** ProcessUtility(Node *parsetree,
ParamListInfo params,
DestReceiver *dest,
char *completionTag,
! ProcessUtilityContext context)
{
Assert(queryString != NULL); /* required as of 8.4 */
--- 323,330 ----
ParamListInfo params,
DestReceiver *dest,
char *completionTag,
! ProcessUtilityContext context,
! uint64 *processed)
{
Assert(queryString != NULL); /* required as of 8.4 */
***************
*** 334,343 **** ProcessUtility(Node *parsetree,
*/
if (ProcessUtility_hook)
(*ProcessUtility_hook) (parsetree, queryString, params,
! dest, completionTag, context);
else
standard_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context);
}
void
--- 335,346 ----
*/
if (ProcessUtility_hook)
(*ProcessUtility_hook) (parsetree, queryString, params,
! dest, completionTag, context,
! processed);
else
standard_ProcessUtility(parsetree, queryString, params,
! dest, completionTag, context,
! processed);
}
void
***************
*** 346,352 **** standard_ProcessUtility(Node *parsetree,
ParamListInfo params,
DestReceiver *dest,
char *completionTag,
! ProcessUtilityContext context)
{
bool isTopLevel = (context == PROCESS_UTILITY_TOPLEVEL);
bool isCompleteQuery = (context <= PROCESS_UTILITY_QUERY);
--- 349,356 ----
ParamListInfo params,
DestReceiver *dest,
char *completionTag,
! ProcessUtilityContext context,
! uint64 *processed)
{
bool isTopLevel = (context == PROCESS_UTILITY_TOPLEVEL);
bool isCompleteQuery = (context <= PROCESS_UTILITY_QUERY);
***************
*** 356,361 **** standard_ProcessUtility(Node *parsetree,
--- 360,368 ----
if (completionTag)
completionTag[0] = '\0';
+ if (processed != NULL)
+ *processed = 0;
+
switch (nodeTag(parsetree))
{
/*
***************
*** 576,582 **** standard_ProcessUtility(Node *parsetree,
params,
None_Receiver,
NULL,
! PROCESS_UTILITY_GENERATED);
}
/* Need CCI between commands */
--- 583,590 ----
params,
None_Receiver,
NULL,
! PROCESS_UTILITY_GENERATED,
! processed);
}
/* Need CCI between commands */
***************
*** 706,717 **** standard_ProcessUtility(Node *parsetree,
case T_CopyStmt:
{
! uint64 processed;
! processed = DoCopy((CopyStmt *) parsetree, queryString);
if (completionTag)
snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! "COPY " UINT64_FORMAT, processed);
}
break;
--- 714,728 ----
case T_CopyStmt:
{
! uint64 processed_rows;
! processed_rows = DoCopy((CopyStmt *) parsetree, queryString);
if (completionTag)
snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! "COPY " UINT64_FORMAT, processed_rows);
!
! if (processed != NULL)
! *processed = processed_rows;
}
break;
***************
*** 813,819 **** standard_ProcessUtility(Node *parsetree,
params,
None_Receiver,
NULL,
! PROCESS_UTILITY_GENERATED);
}
/* Need CCI between commands */
--- 824,831 ----
params,
None_Receiver,
NULL,
! PROCESS_UTILITY_GENERATED,
! processed);
}
/* Need CCI between commands */
*** a/src/include/tcop/utility.h
--- b/src/include/tcop/utility.h
***************
*** 28,42 **** typedef enum
typedef void (*ProcessUtility_hook_type) (Node *parsetree,
const char *queryString, ParamListInfo params,
DestReceiver *dest, char *completionTag,
! ProcessUtilityContext context);
extern PGDLLIMPORT ProcessUtility_hook_type ProcessUtility_hook;
extern void ProcessUtility(Node *parsetree, const char *queryString,
ParamListInfo params, DestReceiver *dest, char *completionTag,
! ProcessUtilityContext context);
extern void standard_ProcessUtility(Node *parsetree, const char *queryString,
ParamListInfo params, DestReceiver *dest,
! char *completionTag, ProcessUtilityContext context);
extern bool UtilityReturnsTuples(Node *parsetree);
--- 28,45 ----
typedef void (*ProcessUtility_hook_type) (Node *parsetree,
const char *queryString, ParamListInfo params,
DestReceiver *dest, char *completionTag,
! ProcessUtilityContext context,
! uint64 *processed);
extern PGDLLIMPORT ProcessUtility_hook_type ProcessUtility_hook;
extern void ProcessUtility(Node *parsetree, const char *queryString,
ParamListInfo params, DestReceiver *dest, char *completionTag,
! ProcessUtilityContext context,
! uint64 *processed);
extern void standard_ProcessUtility(Node *parsetree, const char *queryString,
ParamListInfo params, DestReceiver *dest,
! char *completionTag, ProcessUtilityContext context,
! uint64 *processed);
extern bool UtilityReturnsTuples(Node *parsetree);
*** a/src/include/utils/portal.h
--- b/src/include/utils/portal.h
***************
*** 131,136 **** typedef struct PortalData
--- 131,137 ----
const char *commandTag; /* command tag for original query */
List *stmts; /* PlannedStmts and/or utility statements */
CachedPlan *cplan; /* CachedPlan, if stmts are from one */
+ uint64 processed; /* returned rows */
ParamListInfo portalParams; /* params to pass to query */
*** a/src/test/regress/input/copy.source
--- b/src/test/regress/input/copy.source
***************
*** 106,108 **** this is just a line full of junk that would error out if parsed
--- 106,121 ----
\.
copy copytest3 to stdout csv header;
+
+ do $$
+ declare r int;
+ begin
+ copy copytest2 to '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'exported % rows', r;
+ truncate copytest2;
+ copy copytest2 from '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'imported % rows', r;
+ end;
+ $$ language plpgsql;
*** a/src/test/regress/output/copy.source
--- b/src/test/regress/output/copy.source
***************
*** 71,73 **** copy copytest3 to stdout csv header;
--- 71,87 ----
c1,"col with , comma","col with "" quote"
1,a,1
2,b,2
+ do $$
+ declare r int;
+ begin
+ copy copytest2 to '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'exported % rows', r;
+ truncate copytest2;
+ copy copytest2 from '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'imported % rows', r;
+ end;
+ $$ language plpgsql;
+ NOTICE: exported 4 rows
+ NOTICE: imported 4 rows
On Friday, September 21, 2012 1:23 AM Pavel Stehule wrote:
Basic stuff:
------------
- Patch applies OK, but with offset differences in line numbers.
- Compiles with errors in the contrib modules [pg_stat_statements, sepgsql].
- Regression failed; one test case in COPY due to an incomplete test case in the attached patch – same as reported by Heikki
A fixed patch is attached.
After modifications:
---------------------------
- Patch applies OK
- Compiles cleanly without any errors/warnings
- Regression tests pass.
What it does:
--------------
Modification to get the number of processed rows for statements evaluated via SPI. The change adds an extra parameter to ProcessUtility to return the number of rows processed by the COPY command.

Code Review Comments:
---------------------
1. A new parameter is added to the ProcessUtility_hook_type function type, but the functions assigned to this hook, such as sepgsql_utility_command and pgss_ProcessUtility, are not modified (neither prototype nor definition).
The functionality is not fixed correctly for the hook functions. In the function pgss_ProcessUtility, in the snippet of code below, the processed parameter is passed as NULL and is also never initialized. Because of this, when the "pg_stat_statements" extension is used, the COPY command gives garbage values.
if (prev_ProcessUtility)
prev_ProcessUtility(parsetree, queryString, params,
dest, completionTag, context, NULL);
else
standard_ProcessUtility(parsetree, queryString, params,
dest, completionTag, context, NULL);
A test case is attached.
In this test case the table has only 1000 records, but it shows a garbage value.
postgres=# show shared_preload_libraries ;
shared_preload_libraries
--------------------------
pg_stat_statements
(1 row)
postgres=# CREATE TABLE tbl (a int);
CREATE TABLE
postgres=# INSERT INTO tbl VALUES(generate_series(1,1000));
INSERT 0 1000
postgres=# do $$
declare r int;
begin
copy tbl to '/home/kiran/copytest.csv' csv;
get diagnostics r = row_count;
raise notice 'exported % rows', r;
truncate tbl;
copy tbl from '/home/kiran/copytest.csv' csv;
get diagnostics r = row_count;
raise notice 'imported % rows', r;
end;
$$ language plpgsql;
postgres$#
NOTICE: exported 13281616 rows
NOTICE: imported 13281616 rows
DO
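The mechanism behind the garbage value can be shown with a small standalone sketch (plain C, not backend code; the function names are made up): the SPI layer passes an uninitialized local by pointer, and the hook in the middle forwards NULL instead of that pointer, so nothing ever writes the caller's variable.

#include <stdio.h>
#include <stdint.h>

static void
standard_utility(uint64_t *processed)
{
    if (processed != NULL)
        *processed = 1000;              /* the real row count */
}

static void
utility_hook(uint64_t *processed)
{
    /* analogue of pgss_ProcessUtility: drops the out-parameter */
    standard_utility(NULL);
    (void) processed;
}

int
main(void)
{
    uint64_t processed;                 /* uninitialized, as in _SPI_execute_plan */

    utility_hook(&processed);
    /* prints whatever happened to be on the stack - the "garbage" value */
    printf("%llu rows\n", (unsigned long long) processed);
    return 0;
}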
2. Why add the new parameter at all, if completionTag already holds the number of processed tuples? It can be extracted from it as follows:
    _SPI_current->processed = strtoul(completionTag + 7, NULL, 10);
This is the basic question. I prefer a natural type for the counter - uint64 instead of text. And there is no simple way to get the offset (7 in this case).
I agree with your point, but currently in a few other places we parse the completion tag to get the number of tuples processed, so maybe in the future we can change those places as well. For example:
pgss_ProcessUtility
{
..
/* parse command tag to retrieve the number of affected rows. */
if (completionTag &&
sscanf(completionTag, "COPY " UINT64_FORMAT, &rows) != 1)
rows = 0;
}
_SPI_execute_plan
{
..
..
if (IsA(stmt, CreateTableAsStmt))
{
Assert(strncmp(completionTag, "SELECT ", 7) == 0);
_SPI_current->processed = strtoul(completionTag + 7,
NULL, 10);
..
}
With Regards,
Amit Kapila.
Hello
I reduced this patch to just a plpgsql (SPI) problem solution.
A correct generic solution needs a discussion about enhancing our libpq interface - we can get the number of returned rows, but we should be able to get the number of processed rows too. Today users have to parse this number from the completionTag, but this problem is not too hot and usually not a blocker, because anybody can process the completionTag.
So this issue is primarily for PL/pgSQL, where this information is not available. There is a precedent - CREATE TABLE AS SELECT (thanks for the example :)), and COPY should have the same impact on ROW_COUNT as that statement (the latest patch implements it).
Regards
Pavel
2012/9/21 Amit Kapila <amit.kapila@huawei.com>:
2. Why add the new parameter at all, if completionTag already holds the number of processed tuples? It can be extracted from it as follows:
    _SPI_current->processed = strtoul(completionTag + 7, NULL, 10);
This is the basic question. I prefer a natural type for the counter - uint64 instead of text. And there is no simple way to get the offset (7 in this case).
I agree with your point, but currently in a few other places we parse the completion tag to get the number of tuples processed, so maybe in the future we can change those places as well.
Yes, this step can be done in the future - together with the libpq enhancement. It is not adequate to change the API (and break a lot of extensions) for this isolated problem. So I propose to fix the plpgsql issue, and add to the ToDo: "enhance libpq to return the number of processed rows as a number" - noting that this change would break compatibility.
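For reference, this is roughly what a libpq client does today (table name and file path are placeholders, not from the patch): the processed-row count of a COPY already reaches the client, but only as text via PQcmdTuples(), so returning it as a number would indeed be an interface change.

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static void
report_copy_rows(PGconn *conn)
{
    PGresult   *res = PQexec(conn, "COPY tbl FROM '/tmp/tbl.csv' CSV");

    if (PQresultStatus(res) == PGRES_COMMAND_OK)
    {
        /* PQcmdTuples() returns the count as a string ("1000"), so the
         * client converts it itself - the same parsing issue as above. */
        printf("processed %lu rows\n",
               strtoul(PQcmdTuples(res), NULL, 10));
    }
    PQclear(res);
}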
Pavel
Attachments:
copy-processed-rows-simple.patch (application/octet-stream)
*** a/src/backend/executor/spi.c
--- b/src/backend/executor/spi.c
***************
*** 1939,1944 **** _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
--- 1939,1954 ----
else
res = SPI_OK_UTILITY;
}
+ else if (IsA(stmt, CopyStmt))
+ {
+ /*
+ * usually utility statements doesn't return a number
+ * of processed rows, but COPY does it.
+ */
+ Assert(strncmp(completionTag, "COPY ", 5) == 0);
+ _SPI_current->processed = strtoul(completionTag + 5,
+ NULL, 10);
+ }
else
res = SPI_OK_UTILITY;
}
*** a/src/test/regress/input/copy.source
--- b/src/test/regress/input/copy.source
***************
*** 106,108 **** this is just a line full of junk that would error out if parsed
--- 106,122 ----
\.
copy copytest3 to stdout csv header;
+
+ -- test of taking number of processed rows via SPI interface
+ do $$
+ declare r int;
+ begin
+ copy copytest2 to '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'exported % rows', r;
+ truncate copytest2;
+ copy copytest2 from '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'imported % rows', r;
+ end;
+ $$ language plpgsql;
*** a/src/test/regress/output/copy.source
--- b/src/test/regress/output/copy.source
***************
*** 71,73 **** copy copytest3 to stdout csv header;
--- 71,88 ----
c1,"col with , comma","col with "" quote"
1,a,1
2,b,2
+ -- test of taking number of processed rows via SPI interface
+ do $$
+ declare r int;
+ begin
+ copy copytest2 to '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'exported % rows', r;
+ truncate copytest2;
+ copy copytest2 from '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'imported % rows', r;
+ end;
+ $$ language plpgsql;
+ NOTICE: exported 4 rows
+ NOTICE: imported 4 rows
On Friday, September 28, 2012 2:28 AM Pavel Stehule wrote:
Hello

I reduced this patch to just a plpgsql (SPI) problem solution.
A correct generic solution needs a discussion about enhancing our libpq interface - we can get the number of returned rows, but we should be able to get the number of processed rows too. Today users have to parse this number from the completionTag, but this problem is not too hot and usually not a blocker, because anybody can process the completionTag.
So this issue is primarily for PL/pgSQL, where this information is not available. There is a precedent - CREATE TABLE AS SELECT (thanks for the example :)), and COPY should have the same impact on ROW_COUNT as that statement (the latest patch implements it).
IMO now this patch is okay. I have marked it as Ready For Committer.
With Regards,
Amit Kapila.
2012/9/28 Amit Kapila <amit.kapila@huawei.com>:
(last patch implements it).IMO now this patch is okay. I have marked it as Ready For Committer.
Thank you very much for the review.
Regards
Pavel
On 27.09.2012 23:58, Pavel Stehule wrote:
Hello
I reduced this patch to just a plpgsql (SPI) problem solution.
A correct generic solution needs a discussion about enhancing our libpq interface - we can get the number of returned rows, but we should be able to get the number of processed rows too. Today users have to parse this number from the completionTag, but this problem is not too hot and usually not a blocker, because anybody can process the completionTag.
So this issue is primarily for PL/pgSQL, where this information is not available. There is a precedent - CREATE TABLE AS SELECT (thanks for the example :)), and COPY should have the same impact on ROW_COUNT as that statement (the latest patch implements it).
The comment where CREATE AS SELECT does this says:
/*
* CREATE TABLE AS is a messy special case for historical
* reasons. We must set _SPI_current->processed even though
* the tuples weren't returned to the caller, and we must
* return a special result code if the statement was spelled
* SELECT INTO.
*/
That doesn't sound like a very good precedent that we should copy to
COPY. I don't have a problem with this approach in general, though. But
if we're going to make it normal behavior, rather than an ugly
historical kluge, that utility statements can return a row count without
returning the tuples, we should document that. The above comment ought
to be changed, and the manual section about SPI_execute needs to mention
the possibility that SPI_processed != 0, but SPI_tuptable == NULL.
Regarding the patch itself:
+ else if (IsA(stmt, CopyStmt))
+ {
+     /*
+      * usually utility statements doesn't return a number
+      * of processed rows, but COPY does it.
+      */
+     Assert(strncmp(completionTag, "COPY ", 5) == 0);
+     _SPI_current->processed = strtoul(completionTag + 5,
+                                       NULL, 10);
+ }
  else
      res = SPI_OK_UTILITY;
'res' is not set for a CopyStmt, and there's an extra space in "COPY "
in the Assert.
- Heikki
2012/10/1 Heikki Linnakangas <hlinnakangas@vmware.com>:
'res' is not set for a CopyStmt, and there's an extra space in "COPY " in the Assert.
res is an issue, yes.
The extra space is correct - we want this to break if the completionTag text is ever changed.
Pavel
2012/10/1 Heikki Linnakangas <hlinnakangas@vmware.com>:
fixed patch
Regards
Pavel Stehule
Attachments:
copy-processed-rows-simple2.patch (application/octet-stream)
*** a/doc/src/sgml/spi.sgml
--- b/doc/src/sgml/spi.sgml
***************
*** 397,403 **** typedef struct
<structfield>tupdesc</> is a row descriptor which you can pass to
SPI functions dealing with rows. <structfield>tuptabcxt</>,
<structfield>alloced</>, and <structfield>free</> are internal
! fields not intended for use by SPI callers.
</para>
<para>
--- 397,406 ----
<structfield>tupdesc</> is a row descriptor which you can pass to
SPI functions dealing with rows. <structfield>tuptabcxt</>,
<structfield>alloced</>, and <structfield>free</> are internal
! fields not intended for use by SPI callers. <varname>SPI_processed</varname>
! can be non zero, althoug <varname>SPI_tuptable</varname> is NULL. It is
! possible when <command>CREATE TABLE AS SELECT</command> or <command>COPY</command>
! statements was executed.
</para>
<para>
*** a/src/backend/executor/spi.c
--- b/src/backend/executor/spi.c
***************
*** 1922,1934 **** _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
_SPI_current->processed = _SPI_current->tuptable->alloced -
_SPI_current->tuptable->free;
! /*
! * CREATE TABLE AS is a messy special case for historical
! * reasons. We must set _SPI_current->processed even though
! * the tuples weren't returned to the caller, and we must
! * return a special result code if the statement was spelled
! * SELECT INTO.
! */
if (IsA(stmt, CreateTableAsStmt))
{
Assert(strncmp(completionTag, "SELECT ", 7) == 0);
--- 1922,1928 ----
_SPI_current->processed = _SPI_current->tuptable->alloced -
_SPI_current->tuptable->free;
! /* Update "processed" when stmt doesn't returns tuples */
if (IsA(stmt, CreateTableAsStmt))
{
Assert(strncmp(completionTag, "SELECT ", 7) == 0);
***************
*** 1939,1944 **** _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
--- 1933,1949 ----
else
res = SPI_OK_UTILITY;
}
+ else if (IsA(stmt, CopyStmt))
+ {
+ /*
+ * usually utility statements doesn't return a number
+ * of processed rows, but COPY does it.
+ */
+ Assert(strncmp(completionTag, "COPY ", 5) == 0);
+ _SPI_current->processed = strtoul(completionTag + 5,
+ NULL, 10);
+ res = SPI_OK_UTILITY;
+ }
else
res = SPI_OK_UTILITY;
}
*** a/src/test/regress/input/copy.source
--- b/src/test/regress/input/copy.source
***************
*** 106,108 **** this is just a line full of junk that would error out if parsed
--- 106,122 ----
\.
copy copytest3 to stdout csv header;
+
+ -- test of taking number of processed rows via SPI interface
+ do $$
+ declare r int;
+ begin
+ copy copytest2 to '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'exported % rows', r;
+ truncate copytest2;
+ copy copytest2 from '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'imported % rows', r;
+ end;
+ $$ language plpgsql;
*** a/src/test/regress/output/copy.source
--- b/src/test/regress/output/copy.source
***************
*** 71,73 **** copy copytest3 to stdout csv header;
--- 71,88 ----
c1,"col with , comma","col with "" quote"
1,a,1
2,b,2
+ -- test of taking number of processed rows via SPI interface
+ do $$
+ declare r int;
+ begin
+ copy copytest2 to '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'exported % rows', r;
+ truncate copytest2;
+ copy copytest2 from '@abs_builddir@/results/copytest.csv' csv;
+ get diagnostics r = row_count;
+ raise notice 'imported % rows', r;
+ end;
+ $$ language plpgsql;
+ NOTICE: exported 4 rows
+ NOTICE: imported 4 rows