Use stack-allocated StringInfoData
Hi all,
While working on other things I noticed that we have a lot of cases where
a StringInfo instance is allocated dynamically even when it is either
thrown away or destroyed at the end, which seems unnecessary. That is,
instead of using:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return info->data;
We can use
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return info.data;
It was corrected in an earlier commit, but that seems to have been
removed, so we still have a lot of these cases.
I created a semantic patch to capture most of these cases, which is
present in [1], but this is a slightly modified version that might be
interesting to include regardless of other changes. The patch is applied
and one case that couldn't be matched is manually fixed.
[1]: /messages/by-id/8895cba9-48cf-40fe-9c47-9b43ec6b2ab3@gmail.com
Best wishes,
Mats Kindahl
Attachments:
0001-Use-stack-allocated-StringInfoData.v1.patch (text/x-patch)
From 7967dd042ada954dc57ddff734f4139c97cd6802 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Tue, 28 Jan 2025 14:09:41 +0100
Subject: Use stack-allocated StringInfoData
This uses a semantic patch to replace uses of StringInfo with StringInfoData
where the info is dynamically allocated but (optionally) freed at the end of
the block. This avoids one dynamic allocation that would otherwise have to be
dealt with.
For example, this code:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return do_stuff(..., info->data, ...);
Can be replaced with:
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return do_stuff(..., info.data, ...);
It does not do a replacement in these cases:
- If the variable is assigned to another expression (E = info). In this case,
  the pointer can "leak" outside the function, either through a global variable
  or a parameter assignment.
- If an expression is assigned to the variable (info = E). This cannot leak the
  data, but it could mean a value assignment of the whole structure or assigning
  a StringInfo of unknown origin to the variable, which is complicated to deal
  with, so we avoid this case.
- If the pointer is returned either using a normal return or any of the
PG_RETURN_X versions.
The semantic patch does not handle the case where you have multiple overlapping
uses in the same block similar to the above. In this case it does not do a
replacement at all and prints an error.
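For illustration, here is a sketch of the kind of code that is intentionally
left alone (the function name is made up, not taken from the tree), since the
pointer escapes through the return statement:

static StringInfo
make_message(const char *name)
{
    StringInfo  buf = makeStringInfo();

    appendStringInfo(buf, "hello, %s", name);

    /*
     * The caller receives the heap-allocated struct, so converting this to a
     * stack-allocated StringInfoData would hand back a dangling pointer.
     */
    return buf;
}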
---
cocci/use_stringinfodata.cocci | 168 ++++++++++++++++++++++++
contrib/postgres_fdw/postgres_fdw.c | 14 +-
contrib/tcn/tcn.c | 27 ++--
src/backend/access/transam/xlogbackup.c | 32 ++---
src/backend/backup/basebackup.c | 11 +-
src/backend/commands/subscriptioncmds.c | 68 +++++-----
src/backend/utils/adt/json.c | 53 ++++----
src/backend/utils/adt/jsonb.c | 8 +-
src/backend/utils/adt/jsonfuncs.c | 7 +-
src/backend/utils/adt/multirangetypes.c | 14 +-
src/backend/utils/adt/rangetypes.c | 17 +--
src/backend/utils/adt/xml.c | 33 +++--
12 files changed, 320 insertions(+), 132 deletions(-)
create mode 100644 cocci/use_stringinfodata.cocci
diff --git a/cocci/use_stringinfodata.cocci b/cocci/use_stringinfodata.cocci
new file mode 100644
index 00000000000..73e35f4ec89
--- /dev/null
+++ b/cocci/use_stringinfodata.cocci
@@ -0,0 +1,168 @@
+// Replace uses of StringInfo with StringInfoData where the info is
+// dynamically allocated but (optionally) freed at the end of the
+// block. This avoids one dynamic allocation that would otherwise have
+// to be dealt with.
+//
+// For example, this code:
+//
+// StringInfo info = makeStringInfo();
+// ...
+// appendStringInfo(info, ...);
+// ...
+// return do_stuff(..., info->data, ...);
+//
+// Can be replaced with:
+//
+// StringInfoData info;
+// initStringInfo(&info);
+// ...
+// appendStringInfo(&info, ...);
+// ...
+// return do_stuff(..., info.data, ...);
+
+virtual report
+virtual context
+virtual patch
+
+// This rule captures the position of the makeStringInfo() and bases
+// all changes around that. It matches the case that we should *not*
+// replace, that is, those that either (1) return the pointer or (2)
+// assign the pointer to a variable or (3) assign a variable to the
+// pointer.
+//
+// The first two cases are matched because they could potentially leak
+// the pointer outside the function, for some expressions, but the
+// last one avoids assigning a StringInfo pointer of unknown source to
+// the new StringInfoData variable.
+//
+// If we replace this, the resulting change will result in a value
+// copy of a structure, which might not be optimal, so we do not do a
+// replacement.
+@id1 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+identifier f;
+position pos;
+expression E;
+identifier PG_RETURN =~ "PG_RETURN_[A-Z0-9_]+";
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ return f(..., info, ...);
+|
+ PG_RETURN(info);
+|
+ info = E
+|
+ E = info
+)
+
+@r1 depends on !patch disable decl_init exists@
+identifier info, fld;
+position dpos, pos != id1.pos;
+@@
+(
+* StringInfo@dpos info;
+ ...
+* info@pos = makeStringInfo();
+|
+* StringInfo@dpos info@pos = makeStringInfo();
+)
+<...
+(
+* \(pfree\|destroyStringInfo\)(info);
+|
+* info->fld
+|
+* *info
+|
+* info
+)
+...>
+
+@script:python depends on report@
+info << r1.info;
+dpos << r1.dpos;
+@@
+coccilib.report.print_report(dpos[0], f"Variable '{info}' of type StringInfo can be defined using StringInfoData")
+
+@depends on patch disable decl_init exists@
+identifier info, fld;
+position pos != id1.pos;
+@@
+- StringInfo info;
++ StringInfoData info;
+ ...
+- info@pos = makeStringInfo();
++ initStringInfo(&info);
+<...
+(
+- \(destroyStringInfo\|pfree\)(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+
+// Here we repeat the matching of the "bad case" since we cannot
+// inherit over modifications
+@id2 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+identifier f;
+identifier PG_RETURN =~ "PG_RETURN_[A-Z0-9_]+";
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ return f(..., info, ...);
+|
+ PG_RETURN(info);
+|
+ info = E
+|
+ E = info
+)
+
+@depends on patch exists@
+identifier info, fld;
+position pos != id2.pos;
+statement S, S1;
+@@
+- StringInfo info@pos = makeStringInfo();
++ StringInfoData info;
+... when != S
+(
+<...
+(
+- \(destroyStringInfo\|pfree\)(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+&
++ initStringInfo(&info);
+ S1
+)
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index 456b267f70b..06b52c65300 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -2841,7 +2841,7 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
*/
if (list_length(fdw_private) > FdwScanPrivateRelations)
{
- StringInfo relations;
+ StringInfoData relations;
char *rawrelations;
char *ptr;
int minrti,
@@ -2875,7 +2875,7 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
rtoffset = bms_next_member(plan->fs_base_relids, -1) - minrti;
/* Now we can translate the string */
- relations = makeStringInfo();
+ initStringInfo(&relations);
ptr = rawrelations;
while (*ptr)
{
@@ -2897,24 +2897,24 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
char *namespace;
namespace = get_namespace_name_or_temp(get_rel_namespace(rte->relid));
- appendStringInfo(relations, "%s.%s",
+ appendStringInfo(&relations, "%s.%s",
quote_identifier(namespace),
quote_identifier(relname));
}
else
- appendStringInfoString(relations,
+ appendStringInfoString(&relations,
quote_identifier(relname));
refname = (char *) list_nth(es->rtable_names, rti - 1);
if (refname == NULL)
refname = rte->eref->aliasname;
if (strcmp(refname, relname) != 0)
- appendStringInfo(relations, " %s",
+ appendStringInfo(&relations, " %s",
quote_identifier(refname));
}
else
- appendStringInfoChar(relations, *ptr++);
+ appendStringInfoChar(&relations, *ptr++);
}
- ExplainPropertyText("Relations", relations->data, es);
+ ExplainPropertyText("Relations", relations.data, es);
}
/*
diff --git a/contrib/tcn/tcn.c b/contrib/tcn/tcn.c
index 3158dee0f26..ab1d416654f 100644
--- a/contrib/tcn/tcn.c
+++ b/contrib/tcn/tcn.c
@@ -66,13 +66,14 @@ triggered_change_notification(PG_FUNCTION_ARGS)
TupleDesc tupdesc;
char *channel;
char operation;
- StringInfo payload = makeStringInfo();
+ StringInfoData payload;
bool foundPK;
List *indexoidlist;
ListCell *indexoidscan;
/* make sure it's called as a trigger */
+ initStringInfo(&payload);
if (!CALLED_AS_TRIGGER(fcinfo))
ereport(ERROR,
(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),
@@ -149,22 +150,30 @@ triggered_change_notification(PG_FUNCTION_ARGS)
foundPK = true;
- strcpy_quoted(payload, RelationGetRelationName(rel), '"');
- appendStringInfoCharMacro(payload, ',');
- appendStringInfoCharMacro(payload, operation);
+ strcpy_quoted(&payload,
+ RelationGetRelationName(rel),
+ '"');
+ appendStringInfoCharMacro(&payload, ',');
+ appendStringInfoCharMacro(&payload, operation);
for (i = 0; i < indnkeyatts; i++)
{
int colno = index->indkey.values[i];
Form_pg_attribute attr = TupleDescAttr(tupdesc, colno - 1);
- appendStringInfoCharMacro(payload, ',');
- strcpy_quoted(payload, NameStr(attr->attname), '"');
- appendStringInfoCharMacro(payload, '=');
- strcpy_quoted(payload, SPI_getvalue(trigtuple, tupdesc, colno), '\'');
+ appendStringInfoCharMacro(&payload,
+ ',');
+ strcpy_quoted(&payload,
+ NameStr(attr->attname),
+ '"');
+ appendStringInfoCharMacro(&payload,
+ '=');
+ strcpy_quoted(&payload,
+ SPI_getvalue(trigtuple, tupdesc, colno),
+ '\'');
}
- Async_Notify(channel, payload->data);
+ Async_Notify(channel, payload.data);
}
ReleaseSysCache(indexTuple);
break;
diff --git a/src/backend/access/transam/xlogbackup.c b/src/backend/access/transam/xlogbackup.c
index cda4b38b7d6..22f579b1612 100644
--- a/src/backend/access/transam/xlogbackup.c
+++ b/src/backend/access/transam/xlogbackup.c
@@ -31,9 +31,10 @@ build_backup_content(BackupState *state, bool ishistoryfile)
char startstrbuf[128];
char startxlogfile[MAXFNAMELEN]; /* backup start WAL file */
XLogSegNo startsegno;
- StringInfo result = makeStringInfo();
+ StringInfoData result;
char *data;
+ initStringInfo(&result);
Assert(state != NULL);
/* Use the log timezone here, not the session timezone */
@@ -42,7 +43,7 @@ build_backup_content(BackupState *state, bool ishistoryfile)
XLByteToSeg(state->startpoint, startsegno, wal_segment_size);
XLogFileName(startxlogfile, state->starttli, startsegno, wal_segment_size);
- appendStringInfo(result, "START WAL LOCATION: %X/%08X (file %s)\n",
+ appendStringInfo(&result, "START WAL LOCATION: %X/%08X (file %s)\n",
LSN_FORMAT_ARGS(state->startpoint), startxlogfile);
if (ishistoryfile)
@@ -52,18 +53,19 @@ build_backup_content(BackupState *state, bool ishistoryfile)
XLByteToSeg(state->stoppoint, stopsegno, wal_segment_size);
XLogFileName(stopxlogfile, state->stoptli, stopsegno, wal_segment_size);
- appendStringInfo(result, "STOP WAL LOCATION: %X/%08X (file %s)\n",
+ appendStringInfo(&result,
+ "STOP WAL LOCATION: %X/%08X (file %s)\n",
LSN_FORMAT_ARGS(state->stoppoint), stopxlogfile);
}
- appendStringInfo(result, "CHECKPOINT LOCATION: %X/%08X\n",
+ appendStringInfo(&result, "CHECKPOINT LOCATION: %X/%08X\n",
LSN_FORMAT_ARGS(state->checkpointloc));
- appendStringInfoString(result, "BACKUP METHOD: streamed\n");
- appendStringInfo(result, "BACKUP FROM: %s\n",
+ appendStringInfoString(&result, "BACKUP METHOD: streamed\n");
+ appendStringInfo(&result, "BACKUP FROM: %s\n",
state->started_in_recovery ? "standby" : "primary");
- appendStringInfo(result, "START TIME: %s\n", startstrbuf);
- appendStringInfo(result, "LABEL: %s\n", state->name);
- appendStringInfo(result, "START TIMELINE: %u\n", state->starttli);
+ appendStringInfo(&result, "START TIME: %s\n", startstrbuf);
+ appendStringInfo(&result, "LABEL: %s\n", state->name);
+ appendStringInfo(&result, "START TIMELINE: %u\n", state->starttli);
if (ishistoryfile)
{
@@ -73,22 +75,22 @@ build_backup_content(BackupState *state, bool ishistoryfile)
pg_strftime(stopstrfbuf, sizeof(stopstrfbuf), "%Y-%m-%d %H:%M:%S %Z",
pg_localtime(&state->stoptime, log_timezone));
- appendStringInfo(result, "STOP TIME: %s\n", stopstrfbuf);
- appendStringInfo(result, "STOP TIMELINE: %u\n", state->stoptli);
+ appendStringInfo(&result, "STOP TIME: %s\n", stopstrfbuf);
+ appendStringInfo(&result, "STOP TIMELINE: %u\n",
+ state->stoptli);
}
/* either both istartpoint and istarttli should be set, or neither */
Assert(XLogRecPtrIsInvalid(state->istartpoint) == (state->istarttli == 0));
if (!XLogRecPtrIsInvalid(state->istartpoint))
{
- appendStringInfo(result, "INCREMENTAL FROM LSN: %X/%08X\n",
+ appendStringInfo(&result, "INCREMENTAL FROM LSN: %X/%08X\n",
LSN_FORMAT_ARGS(state->istartpoint));
- appendStringInfo(result, "INCREMENTAL FROM TLI: %u\n",
+ appendStringInfo(&result, "INCREMENTAL FROM TLI: %u\n",
state->istarttli);
}
- data = result->data;
- pfree(result);
+ data = result.data;
return data;
}
diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index bb7d90aa5d9..72d259344d9 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -239,7 +239,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
TimeLineID endtli;
backup_manifest_info manifest;
BackupState *backup_state;
- StringInfo tablespace_map;
+ StringInfoData tablespace_map;
/* Initial backup state, insofar as we know it now. */
state.tablespaces = NIL;
@@ -263,11 +263,11 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
/* Allocate backup related variables. */
backup_state = (BackupState *) palloc0(sizeof(BackupState));
- tablespace_map = makeStringInfo();
+ initStringInfo(&tablespace_map);
basebackup_progress_wait_checkpoint();
do_pg_backup_start(opt->label, opt->fastcheckpoint, &state.tablespaces,
- backup_state, tablespace_map);
+ backup_state, &tablespace_map);
state.startptr = backup_state->startpoint;
state.starttli = backup_state->starttli;
@@ -342,7 +342,9 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
if (opt->sendtblspcmapfile)
{
sendFileWithContent(sink, TABLESPACE_MAP,
- tablespace_map->data, -1, &manifest);
+ tablespace_map.data,
+ -1,
+ &manifest);
sendtblspclinks = false;
}
@@ -399,7 +401,6 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
endtli = backup_state->stoptli;
/* Deallocate backup-related variables. */
- destroyStringInfo(tablespace_map);
pfree(backup_state);
}
PG_END_ENSURE_ERROR_CLEANUP(do_pg_abort_backup, BoolGetDatum(false));
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1f45444b499..4a095d75ba1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -491,20 +491,19 @@ static void
check_publications(WalReceiverConn *wrconn, List *publications)
{
WalRcvExecResult *res;
- StringInfo cmd;
+ StringInfoData cmd;
TupleTableSlot *slot;
List *publicationsCopy = NIL;
Oid tableRow[1] = {TEXTOID};
- cmd = makeStringInfo();
- appendStringInfoString(cmd, "SELECT t.pubname FROM\n"
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT t.pubname FROM\n"
" pg_catalog.pg_publication t WHERE\n"
" t.pubname IN (");
- GetPublicationsStr(publications, cmd, true);
- appendStringInfoChar(cmd, ')');
+ GetPublicationsStr(publications, &cmd, true);
+ appendStringInfoChar(&cmd, ')');
- res = walrcv_exec(wrconn, cmd->data, 1, tableRow);
- destroyStringInfo(cmd);
+ res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
if (res->status != WALRCV_OK_TUPLES)
ereport(ERROR,
@@ -535,15 +534,16 @@ check_publications(WalReceiverConn *wrconn, List *publications)
if (list_length(publicationsCopy))
{
/* Prepare the list of non-existent publication(s) for error message. */
- StringInfo pubnames = makeStringInfo();
+ StringInfoData pubnames;
- GetPublicationsStr(publicationsCopy, pubnames, false);
+ initStringInfo(&pubnames);
+ GetPublicationsStr(publicationsCopy, &pubnames, false);
ereport(WARNING,
errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg_plural("publication %s does not exist on the publisher",
"publications %s do not exist on the publisher",
list_length(publicationsCopy),
- pubnames->data));
+ pubnames.data));
}
}
@@ -2597,33 +2597,38 @@ check_publications_origin_tables(WalReceiverConn *wrconn, List *publications,
*/
if (publist)
{
- StringInfo pubnames = makeStringInfo();
- StringInfo err_msg = makeStringInfo();
- StringInfo err_hint = makeStringInfo();
+ StringInfoData pubnames;
+ StringInfoData err_msg;
+ StringInfoData err_hint;
/* Prepare the list of publication(s) for warning message. */
- GetPublicationsStr(publist, pubnames, false);
+ initStringInfo(&pubnames);
+ initStringInfo(&err_msg);
+ initStringInfo(&err_hint);
+ GetPublicationsStr(publist, &pubnames, false);
if (check_table_sync)
{
- appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
+ appendStringInfo(&err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
subname);
- appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+ appendStringInfoString(&err_hint,
+ _("Verify that initial data copied from the publisher tables did not come from other origins."));
}
else
{
- appendStringInfo(err_msg, _("subscription \"%s\" enabled retain_dead_tuples but might not reliably detect conflicts for changes from different origins"),
+ appendStringInfo(&err_msg, _("subscription \"%s\" enabled retain_dead_tuples but might not reliably detect conflicts for changes from different origins"),
subname);
- appendStringInfoString(err_hint, _("Consider using origin = NONE or disabling retain_dead_tuples."));
+ appendStringInfoString(&err_hint,
+ _("Consider using origin = NONE or disabling retain_dead_tuples."));
}
ereport(WARNING,
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg_internal("%s", err_msg->data),
+ errmsg_internal("%s", err_msg.data),
errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
"The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
- list_length(publist), pubnames->data),
- errhint_internal("%s", err_hint->data));
+ list_length(publist), pubnames.data),
+ errhint_internal("%s", err_hint.data));
}
ExecDropSingleTupleTableSlot(slot);
@@ -2716,14 +2721,16 @@ check_publications_origin_sequences(WalReceiverConn *wrconn, List *publications,
{
StringInfo pubnames = makeStringInfo();
StringInfo err_msg = makeStringInfo();
- StringInfo err_hint = makeStringInfo();
+ StringInfoData err_hint;
/* Prepare the list of publication(s) for warning message. */
+ initStringInfo(&err_hint);
GetPublicationsStr(publist, pubnames, false);
appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
subname);
- appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher sequences did not come from other origins."));
+ appendStringInfoString(&err_hint,
+ _("Verify that initial data copied from the publisher sequences did not come from other origins."));
ereport(WARNING,
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
@@ -2731,7 +2738,7 @@ check_publications_origin_sequences(WalReceiverConn *wrconn, List *publications,
errdetail_plural("The subscription subscribes to a publication (%s) that contains sequences that are written to by other subscriptions.",
"The subscription subscribes to publications (%s) that contain sequences that are written to by other subscriptions.",
list_length(publist), pubnames->data),
- errhint_internal("%s", err_hint->data));
+ errhint_internal("%s", err_hint.data));
}
ExecDropSingleTupleTableSlot(slot);
@@ -2885,12 +2892,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
int server_version = walrcv_server_version(wrconn);
bool check_columnlist = (server_version >= 150000);
int column_count = check_columnlist ? 4 : 3;
- StringInfo pub_names = makeStringInfo();
+ StringInfoData pub_names;
+ initStringInfo(&pub_names);
initStringInfo(&cmd);
/* Build the pub_names comma-separated string. */
- GetPublicationsStr(publications, pub_names, true);
+ GetPublicationsStr(publications, &pub_names, true);
/* Get the list of relations from the publisher */
if (server_version >= 160000)
@@ -2917,7 +2925,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
" FROM pg_publication\n"
" WHERE pubname IN ( %s )) AS gpt\n"
" ON gpt.relid = c.oid\n",
- pub_names->data);
+ pub_names.data);
/* From version 19, inclusion of sequences in the target is supported */
if (server_version >= 190000)
@@ -2926,7 +2934,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
" SELECT DISTINCT s.schemaname, s.sequencename, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind, NULL::int2vector AS attrs\n"
" FROM pg_catalog.pg_publication_sequences s\n"
" WHERE s.pubname IN ( %s )",
- pub_names->data);
+ pub_names.data);
}
else
{
@@ -2939,11 +2947,9 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
appendStringInfo(&cmd, "FROM pg_catalog.pg_publication_tables t\n"
" WHERE t.pubname IN ( %s )",
- pub_names->data);
+ pub_names.data);
}
- destroyStringInfo(pub_names);
-
res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
pfree(cmd.data);
diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
index 88a612b041d..06dd62f0008 100644
--- a/src/backend/utils/adt/json.c
+++ b/src/backend/utils/adt/json.c
@@ -631,13 +631,13 @@ Datum
array_to_json(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- array_to_json_internal(array, result, false);
+ array_to_json_internal(array, &result, false);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -648,13 +648,13 @@ array_to_json_pretty(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
bool use_line_feeds = PG_GETARG_BOOL(1);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- array_to_json_internal(array, result, use_line_feeds);
+ array_to_json_internal(array, &result, use_line_feeds);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -664,13 +664,13 @@ Datum
row_to_json(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- composite_to_json(array, result, false);
+ composite_to_json(array, &result, false);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -681,13 +681,13 @@ row_to_json_pretty(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
bool use_line_feeds = PG_GETARG_BOOL(1);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- composite_to_json(array, result, use_line_feeds);
+ composite_to_json(array, &result, use_line_feeds);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -763,12 +763,13 @@ to_json(PG_FUNCTION_ARGS)
Datum
datum_to_json(Datum val, JsonTypeCategory tcategory, Oid outfuncoid)
{
- StringInfo result = makeStringInfo();
+ StringInfoData result;
- datum_to_json_internal(val, false, result, tcategory, outfuncoid,
+ initStringInfo(&result);
+ datum_to_json_internal(val, false, &result, tcategory, outfuncoid,
false);
- return PointerGetDatum(cstring_to_text_with_len(result->data, result->len));
+ return PointerGetDatum(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -1347,25 +1348,25 @@ json_build_array_worker(int nargs, const Datum *args, const bool *nulls, const O
{
int i;
const char *sep = "";
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- appendStringInfoChar(result, '[');
+ appendStringInfoChar(&result, '[');
for (i = 0; i < nargs; i++)
{
if (absent_on_null && nulls[i])
continue;
- appendStringInfoString(result, sep);
+ appendStringInfoString(&result, sep);
sep = ", ";
- add_json(args[i], nulls[i], result, types[i], false);
+ add_json(args[i], nulls[i], &result, types[i], false);
}
- appendStringInfoChar(result, ']');
+ appendStringInfoChar(&result, ']');
- return PointerGetDatum(cstring_to_text_with_len(result->data, result->len));
+ return PointerGetDatum(cstring_to_text_with_len(result.data, result.len));
}
/*
diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c
index da94d424d61..f7e3d84f216 100644
--- a/src/backend/utils/adt/jsonb.c
+++ b/src/backend/utils/adt/jsonb.c
@@ -125,15 +125,15 @@ jsonb_send(PG_FUNCTION_ARGS)
{
Jsonb *jb = PG_GETARG_JSONB_P(0);
StringInfoData buf;
- StringInfo jtext = makeStringInfo();
+ StringInfoData jtext;
int version = 1;
- (void) JsonbToCString(jtext, &jb->root, VARSIZE(jb));
+ initStringInfo(&jtext);
+ (void) JsonbToCString(&jtext, &jb->root, VARSIZE(jb));
pq_begintypsend(&buf);
pq_sendint8(&buf, version);
- pq_sendtext(&buf, jtext->data, jtext->len);
- destroyStringInfo(jtext);
+ pq_sendtext(&buf, jtext.data, jtext.len);
PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 41862872e8a..936cd9dcd9a 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -4607,11 +4607,12 @@ Datum
jsonb_pretty(PG_FUNCTION_ARGS)
{
Jsonb *jb = PG_GETARG_JSONB_P(0);
- StringInfo str = makeStringInfo();
+ StringInfoData str;
- JsonbToCStringIndent(str, &jb->root, VARSIZE(jb));
+ initStringInfo(&str);
+ JsonbToCStringIndent(&str, &jb->root, VARSIZE(jb));
- PG_RETURN_TEXT_P(cstring_to_text_with_len(str->data, str->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(str.data, str.len));
}
/*
diff --git a/src/backend/utils/adt/multirangetypes.c b/src/backend/utils/adt/multirangetypes.c
index 84733dc5019..e933b8dc02f 100644
--- a/src/backend/utils/adt/multirangetypes.c
+++ b/src/backend/utils/adt/multirangetypes.c
@@ -378,17 +378,18 @@ multirange_send(PG_FUNCTION_ARGS)
{
MultirangeType *multirange = PG_GETARG_MULTIRANGE_P(0);
Oid mltrngtypoid = MultirangeTypeGetOid(multirange);
- StringInfo buf = makeStringInfo();
+ StringInfoData buf;
RangeType **ranges;
int32 range_count;
MultirangeIOData *cache;
+ initStringInfo(&buf);
cache = get_multirange_io_data(fcinfo, mltrngtypoid, IOFunc_send);
/* construct output */
- pq_begintypsend(buf);
+ pq_begintypsend(&buf);
- pq_sendint32(buf, multirange->rangeCount);
+ pq_sendint32(&buf, multirange->rangeCount);
multirange_deserialize(cache->typcache->rngtype, multirange, &range_count, &ranges);
for (int i = 0; i < range_count; i++)
@@ -399,11 +400,12 @@ multirange_send(PG_FUNCTION_ARGS)
range = RangeTypePGetDatum(ranges[i]);
outputbytes = SendFunctionCall(&cache->typioproc, range);
- pq_sendint32(buf, VARSIZE(outputbytes) - VARHDRSZ);
- pq_sendbytes(buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ);
+ pq_sendint32(&buf, VARSIZE(outputbytes) - VARHDRSZ);
+ pq_sendbytes(&buf, VARDATA(outputbytes),
+ VARSIZE(outputbytes) - VARHDRSZ);
}
- PG_RETURN_BYTEA_P(pq_endtypsend(buf));
+ PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
/*
diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c
index 0b2ad8b0975..c016825a00e 100644
--- a/src/backend/utils/adt/rangetypes.c
+++ b/src/backend/utils/adt/rangetypes.c
@@ -264,13 +264,14 @@ Datum
range_send(PG_FUNCTION_ARGS)
{
RangeType *range = PG_GETARG_RANGE_P(0);
- StringInfo buf = makeStringInfo();
+ StringInfoData buf;
RangeIOData *cache;
char flags;
RangeBound lower;
RangeBound upper;
bool empty;
+ initStringInfo(&buf);
check_stack_depth(); /* recurses when subtype is a range type */
cache = get_range_io_data(fcinfo, RangeTypeGetOid(range), IOFunc_send);
@@ -280,9 +281,9 @@ range_send(PG_FUNCTION_ARGS)
flags = range_get_flags(range);
/* construct output */
- pq_begintypsend(buf);
+ pq_begintypsend(&buf);
- pq_sendbyte(buf, flags);
+ pq_sendbyte(&buf, flags);
if (RANGE_HAS_LBOUND(flags))
{
@@ -290,8 +291,8 @@ range_send(PG_FUNCTION_ARGS)
uint32 bound_len = VARSIZE(bound) - VARHDRSZ;
char *bound_data = VARDATA(bound);
- pq_sendint32(buf, bound_len);
- pq_sendbytes(buf, bound_data, bound_len);
+ pq_sendint32(&buf, bound_len);
+ pq_sendbytes(&buf, bound_data, bound_len);
}
if (RANGE_HAS_UBOUND(flags))
@@ -300,11 +301,11 @@ range_send(PG_FUNCTION_ARGS)
uint32 bound_len = VARSIZE(bound) - VARHDRSZ;
char *bound_data = VARDATA(bound);
- pq_sendint32(buf, bound_len);
- pq_sendbytes(buf, bound_data, bound_len);
+ pq_sendint32(&buf, bound_len);
+ pq_sendbytes(&buf, bound_data, bound_len);
}
- PG_RETURN_BYTEA_P(pq_endtypsend(buf));
+ PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
/*
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index 35c915573a1..b608088d1e2 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -2133,7 +2133,7 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
node->type == XML_ELEMENT_NODE) ? node->name : NULL;
int domain = error->domain;
int level = error->level;
- StringInfo errorBuf;
+ StringInfoData errorBuf;
/*
* Defend against someone passing us a bogus context struct.
@@ -2210,16 +2210,16 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
}
/* Prepare error message in errorBuf */
- errorBuf = makeStringInfo();
+ initStringInfo(&errorBuf);
if (error->line > 0)
- appendStringInfo(errorBuf, "line %d: ", error->line);
+ appendStringInfo(&errorBuf, "line %d: ", error->line);
if (name != NULL)
- appendStringInfo(errorBuf, "element %s: ", name);
+ appendStringInfo(&errorBuf, "element %s: ", name);
if (error->message != NULL)
- appendStringInfoString(errorBuf, error->message);
+ appendStringInfoString(&errorBuf, error->message);
else
- appendStringInfoString(errorBuf, "(no message provided)");
+ appendStringInfoString(&errorBuf, "(no message provided)");
/*
* Append context information to errorBuf.
@@ -2237,11 +2237,11 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
xmlGenericErrorFunc errFuncSaved = xmlGenericError;
void *errCtxSaved = xmlGenericErrorContext;
- xmlSetGenericErrorFunc(errorBuf,
+ xmlSetGenericErrorFunc(&errorBuf,
(xmlGenericErrorFunc) appendStringInfo);
/* Add context information to errorBuf */
- appendStringInfoLineSeparator(errorBuf);
+ appendStringInfoLineSeparator(&errorBuf);
xmlParserPrintFileContext(input);
@@ -2250,7 +2250,7 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
}
/* Get rid of any trailing newlines in errorBuf */
- chopStringInfoNewlines(errorBuf);
+ chopStringInfoNewlines(&errorBuf);
/*
* Legacy error handling mode. err_occurred is never set, we just add the
@@ -2263,10 +2263,9 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
if (xmlerrcxt->strictness == PG_XML_STRICTNESS_LEGACY)
{
appendStringInfoLineSeparator(&xmlerrcxt->err_buf);
- appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf->data,
- errorBuf->len);
+ appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf.data,
+ errorBuf.len);
- destroyStringInfo(errorBuf);
return;
}
@@ -2281,23 +2280,21 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
if (level >= XML_ERR_ERROR)
{
appendStringInfoLineSeparator(&xmlerrcxt->err_buf);
- appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf->data,
- errorBuf->len);
+ appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf.data,
+ errorBuf.len);
xmlerrcxt->err_occurred = true;
}
else if (level >= XML_ERR_WARNING)
{
ereport(WARNING,
- (errmsg_internal("%s", errorBuf->data)));
+ (errmsg_internal("%s", errorBuf.data)));
}
else
{
ereport(NOTICE,
- (errmsg_internal("%s", errorBuf->data)));
+ (errmsg_internal("%s", errorBuf.data)));
}
-
- destroyStringInfo(errorBuf);
}
--
2.43.0
On Mon, 3 Nov 2025 at 20:27, Mats Kindahl <mats.kindahl@gmail.com> wrote:
We can use
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return info.data;
It was corrected in an earlier commit, but that seems to have been removed, so we still have a lot of these cases.
I created a semantic patch to capture most of these cases, which is present in [1], but this is a slightly modified version that might be interesting to include regardless of other changes. The patch is applied and one case that couldn't be matched is manually fixed.
I think this is a worthwhile conversion. Are you able to create a more
complete version of this? The draft version does introduce quite a few
whitespace changes that aren't wanted. You've also introduced a memory
leak in check_publications(), fetch_relation_list(), jsonb_send() and
xml_errorHandler() (NB: destroyStringInfo() doesn't just pfree the
memory for the struct, it pfree's the data too). The patch shouldn't
be leaving any memory around that the current master is careful to
pfree.
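To spell that out with a quick sketch (the variable name is just for
illustration), the conversion needs to keep an explicit pfree of the buffer:

/* master: destroyStringInfo() pfrees both buf->data and buf itself */
StringInfo  buf = makeStringInfo();
appendStringInfoString(buf, "...");
destroyStringInfo(buf);

/* converted: the struct is on the stack, but the data is still palloc'd */
StringInfoData buf;
initStringInfo(&buf);
appendStringInfoString(&buf, "...");
pfree(buf.data);    /* must not be dropped, or the buffer leaks */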
Do you have any semi-automated method to find these? Or is it a case
of manually reviewing code with a makeStringInfo() call?
David
On 11/3/25 11:27, David Rowley wrote:
On Mon, 3 Nov 2025 at 20:27, Mats Kindahl <mats.kindahl@gmail.com> wrote:
We can use
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return info.data;
It was corrected in an earlier commit, but that seems to have been removed, so we still have a lot of these cases.
I created a semantic patch to capture most of these cases, which is present in [1], but this is a slightly modified version that might be interesting to include regardless of other changes. The patch is applied and one case that couldn't be matched is manually fixed.
Hi David, and thanks for the review.
I think this is a worthwhile conversion. Are you able to create a more
complete version of this? The draft version does introduce quite a few
whitespace changes that aren't wanted.
Absolutely, I'll look into it.
You've also introduced a memory
leak in check_publications(), fetch_relation_list(), jsonb_send() and
xml_errorHandler() (NB: destroyStringInfo() doesn't just pfree the
memory for the struct, it pfree's the data too).
Ah, that was a mistake from my side. Will fix.
The patch shouldn't
be leaving any memory around that the current master is careful to
pfree.
Do you have any semi-automated method to find these? Or is it a case
of manually reviewing code with a makeStringInfo() call?
This is using Coccinelle to transform the text based on a semantic patch
(which is included in the patch). Unfortunately it messes with the
whitespace a bit, so it is necessary to run pgindent afterwards. I'm
surprised that there are whitespace changes that should not be there,
but I'll take a look.
Thanks again, and best wishes,
Mats Kindahl
On Tue, 4 Nov 2025 at 09:20, Mats Kindahl <mats.kindahl@gmail.com> wrote:
This is using Coccinelle to transform the text based on a semantic patch (which is included in the patch). Unfortunately it messes with the whitespace a bit, so it is necessary to run pgindent afterwards. I'm surprised that there are whitespace changes that should not be there, but I'll take a look.
pgindent likely won't put back the types of changes that were made
here. I think you're going to have to manually review anything that
comes out of any automated tool, at least certainly until there's some
confidence that the scripts are configured correctly. Loading extra
work onto reviewers or committers that you could have easily done
yourself isn't really a sustainable thing. We just lack bandwidth for
that, plus it's bad etiquette.
A few tips: It's fine to post rough patches to demonstrate ideas to
the list, but make that clear when doing so. If your intention is that
what you're sending gets committed, then please aim for as high a
quality as you can. There just won't be any long-term patience for
series of unvetted-by-human transformation patches from tools such as
Coccinelle around here, especially so if you expect the community to
vet them for you. I'm not saying that's what you're doing, but if you
are, please take this advice.
David
On 11/3/25 22:32, David Rowley wrote:
On Tue, 4 Nov 2025 at 09:20, Mats Kindahl <mats.kindahl@gmail.com> wrote:
This is using Coccinelle to transform the text based on a semantic patch (which is included in the patch). Unfortunately it messes with the whitespace a bit, so it is necessary to run pgindent afterwards. I'm surprised that there are whitespace changes that should not be there, but I'll take a look.
pgindent likely won't put back the types of changes that were made
here. I think you're going to have to manually review anything that
comes out of any automated tool, at least certainly until there's some
confidence that the scripts are configured correctly. Loading extra
work onto reviewers or committers that you could have easily done
yourself isn't really a sustainable thing. We just lack bandwidth for
that, plus it's bad etiquette.
A few tips: It's fine to post rough patches to demonstrate ideas to
the list, but make that clear when doing so. If your intention is that
what you're sending gets committed, then please aim for as high a
quality as you can. There just won't be any long-term patience for
series of unvetted-by-human transformation patches from tools such as
Coccinelle around here, especially so if you expect the community to
vet them for you. I'm not saying that's what you're doing, but if you
are, please take this advice.
Hi David,
My apologies for not being clear. I understand the situation and thank
you for the advice. I'll keep that in mind in the future.
Here is an updated patch that I checked manually for unnecessary
whitespace changes.
I also checked that "destroyStringInfo(info)" is handled correctly, which
means replacing it with a call to "pfree(info.data)": the StringInfoData
structure is now on the stack, but the data buffer still needs to be
released, since that is what a call to destroyStringInfo() would have done.
I also removed any calls to "pfree(info)" where "info" was changed to
"StringInfoData", since the structure is no longer a dynamically allocated
variable. Note that in this case it is not necessary to free "info.data",
since the original code does not do that either.
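As a sketch of that case (do_something() is just a placeholder, not real
code), the transformation looks roughly like this:

/* before: pfree() released only the struct, never info->data */
StringInfo  info = makeStringInfo();
appendStringInfoString(info, "...");
do_something(info->data);
pfree(info);

/* after: the struct is on the stack, so the pfree() goes away entirely */
StringInfoData info;
initStringInfo(&info);
appendStringInfoString(&info, "...");
do_something(info.data);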
There is one such case affected by this patch, and there the buffer
pointer is copied to another variable and then returned. This can
probably be improved by just returning the buffer pointer directly
without intermediate assignment to a variable, but I avoided this to
make reviewing the code easier. It is easy to add this change either now
or later, and it should not affect the generated code except at very low
optimization levels.
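To show what I mean (only the relevant lines, as in build_backup_content),
the converted function now ends with the first form, and the possible
simplification would be the second:

data = result.data;
return data;

/* possible later simplification */
return result.data;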
I also added fixes manually inside check_publications_origin_tables,
check_publications, and fetch_relation_list. In
check_publications_origin_tables, three StringInfos were dynamically
allocated but not released afterwards. In check_publications and
fetch_relation_list there were extra cases of using a dynamically
allocated StringInfo where it was not necessary.
Best wishes,
Mats Kindahl
Attachments:
0001-Use-stack-allocated-StringInfoData.v2.patch (text/x-patch)
From d3a099536c3f9d47a0cb75691d5d694b0ba63ef3 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Tue, 28 Jan 2025 14:09:41 +0100
Subject: Use stack-allocated StringInfoData
This uses a semantic patch to replace uses of StringInfo with StringInfoData
where the info is dynamically allocated but (optionally) freed at the end of
the block. This avoids one dynamic allocation that would otherwise have to be
dealt with.
For example, this code:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return do_stuff(..., info->data, ...);
Can be replaced with:
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return do_stuff(..., info.data, ...);
It does not do a replacement in these cases:
- If the variable is assigned to another expression (E = info). In this case,
  the pointer can "leak" outside the function, either through a global variable
  or a parameter assignment.
- If an expression is assigned to the variable (info = E). This cannot leak the
  data, but it could mean a value assignment of the whole structure or assigning
  a StringInfo of unknown origin to the variable, which is complicated to deal
  with, so we avoid this case.
- If the pointer is returned either using a normal return or any of the
PG_RETURN_X versions.
The semantic patch does not handle the case where you have multiple overlapping
uses in the same block similar to the above. In this case it does not do a
replacement at all and prints an error.
This commit fixes three such cases manually in check_publications_origin_tables,
check_publications, and fetch_relation_list respectively, all inside
subscriptioncmds.c, and also adds calls to pfree() for StringInfos that were
dynamically allocated but not freed.
---
cocci/use_stringinfodata.cocci | 203 ++++++++++++++++++++++++
contrib/postgres_fdw/postgres_fdw.c | 14 +-
contrib/tcn/tcn.c | 19 +--
src/backend/access/transam/xlogbackup.c | 31 ++--
src/backend/backup/basebackup.c | 10 +-
src/backend/commands/subscriptioncmds.c | 57 ++++---
src/backend/utils/adt/json.c | 53 ++++---
src/backend/utils/adt/jsonb.c | 9 +-
src/backend/utils/adt/jsonfuncs.c | 7 +-
src/backend/utils/adt/multirangetypes.c | 13 +-
src/backend/utils/adt/rangetypes.c | 18 ++-
src/backend/utils/adt/xml.c | 34 ++--
12 files changed, 344 insertions(+), 124 deletions(-)
create mode 100644 cocci/use_stringinfodata.cocci
diff --git a/cocci/use_stringinfodata.cocci b/cocci/use_stringinfodata.cocci
new file mode 100644
index 00000000000..d1b596741fe
--- /dev/null
+++ b/cocci/use_stringinfodata.cocci
@@ -0,0 +1,203 @@
+// Replace uses of StringInfo with StringInfoData where the info is
+// dynamically allocated but (optionally) freed at the end of the
+// block. This avoids one dynamic allocation that would otherwise have
+// to be dealt with.
+//
+// For example, this code:
+//
+// StringInfo info = makeStringInfo();
+// ...
+// appendStringInfo(info, ...);
+// ...
+// return do_stuff(..., info->data, ...);
+//
+// Can be replaced with:
+//
+// StringInfoData info;
+// initStringInfo(&info);
+// ...
+// appendStringInfo(&info, ...);
+// ...
+// return do_stuff(..., info.data, ...);
+//
+//
+
+virtual report
+virtual context
+virtual patch
+
+// This rule captures the position of the makeStringInfo() and bases
+// all changes around that. It matches the case that we should *not*
+// replace, that is, those that either (1) return the pointer or (2)
+// assign the pointer to a variable or (3) assign a variable to the
+// pointer.
+//
+// The first two cases are matched because they could potentially leak
+// the pointer outside the function, for some expressions, but the
+// last one avoids assigning a StringInfo pointer of unknown source to
+// the new StringInfoData variable.
+//
+// If we replace this, the resulting change will result in a value
+// copy of a structure, which might not be optimal, so we do not do a
+// replacement.
+@id1 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+identifier f;
+position pos;
+expression E;
+identifier PG_RETURN =~ "PG_RETURN_[A-Z0-9_]+";
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ return f(..., info, ...);
+|
+ PG_RETURN(info);
+|
+ info = E
+|
+ E = info
+)
+
+@r1 depends on !patch disable decl_init exists@
+identifier info, fld;
+position dpos, pos != id1.pos;
+@@
+(
+* StringInfo@dpos info;
+ ...
+* info@pos = makeStringInfo();
+|
+* StringInfo@dpos info@pos = makeStringInfo();
+)
+<...
+(
+* destroyStringInfo(info);
+|
+* pfree(info);
+|
+* info->fld
+|
+* *info
+|
+* info
+)
+...>
+
+@script:python depends on report@
+info << r1.info;
+dpos << r1.dpos;
+@@
+coccilib.report.print_report(dpos[0], f"Variable '{info}' of type StringInfo can be defined using StringInfoData")
+
+// This rule replaces the StringInfo variable with a StringInfoData
+// variable and switches from using pointer-syntax to object-syntax.
+//
+// If destroyStringInfo() is used it frees the StringInfo and the data
+// held by the structure. When using StringInfoData we need to free
+// the data but not the StringInfo since that is now a StringInfoData.
+//
+// A pfree() of the StringInfo is not needed any more since the
+// structure is no longer dynamically allocated. It is also wrong since
+// the structure is now on the stack and not a pointer variable.
+@depends on patch disable decl_init exists@
+identifier info, fld;
+position pos != id1.pos;
+@@
+- StringInfo info;
++ StringInfoData info;
+ ...
+- info@pos = makeStringInfo();
++ initStringInfo(&info);
+<...
+(
+- destroyStringInfo(info);
++ pfree(info->data);
+|
+- pfree(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+
+// Here we repeat the matching of the "bad case" above since we cannot
+// inherit over modifications.
+@id2 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+identifier f;
+identifier PG_RETURN =~ "PG_RETURN_[A-Z0-9_]+";
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ return f(..., info, ...);
+|
+ PG_RETURN(info);
+|
+ info = E
+|
+ E = info
+)
+
+// This rule replaces the StringInfo variable with a StringInfoData
+// variable and switches from using pointer-syntax to object-syntax.
+//
+// It should be identical to the rule above with the exception that
+// this rule matches initializer syntax, so make sure they are
+// identical.
+//
+// If destroyStringInfo() is used it frees the StringInfo and the
+// buffer data held by the structure. When using StringInfoData we
+// need to free the data but not the StringInfo since that is now a
+// StringInfoData.
+//
+// A pfree() of the StringInfo is not needed any more since the
+// structure is no longer dynamically allocated. It is also wrong since
+// the structure is now on the stack and not a pointer variable.
+@depends on patch exists@
+identifier info, fld;
+position pos != id2.pos;
+statement S, S1;
+@@
+- StringInfo info@pos = makeStringInfo();
++ StringInfoData info;
+... when != S
+(
+<...
+(
+- destroyStringInfo(info);
++ pfree(info.data);
+|
+- pfree(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+&
++ initStringInfo(&info);
+ S1
+)
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index 456b267f70b..06b52c65300 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -2841,7 +2841,7 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
*/
if (list_length(fdw_private) > FdwScanPrivateRelations)
{
- StringInfo relations;
+ StringInfoData relations;
char *rawrelations;
char *ptr;
int minrti,
@@ -2875,7 +2875,7 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
rtoffset = bms_next_member(plan->fs_base_relids, -1) - minrti;
/* Now we can translate the string */
- relations = makeStringInfo();
+ initStringInfo(&relations);
ptr = rawrelations;
while (*ptr)
{
@@ -2897,24 +2897,24 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
char *namespace;
namespace = get_namespace_name_or_temp(get_rel_namespace(rte->relid));
- appendStringInfo(relations, "%s.%s",
+ appendStringInfo(&relations, "%s.%s",
quote_identifier(namespace),
quote_identifier(relname));
}
else
- appendStringInfoString(relations,
+ appendStringInfoString(&relations,
quote_identifier(relname));
refname = (char *) list_nth(es->rtable_names, rti - 1);
if (refname == NULL)
refname = rte->eref->aliasname;
if (strcmp(refname, relname) != 0)
- appendStringInfo(relations, " %s",
+ appendStringInfo(&relations, " %s",
quote_identifier(refname));
}
else
- appendStringInfoChar(relations, *ptr++);
+ appendStringInfoChar(&relations, *ptr++);
}
- ExplainPropertyText("Relations", relations->data, es);
+ ExplainPropertyText("Relations", relations.data, es);
}
/*
diff --git a/contrib/tcn/tcn.c b/contrib/tcn/tcn.c
index 3158dee0f26..9ca47f17cac 100644
--- a/contrib/tcn/tcn.c
+++ b/contrib/tcn/tcn.c
@@ -66,12 +66,13 @@ triggered_change_notification(PG_FUNCTION_ARGS)
TupleDesc tupdesc;
char *channel;
char operation;
- StringInfo payload = makeStringInfo();
+ StringInfoData payload;
bool foundPK;
List *indexoidlist;
ListCell *indexoidscan;
+ initStringInfo(&payload);
/* make sure it's called as a trigger */
if (!CALLED_AS_TRIGGER(fcinfo))
ereport(ERROR,
@@ -149,22 +150,22 @@ triggered_change_notification(PG_FUNCTION_ARGS)
foundPK = true;
- strcpy_quoted(payload, RelationGetRelationName(rel), '"');
- appendStringInfoCharMacro(payload, ',');
- appendStringInfoCharMacro(payload, operation);
+ strcpy_quoted(&payload, RelationGetRelationName(rel), '"');
+ appendStringInfoCharMacro(&payload, ',');
+ appendStringInfoCharMacro(&payload, operation);
for (i = 0; i < indnkeyatts; i++)
{
int colno = index->indkey.values[i];
Form_pg_attribute attr = TupleDescAttr(tupdesc, colno - 1);
- appendStringInfoCharMacro(payload, ',');
- strcpy_quoted(payload, NameStr(attr->attname), '"');
- appendStringInfoCharMacro(payload, '=');
- strcpy_quoted(payload, SPI_getvalue(trigtuple, tupdesc, colno), '\'');
+ appendStringInfoCharMacro(&payload, ',');
+ strcpy_quoted(&payload, NameStr(attr->attname), '"');
+ appendStringInfoCharMacro(&payload, '=');
+ strcpy_quoted(&payload, SPI_getvalue(trigtuple, tupdesc, colno), '\'');
}
- Async_Notify(channel, payload->data);
+ Async_Notify(channel, payload.data);
}
ReleaseSysCache(indexTuple);
break;
diff --git a/src/backend/access/transam/xlogbackup.c b/src/backend/access/transam/xlogbackup.c
index cda4b38b7d6..d9b6dc63ae3 100644
--- a/src/backend/access/transam/xlogbackup.c
+++ b/src/backend/access/transam/xlogbackup.c
@@ -31,18 +31,20 @@ build_backup_content(BackupState *state, bool ishistoryfile)
char startstrbuf[128];
char startxlogfile[MAXFNAMELEN]; /* backup start WAL file */
XLogSegNo startsegno;
- StringInfo result = makeStringInfo();
+ StringInfoData result;
char *data;
Assert(state != NULL);
+ initStringInfo(&result);
+
/* Use the log timezone here, not the session timezone */
pg_strftime(startstrbuf, sizeof(startstrbuf), "%Y-%m-%d %H:%M:%S %Z",
pg_localtime(&state->starttime, log_timezone));
XLByteToSeg(state->startpoint, startsegno, wal_segment_size);
XLogFileName(startxlogfile, state->starttli, startsegno, wal_segment_size);
- appendStringInfo(result, "START WAL LOCATION: %X/%08X (file %s)\n",
+ appendStringInfo(&result, "START WAL LOCATION: %X/%08X (file %s)\n",
LSN_FORMAT_ARGS(state->startpoint), startxlogfile);
if (ishistoryfile)
@@ -52,18 +54,18 @@ build_backup_content(BackupState *state, bool ishistoryfile)
XLByteToSeg(state->stoppoint, stopsegno, wal_segment_size);
XLogFileName(stopxlogfile, state->stoptli, stopsegno, wal_segment_size);
- appendStringInfo(result, "STOP WAL LOCATION: %X/%08X (file %s)\n",
+ appendStringInfo(&result, "STOP WAL LOCATION: %X/%08X (file %s)\n",
LSN_FORMAT_ARGS(state->stoppoint), stopxlogfile);
}
- appendStringInfo(result, "CHECKPOINT LOCATION: %X/%08X\n",
+ appendStringInfo(&result, "CHECKPOINT LOCATION: %X/%08X\n",
LSN_FORMAT_ARGS(state->checkpointloc));
- appendStringInfoString(result, "BACKUP METHOD: streamed\n");
- appendStringInfo(result, "BACKUP FROM: %s\n",
+ appendStringInfoString(&result, "BACKUP METHOD: streamed\n");
+ appendStringInfo(&result, "BACKUP FROM: %s\n",
state->started_in_recovery ? "standby" : "primary");
- appendStringInfo(result, "START TIME: %s\n", startstrbuf);
- appendStringInfo(result, "LABEL: %s\n", state->name);
- appendStringInfo(result, "START TIMELINE: %u\n", state->starttli);
+ appendStringInfo(&result, "START TIME: %s\n", startstrbuf);
+ appendStringInfo(&result, "LABEL: %s\n", state->name);
+ appendStringInfo(&result, "START TIMELINE: %u\n", state->starttli);
if (ishistoryfile)
{
@@ -73,22 +75,21 @@ build_backup_content(BackupState *state, bool ishistoryfile)
pg_strftime(stopstrfbuf, sizeof(stopstrfbuf), "%Y-%m-%d %H:%M:%S %Z",
pg_localtime(&state->stoptime, log_timezone));
- appendStringInfo(result, "STOP TIME: %s\n", stopstrfbuf);
- appendStringInfo(result, "STOP TIMELINE: %u\n", state->stoptli);
+ appendStringInfo(&result, "STOP TIME: %s\n", stopstrfbuf);
+ appendStringInfo(&result, "STOP TIMELINE: %u\n", state->stoptli);
}
/* either both istartpoint and istarttli should be set, or neither */
Assert(XLogRecPtrIsInvalid(state->istartpoint) == (state->istarttli == 0));
if (!XLogRecPtrIsInvalid(state->istartpoint))
{
- appendStringInfo(result, "INCREMENTAL FROM LSN: %X/%08X\n",
+ appendStringInfo(&result, "INCREMENTAL FROM LSN: %X/%08X\n",
LSN_FORMAT_ARGS(state->istartpoint));
- appendStringInfo(result, "INCREMENTAL FROM TLI: %u\n",
+ appendStringInfo(&result, "INCREMENTAL FROM TLI: %u\n",
state->istarttli);
}
- data = result->data;
- pfree(result);
+ data = result.data;
return data;
}
diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index bb7d90aa5d9..2be4e069816 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -239,7 +239,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
TimeLineID endtli;
backup_manifest_info manifest;
BackupState *backup_state;
- StringInfo tablespace_map;
+ StringInfoData tablespace_map;
/* Initial backup state, insofar as we know it now. */
state.tablespaces = NIL;
@@ -263,11 +263,11 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
/* Allocate backup related variables. */
backup_state = (BackupState *) palloc0(sizeof(BackupState));
- tablespace_map = makeStringInfo();
+ initStringInfo(&tablespace_map);
basebackup_progress_wait_checkpoint();
do_pg_backup_start(opt->label, opt->fastcheckpoint, &state.tablespaces,
- backup_state, tablespace_map);
+ backup_state, &tablespace_map);
state.startptr = backup_state->startpoint;
state.starttli = backup_state->starttli;
@@ -342,7 +342,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
if (opt->sendtblspcmapfile)
{
sendFileWithContent(sink, TABLESPACE_MAP,
- tablespace_map->data, -1, &manifest);
+ tablespace_map.data, -1, &manifest);
sendtblspclinks = false;
}
@@ -399,7 +399,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
endtli = backup_state->stoptli;
/* Deallocate backup-related variables. */
- destroyStringInfo(tablespace_map);
+ pfree(tablespace_map.data);
pfree(backup_state);
}
PG_END_ENSURE_ERROR_CLEANUP(do_pg_abort_backup, BoolGetDatum(false));
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1f45444b499..492f38bdf0c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -491,20 +491,20 @@ static void
check_publications(WalReceiverConn *wrconn, List *publications)
{
WalRcvExecResult *res;
- StringInfo cmd;
+ StringInfoData cmd;
TupleTableSlot *slot;
List *publicationsCopy = NIL;
Oid tableRow[1] = {TEXTOID};
- cmd = makeStringInfo();
- appendStringInfoString(cmd, "SELECT t.pubname FROM\n"
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT t.pubname FROM\n"
" pg_catalog.pg_publication t WHERE\n"
" t.pubname IN (");
- GetPublicationsStr(publications, cmd, true);
- appendStringInfoChar(cmd, ')');
+ GetPublicationsStr(publications, &cmd, true);
+ appendStringInfoChar(&cmd, ')');
- res = walrcv_exec(wrconn, cmd->data, 1, tableRow);
- destroyStringInfo(cmd);
+ res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
+ pfree(cmd.data);
if (res->status != WALRCV_OK_TUPLES)
ereport(ERROR,
@@ -2597,33 +2597,41 @@ check_publications_origin_tables(WalReceiverConn *wrconn, List *publications,
*/
if (publist)
{
- StringInfo pubnames = makeStringInfo();
- StringInfo err_msg = makeStringInfo();
- StringInfo err_hint = makeStringInfo();
+ StringInfoData pubnames;
+ StringInfoData err_msg;
+ StringInfoData err_hint;
+
+ initStringInfo(&pubnames);
+ initStringInfo(&err_msg);
+ initStringInfo(&err_hint);
/* Prepare the list of publication(s) for warning message. */
- GetPublicationsStr(publist, pubnames, false);
+ GetPublicationsStr(publist, &pubnames, false);
if (check_table_sync)
{
- appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
+ appendStringInfo(&err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
subname);
- appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+ appendStringInfoString(&err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
}
else
{
- appendStringInfo(err_msg, _("subscription \"%s\" enabled retain_dead_tuples but might not reliably detect conflicts for changes from different origins"),
+ appendStringInfo(&err_msg, _("subscription \"%s\" enabled retain_dead_tuples but might not reliably detect conflicts for changes from different origins"),
subname);
- appendStringInfoString(err_hint, _("Consider using origin = NONE or disabling retain_dead_tuples."));
+ appendStringInfoString(&err_hint, _("Consider using origin = NONE or disabling retain_dead_tuples."));
}
ereport(WARNING,
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg_internal("%s", err_msg->data),
+ errmsg_internal("%s", err_msg.data),
errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
"The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
- list_length(publist), pubnames->data),
- errhint_internal("%s", err_hint->data));
+ list_length(publist), pubnames.data),
+ errhint_internal("%s", err_hint.data));
+
+ pfree(pubnames.data);
+ pfree(err_msg.data);
+ pfree(err_hint.data);
}
ExecDropSingleTupleTableSlot(slot);
@@ -2885,12 +2893,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
int server_version = walrcv_server_version(wrconn);
bool check_columnlist = (server_version >= 150000);
int column_count = check_columnlist ? 4 : 3;
- StringInfo pub_names = makeStringInfo();
+ StringInfoData pub_names;
initStringInfo(&cmd);
+ initStringInfo(&pub_names);
/* Build the pub_names comma-separated string. */
- GetPublicationsStr(publications, pub_names, true);
+ GetPublicationsStr(publications, &pub_names, true);
/* Get the list of relations from the publisher */
if (server_version >= 160000)
@@ -2917,7 +2926,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
" FROM pg_publication\n"
" WHERE pubname IN ( %s )) AS gpt\n"
" ON gpt.relid = c.oid\n",
- pub_names->data);
+ pub_names.data);
/* From version 19, inclusion of sequences in the target is supported */
if (server_version >= 190000)
@@ -2926,7 +2935,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
" SELECT DISTINCT s.schemaname, s.sequencename, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind, NULL::int2vector AS attrs\n"
" FROM pg_catalog.pg_publication_sequences s\n"
" WHERE s.pubname IN ( %s )",
- pub_names->data);
+ pub_names.data);
}
else
{
@@ -2939,10 +2948,10 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
appendStringInfo(&cmd, "FROM pg_catalog.pg_publication_tables t\n"
" WHERE t.pubname IN ( %s )",
- pub_names->data);
+ pub_names.data);
}
- destroyStringInfo(pub_names);
+ pfree(pub_names.data);
res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
pfree(cmd.data);
diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
index 88a612b041d..06dd62f0008 100644
--- a/src/backend/utils/adt/json.c
+++ b/src/backend/utils/adt/json.c
@@ -631,13 +631,13 @@ Datum
array_to_json(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- array_to_json_internal(array, result, false);
+ array_to_json_internal(array, &result, false);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -648,13 +648,13 @@ array_to_json_pretty(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
bool use_line_feeds = PG_GETARG_BOOL(1);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- array_to_json_internal(array, result, use_line_feeds);
+ array_to_json_internal(array, &result, use_line_feeds);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -664,13 +664,13 @@ Datum
row_to_json(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- composite_to_json(array, result, false);
+ composite_to_json(array, &result, false);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -681,13 +681,13 @@ row_to_json_pretty(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
bool use_line_feeds = PG_GETARG_BOOL(1);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- composite_to_json(array, result, use_line_feeds);
+ composite_to_json(array, &result, use_line_feeds);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -763,12 +763,13 @@ to_json(PG_FUNCTION_ARGS)
Datum
datum_to_json(Datum val, JsonTypeCategory tcategory, Oid outfuncoid)
{
- StringInfo result = makeStringInfo();
+ StringInfoData result;
- datum_to_json_internal(val, false, result, tcategory, outfuncoid,
+ initStringInfo(&result);
+ datum_to_json_internal(val, false, &result, tcategory, outfuncoid,
false);
- return PointerGetDatum(cstring_to_text_with_len(result->data, result->len));
+ return PointerGetDatum(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -1347,25 +1348,25 @@ json_build_array_worker(int nargs, const Datum *args, const bool *nulls, const O
{
int i;
const char *sep = "";
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- appendStringInfoChar(result, '[');
+ appendStringInfoChar(&result, '[');
for (i = 0; i < nargs; i++)
{
if (absent_on_null && nulls[i])
continue;
- appendStringInfoString(result, sep);
+ appendStringInfoString(&result, sep);
sep = ", ";
- add_json(args[i], nulls[i], result, types[i], false);
+ add_json(args[i], nulls[i], &result, types[i], false);
}
- appendStringInfoChar(result, ']');
+ appendStringInfoChar(&result, ']');
- return PointerGetDatum(cstring_to_text_with_len(result->data, result->len));
+ return PointerGetDatum(cstring_to_text_with_len(result.data, result.len));
}
/*
diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c
index da94d424d61..9399cdb491a 100644
--- a/src/backend/utils/adt/jsonb.c
+++ b/src/backend/utils/adt/jsonb.c
@@ -125,15 +125,16 @@ jsonb_send(PG_FUNCTION_ARGS)
{
Jsonb *jb = PG_GETARG_JSONB_P(0);
StringInfoData buf;
- StringInfo jtext = makeStringInfo();
+ StringInfoData jtext;
int version = 1;
- (void) JsonbToCString(jtext, &jb->root, VARSIZE(jb));
+ initStringInfo(&jtext);
+ (void) JsonbToCString(&jtext, &jb->root, VARSIZE(jb));
pq_begintypsend(&buf);
pq_sendint8(&buf, version);
- pq_sendtext(&buf, jtext->data, jtext->len);
- destroyStringInfo(jtext);
+ pq_sendtext(&buf, jtext.data, jtext.len);
+ pfree(jtext.data);
PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 41862872e8a..936cd9dcd9a 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -4607,11 +4607,12 @@ Datum
jsonb_pretty(PG_FUNCTION_ARGS)
{
Jsonb *jb = PG_GETARG_JSONB_P(0);
- StringInfo str = makeStringInfo();
+ StringInfoData str;
- JsonbToCStringIndent(str, &jb->root, VARSIZE(jb));
+ initStringInfo(&str);
+ JsonbToCStringIndent(&str, &jb->root, VARSIZE(jb));
- PG_RETURN_TEXT_P(cstring_to_text_with_len(str->data, str->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(str.data, str.len));
}
/*
diff --git a/src/backend/utils/adt/multirangetypes.c b/src/backend/utils/adt/multirangetypes.c
index 84733dc5019..95e9539591e 100644
--- a/src/backend/utils/adt/multirangetypes.c
+++ b/src/backend/utils/adt/multirangetypes.c
@@ -378,17 +378,18 @@ multirange_send(PG_FUNCTION_ARGS)
{
MultirangeType *multirange = PG_GETARG_MULTIRANGE_P(0);
Oid mltrngtypoid = MultirangeTypeGetOid(multirange);
- StringInfo buf = makeStringInfo();
+ StringInfoData buf;
RangeType **ranges;
int32 range_count;
MultirangeIOData *cache;
+ initStringInfo(&buf);
cache = get_multirange_io_data(fcinfo, mltrngtypoid, IOFunc_send);
/* construct output */
- pq_begintypsend(buf);
+ pq_begintypsend(&buf);
- pq_sendint32(buf, multirange->rangeCount);
+ pq_sendint32(&buf, multirange->rangeCount);
multirange_deserialize(cache->typcache->rngtype, multirange, &range_count, &ranges);
for (int i = 0; i < range_count; i++)
@@ -399,11 +400,11 @@ multirange_send(PG_FUNCTION_ARGS)
range = RangeTypePGetDatum(ranges[i]);
outputbytes = SendFunctionCall(&cache->typioproc, range);
- pq_sendint32(buf, VARSIZE(outputbytes) - VARHDRSZ);
- pq_sendbytes(buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ);
+ pq_sendint32(&buf, VARSIZE(outputbytes) - VARHDRSZ);
+ pq_sendbytes(&buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ);
}
- PG_RETURN_BYTEA_P(pq_endtypsend(buf));
+ PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
/*
diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c
index 0b2ad8b0975..0e451e4693b 100644
--- a/src/backend/utils/adt/rangetypes.c
+++ b/src/backend/utils/adt/rangetypes.c
@@ -264,7 +264,7 @@ Datum
range_send(PG_FUNCTION_ARGS)
{
RangeType *range = PG_GETARG_RANGE_P(0);
- StringInfo buf = makeStringInfo();
+ StringInfoData buf;
RangeIOData *cache;
char flags;
RangeBound lower;
@@ -273,6 +273,8 @@ range_send(PG_FUNCTION_ARGS)
check_stack_depth(); /* recurses when subtype is a range type */
+ initStringInfo(&buf);
+
cache = get_range_io_data(fcinfo, RangeTypeGetOid(range), IOFunc_send);
/* deserialize */
@@ -280,9 +282,9 @@ range_send(PG_FUNCTION_ARGS)
flags = range_get_flags(range);
/* construct output */
- pq_begintypsend(buf);
+ pq_begintypsend(&buf);
- pq_sendbyte(buf, flags);
+ pq_sendbyte(&buf, flags);
if (RANGE_HAS_LBOUND(flags))
{
@@ -290,8 +292,8 @@ range_send(PG_FUNCTION_ARGS)
uint32 bound_len = VARSIZE(bound) - VARHDRSZ;
char *bound_data = VARDATA(bound);
- pq_sendint32(buf, bound_len);
- pq_sendbytes(buf, bound_data, bound_len);
+ pq_sendint32(&buf, bound_len);
+ pq_sendbytes(&buf, bound_data, bound_len);
}
if (RANGE_HAS_UBOUND(flags))
@@ -300,11 +302,11 @@ range_send(PG_FUNCTION_ARGS)
uint32 bound_len = VARSIZE(bound) - VARHDRSZ;
char *bound_data = VARDATA(bound);
- pq_sendint32(buf, bound_len);
- pq_sendbytes(buf, bound_data, bound_len);
+ pq_sendint32(&buf, bound_len);
+ pq_sendbytes(&buf, bound_data, bound_len);
}
- PG_RETURN_BYTEA_P(pq_endtypsend(buf));
+ PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
/*
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index 35c915573a1..41e775570ec 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -2133,7 +2133,7 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
node->type == XML_ELEMENT_NODE) ? node->name : NULL;
int domain = error->domain;
int level = error->level;
- StringInfo errorBuf;
+ StringInfoData errorBuf;
/*
* Defend against someone passing us a bogus context struct.
@@ -2210,16 +2210,16 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
}
/* Prepare error message in errorBuf */
- errorBuf = makeStringInfo();
+ initStringInfo(&errorBuf);
if (error->line > 0)
- appendStringInfo(errorBuf, "line %d: ", error->line);
+ appendStringInfo(&errorBuf, "line %d: ", error->line);
if (name != NULL)
- appendStringInfo(errorBuf, "element %s: ", name);
+ appendStringInfo(&errorBuf, "element %s: ", name);
if (error->message != NULL)
- appendStringInfoString(errorBuf, error->message);
+ appendStringInfoString(&errorBuf, error->message);
else
- appendStringInfoString(errorBuf, "(no message provided)");
+ appendStringInfoString(&errorBuf, "(no message provided)");
/*
* Append context information to errorBuf.
@@ -2237,11 +2237,11 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
xmlGenericErrorFunc errFuncSaved = xmlGenericError;
void *errCtxSaved = xmlGenericErrorContext;
- xmlSetGenericErrorFunc(errorBuf,
+ xmlSetGenericErrorFunc(&errorBuf,
(xmlGenericErrorFunc) appendStringInfo);
/* Add context information to errorBuf */
- appendStringInfoLineSeparator(errorBuf);
+ appendStringInfoLineSeparator(&errorBuf);
xmlParserPrintFileContext(input);
@@ -2250,7 +2250,7 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
}
/* Get rid of any trailing newlines in errorBuf */
- chopStringInfoNewlines(errorBuf);
+ chopStringInfoNewlines(&errorBuf);
/*
* Legacy error handling mode. err_occurred is never set, we just add the
@@ -2263,10 +2263,10 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
if (xmlerrcxt->strictness == PG_XML_STRICTNESS_LEGACY)
{
appendStringInfoLineSeparator(&xmlerrcxt->err_buf);
- appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf->data,
- errorBuf->len);
+ appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf.data,
+ errorBuf.len);
- destroyStringInfo(errorBuf);
+ pfree(errorBuf.data);
return;
}
@@ -2281,23 +2281,23 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
if (level >= XML_ERR_ERROR)
{
appendStringInfoLineSeparator(&xmlerrcxt->err_buf);
- appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf->data,
- errorBuf->len);
+ appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf.data,
+ errorBuf.len);
xmlerrcxt->err_occurred = true;
}
else if (level >= XML_ERR_WARNING)
{
ereport(WARNING,
- (errmsg_internal("%s", errorBuf->data)));
+ (errmsg_internal("%s", errorBuf.data)));
}
else
{
ereport(NOTICE,
- (errmsg_internal("%s", errorBuf->data)));
+ (errmsg_internal("%s", errorBuf.data)));
}
- destroyStringInfo(errorBuf);
+ pfree(errorBuf.data);
}
--
2.43.0
On Tue, 4 Nov 2025 at 21:25, Mats Kindahl <mats.kindahl@gmail.com> wrote:
Here is an updated patch that I checked manually for unnecessary
whitespace changes.
Thanks for updating the patch.
There is one such case affected by this patch, and there the buffer
pointer is copied to another variable and then returned. This can
probably be improved by just returning the buffer pointer directly
without intermediate assignment to a variable, but I avoided this to
make reviewing the code easier. It is easy to add this change either now
or later and should not affect the generated code except at very low
optimization levels.
Yeah, I think you must mean in build_backup_content(). I noticed that too.
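For reference, an untested sketch based on the hunk in the attached patch: with this change the tail of that function becomes roughly
```
data = result.data;

return data;
```
and the follow-up would remove the now-redundant "data" variable, leaving just
```
return result.data;
```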
I also added fixes manually inside check_publications_origin_tables,
check_publications, and fetch_relation_list. In
check_publications_origin_tables, three StringInfos were dynamically
allocated but not released afterwards. In check_publications and
fetch_relation_list there were extra cases where a dynamically
allocated StringInfo was not necessary.
It's not your fault, but check_publications_origin_tables() looks like
a mess. It seems like excessive use of additional code just to avoid
two separate ereport() calls. Might be worth a follow-up patch to
clean that up.
The patch looks good to me. The only thing I'd adjust is to get rid of
the "data" variable from build_backup_content(), which I think you
mentioned above. I can take care of that one.
I can push this tomorrow morning UTC+13. I'll leave it til then in
case anyone else has any comments.
David
On Nov 5, 2025, at 19:01, David Rowley <dgrowleyml@gmail.com> wrote:
It's not your fault, but check_publications_origin_tables() looks like
a mess. It seems like excessive use of additional code just to avoid
two separate ereport() calls. Might be worth a follow-up patch to
clean that up.
The patch looks good to me. The only thing I'd adjust is to get rid of
the "data" variable from build_backup_content(), which I think you
mentioned above. I can take care of that one.
I can push this tomorrow morning UTC+13. I'll leave it til then in
case anyone else has any comments.
No objection. But I just found that the patch missed one place in check_publications():
```
if (list_length(publicationsCopy))
{
    /* Prepare the list of non-existent publication(s) for error message. */
    StringInfo pubnames = makeStringInfo();
    GetPublicationsStr(publicationsCopy, pubnames, false);
    ereport(WARNING,
            errcode(ERRCODE_UNDEFINED_OBJECT),
            errmsg_plural("publication %s does not exist on the publisher",
                          "publications %s do not exist on the publisher",
                          list_length(publicationsCopy),
                          pubnames->data));
}
```
This “pubnames” can be replaced as well, and a “pfree(pubnames.data)” should be added after the “ereport”. Since one place in this function has already been converted by this patch, I guess we should not leave this one to a separate patch.
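Roughly, the converted block would then look like this (untested sketch, following the same pattern as the other conversions in the patch):
```
if (list_length(publicationsCopy))
{
    StringInfoData pubnames;

    /* Prepare the list of non-existent publication(s) for error message. */
    initStringInfo(&pubnames);
    GetPublicationsStr(publicationsCopy, &pubnames, false);
    ereport(WARNING,
            errcode(ERRCODE_UNDEFINED_OBJECT),
            errmsg_plural("publication %s does not exist on the publisher",
                          "publications %s do not exist on the publisher",
                          list_length(publicationsCopy),
                          pubnames.data));

    /* Release the buffer; the stack-allocated struct itself needs no pfree. */
    pfree(pubnames.data);
}
```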
Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/
On Thu, 6 Nov 2025 at 00:01, David Rowley <dgrowleyml@gmail.com> wrote:
I can push this tomorrow morning UTC+13. I'll leave it til then in
case anyone else has any comments.
I found a few more cases that could get the same treatment and also
included the check_publications() one mentioned by Chao.
I did end up removing the subscriptioncmds.c changes for
check_publications_origin_tables(). I thought those should
likely just be fixed to use errmsg() and errhint() instead of their
"internal" counterparts.
I left out the Coccinelle script. I'm not currently qualified to
review it, and I'm not aware of what the policy is on including such scripts. I'm aware there's
another thread with some discussion about Coccinelle, but I've not
read it.
David
On 11/5/25 12:01, David Rowley wrote:
On Tue, 4 Nov 2025 at 21:25, Mats Kindahl <mats.kindahl@gmail.com> wrote:
Here is an updated patch that I checked manually for unnecessary
whitespace changes.
Thanks for updating the patch.
There is one such case affected by this patch, and there the buffer
pointer is copied to another variable and then returned. This can
probably be improved by just returning the buffer pointer directly
without intermediate assignment to a variable, but I avoided this to
make reviewing the code easier. It is easy to add this change either now
or later and should not affect the generated code except at very low
optimization levels.
Yeah, I think you must mean in build_backup_content(). I noticed that too.
Yes, that is the one I meant.
I also added fixes manually inside check_publications_origin_tables,
check_publications, and fetch_relation_list. In
check_publications_origin_tables, three StringInfos were dynamically
allocated but not released afterwards. In check_publications and
fetch_relation_list there were extra cases where a dynamically
allocated StringInfo was not necessary.
It's not your fault, but check_publications_origin_tables() looks like
a mess. It seems like excessive use of additional code just to avoid
two separate ereport() calls. Might be worth a follow-up patch to
clean that up.
Yeah, I have noted that too, but I wanted to avoid making reviews more
complicated than necessary. I thought it might be better to do this as a
separate effort.
I'll take a look at it.
The patch looks good to me. The only thing I'd adjust is to get rid of
the "data" variable from build_backup_content(), which I think you
mentioned above. I can take care of that one.
Thanks David.
Best wishes,
Mats Kindahl
On 2025-Nov-06, Mats Kindahl wrote:
On 11/5/25 12:01, David Rowley wrote:
It's not your fault, but check_publications_origin_tables() looks like
a mess. It seems like excessive use of additional code just to avoid
two separate ereport() calls. Might be worth a follow-up patch to
clean that up.
Yeah, I have noted that too, but I wanted to avoid making reviews more
complicated than necessary. I thought it might be better to do this as a separate
effort.
Yeah, I came across this a few weeks ago as well. I was thinking maybe
we can expand the ereport() macro instead of duplicating things. It
looks a bit ugly and I don't think we do it anywhere else ... I mean
something like
errstart(WARNING, NULL);
if (cond)
errmsg( ... );
else
errmsg( ... );
// and so on
errfinish(__FILE__, __LINE__, __func__);
Is this too ugly to live? (I swear I had this on a branch already, but
can't find it right this instant.)
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"No necesitamos banderas
No reconocemos fronteras" (Jorge González)
On Fri, 7 Nov 2025 at 00:24, Álvaro Herrera <alvherre@kurilemu.de> wrote:
Yeah, I came across this a few weeks ago as well. I was thinking maybe
we can expand the ereport() macro instead of duplicating things. It
looks a bit ugly and I don't think we do it anywhere else ... I mean
something like
errstart(WARNING, NULL);
if (cond)
errmsg( ... );
else
errmsg( ... );
// and so on
errfinish(__FILE__, __LINE__, __func__);
Is this too ugly to live? (I swear I had this on a branch already, but
can't find it right this instant.)
I thought it could just become 2 separate ereport calls. The duplicate
string consts for the plural stuff are going to get de-duplicated
anyway. I didn't try it to see how it'd look, however.
David
Álvaro Herrera <alvherre@kurilemu.de> writes:
Yeah, I came across this a few weeks ago as well. I was thinking maybe
we can expand the ereport() macro instead of duplicating things. It
looks a bit ugly and I don't think we do it anywhere else ... I mean
something like
errstart(WARNING, NULL);
if (cond)
errmsg( ... );
else
errmsg( ... );
// and so on
errfinish(__FILE__, __LINE__, __func__);
Is this too ugly to live? (I swear I had this on a branch already, but
can't find it right this instant.)
I really dislike exposing errstart/errfinish to calling code like
that. I agree that check_publications_origin_tables is not pretty
as-is, but it's better than breaking open the ereport abstraction.
In the particular case at hand, it seems best to just write
two separate ereport calls for the check_table_sync and
not-check_table_sync cases, as David mentioned. Duplicating
the errdetail_plural call is slightly annoying but it's net less
code and less surprise than what's there now.
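A rough, untested sketch of the two-ereport version, reusing the message strings from the existing code (pubnames is shown in its current pointer form; whether it stays heap-allocated is a separate question):
```
if (check_table_sync)
    ereport(WARNING,
            errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
            errmsg("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin",
                   subname),
            errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
                             "The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
                             list_length(publist), pubnames->data),
            errhint("Verify that initial data copied from the publisher tables did not come from other origins."));
else
    ereport(WARNING,
            errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
            errmsg("subscription \"%s\" enabled retain_dead_tuples but might not reliably detect conflicts for changes from different origins",
                   subname),
            errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
                             "The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
                             list_length(publist), pubnames->data),
            errhint("Consider using origin = NONE or disabling retain_dead_tuples."));
```
That would also let the err_msg and err_hint buffers, and the errmsg_internal()/errhint_internal() calls, go away entirely.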
In general, in most other places where we have conditional message
text, it's done via ?: operators within the ereport macro,
for example this decades-old bit in sysv_shmem.c:
ereport(FATAL,
        (errmsg("could not create shared memory segment: %m"),
         errdetail("Failed system call was shmget(key=%lu, size=%zu, 0%o).",
                   (unsigned long) memKey, size,
                   IPC_CREAT | IPC_EXCL | IPCProtection),
         (shmget_errno == EINVAL) ?
         errhint("This error usually means that PostgreSQL's request for a shared memory "
                 "segment exceeded your kernel's SHMMAX parameter, or possibly that "
                 "it is less than "
                 "your kernel's SHMMIN parameter.\n"
                 "The PostgreSQL documentation contains more information about shared "
                 "memory configuration.") : 0,
         (shmget_errno == ENOMEM) ?
         errhint("This error usually means that PostgreSQL's request for a shared "
                 "memory segment exceeded your kernel's SHMALL parameter. You might need "
                 "to reconfigure the kernel with larger SHMALL.\n"
                 "The PostgreSQL documentation contains more information about shared "
                 "memory configuration.") : 0,
         (shmget_errno == ENOSPC) ?
         errhint("This error does *not* mean that you have run out of disk space. "
                 "It occurs either if all available shared memory IDs have been taken, "
                 "in which case you need to raise the SHMMNI parameter in your kernel, "
                 "or because the system's overall limit for shared memory has been "
                 "reached.\n"
                 "The PostgreSQL documentation contains more information about shared "
                 "memory configuration.") : 0));
We could do something similar here, but I'm not convinced
it's better than just writing two ereport's.
regards, tom lane